Classification of Depression Audio Data by Deep Learning
Issued Date
2022-01-01
Scopus ID
2-s2.0-85147247650
Journal Title
BMEiCON 2022 - 14th Biomedical Engineering International Conference
Rights Holder(s)
SCOPUS
Bibliographic Citation
BMEiCON 2022 - 14th Biomedical Engineering International Conference (2022)
Suggested Citation
Homsiang P., Treebupachatsakul T., Kiatrungrit K., Poomrittigul S. Classification of Depression Audio Data by Deep Learning. BMEiCON 2022 - 14th Biomedical Engineering International Conference (2022). doi:10.1109/BMEiCON56653.2022.10012102. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/84312
Abstract
Due to factors such as anxiety about contracting the disease and concern over its socioeconomic impacts, Thai people have accumulated stress and are at risk of depression. A diagnosis of depression can be primarily assessed using screening instruments such as the PHQ-8, PHQ-9, and CES-D. The application of deep learning in medicine has attracted growing research interest and continues to develop. In this research, we classified depression and non-depression audio data using four model architectures: a 1D CNN, a 2D CNN, an LSTM, and a GRU. Audio from the DAIC-WOZ database, stored in waveform audio format (WAV), was converted to the Mel-frequency cepstrum (MFC). We trained and evaluated the four architectures and compared the results between the non-augmented and augmented datasets. The highest accuracies were 95%, obtained by the 1D CNN without data augmentation, and 75%, obtained by the 2D CNN with data augmentation. These results confirm that the human voice can differentiate between depression and non-depression.
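The abstract's preprocessing step (WAV waveform to Mel-frequency cepstrum features) can be sketched as follows. This is a minimal pure-NumPy illustration of standard MFCC extraction, not the paper's actual pipeline; all parameter values (sample rate, FFT size, hop length, filter counts) are illustrative assumptions.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Compute Mel-frequency cepstral coefficients from a mono waveform.

    Illustrative sketch only; parameters are assumptions, not the
    settings used in the paper.
    """
    # 1. Frame the signal and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft]
                       for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)

    # 2. Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # 3. Triangular mel-scale filterbank
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fbank[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - c, 1)

    # 4. Log mel energies, then DCT-II to decorrelate into cepstral coeffs
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1)
                 / (2 * n_mels))
    return logmel @ dct.T  # shape: (n_frames, n_mfcc)

# Example: 1 s of synthetic audio (a 440 Hz tone) -> MFCC matrix
sr = 16000
t = np.arange(sr) / sr
coeffs = mfcc(np.sin(2 * np.pi * 440 * t), sr=sr)
print(coeffs.shape)  # (61, 13): one 13-coefficient vector per frame
```

The resulting two-dimensional (frames x coefficients) matrix is the kind of representation that can be fed to a 2D CNN as an image-like input, or row by row to a 1D CNN, LSTM, or GRU as a sequence.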