Developing a diagnostic support system for audiogram interpretation using deep learning-based object detection
Issued Date
2025-01-01
eISSN
16722930
Scopus ID
2-s2.0-105003782316
Journal Title
Journal of Otology
Volume
20
Issue
1
Start Page
26
End Page
32
Rights Holder(s)
SCOPUS
Bibliographic Citation
Journal of Otology Vol.20 No.1 (2025) , 26-32
Suggested Citation
Achakulvisut T., Phanthong S., Timpitak T., Vesessook K., Junthong S., Utainrat W., Bunnag K. Developing a diagnostic support system for audiogram interpretation using deep learning-based object detection. Journal of Otology Vol.20 No.1 (2025), 26-32. doi:10.26599/JOTO.2025.9540005 Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/110007
Abstract
Objective: To develop and evaluate an automated system for digitizing audiograms, classifying hearing loss levels, and comparing its performance with traditional methods and otolaryngologists' interpretations.

Design and Methods: We conducted a retrospective diagnostic study using 1,959 audiogram images from patients aged 7 years and older at the Faculty of Medicine, Vajira Hospital, Navamindradhiraj University. We employed an object detection approach to digitize audiograms and developed multiple machine learning models to classify six hearing loss levels. The dataset was split into 70% training (1,407 images) and 30% testing (352 images) sets. We compared our model's performance with classifications based on manually extracted audiogram values and otolaryngologists' interpretations.

Results: Our object detection-based model achieved an F1-score of 94.72% in classifying hearing loss levels, comparable to the 96.43% F1-score obtained using manually extracted values. For the object detection-based data, the Light Gradient Boosting Machine (LGBM) classifier achieved the top performance, with 94.72% accuracy, 94.72% F1-score, 94.72% recall, and 94.72% precision. For the manually extracted values, the Random Forest Classifier (RFC) performed best in predicting hearing loss level, with 96.43% accuracy, an F1-score of 96.43%, recall of 96.43%, and precision of 96.45%.

Conclusion: Our proposed automated approach to audiogram digitization and hearing loss classification performs comparably to traditional methods and otolaryngologists' interpretations. This system can potentially assist otolaryngologists in providing more timely and effective treatment by quickly and accurately classifying hearing loss.
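The classification step described in the abstract (mapping digitized audiogram thresholds to one of six hearing loss levels with a tree-ensemble model) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes the conventional six-level grading by pure-tone average (PTA over 500, 1,000, 2,000, and 4,000 Hz with cut-offs at 25, 40, 55, 70, and 90 dB HL), and trains a Random Forest on synthetic, rule-labelled threshold vectors in place of the study's real digitized audiograms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Six conventional hearing loss levels, ordered from best to worst hearing.
LEVELS = ["normal", "mild", "moderate", "moderately severe", "severe", "profound"]

def pta(thresholds):
    """Pure-tone average (dB HL) of thresholds at 500, 1000, 2000, 4000 Hz."""
    return float(np.mean(thresholds))

def level_from_pta(value):
    """Map a PTA value to one of six levels using conventional cut-offs."""
    cutoffs = [25, 40, 55, 70, 90]  # upper bounds (dB HL) for the first five levels
    for label, cutoff in zip(LEVELS, cutoffs):
        if value <= cutoff:
            return label
    return "profound"  # anything above 90 dB HL

# Synthetic stand-in for digitized audiogram data: one row of four
# frequency thresholds per ear, labelled by the PTA rule above.
rng = np.random.default_rng(0)
X = rng.uniform(0, 120, size=(500, 4))
y = [level_from_pta(pta(row)) for row in X]

# Fit a Random Forest classifier, as one of the models compared in the study.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[10, 15, 20, 20]])[0])
```

In the study itself the features come from the object-detection stage (symbol coordinates read off the audiogram image) or from manually extracted values, and LGBM and RFC are fitted to the same six-level targets; the rule-based labels here only stand in for clinician-assigned ground truth.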
