Explainable Deep Learning for Glaucomatous Visual Field Prediction: Artifact Correction Enhances Transformer Models
Issued Date
2025-01-02
eISSN
21642591
Scopus ID
2-s2.0-85216608372
Pubmed ID
39847375
Journal Title
Translational Vision Science & Technology
Volume
14
Issue
1
Rights Holder(s)
SCOPUS
Bibliographic Citation
Translational Vision Science & Technology Vol.14 No.1 (2025), 22
Suggested Citation
Sriwatana K., Puttanawarut C., Suwan Y., Achakulvisut T. Explainable Deep Learning for Glaucomatous Visual Field Prediction: Artifact Correction Enhances Transformer Models. Translational Vision Science & Technology Vol.14 No.1 (2025), 22. doi:10.1167/tvst.14.1.22. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/104205
Abstract
Purpose: To develop a deep learning approach that restores artifact-laden optical coherence tomography (OCT) scans and predicts functional loss on the 24-2 Humphrey Visual Field (HVF) test.

Methods: This cross-sectional, retrospective study used 1674 visual field (VF)-OCT pairs from 951 eyes for training and 429 pairs from 345 eyes for testing. Peripapillary retinal nerve fiber layer (RNFL) thickness map artifacts were corrected using a generative diffusion model. Three convolutional neural networks and two transformer-based models were trained on original and artifact-corrected datasets to estimate the 54 sensitivity thresholds of the 24-2 HVF test. Predictive performance was assessed with root mean square error (RMSE) and mean absolute error (MAE), and explainability was evaluated through GradCAM, attention maps, and dimensionality reduction techniques.

Results: The Distillation with No Labels (DINO) Vision Transformer (ViT) trained on the artifact-corrected dataset achieved the highest accuracy (RMSE = 4.44 decibels [dB], 95% confidence interval [CI] = 4.07-4.82; MAE = 3.46 dB, 95% CI = 3.14-3.79) and the greatest interpretability, improving global RMSE and MAE by 0.15 dB each (P < 0.05) compared with its performance on the original maps. Feature maps and visualization tools indicate that artifacts compromise DINO-ViT's predictive ability, which recovers with artifact correction.

Conclusions: Combining self-supervised ViTs with generative artifact correction strengthens the modeled correlation between glaucomatous structure and function.

Translational Relevance: Our approach offers a comprehensive tool for glaucoma management, facilitates the exploration of structure-function correlations in research, and underscores the importance of addressing artifacts in the clinical interpretation of OCT.
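For reference, the two error metrics reported in the abstract can be computed as below. This is a minimal sketch only, not the authors' evaluation pipeline: the arrays stand in for one eye's 54 measured and predicted 24-2 HVF sensitivity thresholds (in dB), and the synthetic values are purely illustrative.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error (dB) over the sensitivity thresholds."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error (dB) over the sensitivity thresholds."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical example: 54 measured thresholds and noisy predictions.
rng = np.random.default_rng(0)
measured = rng.uniform(0.0, 35.0, size=54)   # dB, typical HVF range
predicted = measured + rng.normal(0.0, 3.0, size=54)

print(f"RMSE = {rmse(measured, predicted):.2f} dB")
print(f"MAE  = {mae(measured, predicted):.2f} dB")
```

Because RMSE squares the residuals before averaging, it penalizes large pointwise misses (e.g. a deep scotoma predicted as normal) more heavily than MAE does, which is why both are commonly reported together for VF prediction.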