Empowering inclusivity: improving readability of living kidney donation information with ChatGPT
Issued Date
2024-01-01
eISSN
2673-253X
Scopus ID
2-s2.0-85191170909
Journal Title
Frontiers in Digital Health
Volume
6
Rights Holder(s)
SCOPUS
Bibliographic Citation
Frontiers in Digital Health Vol.6 (2024)
Suggested Citation
Garcia Valencia O.A., Thongprayoon C., Miao J., Suppadungsuk S., Krisanapan P., Craici I.M., Jadlowiec C.C., Mao S.A., Mao M.A., Leeaphorn N., Budhiraja P., Cheungpasitporn W. Empowering inclusivity: improving readability of living kidney donation information with ChatGPT. Frontiers in Digital Health Vol.6 (2024). doi:10.3389/fdgth.2024.1366967 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/98181
Abstract
Background: Addressing disparities in living kidney donation requires making information accessible across literacy levels, which is especially important given that the average American adult reads at an 8th-grade level. This study evaluated the effectiveness of ChatGPT, an advanced AI language model, in simplifying living kidney donation information to an 8th-grade reading level or below.

Methods: We used ChatGPT versions 3.5 and 4.0 to modify 27 questions and answers from Donate Life America, a key resource on living kidney donation. We measured the readability of the original and modified texts using the Flesch-Kincaid formula, assessed changes in readability with a paired t-test, and statistically compared the two ChatGPT versions.

Results: The original FAQs had an average reading level of 9.6 ± 1.9. After modification, ChatGPT 3.5 achieved an average readability level of 7.72 ± 1.85 and ChatGPT 4.0 one of 4.30 ± 1.71, both significant reductions (p < 0.001). ChatGPT 3.5 brought 59.26% of answers below the 8th-grade reading level, whereas ChatGPT 4.0 did so for 96.30% of the texts. The grade-level range of the modified answers was 3.4–11.3 for ChatGPT 3.5 and 1–8.1 for ChatGPT 4.0.

Conclusion: Both ChatGPT 3.5 and 4.0 effectively lowered the readability grade level of complex medical information, with ChatGPT 4.0 being more effective. This suggests a potential role for ChatGPT in promoting diversity and equity in living kidney donation, with scope for further refinement in making medical information more accessible.
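The Flesch-Kincaid grade level used in the Methods is a standard formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch of it is below; note that the vowel-group syllable counter is a crude heuristic of our own, not the validated tooling the authors used, so scores will only approximate published values.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, drop a silent trailing 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(1, groups)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```

Shorter sentences and shorter words both push the score down, which is why simplified ChatGPT rewrites land at lower grade levels.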