Examining the Validity of ChatGPT in Identifying Relevant Nephrology Literature: Findings and Implications

dc.contributor.author: Suppadungsuk S.
dc.contributor.author: Thongprayoon C.
dc.contributor.author: Krisanapan P.
dc.contributor.author: Tangpanithandee S.
dc.contributor.author: Garcia Valencia O.
dc.contributor.author: Miao J.
dc.contributor.author: Mekraksakit P.
dc.contributor.author: Kashani K.
dc.contributor.author: Cheungpasitporn W.
dc.contributor.other: Mahidol University
dc.date.accessioned: 2023-09-16T18:01:42Z
dc.date.available: 2023-09-16T18:01:42Z
dc.date.issued: 2023-09-01
dc.description.abstract: Literature reviews are valuable for summarizing and evaluating the available evidence in various medical fields, including nephrology. However, identifying and evaluating potential sources demands substantial time and focus from clinicians and researchers. ChatGPT is a novel artificial intelligence (AI) large language model (LLM) renowned for its exceptional ability to generate human-like responses across various tasks. However, whether ChatGPT can effectively assist medical professionals in identifying relevant literature is unclear. Therefore, this study aimed to assess the effectiveness of ChatGPT in identifying references for literature reviews in nephrology. We entered the prompt “Please provide the references in Vancouver style and their links in recent literature on… name of the topic” into ChatGPT-3.5 (03/23 version). We collected all the results provided by ChatGPT and assessed them for existence, relevance, and author/link correctness. For each resource, we recorded the citation, authors, title, journal name, publication year, digital object identifier (DOI), and link, and verified its relevance and correctness by searching Google Scholar. Of the 610 references in the nephrology literature, only 378 (62%) of the references provided by ChatGPT existed, while 31% were fabricated and 7% were incomplete. Notably, only 122 (20%) of the references were authentic. Additionally, 256 (68%) of the links in the references were incorrect, and the DOI was inaccurate in 206 (54%) of the references. Among the references with a link provided, the link was correct in only 20% of cases, and 3% of the references were irrelevant. An analysis of specific topics in electrolytes, hemodialysis, and kidney stones found that >60% of the references were inaccurate or misleading, with less reliable authorship and links provided by ChatGPT. Based on our findings, the use of ChatGPT as the sole resource for identifying references for literature reviews in nephrology is not recommended. Future studies could explore ways to improve the performance of AI language models in identifying relevant nephrology literature.
dc.identifier.citation: Journal of Clinical Medicine Vol.12 No.17 (2023)
dc.identifier.doi: 10.3390/jcm12175550
dc.identifier.eissn: 20770383
dc.identifier.scopus: 2-s2.0-85170260854
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/123456789/90040
dc.rights.holder: SCOPUS
dc.subject: Medicine
dc.title: Examining the Validity of ChatGPT in Identifying Relevant Nephrology Literature: Findings and Implications
dc.type: Article
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85170260854&origin=inward
oaire.citation.issue: 17
oaire.citation.title: Journal of Clinical Medicine
oaire.citation.volume: 12
oairecerif.author.affiliation: Thammasat University Hospital
oairecerif.author.affiliation: Faculty of Medicine Ramathibodi Hospital, Mahidol University
oairecerif.author.affiliation: Mayo Clinic