Navigating the Landscape of Personalized Medicine: The Relevance of ChatGPT, BingChat, and Bard AI in Nephrology Literature Searches

dc.contributor.author: Aiumtrakul N.
dc.contributor.author: Thongprayoon C.
dc.contributor.author: Suppadungsuk S.
dc.contributor.author: Krisanapan P.
dc.contributor.author: Miao J.
dc.contributor.author: Qureshi F.
dc.contributor.author: Cheungpasitporn W.
dc.contributor.other: Mahidol University
dc.date.accessioned: 2023-11-10T18:02:07Z
dc.date.available: 2023-11-10T18:02:07Z
dc.date.issued: 2023-10-01
dc.description.abstract: Background and Objectives: Literature reviews are foundational to understanding medical evidence. With AI tools like ChatGPT, Bing Chat, and Bard AI emerging as potential aids in this domain, this study aimed to individually assess their citation accuracy within nephrology, comparing their performance in providing precise references. Materials and Methods: We generated prompts to solicit 20 references in Vancouver style for each of 12 nephrology topics, using ChatGPT, Bing Chat, and Bard. We verified the existence and accuracy of the provided references using PubMed, Google Scholar, and Web of Science, and categorized each reference from the AI chatbots as (1) incomplete, (2) fabricated, (3) inaccurate, or (4) accurate. Results: A total of 199 (83%), 158 (66%), and 112 (47%) unique references were provided by ChatGPT, Bing Chat, and Bard, respectively. ChatGPT provided 76 (38%) accurate, 82 (41%) inaccurate, 32 (16%) fabricated, and 9 (5%) incomplete references. Bing Chat provided 47 (30%) accurate, 77 (49%) inaccurate, 21 (13%) fabricated, and 13 (8%) incomplete references. In contrast, Bard provided only 3 (3%) accurate, 26 (23%) inaccurate, 71 (63%) fabricated, and 12 (11%) incomplete references. The most common error type across platforms was an incorrect DOI. Conclusions: In medicine, faultless adherence to research integrity is essential; even small citation errors cannot be tolerated. The outcomes of this investigation draw attention to inconsistent citation accuracy across the AI tools evaluated. Despite some promising results, the discrepancies identified call for cautious and rigorous vetting of AI-sourced references in medicine. Before such chatbots can become standard tools, they need substantial refinement to assure consistent precision in their outputs.
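
The verification and tallying steps described in the Materials and Methods can be partly automated. The Python sketch below is a minimal illustration, not the authors' actual pipeline: it queries NCBI's public E-utilities esearch endpoint to count PubMed records whose title matches a chatbot-supplied citation, then tallies manually assigned validity categories into the "n (percent)" breakdowns reported in the Results. The function names are assumptions, and a title match alone does not confirm that the authors, journal, year, or DOI of a reference are accurate.

import json
import urllib.parse
import urllib.request
from collections import Counter

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_title_hits(title: str) -> int:
    # Count PubMed records whose title matches the supplied string.
    # Zero hits suggests a possibly fabricated reference; nonzero hits
    # still require manual checks of authors, journal, year, and DOI.
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f"{title}[Title]",
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

def summarize(categories: list[str]) -> dict[str, str]:
    # Tally manually assigned categories (incomplete, fabricated,
    # inaccurate, accurate) and report each as "n (percent)" of the
    # unique references, mirroring the Results section.
    counts = Counter(categories)
    total = len(categories)
    return {cat: f"{n} ({n / total:.0%})" for cat, n in counts.most_common()}

For example, applying summarize to ChatGPT's 199 categorized references would yield "76 (38%)" for accurate, matching the figure reported in the abstract.
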
dc.identifier.citation: Journal of Personalized Medicine Vol.13 No.10 (2023)
dc.identifier.doi: 10.3390/jpm13101457
dc.identifier.eissn: 2075-4426
dc.identifier.scopus: 2-s2.0-85175468015
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/123456789/90988
dc.rights.holder: SCOPUS
dc.subject: Medicine
dc.title: Navigating the Landscape of Personalized Medicine: The Relevance of ChatGPT, BingChat, and Bard AI in Nephrology Literature Searches
dc.type: Article
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85175468015&origin=inward
oaire.citation.issue: 10
oaire.citation.title: Journal of Personalized Medicine
oaire.citation.volume: 13
oairecerif.author.affiliation: Faculty of Medicine Ramathibodi Hospital, Mahidol University
oairecerif.author.affiliation: Faculty of Medicine, Thammasat University
oairecerif.author.affiliation: John A. Burns School of Medicine
oairecerif.author.affiliation: Mayo Clinic
