Leveraging generative AI for clinical evidence synthesis needs to ensure trustworthiness.

Publication Type: Journal Article
Year of Publication: 2024
Authors: Zhang G, Jin Q, McInerney DJered, Chen Y, Wang F, Cole CL, Yang Q, Wang Y, Malin BA, Peleg M, Wallace BC, Lu Z, Weng C, Peng Y
Journal: J Biomed Inform
Volume: 153
Pagination: 104640
Date Published: 2024 May
ISSN: 1532-0480
Keywords: Artificial Intelligence, Evidence-Based Medicine, Humans, Natural Language Processing, Trust
Abstract:

Evidence-based medicine promises to improve the quality of healthcare by empowering medical decisions and practices with the best available evidence. The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing the evidential information. Recent advancements in generative AI, exemplified by large language models, hold promise in facilitating the arduous task. However, developing accountable, fair, and inclusive models remains a complicated undertaking. In this perspective, we discuss the trustworthiness of generative AI in the context of automated summarization of medical evidence.

DOI: 10.1016/j.jbi.2024.104640
Alternate Journal: J Biomed Inform
PubMed ID: 38608915
PubMed Central ID: PMC11217921
Grant List:
R01 LM014306 / LM / NLM NIH HHS / United States
R01 LM009886 / LM / NLM NIH HHS / United States
UL1 TR001873 / TR / NCATS NIH HHS / United States
R01 LM012086 / LM / NLM NIH HHS / United States
R01 LM013772 / LM / NLM NIH HHS / United States
R01 LM014344 / LM / NLM NIH HHS / United States