Trustworthy assertion classification through prompting.

Title: Trustworthy assertion classification through prompting.
Publication Type: Journal Article
Year of Publication: 2022
Authors: Wang S, Tang L, Majety A, Rousseau JF, Shih G, Ding Y, Peng Y
Journal: J Biomed Inform
Volume: 132
Pagination: 104139
Date Published: 2022 Aug
ISSN: 1532-0480
Keywords: Electronic Health Records, Humans, Linguistics, Machine Learning, Natural Language Processing
Abstract:

Accurate identification of the presence, absence, or possibility of relevant entities in clinical notes is important for healthcare professionals to quickly grasp crucial clinical information. This motivates the task of assertion classification: correctly identifying the assertion status of an entity in unstructured clinical notes. Recent rule-based and machine-learning approaches suffer from labor-intensive pattern engineering and severe class bias toward the majority classes. To address this problem, we propose a prompt-based learning approach that treats assertion classification as a masked language auto-completion problem. We evaluated the model on six datasets. Our prompt-based method achieved a micro-averaged F-1 of 0.954 on the i2b2 2010 assertion dataset, an improvement of ∼1.8% over previous work. In particular, our model performed especially well on classes with few instances (few-shot). Evaluations on five external datasets demonstrate the strong generalizability of the prompt-based method to unseen data. To examine the rationales behind our model's predictions, we further introduced two rationale faithfulness metrics: comprehensiveness and sufficiency. The results reveal that, compared with the "pre-train, fine-tune" procedure, our prompt-based model is better at identifying comprehensive (∼63.93%) and sufficient (∼11.75%) linguistic features in free text. We further evaluated model-agnostic explanations using LIME. The results indicate better rationale agreement between our model and human annotators (average F-1 of ∼71.93%), demonstrating the superior trustworthiness of our model.
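To make the masked-language-completion framing concrete, the following is a minimal illustrative sketch, not the authors' released code: it uses the Hugging Face fill-mask pipeline with a generic BERT checkpoint, and the prompt template, verbalizer words, and model choice are assumptions made only for illustration.

# Minimal sketch: assertion classification as masked-token completion.
# Assumptions: a generic masked LM (bert-base-uncased); a hypothetical
# cloze template "The {entity} is [MASK]."; verbalizer words mapping
# directly to assertion labels (present / absent / possible).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

note = "The patient denies chest pain."
entity = "chest pain"
prompt = f"{note} The {entity} is [MASK]."

# Score only the verbalizer words for the masked slot.
verbalizer_words = ["present", "absent", "possible"]
scores = unmasker(prompt, targets=verbalizer_words)

# The label whose verbalizer word receives the highest fill-in
# probability is taken as the predicted assertion status.
best = max(scores, key=lambda s: s["score"])
print(best["token_str"], best["score"])

In this cloze setup the label is read off from whichever verbalizer word the masked language model scores highest, so no separate classification head has to be trained from scratch.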

DOI: 10.1016/j.jbi.2022.104139
Alternate Journal: J Biomed Inform
PubMed ID: 35811026
PubMed Central ID: PMC9378721
Grant List: R00 LM013001 / LM / NLM NIH HHS / United States