1. Spinos D, Martinos A, Petsiou DP, Mistry N, Garas G. Artificial Intelligence in Temporal Bone Imaging: A Systematic Review. Laryngoscope 2024. PMID: 39352072. DOI: 10.1002/lary.31809.
Abstract
OBJECTIVE: The human temporal bone comprises more than 30 identifiable anatomical components. With the demand for precise image interpretation in this complex region, the use of artificial intelligence (AI) applications is steadily increasing. This systematic review aims to highlight the current role of AI in temporal bone imaging.
DATA SOURCES: A systematic review of English-language publications searching MEDLINE (PubMed), the Cochrane Library, and EMBASE.
REVIEW METHODS: The search algorithm employed key terms such as 'artificial intelligence,' 'machine learning,' 'deep learning,' 'neural network,' 'temporal bone,' and 'vestibular schwannoma.' Additionally, manual retrieval was conducted to capture any studies potentially missed in the initial search. All abstracts and full texts were screened against the inclusion and exclusion criteria.
RESULTS: A total of 72 studies were included; 95.8% were retrospective and 88.9% were based on internal databases. Approximately two-thirds involved an AI-to-human comparison. Computed tomography (CT) was the imaging modality in 54.2% of the studies, with vestibular schwannoma (VS) the most frequent study item (37.5%). Fifty-eight of the 72 articles employed neural networks, with 72.2% using various types of convolutional neural network models. Quality assessment of the included publications yielded a mean score of 13.6 ± 2.5 on a 20-point scale based on the CONSORT-AI extension.
CONCLUSION: Current research data highlight AI's potential to enhance diagnostic accuracy, with faster results and fewer performance errors than clinicians, thus improving patient care. However, the shortcomings of the existing research, often marked by heterogeneity and variable quality, underscore the need for more standardized methodological approaches to ensure the consistency and reliability of future data.
LEVEL OF EVIDENCE: NA.
Affiliation(s)
- Dimitrios Spinos
- South Warwickshire NHS Foundation Trust, Warwick, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anastasios Martinos
- National and Kapodistrian University of Athens School of Medicine, Athens, Greece
- Nina Mistry
- Gloucestershire Hospitals NHS Foundation Trust, ENT, Head and Neck Surgery, Gloucester, UK
- George Garas
- Surgical Innovation Centre, Department of Surgery and Cancer, Imperial College London, St. Mary's Hospital, London, UK
- Athens Medical Center, Marousi & Psychiko Clinic, Athens, Greece

2. Li J, Lin Y, Zhao P, Liu W, Cai L, Sun J, Zhao L, Yang Z, Song H, Lv H, Wang Z. Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT). BMC Med Inform Decis Mak 2022; 22:200. PMID: 35907966. PMCID: PMC9338483. DOI: 10.1186/s12911-022-01946-y.
Abstract
Background: Given the increasing number of people suffering from tinnitus, accurate categorization of patients with actionable reports is attractive for assisting clinical decision making. However, this process requires experienced physicians and significant human labor. Natural language processing (NLP) has shown great potential in big-data analytics of medical texts; yet its application to domain-specific analysis of radiology reports is limited.
Objective: To propose a novel approach to classifying actionable radiology reports of tinnitus patients using BERT-based (bidirectional encoder representations from transformers) models, and to evaluate the benefits of in-domain pre-training (IDPT) along with a sequence adaptation strategy.
Methods: A total of 5864 temporal bone computed tomography (CT) reports were labeled by two experienced radiologists as follows: (1) normal findings without notable lesions; (2) notable lesions but uncorrelated to tinnitus; and (3) at least one lesion considered a potential cause of tinnitus. We then constructed a framework consisting of deep learning (DL) neural networks and self-supervised BERT models. A tinnitus domain-specific corpus was used to pre-train the BERT model to further improve its embedding weights. In addition, we evaluated multiple max-sequence-length settings in BERT to reduce the excessive quantity of calculations. After a comprehensive comparison of all metrics, we determined the most promising approach through comparison of F1 scores and AUC values.
Results: In the first experiment, the fine-tuned BERT model achieved a more promising result (AUC 0.868, F1 0.760) than the Word2Vec-based models (AUC 0.767, F1 0.733) on validation data. In the second experiment, the BERT in-domain pre-training model (AUC 0.948, F1 0.841) performed significantly better than the base BERT model (AUC 0.868, F1 0.760). Additionally, among the variants of BERT fine-tuning models, Mengzi achieved the highest AUC of 0.878 (F1 0.764). Finally, we found that a BERT max sequence length of 128 tokens achieved an AUC of 0.866 (F1 0.736), almost equal to that of the 512-token max sequence length (AUC 0.868, F1 0.760).
Conclusion: We developed a reliable BERT-based framework for tinnitus diagnosis from Chinese radiology reports, along with a sequence adaptation strategy that reduces computational resources while maintaining accuracy. The findings could provide a reference for NLP development in Chinese radiology reports.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12911-022-01946-y.
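The sequence adaptation result (a 128-token cap performing almost as well as 512 tokens) amounts to bounding input length before classification. Below is a minimal stdlib sketch of that idea, assuming naive whitespace tokenization and made-up report text; the actual pipeline applies BERT's WordPiece tokenizer to Chinese reports, and the quadratic attention-cost model here is a deliberate simplification.

```python
# Sketch of the max-sequence-length adaptation: cap every report at
# max_len tokens before it reaches the classifier. Whitespace
# tokenization and the reports themselves are illustrative stand-ins.

def truncate_report(text: str, max_len: int = 128) -> list:
    """Tokenize naively and keep at most max_len tokens."""
    return text.split()[:max_len]

def batch_attention_cost(reports: list, max_len: int) -> int:
    """Transformer self-attention scales roughly with the square of the
    sequence length, so truncation bounds the per-report cost."""
    return sum(len(truncate_report(r, max_len)) ** 2 for r in reports)

reports = ["bilateral mastoid effusion " * 100,  # 300 tokens before the cap
           "normal temporal bone study"]         # 4 tokens
print(batch_attention_cost(reports, 128))  # 128**2 + 4**2 = 16400
print(batch_attention_cost(reports, 512))  # 300**2 + 4**2 = 90016
```

In the study, this kind of cap cut computation while costing almost nothing in accuracy (AUC 0.866 at 128 tokens vs. 0.868 at 512).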
Affiliation(s)
- Jia Li
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, No. 5 Zhongguancun East Road, Beijing, 100050, People's Republic of China
- Pengfei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Wenjuan Liu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Linkun Cai
- School of Biological Science and Medical Engineering, Beihang University, No. 37 XueYuan Road, Beijing, 100191, People's Republic of China
- Jing Sun
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Lei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, No. 5 South Street, Zhongguancun, Haidian District, Beijing, 100050, People's Republic of China
- Han Lv
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Zhenchang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- School of Biological Science and Medical Engineering, Beihang University, No. 37 XueYuan Road, Beijing, 100191, People's Republic of China

3. Linna N, Kahn CE. Applications of Natural Language Processing in Radiology: A Systematic Review. Int J Med Inform 2022; 163:104779. DOI: 10.1016/j.ijmedinf.2022.104779.

4. Kumar A, Goodrum H, Kim A, Stender C, Roberts K, Bernstam EV. Closing the loop: automatically identifying abnormal imaging results in scanned documents. J Am Med Inform Assoc 2022; 29:831-840. PMID: 35146510. PMCID: PMC9714594. DOI: 10.1093/jamia/ocac007.
Abstract
OBJECTIVES: Scanned documents (SDs), while common in electronic health records and potentially rich in clinically relevant information, rarely fit well with clinician workflow. Here, we identify scanned imaging reports requiring follow-up with high recall and practically useful precision.
MATERIALS AND METHODS: We focused on identifying imaging findings for three common causes of malpractice claims: (1) potentially malignant breast lesions (mammography), (2) potentially malignant lung lesions (chest computed tomography [CT]), and (3) long-bone fractures (X-ray). We trained our ClinicalBERT-based pipeline on existing typed/dictated reports classified manually or using ICD-10 codes, evaluated it on a test set of manually classified SDs, and compared it against string matching (the baseline approach).
RESULTS: A total of 393 mammogram, 305 chest CT, and 683 bone X-ray reports were manually reviewed. The string-matching approach had an F1 of 0.667. For mammograms, chest CTs, and bone X-rays, respectively: models trained on manually classified training data and optimized for F1 reached F1 scores of 0.900, 0.905, and 0.817, while separate models optimized for recall achieved a recall of 1.000 with precisions of 0.727, 0.518, and 0.275. Models trained on ICD-10-labeled data and optimized for F1 achieved F1 scores of 0.647, 0.830, and 0.643, while those optimized for recall achieved a recall of 1.0 with precisions of 0.407, 0.683, and 0.358.
DISCUSSION: Our pipeline can identify abnormal reports with potentially useful performance and thus decrease the manual effort required to screen for abnormal findings that require follow-up.
CONCLUSION: It is possible to automatically identify clinically significant abnormalities in SDs with high recall and practically useful precision in a generalizable and minimally laborious way.
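The string-matching baseline the authors compare against can be pictured as a trigger-phrase search tuned for recall. This is a hedged stdlib sketch with hypothetical trigger terms and toy reports; the real pipeline is a ClinicalBERT model trained on typed/dictated reports.

```python
# Toy string-matching baseline: flag a report as needing follow-up if it
# contains any trigger phrase. Trigger terms and reports are hypothetical.

TRIGGERS = ("suspicious", "nodule", "fracture", "mass")

def flag(report: str) -> bool:
    text = report.lower()
    return any(t in text for t in TRIGGERS)

def recall_precision(reports, labels):
    """labels[i] is True when report i truly needs follow-up."""
    preds = [flag(r) for r in reports]
    tp = sum(p and y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    fp = sum(p and (not y) for p, y in zip(preds, labels))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

reports = [
    "Spiculated mass in the left breast.",          # abnormal, flagged
    "No acute fracture or dislocation.",            # normal, but flagged
    "Stable 4 mm nodule, recommend follow-up CT.",  # abnormal, flagged
    "Lungs are clear.",                             # normal, not flagged
]
labels = [True, False, True, False]
print(recall_precision(reports, labels))  # (1.0, 0.666...)
```

The false positive on the negated "fracture" statement illustrates why string matching reaches perfect recall only at the cost of precision, the trade-off the learned models in the study improve on.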
Affiliation(s)
- Akshat Kumar
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Heath Goodrum
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Ashley Kim
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Carly Stender
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Kirk Roberts
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Elmer V Bernstam
- Corresponding author: Elmer V. Bernstam, MD, MSE, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, 7000 Fannin Street, Suite 600, Houston, TX 77030, USA

5. Casey A, Davidson E, Poon M, Dong H, Duma D, Grivas A, Grover C, Suárez-Paniagua V, Tobin R, Whiteley W, Wu H, Alex B. A systematic review of natural language processing applied to radiology reports. BMC Med Inform Decis Mak 2021; 21:179. PMID: 34082729. PMCID: PMC8176715. DOI: 10.1186/s12911-021-01533-7.
Abstract
BACKGROUND: Natural language processing (NLP) has a significant role in advancing healthcare and has been found to be key in extracting structured information from radiology reports. Understanding recent developments in the application of NLP to radiology is important, but recent reviews are limited. This study systematically assesses and quantifies the recent literature on NLP applied to radiology reports.
METHODS: We conducted an automated literature search yielding 4836 results, using automated filtering, metadata-enriching steps, and citation search combined with manual review. Our analysis is based on 21 variables including radiology characteristics, NLP methodology, performance, study characteristics, and clinical application characteristics.
RESULTS: We present a comprehensive analysis of the 164 publications retrieved, with publications in 2019 almost triple those in 2015. Each publication is categorized into one of six clinical application categories. Deep learning use increased over the period, but conventional machine learning approaches remain prevalent. Deep learning remains challenged when data are scarce, and there is little evidence of adoption into clinical practice. Although 17% of studies report F1 scores greater than 0.85, these approaches are hard to compare because most use different datasets. Only 14 studies made their data available and 15 their code, with 10 externally validating their results.
CONCLUSIONS: Automated understanding of the clinical narratives of radiology reports has the potential to enhance the healthcare process, and we show that research in this field continues to grow. Reproducibility and explainability of models are important if the domain is to move applications into clinical use. More could be done to share code, enabling validation of methods on different institutional data, and to reduce heterogeneity in the reporting of study properties, allowing inter-study comparisons. Our results are significant for researchers in the field, providing a systematic synthesis of existing work to build on and helping to identify gaps and opportunities for collaboration and to avoid duplication.
Affiliation(s)
- Arlene Casey
- School of Literatures, Languages and Cultures (LLC), University of Edinburgh, Edinburgh, Scotland
- Emma Davidson
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, Scotland
- Michael Poon
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, Scotland
- Hang Dong
- Centre for Medical Informatics, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, Scotland
- Health Data Research UK, London, UK
- Daniel Duma
- School of Literatures, Languages and Cultures (LLC), University of Edinburgh, Edinburgh, Scotland
- Andreas Grivas
- Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, Edinburgh, Scotland
- Claire Grover
- Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, Edinburgh, Scotland
- Víctor Suárez-Paniagua
- Centre for Medical Informatics, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, Scotland
- Health Data Research UK, London, UK
- Richard Tobin
- Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, Edinburgh, Scotland
- William Whiteley
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, Scotland
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Honghan Wu
- Health Data Research UK, London, UK
- Institute of Health Informatics, University College London, London, UK
- Beatrice Alex
- School of Literatures, Languages and Cultures (LLC), University of Edinburgh, Edinburgh, Scotland
- Edinburgh Futures Institute, University of Edinburgh, Edinburgh, Scotland

6. Short RG, Bralich J, Bogaty D, Befera NT. Comprehensive Word-Level Classification of Screening Mammography Reports Using a Neural Network Sequence Labeling Approach. J Digit Imaging 2020; 32:685-692. PMID: 30338478. DOI: 10.1007/s10278-018-0141-4.
Abstract
Radiology reports contain a large amount of potentially valuable unstructured data. Recently, neural networks have been employed to classify radiology reports over a few classes at the document level. The success of neural networks in sequence-labeling problems such as named entity recognition and part-of-speech tagging suggests that they could be used to classify radiology report text with greater granularity. We employed a neural network architecture to comprehensively classify mammography report text at the word level using a sequence-labeling approach. Two radiologists devised a comprehensive classification system for screening mammography reports. Each word in each report was manually categorized by a radiologist into one of 33 categories according to the classification system. Tagged words referencing the same finding were grouped into unique sets. We pre-labeled reports with a rule-based algorithm and then manually edited these annotations for 6705 screening mammography reports (25.1%, 66.8%, and 8.1% BI-RADS 0, 1, and 2, respectively). A combined convolutional and recurrent neural network model was used to label words in each sentence of the individual reports. A Siamese recurrent neural network was then used to group findings into sets. Performance of the neural network-based method was compared to a rule-based algorithm and a conditional random field (CRF) model. Global accuracy (percentage of documents where all word tags were predicted correctly) and keyword accuracy (percentage of all words labeled correctly, excluding words tagged as unimportant) were calculated on an unseen 519-report test set. Two-tailed t tests were used to assess differences in algorithm performance, and p < 0.05 was used to determine statistical significance. The neural network-based approach showed significantly higher global accuracy than both the rule-based algorithm (88.3% vs. 57.0%, p < 0.001) and the CRF model (88.3% vs. 75.8%, p < 0.001). The neural network also showed significantly higher keyword-level accuracy than the rule-based algorithm (95.5% vs. 80.9%, p < 0.001) and the CRF model (95.5% vs. 76.9%, p < 0.001). We demonstrate the potential of neural networks to accurately perform word-level multilabel classification of free-text radiology reports across 33 classes, showing the utility of a sequence-labeling approach to NLP of radiology reports. A neural network classifier outperforms a rule-based algorithm and a CRF classifier for comprehensive multilabel classification of free-text screening mammography reports at the word level. By approaching radiology report classification as a sequence-labeling problem, we demonstrate the ability of neural networks to extract data from free-text radiology reports at a level of granularity not previously reported.
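The study's two metrics, global accuracy and keyword accuracy, are easy to conflate. This small sketch computes both for a toy rule-based tagger; the five-tag lexicon and reports are hypothetical stand-ins for the paper's 33-category system.

```python
# Word-level sequence labeling with a toy rule-based tagger, plus the two
# metrics from the study: global accuracy (all tags in a report correct)
# and keyword accuracy (correct tags among words whose gold label is not
# "unimportant"). Lexicon and tag set are illustrative only.

LEXICON = {"mass": "FINDING", "calcifications": "FINDING",
           "left": "LATERALITY", "right": "LATERALITY",
           "breast": "LOCATION"}

def rule_tag(words):
    return [LEXICON.get(w.lower(), "UNIMPORTANT") for w in words]

def global_accuracy(pred_docs, gold_docs):
    return sum(p == g for p, g in zip(pred_docs, gold_docs)) / len(gold_docs)

def keyword_accuracy(pred_docs, gold_docs):
    pairs = [(p, g) for pd, gd in zip(pred_docs, gold_docs)
             for p, g in zip(pd, gd) if g != "UNIMPORTANT"]
    return sum(p == g for p, g in pairs) / len(pairs)

docs = [["Mass", "in", "left", "breast"], ["Scattered", "calcifications"]]
gold = [["FINDING", "UNIMPORTANT", "LATERALITY", "LOCATION"],
        ["DESCRIPTOR", "FINDING"]]
pred = [rule_tag(d) for d in docs]
# The tagger is perfect on report 1 but misses "Scattered" in report 2,
# so one wrong word sinks that whole document's global score.
print(global_accuracy(pred, gold), keyword_accuracy(pred, gold))  # 0.5 0.8
```

This asymmetry is why the rule-based baseline in the study scores far lower on global accuracy (57.0%) than on keyword accuracy (80.9%): a single mistagged word fails the entire report.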
Affiliation(s)
- Ryan G Short
- Department of Radiology, Duke University Medical Center, 2301 Erwin Road, Box 3808, Durham, NC, 27710, USA
- Nicholas T Befera
- Department of Radiology, Duke University Medical Center, 2301 Erwin Road, Box 3808, Durham, NC, 27710, USA

7. Spandorfer A, Branch C, Sharma P, Sahbaee P, Schoepf UJ, Ravenel JG, Nance JW. Deep learning to convert unstructured CT pulmonary angiography reports into structured reports. Eur Radiol Exp 2019; 3:37. PMID: 31549323. PMCID: PMC6757071. DOI: 10.1186/s41747-019-0118-1.
Abstract
Background: Structured reports have been shown to improve communication between radiologists and providers. However, some radiologists are concerned about resultant decreased workflow efficiency. We tested a machine learning-based algorithm designed to convert unstructured computed tomography pulmonary angiography (CTPA) reports into structured reports.
Methods: A self-supervised convolutional neural network-based algorithm was trained on a dataset of 475 manually structured CTPA reports. Labels for individual statements included "pulmonary arteries," "lungs and airways," "pleura," "mediastinum and lymph nodes," "cardiovascular," "soft tissues and bones," "upper abdomen," and "lines/tubes." The algorithm was applied to a test set of 400 unstructured CTPA reports, generating a predicted label for each statement, which was evaluated by two independent observers. Per-statement accuracy was calculated based on strict criteria (the algorithm's label counted as correct if the statement unequivocally contained content related only to that label) and modified criteria accounting for problematic statements, including typographical errors, statements that did not fit well into the classification scheme, statements containing content for multiple labels, etc.
Results: Of the 4,157 statements, 3,806 (91.6%) and 3,986 (95.9%) were correctly labeled by the algorithm using strict and modified criteria, respectively, while 274 (6.6%) were problematic for the manual observers to label, the majority of which (n = 173) were due to more than one section being included in one statement.
Conclusion: This algorithm showed high accuracy in converting free-text findings into structured reports, which could improve communication between radiologists and clinicians without loss of productivity and provide more structured data for research/data mining applications.
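The task shape, assigning each free-text statement to one structured-report section, can be illustrated with a toy keyword router. The keywords and statements below are hypothetical; the published system learns this mapping with a self-supervised convolutional network rather than fixed rules.

```python
# Toy router mapping free-text statements to structured-report sections.
# Section names follow the study's label set; the keyword lists are
# made-up stand-ins for what the trained network learns.

SECTIONS = {
    "pulmonary arteries": ("embol", "pulmonary arter"),
    "lungs and airways": ("lung", "airway", "bronch"),
    "pleura": ("pleura", "effusion"),
    "cardiovascular": ("heart", "aorta", "cardiac"),
}

def route(statement: str) -> str:
    """Return the first section whose keywords appear in the statement."""
    text = statement.lower()
    for section, keys in SECTIONS.items():
        if any(k in text for k in keys):
            return section
    return "other"

report = [
    "No filling defect to suggest pulmonary embolism.",
    "Lungs are clear without consolidation.",
    "Small right pleural effusion.",
]
for stmt in report:
    print(f"[{route(stmt)}] {stmt}")
```

Statements mentioning multiple sections at once fall through to a single label here, which mirrors the study's main source of disagreement (173 statements spanning more than one section).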
Affiliation(s)
- Adam Spandorfer
- Department of Radiology, Medical University of South Carolina, 171 Ashley Avenue, Charleston, SC, 29425, USA
- Cody Branch
- Department of Radiology, Medical University of South Carolina, 171 Ashley Avenue, Charleston, SC, 29425, USA
- Puneet Sharma
- Siemens Medical Solutions USA, Inc., 40 Liberty Boulevard, Malvern, PA, 19355, USA
- Pooyan Sahbaee
- Siemens Medical Solutions USA, Inc., 40 Liberty Boulevard, Malvern, PA, 19355, USA
- U Joseph Schoepf
- Department of Radiology, Medical University of South Carolina, 171 Ashley Avenue, Charleston, SC, 29425, USA
- James G Ravenel
- Department of Radiology, Medical University of South Carolina, 171 Ashley Avenue, Charleston, SC, 29425, USA
- John W Nance
- Department of Radiology, Medical University of South Carolina, 171 Ashley Avenue, Charleston, SC, 29425, USA

8. Becker A. Artificial intelligence in medicine: What is it doing for us today? Health Policy and Technology 2019. DOI: 10.1016/j.hlpt.2019.03.004.

9. Automatic Disease Annotation From Radiology Reports Using Artificial Intelligence Implemented by a Recurrent Neural Network. AJR Am J Roentgenol 2019; 212:734-740. DOI: 10.2214/ajr.18.19869.

10. Selby LV, Narain WR, Russo A, Strong VE, Stetson P. Autonomous detection, grading, and reporting of postoperative complications using natural language processing. Surgery 2018; 164:1300-1305. PMID: 30056994. DOI: 10.1016/j.surg.2018.05.008.
Abstract
INTRODUCTION: Natural language processing, a computer science technique that allows interpretation of narrative text, is infrequently used to identify surgical complications. We designed a natural language processing algorithm to identify and grade the severity of deep venous thrombosis and pulmonary embolism (together, venous thromboembolism).
METHODS: Patients from our 2011-2014 American College of Surgeons National Surgical Quality Improvement Project cohorts with a duplex ultrasound or a computerized tomography angiography of the chest performed within 30 days of surgery were divided into training and validation datasets. A "bag of words" approach classified the reports; other electronic health record data classified each venous thromboembolism's severity.
RESULTS: Of the 10,295 American College of Surgeons National Surgical Quality Improvement Project patients, 251 were used in our deep venous thrombosis validation cohort (273 total ultrasounds) and 506 in our pulmonary embolism cohort (552 total computerized tomography angiographies). For deep venous thromboses, the sensitivity and specificity were 85.1% and 94.6%; for pulmonary embolisms, they were 90% and 98.7%. Most discordances were due to lack of imaging documentation of a deep venous thrombosis (28/41, 68.3%) or pulmonary embolism (6/6, 100%). Most deep venous thromboses (28 patients, 54.6%) and pulmonary embolisms (25 patients, 75.8%) required administration of therapeutic intravenous or subcutaneous anticoagulation.
CONCLUSION: Natural language processing can reliably detect the presence of postoperative venous thromboembolisms, and its use should be expanded to the detection of other conditions from narrative documentation.
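The "bag of words" step can be sketched as word-count vectors scored against a term list, with sensitivity and specificity computed as in the abstract. The terms and reports here are hypothetical; the study learned its classifier from labeled reports in the NSQIP cohorts rather than using a hand-picked list.

```python
# Toy bag-of-words VTE detector: represent each report as a word-count
# vector, score it against positive-finding terms, then compute
# sensitivity and specificity. Terms and reports are illustrative only.
from collections import Counter

POSITIVE_TERMS = {"thrombus", "occlusive", "embolism", "dvt"}

def bag_of_words(report: str) -> Counter:
    return Counter(report.lower().replace(".", "").split())

def predict_vte(report: str) -> bool:
    bow = bag_of_words(report)
    return sum(bow[t] for t in POSITIVE_TERMS) > 0

def sensitivity_specificity(reports, labels):
    """labels[i] is True when report i documents a true VTE."""
    preds = [predict_vte(r) for r in reports]
    tp = sum(p and y for p, y in zip(preds, labels))
    tn = sum((not p) and (not y) for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    fp = sum(p and (not y) for p, y in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

reports = ["Acute occlusive thrombus in the femoral vein.",
           "No evidence of deep venous thrombosis.",
           "Segmental pulmonary embolism on CTA.",
           "Patent veins with normal compressibility."]
labels = [True, False, True, False]
print(sensitivity_specificity(reports, labels))  # (1.0, 1.0)
```

Note the fixed term list handles the negated report here only by a quirk of morphology ("thrombosis" is not "thrombus"); learning term weights from labeled reports is what lets the published approach cope with negation at scale.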
Affiliation(s)
- Luke V Selby
- Department of Surgery, Memorial Sloan Kettering Cancer Center
- Wazim R Narain
- Department of Surgery Health Informatics, Memorial Sloan Kettering Cancer Center
- Ashley Russo
- Department of Surgery, Memorial Sloan Kettering Cancer Center
- Vivian E Strong
- Department of Surgery, Memorial Sloan Kettering Cancer Center
- Peter Stetson
- Department of Surgery Health Informatics, Memorial Sloan Kettering Cancer Center

11. Névéol A, Zweigenbaum P. Making Sense of Big Textual Data for Health Care: Findings from the Section on Clinical Natural Language Processing. Yearb Med Inform 2017; 26:228-234. PMID: 29063569. PMCID: PMC6239234. DOI: 10.15265/iy-2017-027.
Abstract
Objectives: To summarize recent research and present a selection of the best papers published in 2016 in the field of clinical Natural Language Processing (NLP).
Method: A survey of the literature was performed by the two section editors of the IMIA Yearbook NLP section. Bibliographic databases were searched for papers with a focus on NLP efforts applied to clinical texts or aimed at a clinical outcome. Papers were automatically ranked and then manually reviewed based on titles and abstracts. A shortlist of candidate best papers was first selected by the section editors before being peer-reviewed by independent external reviewers.
Results: The five clinical NLP best papers provide contributions that range from emerging original foundational methods to transitioning solid, established research results into practical clinical settings. They offer a framework for abbreviation disambiguation and coreference resolution, a classification method to identify clinically useful sentences, an analysis of counseling conversations to improve support for patients with mental disorders, and grounding of gradable adjectives.
Conclusions: Clinical NLP continued to thrive in 2016, with an increasing number of contributions towards applications compared to fundamental methods. Fundamental work addresses increasingly complex problems such as lexical semantics, coreference resolution, and discourse analysis. Research results translate into freely available tools, mainly for English.
Affiliation(s)
- A. Névéol
- LIMSI, CNRS, Université Paris Saclay, Orsay, France

12. Cocos A, Qian T, Callison-Burch C, Masino AJ. Crowd control: Effectively utilizing unscreened crowd workers for biomedical data annotation. J Biomed Inform 2017; 69:86-92. DOI: 10.1016/j.jbi.2017.04.003.

13. Khozeimeh F, Alizadehsani R, Roshanzamir M, Khosravi A, Layegh P, Nahavandi S. An expert system for selecting wart treatment method. Comput Biol Med 2017; 81:167-175. DOI: 10.1016/j.compbiomed.2017.01.001.