1
Marchi F, Bellini E, Iandelli A, Sampieri C, Peretti G. Exploring the landscape of AI-assisted decision-making in head and neck cancer treatment: a comparative analysis of NCCN guidelines and ChatGPT responses. Eur Arch Otorhinolaryngol 2024; 281:2123-2136. [PMID: 38421392] [DOI: 10.1007/s00405-024-08525-z]
Abstract
PURPOSE Recent breakthroughs in natural language processing and machine learning, exemplified by ChatGPT, have spurred a paradigm shift in healthcare. Released by OpenAI in November 2022, ChatGPT rapidly gained global attention. Trained on massive text datasets, this large language model holds immense potential to revolutionize healthcare. However, existing literature often overlooks the need for rigorous validation and real-world applicability. METHODS This head-to-head comparative study assesses ChatGPT's capabilities in providing therapeutic recommendations for head and neck cancers. Simulating every NCCN Guidelines scenario, ChatGPT is queried on primary treatment, adjuvant treatment, and follow-up, with responses compared to the NCCN Guidelines. Performance metrics, including sensitivity, specificity, and F1 score, are employed for assessment. RESULTS The study includes 68 hypothetical cases and 204 clinical scenarios. ChatGPT exhibits promising capabilities in addressing NCCN-related queries, achieving high sensitivity and overall accuracy across primary treatment, adjuvant treatment, and follow-up. The study's metrics showcase robustness in providing relevant suggestions. However, a few inaccuracies are noted, especially in primary treatment scenarios. CONCLUSION Our study highlights the proficiency of ChatGPT in providing treatment suggestions. The model's alignment with the NCCN Guidelines sets the stage for a nuanced exploration of AI's evolving role in oncological decision support. However, challenges related to the interpretability of AI in clinical decision-making and the importance of clinicians understanding the underlying principles of AI models remain unexplored. As AI continues to advance, collaborative efforts between models and medical experts are deemed essential for unlocking new frontiers in personalized cancer care.
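As an illustrative aside (not taken from the paper), the performance metrics named in this abstract are standard confusion-matrix quantities; a minimal Python sketch of how they are computed:

```python
# Sensitivity, specificity, precision, and F1 from confusion-matrix
# counts. A minimal sketch; the paper's exact scoring protocol for
# matching ChatGPT answers to NCCN recommendations is not reproduced here.

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute common binary-classification metrics from raw counts."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}
```

With multi-scenario studies like this one, such metrics would typically be computed per category (primary, adjuvant, follow-up) and then summarized.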
Affiliation(s)
- Filippo Marchi
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, 16132, Genoa, Italy
- Elisa Bellini
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, 16132, Genoa, Italy
- Andrea Iandelli
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Claudio Sampieri
- Department of Experimental Medicine (DIMES), University of Genoa, Genoa, Italy
- Department of Otolaryngology, Hospital Clínic, Barcelona, Spain
- Functional Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Giorgio Peretti
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, 16132, Genoa, Italy
2
Wasserthal J, Breit HC, Meyer MT, Pradella M, Hinck D, Sauter AW, Heye T, Boll DT, Cyriac J, Yang S, Bach M, Segeroth M. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiol Artif Intell 2023; 5:e230024. [PMID: 37795137] [PMCID: PMC10546353] [DOI: 10.1148/ryai.230024]
Abstract
Purpose To present a deep learning segmentation model that can automatically and robustly segment all major anatomic structures on body CT images. Materials and Methods In this retrospective study, 1204 CT examinations (from 2012, 2016, and 2020) were used to segment 104 anatomic structures (27 organs, 59 bones, 10 muscles, and 8 vessels) relevant for use cases such as organ volumetry, disease characterization, and surgical or radiation therapy planning. The CT images were randomly sampled from routine clinical studies and thus represent a real-world dataset (different ages, abnormalities, scanners, body parts, sequences, and sites). The authors trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance. The trained algorithm was applied to a second dataset of 4004 whole-body CT examinations to investigate age-dependent volume and attenuation changes. Results The proposed model showed a high Dice score (0.943) on the test set, which included a wide range of clinical data with major abnormalities. The model significantly outperformed another publicly available segmentation model on a separate dataset (Dice score, 0.932 vs 0.871; P < .001). The aging study demonstrated significant correlations between age and volume and mean attenuation for a variety of organ groups (eg, age and aortic volume [rs = 0.64; P < .001]; age and mean attenuation of the autochthonous dorsal musculature [rs = -0.74; P < .001]). Conclusion The developed model enables robust and accurate segmentation of 104 anatomic structures. The annotated dataset (https://doi.org/10.5281/zenodo.6802613) and toolkit (https://www.github.com/wasserth/TotalSegmentator) are publicly available.
Keywords: CT, Segmentation, Neural Networks. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Sebro and Mongan in this issue.
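The Dice similarity coefficient reported throughout this abstract measures volumetric overlap between a predicted and a reference mask. A minimal NumPy sketch (illustrative only, not the TotalSegmentator evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For a multi-structure model, the score is computed per structure and then averaged to give a summary value such as the 0.943 quoted above.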
Affiliation(s)
- Jakob Wasserthal, Hanns-Christian Breit, Manfred T. Meyer, Maurice Pradella, Daniel Hinck, Alexander W. Sauter, Tobias Heye, Daniel T. Boll, Joshy Cyriac, Shan Yang, Michael Bach, Martin Segeroth
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
3
Wang C, Cui Z, Yang J, Han M, Carneiro G, Shen D. BowelNet: Joint Semantic-Geometric Ensemble Learning for Bowel Segmentation From Both Partially and Fully Labeled CT Images. IEEE Trans Med Imaging 2023; 42:1225-1236. [PMID: 36449590] [DOI: 10.1109/tmi.2022.3225667]
Abstract
Accurate bowel segmentation is essential for the diagnosis and treatment of bowel cancers. Unfortunately, segmenting the entire bowel in CT images is quite challenging due to unclear boundaries; large variations in shape, size, and appearance; and diverse filling status within the bowel. In this paper, we present a novel two-stage framework, named BowelNet, to handle the challenging task of bowel segmentation in CT images, with two stages: 1) jointly localizing all types of the bowel, and 2) finely segmenting each type of the bowel. Specifically, in the first stage, we learn a unified localization network from both partially and fully labeled CT images to robustly detect all types of the bowel. To better capture unclear bowel boundaries and learn complex bowel shapes, in the second stage we propose to jointly learn semantic information (i.e., the bowel segmentation mask) and geometric representations (i.e., the bowel boundary and bowel skeleton) for fine bowel segmentation in a multi-task learning scheme. Moreover, we further propose to learn a meta segmentation network via pseudo labels to improve segmentation accuracy. Evaluated on a large abdominal CT dataset, our proposed BowelNet method achieves Dice scores of 0.764, 0.848, 0.835, 0.774, and 0.824 in segmenting the duodenum, jejunum-ileum, colon, sigmoid, and rectum, respectively. These results demonstrate the effectiveness of our proposed BowelNet framework in segmenting the entire bowel from CT images.
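The multi-task idea in the second stage (joint mask, boundary, and skeleton supervision) can be sketched as a weighted combination of per-task losses. This is a hedged illustration only: the specific loss terms and weights below are assumptions, not BowelNet's actual objective.

```python
import numpy as np

def multitask_loss(pred_mask, pred_boundary, pred_skeleton,
                   gt_mask, gt_boundary, gt_skeleton,
                   weights=(1.0, 0.5, 0.5)):
    """Weighted sum of a soft-Dice loss on the segmentation mask and
    mean-squared errors on the boundary and skeleton maps (illustrative)."""
    eps = 1e-6
    inter = (pred_mask * gt_mask).sum()
    dice_loss = 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)
    boundary_loss = float(np.mean((pred_boundary - gt_boundary) ** 2))
    skeleton_loss = float(np.mean((pred_skeleton - gt_skeleton) ** 2))
    w_seg, w_bnd, w_skel = weights
    return w_seg * dice_loss + w_bnd * boundary_loss + w_skel * skeleton_loss
```

The auxiliary geometric terms act as regularizers: a network that also has to predict the boundary and skeleton is pushed toward masks with plausible bowel shapes.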
4
Liang F, Wang S, Zhang K, Liu TJ, Li JN. Development of artificial intelligence technology in diagnosis, treatment, and prognosis of colorectal cancer. World J Gastrointest Oncol 2022; 14:124-152. [PMID: 35116107] [PMCID: PMC8790413] [DOI: 10.4251/wjgo.v14.i1.124]
Abstract
Artificial intelligence (AI) technology has advanced by leaps and bounds since its inception. AI can be subdivided into many technologies, such as machine learning and deep learning, whose application scopes and prospects differ considerably. Currently, AI technologies play a pivotal role in the highly complex and wide-ranging medical field, including medical image recognition, biotechnology, auxiliary diagnosis, drug research and development, and nutrition. Colorectal cancer (CRC) is a common gastrointestinal cancer with high mortality, posing a serious threat to human health. Many CRCs arise from the malignant transformation of colorectal polyps, so early diagnosis and treatment are crucial to CRC prognosis. Diagnostic methods for CRC comprise imaging, endoscopy, and pathology, while treatment comprises endoscopic treatment, surgical treatment, and drug treatment. AI technology is still in the "weak AI" era and lacks genuine communication capabilities; it is therefore mainly used for image recognition and auxiliary analysis rather than in-depth communication with patients. This article reviews the application of AI in the diagnosis, treatment, and prognosis of CRC and offers prospects for its broader application.
Affiliation(s)
- Feng Liang
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Shu Wang
- Department of Radiotherapy, Jilin University Second Hospital, Changchun 130041, Jilin Province, China
- Kai Zhang
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Tong-Jun Liu
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Jian-Nan Li
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
5
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549] [PMCID: PMC8173384] [DOI: 10.3748/wjg.v27.i21.2681]
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, research on ANNs in gastrointestinal (GI) diseases has increased sharply. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that machine efficiency and expert-level accuracy may be achieved together by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize their current achievements in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed in plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
6
Gonzalez Y, Shen C, Jung H, Nguyen D, Jiang SB, Albuquerque K, Jia X. Semi-automatic sigmoid colon segmentation in CT for radiation therapy treatment planning via an iterative 2.5-D deep learning approach. Med Image Anal 2021; 68:101896. [PMID: 33383333] [PMCID: PMC7847132] [DOI: 10.1016/j.media.2020.101896]
Abstract
Automatic sigmoid colon segmentation in CT for radiotherapy treatment planning is challenging due to complex organ shape, close proximity to other organs, and large variations in size, shape, and filling status. The bowel is often not evacuated, and CT contrast enhancement is not used, which further increases the difficulty of the problem. Deep learning (DL) has demonstrated its power in many segmentation problems. However, standard 2-D approaches cannot handle the sigmoid segmentation problem due to incomplete geometric information, and 3-D approaches often encounter the challenge of limited training data. Motivated by the way a human reader segments the sigmoid slice by slice while considering connectivity between adjacent slices, we propose an iterative 2.5-D DL approach. We constructed a network that takes as input an axial CT slice, the sigmoid mask on this slice, and an adjacent CT slice to be segmented, and outputs the predicted mask on the adjacent slice. We also considered other organ masks as prior information. We trained the iterative network on 50 patient cases using five-fold cross validation. The trained network was then applied repeatedly to generate masks slice by slice. The method achieved average Dice similarity coefficients of 0.82 ± 0.06 and 0.88 ± 0.02 in 10 test cases without and with the use of prior information, respectively.
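The iterative slice-by-slice idea can be sketched as a propagation loop over the volume. This is a structural illustration only: `predict_next` stands in for the trained 2.5-D network, and the bidirectional sweep from a seeded slice is an assumption for the sketch, not the authors' exact procedure.

```python
import numpy as np

def propagate_masks(volume, seed_slice_idx, seed_mask, predict_next):
    """Propagate a 2-D mask through a 3-D volume slice by slice.

    predict_next(curr_slice, curr_mask, next_slice) -> next_mask
    plays the role of the trained network: given the current slice,
    its mask, and the adjacent slice, it predicts the adjacent mask.
    """
    n_slices = volume.shape[0]
    masks = {seed_slice_idx: seed_mask}
    # sweep upward (+1) and then downward (-1) from the seeded slice,
    # so each prediction is conditioned on its already-segmented neighbor
    for step in (1, -1):
        idx = seed_slice_idx
        while 0 <= idx + step < n_slices:
            masks[idx + step] = predict_next(volume[idx], masks[idx],
                                             volume[idx + step])
            idx += step
    return np.stack([masks[i] for i in range(n_slices)])
```

Because every slice's prediction depends on the previous slice's mask, connectivity between adjacent slices is enforced by construction, which is the core motivation stated in the abstract.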
Affiliation(s)
- Yesenia Gonzalez
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Chenyang Shen
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Hyunuk Jung
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Dan Nguyen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Steve B Jiang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Kevin Albuquerque
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xun Jia
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
7
Häfner SJ. Tumour travel tours - Why circulating cancer cells value company. Biomed J 2020; 43:1-7. [PMID: 32200951] [PMCID: PMC7090313] [DOI: 10.1016/j.bj.2020.02.001]
Abstract
Welcome to the New Year and a new issue of the Biomedical Journal, where we learn that travelling in company boosts the metastatic potential of circulating tumour cells, and that a worm could be an excellent model for studying antidiabetic drugs. In addition, we discover another pair of molecular scissors for genetic engineering, how exactly Leptospira wreaks havoc on its run through the host organism, and that hyperparathyroidism brings its own risks but does not worsen the outcome of papillary thyroid carcinoma. Furthermore, the importance of accounting for differing beauty ideals in aesthetic surgery surveys is discussed, alongside the question of how serious isolated local recurrence is in HR+ breast cancer. Finally, we find out that virtual colonoscopy deserves more credit, that the first medical experiment in space was all about the H-reflex, and that it is possible to survive advanced necrotising fasciitis of the face and neck.
Affiliation(s)
- Sophia Julia Häfner
- University of Copenhagen, BRIC Biotech Research & Innovation Centre, Anders Lund Group, Copenhagen, Denmark
8
Xiao H, Qi L, Xu L, Li D, Hu B, Zhao P, Ren H, Huang J. Estimation of wave reflection in aorta from radial pulse waveform by artificial neural network: a numerical study. Comput Methods Programs Biomed 2019; 182:105064. [PMID: 31518768] [DOI: 10.1016/j.cmpb.2019.105064]
Abstract
BACKGROUND AND OBJECTIVE Wave reflection in the aorta has been shown to have incremental value for predicting cardiovascular events. However, its estimation by wave separation analysis (WSA) is complex. METHODS In this study, a novel method based on a cascade artificial neural network (ANN) was proposed for estimating wave reflection from the frequency features of the radial pressure waveform alone. A simulation database of 4000 samples was generated by a 55-segment transmission line model of the human arterial tree and used to evaluate the ANN, with 10-fold cross validation, for estimating the reflection magnitude (RMANN) and reflection index (RIANN) of wave reflection in the aorta. RM and RI were also estimated by WSA with a triangular aortic flow waveform (RMWSA and RIWSA) and with a real aortic flow waveform (RMRef and RIRef) as reference values. RESULTS The correlation coefficient and mean difference between RMANN and RMRef (R2 = 0.92, mean ± standard deviation (SD) = 0.0 ± 0.02) and between RIANN and RIRef (R2 = 0.91, mean ± SD = 0.0 ± 0.01) were better than those between RMWSA and RMRef (R2 = 0.51, mean ± SD = 0.01 ± 0.07) and between RIWSA and RIRef (R2 = 0.50, mean ± SD = 0.0 ± 0.02). When the sample diversity in the simulation database was increased while the total number of samples was kept constant, the advantage of the ANN, though slightly reduced, became even more pronounced relative to WSA (RMANN vs RMRef and RIANN vs RIRef: R2 = 0.88 and 0.88, mean ± SD = 0.0 ± 0.05 and 0.0 ± 0.05; RMWSA vs RMRef and RIWSA vs RIRef: R2 = 0.24 and 0.24, mean ± SD = 0.07 ± 0.24 and 0.02 ± 0.08, respectively). In addition, the ANN achieved better results than traditional WSA even when only two hidden neurons were used. CONCLUSIONS The ANN is a promising method for estimating wave reflection in the aorta from a single radial pulse waveform, but further validation in clinical trials is needed.
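The comparisons in this abstract rest on two quantities per method: the squared Pearson correlation (R2) between estimates and reference values, and the mean ± SD of the pairwise differences. A minimal sketch of how such agreement statistics are computed (illustrative; not the authors' code):

```python
import statistics

def agreement_stats(estimated, reference):
    """Return (r_squared, mean_diff, sd_diff) for two paired series:
    Pearson R^2 plus the mean and SD of (estimate - reference)."""
    n = len(estimated)
    mx = sum(estimated) / n
    my = sum(reference) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(estimated, reference))
    var_x = sum((a - mx) ** 2 for a in estimated)
    var_y = sum((b - my) ** 2 for b in reference)
    r = cov / (var_x * var_y) ** 0.5
    diffs = [a - b for a, b in zip(estimated, reference)]
    return r * r, statistics.mean(diffs), statistics.stdev(diffs)
```

A high R2 with a near-zero mean difference and small SD (as reported for the ANN here) indicates both strong correlation and close absolute agreement with the reference.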
Affiliation(s)
- Hanguang Xiao
- College of Artificial Intelligent, Chongqing University of Technology, No. 69 Hongguang Rd, Banan, Chongqing 400050, PR China
- Lin Qi
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, LiaoNing 110167, PR China
- Lisheng Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, LiaoNing 110167, PR China
- Decai Li
- Sichuan Mianyang 404 Hospital, No. 56 Yuejing Road, Fucheng District, Mianyang, Sichuan 400050, PR China
- Bo Hu
- Sichuan Mianyang 404 Hospital, No. 56 Yuejing Road, Fucheng District, Mianyang, Sichuan 400050, PR China
- Pengdong Zhao
- College of Artificial Intelligent, Chongqing University of Technology, No. 69 Hongguang Rd, Banan, Chongqing 400050, PR China
- Huijiao Ren
- College of Artificial Intelligent, Chongqing University of Technology, No. 69 Hongguang Rd, Banan, Chongqing 400050, PR China
- Jinfeng Huang
- College of Artificial Intelligent, Chongqing University of Technology, No. 69 Hongguang Rd, Banan, Chongqing 400050, PR China
9
Xiao H, Butlin M, Tan I, Qasem A, Avolio AP. Estimation of Pulse Transit Time From Radial Pressure Waveform Alone by Artificial Neural Network. IEEE J Biomed Health Inform 2017; 22:1140-1147. [PMID: 28880196] [DOI: 10.1109/jbhi.2017.2748280]
Abstract
OBJECTIVE To validate the feasibility of estimating pulse transit time (PTT) by artificial neural network (ANN) from the radial pressure waveform alone. METHODS A cascade ANN with ten-fold cross validation was applied to invasively and simultaneously recorded aortic and radial pressure waveforms, during rest and nitroglycerin infusion, to estimate mean and beat-to-beat PTT. The ANN results were compared to those of a multiple linear regression (LR) model, with time- and frequency-domain features of the radial arterial pressure waveform used as the predictors of both models. RESULTS For both mean and beat-to-beat PTT, the correlation coefficient between the ANN-estimated and the measured PTT was higher than that between the LR-estimated and the measured PTT. The standard deviation (SD) of the difference between the ANN estimate and the measured PTT was significantly less than that of the LR model (beat-to-beat: 10 ms), with no significant difference between their means. Omitting the frequency features of the radial pressure waveform caused an obvious degradation in both the correlation coefficient and the SD of the difference. The performance of the ANN was improved by increasing the sample number but not by increasing the neuron number. CONCLUSION The ANN is a potential method for PTT estimation from a single pressure measurement at the radial artery.
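The ten-fold cross validation used here partitions the samples into ten folds and rotates the held-out fold, so every sample is tested exactly once. A generic index-splitting sketch (illustrative; not the authors' implementation):

```python
def kfold_indices(n_samples: int, k: int = 10):
    """Yield (train_indices, test_indices) for k-fold cross validation.
    Fold sizes differ by at most one when k does not divide n_samples."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    idx = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size
```

In practice the indices would be shuffled before splitting; each fold's train set would fit the cascade ANN and the test set would supply the held-out PTT estimates.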