1. Branch F, Williams KM, Santana IN, Hegdé J. How well do practicing radiologists interpret the results of CAD technology? A quantitative characterization. Cogn Res Princ Implic 2022; 7:52. PMID: 35723763; PMCID: PMC9209598; DOI: 10.1186/s41235-022-00375-9.
Abstract
Many studies have shown that using a computer-aided detection (CAD) system does not significantly improve diagnostic accuracy in radiology, possibly because radiologists fail to interpret the CAD results properly. We tested this possibility using screening mammography as an illustrative example. We carried out two experiments, one using 28 practicing radiologists and a second using 25 non-professional subjects. During each trial, subjects were shown the following four pieces of information necessary for evaluating the actual probability of cancer in a given unseen mammogram: the binary decision of the CAD system as to whether the mammogram was positive for cancer, the true-positive and false-positive rates of the system, and the prevalence of breast cancer in the relevant patient population. Based only on this information, the subjects had to estimate the probability that the unseen mammogram in question was positive for cancer. Additionally, the non-professional subjects also had to decide, based on the same information, whether to recall the patients for additional testing. Both groups of subjects similarly (and significantly) overestimated the cancer probability regardless of the categorical CAD decision, suggesting that this effect is not peculiar to either group. The misestimations were not fully attributable to causes well-known in other contexts, such as base rate neglect or inverse fallacy. Non-professional subjects tended to recall the patients at high rates, even when the actual probability of cancer was at or near zero. Moreover, the recall rates closely reflected the subjects' estimations of cancer probability. Together, our results show that subjects interpret CAD system output poorly when only the probabilistic information about the underlying decision parameters is available to them.
Our results also highlight the need for making the output of CAD systems more readily interpretable, and for providing training and assistance to radiologists in evaluating the output.
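The estimation task the subjects faced reduces to a straight application of Bayes' theorem over the four quantities they were given. A minimal sketch (the function and the illustrative numbers are ours, not the study's materials):

```python
def posterior_cancer_prob(cad_positive, tpr, fpr, prevalence):
    """Probability that the mammogram is positive for cancer, given the
    CAD system's binary call, its true/false-positive rates, and the
    population prevalence (straight application of Bayes' theorem)."""
    if cad_positive:
        p_call = tpr * prevalence + fpr * (1 - prevalence)
        return tpr * prevalence / p_call
    p_call = (1 - tpr) * prevalence + (1 - fpr) * (1 - prevalence)
    return (1 - tpr) * prevalence / p_call

# Illustrative screening-like numbers (not the study's): TPR 90%, FPR 10%,
# prevalence 0.5%. A positive CAD call then implies only about a 4.3%
# cancer probability -- far below the estimates the paper reports.
p_pos = posterior_cancer_prob(True, tpr=0.90, fpr=0.10, prevalence=0.005)
```

With a low-prevalence population, the false positives among the many healthy patients swamp the true positives, which is exactly the intuition the subjects in both groups failed to apply.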
Affiliation(s)
- Fallon Branch
- Department of Neuroscience and Regenerative Medicine, Medical College of Georgia, Augusta University, DNRM CA-2003, 1469 Laney Walker Blvd, Augusta, GA 30912-2697, USA
- K Matthew Williams
- Department of Psychological Sciences, Augusta University, Augusta, GA, USA
- Isabella Noel Santana
- Department of Neuroscience and Regenerative Medicine, Medical College of Georgia, Augusta University, DNRM CA-2003, 1469 Laney Walker Blvd, Augusta, GA 30912-2697, USA
- Jay Hegdé
- Department of Neuroscience and Regenerative Medicine, Medical College of Georgia, Augusta University, DNRM CA-2003, 1469 Laney Walker Blvd, Augusta, GA 30912-2697, USA; Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA, USA; James and Jean Culver Vision Discovery Institute, Augusta University, Augusta, GA, USA; The Graduate School, Augusta University, Augusta, GA, USA
2. Liang F, Wang S, Zhang K, Liu TJ, Li JN. Development of artificial intelligence technology in diagnosis, treatment, and prognosis of colorectal cancer. World J Gastrointest Oncol 2022; 14:124-152. PMID: 35116107; PMCID: PMC8790413; DOI: 10.4251/wjgo.v14.i1.124.
Abstract
Artificial intelligence (AI) technology has advanced by leaps and bounds since its invention. It can be subdivided into many technologies, such as machine learning and deep learning, whose scope of application and prospects differ considerably. Currently, AI technologies play a pivotal role in the highly complex and wide-ranging medical field, including medical image recognition, biotechnology, auxiliary diagnosis, drug research and development, and nutrition. Colorectal cancer (CRC) is a common gastrointestinal cancer with high mortality, posing a serious threat to human health. Many CRCs are caused by the malignant transformation of colorectal polyps, so early diagnosis and treatment are crucial to CRC prognosis. The methods of diagnosing CRC are divided into imaging diagnosis, endoscopy, and pathology diagnosis; treatment methods are divided into endoscopic treatment, surgical treatment, and drug treatment. AI technology remains in the "weak" (narrow) AI era and lacks communication capabilities, so current AI is used mainly for image recognition and auxiliary analysis rather than in-depth communication with patients. This article reviews the application of AI in the diagnosis, treatment, and prognosis of CRC and discusses prospects for its broader application.
Affiliation(s)
- Feng Liang
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Shu Wang
- Department of Radiotherapy, Jilin University Second Hospital, Changchun 130041, Jilin Province, China
- Kai Zhang
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Tong-Jun Liu
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Jian-Nan Li
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
3
|
Vogrin M, Trojner T, Kelc R. Artificial intelligence in musculoskeletal oncological radiology. Radiol Oncol 2020; 55:1-6. [PMID: 33885240 PMCID: PMC7877260 DOI: 10.2478/raon-2020-0068] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 09/29/2020] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Due to the rarity of primary bone tumors, precise radiologic diagnosis often requires an experienced musculoskeletal radiologist. In order to make the diagnosis more precise and to prevent the overlooking of potentially dangerous conditions, artificial intelligence has been continuously incorporated into medical practice in recent decades. This paper reviews some of the most promising systems developed, including those for diagnosis of primary and secondary bone tumors, breast, lung and colon neoplasms. CONCLUSIONS Although there is still a shortage of long-term studies confirming its benefits, there is probably a considerable potential for further development of computer-based expert systems aiming at a more efficient diagnosis of bone and soft tissue tumors.
Affiliation(s)
- Matjaz Vogrin
- Department of Orthopaedic Surgery, University Medical Center Maribor, Maribor, Slovenia
- Faculty of Medicine, University of Maribor, Maribor, Slovenia
- Teodor Trojner
- Department of Orthopaedic Surgery, University Medical Center Maribor, Maribor, Slovenia
- Robi Kelc
- Department of Orthopaedic Surgery, University Medical Center Maribor, Maribor, Slovenia
- Faculty of Medicine, University of Maribor, Maribor, Slovenia
4. A generative flow-based model for volumetric data augmentation in 3D deep learning for computed tomographic colonography. Int J Comput Assist Radiol Surg 2020; 16:81-89. PMID: 33150471; PMCID: PMC7822776; DOI: 10.1007/s11548-020-02275-z.
Abstract
Purpose Deep learning can improve the performance of computer-aided detection (CADe) in various medical imaging tasks. In computed tomographic (CT) colonography, however, the performance is limited by the relatively small size and limited variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography. Methods We developed a 3D convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that have characteristics similar to those of the VOIs of its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet. Results The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different in distinguishing between real polyps and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods. Conclusion The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
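The defining property of a flow model such as Glow is that every layer is exactly invertible with a tractable Jacobian, which is what lets it both generate samples and evaluate likelihoods. A scalar toy version of the affine coupling step, the building block of Glow, might look like this (the lambda "networks" are placeholders standing in for the learned conditioner networks, not anything from the paper):

```python
import math

def coupling_forward(x1, x2, scale_net, shift_net):
    """Affine coupling step (Glow-style), scalar toy version: one half of
    the input parameterizes an invertible affine map of the other half."""
    s, t = scale_net(x1), shift_net(x1)
    y2 = x2 * math.exp(s) + t
    return x1, y2, s          # s is log|det J| of this step

def coupling_inverse(y1, y2, scale_net, shift_net):
    """Exact inverse: recompute s, t from the untouched half and undo."""
    s, t = scale_net(y1), shift_net(y1)
    return y1, (y2 - t) * math.exp(-s)

# Toy stand-ins for the learned conditioner networks.
scale = lambda a: 0.5 * a
shift = lambda a: a + 1.0

x1, y2, logdet = coupling_forward(2.0, 3.0, scale, shift)
r1, r2 = coupling_inverse(x1, y2, scale, shift)   # recovers (2.0, 3.0)
```

Because the conditioner only ever sees the untouched half, inversion is exact regardless of how complex the networks are, which is what makes training by maximum likelihood feasible.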
5. Artificial intelligence: Who is responsible for the diagnosis? Radiol Med 2020; 125:517-521. PMID: 32006241; DOI: 10.1007/s11547-020-01135-9.
Abstract
The aim of the paper is to find an answer to the question "Who or what is responsible for the benefits and harms of using artificial intelligence in radiology?" When human beings make decisions, the action itself is normally connected with a direct responsibility by the agent who generated the action: you have an effect on others, and therefore you are responsible for what you do and what you decide to do. But if the decision is made not by you but by an artificial intelligence system, it becomes difficult, yet important, to ascribe responsibility when something goes wrong. The manuscript addresses the following statements: (1) using AI, the radiologist is responsible for the diagnosis; (2) radiologists must be trained in the use of AI, since they are responsible for the actions of machines; (3) radiologists involved in R&D have the responsibility to guide adherence to the rules for trustworthy AI; (4) radiologist responsibility is at risk of validating the unknown (black box); (5) radiologist decisions may be biased by AI automation; (6) there is a risk of a paradox: increasing the number of AI tools to compensate for the lack of radiologists; (7) informed consent and quality measures are needed. Future legislation must outline the contours of the professional's responsibility with respect to services performed autonomously by AI, balancing the professional's ability to influence and therefore correct the machine against the sphere of autonomy that technological evolution would like to grant to robots.
6. Yanase J, Triantaphyllou E. The seven key challenges for the future of computer-aided diagnosis in medicine. Int J Med Inform 2019; 129:413-422. DOI: 10.1016/j.ijmedinf.2019.06.017.
7. What the radiologist should know about artificial intelligence - an ESR white paper. Insights Imaging 2019; 10:44. PMID: 30949865; PMCID: PMC6449411; DOI: 10.1186/s13244-019-0738-2.
Abstract
This paper aims to provide a review of the basis for application of AI in radiology, to discuss the immediate ethical and professional impact in radiology, and to consider possible future evolution. Even if AI does add significant value to image interpretation, there are implications outside the traditional radiology activities of lesion detection and characterisation. In radiomics, AI can foster the analysis of the features and help in the correlation with other omics data. Imaging biobanks would become a necessary infrastructure to organise and share the image data from which AI models can be trained. AI can be used as an optimising tool to assist the technologist and radiologist in choosing a personalised patient protocol, tracking the patient's dose parameters, and providing an estimate of the radiation risks. AI can also aid the reporting workflow and help link words, images, and quantitative data. Finally, AI coupled with clinical decision support (CDS) can improve the decision process and thereby optimise clinical and radiological workflow.
8. How are we going to train a generation of radiologists (and urologists) to read prostate MRI? Curr Opin Urol 2016; 25:522-35. PMID: 26375060; DOI: 10.1097/mou.0000000000000217.
Abstract
PURPOSE OF REVIEW Multiparametric MRI has gained tremendous importance in daily practice for patients at risk of or diagnosed with prostate cancer. Interpretation of multiparametric MRI is a complex task, often considered the preserve of experienced radiologists. The purpose of this review is to analyze the fundamentals of multiparametric-MRI interpretation and to describe how multiparametric-MRI training could be organized. RECENT FINDINGS Recently, professional guidelines have been published to provide technical and interpretation frameworks and harmonize multiparametric-MRI practice, but the question of physician training in prostate multiparametric-MRI reading remains open. What kind of education, practice, and training makes a radiologist able to reliably interpret a prostate multiparametric-MRI? How can findings be reported so that they are easily understood? How much experience is needed? How can we train urologists and other physicians to review the examinations they request? Is double-reading necessary? SUMMARY An institution-based competency certification process for prostate multiparametric-MRI interpretation may encourage nonspecialized radiologists to qualify for prostate imaging in a standardized and reproducible way, exactly as urologists need it.
9. Motai Y, Ma D, Docef A, Yoshida H. Smart Colonography for Distributed Medical Databases with Group Kernel Feature Analysis. ACM Trans Intell Syst Technol 2015. DOI: 10.1145/2668136.
Abstract
Computer-aided detection (CAD) of polyps in computed tomographic (CT) colonography is currently very limited, since the single database at each hospital or institution does not provide sufficient data for training the CAD system's classification algorithm. To address this limitation, we propose to use multiple institution-wide databases (e.g., big-data studies) networked through distributed-computing technologies, an approach we call smart colonography. Smart colonography may be built from a larger colonography database assembled through the participation of multiple institutions via distributed computing. The motivation is to create a distributed database that increases the detection accuracy of CAD diagnosis by covering many true-positive cases. Colonography data analysis is made mutually accessible to increase the availability of resources, so that the knowledge of radiologists is enhanced. In this article, we propose a scalable and efficient algorithm called Group Kernel Feature Analysis (GKFA), which can be applied to multiple cancer databases so that the overall performance of CAD is improved. The key idea behind GKFA is to allow the feature space to be updated as training proceeds, with more data being fed from other institutions into the algorithm. Experimental results show that GKFA achieves very good classification accuracy.
Affiliation(s)
- Alen Docef
- Virginia Commonwealth University, VA, USA
- Hiroyuki Yoshida
- Massachusetts General Hospital and Harvard Medical School, MA, USA
10. Green D, Maximin S. Autoflight in Ultrasonography: Practice Corner. Radiographics 2015; 35:1314-5. DOI: 10.1148/rg.2015140329.
11. Prostate cancer identification: quantitative analysis of T2-weighted MR images based on a back propagation artificial neural network model. Sci China Life Sci 2015; 58:666-73. PMID: 26025283; DOI: 10.1007/s11427-015-4876-6.
Abstract
Computer-aided diagnosis (CAD) systems have been proposed to assist radiologists in making diagnostic decisions by providing helpful information. From T2-weighted images (T2WI), one of the most important sequences in prostate magnetic resonance imaging (MRI), we extracted 12 quantitative image features and evaluated their diagnostic performance. The importance of each feature in cancer identification was compared in the peripheral zone (PZ) and the central gland (CG), respectively, and the performance of a CAD system supported by an artificial neural network was tested. With computer-aided analysis of T2-weighted images, many characteristic features with different diagnostic capabilities can be extracted. Most of the features (10/12) differed significantly (P<0.01) between PCa and non-PCa in the PZ, whereas only five features (sum average, minimum value, standard deviation, 10th percentile, and entropy) differed significantly in the CG. CAD prediction using features from T2WI can reach high accuracy and specificity while maintaining acceptable sensitivity. The outcome is convincing and helpful in medical diagnosis.
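The per-feature significance tests reported above (P<0.01 between PCa and non-PCa) are standard two-sample comparisons. A sketch using Welch's unequal-variance t statistic on a hypothetical texture feature (the values below are invented for illustration, not the study's data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for comparing
    one image feature (e.g. entropy) between two groups of ROIs."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

pca    = [4.1, 4.3, 3.9, 4.5, 4.2, 4.4]   # hypothetical entropy, cancerous ROIs
benign = [3.2, 3.5, 3.1, 3.4, 3.3, 3.0]   # hypothetical entropy, benign ROIs
t, df = welch_t(pca, benign)   # |t| well above the ~3.2 critical value for p < 0.01
```

A feature "differs significantly" in the paper's sense when |t| exceeds the critical value of the t distribution at the computed degrees of freedom.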
12
|
Nasirudin RA, Tachibana R, Näppi JJ, Mei K, Kopp FK, Rummeny EJ, Yoshida H, Noël PB. A comparison of material decomposition techniques for dual-energy CT colonography. ACTA ACUST UNITED AC 2015; 9412. [PMID: 25918480 DOI: 10.1117/12.2081982] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
In recent years, dual-energy computed tomography (DECT) has been widely used in the clinical routine because the additional spectral information improves diagnostic capability. One promising application for DECT is CT colonography (CTC) in combination with computer-aided diagnosis (CAD) for the detection of lesions and polyps. While CAD has demonstrated that it is able to detect small polyps, its performance is highly dependent on the quality of the input data: the presence of artifacts such as beam hardening and noise in ultra-low-dose CTC may severely degrade the detection performance for small polyps. In this work, we investigate and compare virtual monochromatic images generated by image-based decomposition and by projection-based decomposition with respect to CAD performance. In the image-based method, reconstructed images are first decomposed into water and iodine before the virtual monochromatic images are calculated. In the projection-based method, by contrast, the projection data are decomposed first, before calculation of virtual monochromatic projections and reconstruction. Both material decomposition methods are evaluated with regard to the accuracy of iodine detection, and the performance of the virtual monochromatic images is assessed qualitatively and quantitatively. Preliminary results show that the projection-based method not only detects iodine more accurately but also delivers virtual monochromatic images with reduced beam-hardening artifacts in comparison with the image-based method. With regard to CAD performance, the projection-based method yields improved polyp detection compared with the image-based method.
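The image-based branch described above amounts to a per-voxel 2x2 linear solve: the measured attenuations at the two tube energies are expressed in a water/iodine basis, and the basis coefficients are then re-projected to any chosen monochromatic energy. A sketch with hypothetical, uncalibrated basis attenuation values (placeholders, not real coefficients):

```python
def decompose_voxel(mu_low, mu_high, basis):
    """Two-material decomposition of one voxel: solve
    mu(E) = c_w * mu_water(E) + c_i * mu_iodine(E) at both energies."""
    (aw, ai), (bw, bi) = basis          # rows: (mu_water, mu_iodine) at E_low, E_high
    det = aw * bi - ai * bw
    c_w = (mu_low * bi - mu_high * ai) / det
    c_i = (aw * mu_high - bw * mu_low) / det
    return c_w, c_i

def virtual_mono(c_w, c_i, mu_w_e0, mu_i_e0):
    """Synthesize the voxel's attenuation at a chosen monochromatic energy."""
    return c_w * mu_w_e0 + c_i * mu_i_e0

# Hypothetical basis attenuations at the low/high spectra.
basis = [(0.20, 1.50), (0.17, 0.60)]
c_w, c_i = decompose_voxel(0.35, 0.23, basis)      # -> water 1.0, iodine 0.1
mono = virtual_mono(c_w, c_i, 0.19, 1.00)          # attenuation at the target energy
```

The projection-based method applies the same decomposition in the projection domain before reconstruction, which is why it can suppress beam hardening that the image-based method inherits from the reconstructed images.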
Affiliation(s)
- Radin A Nasirudin
- Department of Diagnostic and Interventional Radiology, Technische Universität München, Munich, Germany
- Rie Tachibana
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Janne J Näppi
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kai Mei
- Department of Diagnostic and Interventional Radiology, Technische Universität München, Munich, Germany
- Felix K Kopp
- Department of Diagnostic and Interventional Radiology, Technische Universität München, Munich, Germany
- Ernst J Rummeny
- Department of Diagnostic and Interventional Radiology, Technische Universität München, Munich, Germany
- Hiroyuki Yoshida
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Peter B Noël
- Department of Diagnostic and Interventional Radiology, Technische Universität München, Munich, Germany
13
|
Tachibana R, Näppi JJ, Yoshida H. Application of Pseudo-enhancement Correction to Virtual Monochromatic CT Colonography. ABDOMINAL IMAGING : COMPUTATIONAL AND CLINICAL APPLICATIONS : 6TH INTERNATIONAL WORKSHOP, ABDI 2014, HELD IN CONJUNCTION WITH MICCAI 2014, CAMBRIDGE, MA, USA, SEPTEMBER 14, 2014. ABDI (WORKSHOP) (6TH : 2014 : CAMBRIDGE, MASS.) 2014; 8676:169-178. [PMID: 26236781 DOI: 10.1007/978-3-319-13692-9_16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
In CT colonography, orally administered positive-contrast fecal-tagging agents are used for differentiating residual fluid and feces from true lesions. However, the presence of high-density tagging agent in the colon can introduce erroneous artifacts, such as local pseudo-enhancement and beam-hardening, on the reconstructed CT images, thereby complicating reliable detection of soft-tissue lesions. In dual-energy CT colonography, such image artifacts can be reduced by the calculation of virtual monochromatic CT images, which provide more accurate quantitative attenuation measurements than conventional single-energy CT colonography. In practice, however, virtual monochromatic images may still contain some pseudo-enhancement artifacts, and efforts to minimize radiation dose may enhance such artifacts. In this study, we evaluated the effect of image-based pseudo-enhancement post-correction on virtual monochromatic images in standard-dose and low-dose dual-energy CT colonography. The mean CT values of the virtual monochromatic standard-dose CT images of 51 polyps and those of the virtual monochromatic low-dose CT images of 20 polyps were measured without and with the pseudo-enhancement correction. Statistically significant differences were observed between uncorrected and pseudo-enhancement-corrected images of polyps covered by fecal tagging in standard-dose CT (p < 0.001) and in low-dose CT (p < 0.05). The results indicate that image-based pseudo-enhancement post-correction can be useful for optimizing the performance of image-processing applications in virtual monochromatic CT colonography.
14. CT colonography: effect of computer-aided detection of colonic polyps as a second and concurrent reader for general radiologists with moderate experience in CT colonography. Eur Radiol 2014; 24:1466-76. DOI: 10.1007/s00330-014-3158-1.
15. Lefere P, Silva C, Gryspeerdt S, Rodrigues A, Vasconcelos R, Teixeira R, de Gouveia FH. Teleradiology-based CT colonography to screen a population group of a remote island, at average risk for colorectal cancer. Eur J Radiol 2013; 82:e262-7. PMID: 23473734; DOI: 10.1016/j.ejrad.2013.02.010.
Abstract
PURPOSE To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. MATERIALS AND METHODS A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using teleradiology. Per-patient sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95% CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. RESULTS 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV, and NPV for adenomas ≥6 mm of 98.11% (88.6-99.9% 95% CI), 90.97% (87.8-93.4% 95% CI), 56.52% (45.8-66.7% 95% CI), and 99.75% (98.4-99.9% 95% CI), respectively. For advanced neoplasia ≥6 mm, per-patient sensitivity, specificity, PPV, and NPV were 100% (86.7-100% 95% CI), 87.07% (83.6-89.9% 95% CI), 34.78% (25.3-45.5% 95% CI), and 100% (98.8-100% 95% CI), respectively. CONCLUSION In this prospective trial, teleradiology-based CT colonography was accurate for screening a patient cohort of a remote island at average risk for colorectal cancer.
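The per-patient figures above all come from a 2x2 screening table. A sketch that reproduces the adenoma ≥6 mm point estimates from hypothetical counts chosen to match them (tp=52, fn=1, fp=40, tn=403 are our reverse-engineered illustration, not the paper's published table; the paper's CI method is not stated here, so the Wilson interval is an illustrative choice):

```python
from math import sqrt

def per_patient_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k successes out of n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical counts chosen so the point estimates match the reported
# adenoma >=6 mm values (98.11% / 90.97% / 56.52% / 99.75%).
m = per_patient_metrics(tp=52, fp=40, fn=1, tn=403)
sens_lo, sens_hi = wilson_ci(52, 53)   # interval for the sensitivity estimate
```

Note how wide the sensitivity interval is despite the high point estimate: with only ~53 adenoma-positive patients, a single false negative moves the estimate by almost two percentage points.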
Affiliation(s)
- Philippe Lefere
- VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede, Belgium