1. Shia WC, Kuo YH, Hsu FR, Lin J, Wu WP, Wu HK, Yeh WC, Chen DR. Evaluating the Margins of Breast Cancer Tumors by Using Digital Breast Tomosynthesis with Deep Learning: A Preliminary Assessment. Diagnostics (Basel) 2024; 14:1032. PMID: 38786329; PMCID: PMC11119441; DOI: 10.3390/diagnostics14101032.
Abstract
BACKGROUND Tumor margin assessment is extremely important to the success of breast cancer surgery and to whether the patient must undergo a second operation. However, surgical margin assessment is a time-consuming task that requires pathology-related skills and equipment, and often cannot be provided in a timely manner. To address this challenge, digital breast tomosynthesis was used to generate detailed cross-sectional images of the breast tissue, and deep learning algorithms for image segmentation were integrated to assess tumor margins during surgery. METHODS This study used post-operative tissue samples from 46 patients who underwent breast-conserving treatment and generated image sets with digital breast tomosynthesis for the training and evaluation of deep learning models. RESULTS The deep learning algorithms effectively identified the tumor area. On the training dataset they achieved a mean intersection over union (MIoU) of 0.91, global accuracy of 99%, weighted IoU of 44%, precision of 98%, recall of 83%, F1 score of 89%, and Dice coefficient of 93%; on the testing dataset, MIoU was 83%, global accuracy 97%, weighted IoU 38%, precision 87%, recall 69%, F1 score 76%, and Dice coefficient 86%. CONCLUSIONS This initial evaluation suggests that the deep learning-based image segmentation method is highly accurate in measuring breast tumor margins. It can provide margin-related information during surgery, and with different datasets the method can also be applied to surgical margin assessment for other types of tumors.
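As a point of reference for the scores quoted above, the overlap metrics (IoU, Dice, precision, recall, F1) can all be derived from pixel-level confusion counts between a predicted mask and a ground-truth mask. The sketch below is a minimal pure-Python illustration, not the study's code; the masks are invented. Note that for a single binary mask, Dice and F1 are algebraically identical.

```python
# Hedged sketch: pixel-level overlap metrics for binary segmentation masks,
# given as flat lists of 0/1. Illustrative only; masks are made up.

def overlap_metrics(pred, truth):
    """Return IoU, Dice, precision, recall, and F1 for two equally
    sized binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    union = tp + fp + fn
    iou = tp / union if union else 1.0
    dice = 2 * tp / (2 * tp + fp + fn) if union else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"iou": iou, "dice": dice, "precision": precision,
            "recall": recall, "f1": f1}

pred = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
m = overlap_metrics(pred, truth)  # IoU = 0.5, Dice = 2/3
```

On a single mask Dice equals F1; the two can differ only when averaged over different units (e.g., per image versus per class), which is why separately reported F1 and Dice values need not match.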
Affiliation(s)
- Wei-Chung Shia
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- School of Big Data and Artificial Intelligence, Fujian Polytechnic Normal University, Fuqing 350300, China
- Yu-Hsun Kuo
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Joseph Lin
- Cancer Research Center, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- Department of Animal Science and Biotechnology, Tunghai University, Taichung 407, Taiwan
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan
- Wen-Pei Wu
- Department of Medical Image, Changhua Christian Hospital, Changhua 500, Taiwan
- Hwa-Koon Wu
- Department of Medical Image, Changhua Christian Hospital, Changhua 500, Taiwan
- Wei-Cheng Yeh
- Department of Medical Imaging, Chang Bing Show Chwan Memorial Hospital, Changhua 505, Taiwan
- Dar-Ren Chen
- Cancer Research Center, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan
2. Liu HC, Lin MH, Chang WC, Zeng RC, Wang YM, Sun CW. Rapid On-Site AI-Assisted Grading for Lung Surgery Based on Optical Coherence Tomography. Cancers (Basel) 2023; 15:5388. PMID: 38001648; PMCID: PMC10670228; DOI: 10.3390/cancers15225388.
Abstract
The extent of resection traditionally depends on the microscopic invasiveness seen in frozen sections (FSs) and is crucial in surgery for early lung cancer with preoperatively unknown histology. While previous research has shown the value of optical coherence tomography (OCT) for instant lung cancer diagnosis, tumor grading through OCT remains challenging. This study therefore proposes an interactive human-machine interface (HMI) that integrates a mobile OCT system, deep learning algorithms, and attention mechanisms. The system is designed to mark the lesion's location on the image and perform tumor grading in real time, potentially facilitating clinical decision making. Twelve patients with a preoperatively unknown tumor but a final diagnosis of adenocarcinoma underwent thoracoscopic resection, and the artificial intelligence (AI)-based system described above was used to measure fresh specimens. Results were compared with FSs benchmarked against permanent pathologic reports. Current results show better differentiating power among minimally invasive adenocarcinoma (MIA), invasive adenocarcinoma (IA), and normal tissue, with an overall accuracy of 84.9%, compared with 20% for FSs. Additionally, the sensitivity and specificity were 89% and 82.7% for MIA and 94% and 80.6% for IA, respectively. The results suggest that this AI system can potentially produce rapid and efficient diagnoses and ultimately improve patient outcomes.
Affiliation(s)
- Hung-Chang Liu
- Section of Thoracic Surgery, Mackay Memorial Hospital, Taipei City 10449, Taiwan
- Intensive Care Unit, Mackay Memorial Hospital, Taipei City 10449, Taiwan
- Department of Medicine, Mackay Medical College, New Taipei City 25245, Taiwan
- Department of Optometry, Mackay Junior College of Medicine, Nursing, and Management, Taipei City 11260, Taiwan
- Miao-Hui Lin
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
- Wei-Chin Chang
- Department of Pathology, Mackay Memorial Hospital, New Taipei City 25160, Taiwan
- Department of Pathology, Taipei Medical University Hospital, Taipei City 11030, Taiwan
- Department of Pathology, School of Medicine, College of Medicine, Taipei Medical University, Taipei City 11030, Taiwan
- Rui-Cheng Zeng
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
- Yi-Min Wang
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
- Chia-Wei Sun
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
- Institute of Biomedical Engineering, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
- Medical Device Innovation and Translation Center, National Yang Ming Chiao Tung University, Taipei City 11259, Taiwan
3. Wako BD, Dese K, Ulfata RE, Nigatu TA, Turunbedu SK, Kwa T. Squamous Cell Carcinoma of Skin Cancer Margin Classification From Digital Histopathology Images Using Deep Learning. Cancer Control 2022; 29:10732748221132528. PMID: 36194624; PMCID: PMC9536105; DOI: 10.1177/10732748221132528.
Abstract
Objectives Nowadays, squamous cell carcinoma (SCC) margin assessment is performed by examining histopathology images and inspecting whole slide images (WSI) with a conventional microscope. This is time-consuming, tedious, and dependent on the expert's experience, which may lead to misdiagnosis and mistaken treatment plans. This study aims to develop a system for the automatic diagnosis of skin cancer margins for squamous cell carcinoma from histopathology microscopic images by applying deep learning techniques. Methods The system was trained, validated, and tested using histopathology images of SCC locally acquired from the Jimma Medical Center Pathology Department from seven different skin sites using an Olympus digital microscope. All images were preprocessed and trained with pre-trained transfer-learning models by fine-tuning the hyper-parameters of the selected models. Results The best overall training accuracies of the models were 95.3%, 97.1%, 89.8%, and 89.9% for EfficientNetB0, MobileNetV2, ResNet50, and VGG16, respectively. The best validation accuracies were 94.7%, 91.8%, 87.8%, and 86.7%, and the best testing accuracies at the same epoch were 95.2%, 91.5%, 87%, and 85.5%, respectively. Among these models, EfficientNetB0 showed the best average training and testing accuracy. Conclusions The system assists the pathologist during SCC margin assessment by decreasing the diagnosis time from an average of 25 minutes to less than a minute.
Affiliation(s)
- Beshatu Debela Wako
- School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
- Center of Biomedical Engineering, Jimma University Medical Center, Jimma, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
- Artificial Intelligence and Biomedical Imaging Research Lab, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
- Roba Elala Ulfata
- Department of Pathology, Jimma Institute of Health, Jimma University, Jimma, Ethiopia
- Department of Pathology, Adama General Hospital and Medical College, Adama, Ethiopia
- Tilahun Alemayehu Nigatu
- Department of Biomedical Sciences (Anatomy Course Unit), Jimma Institute of Health, Jimma University, Jimma, Ethiopia
- Timothy Kwa
- School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
- Department of Biomedical Engineering, University of California, 451 Health Sciences, Davis, CA, USA
- Medtronic MiniMed, 18000 Devonshire St., Northridge, Los Angeles, CA, USA
4. Binary dose level classification of tumour microvascular response to radiotherapy using artificial intelligence analysis of optical coherence tomography images. Sci Rep 2022; 12:13995. PMID: 35978040; PMCID: PMC9385745; DOI: 10.1038/s41598-022-18393-4.
Abstract
The dominant consequence of irradiating biological systems is cellular damage, yet microvascular damage assumes an increasingly important role as radiation dose levels increase. This is becoming more relevant in radiation medicine with its pivot towards a higher-dose-per-fraction, fewer-fractions treatment paradigm (e.g., stereotactic body radiotherapy (SBRT)). We have thus developed a 3D preclinical imaging platform based on speckle-variance optical coherence tomography (svOCT) for longitudinal monitoring of tumour microvascular radiation responses in vivo. Here we present an artificial intelligence (AI) approach to analyze the resultant microvascular data. In this initial study, we show that AI can successfully classify SBRT-relevant clinical radiation dose levels at multiple timepoints (t = 2–4 weeks) following irradiation (10 Gy and 30 Gy cohorts) based on induced changes in the detected microvascular networks. The practicality of the obtained results, the challenges associated with a modest number of animals, their successful mitigation via augmented-data approaches, and the advantages of 3D deep learning methodologies are discussed. Extension of this encouraging initial study to longitudinal AI-based time-series analysis for treatment outcome prediction at finer dose gradations is envisioned.
5.
6. Foo KY, Newman K, Fang Q, Gong P, Ismail HM, Lakhiani DD, Zilkens R, Dessauvagie BF, Latham B, Saunders CM, Chin L, Kennedy BF. Multi-class classification of breast tissue using optical coherence tomography and attenuation imaging combined via deep learning. Biomed Opt Express 2022; 13:3380-3400. PMID: 35781967; PMCID: PMC9208580; DOI: 10.1364/boe.455110.
Abstract
We demonstrate a convolutional neural network (CNN) for multi-class breast tissue classification as adipose tissue, benign dense tissue, or malignant tissue, using multi-channel optical coherence tomography (OCT) and attenuation images, together with a novel Matthews correlation coefficient (MCC)-based loss function that correlates more strongly with performance metrics than the commonly used cross-entropy loss. We hypothesized that using multi-channel images would increase tumor detection performance compared with using OCT alone. A total of 5,804 images from 29 patients were used to fine-tune a pre-trained ResNet-18 network. Adding attenuation images to OCT images yields statistically significant improvements in several performance metrics, including benign dense tissue sensitivity (68.0% versus 59.6%), malignant tissue positive predictive value (PPV) (79.4% versus 75.5%), and total accuracy (85.4% versus 83.3%), indicating that the additional contrast from attenuation imaging is most beneficial for distinguishing between benign dense tissue and malignant tissue.
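The MCC-based loss mentioned above builds on the Matthews correlation coefficient. The paper's loss is a differentiable multi-class variant; the binary coefficient sketched below (pure Python, invented data, not the authors' code) illustrates why MCC tracks balanced performance better than accuracy: a classifier that always predicts the majority class can score high accuracy yet an MCC of zero.

```python
# Hedged sketch: binary Matthews correlation coefficient from 0/1 labels.
import math

def mcc(pred, truth):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    with the conventional value 0 when the denominator vanishes."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

perfect = mcc([1, 0, 1, 0], [1, 0, 1, 0])        # 1.0
majority = mcc([0] * 8, [1, 0, 0, 0, 0, 0, 0, 0])  # 0.0, accuracy 7/8
```

On the imbalanced example, accuracy is 87.5% while MCC is 0, which is the behavior a balance-aware loss is meant to penalize.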
Affiliation(s)
- Ken Y. Foo
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Kyle Newman
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Qi Fang
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Peijun Gong
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Hina M. Ismail
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Devina D. Lakhiani
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Renate Zilkens
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA 6009, Australia
- Benjamin F. Dessauvagie
- Division of Pathology and Laboratory Medicine, Medical School, The University of Western Australia, Perth, WA 6009, Australia
- PathWest, Fiona Stanley Hospital, Murdoch, WA 6150, Australia
- Bruce Latham
- PathWest, Fiona Stanley Hospital, Murdoch, WA 6150, Australia
- School of Medicine, The University of Notre Dame, Fremantle, WA 6160, Australia
- Christobel M. Saunders
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA 6009, Australia
- Breast Centre, Fiona Stanley Hospital, Murdoch, WA 6150, Australia
- Breast Clinic, Royal Perth Hospital, Perth, WA 6000, Australia
- Department of Surgery, Melbourne Medical School, The University of Melbourne, Parkville, VIC 3010, Australia
- Lixin Chin
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Brendan F. Kennedy
- BRITElab, Harry Perkins Institute of Medical Research, QEII Medical Centre, Nedlands, and Centre for Medical Research, The University of Western Australia, Perth, WA 6009, Australia
- Department of Electrical, Electronic & Computer Engineering, School of Engineering, The University of Western Australia, Perth, WA 6009, Australia
- Australian Research Council Centre for Personalised Therapeutics Technologies, Perth, WA 6000, Australia
7. Li W, Li X. Development of intraoperative assessment of margins in breast conserving surgery: a narrative review. Gland Surg 2022; 11:258-269. PMID: 35242687; PMCID: PMC8825505; DOI: 10.21037/gs-21-652.
Abstract
OBJECTIVE We intend to provide an informative and up-to-date summary of intraoperative assessment of margins in breast conserving surgery (BCS). Conventional methods as well as cutting-edge technologies are analyzed for their advantages and limitations, in the hope that clinicians can turn to this review for reference. It can also offer guidance to technicians in the future design of intraoperative margin-assessment tools. BACKGROUND Achieving negative margins during BCS is one of the vital factors in preventing local recurrence. Intraoperative margin assessment can ensure negative margins to a large extent and possibly spare patients the anguish of re-intervention. In recent years, innovative methods for margin assessment during BCS have advanced rapidly, yet a summary of these developments has been lacking. METHODS A PubMed search with the keywords "intraoperative margin assessment" and "breast conserving surgery" was conducted. Relevant publications were screened manually by title, abstract, and, where necessary, full text to determine their relevance. Publications on neo-adjuvant therapy and intraoperative radiotherapy were excluded. References from the retrieved articles and other supplementary articles were also examined. CONCLUSIONS Conventional methods for margin assessment yield stable outcomes, but their use is limited by the demands on pathology staff and the trade-off between time and precision. Conventional imaging techniques shift the workload to radiologists while requiring significantly less time, and involving artificial intelligence in image-based assessment is a further improvement. However, conventional imaging is inherently limited in that occult lesions cannot be seen on the image, while visible lesions remain ambiguous and open to interpretation. Unconventional techniques that base their judgment on cellular composition are more reassuring. Nonetheless, unconventional techniques should be subjected to clinical trials before being put into practice, and studies comparing conventional and unconventional methods are also needed to evaluate their relative efficacy.
Affiliation(s)
- Wanheng Li
- First Clinical Medical School, Southern Medical University, Guangzhou, China
- Xiru Li
- Department of General Surgery, The First Medical Center of Chinese PLA General Hospital, Beijing, China
8. Hsiao T, Ho Y, Chen M, Lee S, Sun C. Disease activation maps for subgingival dental calculus identification based on intelligent dental optical coherence tomography. Transl Biophotonics 2021. DOI: 10.1002/tbio.202100001.
Affiliation(s)
- Tien-Yu Hsiao
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City, Taiwan, ROC
- Yi-Ching Ho
- School of Dentistry, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Department of Stomatology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Mei-Ru Chen
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City, Taiwan, ROC
- Shyh-Yuan Lee
- School of Dentistry, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Department of Stomatology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Department of Dentistry, Yangming Branch of Taipei City Hospital, Taipei, Taiwan, ROC
- Chia-Wei Sun
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City, Taiwan, ROC
9. Mojahed D, Ha RS, Chang P, Gan Y, Yao X, Angelini B, Hibshoosh H, Taback B, Hendon CP. Fully Automated Postlumpectomy Breast Margin Assessment Utilizing Convolutional Neural Network Based Optical Coherence Tomography Image Classification Method. Acad Radiol 2020; 27:e81-e86. PMID: 31324579; DOI: 10.1016/j.acra.2019.06.018.
Abstract
BACKGROUND The purpose of this study was to develop a deep learning classification approach to distinguish cancerous from noncancerous regions within optical coherence tomography (OCT) images of breast tissue, for potential intraoperative use in margin assessment. METHODS A custom ultrahigh-resolution OCT (UHR-OCT) system with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm was used in this study. The algorithm used an A-scan-based classification scheme, and the convolutional neural network (CNN) was implemented with an 11-layer architecture consisting of serial 3 × 3 convolution kernels. Four tissue types were classified: adipose, stroma, ductal carcinoma in situ, and invasive ductal carcinoma. RESULTS The binary classification of cancer versus noncancer with the proposed CNN achieved 94% accuracy, 96% sensitivity, and 92% specificity. The mean five-fold validation F1 score was highest for invasive ductal carcinoma (mean ± standard deviation, 0.89 ± 0.09) and adipose (0.79 ± 0.17), followed by stroma (0.74 ± 0.18) and ductal carcinoma in situ (0.65 ± 0.15). CONCLUSION It is feasible to use a CNN-based algorithm to accurately distinguish cancerous regions in OCT images. This fully automated method can overcome limitations of manual interpretation, including interobserver variability and speed of interpretation, and may enable real-time intraoperative margin assessment.
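The abstract specifies an A-scan-based scheme (each depth column of the OCT image is classified independently) but does not state here how per-A-scan outputs are combined into an image-level call. A common, simple option is a voting rule; the sketch below is hypothetical — the function name and both threshold values are invented for illustration, not taken from the study.

```python
# Hedged sketch: aggregating per-A-scan cancer probabilities into a
# B-scan-level flag by thresholded voting. Thresholds are illustrative.

def bscan_call(ascan_probs, prob_threshold=0.5, vote_fraction=0.2):
    """Flag a B-scan as suspicious when at least `vote_fraction` of its
    A-scan (column) cancer probabilities exceed `prob_threshold`."""
    votes = sum(p >= prob_threshold for p in ascan_probs)
    return votes / len(ascan_probs) >= vote_fraction

suspicious = bscan_call([0.9, 0.8, 0.1, 0.2, 0.1])  # 2/5 columns vote
clean = bscan_call([0.1] * 10)                       # no columns vote
```

A low vote fraction trades specificity for sensitivity, which is usually the preferred direction for margin screening, where a false negative is costlier than a false positive.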
10. Unger J, Hebisch C, Phipps JE, Lagarto JL, Kim H, Darrow MA, Bold RJ, Marcu L. Real-time diagnosis and visualization of tumor margins in excised breast specimens using fluorescence lifetime imaging and machine learning. Biomed Opt Express 2020; 11:1216-1230. PMID: 32206404; PMCID: PMC7075618; DOI: 10.1364/boe.381358.
Abstract
Tumor-free surgical margins are critical in breast-conserving surgery. In up to 38% of cases, however, patients undergo a second surgery because malignant cells are found at the margins of the excised resection specimen. Thus, advanced imaging tools are needed to ensure clear margins at the time of surgery. The objective of this study was to evaluate a random forest classifier that uses parameters derived from point-scanning, label-free fluorescence lifetime imaging (FLIm) measurements of breast specimens as a means to diagnose tumor at the resection margins, and to enable an intuitive visualization of the probabilistic classification on the tissue specimen. FLIm data from fresh lumpectomy and mastectomy specimens from 18 patients were used in this study. Supervised training was based on a previously developed registration technique between autofluorescence imaging data and cross-sectional histology slides. A pathologist's histology annotations provided the ground truth distinguishing adipose, fibrous, and tumor tissue. Current results demonstrate the ability of this approach to classify tumor with 89% sensitivity and 93% specificity and to rapidly (~20 frames per second) overlay the probabilistic classification on excised breast specimens using an intuitive color scheme. Furthermore, we show an iterative imaging refinement that allows surgeons to switch between rapid scans at a customized, low spatial resolution to quickly cover the specimen and slower scans with enhanced resolution (400 μm per point measurement) in suspicious regions where more detail is required. In summary, this technique provides high diagnostic prediction accuracy, rapid acquisition, adaptive resolution, nondestructive probing, and facile interpretation of images, thus holding potential for clinical breast imaging based on label-free FLIm.
Affiliation(s)
- Jakob Unger
- Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Corresponding authors
- Christoph Hebisch
- Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Jennifer E. Phipps
- Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- João L. Lagarto
- Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Hanna Kim
- Department of Otolaryngology, University of California Davis, CA 95817, USA
- Morgan A. Darrow
- Department of Pathology and Laboratory Medicine, University of California Davis, CA 95817, USA
- Richard J. Bold
- Department of Surgery, University of California Davis, CA 95817, USA
- Laura Marcu
- Department of Biomedical Engineering, University of California Davis, CA 95616, USA
- Corresponding authors
11. Singla N, Dubey K, Srivastava V. Automated assessment of breast cancer margin in optical coherence tomography images via pretrained convolutional neural network. J Biophotonics 2019; 12:e201800255. PMID: 30318761; DOI: 10.1002/jbio.201800255.
Abstract
The benchmark method for the evaluation of breast cancers involves microscopic examination of a hematoxylin and eosin (H&E)-stained tissue biopsy. Re-surgery is required in 20% to 30% of cases because of incomplete excision of malignant tissue. Therefore, a more accurate method of detecting the cancer margin is required to avoid the risk of recurrence. In recent years, convolutional neural networks (CNNs) have achieved excellent performance in medical image diagnosis; they automatically extract features from images and classify them. In the proposed study, we apply a pretrained Inception-v3 CNN with reverse active learning to the classification of healthy and malignant breast tissue using optical coherence tomography (OCT) images. The proposed method attained a sensitivity, specificity, and accuracy of 90.2%, 91.7%, and 90%, respectively, on testing datasets collected from 48 patients (22 normal fibro-adipose tissues and 26 invasive ductal carcinoma cancerous tissues). The trained network was then used for breast cancer margin assessment to predict tumors with negative margins, and the network output was correlated with the corresponding histology images. Our results lay the foundation for using the proposed method to perform automatic, real-time intraoperative identification of breast cancer margins and to guide core needle biopsies.
Affiliation(s)
- Neeru Singla
- Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Kavita Dubey
- Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Vishal Srivastava
- Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, California
12. Chang HY, Jung CK, Woo JI, Lee S, Cho J, Kim SW, Kwak TY. Artificial Intelligence in Pathology. J Pathol Transl Med 2019; 53:1-12. PMID: 30599506; PMCID: PMC6344799; DOI: 10.4132/jptm.2018.12.16.
Abstract
As in other domains, artificial intelligence is becoming increasingly important in medicine. In particular, deep learning-based pattern recognition methods can advance the field of pathology by incorporating clinical, radiologic, and genomic data to accurately diagnose diseases and predict patient prognoses. In this review, we present an overview of artificial intelligence, the brief history of artificial intelligence in the medical domain, recent advances in artificial intelligence applied to pathology, and future prospects of pathology driven by artificial intelligence.
Affiliation(s)
- Chan Kwon Jung
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Korea