1
Wang YL, Gao S, Xiao Q, Li C, Grzegorzek M, Zhang YY, Li XH, Kang Y, Liu FH, Huang DH, Gong TT, Wu QJ. Role of artificial intelligence in digital pathology for gynecological cancers. Comput Struct Biotechnol J 2024; 24:205-212. [PMID: 38510535 PMCID: PMC10951449 DOI: 10.1016/j.csbj.2024.03.007]
Abstract
The diagnosis of cancer is typically based on histopathological sections or biopsies on glass slides. Artificial intelligence (AI) approaches have greatly enhanced our ability to extract quantitative information from digital histopathology images amid the rapid growth of oncology data. Gynecological cancers are major diseases affecting women's health worldwide. They are characterized by high mortality and poor prognosis, underscoring the critical importance of early detection, treatment, and identification of prognostic factors. This review highlights the various clinical applications of AI in gynecological cancers using digitized histopathology slides. In particular, deep learning models have shown promise in accurate diagnosis, classification of histopathological subtypes, and prediction of treatment response and prognosis. Furthermore, integration with transcriptomics, proteomics, and other multi-omics techniques can provide valuable insights into the molecular features of these diseases. Despite the considerable potential of AI, substantial challenges remain: further improvements in data acquisition and model optimization are required, and broader clinical applications, such as biomarker discovery, remain to be explored.
Affiliation(s)
- Ya-Li Wang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Information Center, The Fourth Affiliated Hospital of China Medical University, Shenyang, China
- Song Gao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qian Xiao
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
- Ying-Ying Zhang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Xiao-Han Li
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Ye Kang
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Fang-Hua Liu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Dong-Hui Huang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Ting-Ting Gong
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qi-Jun Wu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- NHC Key Laboratory of Advanced Reproductive Medicine and Fertility (China Medical University), National Health Commission, Shenyang, China
2
Wang Z, Zhao D, Heidari AA, Chen Y, Chen H, Liang G. Improved Latin hypercube sampling initialization-based whale optimization algorithm for COVID-19 X-ray multi-threshold image segmentation. Sci Rep 2024; 14:13239. [PMID: 38853172 PMCID: PMC11163015 DOI: 10.1038/s41598-024-63739-9]
Abstract
Image segmentation techniques play a vital role in aiding COVID-19 diagnosis. Multi-threshold image segmentation methods are favored for their computational simplicity and operational efficiency. Existing threshold selection techniques in multi-threshold image segmentation, such as Kapur's entropy method based on exhaustive enumeration, often hamper efficiency and accuracy. The whale optimization algorithm (WOA) has shown promise in addressing this challenge, but issues persist, including poor stability and low efficiency and accuracy in COVID-19 threshold image segmentation. To tackle these issues, we introduce a Latin hypercube sampling initialization-based multi-strategy enhanced WOA (CAGWOA). It incorporates a COS sampling initialization strategy (COSI), an adaptive global search approach (GS), and an all-dimensional neighborhood mechanism (ADN). COSI leverages probability density functions created from Latin hypercube sampling, ensuring even coverage of the solution space to improve the stability of the segmentation model. GS widens the exploration scope to combat stagnation during iterations and improve segmentation efficiency. ADN refines convergence accuracy around optimal individuals to improve segmentation accuracy. CAGWOA's performance is validated through experiments on various benchmark function test sets. Furthermore, we apply CAGWOA alongside similar methods in a multi-threshold image segmentation model for comparative experiments on lung X-ray images of infected patients. The results demonstrate CAGWOA's superiority, including better image detail preservation, clear segmentation boundaries, and adaptability across different threshold levels.
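The Latin hypercube initialization idea described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' CAGWOA code; the function name, signature, and bounds handling are our assumptions:

```python
import numpy as np

def latin_hypercube_init(n_agents, dim, lb, ub, seed=None):
    """Stratified initial population: each dimension is split into
    n_agents equal strata and exactly one sample is drawn per stratum,
    giving more even solution-space coverage than plain uniform sampling."""
    rng = np.random.default_rng(seed)
    # One point per stratum, jittered uniformly within its stratum.
    u = (np.arange(n_agents)[:, None] + rng.random((n_agents, dim))) / n_agents
    for d in range(dim):          # shuffle strata independently per dimension
        rng.shuffle(u[:, d])
    return lb + u * (ub - lb)
```

For multi-threshold segmentation of an 8-bit image, `dim` would be the number of thresholds and the bounds `lb=0`, `ub=255`, so every whale starts with thresholds spread across the full intensity range.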
Affiliation(s)
- Zhen Wang
- College of Computer Science and Technology, Changchun Normal University, Changchun, 130032, Jilin, China
- Dong Zhao
- College of Computer Science and Technology, Changchun Normal University, Changchun, 130032, Jilin, China
- Ali Asghar Heidari
- School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Yi Chen
- Key Laboratory of Intelligent Informatics for Safety & Emergency of Zhejiang Province, Wenzhou University, Wenzhou, 325035, China
- Huiling Chen
- Key Laboratory of Intelligent Informatics for Safety & Emergency of Zhejiang Province, Wenzhou University, Wenzhou, 325035, China
- Guoxi Liang
- Department of Artificial Intelligence, Wenzhou Polytechnic, Wenzhou, 325035, China
3
Sun G, Cai L, Yan X, Nie W, Liu X, Xu J, Zou X. A prediction model based on digital breast pathology image information. PLoS One 2024; 19:e0294923. [PMID: 38758814 PMCID: PMC11101065 DOI: 10.1371/journal.pone.0294923]
Abstract
BACKGROUND The workload of breast cancer pathological diagnosis is very heavy. The purpose of this study is to establish a nomogram model based on pathological images to predict the benign and malignant nature of breast diseases and to validate its predictive performance. METHODS Retrospectively, a total of 2,723 H&E-stained pathological images were collected from 1,474 patients at Qingdao Central Hospital between 2019 and 2022. The dataset consisted of 509 benign tumor images (adenosis and fibroadenoma) and 2,214 malignant tumor images (infiltrating ductal carcinoma). The images were divided into a training set (1,907 images) and a validation set (816 images). Python 3.7 was used to extract the values of the R channel, G channel, B channel, and one-dimensional information entropy from all images. Multivariable logistic regression was used to select variables and establish the breast tissue pathological image prediction model. RESULTS The R channel value, B channel value, and one-dimensional information entropy of the images were identified as independent predictive factors for the classification of benign and malignant pathological images (P < 0.05). The area under the curve (AUC) of the nomogram model in the training set was 0.889 (95% CI: 0.869, 0.909), and the AUC in the validation set was 0.838 (95% CI: 0.798, 0.877). The calibration curve results showed that the calibration curve of this nomogram model was close to the ideal curve. The decision curve results indicated that the predictive model curve had a high value for auxiliary diagnosis. CONCLUSION The nomogram model for the prediction of benign and malignant breast diseases based on pathological images demonstrates good predictive performance. This model can assist in the diagnosis of breast tissue pathological images.
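The abstract does not specify exactly how the channel values and the one-dimensional information entropy were computed. A minimal sketch of plausible per-image features (mean R/G/B values and the Shannon entropy of the grayscale intensity histogram) might look like the following; the function name, the simple average-grayscale conversion, and the base-2 entropy are our assumptions:

```python
import numpy as np

def image_features(img):
    """img: H x W x 3 uint8 RGB array.
    Returns mean R, G, B channel values and the one-dimensional
    Shannon entropy (bits) of the grayscale intensity histogram."""
    r_mean, g_mean, b_mean = img.reshape(-1, 3).mean(axis=0)
    gray = img.mean(axis=2).astype(np.uint8)           # simple average grayscale
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                                       # skip empty bins
    entropy = float(-(p * np.log2(p)).sum())
    return float(r_mean), float(g_mean), float(b_mean), entropy
```

Features like these, computed per image, would then feed the multivariable logistic regression used to build the nomogram.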
Affiliation(s)
- Guoxin Sun
- School of Clinical Medicine, Qingdao University, Qingdao, China
- Liying Cai
- College of Nursing and Rehabilitation, North China University of Science and Technology, Tangshan City, China
- Xiong Yan
- Department of Pathology, Qingdao Central Hospital, Qingdao, China
- Weihong Nie
- School of Clinical Medicine, Qingdao University, Qingdao, China
- Xin Liu
- School of Clinical Medicine, Qingdao University, Qingdao, China
- Jing Xu
- Department of Pathology, Qingdao Central Hospital, Qingdao, China
- Xiao Zou
- Department of Breast Surgery, Xiangdong Hospital Affiliated to Hunan Normal University, Hunan, China
4
Kaur A, Kaushal C, Sandhu JK, Damaševičius R, Thakur N. Histopathological Image Diagnosis for Breast Cancer Diagnosis Based on Deep Mutual Learning. Diagnostics (Basel) 2023; 14:95. [PMID: 38201406 PMCID: PMC10795733 DOI: 10.3390/diagnostics14010095]
Abstract
Every year, millions of women across the globe are diagnosed with breast cancer (BC), an illness that is both common and potentially fatal. To provide effective therapy and enhance patient outcomes, it is essential to make an accurate diagnosis as soon as possible. In recent years, deep-learning (DL) approaches have shown great effectiveness in a variety of medical imaging applications, including the processing of histopathological images. The objective of this study is to improve the detection of BC by merging qualitative and quantitative data using DL techniques, with an emphasis on deep mutual learning (DML). In addition, a wide variety of breast cancer imaging modalities were investigated to assess the distinction between aggressive and benign BC. On this basis, deep convolutional neural networks (DCNNs) were established to assess histopathological images of BC. On the BreakHis-200×, BACH, and PUIH datasets, the trials indicate that the DML model achieves accuracies of 98.97%, 96.78%, and 96.34%, respectively, outperforming the other methodologies. More specifically, it improves localization results without compromising classification performance, an indication of its increased utility. We intend to proceed with the development of the diagnostic model to make it more applicable to clinical settings.
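Deep mutual learning, as formulated by Zhang et al., trains two (or more) student networks jointly: each network minimizes its own supervised cross-entropy plus a KL-divergence term pulling it toward its peer's predicted distribution. The numerical sketch below illustrates that loss only and is not the authors' implementation; the function names and the two-network setup are our assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def dml_losses(logits_a, logits_b, labels):
    """Per-network deep-mutual-learning losses: each network's loss is
    its cross-entropy plus KL(peer || self), so the two students teach
    each other while both fit the ground-truth labels."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    n = np.arange(len(labels))
    ce_a = -np.log(pa[n, labels]).mean()
    ce_b = -np.log(pb[n, labels]).mean()
    kl_b_a = (pb * (np.log(pb) - np.log(pa))).sum(axis=1).mean()  # KL(pb || pa)
    kl_a_b = (pa * (np.log(pa) - np.log(pb))).sum(axis=1).mean()  # KL(pa || pb)
    return ce_a + kl_b_a, ce_b + kl_a_b
```

In training, each loss would be backpropagated through its own network only, with the peer's probabilities treated as constants for that update.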
Affiliation(s)
- Amandeep Kaur
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Chetna Kaushal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Jasjeet Kaur Sandhu
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 53361 Akademija, Lithuania
- Neetika Thakur
- Junior Laboratory Technician, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India
5
Singh A, Paruthy SB, Belsariya V, Chandra J N, Singh SK, Manivasagam SS, Choudhary S, Kumar MA, Khera D, Kuraria V. Revolutionizing Breast Healthcare: Harnessing the Role of Artificial Intelligence. Cureus 2023; 15:e50203. [PMID: 38192969 PMCID: PMC10772314 DOI: 10.7759/cureus.50203]
Abstract
Breast cancer has the highest incidence and second-highest mortality rate among all cancers. The management of breast cancer is being revolutionized by artificial intelligence (AI), which is improving early detection, pathological diagnosis, risk assessment, individualized treatment recommendations, and treatment response prediction. Nuclear medicine has used AI for over 50 years, but more recent advances in machine learning (ML) and deep learning (DL) have given AI in nuclear medicine additional capabilities. AI accurately analyzes breast imaging scans for early detection, minimizing false negatives while offering radiologists reliable, swift image-processing assistance. It fits smoothly into radiology workflows, which may result in earlier treatment and reduced expenditures. In pathological diagnosis, AI improves the quality of diagnostic data by ensuring accurate diagnoses, lowering inter-observer variability, speeding up the review process, and identifying errors or poor slides. By taking into consideration nutritional, genetic, and environmental factors, providing individualized risk assessments, and recommending more frequent tests for higher-risk patients, AI aids in the risk assessment of breast cancer. The integration of clinical and genetic data into individualized treatment recommendations by AI facilitates collaborative decision-making and resource allocation optimization, while also enabling patient progress monitoring, drug interaction consideration, and alignment with clinical guidelines. AI is used to analyze patient data, imaging, genomic data, and pathology reports in order to forecast how a treatment would respond. These models anticipate treatment outcomes, ensure that clinical recommendations are followed, and learn from historical data.
The implementation of AI in medicine is hampered by issues with data quality, integration with healthcare IT systems, data protection, bias reduction, and ethical considerations, necessitating transparency and constant surveillance. Protecting patient privacy, resolving biases, maintaining transparency, identifying fault for mistakes, and ensuring fair access are just a few examples of ethical considerations. To preserve patient trust and address the effect on the healthcare workforce, ethical frameworks must be developed. The amazing potential of AI in the treatment of breast cancer calls for careful examination of its ethical and practical implications. We aim to review the comprehensive role of artificial intelligence in breast cancer management.
Affiliation(s)
- Arun Singh
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Shivani B Paruthy
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Vivek Belsariya
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Nemi Chandra J
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Sunil Kumar Singh
- Surgical Oncology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Sushila Choudhary
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- M Anil Kumar
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Dhananjay Khera
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
- Vaibhav Kuraria
- General Surgery, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, IND
6
Yu Y, Zhou T, Cao L. Use and application of organ-on-a-chip platforms in cancer research. J Cell Commun Signal 2023:10.1007/s12079-023-00790-7. [PMID: 38032444 DOI: 10.1007/s12079-023-00790-7]
Abstract
Tumors are a major cause of death worldwide, and much effort has been made to develop appropriate anti-tumor therapies. Existing in vitro and in vivo tumor models cannot reflect the critical features of cancer. The development of organ-on-a-chip models has enabled the integration of organoids, microfluidics, tissue engineering, biomaterials research, and microfabrication, offering conditions that mimic tumor physiology. Three-dimensional in vitro human tumor models established as organ-on-a-chip models contain multiple cell types and a structure similar to that of the primary tumor. These models can be applied to various foci of oncology research. Moreover, the high-throughput features of microfluidic organ-on-a-chip models offer new opportunities for large-scale drug screening and the development of more personalized treatments. In this review of the literature, we explore the development of organ-on-a-chip technology, discuss its use as an innovative tool in basic and clinical applications, and summarize how it has advanced cancer research.
Affiliation(s)
- Yifan Yu
- Department of Hepatobiliary and Transplant Surgery, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- TingTing Zhou
- The College of Basic Medical Science, Health Sciences Institute, Key Laboratory of Cell Biology of Ministry of Public Health, Key Laboratory of Medical Cell Biology of Ministry of Education, Liaoning Province Collaborative Innovation Center of Aging Related Disease Diagnosis and Treatment and Prevention, China Medical University, No. 77, Puhe Road, Shenyang North New Area, Shenyang, 110122, Liaoning, China
- Liu Cao
- The College of Basic Medical Science, Health Sciences Institute, Key Laboratory of Cell Biology of Ministry of Public Health, Key Laboratory of Medical Cell Biology of Ministry of Education, Liaoning Province Collaborative Innovation Center of Aging Related Disease Diagnosis and Treatment and Prevention, China Medical University, No. 77, Puhe Road, Shenyang North New Area, Shenyang, 110122, Liaoning, China
7
Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385 DOI: 10.1088/1361-6560/acfade]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Affiliation(s)
- Jia-Wei Li
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Dan-Li Sheng
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Jian-Gang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Shuai Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, 200032, People's Republic of China
- Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
8
Luo J, Li X, Wei KL, Chen G, Xiong DD. Advances in the application of computational pathology in diagnosis, immunomicroenvironment recognition, and immunotherapy evaluation of breast cancer: a narrative review. J Cancer Res Clin Oncol 2023; 149:12535-12542. [PMID: 37389595 DOI: 10.1007/s00432-023-05002-8]
Abstract
BACKGROUND Breast cancer (BC) is a prevalent and highly lethal malignancy affecting women worldwide. Immunotherapy has emerged as a promising therapeutic strategy for BC, offering potential improvements in patient survival. Neoadjuvant therapy (NAT) has also gained significant clinical traction. With the advancement of computer technology, Artificial Intelligence (AI) has been increasingly applied in pathology research, expanding and redefining the scope of the field. This narrative review aims to provide a comprehensive overview of the current literature on the application of computational pathology in BC, specifically focusing on diagnosis, immune microenvironment recognition, and the evaluation of immunotherapy and NAT response. METHODS A thorough examination of relevant literature was conducted, focusing on studies investigating the role of computational pathology in BC diagnosis, immune microenvironment recognition, and immunotherapy and NAT assessment. RESULTS The application of computational pathology has shown significant potential in BC management. AI-based techniques enable improved diagnosis and classification of BC subtypes, enhance the identification and characterization of the immune microenvironment, and facilitate the evaluation of immunotherapy and NAT response. However, challenges related to data quality, standardization, and algorithm development still need to be addressed. CONCLUSION The integration of computational pathology and AI has transformative implications for BC patient care. By leveraging AI-based technologies, clinicians can make more informed decisions in diagnosis, treatment planning, and therapeutic response assessment. Future research should focus on refining AI algorithms, addressing technical challenges, and conducting large-scale clinical validation studies to facilitate the translation of computational pathology into routine clinical practice for BC patients.
Affiliation(s)
- Jie Luo
- Department of Oncology, The Second Affiliated Hospital of Guangxi Medical University, Nanning, 530007, Guangxi, People's Republic of China
- Xia Li
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, 530021, Guangxi, People's Republic of China
- Kang-Lai Wei
- Department of Pathology, The Second Affiliated Hospital of Guangxi Medical University, Nanning, 530007, Guangxi, People's Republic of China
- Gang Chen
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, 530021, Guangxi, People's Republic of China
- Dan-Dan Xiong
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, 530021, Guangxi, People's Republic of China
9
Raj M K, Priyadarshani J, Karan P, Bandyopadhyay S, Bhattacharya S, Chakraborty S. Bio-inspired microfluidics: A review. Biomicrofluidics 2023; 17:051503. [PMID: 37781135 PMCID: PMC10539033 DOI: 10.1063/5.0161809]
Abstract
Biomicrofluidics, a subdomain of microfluidics, has been inspired by several ideas from nature. However, while the basic inspiration may be drawn from the living world, translating all the relevant essential functionalities to an artificially engineered framework is not trivial. Here, we review recent progress in bio-inspired microfluidic systems that harness the integration of experimental and simulation tools at the interface of engineering and biology. The development of "on-chip" technologies and their multifarious applications is subsequently discussed, along with the relevant advancements in materials and fabrication technology. Pointers toward new directions in research are suggested, including an amalgamated fusion of data-driven modeling (such as artificial intelligence and machine learning) and physics-based paradigms to arrive at a human physiological replica on a synthetic bio-chip with due accounting of personalized features. These are likely to facilitate physiologically faithful disease modeling on an artificially engineered biochip, as well as to advance drug development and screening in an expedited route with minimized animal and human trials.
Affiliation(s)
- Kiran Raj M
- Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India
- Jyotsana Priyadarshani
- Department of Mechanical Engineering, Biomechanics Section (BMe), KU Leuven, Celestijnenlaan 300, 3001 Louvain, Belgium
- Pratyaksh Karan
- Géosciences Rennes, Univ Rennes, CNRS, UMR 6118, 35000 Rennes, France
- Saumyadwip Bandyopadhyay
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Soumya Bhattacharya
- Achira Labs Private Limited, 66b, 13th Cross Rd., Dollar Layout, 3-Phase, JP Nagar, Bangalore, Karnataka 560078, India
- Suman Chakraborty
- Department of Mechanical Engineering, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
10
Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, Dhar M. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell 2023; 6:1227091. [PMID: 37705603 PMCID: PMC10497111 DOI: 10.3389/frai.2023.1227091]
Abstract
As the demand for quality healthcare increases, healthcare systems worldwide are grappling with time constraints and excessive workloads, which can compromise the quality of patient care. Artificial intelligence (AI) has emerged as a powerful tool in clinical medicine, revolutionizing various aspects of patient care and medical research. The integration of AI in clinical medicine has not only improved diagnostic accuracy and treatment outcomes, but also contributed to more efficient healthcare delivery, reduced costs, and facilitated better patient experiences. This review article provides an extensive overview of AI applications in history taking, clinical examination, imaging, therapeutics, prognosis and research. Furthermore, it highlights the critical role AI has played in transforming healthcare in developing nations.
Affiliation(s)
- Gokul Krishnan
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Shiana Singh
- Department of Emergency Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Monika Pathania
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Siddharth Gosavi
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Shuchi Abhishek
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Ashwin Parchani
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Minakshi Dhar
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
11
Xiao J, Kopycka-Kedzierawski D, Ragusa P, Mendez Chagoya LA, Funkhouser K, Lischka T, Wu TT, Fiscella K, Kar KS, Al Jallad N, Rashwan N, Ren J, Meyerowitz C. Acceptance and Usability of an Innovative mDentistry eHygiene Model Amid the COVID-19 Pandemic Within the US National Dental Practice-Based Research Network: Mixed Methods Study. JMIR Hum Factors 2023; 10:e45418. [PMID: 37594795 PMCID: PMC10474507 DOI: 10.2196/45418]
Abstract
BACKGROUND Amid the COVID-19 pandemic and other possible future infectious disease pandemics, dentistry needs to consider modified dental examination regimens that render quality care and ensure the safety of patients and dental health care personnel (DHCP). OBJECTIVE This study aims to assess the acceptance and usability of an innovative mDentistry eHygiene model amid the COVID-19 pandemic. METHODS This pilot study used a 2-stage implementation design to assess 2 critical components of an innovative mDentistry eHygiene model: virtual hygiene examination (eHygiene) and patient self-taken intraoral images (SELFIE), within the National Dental Practice-Based Research Network. Mixed methods (quantitative and qualitative) were used to assess the acceptance and usability of the eHygiene model. RESULTS A total of 85 patients and 18 DHCP participated in the study. Overall, the eHygiene model was well accepted by patients (System Usability Scale [SUS] score: mean 70.0, SD 23.7) and moderately accepted by dentists (SUS score: mean 51.3, SD 15.9) and hygienists (SUS score: mean 57.1, SD 23.8). Dentists and patients had good communication during the eHygiene examination, as assessed using the Dentist-Patient Communication scale. In the SELFIE session, patients completed tasks with minimum challenges and obtained diagnostic intraoral photos. Patients and DHCP suggested that although eHygiene has the potential to improve oral health care services, it should be used selectively depending on patients' conditions. CONCLUSIONS The study results showed promise for the 2 components of the eHygiene model. eHygiene offers a complementary modality for oral health data collection and examination in dental offices, which would be particularly useful during an infectious disease outbreak. In addition, patients being able to capture critical oral health data in their home could facilitate dental treatment triage and oral health self-monitoring and potentially trigger oral health-promoting behaviors.
Affiliation(s)
- Jin Xiao: Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Patricia Ragusa: Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Tamara Lischka: Kaiser Permanente Center for Health Research, Portland, OR, United States
- Tong Tong Wu: Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY, United States
- Kevin Fiscella: Department of Family Medicine, University of Rochester, Rochester, NY, United States
- Kumari Saswati Kar: Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Nisreen Al Jallad: Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Noha Rashwan: Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Johana Ren: River Campus, University of Rochester, Rochester, NY, United States
- Cyril Meyerowitz: Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
12
Davri A, Birbas E, Kanavos T, Ntritsos G, Giannakeas N, Tzallas AT, Batistatou A. Deep Learning for Lung Cancer Diagnosis, Prognosis and Prediction Using Histological and Cytological Images: A Systematic Review. Cancers (Basel) 2023; 15:3981. [PMID: 37568797 PMCID: PMC10417369 DOI: 10.3390/cancers15153981]
Abstract
Lung cancer is one of the deadliest cancers worldwide, with a high incidence rate, especially among tobacco smokers. Accurate lung cancer diagnosis is based on distinct histological patterns combined with molecular data for personalized treatment. Precise lung cancer classification from a single H&E slide can be challenging for a pathologist, often requiring additional histochemical and special immunohistochemical stains for the final pathology report. According to the WHO, small biopsy and cytology specimens are the available materials for about 70% of lung cancer patients with advanced-stage unresectable disease. Thus, the limited available diagnostic material necessitates optimal management and processing for the completion of diagnosis and predictive testing according to published guidelines. In the new era of Digital Pathology, Deep Learning offers the potential to assist pathologists' routine practice in lung cancer interpretation. Herein, we systematically review current Artificial Intelligence-based approaches using histological and cytological images of lung cancer. Most of the published literature centers on the distinction between lung adenocarcinoma, lung squamous cell carcinoma, and small cell lung carcinoma, reflecting pathologists' real-world routine. Furthermore, several studies developed algorithms for determining the predominant architectural pattern of lung adenocarcinoma, predicting prognosis, characterizing mutational status, and estimating PD-L1 expression status.
Affiliation(s)
- Athena Davri: Department of Pathology, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45500 Ioannina, Greece
- Effrosyni Birbas: Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Theofilos Kanavos: Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Georgios Ntritsos: Department of Hygiene and Epidemiology, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece; Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
- Nikolaos Giannakeas: Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
- Alexandros T. Tzallas: Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
- Anna Batistatou: Department of Pathology, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45500 Ioannina, Greece
13
Ong J, Waisberg E, Masalkhi M, Kamran SA, Lowry K, Sarker P, Zaman N, Paladugu P, Tavakkoli A, Lee AG. Artificial Intelligence Frameworks to Detect and Investigate the Pathophysiology of Spaceflight Associated Neuro-Ocular Syndrome (SANS). Brain Sci 2023; 13:1148. [PMID: 37626504 PMCID: PMC10452366 DOI: 10.3390/brainsci13081148]
Abstract
Spaceflight associated neuro-ocular syndrome (SANS) is a unique phenomenon observed in astronauts who have undergone long-duration spaceflight (LDSF). The syndrome is characterized by distinct imaging and clinical findings including optic disc edema, hyperopic refractive shift, posterior globe flattening, and choroidal folds. SANS poses a major barrier to planetary spaceflight, such as a mission to Mars, and has been designated by the National Aeronautics and Space Administration (NASA) as a high risk based on its likelihood of occurrence and its severity for human health and mission performance. Despite this, the underlying etiology of SANS is not well understood. Current ophthalmic imaging onboard the International Space Station (ISS) has provided further insights into SANS; however, the spaceflight environment presents unique challenges and limitations to further understanding this microgravity-induced phenomenon. The advent of artificial intelligence (AI) has revolutionized the field of imaging in ophthalmology, particularly in detection and monitoring. In this manuscript, we describe the current hypothesized pathophysiology of SANS and the medical diagnostic limitations during spaceflight that hinder understanding of its pathogenesis. We then introduce and describe various AI frameworks that can be applied to ophthalmic imaging onboard the ISS to further understand SANS, including supervised/unsupervised learning, generative adversarial networks, and transfer learning. We conclude by describing current research in this area, with the goal of enabling deeper insights into SANS and safer spaceflight for future missions.
Affiliation(s)
- Joshua Ong: Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI 48105, USA
- Mouayad Masalkhi: University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Sharif Amit Kamran: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Prithul Sarker: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Nasif Zaman: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Phani Paladugu: Brigham and Women’s Hospital, Harvard Medical School, Boston, MA 02115, USA; Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Alireza Tavakkoli: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Andrew G. Lee: Center for Space Medicine, Baylor College of Medicine, Houston, TX 77030, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX 77030, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX 77030, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY 10065, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX 77555, USA; University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Texas A&M College of Medicine, Bryan, TX 77030, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA 50010, USA
14
Krishna S, Suganthi S, Bhavsar A, Yesodharan J, Krishnamoorthy S. An interpretable decision-support model for breast cancer diagnosis using histopathology images. J Pathol Inform 2023; 14:100319. [PMID: 37416058 PMCID: PMC10320615 DOI: 10.1016/j.jpi.2023.100319]
Abstract
Microscopic examination of biopsy tissue slides is regarded as the gold-standard methodology for confirming the presence of cancer cells. Manual analysis of an overwhelming inflow of tissue slides is highly susceptible to misreading by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting the definitive diagnosis of cancer. Convolutional Neural Networks (CNNs) have proven to be the most adaptable and effective technique for detecting abnormal pathologic histology. Despite their high sensitivity and predictive power, their clinical translation is constrained by a lack of intelligible insights into their predictions. A computer-aided system that can offer both a definitive diagnosis and interpretability is therefore highly desirable. A conventional visual explanation technique, Class Activation Mapping (CAM), combined with CNN models offers interpretable decision making. The major challenges with CAM are that it cannot be optimized to create the best visualization map and that it decreases the performance of CNN models. To address these challenges, we introduce a novel interpretable decision-support model using a CNN with a trainable attention mechanism based on response-based feed-forward visual explanation. We introduce a variant of the DarkNet19 CNN model for the classification of histopathology images. To achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with the DarkNet19 network, forming an Attention Branch Network (ABN). The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap identifying the region of interest. Finally, the perception branch, constituted by a fully connected layer, classifies the images. We trained and validated our model using more than 7000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability conferred on the DarkNet19 CNN model by the attention branch, besides delivering a 3%-4% performance boost over the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist. The combined approach of unifying an attention branch with the CNN model equips pathologists with augmented diagnostic interpretability of histological images without detriment to state-of-the-art performance. The model's proficiency in pinpointing the region of interest is an added benefit that can support accurate clinical translation of deep learning models for clinical decision support.
Affiliation(s)
- Sruthi Krishna: Center for Wireless Networks & Applications (WNA), Amrita Vishwa Vidyapeetham, Amritapuri, India
- Arnav Bhavsar: School of Computing and Electrical Engineering, IIT Mandi, Himachal Pradesh, India
- Jyotsna Yesodharan: Department of Pathology, Amrita Institute of Medical Science, Kochi, India
15
Gouel P, Callonnec F, Levêque É, Valet C, Blôt A, Cuvelier C, Saï S, Saunier L, Pepin LF, Hapdey S, Libraire J, Vera P, Viard B. Evaluation of the capability and reproducibility of RECIST 1.1 measurements by technologists in breast cancer follow-up: a pilot study. Sci Rep 2023; 13:9148. [PMID: 37277412 DOI: 10.1038/s41598-023-36315-w]
Abstract
The evaluation of tumor follow-up according to RECIST 1.1 has become essential in clinical practice given its role in therapeutic decision making. At the same time, radiologists face increasing workloads amid a staffing shortage. Radiographic technologists could contribute to performing these measurements, but no studies have evaluated their ability to do so. Ninety breast cancer patients underwent three follow-up CT examinations between September 2017 and August 2021; 270 follow-up CT scans were analyzed, including 445 target lesions. Agreement on RECIST 1.1 classifications between five technologists and the radiologists ranged from moderate (k between 0.47 and 0.52) to substantial (k = 0.62 and k = 0.67). 112 CT scans were classified as progressive disease (PD) by the radiologists, and 414 new lesions were identified. The analysis showed strict agreement on progressive disease classification between reader-technologists and radiologists ranging from substantial to almost perfect (range 73-97%). Intra-observer agreement was strong to almost perfect (k > 0.78) for 3 technologists. These results are encouraging regarding the ability of selected technologists to perform measurements according to RECIST 1.1 criteria on CT scans with good identification of disease progression.
Affiliation(s)
- Pierrick Gouel: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France; QuantIF-LITIS EA4108, University of Rouen, Rouen, Normandy, France
- Françoise Callonnec: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Émilie Levêque: Department of Statistics and Clinical Research Unit, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Céline Valet: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Axelle Blôt: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Clémence Cuvelier: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Sonia Saï: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Lucie Saunier: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Louis-Ferdinand Pepin: Department of Statistics and Clinical Research Unit, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Sébastien Hapdey: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France; QuantIF-LITIS EA4108, University of Rouen, Rouen, Normandy, France
- Julie Libraire: Department of Statistics and Clinical Research Unit, Henri Becquerel Cancer Center, Rouen, Normandy, France
- Pierre Vera: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France; QuantIF-LITIS EA4108, University of Rouen, Rouen, Normandy, France
- Benjamin Viard: Department of Medical Imaging, Henri Becquerel Cancer Center, Rouen, Normandy, France
16
Bhausaheb DP, Kashyap KL. Shuffled Shepherd Deer Hunting Optimization based Deep Neural Network for Breast Cancer Classification using Breast Histopathology Images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104570]
17
Li Y, Xu J, Wang P, Li P, Yang G, Chen R. Manifold reconstructed semi-supervised domain adaptation for histopathology images classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104495]
18
Nakach FZ, Zerouaoui H, Idri A. Binary classification of multi-magnification histopathological breast cancer images using late fusion and transfer learning. Data Technologies and Applications 2023. [DOI: 10.1108/dta-08-2022-0330]
Abstract
Purpose: Histopathology biopsy imaging is currently the gold standard for the diagnosis of breast cancer in clinical practice. Pathologists examine the images at various magnifications to identify the type of tumor, because if only one magnification is taken into account, the decision may not be accurate. This study explores the performance of transfer learning and late fusion to construct multi-scale ensembles that fuse different magnification-specific deep learning models for the binary classification of breast tumor slides.
Design/methodology/approach: Three pretrained deep learning techniques (DenseNet 201, MobileNet v2 and Inception v3) were used to classify breast tumor images over the four magnification factors of the Breast Cancer Histopathological Image Classification dataset (40×, 100×, 200× and 400×). To fuse the predictions of the models trained on different magnification factors, different aggregators were used, including weighted voting and seven meta-classifiers trained on slide predictions using class labels and the probabilities assigned to each class. The best cluster of the outperforming models was chosen using the Scott–Knott statistical test, and the top models were ranked using the Borda count voting system.
Findings: This study recommends the use of transfer learning and late fusion for histopathological breast cancer image classification by constructing multi-magnification ensembles, because they perform better than models trained on each magnification separately.
Originality/value: The best multi-scale ensembles outperformed state-of-the-art integrated models and achieved a mean accuracy of 98.82 per cent, precision of 98.46 per cent, recall of 100 per cent and F1-score of 99.20 per cent.
19
Scalco R, Hamsafar Y, White CL, Schneider JA, Reichard RR, Prokop S, Perrin RJ, Nelson PT, Mooney S, Lieberman AP, Kukull WA, Kofler J, Keene CD, Kapasi A, Irwin DJ, Gutman DA, Flanagan ME, Crary JF, Chan KC, Murray ME, Dugger BN. The status of digital pathology and associated infrastructure within Alzheimer's Disease Centers. J Neuropathol Exp Neurol 2023; 82:202-211. [PMID: 36692179 PMCID: PMC9941826 DOI: 10.1093/jnen/nlac127]
Abstract
Digital pathology (DP) has transformative potential, especially for Alzheimer disease and related disorders. However, infrastructure barriers may limit adoption. To provide benchmarks and insights into implementation barriers, a survey was conducted in 2019 within the National Institutes of Health's Alzheimer's Disease Centers (ADCs). Questions covered infrastructure, funding sources, and data management related to digital pathology. Of the 35 ADCs to which the survey was sent, 33 responded. Most respondents (81%) stated that their ADC had access to a digital slide scanner, the most frequent brand being Aperio/Leica (62.9%). Approximately a third of respondents stated there were fees to utilize the scanner. For DP and machine learning (ML) resources, 41% of respondents stated none was supported by their ADC. For scanner purchasing and operations, 50% of respondents stated they received institutional support. Some were unsure of the file size of scanned digital images (37%) or the total amount of storage space the files occupied (50%). Most (76%) were aware of other departments at their institution working with ML; a similar percentage (76%) were unaware of multiuniversity or industry partnerships. These results demonstrate that many ADCs have access to a digital slide scanner; additional investigations are needed to further understand the hurdles to implementing DP and ML workflows.
Affiliation(s)
- Rebeca Scalco: Department of Pathology and Laboratory Medicine, University of California-Davis, Sacramento, California, USA
- Yamah Hamsafar: Department of Pathology and Laboratory Medicine, University of California-Davis, Sacramento, California, USA
- Charles L White: Department of Pathology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Stefan Prokop: Department of Pathology, College of Medicine, University of Florida, Gainesville, Florida, USA
- Richard J Perrin: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, Missouri, USA; Department of Neurology, Washington University School of Medicine, Saint Louis, Missouri, USA; Knight Alzheimer’s Disease Research Center, Washington University School of Medicine, Saint Louis, Missouri, USA
- Sean Mooney: Institute for Medical Data Science and Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington, USA
- Andrew P Lieberman: Department of Pathology, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Walter A Kukull: Institute for Medical Data Science and Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington, USA
- Julia Kofler: Department of Pathology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Christopher Dirk Keene: Department of Laboratory Medicine and Pathology, University of Washington, Seattle, Washington, USA
- David J Irwin: Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- David A Gutman: Departments of Neurology, Psychiatry, and Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia, USA
- Margaret E Flanagan: Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA; Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- John F Crary: Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Ronald M. Loeb Center for Alzheimer’s Disease, Friedman Brain Institute, Neuropathology Brain Bank & Research CoRE, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Kwun C Chan: Institute for Medical Data Science and Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington, USA
- Melissa E Murray: Department of Neuroscience, Mayo Clinic, Jacksonville, Florida, USA
- Brittany N Dugger: Department of Pathology and Laboratory Medicine, University of California-Davis, Sacramento, California, USA
20
Amin MS, Ahn H. FabNet: A Features Agglomeration-Based Convolutional Neural Network for Multiscale Breast Cancer Histopathology Images Classification. Cancers (Basel) 2023; 15:1013. [PMID: 36831359 PMCID: PMC9954749 DOI: 10.3390/cancers15041013]
Abstract
The definitive diagnosis of histology specimen images is largely based on the pathologist's comprehensive experience; however, owing to the fine-to-coarse visual appearance of such images, experts often disagree in their assessments. Sophisticated deep learning approaches can help to automate the diagnosis process and reduce the analysis duration. More efficient and accurate automated systems can also increase diagnostic impartiality by reducing differences between operators. We propose a FabNet model that can learn the fine-to-coarse structural and textural features of multi-scale histopathological images by using an accretive network architecture that agglomerates hierarchical feature maps to achieve significant classification accuracy. We expand on a contemporary design by incorporating deep and close integration to finely combine features across layers. Our deep-layer accretive model structure combines the feature hierarchy in an iterative and hierarchical manner, yielding higher accuracy with fewer parameters. FabNet can identify malignant tumors from images and patches of histopathology images. We assessed the efficiency of our model on standard cancer datasets, which included breast cancer as well as colon cancer histopathology images. Our proposed model significantly outperforms existing state-of-the-art models in terms of accuracy, F1-score, precision, and sensitivity, with fewer parameters.
21
Ogundokun RO, Misra S, Akinrotimi AO, Ogul H. MobileNet-SVM: A Lightweight Deep Transfer Learning Model to Diagnose BCH Scans for IoMT-Based Imaging Sensors. Sensors (Basel) 2023; 23:656. [PMID: 36679455 PMCID: PMC9863875 DOI: 10.3390/s23020656]
Abstract
Many individuals worldwide pass away as a result of inadequate procedures for prompt illness identification and subsequent treatment. A valuable life can be saved, or at least extended, with the early identification of serious illnesses, such as various cancers and other life-threatening conditions. The development of the Internet of Medical Things (IoMT) has made it possible for healthcare technology to offer the general public efficient medical services and make a significant contribution to patients' recoveries. By using IoMT to diagnose and examine BreakHis v1 400× breast cancer histology (BCH) scans, disorders may be quickly identified and appropriate treatment given to a patient. This can be achieved with imaging equipment capable of automatically analyzing acquired images. However, most deep learning (DL)-based image classification approaches have a large number of parameters and are unsuitable for application in IoMT-centered imaging sensors. The goal of this study is to create a lightweight deep transfer learning (DTL) model suited to BCH scan examination with a good level of accuracy. In this study, a lightweight DTL-based model, "MobileNet-SVM", a hybridization of MobileNet and Support Vector Machine (SVM), for auto-classifying BreakHis v1 400× BCH images is presented. When tested against a real dataset of BreakHis v1 400× BCH images, the suggested technique achieved a training accuracy of 100% on the training dataset. It also obtained an accuracy of 91% and an F1-score of 91.35 on the test dataset. Considering how complicated BCH scans are, the findings are encouraging. In addition to its high precision, the MobileNet-SVM model is well suited for IoMT imaging equipment. According to the simulation findings, the suggested model requires little computation time.
Affiliation(s)
- Roseline Oluwaseun Ogundokun: Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania; Department of Computer Science, Landmark University, Omu Aran 251103, Kwara, Nigeria
- Sanjay Misra: Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
- Hasan Ogul: Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
22
An Approach toward Automatic Specifics Diagnosis of Breast Cancer Based on an Immunohistochemical Image. J Imaging 2023; 9:12. [PMID: 36662110 PMCID: PMC9866917 DOI: 10.3390/jimaging9010012]
Abstract
The paper explores the problem of automatic diagnosis based on immunohistochemical image analysis; the automated diagnosis serves as a preliminary, advisory finding for the diagnostician. The authors studied breast cancer histological and immunohistochemical images using the following biomarkers: progesterone, estrogen, oncoprotein, and a cell proliferation biomarker. The authors developed a breast cancer diagnosis method based on immunohistochemical image analysis. The proposed method consists of algorithms for image preprocessing, segmentation, and the determination of informative indicators (relative area and intensity of cells), as well as an algorithm for determining the molecular genetic breast cancer subtype. An adaptive algorithm for image preprocessing, including median filtering and image brightness equalization, was developed to improve image quality. In addition, the authors developed a software module, part of the HIAMS software package, based on the Java programming language and the OpenCV computer vision library. Four molecular genetic breast cancer subtypes can be identified with this solution: Luminal A, Luminal B, HER2/neu-amplified, and basal-like. The developed algorithm for the quantitative characteristics of immunohistochemical images showed sufficient accuracy in determining the "Luminal A" subtype. It was experimentally established that the relative area of cell nuclei covered with biomarkers of progesterone, estrogen, and oncoprotein was more than 85%. The given approach allows the process of diagnosis to be automated and accelerated. The developed algorithms for calculating the quantitative characteristics of cells in immunohistochemical images can increase the accuracy of diagnosis.
23
Liao J, Li X, Gan Y, Han S, Rong P, Wang W, Li W, Zhou L. Artificial intelligence assists precision medicine in cancer treatment. Front Oncol 2023; 12:998222. [PMID: 36686757 PMCID: PMC9846804 DOI: 10.3389/fonc.2022.998222]
Abstract
Cancer is a major medical problem worldwide. Because of its high heterogeneity, the same drugs or surgical methods may have different curative effects in patients with the same tumor, creating a need for more accurate tumor treatment methods and personalized treatment for each patient. Precise treatment of tumors is essential, which makes it urgent to gain an in-depth understanding of the changes tumors undergo, including changes in their genes, proteins, and cancer cell phenotypes, in order to develop targeted treatment strategies. Artificial intelligence (AI) based on big data can extract the hidden patterns, important information, and corresponding knowledge behind enormous amounts of data. For example, machine learning (ML) and deep learning, subsets of AI, can be used to mine deep-level information in genomics, transcriptomics, proteomics, radiomics, digital pathology images, and other data, helping clinicians understand tumors comprehensively. In addition, AI can find new biomarkers in data to assist tumor screening, detection, diagnosis, treatment, and prognosis prediction, so as to provide the best treatment for individual patients and improve their clinical outcomes.
Collapse
Affiliation(s)
- Jinzhuang Liao
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
| | - Xiaoying Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
| | - Yu Gan
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
| | - Shuangze Han
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
| | - Pengfei Rong
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- *Correspondence: Pengfei Rong; Wei Wang; Wei Li; Li Zhou
| | - Wei Wang
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- *Correspondence: Pengfei Rong; Wei Wang; Wei Li; Li Zhou
| | - Wei Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- *Correspondence: Pengfei Rong; Wei Wang; Wei Li; Li Zhou
| | - Li Zhou
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Department of Pathology, The Xiangya Hospital of Central South University, Changsha, Hunan, China
- *Correspondence: Pengfei Rong; Wei Wang; Wei Li; Li Zhou
| |
Collapse
|
24
|
Applications of artificial neural networks in microorganism image analysis: a comprehensive review from conventional multilayer perceptron to popular convolutional neural network and potential visual transformer. Artif Intell Rev 2023; 56:1013-1070. [PMID: 35528112 PMCID: PMC9066147 DOI: 10.1007/s10462-022-10192-7] [Citation(s) in RCA: 32] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Microorganisms are widely distributed in the human living environment. They play an essential role in environmental pollution control, disease prevention and treatment, and food and drug production, and their analysis is essential for making full use of different microorganisms. Conventional analysis methods are laborious and time-consuming, so automatic image analysis based on artificial neural networks has been introduced to streamline the process. However, automatic microorganism image analysis faces many challenges: varied application scenarios demand robust algorithms, the image characteristics lead to insignificant features and easy under-segmentation, and the analysis tasks themselves are diverse. We therefore conducted this review to comprehensively discuss the characteristics of microorganism image analysis based on artificial neural networks. In this review, the background and motivation are introduced first. Then, the development of artificial neural networks and representative networks are presented. After that, papers on microorganism image analysis based on classical and deep neural networks are reviewed from the perspective of different tasks. Finally, the methodology analysis and potential directions are discussed.
Collapse
|
25
|
Rueda J, Rodríguez JD, Jounou IP, Hortal-Carmona J, Ausín T, Rodríguez-Arias D. "Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI & SOCIETY 2022:1-12. [PMID: 36573157 PMCID: PMC9769482 DOI: 10.1007/s00146-022-01614-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 12/05/2022] [Indexed: 12/24/2022]
Abstract
The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps to maximize patients' benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its lack of explainability threaten core commitments of procedural fairness such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resource allocation in which the lack of explainability in AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations for considering the use of unexplainable algorithms in the distribution of health-related resources.
Collapse
Affiliation(s)
- Jon Rueda
- Department of Philosophy 1, University of Granada, Granada, Spain
- FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain
| | | | | | | | - Txetxu Ausín
- Institute of Philosophy, Spanish National Research Council, Madrid, Spain
| | - David Rodríguez-Arias
- Department of Philosophy 1, University of Granada, Granada, Spain
- FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain
| |
Collapse
|
26
|
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. MICROMACHINES 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computing hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze research on the detection, segmentation, and classification of breast cancer in pathological images. After analyzing 107 articles on the application of deep learning to pathological images of breast cancer, this study divides the work into three directions based on the types of results reported: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize related work from recent years. The results demonstrate the significant ability of deep learning in the analysis of breast cancer pathological images; indeed, in the classification and detection of such images, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of research on breast cancer pathological imaging and offers reliable recommendations on the structure of deep learning network models for different application scenarios.
Collapse
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
| | - Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
| |
Collapse
|
27
|
Zhang W, Zhang J, Yang S, Wang X, Yang W, Huang J, Wang W, Han X. Knowledge-Based Representation Learning for Nucleus Instance Classification From Histopathological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3939-3951. [PMID: 36037453 DOI: 10.1109/tmi.2022.3201981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The classification of nuclei in H&E-stained histopathological images is a fundamental step in the quantitative analysis of digital pathology. Most existing methods employ multi-class classification on the detected nucleus instances, while the annotation scale greatly limits their performance. Moreover, they often downplay the contextual information surrounding nucleus instances that is critical for classification. To explicitly provide contextual information to the classification model, we design a new structured input consisting of a content-rich image patch and a target instance mask. The image patch provides rich contextual information, while the target instance mask indicates the location of the instance to be classified and emphasizes its shape. Benefiting from our structured input format, we propose Structured Triplet for representation learning, a triplet learning framework on unlabeled nucleus instances with customized positive and negative sampling strategies. We pre-train a feature extraction model based on this framework with a large-scale unlabeled dataset, making it possible to train an effective classification model with limited annotated data. We also add two auxiliary branches, namely the attribute learning branch and the conventional self-supervised learning branch, to further improve its performance. As part of this work, we will release a new dataset of H&E-stained pathology images with nucleus instance masks, containing 20,187 patches of size 1024×1024, where each patch comes from a different whole-slide image. The model pre-trained on this dataset with our framework significantly reduces the burden of extensive labeling. We show a substantial improvement in nucleus classification accuracy compared with the state-of-the-art methods.
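The triplet objective at the core of a framework like Structured Triplet can be sketched with NumPy. This shows only the generic margin-based loss on embedding vectors; the paper's customized positive/negative sampling over structured inputs (image patch plus instance mask) is not reproduced, and the toy embeddings below are invented for illustration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin triplet loss on embedding vectors: the anchor should sit
    at least `margin` closer to the positive than to the negative."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())

# Toy 2-D embeddings of nucleus patches.
anchor   = np.array([[0.0, 0.0]])
positive = np.array([[0.1, 0.0]])   # same nucleus class, nearby
easy_neg = np.array([[2.0, 0.0]])   # already far: contributes zero loss
hard_neg = np.array([[0.2, 0.0]])   # too close: incurs a penalty
```

During pre-training, minimizing this loss over many sampled triplets pulls instances of the same class together in embedding space, which is what lets a downstream classifier succeed with limited annotations.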
Collapse
|
28
|
SARS-CoV-2 Morphometry Analysis and Prediction of Real Virus Levels Based on Full Recurrent Neural Network Using TEM Images. Viruses 2022; 14:v14112386. [PMID: 36366485 PMCID: PMC9698148 DOI: 10.3390/v14112386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 10/23/2022] [Accepted: 10/24/2022] [Indexed: 01/31/2023] Open
Abstract
The SARS-CoV-2 virus is responsible for the rapid global spread of COVID-19. As a result, it is critical to understand and collect primary data on the virus, infection epidemiology, and treatment. Despite the speed with which the virus was detected, studies of its cell biology and architecture at the ultrastructural level are still in their infancy. We therefore investigated and analyzed the viral morphometry of SARS-CoV-2 to extract key characteristics of the virus. We then proposed a prediction model to identify real virus levels, based on the optimization of a full recurrent neural network (RNN) using transmission electron microscopy (TEM) images. Identification of virus levels thus depends on morphometric measures of the virion area (width, height, circularity, roundness, aspect ratio, and solidity). Our model achieved a training error of 3.216 × 10⁻¹¹ at epoch 639, a regression value of -1.6 × 10⁻⁹, a momentum gain (Mu) of 1 × 10⁻⁹, and a gradient value of 9.6852 × 10⁻⁸, representing a network with a high ability to predict virus levels. The fully automated system enables virologists to take a high-accuracy approach to virus diagnosis, prevention of mutations, study of the life cycle, and improvement of diagnostic reagents and drugs, adding a point of view to the advancement of medical virology.
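Four of the morphometric descriptors named above have standard, ImageJ-style definitions that can be computed from a particle's area, perimeter, bounding box, and convex-hull area (width and height are used directly). A minimal sketch; the authors' exact definitions and the feature-to-RNN step may differ:

```python
import math

def shape_metrics(area, perimeter, width, height, hull_area):
    """Common particle-shape descriptors used in virion morphometry."""
    circularity = 4.0 * math.pi * area / perimeter ** 2        # 1.0 for a circle
    aspect_ratio = max(width, height) / min(width, height)     # 1.0 if isotropic
    roundness = 4.0 * area / (math.pi * max(width, height) ** 2)
    solidity = area / hull_area                                # 1.0 if convex
    return circularity, aspect_ratio, roundness, solidity

# An ideal circular virion of radius 10 px scores 1.0 on every descriptor.
r = 10.0
metrics = shape_metrics(math.pi * r**2, 2 * math.pi * r, 2 * r, 2 * r,
                        math.pi * r**2)
```

Deviations from 1.0 on these descriptors quantify how far an imaged particle departs from an ideal sphere, which is the kind of signal the abstract's prediction model consumes.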
Collapse
|
29
|
Breast cancer image analysis using deep learning techniques – a survey. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00703-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
30
|
Acs B, Leung SCY, Kidwell KM, Arun I, Augulis R, Badve SS, Bai Y, Bane AL, Bartlett JMS, Bayani J, Bigras G, Blank A, Buikema H, Chang MC, Dietz RL, Dodson A, Fineberg S, Focke CM, Gao D, Gown AM, Gutierrez C, Hartman J, Kos Z, Lænkholm AV, Laurinavicius A, Levenson RM, Mahboubi-Ardakani R, Mastropasqua MG, Nofech-Mozes S, Osborne CK, Penault-Llorca FM, Piper T, Quintayo MA, Rau TT, Reinhard S, Robertson S, Salgado R, Sugie T, van der Vegt B, Viale G, Zabaglo LA, Hayes DF, Dowsett M, Nielsen TO, Rimm DL. Systematically higher Ki67 scores on core biopsy samples compared to corresponding resection specimen in breast cancer: a multi-operator and multi-institutional study. Mod Pathol 2022; 35:1362-1369. [PMID: 35729220 PMCID: PMC9514990 DOI: 10.1038/s41379-022-01104-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 04/11/2022] [Accepted: 05/05/2022] [Indexed: 02/06/2023]
Abstract
Ki67 has potential clinical importance in breast cancer but has yet to see broad acceptance due to inter-laboratory variability. Here we tested an open-source, calibrated automated digital image analysis (DIA) platform to: (i) investigate the comparability of Ki67 measurement across corresponding core biopsy and resection specimens, and (ii) assess section-to-section differences in Ki67 scoring. Two sets of 60 previously stained slides, containing 30 core-cut biopsies and 30 corresponding resection specimens from 30 estrogen receptor-positive breast cancer patients, were sent to 17 participating labs for automated assessment of average Ki67 expression. The blocks were centrally cut and immunohistochemically (IHC) stained for Ki67 (MIB-1 antibody). The QuPath platform was used to evaluate tumoral Ki67 expression. Calibration of the DIA method was performed as in published studies, and a guideline for building an automated Ki67 scoring algorithm was sent to participating labs. Very high correlation and no systematic error (p = 0.08) were found between consecutive Ki67 IHC sections. Ki67 scores were higher for core biopsy slides than for paired whole sections from resections (p ≤ 0.001; median difference: 5.31%). The systematic discrepancy between core biopsies and corresponding whole sections was likely due to pre-analytical factors (tissue handling, fixation). Therefore, Ki67 IHC should be tested on core biopsy samples to best reflect the biological status of the tumor.
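The quantity being compared above is the Ki67 score, conventionally the percentage of tumor nuclei staining positive. A short sketch with invented counts (the study's actual per-case data are not reproduced here) shows how a paired biopsy-minus-resection median difference, like the reported 5.31%, is formed:

```python
def ki67_index(positive_nuclei, total_nuclei):
    """Ki67 score: percentage of tumor nuclei staining positive for Ki67."""
    if total_nuclei == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * positive_nuclei / total_nuclei

# Hypothetical paired (positive, total) nucleus counts for three patients.
biopsy    = [ki67_index(p, t) for p, t in [(32, 100), (18, 90), (25, 110)]]
resection = [ki67_index(p, t) for p, t in [(27, 100), (14, 90), (20, 110)]]
diffs = sorted(b - r for b, r in zip(biopsy, resection))
median_diff = diffs[len(diffs) // 2]   # crude median for an odd-length list
```

A consistently positive median difference across patients is exactly the systematic biopsy-versus-resection discrepancy the study attributes to pre-analytical factors.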
Collapse
Affiliation(s)
- Balazs Acs
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA.
- Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden.
- Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden.
| | | | - Kelley M Kidwell
- Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, MI, USA
| | - Indu Arun
- Tata Medical Center, Kolkata, West Bengal, India
| | - Renaldas Augulis
- Vilnius University Faculty of Medicine and National Center of Pathology, Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
| | - Sunil S Badve
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA, USA
| | - Yalai Bai
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
| | - Anita L Bane
- Juravinski Hospital and Cancer Centre, McMaster University, Hamilton, ON, Canada
| | - John M S Bartlett
- Ontario Institute for Cancer Research, Toronto, ON, Canada
- Edinburgh Cancer Research Centre, Western General Hospital, Edinburgh, United Kingdom
| | - Jane Bayani
- Ontario Institute for Cancer Research, Toronto, ON, Canada
| | - Gilbert Bigras
- Department of Laboratory Medicine and Pathology, University of Alberta, Edmonton, AB, Canada
| | - Annika Blank
- Institute of Pathology, University of Bern, Bern, Switzerland
- Institute of Pathology, Triemli Hospital Zurich, Zurich, Switzerland
| | - Henk Buikema
- University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Martin C Chang
- Department of Pathology & Laboratory Medicine, University of Vermont Medical Center, Burlington, VT, USA
| | - Robin L Dietz
- Department of Pathology, Olive View-UCLA Medical Center, Los Angeles, CA, USA
| | - Andrew Dodson
- UK NEQAS for Immunocytochemistry and In-Situ Hybridisation, London, United Kingdom
| | - Susan Fineberg
- Montefiore Medical Center and the Albert Einstein College of Medicine, Bronx, NY, USA
| | - Cornelia M Focke
- Dietrich-Bonhoeffer Medical Center, Neubrandenburg, Mecklenburg-Vorpommern, Germany
| | - Dongxia Gao
- University of British Columbia, Vancouver, BC, Canada
| | | | - Carolina Gutierrez
- Lester and Sue Smith Breast Center and Dan L. Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX, USA
| | - Johan Hartman
- Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden
- Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
| | - Zuzana Kos
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
| | - Anne-Vibeke Lænkholm
- Department of Surgical Pathology, Zealand University Hospital, Roskilde, Denmark
| | - Arvydas Laurinavicius
- Vilnius University Faculty of Medicine and National Center of Pathology, Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
| | - Richard M Levenson
- Department of Medical Pathology and Laboratory Medicine, University of California Davis Medical Center, Sacramento, CA, USA
| | - Rustin Mahboubi-Ardakani
- Department of Medical Pathology and Laboratory Medicine, University of California Davis Medical Center, Sacramento, CA, USA
| | | | - Sharon Nofech-Mozes
- University of Toronto Sunnybrook Health Sciences Centre, Toronto, ON, Canada
| | - C Kent Osborne
- Lester and Sue Smith Breast Center and Dan L. Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX, USA
| | - Frédérique M Penault-Llorca
- Imagerie Moléculaire et Stratégies Théranostiques, UMR1240, Université Clermont Auvergne, INSERM, Clermont-Ferrand, France
- Service de Pathologie, Centre Jean PERRIN, Clermont-Ferrand, France
| | - Tammy Piper
- Edinburgh Cancer Research Centre, Western General Hospital, Edinburgh, United Kingdom
| | | | - Tilman T Rau
- Institute of Pathology, University of Bern, Bern, Switzerland
- Institute of Pathology, Heinrich Heine University and University Hospital of Duesseldorf, Duesseldorf, Germany
| | - Stefan Reinhard
- Institute of Pathology, University of Bern, Bern, Switzerland
| | - Stephanie Robertson
- Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden
- Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
| | - Roberto Salgado
- Department of Pathology, GZA-ZNA, Antwerp, Belgium
- Peter MacCallum Cancer Centre, University of Melbourne, Melbourne, VIC, Australia
| | | | - Bert van der Vegt
- University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Giuseppe Viale
- European Institute of Oncology, Milan, Italy
- European Institute of Oncology IRCCS, and University of Milan, Milan, Italy
| | - Lila A Zabaglo
- The Institute of Cancer Research, London, United Kingdom
| | - Daniel F Hayes
- University of Michigan Rogel Cancer Center, Ann Arbor, MI, USA
| | - Mitch Dowsett
- The Institute of Cancer Research, London, United Kingdom
| | | | - David L Rimm
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA.
| | | |
Collapse
|
31
|
A multi-view deep learning model for pathology image diagnosis. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03918-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
32
|
Peloso A, Moeckli B, Delaune V, Oldani G, Andres A, Compagnon P. Artificial Intelligence: Present and Future Potential for Solid Organ Transplantation. Transpl Int 2022; 35:10640. [PMID: 35859667 PMCID: PMC9290190 DOI: 10.3389/ti.2022.10640] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Accepted: 06/13/2022] [Indexed: 12/12/2022]
Abstract
Artificial intelligence (AI) refers to computer algorithms used to complete tasks that usually require human intelligence; typical examples include complex decision-making and image or speech analysis. AI application in healthcare is rapidly evolving, and it undoubtedly holds enormous potential for the field of solid organ transplantation. In this review, we provide an overview of AI-based approaches in solid organ transplantation. In particular, we identified four key areas of transplantation that could be facilitated by AI: organ allocation and donor-recipient pairing, transplant oncology, real-time immunosuppression regimens, and precision transplant pathology. The potential implementations are vast, from improved allocation algorithms, smart donor-recipient matching, and dynamic adaptation of immunosuppression to automated analysis of transplant pathology. We are convinced that we are at the beginning of a new digital era in transplantation and that AI has the potential to improve graft and patient survival. This manuscript provides a glimpse into how AI innovations could shape an exciting future for the transplantation community.
Collapse
Affiliation(s)
- Andrea Peloso
- Department of General Surgery, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
- Department of Transplantation, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
- *Correspondence: Andrea Peloso,
| | - Beat Moeckli
- Department of General Surgery, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
- Department of Transplantation, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
| | - Vaihere Delaune
- Department of General Surgery, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
| | - Graziano Oldani
- Department of General Surgery, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
- Department of Transplantation, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
| | - Axel Andres
- Department of General Surgery, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
- Department of Transplantation, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
| | - Philippe Compagnon
- Department of Transplantation, University of Geneva Hospitals, University of Geneva, Geneva, Switzerland
| |
Collapse
|
33
|
Beyond the colors: enhanced deep learning on invasive ductal carcinoma. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07478-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
34
|
Al-Jallad N, Ly-Mapes O, Hao P, Ruan J, Ramesh A, Luo J, Wu TT, Dye T, Rashwan N, Ren J, Jang H, Mendez L, Alomeir N, Bullock S, Fiscella K, Xiao J. Artificial intelligence-powered smartphone application, AICaries, improves at-home dental caries screening in children: Moderated and unmoderated usability test. PLOS DIGITAL HEALTH 2022; 1:e0000046. [PMID: 36381137 PMCID: PMC9645586 DOI: 10.1371/journal.pdig.0000046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 04/15/2022] [Indexed: 06/16/2023]
Abstract
Early Childhood Caries (ECC) is the most common childhood disease worldwide and a health disparity among underserved children. ECC is preventable and reversible if detected early; however, many children from low-income families encounter barriers to dental care. An at-home caries detection technology could improve access to dental care regardless of patients' economic status and address the overwhelming prevalence of ECC. Our team has developed a smartphone application (app), AICaries, that uses artificial intelligence (AI)-powered technology to detect caries in photos of children's teeth. We used mixed methods to assess the acceptance, usability, and feasibility of the AICaries app among underserved parent-child dyads. We conducted moderated usability testing (Step 1) with ten parent-child dyads using "think-aloud" methods to assess the flow and functionality of the app, and analyzed the data to refine the app and procedures. Next, we conducted unmoderated field testing (Step 2) with 32 parent-child dyads to test the app in their natural environment (home) over two weeks. We administered the System Usability Scale (SUS), conducted semi-structured individual interviews with parents, and performed thematic analyses. The AICaries app received a SUS score of 78.4 from participants, indicating excellent acceptance. Notably, the majority (78.5%) of parent-taken photos of children's teeth were of satisfactory quality for AI-based caries detection. Parents suggested that community health workers provide training to parents who need assistance taking high-quality photos of their young children's teeth. Perceived benefits of using the AICaries app include convenient at-home caries screening, information and education on caries risk, and engagement of family members. Data from this study support a future clinical trial evaluating the real-world impact of this innovative smartphone app on early detection and prevention of ECC among low-income children.
Collapse
Affiliation(s)
- Nisreen Al-Jallad
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Oriana Ly-Mapes
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Peirong Hao
- Department of Computer Science, University of Rochester, United States of America
| | - Jinlong Ruan
- Department of Computer Science, University of Rochester, United States of America
| | - Ashwin Ramesh
- Department of Computer Science, University of Rochester, United States of America
| | - Jiebo Luo
- Department of Computer Science, University of Rochester, United States of America
| | - Tong Tong Wu
- Department of Biostatistics and computational biology, University of Rochester Medical Center, Rochester, United States of America
| | - Timothy Dye
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, United States of America
| | - Noha Rashwan
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Johana Ren
- University of Rochester, United States of America
| | - Hoonji Jang
- Temple University School of Dentistry, Pennsylvania, United States of America
| | - Luis Mendez
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Nora Alomeir
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | | | - Kevin Fiscella
- Department of Family Medicine, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Jin Xiao
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| |
Collapse
|
35
|
Murthy C, Balaji K. Histopathological analyses of breast cancer using deep learning. CARDIOMETRY 2022. [DOI: 10.18137/cardiometry.2022.22.456461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Deep learning offers a plethora of convolutional neural network (CNN) variants and models whose soundness has been demonstrated algorithmically when implemented on robust datasets. Histopathological images of breast cancer contain many haphazard structures and textures, and dealing with such multicolor, multi-structure components is a challenging task. Analyzing such data in wet labs yields clinically consistent results, and augmenting that analysis with computational models improves it empirically. In this paper, we propose a model to diagnose breast cancer from raw breast cancer images at different resolutions, irrespective of structure and texture. The floating image is mapped onto a healthy reference image and examined using statistics such as cross-correlation and phase correlation. Experiments were carried out with the aim of establishing optimal performance on histopathological images. The model attained satisfactory results and proved useful for decision-making in cancer diagnosis.
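The abstract's mapping of a floating image onto a reference via phase correlation can be sketched with NumPy: the peak of the normalized cross-power spectrum locates the translational offset between the two images. This is the textbook technique only; the authors' actual registration pipeline is not described at this level of detail, and the random test image is an assumption:

```python
import numpy as np

def phase_correlation_shift(reference, floating):
    """Estimate the integer (dy, dx) translation aligning `floating` to
    `reference` from the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(floating))
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Fold wrap-around peaks back to signed shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
reference = rng.random((32, 32))
floating = np.roll(reference, (3, -2), axis=(0, 1))   # known displacement
dy, dx = phase_correlation_shift(reference, floating)
```

Applying the recovered shift with `np.roll(floating, (dy, dx), axis=(0, 1))` restores alignment with the reference, after which difference statistics between the two images become meaningful.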
Collapse
|
36
|
Wong DR, Tang Z, Mew NC, Das S, Athey J, McAleese KE, Kofler JK, Flanagan ME, Borys E, White CL, Butte AJ, Dugger BN, Keiser MJ. Deep learning from multiple experts improves identification of amyloid neuropathologies. Acta Neuropathol Commun 2022; 10:66. [PMID: 35484610 PMCID: PMC9052651 DOI: 10.1186/s40478-022-01365-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Accepted: 04/11/2022] [Indexed: 12/17/2022] Open
Abstract
Pathologists can label pathologies differently, making it challenging to yield consistent assessments in the absence of one ground truth. To address this problem, we present a deep learning (DL) approach that draws on a cohort of experts, weighs each contribution, and is robust to noisy labels. We collected 100,495 annotations on 20,099 candidate amyloid beta neuropathologies (cerebral amyloid angiopathy (CAA), and cored and diffuse plaques) from three institutions, independently annotated by five experts. DL methods trained on a consensus-of-two strategy yielded 12.6-26% improvements by area under the precision recall curve (AUPRC) when compared to those that learned individualized annotations. This strategy surpassed individual-expert models, even when unfairly assessed on benchmarks favoring them. Moreover, ensembling over individual models was robust to hidden random annotators. In blind prospective tests of 52,555 subsequent expert-annotated images, the models labeled pathologies like their human counterparts (consensus model AUPRC = 0.74 cored; 0.69 CAA). This study demonstrates a means to combine multiple ground truths into a common-ground DL model that yields consistent diagnoses informed by multiple and potentially variable expert opinions.
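The consensus-of-two labelling strategy described here can be sketched in a few lines of NumPy. The data layout and helper name are hypothetical, not the authors' code:

```python
import numpy as np

def consensus_labels(annotations, threshold=2):
    """Derive consensus labels from multiple expert annotators.

    annotations: (n_candidates, n_experts) binary array, where 1 means the
    expert marked the candidate as the pathology in question. A candidate
    becomes a positive training label when at least `threshold` experts
    agree (the consensus-of-two strategy uses threshold=2).
    """
    annotations = np.asarray(annotations)
    return (annotations.sum(axis=1) >= threshold).astype(int)

# Five experts voting on four candidate neuropathologies:
votes = [[1, 1, 0, 0, 0],   # two experts agree   -> positive
         [1, 0, 0, 0, 0],   # a single annotator  -> negative
         [1, 1, 1, 1, 1],   # unanimous           -> positive
         [0, 0, 0, 0, 0]]   # nobody              -> negative
print(consensus_labels(votes).tolist())   # -> [1, 0, 1, 0]
```

Raising `threshold` trades recall for precision in the derived ground truth, which is the knob the consensus strategy exposes.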
Collapse
Affiliation(s)
- Daniel R. Wong
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA 94158 USA; Institute for Neurodegenerative Diseases, University of California, San Francisco, CA 94158 USA; Department of Pharmaceutical Chemistry, University of California, San Francisco, CA 94158 USA; Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, CA 94158 USA; Department of Pediatrics, University of California, San Francisco, CA 94158 USA
| | - Ziqi Tang
- Institute for Neurodegenerative Diseases, University of California, San Francisco, CA 94158 USA; Department of Pharmaceutical Chemistry, University of California, San Francisco, CA 94158 USA; Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, CA 94158 USA
| | - Nicholas C. Mew
- Institute for Neurodegenerative Diseases, University of California, San Francisco, CA 94158 USA; Department of Pharmaceutical Chemistry, University of California, San Francisco, CA 94158 USA; Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, CA 94158 USA
| | - Sakshi Das
- Department of Pathology and Laboratory Medicine, School of Medicine, University of California, Davis, Sacramento, CA 95817 USA
| | - Justin Athey
- Department of Pathology and Laboratory Medicine, School of Medicine, University of California, Davis, Sacramento, CA 95817 USA
| | - Kirsty E. McAleese
- Translation and Clinical Research Institute, Newcastle University, Newcastle, UK
| | - Julia K. Kofler
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA 15260 USA
| | - Margaret E. Flanagan
- Department of Pathology, Northwestern University, Evanston, IL 60208 USA; Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Northwestern Medicine, Chicago, IL 60611 USA
| | - Ewa Borys
- Department of Pathology, Loyola University Medical Center, Maywood, IL 60153 USA
| | - Charles L. White
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX 75390 USA
| | - Atul J. Butte
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA 94158 USA; Department of Pediatrics, University of California, San Francisco, CA 94158 USA; Center for Data-Driven Insights and Innovation, University of California, Office of the President, Oakland, CA 94607 USA
| | - Brittany N. Dugger
- Department of Pathology and Laboratory Medicine, School of Medicine, University of California, Davis, Sacramento, CA 95817 USA
| | - Michael J. Keiser
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA 94158 USA; Institute for Neurodegenerative Diseases, University of California, San Francisco, CA 94158 USA; Department of Pharmaceutical Chemistry, University of California, San Francisco, CA 94158 USA; Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, CA 94158 USA
| |
Collapse
|
37
|
Hu H, Qiao S, Hao Y, Bai Y, Cheng R, Zhang W, Zhang G. Breast cancer histopathological images recognition based on two-stage nuclei segmentation strategy. PLoS One 2022; 17:e0266973. [PMID: 35482728 PMCID: PMC9049370 DOI: 10.1371/journal.pone.0266973] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Accepted: 03/30/2022] [Indexed: 11/19/2022] Open
Abstract
Pathological examination is the gold standard for breast cancer diagnosis. The recognition of histopathological images of breast cancer has attracted considerable attention in the field of medical image processing. In this paper, based on the Bioimaging 2015 dataset, a two-stage nuclei segmentation strategy, namely watershed segmentation applied to histopathological images after stain separation, is proposed to classify the images as carcinoma or non-carcinoma. Firstly, stain separation is performed on the breast cancer histopathological images. Then marker-based watershed segmentation is applied to the stain-separated images to segment the nuclei. Next, the completed local binary pattern is used to extract texture features from the nuclei regions (images after nuclei segmentation), and color features are extracted from the stain-separated images using the color auto-correlation method. Finally, the two kinds of features are fused and a support vector machine is used for carcinoma versus non-carcinoma recognition. The experimental results show that the proposed two-stage nuclei segmentation strategy has significant advantages in recognizing carcinoma and non-carcinoma in breast cancer histopathological images, reaching a recognition accuracy of 91.67%. The proposed method is also applied to the ICIAR 2018 dataset for automatic recognition of carcinoma and non-carcinoma, where the recognition accuracy reaches 92.50%.
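The first stage of such pipelines, stain separation, is commonly done by colour deconvolution under the Beer-Lambert law. A minimal NumPy sketch, using the widely cited Ruifrok-Johnston reference stain vectors as an assumption (the paper's exact stain matrix may differ):

```python
import numpy as np

# Reference H&E optical-density stain vectors (after Ruifrok & Johnston);
# production pipelines often estimate these per slide instead.
STAINS = np.array([
    [0.65, 0.70, 0.29],   # haematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # residual channel completing the basis
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

def separate_stains(rgb):
    """Convert an RGB image (H, W, 3), values in 0..255, to stain concentrations.

    Beer-Lambert law: optical density OD = -log10(I / I0). Concentrations c
    satisfy OD = c @ STAINS, so c = OD @ inv(STAINS).
    """
    od = -np.log10((np.asarray(rgb, dtype=float) + 1.0) / 256.0)  # avoid log(0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc.reshape(np.shape(rgb))

# Synthetic pixel carrying exactly one unit of haematoxylin and no eosin:
pixel = (256.0 * 10.0 ** (-STAINS[0]) - 1.0).reshape(1, 1, 3)
conc = separate_stains(pixel)[0, 0]
print(np.round(conc, 3))   # haematoxylin channel ~1, the other two ~0
```

The haematoxylin concentration map from this step is what marker-based watershed segmentation would then operate on to delineate nuclei.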
Collapse
Affiliation(s)
- Hongping Hu
- School of Science, North University of China, Taiyuan, China
| | - Shichang Qiao
- School of Science, North University of China, Taiyuan, China
| | - Yan Hao
- School of Information and Communication Engineering, North University of China, Taiyuan, China
| | - Yanping Bai
- School of Science, North University of China, Taiyuan, China
| | - Rong Cheng
- School of Science, North University of China, Taiyuan, China
| | - Wendong Zhang
- School of Instrument and Electronics, State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
| | - Guojun Zhang
- School of Instrument and Electronics, State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
| |
Collapse
|
38
|
Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111 PMCID: PMC9068546 DOI: 10.21037/gs-22-11] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/04/2022] [Indexed: 01/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting the prognosis. However, traditional microscopic examination of tissue sections is time consuming and labor intensive, with unavoidable subjective variation. Deep learning (DL) can evaluate and extract the most important information from images with less need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of, and challenges to, the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by title, abstract, and, where necessary, full text to determine their true relevance. References from the retrieved articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has achieved impressive results in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still at the experimental stage; improving their economic efficiency and clinical adaptability remains a focus for further research.
CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine practice, but further studies are needed to realize the digitization and automation of clinical pathology.
Collapse
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
| | - Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
| | - Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
| |
Collapse
|
39
|
Stålhammar G, Yeung A, Mendoza P, Dubovy SR, William Harbour J, Grossniklaus HE. Gain of Chromosome 6p Correlates with Severe Anaplasia, Cellular Hyperchromasia, and Extraocular Spread of Retinoblastoma. OPHTHALMOLOGY SCIENCE 2022; 2:100089. [PMID: 36246172 PMCID: PMC9560556 DOI: 10.1016/j.xops.2021.100089] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 11/03/2021] [Accepted: 12/03/2021] [Indexed: 06/16/2023]
Abstract
PURPOSE Gain of chromosome 6p has been associated with poor ocular survival in retinoblastoma and histopathologic grading of anaplasia with increased risk of metastatic spread and death. This study examined the correlation between these factors and other chromosomal abnormalities as well as results of whole genome sequencing, digital morphometry, and progression-free survival. DESIGN Retrospective cohort study from 2 United States tertiary referral centers. PARTICIPANTS Forty-two children who had undergone enucleation for retinoblastoma from January 2000 through December 2017. METHODS Status of chromosomes 6p, 1q, 9q, and 16q was evaluated with fluorescence in situ hybridization, the degree of anaplasia and presence of histologic high-risk features were assessed by ocular pathologists, digital morphometry was performed on scanned tumor slides, and whole genome sequencing was performed on a subset of tumors. Progression-free survival was defined as absence of distant or local metastases or tumor growth beyond the cut end of the optic nerve. MAIN OUTCOME MEASURES Correlation between each of chromosomal abnormalities, anaplasia, morphometry and sequencing results, and survival. RESULTS Forty-one of 42 included patients underwent primary enucleation and 1 was treated first with intra-arterial chemotherapy. Seven tumors showed mild anaplasia, 19 showed moderate anaplasia, and 16 showed severe anaplasia. All tumors had gain of 1q, 18 tumors had gain of 6p, 6 tumors had gain of 9q, and 36 tumors had loss of 16q. Tumors with severe anaplasia were significantly more likely to harbor 6p gains than tumors with nonsevere anaplasia (P < 0.001). Further, the hematoxylin staining intensity was significantly greater and that of eosin staining significantly lower in tumors with severe anaplasia (P < 0.05). 
Neither severe anaplasia (P = 0.10) nor gain of 6p (P = 0.21) correlated with histologic high-risk features, and severe anaplasia did not correlate with RB1, CREBBP, NSD1, or BCOR mutations in a subset of 14 tumors (P > 0.5). Patients with gain of 6p showed significantly shorter progression-free survival (P = 0.03, Wilcoxon test). CONCLUSIONS Gain of chromosome 6p emerges as a strong prognostic biomarker in retinoblastoma, as it correlates with severe anaplasia, quantifiable changes in tumor cell staining characteristics, and extraocular spread.
Collapse
Affiliation(s)
- Gustav Stålhammar
- Ocular Pathology Service, St. Erik Eye Hospital, Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Aaron Yeung
- Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Departments of Ophthalmology and Pathology, Emory University School of Medicine, Atlanta, Georgia
| | - Pia Mendoza
- Departments of Ophthalmology and Pathology, Emory University School of Medicine, Atlanta, Georgia
| | - Sander R. Dubovy
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
| | - J. William Harbour
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, Florida
- Interdisciplinary Stem Cell Institute, University of Miami Miller School of Medicine, Miami, Florida
| | - Hans E. Grossniklaus
- Departments of Ophthalmology and Pathology, Emory University School of Medicine, Atlanta, Georgia
| |
Collapse
|
40
|
Wahab N, Miligy IM, Dodd K, Sahota H, Toss M, Lu W, Jahanifar M, Bilal M, Graham S, Park Y, Hadjigeorghiou G, Bhalerao A, Lashen AG, Ibrahim AY, Katayama A, Ebili HO, Parkin M, Sorell T, Raza SEA, Hero E, Eldaly H, Tsang YW, Gopalakrishnan K, Snead D, Rakha E, Rajpoot N, Minhas F. Semantic annotation for computational pathology: multidisciplinary experience and best practice recommendations. JOURNAL OF PATHOLOGY CLINICAL RESEARCH 2022; 8:116-128. [PMID: 35014198 PMCID: PMC8822374 DOI: 10.1002/cjp2.256] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 11/25/2021] [Accepted: 12/10/2021] [Indexed: 02/06/2023]
Abstract
Recent advances in whole‐slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence‐based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilise information embedded in pathology WSIs beyond what can be obtained through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue, and cellular levels are required. The annotation of important visual constructs in pathology images is an important component of CPath projects. Improper annotations can result in algorithms that are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well‐defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large‐scale annotation exercise involving a multidisciplinary team of pathologists, ML experts, and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real‐world case study along with examples of different types of annotations, diagnostic algorithm, annotation data dictionary, and annotation constructs. The analyses reported in this work highlight best practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.
Collapse
Affiliation(s)
- Noorul Wahab
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | - Islam M Miligy
- Pathology, University of Nottingham, Nottingham, UK; Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Kom, Egypt
| | - Katherine Dodd
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
| | - Harvir Sahota
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
| | - Michael Toss
- Pathology, University of Nottingham, Nottingham, UK
| | - Wenqi Lu
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | | | - Mohsin Bilal
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | - Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | - Young Park
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | | | - Abhir Bhalerao
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | | | | | - Ayaka Katayama
- Graduate School of Medicine, Gunma University, Maebashi, Japan
| | | | | | - Tom Sorell
- Department of Politics and International Studies, University of Warwick, Coventry, UK
| | | | - Emily Hero
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK; Leicester Royal Infirmary, Histopathology, University Hospitals Leicester, Leicester, UK
| | - Hesham Eldaly
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
| | - Yee Wah Tsang
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
| | | | - David Snead
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
| | - Emad Rakha
- Pathology, University of Nottingham, Nottingham, UK
| | - Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| | - Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
| |
Collapse
|
41
|
El Agouri H, Azizi M, El Attar H, El Khannoussi M, Ibrahimi A, Kabbaj R, Kadiri H, BekarSabein S, EchCharif S, Mounjid C, El Khannoussi B. Assessment of deep learning algorithms to predict histopathological diagnosis of breast cancer: first Moroccan prospective study on a private dataset. BMC Res Notes 2022; 15:66. [PMID: 35183227 PMCID: PMC8857730 DOI: 10.1186/s13104-022-05936-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Accepted: 01/29/2022] [Indexed: 11/18/2022] Open
Abstract
Objective Breast cancer is a critical public health issue and a leading cause of cancer-related death among women worldwide. Early diagnosis and detection can effectively increase the chances of survival. For this reason, the diagnosis and classification of breast cancer using deep learning algorithms have attracted much attention. Our study therefore aimed to design a computational approach based on deep convolutional neural networks for efficient classification of breast cancer histopathological images, using a dataset of our own creation. We collected 328 digital slides in total, from 116 surgical breast specimens diagnosed with invasive breast carcinoma of non-specific type and referred to the histopathology department of the National Institute of Oncology in Rabat, Morocco. We used two deep neural network architectures to classify each image into one of three categories: normal tissue-benign lesions, in situ carcinoma, or invasive carcinoma. Results Both ResNet50 and Xception models achieved comparable results, with a small advantage for the Xception-extracted features. We report a high overall classification accuracy (88%) and a high sensitivity (95%) for the detection of carcinoma cases, which is important in the diagnostic pathology workflow to assist pathologists in diagnosing breast cancer with precision. The results of the present study show that the designed classification model generalizes well when predicting a breast cancer diagnosis, in spite of the limited size of the data. This approach compares favorably with other common methods for the automated analysis of breast cancer images reported in the literature.
Collapse
Affiliation(s)
- H El Agouri
- Pathology Department, Oncology National Institute, Faculty of Medicine and Pharmacy, Mohammed V University, 10100, Rabat, Morocco.
| | - M Azizi
- Datapathology, 20000, Casablanca, Morocco
| | - H El Attar
- Anatomic Pathology Laboratory Ennassr, 24000, El Jadida, Morocco
| | | | - A Ibrahimi
- Medical Biotechnology Laboratory (MedBiotech), Bioinova Research Center, Rabat Medical & Pharmacy School, Mohammed Vth University in Rabat, 10100, Rabat, Morocco
| | - R Kabbaj
- Pathology Department, Oncology National Institute, Faculty of Medicine and Pharmacy, Mohammed V University, 10100, Rabat, Morocco
| | - H Kadiri
- Pathology Department, Oncology National Institute, Faculty of Medicine and Pharmacy, Mohammed V University, 10100, Rabat, Morocco
| | - S BekarSabein
- Pathology Department, Oncology National Institute, Faculty of Medicine and Pharmacy, Mohammed V University, 10100, Rabat, Morocco
| | - S EchCharif
- Pathology Department, Oncology National Institute, Faculty of Medicine and Pharmacy, Mohammed V University, 10100, Rabat, Morocco
| | - C Mounjid
- Pathology Department, Oncology National Institute, Faculty of Sciences, Mohammed V University, 10100, Rabat, Morocco
| | - B El Khannoussi
- Pathology Department, Oncology National Institute, Faculty of Medicine and Pharmacy, Mohammed V University, 10100, Rabat, Morocco
| |
Collapse
|
42
|
Hu D, Wang C, Zheng S, Cui X. Investigating the genealogy of the literature on digital pathology: a two-dimensional bibliometric approach. Scientometrics 2022. [DOI: 10.1007/s11192-021-04224-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
43
|
Imparato G, Urciuolo F, Netti PA. Organ on Chip Technology to Model Cancer Growth and Metastasis. Bioengineering (Basel) 2022; 9:28. [PMID: 35049737 PMCID: PMC8772984 DOI: 10.3390/bioengineering9010028] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 01/05/2022] [Accepted: 01/10/2022] [Indexed: 12/18/2022] Open
Abstract
Organ on chip (OOC) has emerged as a major technological breakthrough and distinct model system revolutionizing biomedical research and drug discovery by recapitulating the crucial structural and functional complexity of human organs in vitro. OOCs are rapidly emerging as powerful tools for oncology research. Indeed, cancer on chip (COC) can ideally reproduce certain key aspects of the tumor microenvironment (TME), such as biochemical gradients and niche factors, dynamic cell-cell and cell-matrix interactions, and complex tissue structures composed of tumor and stromal cells. Here, we review the state of the art in COC models, focusing on the microphysiological systems that host multicellular 3D tissue engineering models and can help elucidate the complex biology of the TME and of cancer growth and progression. Finally, we present some examples of microengineered tumor models integrated with multi-organ microdevices to study disease progression in different tissues.
Collapse
Affiliation(s)
- Giorgia Imparato
- Center for Advanced Biomaterials for HealthCare@CRIB, Istituto Italiano di Tecnologia, Largo Barsanti e Matteucci 53, 80125 Naples, Italy; (F.U.); (P.A.N.)
| | - Francesco Urciuolo
- Center for Advanced Biomaterials for HealthCare@CRIB, Istituto Italiano di Tecnologia, Largo Barsanti e Matteucci 53, 80125 Naples, Italy; (F.U.); (P.A.N.)
- Department of Chemical, Materials and Industrial Production (DICMAPI), Interdisciplinary Research Centre on Biomaterials (CRIB), University of Naples Federico II, P.leTecchio 80, 80125 Naples, Italy
| | - Paolo Antonio Netti
- Center for Advanced Biomaterials for HealthCare@CRIB, Istituto Italiano di Tecnologia, Largo Barsanti e Matteucci 53, 80125 Naples, Italy; (F.U.); (P.A.N.)
- Department of Chemical, Materials and Industrial Production (DICMAPI), Interdisciplinary Research Centre on Biomaterials (CRIB), University of Naples Federico II, P.leTecchio 80, 80125 Naples, Italy
| |
Collapse
|
44
|
Wang S, Hou Y, Li X, Meng X, Zhang Y, Wang X. Practical Implementation of Artificial Intelligence-Based Deep Learning and Cloud Computing on the Application of Traditional Medicine and Western Medicine in the Diagnosis and Treatment of Rheumatoid Arthritis. Front Pharmacol 2022; 12:765435. [PMID: 35002704 PMCID: PMC8733656 DOI: 10.3389/fphar.2021.765435] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 12/09/2021] [Indexed: 12/23/2022] Open
Abstract
Rheumatoid arthritis (RA), an autoimmune disease of unknown etiology, is a serious threat to the health of middle-aged and elderly people. Although western medicine and traditional medicine, such as traditional Chinese medicine, Tibetan medicine, and other ethnic medicine, have shown certain advantages in the diagnosis and treatment of RA, practical shortcomings remain, such as delayed diagnosis, improper treatment schemes, and unclear drug mechanisms. At present, applications of artificial intelligence (AI)-based deep learning and cloud computing have attracted wide attention in the medical and health field, especially for screening potential active ingredients, targets, and action pathways of single drugs or prescriptions in traditional medicine and for optimizing disease diagnosis and treatment models. Integrated information and analysis of RA patients based on AI and medical big data will unquestionably benefit more RA patients worldwide. In this review, we elaborate the current status and prospects of AI-assisted deep learning and cloud computing in both western medicine and traditional medicine for the diagnosis and treatment of RA at its different stages. It can be predicted that, with the help of AI, more pharmacological mechanisms of effective ethnic drugs against RA will be elucidated and more accurate solutions will be provided for the treatment and diagnosis of RA in the future.
Collapse
Affiliation(s)
- Shaohui Wang
- School of Ethnic Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Ya Hou
- School of Pharmacy, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Xuanhao Li
- Chengdu Second People's Hospital, Chengdu, China
| | - Xianli Meng
- State Key Laboratory of Southwestern Chinese Medicine Resources, Innovative Institute of Chinese Medicine and Pharmacy, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Yi Zhang
- School of Ethnic Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Xiaobo Wang
- State Key Laboratory of Southwestern Chinese Medicine Resources, Innovative Institute of Chinese Medicine and Pharmacy, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| |
Collapse
|
45
|
García-Armenta E, Gutiérrez-López GF. Fractal Microstructure of Foods. FOOD ENGINEERING REVIEWS 2022. [DOI: 10.1007/s12393-021-09302-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
46
|
Zhong Y, Piao Y, Zhang G. Dilated and soft attention-guided convolutional neural network for breast cancer histology images classification. Microsc Res Tech 2021; 85:1248-1257. [PMID: 34859543 DOI: 10.1002/jemt.23991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 10/03/2021] [Accepted: 10/18/2021] [Indexed: 01/22/2023]
Abstract
Breast cancer is one of the most common types of cancer in women, and histopathological imaging is considered the gold standard for its diagnosis. However, the great complexity of histopathological images and the considerable workload make this work extremely time-consuming, and the results may be affected by the subjectivity of the pathologist. Therefore, developing an accurate, automated method for analysing histopathological images is critical for this field. In this article, we propose a deep learning method guided by the attention mechanism for fast and effective classification of haematoxylin and eosin-stained breast biopsy images. First, the method builds on DenseNet and makes full use of feature-map information. Second, we introduce dilated convolution to produce a larger receptive field. Finally, spatial attention and channel attention guide the extraction of the most useful visual features. Using fivefold cross-validation, the best model obtained an accuracy of 96.47% on the BACH2018 dataset. We also evaluated our method on other datasets, and the experimental results demonstrated that our model performs reliably. This study indicates that our soft attention-guided deep learning classifier for breast cancer histopathological images achieves significantly better results than the latest methods. It has great potential as an effective tool for the automatic evaluation of digital histopathological microscopic images in computer-aided diagnosis.
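The channel-attention idea mentioned in this abstract, recalibrating each feature channel by its global context, can be illustrated with a squeeze-and-excitation style sketch in NumPy. The weights here are random placeholders standing in for learned parameters, and this is a generic illustration rather than the paper's architecture:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feature_map: (C, H, W). Squeeze with global average pooling, excite
    through a two-layer bottleneck MLP, then rescale every channel by its
    sigmoid gate so informative channels are emphasised.
    """
    squeeze = feature_map.mean(axis=(1, 2))            # (C,) global context
    hidden = np.maximum(squeeze @ w1, 0.0)             # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # sigmoid gates, (C,)
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(0)
fmap = rng.random((8, 4, 4))              # 8 channels of 4x4 features
w1 = rng.normal(size=(8, 2))              # bottleneck: 8 -> 2 channels
w2 = rng.normal(size=(2, 8))              # expansion:  2 -> 8 channels
out = channel_attention(fmap, w1, w2)
print(out.shape)                           # -> (8, 4, 4)
```

Spatial attention works analogously but pools across channels and gates each spatial location instead of each channel.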
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Guohui Zhang
- Pneumoconiosis Diagnosis and Treatment Center, Occupational Preventive and Treatment Hospital in Jilin Province, Changchun, China
47
Song J, Zheng Y, Xu C, Zou Z, Ding G, Huang W. Improving the classification ability of network utilizing fusion technique in contrast-enhanced spectral mammography. Med Phys 2021; 49:966-977. [PMID: 34860417] [DOI: 10.1002/mp.15390]
Abstract
PURPOSE Contrast-enhanced spectral mammography (CESM) is an effective tool for diagnosing breast cancer, benefiting from its multiple image types. However, few deep learning-based breast cancer classification methods exploit this feature. To combine multiple features of CESM and thus aid physicians in making accurate diagnoses, we propose a hybrid approach that takes advantage of both fusion and classification models. METHODS We evaluated the proposed method on a CESM dataset obtained from 95 patients aged 21 to 74 years, with a total of 760 images. The framework consists of two main parts: a generative adversarial network-based image fusion module and a Res2Net-based classification module. The fusion module generates a fused image that combines the characteristics of dual-energy subtracted (DES) and low-energy (LE) images, and the classification module classifies the fused image as benign or malignant. RESULTS The fused images contained complementary information from both image types (DES and LE), and the classification model achieved accurate results. In terms of quantitative indicators, the entropy of the fused images was 2.63, and on the test dataset the classification model achieved an accuracy of 94.784%, precision of 95.016%, recall of 95.912%, specificity of 0.945, F1 score of 0.955, and area under the curve of 0.947. CONCLUSIONS We conducted extensive comparative experiments and analyses on our in-house dataset, demonstrating that our method produces promising results in the fusion of CESM images and is more accurate than state-of-the-art methods in classifying fused CESM images.
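The entropy figure reported for the fused images is the standard Shannon entropy of the grey-level histogram, a common measure of how much information a fused image carries. A minimal sketch of that metric (illustrative, not the authors' pipeline; the sample values are made up):

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy in bits of a flat sequence of grey-level values."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A toy "image" using four grey levels uniformly has entropy log2(4) = 2 bits;
# a constant image has entropy 0. Higher entropy suggests the fusion retained
# more detail from the DES and LE inputs.
print(image_entropy([0, 64, 128, 192] * 16))  # -> 2.0
print(image_entropy([128] * 64))              # -> 0.0
```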
Affiliation(s)
- Jingqi Song
- School of Information Science and Engineering, Shandong Normal University, Jinan, China; Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Shandong Provincial Key Laboratory for Novel Distributed Computer Software Technology, Institute of Biomedical Sciences, Shandong Normal University, Jinan, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, China; Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Shandong Provincial Key Laboratory for Novel Distributed Computer Software Technology, Institute of Biomedical Sciences, Shandong Normal University, Jinan, China
- Chenxi Xu
- School of Information Science and Engineering, Shandong Normal University, Jinan, China; Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Shandong Provincial Key Laboratory for Novel Distributed Computer Software Technology, Institute of Biomedical Sciences, Shandong Normal University, Jinan, China
- Zhenxing Zou
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
- Guocheng Ding
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, China; Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Shandong Provincial Key Laboratory for Novel Distributed Computer Software Technology, Institute of Biomedical Sciences, Shandong Normal University, Jinan, China
48
Liew XY, Hameed N, Clos J. An investigation of XGBoost-based algorithm for breast cancer classification. Machine Learning with Applications 2021. [DOI: 10.1016/j.mlwa.2021.100154]
49
Cai YW, Dong FF, Shi YH, Lu LY, Chen C, Lin P, Xue YS, Chen JH, Chen SY, Luo XB. Deep learning driven colorectal lesion detection in gastrointestinal endoscopic and pathological imaging. World J Clin Cases 2021; 9:9376-9385. [PMID: 34877273] [PMCID: PMC8610875] [DOI: 10.12998/wjcc.v9.i31.9376]
Abstract
Colorectal cancer has the second highest incidence among malignant tumors and is the fourth leading cause of cancer deaths in China. Early diagnosis and treatment of colorectal cancer improve the 5-year survival rate and reduce medical costs. Current diagnostic methods for early colorectal cancer include stool and blood tests, endoscopy, and computer-aided endoscopy. In this paper, research on deep learning-based image analysis and prediction of colorectal cancer lesions is reviewed, with the goal of providing a reference for the early diagnosis of colorectal cancer lesions by combining computer technology, 3D modeling, 5G remote technology, endoscopic robot technology, and surgical navigation technology. The findings supplement existing research and provide insights to improve the cure rate and reduce the mortality of colorectal cancer.
Affiliation(s)
- Yu-Wen Cai
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Fang-Fen Dong
- Department of Medical Technology and Engineering, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Yu-Heng Shi
- Computer Science and Engineering College, University of Alberta, Edmonton T6G 2R3, Canada
- Li-Yuan Lu
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Chen Chen
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Ping Lin
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Yu-Shan Xue
- Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
- Jian-Hua Chen
- Endoscopy Center, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou 350014, Fujian Province, China
- Su-Yu Chen
- Endoscopy Center, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou 350014, Fujian Province, China
- Xiong-Biao Luo
- Department of Computer Science, Xiamen University, Xiamen 361005, Fujian, China
50
Xiao J, Luo J, Ly-Mapes O, Wu TT, Dye T, Al Jallad N, Hao P, Ruan J, Bullock S, Fiscella K. Assessing a Smartphone App (AICaries) That Uses Artificial Intelligence to Detect Dental Caries in Children and Provides Interactive Oral Health Education: Protocol for a Design and Usability Testing Study. JMIR Res Protoc 2021; 10:e32921. [PMID: 34529582] [PMCID: PMC8571694] [DOI: 10.2196/32921]
Abstract
BACKGROUND Early childhood caries (ECC) is the most common chronic childhood disease, with nearly 1.8 billion new cases per year worldwide. ECC afflicts approximately 55% of low-income and minority US preschool children, with harmful short- and long-term effects on health and quality of life. Clinical evidence shows that caries is reversible if detected and addressed in its early stages. However, many low-income US children have poor access to pediatric dental services, and in this underserved group dental caries is often diagnosed at a late stage, when extensive restorative treatment is needed. With more than 85% of lower-income Americans owning a smartphone, mobile health tools such as smartphone apps hold promise for patient-driven early detection and risk control of ECC. OBJECTIVE This study aims to use a community-based participatory research strategy to refine and test the usability of an artificial intelligence-powered smartphone app, AICaries, to be used by children's parents/caregivers to detect dental caries in their children. METHODS Our previous work produced the AICaries prototype, which offers artificial intelligence-powered caries detection using photos of children's teeth taken with the parents' smartphones, interactive caries risk assessment, and personalized education on reducing children's ECC risk. This study will use a two-step qualitative design to assess the feedback and usability of the app components and app flow, and whether parents can take photos of their children's teeth on their own. In step 1, we will conduct individual usability tests among 10 pairs of end users (parents with young children) to facilitate app module modification and fine-tuning, using think-aloud and instant data analysis strategies. In step 2, we will conduct unmoderated field testing among 32 parent-child pairs to assess the feasibility and acceptability of AICaries, including the number and quality of teeth images taken by parents and the parents' satisfaction. RESULTS The study is funded by the National Institute of Dental and Craniofacial Research, United States. It received institutional review board approval and was launched in August 2021. Data collection and analysis are expected to conclude by March 2022 and June 2022, respectively. CONCLUSIONS With AICaries, parents can use their regular smartphones to take photos of their children's teeth and detect ECC at an early and reversible stage, so that they can actively seek treatment for their children. Parents can also obtain essential knowledge on reducing their children's caries risk. Data from this study will support a future clinical trial that evaluates the real-world impact of this smartphone app on early detection and prevention of ECC among low-income children. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/32921.
Affiliation(s)
- Jin Xiao
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Jiebo Luo
- Computer Science, University of Rochester, Rochester, NY, United States
- Oriana Ly-Mapes
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Tong Tong Wu
- Department of Biostatistics and Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Timothy Dye
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, NY, United States
- Nisreen Al Jallad
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Peirong Hao
- Computer Science, University of Rochester, Rochester, NY, United States
- Jinlong Ruan
- Computer Science, University of Rochester, Rochester, NY, United States
- Kevin Fiscella
- Department of Family Medicine, University of Rochester Medical Center, Rochester, NY, United States