1
Umamaheswari T, Babu YMM. ViT-MAENB7: An innovative breast cancer diagnosis model from 3D mammograms using advanced segmentation and classification process. Comput Methods Programs Biomed 2024; 257:108373. [PMID: 39276667] [DOI: 10.1016/j.cmpb.2024.108373]
Abstract
Breast tumors are an important health concern, and breast cancer is rapidly becoming the leading cause of cancer mortality among women globally. Early detection allows patients to obtain appropriate therapy, increasing their probability of survival, and the adoption of 3-Dimensional (3D) mammography for identifying breast abnormalities has dramatically reduced the number of deaths. Nevertheless, accurate detection and classification of breast lumps in 3D mammography remain difficult owing to factors such as inadequate contrast and normal fluctuations in tissue density, and several Computer-Aided Diagnosis (CAD) solutions are under development to help radiologists classify breast abnormalities accurately. In this paper, a breast cancer diagnosis model is implemented for 3D mammogram images gathered from the internet. The images are first preprocessed using a median filter and an image scaling method: the median filter smooths out irregularities and suppresses noise and artifacts that may interfere with the detection of abnormalities, while image scaling adjusts the size and resolution of the images for analysis. The preprocessed image is then segmented using an Adaptive Thresholding with Region Growing Fusion Model (AT-RGFM), which combines the advantages of thresholding and region-growing techniques to identify and delineate specific structures within the image. The Modified Garter Snake Optimization Algorithm (MGSOA) is used to optimize the segmentation parameters, enabling the model to differentiate between different parts of the image more accurately. Finally, the segmented image is fed into the detection phase, where tumor detection is performed by the Vision Transformer-based Multiscale Adaptive EfficientNetB7 (ViT-MAENB7) model; its multiscale adaptive design analyzes the image at various levels of detail, and the MGSOA algorithm is again used to optimize the model's parameters.
The proposed diagnosis model was compared with conventional cancer diagnosis models and showed high accuracy: the developed MGSOA-ViT-MAENB7 reaches 96.6 %, whereas RNN, LSTM, EffNet, and ViT-MAENet achieve 90.31 %, 92.79 %, 94.46 %, and 94.75 %, respectively. The model's ability to analyze images at multiple scales, combined with MGSOA-based optimization, results in an accurate and efficient system for detecting tumors in 3D mammograms and can help healthcare professionals tailor treatment plans to individual patients.
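The AT-RGFM and MGSOA implementations are not public; as a rough illustration of the preprocessing and segmentation ideas named above (median filtering plus rescaling, then adaptive thresholding fused with region growing), a minimal Python/OpenCV sketch might look like the following. All function names, kernel sizes, and tolerances are illustrative assumptions, not the authors' code.

    import cv2
    import numpy as np

    def preprocess(img, size=(512, 512)):
        # Median filtering suppresses impulse noise; rescaling standardizes resolution.
        img = cv2.medianBlur(img, 3)
        return cv2.resize(img, size, interpolation=cv2.INTER_AREA)

    def segment(img, tol=10):
        # Adaptive thresholding proposes candidate bright regions.
        mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 51, -5)
        # Region growing from the brightest candidate pixel.
        seed = np.unravel_index(np.argmax(np.where(mask > 0, img, 0)), img.shape)
        grown = np.zeros(img.shape, dtype=np.uint8)
        stack = [seed]
        while stack:
            y, x = stack.pop()
            if grown[y, x]:
                continue
            grown[y, x] = 255
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not grown[ny, nx]
                        and abs(int(img[ny, nx]) - int(img[y, x])) < tol):
                    stack.append((ny, nx))
        # "Fusion": keep thresholded pixels that region growing confirms.
        return cv2.bitwise_and(mask, grown)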
Affiliation(s)
- Y Murali Mohan Babu
- N.B.K.R. Institute of Science and Technology, Vidhyanagar, Andhra Pradesh, India.
2
Königer L, Malkmus C, Mahdy D, Däullary T, Götz S, Schwarz T, Gensler M, Pallmann N, Cheufou D, Rosenwald A, Möllmann M, Groneberg D, Popp C, Groeber-Becker F, Steinke M, Hansmann J. ReBiA-Robotic Enabled Biological Automation: 3D Epithelial Tissue Production. Adv Sci (Weinh) 2024; 11:e2406608. [PMID: 39324843] [PMCID: PMC11615785] [DOI: 10.1002/advs.202406608]
Abstract
The Food and Drug Administration's recent decision to eliminate mandatory animal testing for drug approval marks a significant shift to alternative methods. Similarly, the European Parliament is advocating for a faster transition, reflecting public preference for animal-free research practices. In vitro tissue models are increasingly recognized as valuable tools for regulatory assessments before clinical trials, in line with the 3R principles (Replace, Reduce, Refine). Despite their potential, barriers such as the need for standardization, availability, and cost hinder their widespread adoption. To address these challenges, the Robotic Enabled Biological Automation (ReBiA) system is developed. This system uses a dual-arm robot capable of standardizing laboratory processes within a closed automated environment, translating manual processes into automated ones. This reduces the need for process-specific developments, making in vitro tissue models more consistent and cost-effective. ReBiA's performance is demonstrated through producing human reconstructed epidermis, human airway epithelial models, and human intestinal organoids. Analyses confirm that these models match the morphology and protein expression of manually prepared and native tissues, with similar cell viability. These successes highlight ReBiA's potential to lower barriers to broader adoption of in vitro tissue models, supporting a shift toward more ethical and advanced research methods.
Affiliation(s)
- Lukas Königer
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Christoph Malkmus
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Institute of Medical Engineering Schweinfurt, Technical University of Applied Sciences Würzburg-Schweinfurt, 97421 Schweinfurt, Germany
- Dalia Mahdy
- Chair of Tissue Engineering and Regenerative Medicine, University Hospital Würzburg, 97070 Würzburg, Germany
- Thomas Däullary
- Chair of Tissue Engineering and Regenerative Medicine, University Hospital Würzburg, 97070 Würzburg, Germany
- Chair of Cellular Immunotherapy, University Hospital Würzburg, 97080 Würzburg, Germany
- Susanna Götz
- Faculty of Design Würzburg, Technical University of Applied Sciences Würzburg-Schweinfurt, 97070 Würzburg, Germany
- Thomas Schwarz
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Marius Gensler
- Chair of Tissue Engineering and Regenerative Medicine, University Hospital Würzburg, 97070 Würzburg, Germany
- Niklas Pallmann
- Chair of Tissue Engineering and Regenerative Medicine, University Hospital Würzburg, 97070 Würzburg, Germany
- Danjouma Cheufou
- Department of Thoracic Surgery, Klinikum Würzburg Mitte, 97070 Würzburg, Germany
- Marc Möllmann
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Dieter Groneberg
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Christina Popp
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Florian Groeber-Becker
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Department of Ophthalmology, University Clinic Düsseldorf, 40225 Düsseldorf, Germany
- Maria Steinke
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Department of Oto-Rhino-Laryngology, Plastic, Aesthetic and Reconstructive Head and Neck Surgery, University Hospital Würzburg, 97080 Würzburg, Germany
- Jan Hansmann
- Translational Center Regenerative Therapies, Fraunhofer Institute for Silicate Research ISC, 97070 Würzburg, Germany
- Institute of Medical Engineering Schweinfurt, Technical University of Applied Sciences Würzburg-Schweinfurt, 97421 Schweinfurt, Germany
3
Liao L, Aagaard EM. An open codebase for enhancing transparency in deep learning-based breast cancer diagnosis utilizing CBIS-DDSM data. Sci Rep 2024; 14:27318. [PMID: 39516557] [PMCID: PMC11549440] [DOI: 10.1038/s41598-024-78648-0]
Abstract
Accessible mammography datasets and innovative machine learning techniques are at the forefront of computer-aided breast cancer diagnosis. However, the opacity surrounding private datasets, the unclear methodology behind the selection of subset images from publicly available databases for model training and testing, and the frequent incompleteness or inaccessibility of code make it markedly harder to replicate and validate a model's efficacy. These challenges, in turn, raise barriers for subsequent researchers striving to learn from and advance this field. To address these limitations, we provide a pilot codebase covering the entire process from image preprocessing to the model development and evaluation pipeline, utilizing the publicly available Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) mass subset, including both full images and regions of interest (ROIs). We found that increasing the input size can improve the detection accuracy for malignant cases within each set of models. Collectively, our efforts hold promise for accelerating global software development for breast cancer diagnosis by leveraging our codebase and structure while integrating other advancements in the field.
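The released codebase is the authoritative reference; purely as a condensed sketch of the kind of experiment the abstract describes (the same backbone trained at two input sizes on CBIS-DDSM ROI crops), one could write the following in PyTorch. The directory layout, backbone choice, and hyperparameters are assumptions, not the paper's settings.

    import torch
    import torchvision
    from torchvision import transforms

    def make_loader(root, input_size, batch_size=32):
        # Assumes CBIS-DDSM ROI crops exported to class-labelled image folders.
        tf = transforms.Compose([
            transforms.Grayscale(num_output_channels=3),
            transforms.Resize((input_size, input_size)),
            transforms.ToTensor(),
        ])
        ds = torchvision.datasets.ImageFolder(root, transform=tf)
        return torch.utils.data.DataLoader(ds, batch_size=batch_size, shuffle=True)

    def train(input_size, root="cbis_ddsm_rois/", epochs=5):
        model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
        model.fc = torch.nn.Linear(model.fc.in_features, 2)  # benign vs malignant
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in make_loader(root, input_size):
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model

    # The reported input-size effect can then be probed by comparing, e.g.:
    # model_224, model_512 = train(224), train(512)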
Affiliation(s)
- Ling Liao
- Biomedical Deep Learning LLC, St. Louis, MO, USA.
- Computational and Systems Biology, Washington University in St. Louis, St. Louis, MO, USA.
- Eva M Aagaard
- Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA.
4
Lukovikov DA, Kolesnikova TO, Ikrin AN, Prokhorenko NO, Shevlyakov AD, Korotaev AA, Yang L, Bley V, de Abreu MS, Kalueff AV. A novel open-access artificial-intelligence-driven platform for CNS drug discovery utilizing adult zebrafish. J Neurosci Methods 2024; 411:110256. [PMID: 39182516] [DOI: 10.1016/j.jneumeth.2024.110256]
Abstract
BACKGROUND Although zebrafish are increasingly utilized in biomedicine for CNS disease modelling and drug discovery, this generates big data that necessitate objective, precise and reproducible analyses. Artificial intelligence (AI) applications have enabled automated image recognition and video tracking, ensuring more efficient behavioral testing. NEW METHOD Capitalizing on several recently released AI tools, here we present a novel open-access AI-driven platform to analyze tracks of adult zebrafish collected from in vivo neuropharmacological experiments. For this, we trained the AI system to distinguish zebrafish behavioral patterns following systemic treatment with several well-studied psychoactive drugs - nicotine, caffeine and ethanol. RESULTS Experiment 1 showed the ability of the AI system to distinguish nicotine and caffeine with 75 % probability, and ethanol with 88 % probability, at high (81 %) overall accuracy following post-training exposure to these drugs. Experiment 2 further validated our system with additional, previously unexposed compounds (cholinergic arecoline and varenicline, and serotonergic fluoxetine), used as positive and negative controls, respectively. COMPARISON WITH EXISTING METHODS The present study introduces a novel open-access AI-driven approach to analyzing the locomotor activity of adult zebrafish. CONCLUSIONS Taken together, these findings support the value of custom-made AI tools for unlocking the full potential of zebrafish CNS drug research by monitoring, processing and interpreting the results of in vivo experiments.
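The platform itself is the open-access deliverable; as a toy sketch of the underlying idea (summarizing each tracked swim path into locomotor features and training a classifier to separate drug groups), one might write the following with scikit-learn. The feature set, frame rate, and labels are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def track_features(xy, fps=25.0):
        # xy: (T, 2) array of centroid coordinates for one fish.
        step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        speed = step * fps
        return np.array([
            step.sum(),                 # total distance travelled
            speed.mean(), speed.std(),  # activity level and variability
            (speed < 1.0).mean(),       # fraction of near-immobile frames
            xy[:, 1].mean(),            # mean vertical position (bottom-dwelling)
        ])

    def classify(tracks, labels):
        # labels, e.g.: 0 = control, 1 = nicotine, 2 = caffeine, 3 = ethanol
        X = np.stack([track_features(t) for t in tracks])
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        return cross_val_score(clf, X, labels, cv=5).mean()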
Affiliation(s)
- Danil A Lukovikov
- Graduate Program in Bioinformatics and Genomics, Sirius University of Science and Technology, Sochi 354340, Russia; Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia
- Tatiana O Kolesnikova
- Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia
- Aleksey N Ikrin
- Graduate Program in Genetics and Genetic Technologies, Sirius University of Science and Technology, Sochi 354340, Russia; Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia
- Nikita O Prokhorenko
- Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia
- Anton D Shevlyakov
- Graduate Program in Bioinformatics and Genomics, Sirius University of Science and Technology, Sochi 354340, Russia; Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia
- Andrei A Korotaev
- Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia
- Longen Yang
- Department of Biological Sciences, School of Science, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; Suzhou Key Laboratory of Neurobiology and Cell Signaling, School of Science, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China
- Vea Bley
- Department of Biological Sciences, School of Science, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; Biology Program, University of Florida, Gainesville, FL 32610, USA
- Murilo S de Abreu
- Graduate Program in Health Sciences, Federal University of Health Sciences of Porto Alegre, Porto Alegre, Brazil; Western Caspian University, Baku, Azerbaijan.
- Allan V Kalueff
- Neuroscience Department, Sirius University of Science and Technology, Sochi 354340, Russia; Department of Biological Sciences, School of Science, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; Suzhou Key Laboratory of Neurobiology and Cell Signaling, School of Science, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; Institute of Translational Biomedicine, St. Petersburg State University, St. Petersburg 199034, Russia; Institute of Experimental Medicine, Almazov National Medical Research Centre, Ministry of Healthcare of Russian Federation, St. Petersburg 194021, Russia.
5
Xie Z, Hu X, Guo L, Lin W, Liu J, Zhang C, Ge G, Tang Y, Wang W. A lightweight detection algorithm for tooth cracks in optical images. Comput Biol Med 2024; 182:109153. [PMID: 39288557] [DOI: 10.1016/j.compbiomed.2024.109153]
Abstract
OBJECTIVES Cracked tooth syndrome (CTS), one of the major causes of tooth loss, presents with early microcrack symptoms that are difficult to distinguish. This paper investigates the practicality and feasibility of an improved object detection algorithm for automatically detecting cracks in dental optical images. METHODS A total of 286 teeth were obtained from Sun Yat-sen University and Guangdong University of Technology, and simulated cracks were generated using thermal expansion and contraction. Over 3000 images of cracked teeth were collected, including 360 real clinical images. To make the model more lightweight and better suited for deployment on embedded devices, this paper improves the YOLOv8 model for detecting tooth cracks through model pruning and backbone replacement. Additionally, the impact of image enhancement modules and coordinate attention modules on optimizing the model was analyzed. RESULTS Experimental validation shows that, on the tooth-crack detection task, model pruning maintains performance better than replacing the backbone with a lightweight network. This approach reduced parameters and GFLOPs by 16.8 % and 24.3 %, respectively, with minimal impact on performance. These results affirm the effectiveness of the proposed method in identifying and labeling tooth fractures. In addition, the impact of image enhancement modules and coordinate attention mechanisms on YOLOv8's performance in tooth crack detection was found to be minimal. CONCLUSIONS An improved object detection algorithm has been proposed to reduce model parameters. This lightweight model is easier to deploy and holds potential for assisting dentists in identifying cracks on tooth surfaces.
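The paper's pruning scheme is not reproduced here; a minimal sketch of structured L1 channel pruning on convolutional layers, the general family of techniques the abstract refers to, might look like this in PyTorch. The backbone is a stand-in for YOLOv8, and the pruning amount is an assumption.

    import torch
    import torch.nn.utils.prune as prune
    import torchvision

    def l1_channel_prune(model, amount=0.2):
        # Zero the fraction of conv output channels with the smallest L1 norm.
        for module in model.modules():
            if isinstance(module, torch.nn.Conv2d):
                prune.ln_structured(module, name="weight", amount=amount, n=1, dim=0)
                prune.remove(module, "weight")  # bake the mask into the weights
        return model

    net = l1_channel_prune(torchvision.models.mobilenet_v3_small())
    # Note: zeroed channels still occupy memory; the parameter/GFLOP savings the
    # paper reports require physically removing channels (e.g., with a dedicated
    # pruning library). This sketch only shows the channel-selection step.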
Affiliation(s)
- Zewen Xie
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, 510006, China; School of Physics and Material Science, Guangzhou University, Guangzhou, 510006, China
- Xian Hu
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, 510006, China
- Lide Guo
- School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, 510006, China
- Weiren Lin
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, 510006, China
- Jiakun Liu
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, 510006, China
- Chunliang Zhang
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, 510006, China
- Guanghua Ge
- Department of Dentistry, Hospital of Guangdong University of Technology, Guangdong University of Technology, Guangzhou, 510006, China
- Yadong Tang
- School of Biomedical and Pharmaceutical Sciences, Guangdong University of Technology, Guangzhou, 510006, China
- Wenlong Wang
- School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, 510006, China.
6
Elaraby A, Saad A, Elmannai H, Alabdulhafith M, Hadjouni M, Hamdi M. An approach for classification of breast cancer using lightweight deep convolution neural network. Heliyon 2024; 10:e38524. [PMID: 39640611] [PMCID: PMC11619963] [DOI: 10.1016/j.heliyon.2024.e38524]
Abstract
The rapid advancement of deep learning has generated considerable enthusiasm regarding its utilization in addressing medical imaging issues. Machine learning (ML) methods can help radiologists diagnose breast cancer (BC) without invasive measures. Traditional machine learning classifiers require informative hand-crafted features to achieve accurate results, and such features are time-consuming to extract. In this paper, a deep learning algorithm is created to precisely identify breast cancer on screening mammograms, employing a training method that effectively utilizes training datasets with either full clinical annotation or solely the cancer status of the entire image. The proposed approach utilizes a Lightweight Convolutional Neural Network (LWCNN) that extracts features automatically in an end-to-end manner. The LWCNN model was tested in two experiments. In the first, using the original and enhanced versions of dataset 1, it achieved training and testing accuracies of 95 % and 93 % (original) and 99 % and 98 % (enhanced). In the second, using the original and enhanced versions of dataset 2, it achieved 95 % and 91 % (original) and 99 % and 92 % (enhanced). The proposed method achieved exceptional performance in classifying screening mammograms compared with other methods. These findings clearly indicate that automatic deep learning techniques can be trained effectively to attain remarkable accuracy across a wide range of mammography datasets. This holds significant promise for improving clinical tools and reducing both false positive and false negative outcomes in screening mammography.
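The exact LWCNN architecture is not reproduced here; a generic sketch of a small end-to-end CNN for two-class mammogram patches, in the spirit the abstract describes, might look like this in PyTorch. Layer widths and depths are assumptions.

    import torch.nn as nn

    class LWCNN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            def block(cin, cout):
                return nn.Sequential(
                    nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
                    nn.ReLU(inplace=True), nn.MaxPool2d(2))
            # Three conv blocks keep the parameter count low ("lightweight").
            self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(64, n_classes))

        def forward(self, x):
            return self.head(self.features(x))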
Affiliation(s)
- Ahmed Elaraby
- Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena, 83523, Egypt
- Aymen Saad
- Department of Information Technology, Management Technical College, Al-Furat Al-Awsat Technical University, Kufa, Iraq
- Hela Elmannai
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Myriam Hadjouni
- Department of Computer Sciences, College of Computer and Information Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Monia Hamdi
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
7
Liu G, Zhang J, Chan AB, Hsiao JH. Human attention guided explainable artificial intelligence for computer vision models. Neural Netw 2024; 177:106392. [PMID: 38788290] [DOI: 10.1016/j.neunet.2024.106392]
Abstract
Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box artificial intelligence models, promoting better user understanding and trust. Developing an XAI method that is faithful to models and plausible to users is both a necessity and a challenge. This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision models could enhance their plausibility and faithfulness. Two novel XAI methods for object detection models, FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending current gradient-based XAI methods for image classification models. Using human attention as the objective plausibility measure, these methods achieve higher explanation plausibility. Interestingly, all current XAI methods, when applied to object detection models, generally produce saliency maps that are less faithful to the model than human attention maps from the same object detection task. Accordingly, human attention-guided XAI (HAG-XAI) was proposed to learn from human attention how best to combine explanatory information from the models, using trainable activation functions and smoothing kernels to maximize the similarity between the XAI saliency map and the human attention map. The proposed XAI methods were evaluated on the widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical gradient-based and perturbation-based XAI methods. Results suggest that HAG-XAI enhanced explanation plausibility and user trust at the expense of faithfulness for image classification models, whereas for object detection models it enhanced plausibility, faithfulness, and user trust simultaneously and outperformed existing state-of-the-art XAI methods.
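As a sketch of the HAG-XAI idea, trainable weights and a smoothing kernel that combine model saliency maps so as to better match human attention, one might write the following PyTorch module. The parameterization is an assumption, and the Pearson-correlation loss is one common similarity choice, not necessarily the authors'.

    import torch
    import torch.nn as nn

    class HAGCombiner(nn.Module):
        def __init__(self, n_maps, ksize=11):
            super().__init__()
            self.w = nn.Parameter(torch.ones(n_maps) / n_maps)  # per-map weights
            self.smooth = nn.Conv2d(1, 1, ksize, padding=ksize // 2, bias=False)

        def forward(self, maps):  # maps: (n_maps, H, W) saliency stack
            mix = (self.w.softmax(0)[:, None, None] * maps.relu()).sum(0)
            return self.smooth(mix[None, None])[0, 0]  # smoothed (H, W) map

    def neg_correlation(pred, human, eps=1e-8):
        # Negative Pearson correlation with the human attention map as the loss.
        p = (pred - pred.mean()) / (pred.std() + eps)
        h = (human - human.mean()) / (human.std() + eps)
        return -(p * h).mean()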
Affiliation(s)
- Guoyang Liu
- School of Integrated Circuits, Shandong University, Jinan, China; Department of Psychology, University of Hong Kong, Pokfulam Road, Hong Kong.
- Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong.
- Janet H Hsiao
- Division of Social Science, Hong Kong University of Science and Technology, Clearwater Bay, Hong Kong; Department of Psychology, University of Hong Kong, Pokfulam Road, Hong Kong.
8
Oza U, Gohel B, Kumar P, Oza P. Presegmenter Cascaded Framework for Mammogram Mass Segmentation. Int J Biomed Imaging 2024; 2024:9422083. [PMID: 39155940] [PMCID: PMC11329304] [DOI: 10.1155/2024/9422083]
Abstract
Accurate segmentation of breast masses in mammogram images is essential for early cancer diagnosis and treatment planning. Several deep learning (DL) models have been proposed for whole mammogram segmentation and mass patch/crop segmentation. However, current DL models for breast mammogram mass segmentation face several limitations, including false positives (FPs), false negatives (FNs), and challenges with the end-to-end approach. This paper presents a novel two-stage end-to-end cascaded breast mass segmentation framework that incorporates a saliency map of potential mass regions to guide the DL models for breast mass segmentation. The first-stage segmentation model of the cascade framework is used to generate a saliency map to establish a coarse region of interest (ROI), effectively narrowing the focus to probable mass regions. The proposed presegmenter attention (PSA) blocks are introduced in the second-stage segmentation model to enable dynamic adaptation to the most informative regions within the mammogram images based on the generated saliency map. Comparative analysis of the Attention U-net model with and without the cascade framework is provided in terms of dice scores, precision, recall, FP rates (FPRs), and FN outcomes. Experimental results consistently demonstrate enhanced breast mass segmentation performance by the proposed cascade framework across all three datasets: INbreast, CSAW-S, and DMID. The cascade framework shows superior segmentation performance by improving the dice score by about 6% for the INbreast dataset, 3% for the CSAW-S dataset, and 2% for the DMID dataset. Similarly, the FN outcomes were reduced by 10% for the INbreast dataset, 19% for the CSAW-S dataset, and 4% for the DMID dataset. Moreover, the proposed cascade framework's performance is validated with varying state-of-the-art segmentation models such as DeepLabV3+ and Swin transformer U-net. The presegmenter cascade framework has the potential to improve segmentation performance and mitigate FNs when integrated with any medical image segmentation framework, irrespective of the choice of the model.
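A saliency-guided gate of the kind the PSA blocks describe can be pictured as the stage-1 map re-weighting stage-2 encoder features; a simplified PyTorch sketch follows. The block design (1x1 projection, sigmoid gate, residual connection) is an assumption, not the paper's exact module.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PSAGate(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.proj = nn.Conv2d(1, channels, kernel_size=1)

        def forward(self, feats, saliency):
            # saliency: (B, 1, H, W) coarse map from the stage-1 presegmenter.
            gate = torch.sigmoid(self.proj(
                F.interpolate(saliency, size=feats.shape[-2:])))
            return feats * gate + feats  # residual gating keeps the original signal

    # Cascade: coarse = presegmenter(image); mask = segmenter(image, coarse)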
Affiliation(s)
- Urvi Oza
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Bakul Gohel
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Pankaj Kumar
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
- Parita Oza
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
9
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of such literature is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modified Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Results of reported studies (area under the curve [AUC] of the receiver operating characteristic [ROC] curve, sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 fit the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality, of which three trained their own model and one used a commercial network; ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67 %), and object detection networks (RetinaNet in 67 %). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5 %) were reached when combined radiologist and artificial intelligence readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting. Conclusion While deep learning holds much promise in mammography interpretation, evaluation in reproducible clinical settings and explainable networks are the need of the hour.
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
10
Yan J, Zeng Y, Lin J, Pei Z, Fan J, Fang C, Cai Y. Enhanced object detection in pediatric bronchoscopy images using YOLO-based algorithms with CBAM attention mechanism. Heliyon 2024; 10:e32678. [PMID: 39021922] [PMCID: PMC11252869] [DOI: 10.1016/j.heliyon.2024.e32678]
Abstract
Background and Objective Bronchoscopy is a widely used diagnostic and therapeutic procedure for respiratory disorders such as infections and tumors. However, visualizing the bronchial tubes and lungs can be challenging due to the presence of various objects, such as mucus, blood, and foreign bodies. Accurately identifying the anatomical location of the bronchi can be quite challenging, especially for medical professionals who are new to the field. Deep learning-based object detection algorithms can assist doctors in analyzing images or videos of the bronchial tubes to identify key features such as the epiglottis, vocal cords, and right basal bronchus. This study aims to improve the accuracy of object detection in bronchoscopy images by integrating a YOLO-based algorithm with a CBAM attention mechanism. Methods The CBAM attention module is implemented in the YOLO-V7 and YOLO-V8 object detection models to improve their object identification and classification capabilities in bronchoscopy images. Various YOLO-based object detection algorithms, such as YOLO-V5, YOLO-V7, and YOLO-V8, are compared on this dataset. Experiments are conducted to evaluate the performance of the proposed method and the different algorithms. Results The proposed method significantly improves the accuracy and reliability of object detection for bronchoscopy images, demonstrating the potential benefits of incorporating an attention mechanism in medical imaging and of utilizing object detection algorithms in bronchoscopy. In the experiments, the YOLO-V8-based model achieved a mean Average Precision (mAP) of 87.09 % on the given dataset at an Intersection over Union (IoU) threshold of 0.5. After incorporating the Convolutional Block Attention Module (CBAM) into the YOLO-V8 architecture, the proposed method achieved significantly enhanced mAP@0.5 and mAP@0.5:0.95 of 88.27 % and 55.39 %, respectively. Conclusions Our findings indicate that incorporating a CBAM attention mechanism into a YOLO-based algorithm yields a noticeable improvement in object detection performance on bronchoscopy images. This study provides valuable insights into enhancing the performance of attention mechanisms for object detection in medical imaging.
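CBAM is a published, standard module; a compact PyTorch rendering of it (channel attention followed by spatial attention, as inserted into the YOLO backbones here) is below. The reduction ratio and kernel size are the usual defaults and may differ from this paper's settings.

    import torch
    import torch.nn as nn

    class CBAM(nn.Module):
        def __init__(self, c, r=16, k=7):
            super().__init__()
            self.mlp = nn.Sequential(nn.Conv2d(c, c // r, 1), nn.ReLU(),
                                     nn.Conv2d(c // r, c, 1))
            self.spatial = nn.Conv2d(2, 1, k, padding=k // 2)

        def forward(self, x):
            # Channel attention from pooled descriptors (average and max).
            ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                               self.mlp(x.amax((2, 3), keepdim=True)))
            x = x * ca
            # Spatial attention from channel-pooled maps.
            sa = torch.sigmoid(self.spatial(torch.cat(
                [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
            return x * sa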
Affiliation(s)
- Jianqi Yan
- Faculty of Innovation Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, 999078, Macau
- R&D Department, Quanbao Technologies Co. Ltd, Hagongda Road, Xiangzhou District, Zhuhai, 519087, China
- Yifan Zeng
- R&D Department, Quanbao Technologies Co. Ltd, Hagongda Road, Xiangzhou District, Zhuhai, 519087, China
- Junhong Lin
- Pediatric Respiratory Department, M-Healtcare, Zhujiang New Town Clinic 2/F, No. 11 Xiancun Road, Tianhe District, Guangzhou, 510623, China
- Zhiyuan Pei
- Faculty of Innovation Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, 999078, Macau
- R&D Department, Quanbao Technologies Co. Ltd, Hagongda Road, Xiangzhou District, Zhuhai, 519087, China
- Jinrui Fan
- General Surgery, Zhuhai People's Hospital, Kangning Road, Xiangzhou District, Zhuhai, 519000, China
- Chuanyu Fang
- R&D Department, Quanbao Technologies Co. Ltd, Hagongda Road, Xiangzhou District, Zhuhai, 519087, China
- Yong Cai
- Advanced Institute of Natural Sciences, Beijing Normal University, Jinfeng Road, Xiangzhou District, Zhuhai, 519087, China
11
Chen H, Gu W, Zhang Q, Li X, Jiang X. Integrating attention mechanism and multi-scale feature extraction for fall detection. Heliyon 2024; 10:e31614. [PMID: 38831825] [PMCID: PMC11145491] [DOI: 10.1016/j.heliyon.2024.e31614]
Abstract
Addressing the critical need for accurate detection of fall events, given their potentially severe impacts, this paper introduces the Spatial Channel and Pooling Enhanced You Only Look Once version 5 small (SCPE-YOLOv5s) model. Fall events are challenging to detect due to their varying scales and subtle pose features. To address this problem, SCPE-YOLOv5s introduces spatial attention into the Efficient Channel Attention (ECA) network, which significantly enhances the model's ability to extract features from the spatial distribution of poses. Moreover, the model integrates average pooling layers into the Spatial Pyramid Pooling (SPP) network to support multi-scale extraction of fall poses, and by incorporating the ECA network into SPP it effectively combines global and local features to further enhance feature extraction. This paper validates SCPE-YOLOv5s on a public dataset, demonstrating that it achieves a mean Average Precision of 88.29 %, outperforming You Only Look Once version 5 small by 4.87 %, while running at 57.4 frames per second. Therefore, SCPE-YOLOv5s provides a novel solution for fall event detection.
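For reference, the standard ECA block that SCPE-YOLOv5s builds on can be written in a few lines of PyTorch; the paper's added spatial-attention branch and the SPP average-pooling changes are not reproduced here.

    import torch
    import torch.nn as nn

    class ECA(nn.Module):
        def __init__(self, k=3):
            super().__init__()
            # 1D conv over channel descriptors: local cross-channel interaction
            # without the dimensionality reduction used in SE blocks.
            self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

        def forward(self, x):                      # x: (B, C, H, W)
            y = x.mean((2, 3))                     # global average pool -> (B, C)
            y = self.conv(y.unsqueeze(1)).squeeze(1)
            return x * torch.sigmoid(y)[:, :, None, None]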
Affiliation(s)
- Hao Chen
- School of Computer and Information Engineering, Nantong Institute of Technology, China
- Wenye Gu
- Affiliated Hospital of Nantong University, China
- Qiong Zhang
- School of Computer and Information Engineering, Nantong Institute of Technology, China
- Xiujing Li
- School of Computer and Information Engineering, Nantong Institute of Technology, China
- Xiaojing Jiang
- School of Computer and Information Engineering, Nantong Institute of Technology, China
12
Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. [PMID: 38667493] [PMCID: PMC11048882] [DOI: 10.3390/diagnostics14080848]
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff
- Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy; (A.C.); (E.V.); (P.B.); (M.A.)
13
Zeng X, Liu Y, Zhang J, Guo Y. Medical object detector jointly driven by knowledge and data. Neural Netw 2024; 172:106084. [PMID: 38183830] [DOI: 10.1016/j.neunet.2023.12.038]
Abstract
Most existing object detection algorithms are trained on medical datasets and then used for prediction. When the features of an object are not obvious in an image, these models are prone to mislocalizing and misclassifying it. In this paper, we propose a medical Object Detection algorithm jointly driven by Knowledge and Data (ODKD). It enables medical semantic knowledge provided by specialized physicians to be effective and helpful when traditional models have difficulty detecting objects correctly from features alone. Our model consists of a base object detector together with a fusion module: the base object detector is trained on medical datasets to obtain data-driven results; external semantic knowledge is then represented as a graph, and the data-driven results are mapped to the node embeddings of this graph structure. In the fusion module, a graph convolution network fuses the data-driven results with the external semantic knowledge to output category adjustment coefficients. Finally, the adjustment coefficients are used to adjust the data-driven results, yielding results jointly driven by knowledge and data. Experiments show that professional medical semantic knowledge can effectively correct the erroneous results of the base detector, and our model outperforms Faster R-CNN, YOLOv5, YOLOv7, and other detectors on three medical datasets: CAMUS, Synapse, and AMOS.
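The fusion step described above can be sketched as a small graph convolution over detector class scores; in the following PyTorch fragment, the adjacency matrix stands for the physicians' knowledge graph, and the layer sizes are assumptions, not the ODKD implementation.

    import torch
    import torch.nn as nn

    class KnowledgeFusion(nn.Module):
        def __init__(self, n_classes, adj):
            super().__init__()
            self.register_buffer("A", adj)  # (C, C) normalized adjacency from expert knowledge
            self.w1 = nn.Linear(1, 16)
            self.w2 = nn.Linear(16, 1)

        def forward(self, scores):  # scores: (B, C) data-driven class scores
            h = torch.relu(self.A @ self.w1(scores.unsqueeze(-1)))
            coef = torch.sigmoid(self.w2(self.A @ h)).squeeze(-1)
            return scores * coef    # category adjustment coefficients applied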
Affiliation(s)
- Xianhua Zeng
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Yuhang Liu
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Jian Zhang
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Yongli Guo
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
14
Mei J, Yan H, Tang Z, Piao Z, Yuan Y, Dou Y, Su H, Hu C, Meng M, Jia Z. Deep learning algorithm applied to plain CT images to identify superior mesenteric artery abnormalities. Eur J Radiol 2024; 173:111388. [PMID: 38412582] [DOI: 10.1016/j.ejrad.2024.111388]
Abstract
OBJECTIVES Atypical presentations, lack of biomarkers, and the low sensitivity of plain CT can delay the diagnosis of superior mesenteric artery (SMA) abnormalities, resulting in poor clinical outcomes. Our study aims to develop a deep learning (DL) model for detecting SMA abnormalities in plain CT and to evaluate its performance in comparison with a clinical model and radiologist assessment. MATERIALS AND METHODS A total of 1048 patients comprised the internal (474 patients with SMA abnormalities, 474 controls) and external testing (50 patients with SMA abnormalities, 50 controls) cohorts. The internal cohort was divided into the training cohort (n = 776), validation cohort (n = 86), and internal testing cohort (n = 86). A total of 5 You Only Look Once version 8 (YOLOv8)-based DL submodels were developed, and the performance of the optimal submodel was compared with that of a clinical model and of experienced radiologists. RESULTS Of the submodels, YOLOv8x had the best performance. The area under the curve (AUC) of the YOLOv8x submodel was higher than that of the clinical model (internal test set: 0.990 vs 0.878, P = .002; external test set: 0.967 vs 0.912, P = .140) and that of all radiologists (P < .001). The YOLOv8x submodel, compared with radiologist assessment, demonstrated higher sensitivity (internal test set: 100.0 % vs 70.7 %, P = .002; external test set: 96.0 % vs 68.8 %, P < .001) and specificity (internal test set: 90.7 % vs 66.0 %, P = .025; external test set: 88.0 % vs 66.0 %, P < .001). CONCLUSION Using plain CT images, YOLOv8x was able to efficiently identify cases of SMA abnormalities. This could potentially improve early diagnosis accuracy and thus improve clinical outcomes.
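The P values above come from paired comparisons of metrics on the same cases; as a simple stand-in for the DeLong-style AUC tests typically used in such studies (the authors' exact method is not shown here), a bootstrap comparison can be written as follows.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc_diff(y, p_model, p_clinical, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        idx, diffs = np.arange(len(y)), []
        for _ in range(n_boot):
            s = rng.choice(idx, size=len(idx), replace=True)
            if len(np.unique(y[s])) < 2:  # resample must contain both classes
                continue
            diffs.append(roc_auc_score(y[s], p_model[s]) -
                         roc_auc_score(y[s], p_clinical[s]))
        diffs = np.asarray(diffs)
        p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
        return diffs.mean(), p  # mean AUC difference and two-sided bootstrap P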
Affiliation(s)
- Junhao Mei
- Department of Interventional and Vascular Surgery, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, China
- Hui Yan
- Department of Radiology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Zheyu Tang
- Department of Interventional and Vascular Surgery, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, China
- Zeyu Piao
- Department of Interventional and Vascular Surgery, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, China
- Yuan Yuan
- Department of Interventional Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Yang Dou
- Department of Radiology, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, China
- Haobo Su
- Department of Interventional Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Chunfeng Hu
- Department of Radiology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Mingzhu Meng
- Department of Radiology, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, China
- Zhongzhi Jia
- Department of Interventional and Vascular Surgery, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, China.
15
Prinzi F, Currieri T, Gaglio S, Vitabile S. Shallow and deep learning classifiers in medical image analysis. Eur Radiol Exp 2024; 8:26. [PMID: 38438821] [PMCID: PMC10912073] [DOI: 10.1186/s41747-024-00428-2]
Abstract
An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which nevertheless is its most cited and used sub-branch in the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest, and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.
Relevance statement The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.
Key points
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, task, and explanation needs.
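As a concrete instance of the "shallow" route the review describes, hand-crafted features from regions of interest fed to a classical classifier, a minimal scikit-learn pipeline might look like this; the upstream feature extraction (e.g., with a radiomics package) is assumed to have happened already.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def shallow_baseline(X, y):
        # X: (n_samples, n_features) ROI features; y: binary labels.
        clf = make_pipeline(StandardScaler(),
                            RandomForestClassifier(n_estimators=500, random_state=0))
        return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()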
Affiliation(s)
- Francesco Prinzi
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Department of Computer Science and Technology, University of Cambridge, Cambridge, CB2 1TN, UK
- Tiziana Currieri
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Salvatore Gaglio
- Department of Engineering, University of Palermo, Palermo, Italy
- Institute for High-Performance Computing and Networking, National Research Council (ICAR-CNR), Palermo, Italy
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy.
16
Kebede SR, Waldamichael FG, Debelee TG, Aleme M, Bedane W, Mezgebu B, Merga ZC. Dual view deep learning for enhanced breast cancer screening using mammography. Sci Rep 2024; 14:3839. [PMID: 38360869] [PMCID: PMC10869685] [DOI: 10.1038/s41598-023-50797-8]
Abstract
Breast cancer has the highest incidence rate among women in Ethiopia compared with other types of cancer. Unfortunately, many cases are detected at a stage where a cure is delayed or not possible. To address this issue, mammography-based screening is widely accepted as an effective technique for early detection. However, interpreting mammography images requires radiologists experienced in breast imaging, a resource that is limited in Ethiopia. In this research, we have developed a model to assist radiologists in mass screening for breast abnormalities and in prioritizing patients. Our approach combines an ensemble of EfficientNet-based classifiers with a YOLOv5-based suspicious-mass detector to identify abnormalities. Including YOLOv5 detection is crucial for providing explanations of classifier predictions and for improving sensitivity, particularly when the classifier fails to detect abnormalities. The classifier model achieves an F1-score of 0.87 and a sensitivity of 0.82. With the addition of suspicious-mass detection, sensitivity increases to 0.89, albeit at the expense of a slightly lower F1-score of 0.79.
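The classifier-detector pairing described above can be sketched as a simple decision rule in which the detector both explains positives and rescues classifier misses; thresholds and callable signatures below are assumptions, not the paper's values.

    def screen(image, classifier, detector, cls_thresh=0.5, det_conf=0.4):
        # classifier: callable returning the probability that the exam is abnormal.
        # detector: callable returning boxes as (x1, y1, x2, y2, confidence) tuples.
        p_abnormal = classifier(image)
        suspicious = [b for b in detector(image) if b[4] >= det_conf]
        # OR-ing the two signals trades some precision for sensitivity,
        # matching the reported move from 0.82 to 0.89 sensitivity.
        flagged = p_abnormal >= cls_thresh or len(suspicious) > 0
        return flagged, p_abnormal, suspicious  # boxes localize the finding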
Affiliation(s)
- Samuel Rahimeto Kebede
- Research Development Cluster, Ethiopian Artificial Intelligence Institute, Addis Ababa, 40782, Ethiopia.
- College of Engineering, Debre Berhan University, Debre Berhan, Ethiopia.
- Fraol Gelana Waldamichael
- Research Development Cluster, Ethiopian Artificial Intelligence Institute, Addis Ababa, 40782, Ethiopia
- Taye Girma Debelee
- Research Development Cluster, Ethiopian Artificial Intelligence Institute, Addis Ababa, 40782, Ethiopia
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa, 120611, Ethiopia
- Wubalem Bedane
- Radiology, St. Pauli Millenium Medical College, Addis Ababa, Ethiopia
- Bethelhem Mezgebu
- Radiology, St. Pauli Millenium Medical College, Addis Ababa, Ethiopia
17
Sharma H, Kumar H. A computer vision-based system for real-time component identification from waste printed circuit boards. J Environ Manage 2024; 351:119779. [PMID: 38086120] [DOI: 10.1016/j.jenvman.2023.119779]
Abstract
With an exponential increase in consumers' need for electronic products, the world is facing an ever-increasing economic and environmental threat from electronic waste (e-waste). E-waste recycling is one of the pivotal ways to minimize environmental pollution and increase the recovery of valuable materials. Printed Circuit Boards (PCBs), for instance, contain several valuable elements but are also hazardous, and they therefore form a large share of the e-waste generated today. In recycling PCBs, Electronic Components (ECs) are segregated first and then processed separately to recover key elements that can be re-used. However, in the current recycling process, especially in developing nations, humans manually screen ECs, which affects their health and also causes losses of valuable materials. Automated solutions therefore need to be adopted both to classify and to segregate ECs from waste PCBs. The study proposes a robust EC identification system based on computer vision and a deep learning algorithm (YOLOv3) to automate the sorting process, which would help in further processing. The study uses a publicly available dataset and a PCB dataset that reflect challenging recycling environments, including varied lighting conditions, cast shadows, orientations, viewpoints, and different cameras/resolutions. The YOLOv3 detection model trained on both datasets delivers satisfactory classification accuracy and is capable of competent real-time identification, which, in turn, could help automatically segregate ECs while leading toward effective e-waste recycling.
Affiliation(s)
- Harish Kumar
- Indian Institute of Management Kashipur, Uttarakhand, 244713, India.
18
Liang H, Li Z, Lin W, Xie Y, Zhang S, Li Z, Luo H, Li T, Han S. Enhancing Gastrointestinal Stromal Tumor (GIST) Diagnosis: An Improved YOLOv8 Deep Learning Approach for Precise Mitotic Detection. IEEE Access 2024; 12:116829-116840. [DOI: 10.1109/access.2024.3446613]
Affiliation(s)
- Haoxin Liang
- Department of General Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zhichun Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Weijie Lin
- The Second Clinical College, Southern Medical University, Guangzhou, Guangdong, China
- Yuheng Xie
- The Second Clinical College, Southern Medical University, Guangzhou, Guangdong, China
- Shuo Zhang
- School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, China
- Zhou Li
- Department of General Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Hongyu Luo
- Department of General Surgery, The Sixth People's Hospital of Huizhou City, Huizhou, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shuai Han
- Department of General Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
19
|
Wang J, Sun H, Jiang K, Cao W, Chen S, Zhu J, Yang X, Zheng J. CAPNet: Context attention pyramid network for computer-aided detection of microcalcification clusters in digital breast tomosynthesis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107831. [PMID: 37783114 DOI: 10.1016/j.cmpb.2023.107831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Revised: 12/25/2022] [Accepted: 09/23/2023] [Indexed: 10/04/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided detection (CADe) of microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) is crucial in the early diagnosis of breast cancer. Although convolutional neural network (CNN)-based detection models have achieved excellent performance in medical lesion detection, they are subject to some limitations in MC detection: 1) most existing models employ the feature pyramid network (FPN) for multi-scale object detection; however, the rough feature sharing between adjacent layers in the FPN may limit the detection ability for small and low-contrast MCs; and 2) the MC region only accounts for a small part of the annotation box, so features extracted indiscriminately within the whole box may easily be affected by the background. In this paper, we develop a novel CNN-based CADe method to alleviate the impacts of the above limitations for the accurate and rapid detection of MCs in DBT. METHODS The proposed method has two parts: a novel context attention pyramid network (CAPNet) for intra-layer MC detection in two-dimensional (2D) slices and a three-dimensional (3D) aggregation procedure for aggregating 2D intra-layer MCs into a 3D result according to their connectivity in 3D space. The proposed CAPNet is based on an anchor-free and one-stage detection architecture and contains a context feature selection fusion (CFSF) module and a microcalcification response (MCR) branch. The CFSF module can efficiently enrich shallow layers' features by the complementary selection of local context features, aiming to reduce the missed detection of small and low-contrast MCs. The MCR branch is a one-layer branch parallel to the classification branch, which can alleviate the influence of the background region within the annotation box on feature extraction and enhance the ability of the model to distinguish MCs from normal breast tissue. RESULTS We performed a comparison experiment on an in-house clinical dataset with 648 DBT volumes, and the proposed method achieved impressive performance with a sensitivity of 91.56% at 1 false positive per DBT volume (FPs/volume) and 93.51% at 2 FPs/volume, outperforming other representative detection models. CONCLUSIONS The experimental results indicate that the proposed method is effective in the detection of MCs in DBT. This method can provide objective, accurate, and quick diagnostic suggestions for radiologists, presenting potential clinical value for early breast cancer screening.
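The 3D aggregation step lends itself to a compact illustration. The sketch below is a minimal interpretation, not the authors' implementation: per-slice 2D detections are merged into 3D findings by labelling 26-connected components in a detection mask; the box format and connectivity structure are assumptions.

```python
import numpy as np
from scipy import ndimage

def aggregate_2d_to_3d(slice_boxes, volume_shape):
    """Merge per-slice 2D MC detections into 3D results via 26-connectivity.

    slice_boxes: {slice_index: [(x0, y0, x1, y1), ...]} per-slice boxes.
    volume_shape: (slices, height, width) of the DBT volume.
    """
    mask = np.zeros(volume_shape, dtype=bool)
    for z, boxes in slice_boxes.items():
        for x0, y0, x1, y1 in boxes:
            mask[z, y0:y1, x0:x1] = True

    # Detections touching across neighbouring slices become one 3D object.
    labels, _ = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    findings = []
    for zs, ys, xs in ndimage.find_objects(labels):
        findings.append({"slices": (zs.start, zs.stop),
                         "bbox": (xs.start, ys.start, xs.stop, ys.stop)})
    return findings

# Toy run: hits on slices 4 and 5 overlap, so they merge into one 3D finding.
dets = {4: [(10, 10, 30, 30)], 5: [(12, 12, 28, 28)], 20: [(50, 60, 70, 80)]}
print(aggregate_2d_to_3d(dets, (40, 128, 128)))
```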
Affiliation(s)
- Jingkun Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Haotian Sun
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Ke Jiang
- Gusu School, Nanjing Medical University, Suzhou 215006, China; Department of Radiology, the Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou 215000, China
- Weiwei Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Shuangqing Chen
- Gusu School, Nanjing Medical University, Suzhou 215006, China; Department of Radiology, the Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou 215000, China
- Jianbing Zhu
- Suzhou Science & Technology Town Hospital, Gusu School, Nanjing Medical University, Suzhou 215153, China
- Xiaodong Yang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China.
20
You C, Shen Y, Sun S, Zhou J, Li J, Su G, Michalopoulou E, Peng W, Gu Y, Guo W, Cao H. Artificial intelligence in breast imaging: Current situation and clinical challenges. EXPLORATION (BEIJING, CHINA) 2023; 3:20230007. [PMID: 37933287 PMCID: PMC10582610 DOI: 10.1002/exp.20230007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 04/30/2023] [Indexed: 11/08/2023]
Abstract
Breast cancer ranks among the most prevalent malignant tumours and is the primary contributor to cancer-related deaths in women. Breast imaging is essential for screening, diagnosis, and therapeutic surveillance. With the increasing demand for precision medicine, the heterogeneous nature of breast cancer makes it necessary to deeply mine and rationally utilize the tremendous amount of breast imaging information. With the rapid advancement of computer science, artificial intelligence (AI) has been noted to have great advantages in processing and mining of image information. Therefore, a growing number of scholars have started to focus on and research the utility of AI in breast imaging. Here, an overview of breast imaging databases and recent advances in AI research are provided, the challenges and problems in this field are discussed, and then constructive advice is further provided for ongoing scientific developments from the perspective of the National Natural Science Foundation of China.
Affiliation(s)
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yiyuan Shen
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shiyun Sun
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiayin Zhou
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiawei Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Guanhua Su
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Department of Breast Surgery, Key Laboratory of Breast Cancer in Shanghai, Fudan University Shanghai Cancer Center, Shanghai, China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weisheng Guo
- Department of Minimally Invasive Interventional Radiology, Key Laboratory of Molecular Target and Clinical Pharmacology, School of Pharmaceutical Sciences and The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Heqi Cao
- Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
21
Alhussan AA, Abdelhamid AA, Towfek SK, Ibrahim A, Abualigah L, Khodadadi N, Khafaga DS, Al-Otaibi S, Ahmed AE. Classification of Breast Cancer Using Transfer Learning and Advanced Al-Biruni Earth Radius Optimization. Biomimetics (Basel) 2023; 8:270. [PMID: 37504158 PMCID: PMC10377265 DOI: 10.3390/biomimetics8030270] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/21/2023] [Accepted: 06/24/2023] [Indexed: 07/29/2023] Open
Abstract
Breast cancer is one of the most common cancers in women, with an estimated 287,850 new cases identified in 2022 and 43,250 female deaths attributed to this malignancy. The high death rate associated with this type of cancer can be reduced with early detection. Nonetheless, a skilled professional is always necessary to diagnose this malignancy manually from mammography images. Many researchers have proposed approaches based on artificial intelligence, but these still face several obstacles, such as overlapping cancerous and noncancerous regions, extraction of irrelevant features, and inadequately trained models. In this paper, we developed a novel, computationally automated biological mechanism for categorizing breast cancer. Using a new optimization approach based on the Advanced Al-Biruni Earth Radius (ABER) optimization algorithm, the classification of breast cancer cases is boosted. The stages of the proposed framework include data augmentation, feature extraction using AlexNet based on transfer learning, and optimized classification using a convolutional neural network (CNN). Using transfer learning and an optimized CNN for classification improved the accuracy compared with recent approaches. Two publicly available datasets are utilized to evaluate the proposed framework, and the average classification accuracy is 97.95%. To confirm the statistical significance of the proposed methodology's advantage, additional tests are conducted, such as analysis of variance (ANOVA) and the Wilcoxon test, in addition to evaluating various statistical analysis metrics. The results of these tests emphasize the effectiveness and statistically significant difference of the proposed methodology compared with current methods.
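To make the transfer-learning stage concrete, here is a minimal PyTorch sketch (an assumption-laden stand-in, not the paper's code): a frozen ImageNet AlexNet supplies the features, and a small trainable head plays the role of the optimized CNN classifier whose hyperparameters an optimizer such as ABER would tune.

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen AlexNet backbone pretrained on ImageNet (transfer learning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
backbone = nn.Sequential(alexnet.features, alexnet.avgpool, nn.Flatten())
for p in backbone.parameters():
    p.requires_grad = False

# Trainable head standing in for the optimized CNN classifier; its width and
# depth are the kind of hyperparameters a metaheuristic like ABER would tune.
head = nn.Sequential(nn.Linear(256 * 6 * 6, 256), nn.ReLU(), nn.Linear(256, 2))

x = torch.randn(4, 3, 224, 224)   # a batch of augmented mammogram patches
logits = head(backbone(x))
print(logits.shape)               # torch.Size([4, 2]) -> benign vs malignant
```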
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Abdelaziz A Abdelhamid
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
- Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
- S K Towfek
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
- Abdelhameed Ibrahim
- Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Laith Abualigah
- Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq 25113, Jordan
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
- Nima Khodadadi
- Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL 33146, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaha Al-Otaibi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ayman Em Ahmed
- Faculty of Engineering, King Salman International University, El-Tor 8701301, Egypt
22
Chen QQ, Lin ST, Ye JY, Tong YF, Lin S, Cai SQ. Diagnostic value of mammography density of breast masses by using deep learning. Front Oncol 2023; 13:1110657. [PMID: 37333830 PMCID: PMC10275606 DOI: 10.3389/fonc.2023.1110657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 05/23/2023] [Indexed: 06/20/2023] Open
Abstract
Objective To explore the relationship between the mammographic density of a breast mass and its surrounding area and whether the mass is benign or malignant, this paper proposes a deep learning model based on C2FTrans to diagnose breast masses using mammographic density. Methods This retrospective study included patients who underwent mammographic and pathological examination. Two physicians manually depicted the lesion edges, and a computer automatically extended and segmented the peripheral areas of the lesion (0, 1, 3, and 5 mm, including the lesion). We then obtained the mammary glands' density and the different regions of interest (ROI). A diagnostic model for breast mass lesions based on C2FTrans was constructed with a 7:3 split between the training and testing sets. Finally, receiver operating characteristic (ROC) curves were plotted. Model performance was assessed using the area under the ROC curve (AUC) with 95% confidence intervals (CI), sensitivity, and specificity. Results In total, 401 lesions (158 benign and 243 malignant) were included in this study. The probability of breast cancer in women was positively correlated with age and mass density and negatively correlated with breast gland classification. The largest correlation was observed for age (r = 0.47). Among all models, the single mass ROI model had the highest specificity (91.8%) with an AUC of 0.823, and the perifocal 5 mm ROI model had the highest sensitivity (86.9%) with an AUC of 0.855. In addition, by combining the craniocaudal and mediolateral oblique views of the perifocal 5 mm ROI model, we obtained the highest AUC (AUC = 0.877, P < 0.001). Conclusions A deep learning model of mammographic density can better distinguish benign and malignant mass-type lesions in digital mammography images and may become an auxiliary diagnostic tool for radiologists in the future.
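The graded 0/1/3/5 mm perifocal ROIs can be pictured as successive dilations of the physician-drawn lesion mask. A minimal sketch under an assumed pixel spacing (not the study's segmentation code):

```python
import numpy as np
from scipy import ndimage

def perifocal_roi(lesion_mask, margin_mm, pixel_spacing_mm):
    """Expand the lesion mask outward by margin_mm (lesion included), mirroring
    the 0/1/3/5 mm perifocal ROIs; the pixel spacing is an assumed value."""
    radius_px = int(round(margin_mm / pixel_spacing_mm))
    if radius_px == 0:
        return lesion_mask.copy()
    structure = ndimage.generate_binary_structure(2, 1)
    return ndimage.binary_dilation(lesion_mask, structure, iterations=radius_px)

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True                      # toy lesion
roi = perifocal_roi(mask, margin_mm=5, pixel_spacing_mm=0.1)  # 0.1 mm/px assumed
print(mask.sum(), roi.sum())                   # the ROI grows around the lesion
```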
Affiliation(s)
- Qian-qian Chen
- Department of Radiology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China
- Shu-ting Lin
- Department of Radiology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China
- Jia-yi Ye
- Department of Radiology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China
- Yun-fei Tong
- Shanghai Yanghe Huajian Artificial Intelligence Technology Co. Ltd., Shanghai, China
- Shu Lin
- Centre of Neurological and Metabolic Research, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China
- Department of Neuroendocrinology, Group of Neuroendocrinology, Garvan Institute of Medical Research, Sydney, Australia
- Si-qing Cai
- Department of Radiology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China
23
Chen JL, Cheng LH, Wang J, Hsu TW, Chen CY, Tseng LM, Guo SM. A YOLO-based AI system for classifying calcifications on spot magnification mammograms. Biomed Eng Online 2023; 22:54. [PMID: 37237394 DOI: 10.1186/s12938-023-01115-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 05/13/2023] [Indexed: 05/28/2023] Open
Abstract
OBJECTIVES To investigate whether an AI system based on deep learning can aid in distinguishing malignant from benign calcifications on spot magnification mammograms, thus potentially reducing unnecessary biopsies. METHODS In this retrospective study, we included public and in-house datasets with annotations for the calcifications on both craniocaudal and mediolateral oblique views, or both craniocaudal and mediolateral views, of each case of mammograms. All the lesions had pathological results for correlation. Our system comprised an algorithm based on You Only Look Once (YOLO), named the adaptive multiscale decision fusion module. The algorithm was pre-trained on a public dataset, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), then re-trained and tested on the in-house dataset of spot magnification mammograms. The performance of the system was investigated by receiver operating characteristic (ROC) analysis. RESULTS We included 1872 images from 753 calcification cases (414 benign and 339 malignant) from CBIS-DDSM. From the in-house dataset, 636 cases (432 benign and 204 malignant) with 1269 spot magnification mammograms were included, with all lesions having been recommended for biopsy by radiologists. The area under the ROC curve for our system on the in-house testing dataset was 0.888 (95% CI 0.868-0.908), with a sensitivity of 88.4% (95% CI 86.9-89.9%), a specificity of 80.8% (95% CI 77.6-84%), and an accuracy of 84.6% (95% CI 81.8-87.4%) at the optimal cutoff value. Using the system with two views of spot magnification mammograms, 80.8% of benign biopsies could be avoided. CONCLUSION The AI system showed good accuracy for the classification of calcifications on spot magnification mammograms which were all categorized as suspicious by radiologists, thereby potentially reducing unnecessary biopsies.
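The reported operating point comes from ROC analysis at an "optimal cutoff". A self-contained sketch of one standard way to pick it (Youden's J; whether the authors used this exact criterion is not stated), on synthetic scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-ins for per-lesion malignancy scores and biopsy labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.25, 200), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
print(f"AUC         = {roc_auc_score(y_true, y_score):.3f}")
print(f"cutoff      = {thresholds[best]:.3f}")
print(f"sensitivity = {tpr[best]:.3f}")
print(f"specificity = {1 - fpr[best]:.3f}")
```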
Affiliation(s)
- Jian-Ling Chen
- Department of Radiology, Far Eastern Memorial Hospital, No. 21, Sec. 2, Nanya S. Rd., Banciao Dist., New Taipei City, 220, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Beitou Dist., Taipei City, 112, Taiwan
- Lan-Hsin Cheng
- Institute of Computer Science and Information Engineering, National Cheng Kung University, No. 1, University Rd., Tainan City, 701, Taiwan
- Jane Wang
- Department of Radiology, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Beitou Dist., Taipei City, 112, Taiwan
- Department of Radiology, National Taiwan University College of Medicine, No. 1, Jenai Rd., Taipei City, 100, Taiwan
- Department of Nurse-Midwifery and Women Health, and School of Nursing, College of Nursing, National Taipei University of Nursing and Health Sciences, No. 365, Mingde Rd., Beitou Dist., Taipei City, 112, Taiwan
- Tun-Wei Hsu
- Department of Radiology, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Beitou Dist., Taipei City, 112, Taiwan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei City, 112, Taiwan
- Chin-Yu Chen
- Department of Radiology, Chi-Mei Medical Center, No. 901, Zhonghua Rd., Yongkang Dist., Tainan City, 710, Taiwan
- Ling-Ming Tseng
- Comprehensive Breast Health Center, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Beitou Dist., Taipei, 112, Taiwan
- Department of Surgery, Taipei Veterans General Hospital, No. 201, Sec. 2, Shipai Rd., Beitou Dist., Taipei, 112, Taiwan
- Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, No. 155, Sec. 2, Linong St., Beitou Dist., Taipei, 112, Taiwan
- Shu-Mei Guo
- Institute of Computer Science and Information Engineering, National Cheng Kung University, No. 1, University Rd., Tainan City, 701, Taiwan.
24
Goceri E. Medical image data augmentation: techniques, comparisons and interpretations. Artif Intell Rev 2023; 56:1-45. [PMID: 37362888 PMCID: PMC10027281 DOI: 10.1007/s10462-023-10453-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/27/2023] [Indexed: 03/29/2023]
Abstract
Designing deep learning based methods for medical images has always been an attractive area of research to assist clinicians in rapid examination and accurate diagnosis. Such methods need large datasets covering all variations in their training stages. On the other hand, medical images are always scarce for several reasons: not enough patients with some diseases, patients who do not consent to their images being used, a lack of medical equipment, or an inability to obtain images that meet the desired criteria. This issue leads to bias in datasets, overfitting, and inaccurate results. Data augmentation is a common solution to this issue, and various augmentation techniques have been applied to different types of images in the literature. However, it is not clear which data augmentation technique provides more efficient results for which image type, since different diseases are handled, different network architectures are used, and these architectures are trained and tested with datasets of different sizes. Therefore, in this work, the augmentation techniques used to improve the performance of deep learning based diagnosis of diseases in different organs (brain, lung, breast, and eye) from different imaging modalities (MR, CT, mammography, and fundoscopy) have been examined. The most commonly used augmentation methods have also been implemented, and their effectiveness in classification with a deep network has been discussed based on quantitative performance evaluations. Experiments indicated that augmentation techniques should be chosen carefully according to image types.
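For readers who want a concrete starting point, a typical basic augmentation stack for grayscale medical images looks like the torchvision sketch below; the transform choices and ranges are illustrative defaults, not the settings evaluated in the survey.

```python
from PIL import Image
from torchvision import transforms

# Common geometric and intensity augmentations for grayscale medical images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

img = Image.new("L", (224, 224))   # placeholder for an MR/CT/mammogram slice
print(augment(img).shape)          # torch.Size([1, 224, 224])
```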
Affiliation(s)
- Evgin Goceri
- Department of Biomedical Engineering, Engineering Faculty, Akdeniz University, Antalya, Turkey
25
Razali NF, Isa IS, Sulaiman SN, Abdul Karim NK, Osman MK, Che Soh ZH. Enhancement Technique Based on the Breast Density Level for Mammogram for Computer-Aided Diagnosis. Bioengineering (Basel) 2023; 10:153. [PMID: 36829647 PMCID: PMC9952042 DOI: 10.3390/bioengineering10020153] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 01/04/2023] [Accepted: 01/16/2023] [Indexed: 01/26/2023] Open
Abstract
Mass detection in mammograms is limited when a mass overlaps with denser fibroglandular breast regions. In addition, varying breast density levels can decrease the learning system's ability to extract sufficient feature descriptors and may result in lower accuracy performance. Therefore, this study proposes a textural-based image enhancement technique named Spatial-based Breast Density Enhancement for Mass Detection (SbBDEM) to boost the textural features of the overlapped mass region based on the breast density level. This approach determines the optimal exposure threshold of the images' lower contrast limit and optimizes the parameters by selecting the best intensity factor, guided by the best Blind/Reference-less Image Spatial Quality Evaluator (BRISQUE) scores, separately for the dense and non-dense breast classes prior to training. Meanwhile, a modified You Only Look Once v3 (YOLOv3) architecture is employed for mass detection by assigning an extra number of higher-valued anchor boxes to the shallower detection head using the enhanced images. The experimental results show that using SbBDEM prior to training promotes superior performance, with a 17.24% improvement in mean Average Precision (mAP) for mass detection over the non-enhanced trained images, 94.41% accuracy for mass segmentation, and 96% accuracy for benign and malignant mass classification. Enhancing mammogram images based on breast density is proven to increase the overall system's performance and can aid in an improved clinical diagnosis process.
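The BRISQUE-guided parameter selection reduces to a small search loop. The sketch below is a loose reconstruction under stated assumptions: a simple contrast stretch stands in for the paper's enhancement, and any no-reference quality scorer (for example, the `brisque` package on PyPI) can be plugged in; a dummy scorer is used so the snippet runs stand-alone.

```python
import numpy as np

def enhance(img, intensity_factor, low_limit):
    """Simple contrast stretch standing in for the paper's enhancement step;
    low_limit plays the role of the exposure threshold on the lower contrast
    limit, and intensity_factor is the parameter being tuned."""
    out = np.clip((img.astype(float) - low_limit) * intensity_factor, 0, 255)
    return out.astype(np.uint8)

def select_intensity_factor(img, low_limit, candidates, quality_score):
    """Grid-search the intensity factor; for BRISQUE, lower scores mean
    better perceived quality, so we keep the factor with the minimum score."""
    scored = [(quality_score(enhance(img, f, low_limit)), f) for f in candidates]
    return min(scored)[1]

# Dummy scorer so the sketch runs stand-alone; a real pipeline would plug in
# an actual BRISQUE implementation.
fake_brisque = lambda im: abs(float(im.mean()) - 127) + abs(float(im.std()) - 50)

img = (np.random.rand(128, 128) * 180).astype(np.uint8)
best = select_intensity_factor(img, low_limit=20,
                               candidates=[1.0, 1.2, 1.5, 2.0],
                               quality_score=fake_brisque)
print("selected intensity factor:", best)
```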
Affiliation(s)
- Noor Fadzilah Razali
- Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Iza Sazanita Isa
- Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Siti Noraini Sulaiman
- Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Integrative Pharmacogenomics Institute (iPROMISE), Universiti Teknologi MARA Cawangan Selangor, Puncak Alam Campus, Puncak Alam 42300, Selangor, Malaysia
- Noor Khairiah Abdul Karim
- Department of Biomedical Imaging, Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia
- Breast Cancer Translational Research Programme (BCTRP), Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia
- Muhammad Khusairi Osman
- Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Zainal Hisham Che Soh
- Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
26
Yu X, Wang SH, Zhang YD. Multiple-level thresholding for breast mass detection. JOURNAL OF KING SAUD UNIVERSITY. COMPUTER AND INFORMATION SCIENCES 2023; 35:115-130. [PMID: 37220564 PMCID: PMC7614559 DOI: 10.1016/j.jksuci.2022.11.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Detection of breast mass plays a very important role in making the diagnosis of breast cancer. For faster detection of breast cancer caused by breast mass, we developed a novel and efficient patch-based breast mass detection system for mammography images. The proposed framework comprises three modules: pre-processing, multiple-level breast tissue segmentation, and final breast mass detection. An improved Deeplabv3+ model for pectoral muscle removal is deployed in pre-processing. We then propose a multiple-level thresholding segmentation method to segment the breast mass and obtain the connected components (ConCs), where the image patch corresponding to each ConC is extracted for mass detection. In the final detection stage, each image patch is classified as breast mass or breast tissue background by trained deep learning models. The patches classified as breast mass are then taken as the candidates for breast mass. To reduce the false positive rate in the detection results, we apply the non-maximum suppression algorithm to combine the overlapped detection results. Once an image patch is considered a breast mass, the accurate detection result can then be retrieved from the corresponding ConC in the segmented images. Moreover, a coarse segmentation result can be retrieved simultaneously after detection. Compared to state-of-the-art methods, the proposed method achieved comparable performance. On CBIS-DDSM, the proposed method achieved a detection sensitivity of 0.87 at 2.86 FPI (false positives per image), while the sensitivity reached 0.96 on INbreast with an FPI of only 1.29.
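The ConC-to-patch step is straightforward to sketch with scikit-image: label the segmented mask, then crop one padded patch per connected component as input to the patch classifier. The padding amount below is an assumption.

```python
import numpy as np
from skimage import measure

def extract_conc_patches(segmented, image, pad=8):
    """Crop one image patch per connected component (ConC) of the segmented
    mask, yielding the candidates fed to the patch classifier; the 8 px
    padding is an assumed margin."""
    labels = measure.label(segmented > 0, connectivity=2)
    patches = []
    for region in measure.regionprops(labels):
        r0, c0, r1, c1 = region.bbox
        r0, c0 = max(r0 - pad, 0), max(c0 - pad, 0)
        r1 = min(r1 + pad, image.shape[0])
        c1 = min(c1 + pad, image.shape[1])
        patches.append(((r0, c0, r1, c1), image[r0:r1, c0:c1]))
    return patches

seg = np.zeros((64, 64), dtype=np.uint8)
seg[10:20, 10:20] = 1                     # two toy ConCs
seg[40:50, 30:45] = 1
img = np.random.rand(64, 64)
for bbox, patch in extract_conc_patches(seg, img):
    print(bbox, patch.shape)
```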
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, United Kingdom
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, United Kingdom
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, United Kingdom
27
Quan MY, Huang YX, Wang CY, Zhang Q, Chang C, Zhou SC. Deep learning radiomics model based on breast ultrasound video to predict HER2 expression status. Front Endocrinol (Lausanne) 2023; 14:1144812. [PMID: 37143737 PMCID: PMC10153672 DOI: 10.3389/fendo.2023.1144812] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Accepted: 03/27/2023] [Indexed: 05/06/2023] Open
Abstract
Purpose The detection of human epidermal growth factor receptor 2 (HER2) expression status is essential for determining the chemotherapy regimen of breast cancer patients and improving their prognosis. We developed a deep learning radiomics (DLR) model combining time-frequency domain features of ultrasound (US) video of breast lesions with clinical parameters to predict HER2 expression status. Patients and Methods Data for this research were obtained from 807 breast cancer patients who visited between February 2019 and July 2020. Ultimately, 445 patients were included in the study. Pre-operative breast ultrasound examination videos were collected and split into a training set and a test set. A DLR model combining the time-frequency domain features of the ultrasound video of breast lesions with clinical features was built on the training set to predict HER2 expression status, and its performance was evaluated on the test set. Models integrated with different classifiers were compared, and the best performing model was selected. Results The best diagnostic performance in predicting HER2 expression status was provided by an Extreme Gradient Boosting (XGBoost)-based time-frequency domain feature classifier combined with a logistic regression (LR)-based clinical parameter classifier, with a particularly high specificity of 0.917. The area under the receiver operating characteristic curve (AUC) for the test cohort was 0.810. Conclusion Our study provides a non-invasive imaging biomarker to predict HER2 expression status in breast cancer patients.
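A minimal sketch of the winning late-fusion combination, on synthetic features: an XGBoost model on time-frequency video features plus a logistic regression on clinical parameters. The averaging rule and all dimensions are assumptions; the study compared several classifier pairings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier  # assumes the xgboost package is installed

rng = np.random.default_rng(1)
n = 300
X_tf = rng.normal(size=(n, 32))   # stand-in time-frequency video features
X_cl = rng.normal(size=(n, 5))    # stand-in clinical parameters
y = rng.integers(0, 2, n)         # HER2-positive vs HER2-negative labels

tf_clf = XGBClassifier(n_estimators=100, max_depth=3)
cl_clf = LogisticRegression(max_iter=1000)
tf_clf.fit(X_tf[:200], y[:200])
cl_clf.fit(X_cl[:200], y[:200])

# Late fusion by averaging the two probability estimates; the exact fusion
# rule here is an assumption.
fused = 0.5 * tf_clf.predict_proba(X_tf[200:])[:, 1] \
      + 0.5 * cl_clf.predict_proba(X_cl[200:])[:, 1]
print("fused HER2-positive probabilities:", np.round(fused[:5], 3))
```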
Affiliation(s)
- Meng-Yao Quan
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yun-Xia Huang
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China
- Chang-Yan Wang
- Laboratory of The Smart Medicine and AI-based Radiology Technology (SMART), School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Qi Zhang
- Laboratory of The Smart Medicine and AI-based Radiology Technology (SMART), School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Correspondence: Shi-Chong Zhou; Qi Zhang
- Cai Chang
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shi-Chong Zhou
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Correspondence: Shi-Chong Zhou; Qi Zhang
28
Al-Hejri AM, Al-Tam RM, Fazea M, Sable AH, Lee S, Al-antari MA. ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images. Diagnostics (Basel) 2022; 13:diagnostics13010089. [PMID: 36611382 PMCID: PMC9818801 DOI: 10.3390/diagnostics13010089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/22/2022] [Accepted: 12/24/2022] [Indexed: 12/29/2022] Open
Abstract
Early detection of breast cancer is an essential procedure to reduce the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed by fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of the vision transformer encoder (ViT). Accurate and precise high-level deep features are generated via the backbone ensemble network, while the transformer encoder is used to diagnose the breast cancer probabilities in two approaches: Approach A (binary classification) and Approach B (multi-classification). To build the proposed CAD system, the benchmark public multi-class INbreast dataset is used. Meanwhile, private real breast cancer images are collected and annotated by expert radiologists to validate the prediction performance of the proposed ETECADx framework. Promising evaluation results are achieved using the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves the breast cancer prediction performance by 6.6% for the binary and 4.6% for the multi-class approach. The proposed hybrid ETECADx shows further prediction improvement when the ViT-based ensemble backbone network is used, by 8.1% and 6.2% for binary and multi-class diagnosis, respectively. For validation purposes using the real breast images, the proposed CAD system provides encouraging prediction accuracies of 97.16% for the binary and 89.40% for the multi-class approach. ETECADx can predict the breast lesions of a single mammogram in an average of 0.048 s. Such promising performance could be useful in assisting practical CAD framework applications, providing a second supporting opinion for distinguishing various breast cancer malignancies.
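The self-attention classification stage can be miniaturized as follows: ensemble-derived deep features are treated as a token sequence and classified from a CLS token by a transformer encoder. This PyTorch sketch only mirrors the general ETECADx idea; all dimensions are invented.

```python
import torch
import torch.nn as nn

class EnsembleViTHead(nn.Module):
    """Minimal sketch: deep features from an ensemble backbone become a token
    sequence classified by a transformer encoder (dimensions illustrative)."""

    def __init__(self, dim=256, n_classes=3):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, tokens):                 # tokens: (B, n_tokens, dim)
        cls = self.cls.expand(tokens.size(0), -1, -1)
        z = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.fc(z[:, 0])                # classify from the CLS token

features = torch.randn(2, 16, 256)             # fused ensemble deep features
print(EnsembleViTHead()(features).shape)       # torch.Size([2, 3])
```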
Affiliation(s)
- Aymen M. Al-Hejri
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Riyadh M. Al-Tam
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Muneer Fazea
- Department of Radiology, Al-Ma’amon Diagnostic Center, Sana’a, Yemen
- Department of Radiology, School of Medicine, Ibb University of Medical Sciences, Ibb, Yemen
- Archana Harsing Sable
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Correspondence: (A.H.S.); (M.A.A.-a.)
- Soojeong Lee
- Department of Computer Engineering, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (A.H.S.); (M.A.A.-a.)
29
Li J, Li S, Li X, Miao S, Dong C, Gao C, Liu X, Hao D, Xu W, Huang M, Cui J. Primary bone tumor detection and classification in full-field bone radiographs via YOLO deep learning model. Eur Radiol 2022; 33:4237-4248. [PMID: 36449060 DOI: 10.1007/s00330-022-09289-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/02/2022] [Accepted: 11/07/2022] [Indexed: 12/02/2022]
Abstract
OBJECTIVES Automatic bone lesion detection and classification present a critical challenge and are essential to supporting radiologists in making an accurate diagnosis of bone lesions. In this paper, we aimed to develop a novel deep learning model based on You Only Look Once (YOLO) to handle the detection and classification of bone lesions on full-field radiographs with limited manual intervention. METHODS In this retrospective study, we used 1085 bone tumor radiographs and 345 normal bone radiographs from two centers between January 2009 and December 2020 to train and test our YOLO deep learning (DL) model. The trained model detected bone lesions and then classified the radiographs as normal, benign, intermediate, or malignant. The intersection over union (IoU) was used to assess the model's performance in the detection task. Confusion matrices and Cohen's kappa scores were used for evaluating classification performance. Two radiologists compared diagnostic performance with the trained model using the external validation set. RESULTS In the detection task, the model achieved accuracies of 86.36% and 85.37% in the internal and external validation sets, respectively. The DL model, radiologist 1, and radiologist 2 achieved Cohen's kappa scores of 0.8187, 0.7927, and 0.9077, respectively, for four-way classification in the external validation set. The YOLO DL model showed a significantly higher accuracy for intermediate bone tumor classification than radiologist 1 (95.73% vs 88.08%, p = 0.004). CONCLUSIONS The developed YOLO DL model could be used to assist radiologists at all stages of bone lesion detection and classification in full-field bone radiographs. KEY POINTS • The YOLO DL model can automatically detect bone neoplasms from full-field radiographs in one shot and then simultaneously classify radiographs as normal, benign, intermediate, or malignant. • The dataset used in this retrospective study includes normal bone radiographs. • YOLO can detect even some challenging cases with small volumes.
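The two evaluation metrics mentioned are simple to reproduce. Below, a plain IoU function for box matching and scikit-learn's Cohen's kappa for the four-way labels; the boxes and labels are toy values.

```python
from sklearn.metrics import cohen_kappa_score

def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))   # ~0.143

truth = ["normal", "benign", "intermediate", "malignant", "benign"]
preds = ["normal", "benign", "malignant", "malignant", "benign"]
print(cohen_kappa_score(truth, preds))           # chance-corrected agreement
```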
Affiliation(s)
- Jie Li
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Sudong Li
- College of Computer Science and Technology, Qingdao University, Qingdao, 266071, China
- Xiaoli Li
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Sheng Miao
- School of Information and Control Engineering, Qingdao University of Technology, Qingdao, 266520, China
- Cheng Dong
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Chuanping Gao
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Xuejun Liu
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Dapeng Hao
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Wenjian Xu
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China
- Mingqian Huang
- Department of Radiology, The Mount Sinai Hospital, New York, NY, 10029-0310, USA
- Jiufa Cui
- Department of Radiology, The Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Qingdao, 266003, Shandong, China.
30
Guo F, Li Q, Gao F, Huang C, Zhang F, Xu J, Xu Y, Li Y, Sun J, Jiang L. Evaluation of the peritumoral features using radiomics and deep learning technology in non-spiculated and noncalcified masses of the breast on mammography. Front Oncol 2022; 12:1026552. [PMID: 36479079 PMCID: PMC9721450 DOI: 10.3389/fonc.2022.1026552] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 10/18/2022] [Indexed: 09/05/2023] Open
Abstract
OBJECTIVE To assess the significance of peritumoral features based on deep learning in classifying non-spiculated and noncalcified masses (NSNCM) on mammography. METHODS We retrospectively screened the digital mammography data of 2254 patients who underwent surgery for breast lesions at Harbin Medical University Cancer Hospital from January to December 2018. Deep learning and radiomics models were constructed. Classification efficacy at the ROI and patient levels was compared in terms of AUC, accuracy, sensitivity, and specificity. Stratified analysis was conducted to analyze the influence of primary factors on the AUC of the deep learning model. An image filter and class activation maps (CAM) were used to visualize the radiomics and deep features. RESULTS Of the 1298 included patients, 771 (59.4%) were benign and 527 (40.6%) were malignant. The best model was the deep learning combined model (2 mm), in which the AUC was 0.884 (P < 0.05); in particular, the AUC for breast composition B reached 0.941. All the deep learning models were superior to the radiomics models (P < 0.05), and the CAM of the deep learning model showed strong signals around the tumor. The deep learning model achieved a higher AUC for large size, age >60 years, and breast composition type B (P < 0.05). CONCLUSION Combining the tumoral and peritumoral features resulted in better identification of malignant NSNCM on mammography, and the performance of the deep learning model exceeded that of the radiomics model. Age, tumor size, and breast composition type are essential for diagnosis.
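The peritumoral region itself is easy to formalize: dilate the tumor mask and subtract the tumor, leaving a surrounding band of tissue. A small sketch (the millimetre-to-pixel conversion is an assumption):

```python
import numpy as np
from scipy import ndimage

def peritumoral_band(tumor_mask, width_px):
    """Return the ring of tissue around the tumor (dilation minus tumor),
    the kind of 2 mm peritumoral region combined with the tumoral features."""
    dilated = ndimage.binary_dilation(tumor_mask, iterations=width_px)
    return dilated & ~tumor_mask

mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True
band = peritumoral_band(mask, width_px=4)   # 4 px standing in for 2 mm
print(mask.sum(), band.sum())
```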
Affiliation(s)
- Fei Guo
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Qiyang Li
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Fei Gao
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Chencui Huang
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Fandong Zhang
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Jingxu Xu
- Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Ye Xu
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Yuanzhou Li
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Jianghong Sun
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Li Jiang
- Department of Oncology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
31
A Hybrid Workflow of Residual Convolutional Transformer Encoder for Breast Cancer Classification Using Digital X-ray Mammograms. Biomedicines 2022; 10:biomedicines10112971. [PMID: 36428538 PMCID: PMC9687367 DOI: 10.3390/biomedicines10112971] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/03/2022] [Accepted: 11/13/2022] [Indexed: 11/19/2022] Open
Abstract
Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Based on the advantages of Residual Convolutional Network and the Transformer Encoder with Multiple Layer Perceptron (MLP), this study proposes a novel hybrid deep learning Computer-Aided Diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network is employed to create the deep features, the transformer is utilized to classify breast cancer according to the self-attention mechanism. The proposed CAD system has the capability to recognize breast cancer in two scenarios: Scenario A (Binary classification) and Scenario B (Multi-classification). Data collection and preprocessing, patch image creation and splitting, and artificial intelligence-based breast lesion identification are all components of the execution framework that are applied consistently across both cases. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, the VGG16, and the ResNet50. Two datasets, CBIS-DDSM and DDSM, are utilized to construct and test the proposed CAD system. Five-fold cross validation of the test data is used to evaluate the accuracy of the performance results. The suggested hybrid CAD system achieves encouraging evaluation results, with overall accuracies of 100% and 95.80% for binary and multiclass prediction challenges, respectively. The experimental results reveal that the proposed hybrid AI model could identify benign and malignant breast tissues significantly, which is important for radiologists to recommend further investigation of abnormal mammograms and provide the optimal treatment plan.
32
Loizidou K, Skouroumouni G, Nikolaou C, Pitris C. Automatic Breast Mass Segmentation and Classification Using Subtraction of Temporally Sequential Digital Mammograms. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:1801111. [PMID: 36519002 PMCID: PMC9744267 DOI: 10.1109/jtehm.2022.3219891] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 10/10/2022] [Accepted: 10/29/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Cancer remains a major cause of morbidity and mortality globally, with 1 in 5 of all new cancers arising in the breast. The introduction of mammography for the radiological diagnosis of breast abnormalities significantly decreased their mortality rates. Accurate detection and classification of breast masses in mammograms is especially challenging for various reasons, including low contrast and the normal variations of breast tissue density. Various Computer-Aided Diagnosis (CAD) systems are being developed to assist radiologists with the accurate classification of breast abnormalities. METHODS In this study, subtraction of temporally sequential digital mammograms and machine learning are proposed for the automatic segmentation and classification of masses. The performance of the algorithm was evaluated on a dataset created especially for the purposes of this study, with 320 images from 80 patients (two time points and two views of each breast) with mass locations precisely annotated by two radiologists. RESULTS Ninety-six features were extracted and ten classifiers were tested in a leave-one-patient-out and k-fold cross-validation process. Using Neural Networks, the detection of masses was 99.9% accurate. The classification accuracy of the masses as benign or suspicious increased from 92.6%, using the state-of-the-art temporal analysis, to 98%, using the proposed methodology. The improvement was statistically significant (p-value < 0.05). CONCLUSION These results demonstrate the effectiveness of the subtraction of temporally consecutive mammograms for the diagnosis of breast masses. Clinical and Translational Impact Statement: The proposed algorithm has the potential to substantially contribute to the development of automated breast cancer Computer-Aided Diagnosis systems, with significant impact on patient prognosis.
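The core operation, subtracting temporally sequential mammograms after alignment, can be sketched compactly. The snippet below uses a translation-only registration via phase correlation, which is a simplification of what a full CAD pipeline would need:

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def temporal_subtraction(current, prior):
    """Align the prior mammogram to the current one (translation only, a
    simplification of full clinical registration) and subtract."""
    shift, _, _ = phase_cross_correlation(current, prior)
    return current - ndimage.shift(prior, shift)

rng = np.random.default_rng(0)
prior = rng.random((128, 128))
current = ndimage.shift(prior, (3, -2))    # simulated repositioning
current[60:70, 60:70] += 0.8               # simulated newly appearing mass
diff = temporal_subtraction(current, prior)
print(round(float(diff[60:70, 60:70].mean()), 2))  # the new mass stands out
```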
Affiliation(s)
- Kosmia Loizidou
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, 2109 Nicosia, Cyprus
- Costas Pitris
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, 2109 Nicosia, Cyprus
33
Fu Q, Dong H. Spiking Neural Network Based on Multi-Scale Saliency Fusion for Breast Cancer Detection. ENTROPY 2022; 24:e24111543. [PMID: 36359633 PMCID: PMC9689387 DOI: 10.3390/e24111543] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/17/2022] [Accepted: 10/21/2022] [Indexed: 05/22/2023]
Abstract
Deep neural networks have been successfully applied in the fields of image recognition and object detection, with recognition results close to, or even better than, those of human beings. A deep neural network takes the activation function as its basic unit, and in terms of biological interpretability it is inferior to the spiking neural network, which takes the spiking neuron model as its basic unit. The spiking neural network is considered the third generation of artificial neural networks; it is event-driven, has low power consumption, and models the process by which nerve cells go from receiving a stimulus to firing spikes. However, it is difficult to train a spiking neural network directly because spiking neurons are non-differentiable; in particular, a spiking neural network cannot be trained directly with the back-propagation algorithm. As a result, the application scenarios of spiking neural networks are not as extensive as those of deep neural networks, and spiking neural networks have mostly been used in simple image classification tasks. This paper proposes a spiking neural network method for object detection on medical images, based on converting a trained deep neural network into a spiking neural network. The detection framework relies on the YOLO structure and uses the feature pyramid structure to obtain multi-scale features of the image. By fusing the high resolution of low-level features and the strong semantic information of high-level features, the detection precision of the network is improved. The proposed method is applied to detect the location and class of breast lesions in ultrasound and X-ray datasets, and the results are 90.67% and 92.81%, respectively.
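The conversion principle, that a spiking neuron's firing rate approximates a ReLU activation, can be demonstrated in a few lines. This toy rate-coding demo is only illustrative of the DNN-to-SNN idea, not the paper's detector:

```python
def if_neuron_rate(input_current, threshold=1.0, steps=200):
    """Integrate-and-fire neuron: over a time window its firing rate
    approximates ReLU(input), the principle exploited when converting a
    trained DNN into an SNN (toy rate-coding demo)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current          # integrate the constant input current
        if v >= threshold:          # fire a spike and reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

for x in [-0.5, 0.0, 0.3, 0.7]:
    print(f"input={x:+.1f}  relu={max(x, 0.0):.1f}  rate={if_neuron_rate(x):.2f}")
```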
34
Rautela K, Kumar D, Kumar V. Dual-modality synthetic mammogram construction for breast lesion detection using U-DARTS. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
35
An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022; 12:12259. [PMID: 35851592 PMCID: PMC9293883 DOI: 10.1038/s41598-022-15632-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 06/27/2022] [Indexed: 11/16/2022] Open
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification that are integrated sequentially into one framework to assist the radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e. ResNet50V2, ResNet101V2, and ResNet152V2). The work presents the task of classifying the detected and segmented breast masses into malignant or benign, and diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6 and the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) Pathology classification with an accuracy of 95.13%, 99.20%, and 95.88%; (2) BI-RADS category classification with an accuracy of 85.38%, 99%, and 96.08% respectively on CBIS-DDSM, INbreast, and the private dataset; and (3) shape classification with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that our proposed integrated framework could benefit from all automated stages to outperform the latest deep learning methodologies.
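The stacking idea reduces to using the base models' outputs as meta-features. A sketch on synthetic probabilities (stand-ins for the three ResNet variants' outputs; the meta-learner setup is an assumption within the paper's stated XGBoost choice):

```python
import numpy as np
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# Synthetic stand-ins for the malignancy probabilities that ResNet50V2,
# ResNet101V2, and ResNet152V2 would each output for a breast-mass crop.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 400)
base_probs = np.stack([np.clip(y + rng.normal(0, s, 400), 0, 1)
                       for s in (0.35, 0.40, 0.45)], axis=1)

# Stacking: base-model outputs become the features of a meta-learner.
meta = XGBClassifier(n_estimators=50, max_depth=2)
meta.fit(base_probs[:300], y[:300])
acc = (meta.predict(base_probs[300:]) == y[300:]).mean()
print(f"stacked-ensemble accuracy on held-out samples: {acc:.2f}")
```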
36
Ganesan G, Chinnappan J. Hybridization of ResNet with YOLO classifier for automated paddy leaf disease recognition: An optimized model. J FIELD ROBOT 2022. [DOI: 10.1002/rob.22089] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Affiliation(s)
- Gangadevi Ganesan
- Department of Information Technology, Meenakshi College of Engineering, Chennai, India
- Jayakumar Chinnappan
- Department of Computer Science and Engineering, Sri Venkateswara College of Engineering, Sriperumbudur, India
37
Baccouche A, Garcia-Zapirain B, Zheng Y, Elmaghraby AS. Early detection and classification of abnormality in prior mammograms using image-to-image translation and YOLO techniques. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106884. [PMID: 35594582 DOI: 10.1016/j.cmpb.2022.106884] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 04/27/2022] [Accepted: 05/10/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided detection (CAD) systems have been developed to assist radiologists in finding suspicious lesions in mammograms. Deep learning technology has recently succeeded in increasing the chance of recognizing abnormality at an early stage, in order to avoid unnecessary biopsies and decrease the mortality rate. In this study, we investigated the effectiveness of an end-to-end fusion model based on the You-Only-Look-Once (YOLO) architecture to simultaneously detect and classify suspicious breast lesions on digital mammograms. Four categories of cases were included: Mass, Calcification, Architectural Distortion, and Normal, from a private digital mammographic database of 413 cases. For all cases, Prior mammograms (typically scanned 1 year before) were all reported as Normal, while Current mammograms were diagnosed as cancerous (confirmed by biopsies) or healthy. METHODS We propose to apply the YOLO-based fusion model to the Current mammograms for breast lesion detection and classification. The same model is then applied retrospectively to synthetic mammograms for early cancer prediction, where the synthetic mammograms were generated from the Prior mammograms using the image-to-image translation models CycleGAN and Pix2Pix. RESULTS Evaluation results showed that our methodology could detect and classify breast lesions on Current mammograms with highest rates of 93% ± 0.118 for Mass lesions, 88% ± 0.09 for Calcification lesions, and 95% ± 0.06 for Architectural Distortion lesions. In addition, we report evaluation results on Prior mammograms with highest rates of 36% ± 0.01 for Mass lesions, 14% ± 0.01 for Calcification lesions, and 50% ± 0.02 for Architectural Distortion lesions. Normal mammograms were classified with accuracy rates of 92% ± 0.09 and 90% ± 0.06 on Current and Prior exams, respectively. CONCLUSIONS Our proposed framework was first developed to help detect and identify suspicious breast lesions in X-ray mammograms on their Current screening. The approach also exploits the temporal changes between pairs of Prior and follow-up screenings to predict, at an early stage, the location and type of abnormalities in the Prior mammogram screening. The paper presents a CAD method to assist doctors and experts in identifying the risk of breast cancer presence. Overall, the proposed CAD method incorporates advances in image processing, deep learning, and image-to-image translation for a biomedical application.
Affiliation(s)
- Asma Baccouche, Department of Computer Science and Engineering, University of Louisville, Louisville, KY 40292, USA
- Yufeng Zheng, University of Mississippi Medical Center, Jackson, MS 39216, USA
- Adel S Elmaghraby, Department of Computer Science and Engineering, University of Louisville, Louisville, KY 40292, USA
38
Oza P, Sharma P, Patel S, Adedoyin F, Bruno A. Image Augmentation Techniques for Mammogram Analysis. J Imaging 2022; 8:141. [PMID: 35621905 PMCID: PMC9147240 DOI: 10.3390/jimaging8050141] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 04/19/2022] [Accepted: 04/22/2022] [Indexed: 01/30/2023] Open
Abstract
Medical imaging research has become progressively reliant on deep learning approaches. Scientific findings reveal that the performance of supervised deep learning methods depends heavily on the size of the training set, which expert radiologists must annotate manually, a tiring and time-consuming task. Consequently, most freely accessible biomedical image datasets are small, and assembling large medical image datasets is challenging due to privacy and legal issues. As a result, many supervised deep learning models are prone to overfitting and cannot produce generalized output. One of the most popular methods to mitigate this issue is data augmentation. This technique increases the training set size by applying various transformations and has been shown to improve model performance when tested on new data. This article surveys different data augmentation techniques employed on mammogram images, aiming to provide insights into both basic and deep learning-based augmentation techniques.
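As a concrete example of the basic (geometric and intensity) augmentation family the survey covers, here is a small torchvision sketch; the specific transforms and parameter ranges are illustrative choices, not recommendations from the article.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                      # mammograms have no canonical left/right
    T.RandomRotation(degrees=10),                       # small rotations preserve anatomy
    T.RandomAffine(degrees=0, translate=(0.05, 0.05),   # slight shifts and rescaling
                   scale=(0.95, 1.05)),
    T.ColorJitter(brightness=0.1, contrast=0.1),        # mild intensity perturbation
])
# Applying `augment` to each training image on every epoch effectively
# enlarges the training set without any new expert annotations.
```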
Affiliation(s)
- Parita Oza, Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Paawan Sharma, Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Samir Patel, Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Festus Adedoyin, Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
- Alessandro Bruno, Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
39
An improved YOLO Nano model for dorsal hand vein detection system. Med Biol Eng Comput 2022; 60:1225-1237. [DOI: 10.1007/s11517-022-02551-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Accepted: 03/09/2022] [Indexed: 11/25/2022]
40
Bandari E, Beuzen T, Habashy L, Raza J, Yang X, Kapeluto J, Meneilly G, Madden K. Machine Learning Decision Support for Bedside Ultrasound to Detect Lipohypertrophy. JMIR Form Res 2022; 6:e34830. [PMID: 35404833 PMCID: PMC9123536 DOI: 10.2196/34830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 03/14/2022] [Accepted: 04/09/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND The most common dermatological complication of insulin therapy is lipohypertrophy. OBJECTIVE As a proof of concept, we built and tested an automated model using a convolutional neural network (CNN) to detect the presence of lipohypertrophy in ultrasound images. METHODS Ultrasound images were obtained in a blinded fashion using a portable GE LOGIQe machine with an L8-18i-D probe (5-18 MHz; GE Healthcare, Frankfurt, Germany). The data were split into training, validation, and test sets of 70%, 15%, and 15%, respectively. Given the small size of the dataset, image augmentation techniques were used to expand the training set and improve the model's generalizability. To compare the performance of the different architectures, we considered the accuracy and recall of the models on the test set. RESULTS The DenseNet CNN architecture had the highest accuracy (76%) and recall (76%) in detecting lipohypertrophy in ultrasound images compared with the other CNN architectures. Additional work showed that the YOLOv5m object detection model could help identify the approximate location of lipohypertrophy in ultrasound images identified as containing lipohypertrophy by the DenseNet CNN. CONCLUSIONS We demonstrated the ability of machine learning approaches to automate the detection and localization of lipohypertrophy.
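A minimal sketch of the described setup (the 70/15/15 split and a DenseNet binary classifier) might look as follows; the dataset path, DenseNet variant (121), and ImageNet initialization are assumptions not stated in the abstract.

```python
import torch
import torchvision
import torchvision.transforms as T
from torch.utils.data import random_split

transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])
# Hypothetical folder layout: one subdirectory per class label.
dataset = torchvision.datasets.ImageFolder("ultrasound_images/", transform=transform)

n = len(dataset)
n_train, n_val = int(0.70 * n), int(0.15 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val])   # 70% / 15% / 15%

model = torchvision.models.densenet121(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)  # lipohypertrophy vs. normal
```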
Affiliation(s)
- Ela Bandari, Master's in Data Science Program, University of British Columbia, Vancouver, Canada
- Tomas Beuzen, Master's in Data Science Program, University of British Columbia, Vancouver, Canada
- Lara Habashy, Master's in Data Science Program, University of British Columbia, Vancouver, Canada
- Javairia Raza, Master's in Data Science Program, University of British Columbia, Vancouver, Canada
- Xudong Yang, Master's in Data Science Program, University of British Columbia, Vancouver, Canada
- Jordanna Kapeluto, Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, Canada; Division of Endocrinology, Department of Medicine, University of British Columbia, Vancouver, Canada
- Graydon Meneilly, Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, Canada; Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Vancouver, Canada
- Kenneth Madden, Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, Canada; Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Vancouver, Canada; Centre for Hip Health and Mobility, Vancouver, Canada
41
Kolchev A, Pasynkov D, Egoshin I, Kliouchkin I, Pasynkova O, Tumakov D. YOLOv4-Based CNN Model versus Nested Contours Algorithm in the Suspicious Lesion Detection on the Mammography Image: A Direct Comparison in the Real Clinical Settings. J Imaging 2022; 8:88. [PMID: 35448216 PMCID: PMC9031201 DOI: 10.3390/jimaging8040088] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/21/2022] [Accepted: 03/23/2022] [Indexed: 02/04/2023] Open
Abstract
BACKGROUND We directly compared mammography image processing results obtained with the YOLOv4 convolutional neural network (CNN) model versus those obtained with the nested contours algorithm (NCA) model. METHOD We used 1080 images to train YOLOv4, plus 100 images with proven breast cancer (BC) and 100 images with proven absence of BC to test both models. RESULTS The rates of true-positive, false-positive, and false-negative outcomes were 60, 10, and 40, respectively, for YOLOv4, and 93, 63, and 7, respectively, for NCA. The sensitivities of YOLOv4 and NCA were comparable for star-like lesions, masses with unclear borders, round- or oval-shaped masses with clear borders, and partly visualized masses. In contrast, NCA was superior to YOLOv4 for asymmetric density and for changes invisible against a dense parenchyma background. Radiologists changed their earlier decisions in six cases per 100 for NCA, whereas YOLOv4 outputs did not influence the radiologists' decisions. CONCLUSIONS In our set, NCA clinically significantly surpassed YOLOv4.
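The reported counts let us reconstruct each model's sensitivity and precision directly, which makes the trade-off explicit: NCA misses far fewer cancers but produces many more false positives.

```python
# Derived from the TP/FP/FN counts reported in the abstract above.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

counts = {"YOLOv4": (60, 10, 40), "NCA": (93, 63, 7)}  # (TP, FP, FN)
for name, (tp, fp, fn) in counts.items():
    print(f"{name}: sensitivity={sensitivity(tp, fn):.2f}, "
          f"precision={precision(tp, fp):.2f}")
# YOLOv4: sensitivity=0.60, precision=0.86
# NCA: sensitivity=0.93, precision=0.60
```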
Affiliation(s)
- Alexey Kolchev, Department of Applied Mathematics and Informatics, Mari State University, Ministry of Education and Science of the Russian Federation, 1 Lenin Square, Yoshkar-Ola 424000, Russia; Department of Radiology and Oncology, Mari State University, Yoshkar-Ola, Russia; Department of Fundamental Medicine, Mari State University, Yoshkar-Ola, Russia; Institute of Computational Mathematics and Information Technologies, Kazan Federal University, 18 Kremlevskaya St., Kazan 420008, Russia
- Dmitry Pasynkov, Department of Applied Mathematics and Informatics, Mari State University, 1 Lenin Square, Yoshkar-Ola 424000, Russia; Department of Diagnostic Ultrasound, Kazan State Medical Academy, Branch Campus of the Federal State Budgetary Educational Institution of Further Professional Education "Russian Medical Academy of Continuous Professional Education", Ministry of Healthcare of the Russian Federation, 36 Butlerov St., Kazan 420012, Russia
- Ivan Egoshin, Department of Applied Mathematics and Informatics, Mari State University, 1 Lenin Square, Yoshkar-Ola 424000, Russia
- Ivan Kliouchkin, Department of General Surgery, Kazan Medical University, Ministry of Health of the Russian Federation, 49 Butlerov St., Kazan 420012, Russia
- Olga Pasynkova, Department of Applied Mathematics and Informatics, Mari State University, 1 Lenin Square, Yoshkar-Ola 424000, Russia
- Dmitrii Tumakov, Institute of Computational Mathematics and Information Technologies, Kazan Federal University, 18 Kremlevskaya St., Kazan 420008, Russia
42
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06804-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
43
Üreten K, Maraş HH. Automated Classification of Rheumatoid Arthritis, Osteoarthritis, and Normal Hand Radiographs with Deep Learning Methods. J Digit Imaging 2022; 35:193-199. [PMID: 35018539 PMCID: PMC8921395 DOI: 10.1007/s10278-021-00564-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 12/03/2021] [Accepted: 12/04/2021] [Indexed: 11/30/2022] Open
Abstract
Rheumatoid arthritis and hand osteoarthritis are two different forms of arthritis that cause pain, functional limitation, and permanent joint damage in the hands. Plain hand radiographs are the most commonly used imaging method for the diagnosis, differential diagnosis, and monitoring of rheumatoid arthritis and osteoarthritis. In this retrospective study, the You Only Look Once (YOLO) algorithm was used to extract hand images from the original radiographs without data loss, and classification was performed by applying transfer learning with a pre-trained VGG-16 network. Data augmentation was applied during training. The results were evaluated with performance metrics, including accuracy, sensitivity, specificity, and precision calculated from the confusion matrix, and AUC (area under the ROC curve) calculated from the ROC (receiver operating characteristic) curve. In the classification of rheumatoid arthritis versus normal hand radiographs, accuracy, sensitivity, specificity, precision, and AUC were 90.7%, 92.6%, 88.7%, 89.3%, and 0.97, respectively; in the classification of osteoarthritis versus normal hand radiographs, they were 90.8%, 91.4%, 90.2%, 91.4%, and 0.96, respectively. In the three-way classification of rheumatoid arthritis, osteoarthritis, and normal hand radiographs, an accuracy of 80.6% was obtained. To develop an end-to-end computerized method, the YOLOv4 algorithm was used for object detection and a pre-trained VGG-16 network for the classification of hand radiographs. This computer-aided diagnosis method can assist clinicians in interpreting hand radiographs, especially for rheumatoid arthritis and osteoarthritis.
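The transfer-learning step described here (a frozen pre-trained VGG-16 base with a retrained classifier head) can be sketched as follows; the three-class head follows the abstract, while the weights source and optimizer are assumptions.

```python
import torch
import torchvision

model = torchvision.models.vgg16(weights="IMAGENET1K_V1")  # ImageNet initialization assumed
for param in model.features.parameters():
    param.requires_grad = False                # freeze the convolutional base

model.classifier[6] = torch.nn.Linear(4096, 3)  # rheumatoid arthritis / osteoarthritis / normal
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4)                                   # update only the unfrozen classifier layers
```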
Affiliation(s)
- Kemal Üreten, Department of Rheumatology, Faculty of Medicine, Kırıkkale University, 71450 Kırıkkale, Turkey
- Hadi Hakan Maraş, Department of Computer Engineering, Faculty of Engineering, Çankaya University, 06790 Ankara, Turkey
44
Xu Y, Lou J, Gao Z, Zhan M. Computed Tomography Image Features under Deep Learning Algorithm Applied in Staging Diagnosis of Bladder Cancer and Detection on Ceramide Glycosylation. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:7979523. [PMID: 35035524 PMCID: PMC8759889 DOI: 10.1155/2022/7979523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 11/01/2021] [Accepted: 11/10/2021] [Indexed: 11/18/2022]
Abstract
This research investigates computed tomography (CT) imaging based on a deep learning algorithm and the diagnostic value of ceramide glycosylation in bladder cancer. In this study, 60 bladder cancer patients underwent ordinary CT examination; the detection results were then processed with the deep learning-based CT algorithm and compared with pathological diagnosis. In addition, Western blot was used to detect the expression of glucosylceramide synthase (GCS) in the cell membrane of bladder tumor tissues and normal bladder tissues. With ordinary CT clinical staging, the coincidence rates for T1, T2a, T2b, T3, and T4 stages were 28.56%, 62.51%, 78.94%, 84.61%, and 74.99%, respectively, and the total coincidence rate was 63.32%, significantly different from the pathological staging (P < 0.05). With the algorithm-based CT results, the coincidence rates for T1 and T2a stages were 50.01% and 91.65%, respectively, those for T2b, T3, and T4 stages were 100.00%, and the total coincidence rate was 96.69%, not significantly different from the pathological staging (P > 0.05). It could therefore be concluded that the algorithm-based CT results were more accurate, and that CT scans based on deep learning algorithms offer reliable guidance and clinical value in the preoperative staging and clinical treatment of bladder cancer. In addition, GCS expression in normal bladder tissue was found to be much lower than in bladder cancer tissue, indicating that changes in GCS are closely related to the development and prognosis of bladder cancer. GCS may therefore be an effective target for the treatment of bladder cancer in the future, although further research is needed.
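The coincidence rate used throughout is simply the per-case agreement between CT staging and pathological staging; a minimal sketch, with illustrative stage labels:

```python
# Agreement between CT-predicted stages and pathological reference stages.
def coincidence_rate(ct_stages, pathology_stages):
    matches = sum(c == p for c, p in zip(ct_stages, pathology_stages))
    return matches / len(pathology_stages)

ct = ["T1", "T2a", "T2b", "T3", "T3"]     # hypothetical CT staging results
path = ["T1", "T2a", "T2b", "T3", "T4"]   # hypothetical pathological staging
print(f"{coincidence_rate(ct, path):.0%}")  # 80%
```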
Affiliation(s)
- Yisheng Xu, Department of Radiology, Hangzhou Xiaoshan Hospital of Traditional Chinese Medicine, Hangzhou 311201, China
- Jianghua Lou, Department of Radiology, Hangzhou Xiaoshan Hospital of Traditional Chinese Medicine, Hangzhou 311201, China
- Zhiqin Gao, Department of Radiology, Hangzhou Xiaoshan Hospital of Traditional Chinese Medicine, Hangzhou 311201, China
- Ming Zhan, Department of Radiology, Affiliated Xiaoshan Hospital, Hangzhou Normal University, Hangzhou 311201, China
45
Okagbue HI, Oguntunde PE, Adamu PI, Adejumo AO. Unique clusters of patterns of breast cancer survivorship. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-021-00637-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
46
Muduli D, Dash R, Majhi B. Automated diagnosis of breast cancer using multi-modal datasets: A deep convolution neural network based approach. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.102825] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
47
Agarwal R, Yap MH, Hasan MK, Zwiggelaar R, Martí R. Deep Learning in Mammography Breast Cancer Detection. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]