1. Demir-Kaymak Z, Turan Z, Unlu-Bidik N, Unkazan S. Effects of midwifery and nursing students' readiness about medical artificial intelligence on artificial intelligence anxiety. Nurse Educ Pract 2024; 78:103994. [PMID: 38810350] [DOI: 10.1016/j.nepr.2024.103994]
Abstract
BACKGROUND Artificial intelligence technologies are among the most important technologies of today. Advances in artificial intelligence have broadened and increased its use in many areas, and healthcare is one of the fields where these technologies are widely applied. It is therefore important that healthcare professionals be prepared for artificial intelligence and that problems are not encountered while training them. In this study, midwifery and nursing students, as future healthcare professionals, were examined. AIM This study examines the effect of medical artificial intelligence readiness on artificial intelligence anxiety, and the effect of artificial intelligence characteristic variables (artificial intelligence knowledge, use in daily life, occupational threat, trust in artificial intelligence) on students' medical artificial intelligence readiness and artificial intelligence anxiety. METHODS This quantitative study was designed and conducted as a relational survey. A total of 480 students, 240 nursing and 240 midwifery, were included. Data were analysed with SPSS 26.0 and AMOS 26 using descriptive statistics (frequency, percentage, mean, standard deviation) and path analysis for the structural equation model. RESULTS No significant difference was found between the medical artificial intelligence readiness (p=0.082) and artificial intelligence anxiety (p=0.486) scores of midwifery and nursing students. The model relating medical artificial intelligence readiness to artificial intelligence anxiety showed good fit. Artificial intelligence knowledge and use of artificial intelligence in daily life predicted medical artificial intelligence readiness, while use in daily life, occupational threat and trust in artificial intelligence predicted artificial intelligence anxiety. CONCLUSION Midwifery and nursing students' AI anxiety and AI readiness were at moderate levels, and students' AI readiness affected their AI anxiety.
Affiliation(s)
- Zeliha Demir-Kaymak
- Sakarya University Faculty of Education, Department of Computer Education and Instructional Technologies, Sakarya, Turkiye.
- Zekiye Turan
- Sakarya University, Faculty of Health Sciences, Department of Nursing, Sakarya, Turkiye
- Nazli Unlu-Bidik
- Sakarya University, Faculty of Health Sciences, Department of Midwifery, Sakarya, Turkiye
- Semiha Unkazan
- Sakarya University, Faculty of Health Sciences, Department of Nursing, Sakarya, Turkiye
2. Zimmermann C, Michelmann A, Daniel Y, Enderle MD, Salkic N, Linzenbold W. Application of Deep Learning for Real-Time Ablation Zone Measurement in Ultrasound Imaging. Cancers (Basel) 2024; 16:1700. [PMID: 38730652] [PMCID: PMC11083655] [DOI: 10.3390/cancers16091700]
Abstract
BACKGROUND The accurate delineation of ablation zones (AZs) is crucial for assessing the efficacy of radiofrequency ablation (RFA) therapy. Manual measurement, the current standard, is subject to variability and potential inaccuracies. AIM This study aims to assess the effectiveness of artificial intelligence (AI) in automating AZ measurements in ultrasound images and to compare its accuracy with manual measurements. METHODS An in vitro study was conducted using chicken breast and liver samples subjected to bipolar RFA. Ultrasound images were captured every 15 s, and the AI model Mask2Former was trained for AZ segmentation. Measurements were compared across methods, focusing on short-axis (SA) metrics. RESULTS We performed 308 RFA procedures, generating 7275 ultrasound images across liver and chicken breast tissues. Comparisons of manual and AI measurements of ablation zone diameters revealed no significant differences, with correlation coefficients exceeding 0.96 in both tissues (p < 0.001). Bland-Altman plots and Deming regression analysis demonstrated very close alignment between AI predictions and manual measurements, with average differences between the two methods of -0.259 mm and -0.243 mm for bovine liver and chicken breast tissue, respectively. CONCLUSION The study validates the Mask2Former model as a promising tool for automating AZ measurement in RFA research, a significant step towards reducing manual measurement variability.
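The Bland-Altman comparison reported above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's code: the `bland_altman` helper and the millimetre values are invented for demonstration.

```python
import numpy as np

def bland_altman(manual, ai):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the bias (mean of ai - manual) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the paired differences).
    """
    diff = np.asarray(ai, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Fabricated short-axis diameters in mm -- not data from the study.
manual_mm = [10.2, 11.5, 9.8, 12.0, 10.9]
ai_mm = [10.0, 11.3, 9.6, 11.8, 10.6]
bias, (lo, hi) = bland_altman(manual_mm, ai_mm)
print(f"bias = {bias:.3f} mm, 95% LoA = [{lo:.3f}, {hi:.3f}] mm")
```

A bias near zero with narrow limits of agreement, as the study reports (-0.259 and -0.243 mm), indicates that the AI measurements track the manual ones closely.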
Affiliation(s)
- Nermin Salkic
- Erbe Elektromedizin GmbH, 72072 Tübingen, Germany
- Faculty of Medicine, University of Tuzla, 75000 Tuzla, Bosnia and Herzegovina
3. Jamin A, Hoffmann C, Mahe G, Bressollette L, Humeau-Heurtier A. Pulmonary embolism detection on venous thrombosis ultrasound images with bi-dimensional entropy measures: Preliminary results. Med Phys 2023; 50:7840-7851. [PMID: 37370233] [DOI: 10.1002/mp.16568]
Abstract
BACKGROUND Venous thromboembolism (VTE) is a common health issue. A clinical expression of VTE is deep vein thrombosis (DVT), which may lead to pulmonary embolism (PE), a critical illness. When DVT is suspected, an ultrasound exam is performed. However, the characteristics of the clot observed on ultrasound images cannot be linked with the presence of PE. Computed tomography angiography is the gold standard to diagnose PE, but it is expensive and requires contrast agents. PURPOSE In this article, we present an image processing method based on ultrasound images to determine whether PE is associated with lower limb DVT. In terms of medical equipment, this new approach (Doppler ultrasound image processing) is inexpensive and relatively easy. METHODS To help physicians detect PE, we propose to process ultrasound images of patients with DVT. After a first step of histogram equalization, the analysis relies on bi-dimensional entropy measures. Two algorithms are tested: the bi-dimensional dispersion entropy ($DispEn_{2D}$) measure and the bi-dimensional fuzzy entropy ($FuzEn_{2D}$) measure. Thirty-two patients (12 women and 20 men, 67.63 ± 16.19 years old), split into two groups (16 with and 16 without PE), compose our database of around 1490 ultrasound images (split into seven different sizes from 32 × 32 px to 128 × 128 px). p-values, computed with the Mann-Whitney test, are used to determine whether the entropy values of the two groups differ significantly. Receiver operating characteristic (ROC) curves are plotted and analyzed for the most significant cases to assess whether entropy values can discriminate the two groups. RESULTS p-values show statistical differences between $FuzEn_{2D}$ of patients with and without PE for 112 × 112 px and 128 × 128 px images. The area under the ROC curve (AUC) is higher than 0.7 (the threshold for a fair test) for 112 × 112 and 128 × 128 px images, with the best AUC (0.72) obtained for 112 × 112 px images. CONCLUSIONS Bi-dimensional entropy measures applied to ultrasound images offer encouraging perspectives for PE detection: our first experiment, on a small dataset, shows that $FuzEn_{2D}$ on 112 × 112 px images is able to detect PE. The next step of our work will be to test this approach on a larger dataset and to integrate $FuzEn_{2D}$ into a machine learning algorithm. Furthermore, this study could also contribute to PE risk prediction for patients with VTE.
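The link between the Mann-Whitney test and the ROC analysis used above is direct: the AUC equals the normalised Mann-Whitney U statistic, i.e. the probability that a randomly drawn PE case has a higher entropy value than a randomly drawn non-PE case. A minimal sketch, with fabricated entropy values standing in for the per-image $FuzEn_{2D}$ results:

```python
import numpy as np

def mann_whitney_auc(pos, neg):
    """AUC via the Mann-Whitney U statistic: U / (n1 * n2).

    Counts, over all positive/negative pairs, how often the positive
    value exceeds the negative one (ties count one half).
    """
    a = np.asarray(pos, dtype=float)
    b = np.asarray(neg, dtype=float)
    # Pairwise comparison is fine for the small cohorts considered here.
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

# Fabricated entropy values -- not the study's data.
pe = [0.82, 0.79, 0.88, 0.75, 0.91]
no_pe = [0.70, 0.74, 0.78, 0.66, 0.72]
print(f"AUC = {mann_whitney_auc(pe, no_pe):.2f}")
```

With these toy values the AUC is 0.96; the study's best real-data AUC of 0.72 corresponds to a much larger overlap between the two groups' entropy values.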
Affiliation(s)
- Clément Hoffmann
- Internal and Vascular Medicine and Pulmonology Department, CHU Brest, Brest, France
- INSERM U1304 Groupe d'Etude de la Thrombose de Bretagne Occidentale (GETBO), University Brest, Brest, France
- F-CRIN INNOVTE, Saint-Etienne, France
- Guillaume Mahe
- Vascular Medicine Department, Centre Hospitalier Universitaire (CHU) de Rennes, Rennes, France
- INSERM CIC1414 CIC Rennes, Rennes, France
- Université de Rennes 2, M2S-EA 7470, Rennes, France
- Luc Bressollette
- Internal and Vascular Medicine and Pulmonology Department, CHU Brest, Brest, France
- INSERM U1304 Groupe d'Etude de la Thrombose de Bretagne Occidentale (GETBO), University Brest, Brest, France
- F-CRIN INNOVTE, Saint-Etienne, France
4. Deb SD, Jha RK. Breast UltraSound Image classification using fuzzy-rank-based ensemble network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104871]
5. Singh S, Hoque S, Zekry A, Sowmya A. Radiological Diagnosis of Chronic Liver Disease and Hepatocellular Carcinoma: A Review. J Med Syst 2023; 47:73. [PMID: 37432493] [PMCID: PMC10335966] [DOI: 10.1007/s10916-023-01968-7]
Abstract
Medical image analysis plays a pivotal role in the evaluation of diseases, including screening, surveillance, diagnosis, and prognosis. The liver is one of the major organs responsible for metabolism, protein and hormone synthesis, detoxification, and waste excretion. Patients with advanced liver disease and hepatocellular carcinoma (HCC) are often asymptomatic in the early stages; however, delays in diagnosis and treatment can lead to increased rates of decompensated liver disease, late-stage HCC, morbidity and mortality. Ultrasound (US) is a commonly used imaging modality for the diagnosis of chronic liver diseases, including fibrosis, cirrhosis and portal hypertension. In this paper, we first provide an overview of diagnostic methods for the stages of liver disease and discuss the role of computer-aided diagnosis (CAD) systems in diagnosing liver diseases. Second, we review the utility of machine learning and deep learning approaches as diagnostic tools. Finally, we present the limitations of existing studies and outline future directions to further improve diagnostic accuracy, reduce cost and subjectivity, and improve clinical workflow.
Affiliation(s)
- Sonit Singh
- School of CSE, UNSW Sydney, High St, Kensington, 2052, NSW, Australia.
- Shakira Hoque
- Gastroenterology and Hepatology Department, St George Hospital, Hogben St, Kogarah, 2217, NSW, Australia
- Amany Zekry
- St George and Sutherland Clinical Campus, School of Clinical Medicine, UNSW, High St, Kensington, 2052, NSW, Australia
- Gastroenterology and Hepatology Department, St George Hospital, Hogben St, Kogarah, 2217, NSW, Australia
- Arcot Sowmya
- School of CSE, UNSW Sydney, High St, Kensington, 2052, NSW, Australia
6. Chen Z, Ying TC, Chen J, Wang Y, Wu C, Su Z. Assessment of Renal Fibrosis in Patients With Chronic Kidney Disease Using Shear Wave Elastography and Clinical Features: A Random Forest Approach. Ultrasound Med Biol 2023; 49:1665-1671. [PMID: 37105772] [DOI: 10.1016/j.ultrasmedbio.2023.03.024]
Abstract
OBJECTIVE Renal fibrosis is the common pathological hallmark of chronic kidney disease (CKD) progression. In this study, a random forest (RF) classifier based on two-dimensional (2-D) shear wave elastography (SWE) and clinical features is proposed to differentiate the severity of renal fibrosis in patients with CKD. METHODS A total of 162 patients diagnosed with CKD who underwent 2-D SWE and renal biopsy were prospectively enrolled from April 2019 to December 2021 and randomized into training (n = 114) and validation (n = 48) cohorts at a ratio of 7:3. Least absolute shrinkage and selection operator (LASSO) regression and recursive feature elimination for support vector machines (SVM-RFE) were employed to select renal fibrosis-related features from clinical information and elastosonographic findings. An RF model was then constructed from the selected parameters in the training cohort and evaluated for discrimination, calibration and clinical utility in both cohorts. RESULTS The LASSO and SVM-RFE analyses revealed that age, sex, blood urea nitrogen, renal resistive index, hypertension and the 2-D SWE value were independent risk variables associated with renal fibrosis severity. The RF model incorporating these six variables exhibited good discrimination in both the derivation (area under the curve [AUC]: 0.84, 95% confidence interval [CI]: 0.76-0.91) and validation (AUC: 0.88, 95% CI: 0.77-0.98) cohorts. Moreover, the calibration curve revealed satisfactory predictive accuracy, and decision curve analysis showed a significant clinical net benefit. CONCLUSION The developed RF model, combining the 2-D SWE value with clinical information, showed satisfactory diagnostic performance and clinical practicality for differentiating moderate-severe from mild renal fibrosis, which may inform risk stratification for patients with CKD.
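The modelling pipeline described above can be sketched with scikit-learn. This is a minimal sketch under stated assumptions, not the study's code: the six selected variables (age, sex, blood urea nitrogen, renal resistive index, hypertension, 2-D SWE value) are simulated with random data, and the signal injected into the outcome is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 162                                   # cohort size from the study
X = rng.normal(size=(n, 6))               # stand-ins for the six predictors
# Make the outcome depend on the "SWE value" column so the model can learn.
y = (X[:, 5] + 0.5 * rng.normal(size=n) > 0).astype(int)

# 7:3 split into training and validation cohorts, as in the study design.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1])
print(f"validation AUC = {auc:.2f}")
```

In the study, feature selection (LASSO and SVM-RFE) precedes this step; here the six columns are taken as already selected.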
Affiliation(s)
- Ziman Chen
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Tin Cheung Ying
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Jiaxin Chen
- Department of Ultrasound, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Yingli Wang
- Ultrasound Department, EDAN Instruments, Inc., Shenzhen, China
- Chaoqun Wu
- Department of Ultrasound, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Zhongzhen Su
- Department of Ultrasound, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
7. Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748] [PMCID: PMC10296633] [DOI: 10.3390/cancers15123139]
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses but shows an increased false-negative rate owing to its high operator dependency. Underserved areas lack sufficient US expertise to diagnose breast lesions, resulting in delayed management. Deep learning neural networks may facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast lesions and monitoring prognosis. This article reviews recent research trends in neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss recent original research, analyzing which ultrasound modes and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared with other tasks, and that fewer studies have addressed prognosis than diagnosis. We also discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson
- Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
8. Han X, Gong B, Guo L, Wang J, Ying S, Li S, Shi J. B-mode ultrasound based CAD for liver cancers via multi-view privileged information learning. Neural Netw 2023; 164:369-381. [PMID: 37167750] [DOI: 10.1016/j.neunet.2023.03.028]
Abstract
B-mode ultrasound-based computer-aided diagnosis (CAD) models can help sonologists improve diagnostic performance for liver cancers, but they generally suffer from a bottleneck due to the limited structural and internal echogenicity information in B-mode ultrasound images. Contrast-enhanced ultrasound images provide additional diagnostic information on the dynamic blood perfusion of liver lesions, improving diagnostic accuracy. Since transfer learning has proven effective in promoting the performance of a target computer-aided diagnosis model by transferring knowledge from related imaging modalities, a multi-view privileged information learning (MPIL) framework is proposed to improve the diagnostic accuracy of single-modal B-mode ultrasound-based diagnosis for liver cancers. The framework makes full use of the shared label information between paired B-mode and contrast-enhanced ultrasound images to guide knowledge transfer. It consists of a novel supervised dual-view deep Boltzmann machine and a new deep multi-view SVM algorithm. The former implements knowledge transfer from multi-phase contrast-enhanced ultrasound images to the B-mode ultrasound-based diagnosis model via feature-level learning using privileged information, in contrast to the existing learning-using-privileged-information paradigm, which performs knowledge transfer in the classifier. The latter fuses and enhances the feature representations learned from three pre-trained supervised dual-view deep Boltzmann machine networks for the classification task. An experiment on a bimodal ultrasound liver cancer dataset shows that the proposed framework outperforms all compared algorithms, with a best classification accuracy of 88.91 ± 1.52%, sensitivity of 88.31 ± 2.02%, and specificity of 89.50 ± 3.12%, suggesting the effectiveness of the proposed MPIL framework for B-mode ultrasound-based CAD of liver cancers.
9. Gong L, Zhou P, Li JL, Liu WG. Investigating the diagnostic efficiency of a computer-aided diagnosis system for thyroid nodules in the context of Hashimoto's thyroiditis. Front Oncol 2023; 12:941673. [PMID: 36686823] [PMCID: PMC9850089] [DOI: 10.3389/fonc.2022.941673]
Abstract
Objectives This study investigates the efficacy of a computer-aided diagnosis (CAD) system in distinguishing between benign and malignant thyroid nodules in the context of Hashimoto's thyroiditis (HT) and evaluates the system's role in reducing unnecessary biopsies of benign lesions. Methods We included 137 nodules from 137 consecutive patients (mean age, 43.5 ± 11.8 years) who were histopathologically diagnosed with HT. Two-dimensional ultrasound images and videos of all thyroid nodules were analyzed by the CAD system and by two radiologists of different experience levels according to ACR TI-RADS. Diagnostic cutoff values of ACR TI-RADS were set at two levels (TR4 and TR5), and the sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of the CAD system and of the junior and senior radiologists were compared in both cases. Moreover, the ACR TI-RADS classification was revised according to the results of the CAD system, and the efficacy of recommended fine-needle aspiration (FNA) was evaluated by comparing the unnecessary biopsy rate and the malignancy rate of punctured nodules. Results The accuracy, sensitivity, specificity, PPV, and NPV of the CAD system were 0.876, 0.905, 0.830, 0.894, and 0.846, respectively. With TR4 as the cutoff, the AUCs of the CAD system and the junior and senior radiologists were 0.867, 0.628, and 0.722, respectively; the CAD system had the highest AUC (P < 0.0001). With TR5 as the cutoff, the AUCs were 0.867, 0.654, and 0.812, respectively; the CAD system had a higher AUC than the junior radiologist (P < 0.0001) but was comparable to the senior radiologist (P = 0.0709). With the assistance of the CAD system, both the junior and senior radiologists classified fewer nodules as TR4, the malignancy rate of punctured nodules increased by 30% and 22%, respectively, and unnecessary biopsies of benign lesions were reduced by nearly half. Conclusions The deep learning-based CAD system can improve radiologists' diagnostic performance in identifying benign and malignant thyroid nodules in the context of Hashimoto's thyroiditis and can inform FNA recommendations to reduce unnecessary biopsy rates.
10. Rajapaksa S, Khalvati F. Relevance maps: A weakly supervised segmentation method for 3D brain tumours in MRIs. Front Radiol 2022; 2:1061402. [PMID: 37492689] [PMCID: PMC10365288] [DOI: 10.3389/fradi.2022.1061402]
Abstract
With the increased reliance on medical imaging, deep convolutional neural networks (CNNs) have become an essential tool in medical imaging-based computer-aided diagnostic pipelines. However, training accurate and reliable classification models often requires large, finely annotated datasets. To alleviate this, weakly supervised methods can be used to obtain local information, such as regions of interest, from global labels. This work proposes a weakly supervised pipeline that extracts Relevance Maps of medical images from pre-trained 3D classification models using localized perturbations. The extracted Relevance Map describes a given region's importance to the classification model and yields a segmentation of that region. Furthermore, we propose a novel optimal-perturbation generation method that exploits 3D superpixels to find the area most relevant to a given classification, using a U-Net architecture. This model is trained with a perturbation loss, which maximizes the difference between unperturbed and perturbed predictions. We validated the methodology by applying it to the segmentation of glioma brain tumours in MRI scans using only classification labels for glioma type. The proposed method outperforms existing methods in both Dice similarity coefficient for segmentation and the resolution of the visualizations.
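The perturbation loss described above can be schematised in a few lines: the loss rewards perturbations that change the classifier's prediction as much as possible, so minimising it drives the mask toward the most relevant region. The toy `classifier` and the mask layout below are illustrative stand-ins, not the paper's 3D CNN, U-Net, or superpixel machinery.

```python
import numpy as np

def classifier(x):
    """Toy stand-in for a pre-trained 3D classification model."""
    return 1.0 / (1.0 + np.exp(-x.mean()))

def perturbation_loss(x, mask, baseline=0.0):
    """Negative prediction shift when the masked region is replaced.

    Minimising this loss maximises |f(x) - f(x_perturbed)|, so the
    optimal mask covers the region most relevant to the prediction.
    """
    x_pert = np.where(mask, baseline, x)
    return -abs(classifier(x) - classifier(x_pert))

x = np.full((4, 4, 4), 2.0)              # toy 3D volume
mask = np.zeros_like(x, dtype=bool)
mask[:2] = True                          # perturb half the volume
print(perturbation_loss(x, mask))
```

In the paper, the mask is produced by a U-Net over 3D superpixels and trained by gradient descent on this objective; here the mask is fixed by hand to show the loss behaviour.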
Affiliation(s)
- Sajith Rajapaksa
- Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Farzad Khalvati
- Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
11. Ge Z, Wang Y, Wang Y, Fang S, Wang H, Li J. Diagnostic value of contrast-enhanced ultrasound in intravenous leiomyomatosis: a single-center experiences. Front Oncol 2022; 12:963675. [PMID: 36033528] [PMCID: PMC9403056] [DOI: 10.3389/fonc.2022.963675]
Abstract
Objective Intravenous leiomyomatosis (IVL) is a rare disease, and few studies have focused on the diagnostic value of contrast-enhanced ultrasound (CEUS) in this condition. This study aimed to investigate the diagnostic value of CEUS in IVL and to summarize IVL-specific CEUS characteristics. Materials and Methods From December 2016 to March 2021, 93 patients admitted to our hospital with inferior vena cava (IVC) occupying lesions were prospectively enrolled and underwent detailed multi-modality ultrasound examinations, including conventional and contrast-enhanced scans. The diagnostic value of CEUS and conventional ultrasound (CU) in IVL was compared, and the specific IVL signs were summarized. Results Among the 93 patients with an inferior vena cava mass, 67 had IVL and 26 did not. Inter-observer agreement between the two senior doctors was good (Kappa coefficient = 0.71, 95% CI: 0.572-0.885). The area under the ROC curve of CU for IVL diagnosis was 0.652 (95% CI: 0.528-0.776), with sensitivity, specificity, accuracy, positive predictive value, negative predictive value, missed diagnosis rate, and misdiagnosis rate of 61.1%, 69.2%, 63.4%, 83.7%, 40.9%, 38.8%, and 30.8%, respectively. The area under the curve (AUC) for IVL diagnosis by CEUS was 0.807 (95% CI: 0.701-0.911), with corresponding values of 82.0%, 84.6%, 82.8%, 93.2%, 64.7%, 15.4%, and 17.9%, respectively. In CEUS mode, the "sieve hole sign" and "multi-track sign" were detected in 57 lesions, a higher detection rate than with CU (p < 0.01). Conclusion CEUS better depicts the fine blood flow inside IVL, which is important for differential diagnosis. Moreover, CEUS provides more diagnostic information than CU, compensating for CU's limited ability to detect blood flow within the lesion. This technique is thus of great value for IVL diagnosis.
12. Tang F, Ding J, Wang L, Ning C. A Novel Distant Domain Transfer Learning Framework for Thyroid Image Classification. Neural Process Lett 2022; 55:1-17. [PMID: 35789884] [PMCID: PMC9243866] [DOI: 10.1007/s11063-022-10940-4]
Abstract
Medical ultrasound imaging is currently the preferred method for early diagnosis of thyroid nodules. Radiologists' analysis of ultrasound images depends heavily on their clinical experience and is susceptible to intra- and inter-observer variability. Although end-to-end deep learning can address these limitations, the difficulty of acquiring annotated medical images makes it very challenging. Transfer learning can alleviate these problems, but a large gap between the source and target domains leads to negative transfer. In this paper, a novel transfer learning method with a distant domain high-level feature fusion (DHFF) model is proposed. It reduces the distribution distance between the source and target domains while maintaining the characteristics of each domain, which avoids excessive feature fusion while enabling the model to learn more valuable transfer knowledge. The DHFF is validated in experiments on multiple public source datasets and private target datasets. The results show that the classification accuracy of DHFF reaches 88.92% with thyroid ultrasound auxiliary source domains, up to 8% higher than existing transfer and distant-transfer algorithms.
Affiliation(s)
- Fenghe Tang
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Jianrui Ding
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Lingtao Wang
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Chunping Ning
- Ultrasound Department, The Affiliated Hospital of Qingdao University, Qingdao, China
13. Breast Cancer Prediction Empowered with Fine-Tuning. Comput Intell Neurosci 2022; 2022:5918686. [PMID: 35720929] [PMCID: PMC9203172] [DOI: 10.1155/2022/5918686]
Abstract
Over the past five years, breast cancer was diagnosed in about 7.8 million women worldwide, making it the most widespread cancer and the second leading cause of death among women. Early prevention and diagnosis systems for breast cancer are therefore important. Neural networks can automatically extract multiple features and perform predictions on breast cancer, but training them requires many labeled images, which are scarce for some modalities such as breast magnetic resonance imaging (MRI). Fine-tuning a pre-trained network addresses this limitation. In this paper, we propose a fine-tuned AlexNet model to extract features from breast cancer images for training. In the proposed model, the first and last three layers of AlexNet are updated to detect normal and abnormal regions of breast cancer. The proposed model achieved accuracies of 98.44% and 98.1% during training and testing, respectively. This study shows that fine-tuning a neural network can detect breast cancer on MRI images, and that training a classifier by feature extraction with the proposed model is faster and more efficient.
14
Kim GN, Zhang HY, Cho YE, Ryu SJ. Differential Screening of Herniated Lumbar Discs Based on Bag of Visual Words Image Classification Using Digital Infrared Thermographic Images. Healthcare (Basel) 2022; 10:healthcare10061094. [PMID: 35742145 PMCID: PMC9222567 DOI: 10.3390/healthcare10061094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 05/29/2022] [Accepted: 06/06/2022] [Indexed: 11/16/2022] Open
Abstract
Doctors in primary hospitals can form an impression of lumbosacral radiculopathy from a physical exam, but they need medical images, such as an expensive MRI, for diagnosis; the doctor then performs a foraminal root block on the target root for pain control. However, there has been no adequate screening imaging examination for precisely identifying L5 and S1 lumbosacral radiculopathy, which is the most prevalent in the clinical field. Therefore, to perform differential screening of L5 and S1 lumbosacral radiculopathy, the authors applied digital infrared thermographic images (DITI) to a machine learning (ML) algorithm, the bag-of-visual-words method. The DITI dataset included data from a healthy population and from radiculopathy patients with herniated lumbar discs (HLDs) at L4/5 and L5/S1. A total of 842 patients were enrolled, and the dataset was split in a 7:3 ratio into training and test sets to evaluate model performance. The average accuracy was 0.72 and 0.67, the average precision was 0.71 and 0.77, the average recall was 0.69 and 0.74, and the F1 score was 0.70 and 0.75 for the training and test datasets, respectively. Applying the bag-of-visual-words algorithm to DITI classification will aid the differential screening of lumbosacral radiculopathy and increase the therapeutic effect of primary pain interventions at an economical cost.
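As a rough illustration of the bag-of-visual-words pipeline the authors apply (all sizes and data here are synthetic, not from the study): local patch descriptors are clustered into a visual vocabulary, and each image is then encoded as a histogram of its nearest visual words.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_vocabulary(descriptors, k, iters=20):
    """Tiny k-means: cluster local patch descriptors into k visual words."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)               # nearest center per descriptor
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def encode(patches, centers):
    """Represent one image as a normalized histogram of visual words."""
    d = np.linalg.norm(patches[:, None] - centers[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Toy patch descriptors pooled from many "thermographic images".
all_patches = rng.normal(size=(500, 8))
vocab = build_vocabulary(all_patches, k=16)
hist = encode(rng.normal(size=(40, 8)), vocab)  # one image -> one histogram
```

The resulting histograms are what a downstream classifier would consume; descriptor type, vocabulary size, and classifier are the study's design choices and are not reproduced here.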
Affiliation(s)
- Gi Nam Kim
- Department of Spinal Neurosurgery, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Korea
- Ho Yeol Zhang
- Department of Neurosurgery, National Health Insurance Service Ilsan Hospital, Yonsei University College of Medicine, Goyang 10444, Korea
- Yong Eun Cho
- Department of Spinal Neurosurgery, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Korea
- Seung Jun Ryu
- Department of Neurosurgery, National Health Insurance Service Ilsan Hospital, Yonsei University College of Medicine, Goyang 10444, Korea
- Correspondence: ; Tel.: +82-10-2367-9263
15
Zhou B, Yang X, Curran WJ, Liu T. Artificial Intelligence in Quantitative Ultrasound Imaging: A Survey. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2022; 41:1329-1342. [PMID: 34467542 DOI: 10.1002/jum.15819] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 08/01/2021] [Accepted: 08/16/2021] [Indexed: 06/13/2023]
Abstract
Quantitative ultrasound (QUS) imaging is a safe, reliable, inexpensive, and real-time technique for extracting physically descriptive parameters to assess pathologies. Compared with other major imaging modalities such as computed tomography and magnetic resonance imaging, QUS suffers from two major drawbacks: poor image quality and inter- and intra-observer variability. Therefore, there is a great need to develop automated methods to improve the image quality of QUS. In recent years, there has been increasing interest in artificial intelligence (AI) applications in medical imaging, and a large number of research studies on AI in QUS have been conducted. The purpose of this review is to describe and categorize recent research into AI applications in QUS. We first introduce the AI workflow and then discuss the various AI applications in QUS. Finally, challenges and future potential AI applications in QUS are discussed.
Affiliation(s)
- Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
16
Mitani Y, Fisher RB, Fujita Y, Hamamoto Y, Sakaida I. Image Correction Methods for Regions of Interest in Liver Cirrhosis Classification on CNNs. SENSORS (BASEL, SWITZERLAND) 2022; 22:3378. [PMID: 35591069 PMCID: PMC9105852 DOI: 10.3390/s22093378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 04/24/2022] [Accepted: 04/26/2022] [Indexed: 06/15/2023]
Abstract
The average error rate in liver cirrhosis classification on B-mode ultrasound images using the traditional pattern-recognition approach is still too high. To improve classification performance, we focus on image correction methods and a convolutional neural network (CNN) approach. We investigate the impact of image correction methods on the region-of-interest (ROI) images that are input into the CNN for classifying liver cirrhosis from B-mode ultrasound images. In this paper, image correction methods based on tone curves are developed. The experimental results show that the image correction methods improve the quality of the ROI images; by enhancing the contrast of the ROI images, the image quality improves and thus the generalization ability of the CNN also improves.
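A tone-curve correction of the kind this paper studies can be illustrated with a simple gamma curve applied to an 8-bit ROI; the exponent below is purely illustrative, and the paper's actual curves are not reproduced here.

```python
import numpy as np

def tone_curve(roi, gamma=0.6):
    """Apply a gamma-style tone curve to an 8-bit ROI image.

    gamma < 1 brightens and stretches dark regions (e.g. speckle shadow);
    gamma > 1 darkens. Endpoints 0 and 255 are preserved.
    """
    x = roi.astype(float) / 255.0
    return (255.0 * x ** gamma).astype(np.uint8)

# A synthetic ROI containing every gray level once.
roi = np.arange(256, dtype=np.uint8).reshape(16, 16)
corrected = tone_curve(roi)
```

Midtones move up while black and white stay fixed, which is the contrast-stretching effect the abstract credits for the improved CNN generalization.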
Affiliation(s)
- Yoshihiro Mitani
- National Institute of Technology, Ube College, Ube 755-8555, Japan
- Robert B. Fisher
- School of Informatics, The University of Edinburgh, Edinburgh EH8 9BT, UK
- Yusuke Fujita
- Faculty of Engineering, Yamaguchi University, Ube 755-8611, Japan
- Yoshihiko Hamamoto
- Faculty of Engineering, Yamaguchi University, Ube 755-8611, Japan
- Isao Sakaida
- School of Medicine and Health Sciences, Yamaguchi University, Ube 755-8505, Japan
17
Jin C, Wang S, Yang G, Li E, Liang Z. A Review of the Methods on Cobb Angle Measurements for Spinal Curvature. SENSORS 2022; 22:s22093258. [PMID: 35590951 PMCID: PMC9101880 DOI: 10.3390/s22093258] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 04/11/2022] [Accepted: 04/19/2022] [Indexed: 11/16/2022]
Abstract
Scoliosis is a common disease of the spine and requires regular monitoring due to its progressive nature. A preferred indicator for assessing scoliosis is the Cobb angle, which is currently measured either manually by the relevant medical staff or semi-automatically, aided by a computer. These methods are not only labor-intensive but also suffer from inter-observer and intra-observer variability in precision. Therefore, a reliable and convenient method is urgently needed. With the development of computer vision and deep learning, it is possible to automatically calculate Cobb angles by processing X-ray or CT/MR/US images. In this paper, the research progress in Cobb angle measurement in recent years is reviewed from the perspectives of computer vision and deep learning. By comparing the measurement performance of typical methods, their advantages and disadvantages are analyzed. Finally, the key issues and their development trends are also discussed.
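The Cobb angle itself is simply the angle between the most-tilted upper and lower end-vertebra endplates, which automated pipelines typically compute from estimated endplate slopes or landmark coordinates. A minimal sketch (slopes here are synthetic):

```python
import numpy as np

def cobb_angle(slope_upper, slope_lower):
    """Cobb angle in degrees between the most-tilted upper and lower
    end-vertebra endplates, given their slopes in image coordinates."""
    a1 = np.arctan(slope_upper)
    a2 = np.arctan(slope_lower)
    return abs(np.degrees(a1 - a2))

# Endplates tilted +15 deg and -10 deg give a Cobb angle of 25 deg.
angle = cobb_angle(np.tan(np.radians(15)), np.tan(np.radians(-10)))
```

The hard part the reviewed methods address is locating those endplates reliably; the final angle computation is the trivial step above.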
Affiliation(s)
- Chen Jin
- The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shengru Wang
- Peking Union Medical College Hospital, Beijing 100005, China
- Guodong Yang
- The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Correspondence: ; Tel.: +86-10-82544504
- En Li
- The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zize Liang
- The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
18
Bandari E, Beuzen T, Habashy L, Raza J, Yang X, Kapeluto J, Meneilly G, Madden K. Machine Learning Decision Support for Bedside Ultrasound to Detect Lipohypertrophy. JMIR Form Res 2022; 6:e34830. [PMID: 35404833 PMCID: PMC9123536 DOI: 10.2196/34830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 03/14/2022] [Accepted: 04/09/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND The most common dermatological complication of insulin therapy is lipohypertrophy. OBJECTIVE As a proof of concept, we built and tested an automated model using a convolutional neural network (CNN) to detect the presence of lipohypertrophy in ultrasound images. METHODS Ultrasound images were obtained in a blinded fashion using a portable GE LOGIQ e machine with an L8-18i-D probe (5-18 MHz; GE Healthcare, Frankfurt, Germany). The data were split into training, validation, and test sets of 70%, 15%, and 15%, respectively. Given the small size of the dataset, image augmentation techniques were used to expand the training set and improve the model's generalizability. To compare the performance of the different architectures, the team considered the accuracy and recall of the models on the test set. RESULTS The DenseNet CNN architecture had the highest accuracy (76%) and recall (76%) in detecting lipohypertrophy in ultrasound images compared with other CNN architectures. Additional work showed that the YOLOv5m object detection model could help identify the approximate location of lipohypertrophy in images flagged as containing lipohypertrophy by the DenseNet CNN. CONCLUSIONS We demonstrated the ability of machine learning approaches to automate the process of detecting and locating lipohypertrophy.
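The 70/15/15 split and flip-style augmentation described in METHODS can be sketched generically; the snippet below is an illustration with toy arrays, not the study's actual pipeline or augmentation set.

```python
import numpy as np

rng = np.random.default_rng(42)

def split_70_15_15(n):
    """Shuffle indices and split them 70/15/15 into train/val/test."""
    idx = rng.permutation(n)
    n_tr, n_va = int(0.70 * n), int(0.15 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

def augment(images):
    """Expand a small training set with horizontal and vertical flips."""
    return np.concatenate([images, images[:, :, ::-1], images[:, ::-1, :]])

images = rng.random((40, 32, 32))        # toy stand-ins for ultrasound frames
train, val, test = split_70_15_15(len(images))
train_images = augment(images[train])    # only the training split is augmented
```

Note that augmentation is applied after the split and only to the training set, so validation and test images remain untouched, which is what makes the reported test metrics meaningful.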
Affiliation(s)
- Ela Bandari
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Tomas Beuzen
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Lara Habashy
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Javairia Raza
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Xudong Yang
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Jordanna Kapeluto
- Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, CA; Division of Endocrinology, Department of Medicine, University of British Columbia, Vancouver, CA
- Graydon Meneilly
- Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, CA; Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Vancouver, CA
- Kenneth Madden
- Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, CA; Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Vancouver, CA; Centre for Hip Health and Mobility, Vancouver, CA
19
Yan Y, Tang L, Huang H, Yu Q, Xu H, Chen Y, Chen M, Zhang Q. Four-quadrant fast compressive tracking of breast ultrasound videos for computer-aided response evaluation of neoadjuvant chemotherapy in mice. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 217:106698. [PMID: 35217304 DOI: 10.1016/j.cmpb.2022.106698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 01/26/2022] [Accepted: 02/08/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Neoadjuvant chemotherapy (NAC) is a valuable treatment approach for locally advanced breast cancer. Contrast-enhanced ultrasound (CEUS) potentially enables assessment of the therapeutic response to NAC. To evaluate the response accurately, quantitatively, and objectively, a method that can effectively compensate for lesion motion in CEUS videos of breast cancer is urgently needed. METHODS We proposed the four-quadrant fast compressive tracking (FQFCT) approach to automatically perform CEUS video tracking and motion compensation for mice undergoing NAC. The FQFCT divides the tracking window into four smaller windows at the four quadrants of a breast lesion and formulates the tracking at each quadrant as a binary classification task. After applying the FQFCT to the breast cancer videos, quantitative CEUS features, including the mean transit time (MTT), were computed. All mice showed a pathological response to NAC, and the features between pre-treatment (day 1) and post-treatment (day 3 and day 5) were statistically compared in these responders. RESULTS When tracking the CEUS videos of mice, the average tracking error of the FQFCT was 0.65 mm, a reduction of 46.72% compared with the classic fast compressive tracking method (1.22 mm). After compensation with the FQFCT, the MTT on day 5 of NAC was significantly different from the MTT before NAC (day 1) (p = 0.013). CONCLUSIONS The FQFCT improves the accuracy of CEUS video tracking and contributes to the computer-aided response evaluation of NAC for breast cancer in mice.
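One simple way to summarize a contrast time-intensity curve (TIC) is an intensity-weighted mean of the time axis. The sketch below illustrates that idea with synthetic numbers; the paper's exact MTT definition may differ (e.g. model-fitted rather than discrete).

```python
import numpy as np

def mean_transit_time(t, intensity):
    """Intensity-weighted mean of the time axis: a common discrete
    summary of a contrast time-intensity curve."""
    intensity = np.asarray(intensity, float)
    return float(np.sum(t * intensity) / np.sum(intensity))

t = np.arange(10.0)                                    # seconds
tic = np.array([0, 1, 4, 9, 7, 5, 3, 2, 1, 0], float)  # wash-in / wash-out
mtt = mean_transit_time(t, tic)
```

This is exactly the kind of feature that becomes meaningless if the lesion drifts out of the measurement window, which is why the motion compensation step matters before computing it.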
Affiliation(s)
- Yifei Yan
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Lei Tang
- Department of Ultrasound, Tongren Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200050, China
- Haibo Huang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Qihui Yu
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Haohao Xu
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Ying Chen
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Man Chen
- Department of Ultrasound, Tongren Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200050, China
- Qi Zhang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
20
Song D, Zhang Z, Li W, Yuan L, Zhang W. Judgment of benign and early malignant colorectal tumors from ultrasound images with deep multi-View fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106634. [PMID: 35081497 DOI: 10.1016/j.cmpb.2022.106634] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 11/28/2021] [Accepted: 01/11/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Colorectal cancer (CRC) is currently one of the main cancers worldwide, with a high incidence in the elderly. In the diagnosis of CRC, endorectal ultrasound plays an important role in distinguishing benign from early malignant tumors. However, early-stage malignant tumors are not easy to identify visually, and experts usually seek help from multi-view images, which increases the workload and still carries a certain probability of misdiagnosis. In recent years, with the widespread use of deep learning methods in the analysis of medical images, it has become necessary to design an effective computer-aided diagnosis (CAD) system for CRC based on multi-view endorectal ultrasound images. METHOD In this study, we proposed a CAD system for judging benign and early malignant colorectal tumors and constructed the first multi-view ultrasound image dataset of CRC to validate our algorithm. Our system is an end-to-end model based on a deep neural network (DNN) that includes a feature extraction module based on dense blocks, a multi-view fusion module, and a Multi-Layer Perceptron-based classifier. A center loss was used, for the first time in CAD tasks, to optimize our model. RESULT On the constructed dataset, the proposed system surpasses expert diagnosis in accuracy, sensitivity, specificity, and F1-score. Compared with popular deep classification networks and other CAD methods, the algorithm achieves the best performance. Comparative experiments using different feature extraction methods, different view-fusion strategies, and different classifiers verify the effectiveness of each part of the algorithm. CONCLUSION We propose a DNN-based CAD system for judging benign and early malignant colorectal tumors, which combines information from ultrasound images of different views for a comprehensive judgment. On the first CRC multi-view ultrasound image dataset, which we constructed, our method outperforms expert diagnosis and all other methods, and the effectiveness of each part of the system has been verified. Our system has application value for the early diagnosis of CRC in future medical practice.
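The multi-view idea can be illustrated with a deliberately simplified late-fusion stand-in (the paper fuses earlier, at the feature level, with dense blocks and a learned fusion module); all weights and sizes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def fuse_views(view_features, w, b):
    """Late-fusion stand-in: average per-view feature vectors, then apply
    a linear classifier head over the fused representation."""
    fused = np.mean(view_features, axis=0)   # combine evidence across views
    logits = fused @ w + b
    return int(logits.argmax())

views = [rng.normal(size=16) for _ in range(3)]   # 3 endorectal US views
w, b = rng.normal(size=(16, 2)), np.zeros(2)
label = fuse_views(views, w, b)   # 0 = benign, 1 = early malignant
```

The point of multi-view fusion, in either the simple form above or the paper's learned form, is that no single view has to carry the whole diagnostic burden.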
Affiliation(s)
- Dan Song
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Zheqi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Wenhui Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Lijun Yuan
- Department of Colorectal Surgery, Tianjin Union Medical Center, Tianjin 300121, China; Tianjin Institute of Coloproctology, Tianjin 300121, China
- Wenshu Zhang
- EUREKA Robotics Centre, School of Technologies, Cardiff Metropolitan University, Cardiff, Wales, United Kingdom
21
High-Frequency Ultrasound Dataset for Deep Learning-Based Image Quality Assessment. SENSORS 2022; 22:s22041478. [PMID: 35214381 PMCID: PMC8875486 DOI: 10.3390/s22041478] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 02/09/2022] [Accepted: 02/12/2022] [Indexed: 12/04/2022]
Abstract
This study addresses high-frequency ultrasound image quality assessment for computer-aided diagnosis of skin. In recent decades, high-frequency ultrasound imaging has opened up new opportunities in dermatology, utilizing the most recent deep learning-based algorithms for automated image analysis. An individual dermatological examination contains either a single image, a couple of pictures, or an image series acquired during probe movement. The estimated skin parameters may depend on the probe position, orientation, or acquisition setup; consequently, the more images analyzed, the more precise the obtained measurements. Therefore, for automated measurements, the best choice is to acquire an image series and then analyze its parameters statistically. However, besides correctly acquired images, the resulting series contains plenty of non-informative data: images with various artifacts or noise, and images acquired at time stamps when the ultrasound probe had no contact with the patient's skin. All of these influence further analysis, leading to misclassification or incorrect image segmentation. Therefore, an automated image selection step is crucial. To meet this need, we collected and shared 17,425 high-frequency images of facial skin from 516 measurements of 44 patients. Two experts annotated each image as correct or not. The proposed framework uses a deep convolutional neural network followed by a fuzzy reasoning system to automatically assess the quality of the acquired data. Different approaches to binary and multi-class image analysis, based on the VGG-16 model, were developed and compared. The best classification results reach 91.7% accuracy for the binary analysis and 82.3% for the multi-class analysis.
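The final CNN-plus-fuzzy-reasoning stage maps a per-frame quality score to a keep/discard decision. The toy stand-in below uses illustrative thresholds, not the paper's actual membership functions.

```python
def fuzzy_quality_gate(p_correct, low=0.3, high=0.7):
    """Toy stand-in for the CNN + fuzzy reasoning step: map the CNN's
    'correct frame' probability to accept / reject / uncertain.

    The thresholds are illustrative; a real fuzzy system would combine
    graded memberships of several inputs via rules rather than hard cuts.
    """
    if p_correct >= high:
        return "accept"
    if p_correct <= low:
        return "reject"
    return "uncertain"
```

Frames gated as "uncertain" could be deferred to an operator, which is one practical reason to put a reasoning layer after the raw CNN score.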
22
Ying Z, Xiaohong J, Yijie D, Juan L, Yilai C, Congcong Y, Weiwei Z, Jianqiao Z. Using S-Detect to Improve Breast Ultrasound: The Different Combined Strategies Based on Radiologist Experience. ADVANCED ULTRASOUND IN DIAGNOSIS AND THERAPY 2022. [DOI: 10.37015/audt.2022.220007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
23
A deep-learning framework for metacarpal-head cartilage-thickness estimation in ultrasound rheumatological images. Comput Biol Med 2021; 141:105117. [PMID: 34968861 DOI: 10.1016/j.compbiomed.2021.105117] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 11/30/2021] [Accepted: 12/02/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Rheumatoid arthritis (RA) is a chronic disease characterized by erosive symmetrical polyarthritis. Bone and cartilage are the main joint targets of this disease, and cartilage damage is one of the most relevant determinants of physical disability in RA patients. Cartilage damage is nowadays assessed by clinicians, who manually measure cartilage thickness in ultrasound (US) images. This poses issues of intra- and inter-observer variability. Relying on metacarpal-head US images acquired from 38 subjects, this work addresses the problem of automatic cartilage-thickness measurement by designing a new deep-learning (DL) framework. METHODS The framework consists of a convolutional neural network (CNN), responsible for regressing cartilage-interface distance fields, followed by a post-processing step that delineates the two cartilage interfaces from the predicted distance fields and computes the cartilage thickness. RESULTS Our framework achieved encouraging results, with a mean absolute difference (ADF) of 0.032 (±0.026) mm against manual thickness annotation by an expert clinician. When evaluating the intra-observer variability, we obtained an ADF of 0.036 (±0.028) mm. CONCLUSION The proposed framework achieved an ADF against manual annotation that was comparable to the intra-observer variability, proving the potential of DL in the field. SIGNIFICANCE This work is the first to address the problem of automatic cartilage-thickness estimation in US rheumatological images with DL, paving the way for future research in the field. From a clinical perspective, the proposed framework proved to be a valuable tool for supporting the clinical routine, increasing the reproducibility of cartilage-thickness measurements.
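The post-processing step, turning two delineated cartilage interfaces into a thickness value, essentially reduces to averaging the distance between the interfaces and converting pixels to millimetres. A minimal sketch with synthetic interface coordinates and pixel spacing:

```python
import numpy as np

def cartilage_thickness(upper_y, lower_y, mm_per_pixel):
    """Mean vertical distance between two delineated cartilage
    interfaces, converted to millimetres."""
    upper_y = np.asarray(upper_y, float)
    lower_y = np.asarray(lower_y, float)
    return float(np.mean(np.abs(lower_y - upper_y)) * mm_per_pixel)

# Interface row coordinates per image column (synthetic), 0.05 mm/pixel.
upper = np.array([40.0, 40.5, 41.0, 40.5])
lower = np.array([46.0, 46.5, 47.5, 46.5])
thickness = cartilage_thickness(upper, lower, mm_per_pixel=0.05)
```

In the paper's framework the interfaces come from the CNN-regressed distance fields; here they are given directly, since only the final measurement step is being illustrated.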
24
Moga TV, David C, Popescu A, Lupusoru R, Heredea D, Ghiuchici AM, Foncea C, Burdan A, Sirli R, Danilă M, Ratiu I, Bizerea-Moga T, Sporea I. Multiparametric Ultrasound Approach Using a Tree-Based Decision Classifier for Inconclusive Focal Liver Lesions Evaluated by Contrast Enhanced Ultrasound. J Pers Med 2021; 11:jpm11121388. [PMID: 34945860 PMCID: PMC8709328 DOI: 10.3390/jpm11121388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Revised: 12/06/2021] [Accepted: 12/13/2021] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND Multiparametric ultrasound (MPUS) is a concept whereby the examiner is encouraged to use the latest features of an ultrasound machine. The aim of this study was to reanalyze focal liver lesions (FLLs) that were inconclusive on contrast-enhanced ultrasound (CEUS) using the MPUS approach with the help of a tree-based decision classifier. MATERIALS AND METHODS We retrospectively analyzed FLLs that were inconclusive upon CEUS examination in our department, focusing on samples taken over a period of two years (2017-2018). MPUS reanalysis followed a three-step algorithm, taking into account the liver stiffness measurement (LSM), time-intensity curve (TIC) analysis, and parametric imaging (PI). After processing all steps of the algorithm, a binary decision tree classifier (BDTC) was used to achieve a software-assisted decision. RESULTS Area was the only TIC-CEUS parameter that showed a significant difference between malignant and benign lesions, with a cutoff of >-19.3 dB for the washout phenomenon (AUROC = 0.58, Se = 74.0%, Sp = 45.7%). Using the BDTC algorithm, we correctly classified 71 out of 91 lesions according to their malignant or benign status, with an accuracy of 78.0% (sensitivity = 62%, specificity = 45%, and precision = 80%). CONCLUSIONS By reevaluating inconclusive FLLs that had been analyzed via CEUS using MPUS, we determined that 78% of the lesions were malignant and, in 28% of them, we established the lesion type.
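The first split of such a decision tree is a single-feature threshold. The sketch below applies the reported Area cutoff of -19.3 dB to synthetic lesions and computes sensitivity and specificity; the data, and the use of this cutoff in isolation, are illustrative only (the study's classifier combines several MPUS steps).

```python
import numpy as np

def classify_by_area(area_db, cutoff=-19.3):
    """Label a lesion malignant when its TIC 'Area' exceeds the cutoff (dB),
    mirroring the single-feature threshold step of a decision tree."""
    return np.asarray(area_db) > cutoff

def sens_spec(pred, truth):
    """Sensitivity and specificity of boolean predictions vs. truth."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    sens = np.sum(pred & truth) / truth.sum()
    spec = np.sum(~pred & ~truth) / (~truth).sum()
    return float(sens), float(spec)

area = np.array([-25.0, -21.0, -18.0, -10.0, -19.0, -30.0])   # synthetic, dB
truth = np.array([False, False, True, True, True, False])     # malignant?
pred = classify_by_area(area)
sens, spec = sens_spec(pred, truth)
```

On real data a single threshold performs as modestly as the reported AUROC of 0.58 suggests, which is exactly why the tree combines it with LSM and parametric-imaging steps.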
Affiliation(s)
- Tudor Voicu Moga
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Ciprian David
- Electronics and Telecommunications Faculty, “Politehnica” University of Timișoara, 300006 Timișoara, Romania
- Alina Popescu
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Raluca Lupusoru
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Center for Modeling Biological Systems and Data Analysis, Department of Functional Sciences, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
- Correspondence: ; Tel.: +40-733912028
- Darius Heredea
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Ana M. Ghiuchici
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Camelia Foncea
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Adrian Burdan
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Roxana Sirli
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Mirela Danilă
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Iulia Ratiu
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
- Teofana Bizerea-Moga
- Department of Pediatrics—1st Pediatric Discipline, “Victor Babeș” University of Medicine and Pharmacy, 300041 Timisoara, Romania
- Ioan Sporea
- Advanced Regional Research Center in Gastroenterology and Hepatology, Department of Gastroenterology and Hepatology, “Victor Babeş” University of Medicine and Pharmacy, 300041 Timişoara, Romania
25
Cui W, Peng Y, Yuan G, Cao W, Cao Y, Lu Z, Ni X, Yan Z, Zheng J. FMRNet: A fused network of multiple tumoral regions for breast tumor classification with ultrasound images. Med Phys 2021; 49:144-157. [PMID: 34766623 DOI: 10.1002/mp.15341] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 10/21/2021] [Accepted: 10/22/2021] [Indexed: 12/16/2022] Open
Abstract
PURPOSE Recent studies have illustrated that the peritumoral regions of medical images have value for clinical diagnosis. However, the existing approaches using peritumoral regions mainly focus on the diagnostic capability of the single region and ignore the advantages of effectively fusing the intratumoral and peritumoral regions. In addition, these methods need accurate segmentation masks in the testing stage, which are tedious and inconvenient in clinical applications. To address these issues, we construct a deep convolutional neural network that can adaptively fuse the information of multiple tumoral-regions (FMRNet) for breast tumor classification using ultrasound (US) images without segmentation masks in the testing stage. METHODS To sufficiently excavate the potential relationship, we design a fused network and two independent modules to extract and fuse features of multiple regions simultaneously. First, we introduce two enhanced combined-tumoral (EC) region modules, aiming to enhance the combined-tumoral features gradually. Then, we further design a three-branch module for extracting and fusing the features of intratumoral, peritumoral, and combined-tumoral regions, denoted as the intratumoral, peritumoral, and combined-tumoral module. Especially, we design a novel fusion module by introducing a channel attention module to adaptively fuse the features of three regions. The model is evaluated on two public datasets including UDIAT and BUSI with breast tumor ultrasound images. Two independent groups of experiments are performed on two respective datasets using the fivefold stratified cross-validation strategy. Finally, we conduct ablation experiments on two datasets, in which BUSI is used as the training set and UDIAT is used as the testing set. RESULTS We conduct detailed ablation experiments about the proposed two modules and comparative experiments with other existing representative methods. 
The experimental results show that the proposed method yields state-of-the-art performance on both datasets. In particular, on the UDIAT dataset, the proposed FMRNet achieves an accuracy of 0.945 and a specificity of 0.945. Moreover, the precision (PRE = 0.909) on the BUSI dataset improves by 21.6% over the best-performing existing method. CONCLUSION The proposed FMRNet shows good performance in breast tumor classification with US images and proves its capability of exploiting and fusing the information of multiple tumoral regions. Furthermore, FMRNet has potential value in classifying other types of cancers using multiple tumoral regions of other kinds of medical images.
Affiliation(s)
- Wenju Cui
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Yunsong Peng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Gang Yuan
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Weiwei Cao
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Yuzhu Cao
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
- Zhengda Lu
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jian Zheng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Hefei, China
26
Nesovic K, Koh RGL, Aghamohammadi Sereshki A, Shomal Zadeh F, Popovic MR, Kumbhare D. Ultrasound Image Quality Evaluation using a Structural Similarity Based Autoencoder. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4002-4005. [PMID: 34892108 DOI: 10.1109/embc46164.2021.9630261] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Ultrasound (US) imaging is a widely used clinical technique that requires extensive training to use correctly. Good-quality US images are essential for effective interpretation of the results; however, numerous sources of error can impair quality. Currently, image quality assessment is performed by an experienced sonographer through visual inspection, which is usually unachievable for inexperienced users. An autoencoder (AE) is a machine learning technique that has been shown to be effective at anomaly detection and could be used for fast and effective image quality assessment. In this study, we explored the use of an AE to distinguish between good- and poor-quality US images (degraded by artifacts and noise) by using the reconstruction error to train and test a random forest classifier (RFC). Good- and poor-quality ultrasound images were obtained from forty-nine healthy subjects and were used to train an AE with two different loss functions, one based on the structural similarity index measure (SSIM) and the other on the mean squared error (MSE). The resulting reconstruction error of each image was then used to classify the images into two quality groups by training and testing an RFC. Using the SSIM-based AE, the classifier showed an average accuracy of 71%±4.0% when classifying images based on user errors and an accuracy of 91%±1.0% when sorting images based on noise. The respective accuracies obtained from the AE using the MSE loss were 76%±2.0% and 83%±2.0%. The results of this study demonstrate that an AE has the potential to differentiate good-quality US images from those with poor quality, which could help less experienced researchers and clinicians obtain a more objective measure of image quality when using US.
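The two-stage idea in the abstract above (reconstruction error from an autoencoder, then a classifier on that error) can be sketched in miniature. This is a hedged illustration, not the authors' implementation: the trained AE is replaced by a trivial mean-image reconstructor, a fixed error threshold stands in for the random forest classifier, and all function names and toy data are assumptions.

```python
# Illustrative sketch only: anomaly scoring via reconstruction error,
# with a threshold standing in for the RFC described in the abstract.
# The "autoencoder" here is a trivial mean-image reconstructor.

def mean_image(images):
    """Pixel-wise mean of a list of equally sized, flattened 'images'."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def mse_error(image, reconstruction):
    """Mean squared reconstruction error for one image."""
    return sum((a - b) ** 2 for a, b in zip(image, reconstruction)) / len(image)

def classify_by_error(train_good, candidates, threshold):
    """Label each candidate 'good' or 'poor' by its reconstruction error."""
    recon = mean_image(train_good)
    return ['good' if mse_error(img, recon) <= threshold else 'poor'
            for img in candidates]

# Toy data: 'good' images cluster near 0.5; a noisy image does not.
train = [[0.5, 0.5, 0.5, 0.5], [0.45, 0.55, 0.5, 0.5], [0.5, 0.48, 0.52, 0.5]]
labels = classify_by_error(train, [[0.5, 0.5, 0.5, 0.5], [0.0, 1.0, 0.0, 1.0]], 0.01)
```

In the paper the error is computed under SSIM or MSE losses from a trained network; only the pipeline shape is shown here.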
27
A beneficial role of computer-aided diagnosis system for less experienced physicians in the diagnosis of thyroid nodule on ultrasound. Sci Rep 2021; 11:20448. [PMID: 34650185 PMCID: PMC8516898 DOI: 10.1038/s41598-021-99983-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 09/28/2021] [Indexed: 01/25/2023] Open
Abstract
Ultrasonography (US) is the primary diagnostic tool for thyroid nodules, although its accuracy is operator-dependent. It is widely used not only by radiologists but also by physicians with different levels of experience. The aim of this study was to investigate whether US with computer-aided diagnosis (CAD) can assist physicians in the diagnosis of thyroid nodules. A total of 451 thyroid nodules evaluated by fine-needle aspiration cytology following surgery were included; 300 (66.5%) were diagnosed as malignant. Physicians with less than 1 year of US experience (inexperienced, n = 10) or more than 5 years (experienced, n = 3) reviewed the US images of thyroid nodules with or without CAD assistance. The diagnostic performance of CAD was comparable to that of the experienced group and better than that of the inexperienced group. The AUC of the CAD for conventional PTC was higher than that for FTC and follicular variant PTC (0.925 vs. 0.499), independent of tumor size. CAD assistance significantly improved diagnostic performance in the inexperienced group, but not in the experienced group. In conclusion, the CAD system showed good performance in the diagnosis of conventional PTC. CAD assistance improved the diagnostic performance of less experienced physicians in US, especially in the diagnosis of conventional PTC.
28
Portable Ultrasound Research System for Use in Automated Bladder Monitoring with Machine-Learning-Based Segmentation. Sensors (Basel) 2021; 21:s21196481. [PMID: 34640807 PMCID: PMC8512052 DOI: 10.3390/s21196481] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 09/10/2021] [Accepted: 09/23/2021] [Indexed: 11/17/2022]
Abstract
We developed a new mobile ultrasound device for long-term and automated bladder monitoring without user interaction consisting of 32 transmit and receive electronics as well as a 32-element phased array 3 MHz transducer. The device architecture is based on data digitization and rapid transfer to a consumer electronics device (e.g., a tablet) for signal reconstruction (e.g., by means of plane wave compounding algorithms) and further image processing. All reconstruction algorithms are implemented in the GPU, allowing real-time reconstruction and imaging. The system and the beamforming algorithms were evaluated with respect to the imaging performance on standard sonographical phantoms (CIRS multipurpose ultrasound phantom) by analyzing the resolution, the SNR and the CNR. Furthermore, ML-based segmentation algorithms were developed and assessed with respect to their ability to reliably segment human bladders with different filling levels. A corresponding CNN was trained with 253 B-mode data sets and 20 B-mode images were evaluated. The quantitative and qualitative results of the bladder segmentation are presented and compared to the ground truth obtained by manual segmentation.
29
Czajkowska J, Badura P, Korzekwa S, Płatkowska-Szczerek A, Słowińska M. Deep Learning-Based High-Frequency Ultrasound Skin Image Classification with Multicriteria Model Evaluation. Sensors (Basel) 2021; 21:s21175846. [PMID: 34502735 PMCID: PMC8434172 DOI: 10.3390/s21175846] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Revised: 08/22/2021] [Accepted: 08/27/2021] [Indexed: 02/01/2023]
Abstract
This study presents the first application of convolutional neural networks to high-frequency ultrasound skin image classification. This type of imaging opens up new opportunities in dermatology, showing inflammatory diseases such as atopic dermatitis, psoriasis, or skin lesions. We collected a database of 631 images with healthy skin and different skin pathologies to train and assess all stages of the methodology. The proposed framework starts with the segmentation of the epidermal layer using a DeepLab v3+ model with a pre-trained Xception backbone. We employ transfer learning to train the segmentation model for two purposes: to extract the region of interest for classification and to prepare the skin layer map for classification confidence estimation. For classification, we train five models in different input data modes and data augmentation setups. We also introduce a classification confidence level to evaluate the deep model’s reliability. The measure combines our skin layer map with the heatmap produced by the Grad-CAM technique designed to indicate image regions used by the deep model to make a classification decision. Moreover, we propose a multicriteria model evaluation measure to select the optimal model in terms of classification accuracy, confidence, and test dataset size. The experiments described in the paper show that the DenseNet-201 model fed with the extracted region of interest produces the most reliable and accurate results.
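One plausible way to combine a Grad-CAM heatmap with a skin-layer map into a confidence level, as the abstract describes, is to measure how much of the strong heatmap activation falls inside the segmented layer. The sketch below assumes flattened arrays and a fixed activation threshold; the function name and exact formula are illustrative assumptions, not taken from the paper.

```python
def confidence_overlap(heatmap, mask, threshold=0.5):
    """Fraction of strong heatmap activations that fall inside the mask.

    heatmap: flattened Grad-CAM activations in [0, 1].
    mask: flattened binary skin-layer map (1 = inside the layer).
    """
    hot = [i for i, h in enumerate(heatmap) if h >= threshold]
    if not hot:
        return 0.0  # no strong activation anywhere: no confidence signal
    return sum(1 for i in hot if mask[i]) / len(hot)

# Half of the strong activations (indices 0 and 2) lie inside the mask.
conf = confidence_overlap([0.9, 0.2, 0.8, 0.1], [1, 0, 0, 0])
```

A high overlap would indicate the classifier attends to the anatomically relevant region, which is the intuition behind the paper's confidence measure.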
Affiliation(s)
- Joanna Czajkowska
- Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Correspondence: ; Tel.: +48-322-774-67
- Pawel Badura
- Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Szymon Korzekwa
- Department of Temporomandibular Disorders, Division of Prosthodontics, Poznan University of Medical Sciences, 60-512 Poznań, Poland
- Monika Słowińska
- Department of Dermatology, Military Institute of Medicine, 01-755 Warszawa, Poland
30
Liu C, Qiao M, Jiang F, Guo Y, Jin Z, Wang Y. TN-USMA Net: Triple normalization-based gastrointestinal stromal tumors classification on multicenter EUS images with ultrasound-specific pretraining and meta attention. Med Phys 2021; 48:7199-7214. [PMID: 34412155 DOI: 10.1002/mp.15172] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 07/11/2021] [Accepted: 07/31/2021] [Indexed: 12/16/2022] Open
Abstract
PURPOSE Accurate quantification of gastrointestinal stromal tumors' (GISTs) risk stratification on multicenter endoscopic ultrasound (EUS) images plays a pivotal role in aiding the surgical decision-making process. This study focuses on automatically classifying higher-risk and lower-risk GISTs in the presence of a multicenter setting and limited data. METHODS In this study, we retrospectively enrolled 914 patients with GISTs (1824 EUS images in total) from 18 hospitals in China. We propose a triple normalization-based deep learning framework with ultrasound-specific pretraining and meta attention, namely, TN-USMA model. The triple normalization module consists of the intensity normalization, size normalization, and spatial resolution normalization. First, the image intensity is standardized and same-size regions of interest (ROIs) and same-resolution tumor masks are generated in parallel. Then, the transfer learning strategy is utilized to mitigate the data scarcity problem. The same-size ROIs are fed into a deep architecture with ultrasound-specific pretrained weights, which are obtained from self-supervised learning using a large volume of unlabeled ultrasound images. Meanwhile, tumors' size features are calculated from the same-resolution masks individually. Afterward, the size features together with two demographic features are integrated to the model before the final classification layer using a meta attention mechanism to further enhance feature representations. The diagnostic performance of the proposed method was compared with one radiomics-based method and two state-of-the-art deep learning methods. Four evaluation metrics, namely, the accuracy, the area under the receiver operator curve, the sensitivity, and the specificity were used to evaluate the model performance. 
RESULTS The proposed TN-USMA model achieves an overall accuracy of 0.834 (95% confidence interval [CI]: 0.772, 0.885), an area under the receiver operator curve of 0.881 (95% CI: 0.825, 0.924), a sensitivity of 0.844 (95% CI: 0.672, 0.947), and a specificity of 0.832 (95% CI: 0.762, 0.888). The AUC significantly outperforms those of the other two deep learning approaches (p < 0.05, DeLong test). Moreover, the performance is stable under different multicenter dataset partitions. CONCLUSIONS The proposed TN-USMA model can successfully differentiate higher-risk GISTs from lower-risk ones. It is accurate, robust, generalizable, and efficient for potential clinical applications.
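Of the three normalization steps in the TN-USMA pipeline, intensity standardization is the simplest to illustrate. The sketch below assumes zero-mean/unit-variance standardization, which is one common reading of "the image intensity is standardized"; the size and spatial-resolution steps require image resampling and are omitted.

```python
# Sketch of an intensity-normalization step (assumption: z-score
# standardization of the flattened pixel values).
import math

def standardize_intensity(pixels):
    """Shift and scale pixel values to zero mean and unit variance."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = math.sqrt(var) or 1.0  # guard against constant images
    return [(p - mean) / std for p in pixels]

z = standardize_intensity([10.0, 20.0, 30.0, 40.0])
```

After this step, images acquired with different gain settings across the 18 centers would share a comparable intensity scale, which is the purpose such a module typically serves.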
Affiliation(s)
- Chengcheng Liu
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Mengyun Qiao
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Fei Jiang
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
- Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Zhendong Jin
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
- Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China
31
Barros B, Lacerda P, Albuquerque C, Conci A. Pulmonary COVID-19: Learning Spatiotemporal Features Combining CNN and LSTM Networks for Lung Ultrasound Video Classification. Sensors (Basel) 2021; 21:5486. [PMID: 34450928 PMCID: PMC8401701 DOI: 10.3390/s21165486] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 08/04/2021] [Accepted: 08/05/2021] [Indexed: 12/18/2022]
Abstract
Deep Learning is a very active and important area for building Computer-Aided Diagnosis (CAD) applications. This work presents a hybrid model to classify lung ultrasound (LUS) videos captured by convex transducers to diagnose COVID-19. A Convolutional Neural Network (CNN) performed the extraction of spatial features, and the temporal dependence was learned using a Long Short-Term Memory (LSTM) network. Different types of convolutional architectures were used for feature extraction. The hybrid model (CNN-LSTM) hyperparameters were optimized using the Optuna framework. The best hybrid model was composed of an Xception network pre-trained on ImageNet and an LSTM containing 512 units, configured with a dropout rate of 0.4, two fully connected layers containing 1024 neurons each, and a sequence of 20 frames in the input layer (20×2018). The model presented an average accuracy of 93% and a sensitivity of 97% for COVID-19, outperforming models based purely on spatial approaches. Furthermore, feature extraction using transfer learning with models pre-trained on ImageNet provided results comparable to models pre-trained on LUS images. The results corroborate other studies showing that this model for LUS classification can be an important tool in the fight against COVID-19 and other lung diseases.
Affiliation(s)
- Bruno Barros
- Institute of Computing, Campus Praia Vermelha, Fluminense Federal University, Niterói 24.210-346, Brazil; (P.L.); (C.A.); (A.C.)
32
Wen H, Zheng W, Li M, Li Q, Liu Q, Zhou J, Liu Z, Chen X. Multiparametric Quantitative US Examination of Liver Fibrosis: A Feature-engineering and Machine-learning Based Analysis. IEEE J Biomed Health Inform 2021; 26:715-726. [PMID: 34329172 DOI: 10.1109/jbhi.2021.3100319] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Quantitative ultrasound (QUS), which is commonly used to extract quantitative features from the ultrasound radiofrequency (RF) data or the RF envelope signals for tissue characterization, is becoming a promising technique for noninvasive assessment of liver fibrosis. However, the number of feature variables examined and finally used in the existing QUS methods is typically small, to some extent limiting the diagnostic performance. Therefore, this paper devises a new multiparametric QUS (MP-QUS) method which enables the extraction of a large number of feature variables from US RF signals and allows for the use of feature-engineering and machine-learning based algorithms for liver fibrosis assessment. In the MP-QUS, eighty-four feature variables were extracted from multiple QUS parametric maps derived from the RF signals and the envelope data. Afterwards, feature reduction and selection were performed in turn to remove the feature redundancy and identify the best combination of features in the reduced feature set. Finally, a variety of machine-learning algorithms were tested for classifying liver fibrosis with the selected features, based on the results of which the optimal classifier was established and used for final classification. The performance of the proposed MP-QUS method for staging liver fibrosis was evaluated on an animal model, with histologic examination as the reference standard. The mean accuracy, sensitivity, specificity and area under the receiver-operating-characteristic curve achieved by MP-QUS are respectively 83.38%, 86.04%, 80.82% and 0.891 for recognizing significant liver fibrosis, and 85.50%, 88.92%, 85.24% and 0.924 for diagnosing liver cirrhosis. The proposed MP-QUS method paves the way for its future extension to assess liver fibrosis in human subjects.
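The feature-reduction step (removing redundancy before feature selection) is often done by dropping features that correlate too strongly with already-kept ones. The greedy sketch below is an assumption about how such a step can work, not the paper's exact procedure; `pearson` and `remove_redundant` are illustrative names.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def remove_redundant(features, threshold=0.95):
    """Greedily keep a feature only if its |r| with every kept feature
    stays below the threshold; return the kept feature indices."""
    kept = []
    for i, f in enumerate(features):
        if all(abs(pearson(f, features[j])) < threshold for j in kept):
            kept.append(i)
    return kept

feats = [[1.0, 2.0, 3.0, 4.0],   # feature 0
         [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with feature 0
         [4.0, 1.0, 3.0, 2.0]]   # weakly correlated with feature 0
kept = remove_redundant(feats)
```

Feature *selection* (finding the best combination in the reduced set) would then run on the surviving columns only.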
33
Marosán-Vilimszky P, Szalai K, Horváth A, Csabai D, Füzesi K, Csány G, Gyöngy M. Automated Skin Lesion Classification on Ultrasound Images. Diagnostics (Basel) 2021; 11:1207. [PMID: 34359290 PMCID: PMC8303815 DOI: 10.3390/diagnostics11071207] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2021] [Accepted: 06/30/2021] [Indexed: 11/17/2022] Open
Abstract
The growing incidence of skin cancer makes computer-aided diagnosis tools for this group of diseases increasingly important. The use of ultrasound has the potential to complement information from optical dermoscopy. The current work presents a fully automatic classification framework utilizing fully-automated (FA) segmentation and compares it with classification using two semi-automated (SA) segmentation methods. Ultrasound recordings were taken from a total of 310 lesions (70 melanoma, 130 basal cell carcinoma and 110 benign nevi). A support vector machine (SVM) model was trained on 62 features, with ten-fold cross-validation. Six classification tasks were considered, namely all the possible permutations of one class versus one or two remaining classes. The receiver operating characteristic (ROC) area under the curve (AUC) as well as the accuracy (ACC) were measured. The best classification was obtained for distinguishing nevi from cancerous lesions (melanoma, basal cell carcinoma), with AUCs of over 90% and ACCs of over 85% obtained with all segmentation methods. Previous works either did not implement FA ultrasound-based skin cancer classification (making diagnosis lengthier and more operator-dependent) or were unclear about their classification results. Furthermore, the current work is the first to assess the effect of implementing FA instead of SA classification, with FA classification never degrading performance (in terms of AUC or ACC) by more than 5%.
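The ten-fold cross-validation used for the SVM can be sketched as a plain index-splitting helper. This is a generic, non-stratified sketch for brevity; the function name and fold layout are illustrative assumptions, not the study's code.

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples, with contiguous, nearly equal-sized folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

# 10 samples, 5 folds: each fold of 2 samples is held out once.
splits = list(kfold_indices(10, 5))
```

In practice one would shuffle (and, as in the paper's class-balanced setting, stratify) the indices before folding.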
Affiliation(s)
- Péter Marosán-Vilimszky
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083 Budapest, Hungary; (A.H.); (M.G.)
- Dermus Kft., Sopron út 64, 1116 Budapest, Hungary; (D.C.); (K.F.); (G.C.)
- Klára Szalai
- Department of Dermatology, Venereology and Dermatooncology, Semmelweis University, Mária u. 41, 1085 Budapest, Hungary
- András Horváth
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083 Budapest, Hungary; (A.H.); (M.G.)
- Domonkos Csabai
- Dermus Kft., Sopron út 64, 1116 Budapest, Hungary; (D.C.); (K.F.); (G.C.)
- Krisztián Füzesi
- Dermus Kft., Sopron út 64, 1116 Budapest, Hungary; (D.C.); (K.F.); (G.C.)
- Gergely Csány
- Dermus Kft., Sopron út 64, 1116 Budapest, Hungary; (D.C.); (K.F.); (G.C.)
- Miklós Gyöngy
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083 Budapest, Hungary; (A.H.); (M.G.)
- Dermus Kft., Sopron út 64, 1116 Budapest, Hungary; (D.C.); (K.F.); (G.C.)
34
Rosa LG, Zia JS, Inan OT, Sawicki GS. Machine learning to extract muscle fascicle length changes from dynamic ultrasound images in real-time. PLoS One 2021; 16:e0246611. [PMID: 34038426 PMCID: PMC8153491 DOI: 10.1371/journal.pone.0246611] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 04/20/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND AND OBJECTIVE Dynamic muscle fascicle length measurements through B-mode ultrasound have become popular for the non-invasive physiological insights they provide regarding musculoskeletal structure-function. However, current practices typically require time-consuming post-processing to track muscle length changes from B-mode images. A real-time measurement tool would not only save processing time but would also help pave the way toward closed-loop applications based on feedback signals driven by in vivo muscle length change patterns. In this paper, we benchmark an approach that combines traditional machine learning (ML) models with B-mode ultrasound recordings to obtain muscle fascicle length changes in real-time. To gauge the utility of this framework for 'in-the-loop' applications, we evaluate the accuracy of the extracted muscle length change signals against time series derived from a standard, post-hoc automated tracking algorithm. METHODS We collected B-mode ultrasound data from the soleus muscle of six participants performing five defined ankle motion tasks: (a) seated, constrained ankle plantarflexion, (b) seated, free ankle dorsi/plantarflexion, (c) weight-bearing calf raises, (d) walking, and (e) a mix. We trained ML models by pairing muscle fascicle lengths obtained from standardized automated tracking software (UltraTrack) with the respective B-mode ultrasound image input to the tracker, frame-by-frame. Then we conducted hyperparameter optimizations for five different ML models using a grid search to find the best-performing parameters for a combination of high correlation and low RMSE between ML- and UltraTrack-processed muscle fascicle length trajectories.
Finally, using the global best model/hyperparameter settings, we comprehensively evaluated training-testing outcomes within subject (i.e., train and test on the same subject), cross subject (i.e., train on one subject, test on another) and within/direct cross task (i.e., train and test on the same subject, but a different task). RESULTS The support vector machine (SVM) was the best-performing model, with an average r = 0.70 ±0.34 and average RMSE = 2.86 ±2.55 mm across all direct training conditions, and an average r = 0.65 ±0.35 and average RMSE = 3.28 ±2.64 mm when optimized for all cross-participant conditions. Comparisons of ML- versus UltraTrack-tracked (i.e., ground truth) muscle fascicle length versus time data indicated that the ML approach reliably captures the salient qualitative features of the ground-truth length change data, even when correlation values are on the lower end. Furthermore, in the direct-training calf raises condition, which is most comparable to previous studies validating automated tracking performance during isolated contractions on a dynamometer, our ML approach yielded an average correlation of 0.90, in line with other accepted tracking methods in the field. CONCLUSIONS By combining B-mode ultrasound and classical ML models, we demonstrate it is possible to achieve real-time tracking of human soleus muscle fascicles across a number of functionally relevant contractile conditions. This novel sensing modality paves the way for muscle physiology in-the-loop applications that could be used to modify gait via biofeedback or unlock novel wearable device control techniques that could enable restored or augmented locomotion performance.
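The two evaluation metrics used throughout the study above (Pearson correlation r and RMSE between ML- and UltraTrack-derived length trajectories) can be computed directly. The sketch below uses toy length data, not the study's measurements; function names are illustrative.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(x, y):
    """Root-mean-square error between two equal-length signals."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# Toy fascicle-length traces in mm: ML estimate vs. reference tracker.
ml = [40.0, 42.0, 44.0, 43.0]
gt = [40.5, 41.5, 44.5, 42.5]
r, err = pearson_r(ml, gt), rmse(ml, gt)
```

A high r with a moderate RMSE, as reported in the abstract, means the shape of the length change signal is preserved even when its absolute scale drifts slightly.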
Affiliation(s)
- Luis G. Rosa
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Jonathan S. Zia
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Emory University School of Medicine, Atlanta, Georgia, United States of America
- Omer T. Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Gregory S. Sawicki
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
35
Using Predictive Modeling and Machine Learning to Identify Patients Appropriate for Outpatient Anterior Cervical Fusion and Discectomy. Spine (Phila Pa 1976) 2021; 46:665-670. [PMID: 33306613 DOI: 10.1097/brs.0000000000003865] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
STUDY DESIGN Retrospective, case-control. OBJECTIVE The aim of this study was to use predictive modeling and machine learning to develop novel tools for identifying patients who may be appropriate for single-level outpatient anterior cervical fusion and discectomy (ACDF), and to compare these to legacy metrics. SUMMARY OF BACKGROUND DATA ACDF performed in an ambulatory surgical setting has started to gain popularity in recent years. Currently there are no standardized risk-stratification tools for determining which patients may be safe candidates for outpatient ACDF. METHODS Adult patients with American Society of Anesthesiologists (ASA) Class 1, 2, or 3 undergoing one-level ACDF in inpatient or outpatient settings were identified in the National Surgical Quality Improvement Program database. Patients were deemed as "unsafe" for outpatient surgery if they suffered any complication within a week of the index operation. Two different methodologies were used to identify unsafe candidates: a novel predictive model derived from multivariable logistic regression of significant risk factors, and an artificial neural network (ANN) using preoperative variables. Both methods were trained using randomly split 70% of the dataset and validated on the remaining 30%. The methods were compared against legacy risk-stratification measures: ASA and Charlson Comorbidity Index (CCI) using area under the curve (AUC) statistic. RESULTS A total of 12,492 patients who underwent single-level ACDF met the study criteria. Of these, 9.79% (1223) were deemed unsafe for outpatient ACDF given development of a complication within 1 week of the index operation. The five clinical variables that were found to be significant in the multivariable predictive model were: advanced age, low hemoglobin, high international normalized ratio, low albumin, and poor functional status. 
The predictive model had an AUC of 0.757, which was significantly higher than the AUC of both ASA (0.66; P < 0.001) and CCI (0.60; P < 0.001). The ANN exhibited an AUC of 0.740, which was significantly higher than the AUCs of ASA and CCI (all, P < 0.05), and comparable to that of the predictive model (P > 0.05). CONCLUSION Predictive analytics and machine learning can be leveraged to aid in identification of patients who may be safe candidates for single-level outpatient ACDF. Surgeons and perioperative teams may find these tools useful to augment clinical decision-making. Level of Evidence: 3.
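The AUC statistic used above to compare the models is the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A direct pairwise sketch follows (fine for small samples; formal model comparisons would use a test such as DeLong's, and the function name and data are illustrative):

```python
def auc(scores, labels):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else (0.5 if p == n else 0.0)
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive-negative pairs are correctly ordered.
a = auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
```

An AUC of 0.5 corresponds to random ranking, which is why 0.757 versus 0.60 for the CCI is a meaningful gap.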
36
Reverse Scan Conversion and Efficient Deep Learning Network Architecture for Ultrasound Imaging on a Mobile Device. Sensors (Basel) 2021; 21:s21082629. [PMID: 33918047 PMCID: PMC8070375 DOI: 10.3390/s21082629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Revised: 04/01/2021] [Accepted: 04/06/2021] [Indexed: 11/20/2022]
Abstract
Point-of-care ultrasound (POCUS), realized by recent developments in portable ultrasound imaging systems for prompt diagnosis and treatment, has become a major tool in accidents or emergencies. Concomitantly, the number of untrained/unskilled staff not familiar with the operation of the ultrasound system for diagnosis is increasing. By providing an imaging guide to assist clinical decisions and support diagnosis, the risk brought by inexperienced users can be managed. Recently, deep learning has been employed to guide users in ultrasound scanning and diagnosis. However, in a cloud-based ultrasonic artificial intelligence system, the use of POCUS is limited due to information security, network integrity, and significant energy consumption. To address this, we propose (1) a structure that simultaneously provides ultrasound imaging and a mobile device-based ultrasound image guide using deep learning, and (2) a reverse scan conversion (RSC) method for building an ultrasound training dataset to increase the accuracy of the deep learning model. Experimental results show that the proposed structure can achieve ultrasound imaging and deep learning simultaneously at a maximum rate of 42.9 frames per second, and that the RSC method improves the image classification accuracy by more than 3%.
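Reverse scan conversion, as described above, maps display-grid pixels back onto the pre-scan-conversion (range, angle) sample grid of a sector image. Below is a minimal sketch of that coordinate mapping, ignoring probe geometry offsets and interpolation; the function name and the simplified geometry are assumptions, not the paper's method.

```python
import math

def display_to_sample(x, z):
    """Map a display pixel (lateral x, depth z, transducer at the origin)
    back to (range, steering angle) coordinates of the sample grid."""
    r = math.hypot(x, z)        # distance from the transducer
    theta = math.atan2(x, z)    # steering angle from the array axis
    return r, theta

# A pixel one unit lateral and one unit deep sits at 45 degrees steering.
r, theta = display_to_sample(1.0, 1.0)
```

Resampling every display pixel this way (with interpolation between neighboring samples) yields training images on the native sample grid, which is the data-preparation idea behind the RSC method.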
|
37
|
Wan KW, Wong CH, Ip HF, Fan D, Yuen PL, Fong HY, Ying M. Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: a comparative study. Quant Imaging Med Surg 2021; 11:1381-1393. [PMID: 33816176 DOI: 10.21037/qims-20-922] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Background In recent years, there has been increasing interest in applying artificial intelligence in the medical field, from computer-aided diagnosis (CAD) to patient prognosis prediction. Given that not all healthcare professionals have the expertise required to develop a CAD system, the aim of this study was to investigate the feasibility of using AutoML Vision, a highly automated machine learning model, for future clinical applications by comparing it with commonly used CAD algorithms in the differentiation of benign and malignant breast lesions on ultrasound. Methods A total of 895 breast ultrasound images were obtained from two online open-access breast ultrasound image datasets. Traditional machine learning models (comprising seven commonly used CAD algorithms) using three content-based radiomic features (Hu moments, color histogram, Haralick texture), and a convolutional neural network (CNN) model, were built in Python. AutoML Vision was trained on the Google Cloud Platform. Sensitivity, specificity, F1 score and average precision (AUCPR) were used to evaluate the diagnostic performance of the models. Cochran's Q test was used to evaluate the statistical significance among all studied models, and the McNemar test was used as the post-hoc test for pairwise comparisons. The proposed AutoML model was also compared with related studies involving similar medical imaging modalities for characterizing benign and malignant breast lesions. Results There was a significant difference in diagnostic performance among the studied traditional machine learning classifiers (P<0.05).
Random Forest achieved the best performance in the differentiation of benign and malignant breast lesions (accuracy: 90%; sensitivity: 71%; specificity: 100%; F1 score: 0.83; AUCPR: 0.90), which was statistically comparable to the performance of the CNN (accuracy: 91%; sensitivity: 82%; specificity: 96%; F1 score: 0.87; AUCPR: 0.88) and AutoML Vision (accuracy: 86%; sensitivity: 84%; specificity: 88%; F1 score: 0.83; AUCPR: 0.95) based on Cochran's Q test (P>0.05). Conclusions In this study, the performance of AutoML Vision was not significantly different from that of Random Forest (the best classifier among the traditional machine learning models) or the CNN. AutoML Vision showed relatively high accuracy, comparable to currently used classifiers, which may support its future application in clinical practice.
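The evaluation metrics and the McNemar pairwise comparison used in this study can be computed from counts of correct and discordant predictions. A minimal sketch follows; the continuity-corrected chi-square form of McNemar's statistic is one common variant and is assumed here, not confirmed as the study's exact choice.

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F1 from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "f1": 2 * tp / (2 * tp + fp + fn)}

def mcnemar_statistic(y_true, pred_a, pred_b):
    """Chi-square McNemar statistic on the discordant pairs."""
    b = sum(pa == t and pb != t for t, pa, pb in zip(y_true, pred_a, pred_b))
    c = sum(pa != t and pb == t for t, pa, pb in zip(y_true, pred_a, pred_b))
    # Continuity correction; zero when the classifiers never disagree.
    return (abs(b - c) - 1) ** 2 / (b + c) if b + c else 0.0

print(diagnostic_metrics([1, 1, 0, 0], [1, 0, 0, 1]))  # all four metrics are 0.5
```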
Affiliation(s)
- Ka Wing Wan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Chun Hoi Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Ho Fung Ip
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Dejian Fan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Pak Leung Yuen
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Hoi Ying Fong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Michael Ying
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| |
|
38
|
|
39
|
Development and Application of Artificial Intelligence in Auxiliary TCM Diagnosis. EVIDENCE-BASED COMPLEMENTARY AND ALTERNATIVE MEDICINE 2021; 2021:6656053. [PMID: 33763147 PMCID: PMC7955861 DOI: 10.1155/2021/6656053] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Revised: 02/10/2021] [Accepted: 02/24/2021] [Indexed: 01/10/2023]
Abstract
As an emerging comprehensive discipline, artificial intelligence (AI) has been widely applied in various fields, including traditional Chinese medicine (TCM), a treasure of the Chinese nation. Realizing the organic combination of AI and TCM can promote the inheritance and development of TCM. The paper summarizes the development and application of AI in auxiliary TCM diagnosis, analyzes the bottleneck of artificial intelligence in the field of auxiliary TCM diagnosis at present, and proposes a possible future direction of its development.
|
40
|
Tsai CH, van der Burgt J, Vukovic D, Kaur N, Demi L, Canty D, Wang A, Royse A, Royse C, Haji K, Dowling J, Chetty G, Fontanarosa D. Automatic deep learning-based pleural effusion classification in lung ultrasound images for respiratory pathology diagnosis. Phys Med 2021; 83:38-45. [DOI: 10.1016/j.ejmp.2021.02.023] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 02/09/2021] [Accepted: 02/22/2021] [Indexed: 12/13/2022] Open
|
41
|
Ayana G, Dese K, Choe SW. Transfer Learning in Breast Cancer Diagnoses via Ultrasound Imaging. Cancers (Basel) 2021; 13:738. [PMID: 33578891 PMCID: PMC7916666 DOI: 10.3390/cancers13040738] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 02/05/2021] [Accepted: 02/08/2021] [Indexed: 11/26/2022] Open
Abstract
Transfer learning is a machine learning approach that reuses a model developed for one task as the starting point for a model on a target task. The goal of transfer learning is to improve the performance of target learners by transferring knowledge from other (but related) source domains. As a result, fewer target-domain data are needed to construct target learners. Owing to this property, transfer learning techniques are frequently used in ultrasound breast cancer image analyses. In this review, we focus on transfer learning methods applied to ultrasound breast image classification and detection from the perspective of transfer learning approaches, pre-processing, pre-training models, and convolutional neural network (CNN) models. Finally, different works are compared, and challenges as well as outlooks are discussed.
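The recipe the review surveys, reuse a frozen source-task feature extractor and train only a small target-task head, can be sketched with a toy stand-in backbone. The fixed random projection below is an illustrative substitute for a real pre-trained CNN, and the synthetic two-class data are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed feature mapping that
# is never updated during target-task training (in practice, e.g. a CNN
# pre-trained on a large source dataset).
W_backbone = rng.standard_normal((64, 16)) / 8.0
def frozen_features(x):
    return np.tanh(x @ W_backbone)

# Small target-task dataset: two classes separated by a mean shift.
x = np.vstack([rng.standard_normal((100, 64)),
               rng.standard_normal((100, 64)) + 0.5])
y = np.r_[np.zeros(100), np.ones(100)]

# Only a small logistic-regression "head" is trained on the frozen features.
feats = frozen_features(x)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w -= 0.5 * feats.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = float(np.mean(((feats @ w + b) > 0) == y.astype(bool)))
print(round(acc, 2))  # training accuracy of the head alone
```

The point of the sketch is the division of labour: the backbone stays fixed, so the target task only has to fit 17 parameters instead of the whole network.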
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea;
| | - Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia;
| | - Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea;
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
| |
|
42
|
Shin Y, Yang J, Lee YH, Kim S. Artificial intelligence in musculoskeletal ultrasound imaging. Ultrasonography 2021; 40:30-44. [PMID: 33242932 PMCID: PMC7758096 DOI: 10.14366/usg.20080] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 09/04/2020] [Accepted: 09/06/2020] [Indexed: 12/14/2022] Open
Abstract
Ultrasonography (US) is noninvasive and offers real-time, low-cost, and portable imaging that facilitates the rapid and dynamic assessment of musculoskeletal components. Significant technological improvements have contributed to the increasing adoption of US for musculoskeletal assessments, as artificial intelligence (AI)-based computer-aided detection and computer-aided diagnosis are being utilized to improve the quality, efficiency, and cost of US imaging. This review provides an overview of classical machine learning techniques and modern deep learning approaches for musculoskeletal US, with a focus on the key categories of detection and diagnosis of musculoskeletal disorders, predictive analysis with classification and regression, and automated image segmentation. Moreover, we outline challenges and a range of opportunities for AI in musculoskeletal US practice.
Affiliation(s)
- YiRang Shin
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
| | - Jaemoon Yang
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Korea
- Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Korea
| | - Young Han Lee
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
| | - Sungjun Kim
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
| |
|
43
|
Cengizler C, Kerem Ün M, Buyukkurt S. A novel evolutionary method for spine detection in ultrasound samples of spina bifida cases. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 198:105787. [PMID: 33080492 DOI: 10.1016/j.cmpb.2020.105787] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Accepted: 09/30/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES Spina bifida is a fetal spine defect observed during pregnancy, caused by incomplete closure of the embryonic neural column. Diagnosis of the defect is still commonly based on manual examination aimed at detecting deformation of the spinal axis. This study proposes a novel evolutionary method for locating the spinal axis on sonograms of spina bifida pathology. METHODS The method involves a meta-heuristic evolutionary approach in which the sonogram is automatically divided into columns and bone regions belonging to the spine are classified. A problem-specific genetic algorithm constructs a set of candidate spine axes, whose fitness is measured by a proposed problem-specific fitness function. A combination of conventional genetic operators and a novel energy minimization approach is applied to each population in order to explore the problem search space. RESULTS Results show that the presented algorithm is generally able to distinguish the spinal bones from others even in the presence of severe morphological defects. CONCLUSION The presented approach is promising; in most samples, the spines identified by the proposed algorithm closely match those drawn by the experts. A computer-assisted ultrasound diagnosis system specialized for spina bifida does not yet exist, but an algorithm to identify the spine, such as the one presented in this work, is the natural first step towards such a system. In the future, we intend to improve the algorithm by refining the segmentation stage and further optimizing the various stages of the genetic algorithm.
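The evolutionary loop described, candidate axes scored by a fitness function and evolved through selection, crossover, and mutation, can be sketched on a toy stand-in problem. The line-fitting fitness below replaces the paper's problem-specific fitness function, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the paper's fitness: score a candidate axis
# (slope, intercept) by how closely it passes through "bone" points.
points = np.column_stack([np.arange(20.0), 2.0 * np.arange(20.0) + 1.0])

def fitness(ind):
    slope, intercept = ind
    residuals = points[:, 1] - (slope * points[:, 0] + intercept)
    return -np.mean(residuals ** 2)           # higher is better

pop = rng.uniform(-5, 5, size=(40, 2))        # initial population of axes
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]   # selection: keep fittest half
    idx = rng.integers(0, 20, size=(40, 2))
    children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2   # crossover
    children += rng.normal(0, 0.05, size=children.shape)       # mutation
    pop = children

best = max(pop, key=fitness)
print(np.round(best, 1))  # typically close to slope 2, intercept 1
```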
Affiliation(s)
- Caglar Cengizler
- Department of Biomedical Engineering, Faculty of Engineering, Cukurova University, 01330, Turkey.
| | - M Kerem Ün
- Department of Biomedical Engineering, Faculty of Engineering, Cukurova University, 01330, Turkey.
| | - Selim Buyukkurt
- Department of Obstetrics and Gynaecology, Medical Faculty, Cukurova University, 01330, Turkey.
| |
|
44
|
An Efficient and Effective Model to Handle Missing Data in Classification. BIOMED RESEARCH INTERNATIONAL 2020; 2020:8810143. [PMID: 33299878 PMCID: PMC7710403 DOI: 10.1155/2020/8810143] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2020] [Revised: 10/20/2020] [Accepted: 11/04/2020] [Indexed: 11/18/2022]
Abstract
Missing data is one of the most important causes of reduced classification accuracy. Many real datasets suffer from missing values, especially in the medical sciences. Imputation is a common way to deal with incomplete datasets. Various imputation methods can be applied, and the choice of the best method depends on dataset conditions such as sample size, missing percentage, and missing mechanism. A better solution, therefore, is to classify incomplete datasets without imputation and without any loss of information. The structure of the “Bayesian additive regression trees” (BART) model is improved with the “Missingness Incorporated in Attributes” (MIA) approach to address its inefficiency in handling missingness. The implementation of MIA within BART is named “BART.m”. As the abilities of BART.m in the classification of incomplete datasets had not been investigated, this simulation-based study aimed to provide such a resource. The results indicate that BART.m can be used even for datasets with 90% missing values and, more importantly, that it identifies irrelevant variables and removes them on its own. BART.m outperforms common models for classification with incomplete data in terms of accuracy and computational time. Based on these properties, BART.m is a high-accuracy model for the classification of incomplete datasets that avoids distributional assumptions and preprocessing steps.
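The MIA idea, routing missing values to whichever branch of a split improves the criterion rather than imputing them, can be sketched with a single decision stump. This is a deliberate simplification of the full BART.m ensemble, which sums many such trees under a Bayesian prior.

```python
import numpy as np

def mia_stump(x, y):
    """One-feature decision stump with Missingness Incorporated in
    Attributes: for every candidate threshold, missing values are routed
    to whichever side yields the lower misclassification error."""
    miss = np.isnan(x)
    best = None
    for thr in np.unique(x[~miss]):
        for miss_goes_left in (True, False):
            left = (x <= thr) & ~miss
            if miss_goes_left:
                left |= miss
            right = ~left
            err = 0
            for side in (left, right):
                if side.any():
                    majority = round(y[side].mean())
                    err += int((y[side] != majority).sum())
            if best is None or err < best[0]:
                best = (err, float(thr), miss_goes_left)
    return best  # (error, threshold, route-missing-left?)

x = np.array([1.0, 2.0, np.nan, 8.0, 9.0, np.nan])
y = np.array([0, 0, 0, 1, 1, 1])
print(mia_stump(x, y))
```

Note that the missingness routing is chosen from the data, so "is missing" itself becomes usable evidence, which is why no imputation step is needed.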
|
45
|
Andersén C, Rydén T, Thunberg P, Lagerlöf JH. Deep learning-based digitization of prostate brachytherapy needles in ultrasound images. Med Phys 2020; 47:6414-6420. [PMID: 33012023 PMCID: PMC7821271 DOI: 10.1002/mp.14508] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2020] [Revised: 09/12/2020] [Accepted: 09/21/2020] [Indexed: 12/12/2022] Open
Abstract
PURPOSE To develop, and evaluate the performance of, a deep learning-based three-dimensional (3D) convolutional neural network (CNN) artificial intelligence (AI) algorithm aimed at finding needles in ultrasound images used in prostate brachytherapy. METHODS Transrectal ultrasound (TRUS) image volumes from 1102 treatments were used to create a clinical ground truth (CGT) including 24422 individual needles that had been manually digitized by medical physicists during brachytherapy procedures. A 3D CNN U-net with 128 × 128 × 128 TRUS image volumes as input was trained using 17215 needle examples. Predictions of voxels constituting a needle were combined to yield a 3D linear function describing the localization of each needle in a TRUS volume. Manual and AI digitizations were compared in terms of the root-mean-square distance (RMSD) along each needle, expressed as median and interquartile range (IQR). The method was evaluated on a data set including 7207 needle examples. A subgroup of the evaluation data set (n = 188) was created, where the needles were digitized once more by a medical physicist (G1) trained in brachytherapy. The digitization procedure was timed. RESULTS The RMSD between the AI and CGT was 0.55 (IQR: 0.35-0.86) mm. In the smaller subset, the RMSD between AI and CGT was similar (0.52 [IQR: 0.33-0.79] mm) but significantly smaller (P < 0.001) than the difference of 0.75 (IQR: 0.49-1.20) mm between AI and G1. The difference between CGT and G1 was 0.80 (IQR: 0.48-1.18) mm, implying that the AI performed as well as the CGT in relation to G1. The mean time needed for human digitization was 10 min 11 sec, while the time needed for the AI was negligible. CONCLUSIONS A 3D CNN can be trained to identify needles in TRUS images. The performance of the network was similar to that of a medical physicist trained in brachytherapy. Incorporating a CNN for needle identification can shorten brachytherapy treatment procedures substantially.
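Two pieces of this pipeline lend themselves to a short sketch: turning voxel-level needle predictions into a 3D linear function, and scoring a digitization by root-mean-square distance. The least-squares SVD line fit below is an illustrative reconstruction; the paper's exact post-processing is not specified in the abstract.

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line through a point cloud (e.g. the voxels a
    network labels as 'needle'): returns (centroid, unit direction)."""
    centroid = points.mean(axis=0)
    # First right-singular vector = direction of maximum variance.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def rmsd_to_line(points, centroid, direction):
    """Root-mean-square distance of points from the fitted line."""
    rel = points - centroid
    # Remove the along-line component; what remains is the perpendicular offset.
    perp = rel - np.outer(rel @ direction, direction)
    return float(np.sqrt(np.mean(np.sum(perp ** 2, axis=1))))

rng = np.random.default_rng(0)
t = rng.uniform(0, 50, size=200)
true_dir = np.array([0.0, 0.6, 0.8])
voxels = np.array([5.0, 10.0, 0.0]) + t[:, None] * true_dir
voxels += rng.normal(0, 0.3, voxels.shape)   # voxel-level noise

c, d = fit_line_3d(voxels)
print(round(rmsd_to_line(voxels, c, d), 2))  # near sqrt(2) x the 0.3 noise level
```

Comparing two digitizations of the same needle then reduces to evaluating the RMSD between sampled points on one line and the other.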
Affiliation(s)
- Christoffer Andersén
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
| | - Tobias Rydén
- Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Per Thunberg
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
| | - Jakob H. Lagerlöf
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Department of Medical Physics, Karlstad Central Hospital, Karlstad, Sweden
| |
|
46
|
Zhou GQ, Huo EZ, Yuan M, Zhou P, Wang RL, Wang KN, Chen Y, He XP. A Single-Shot Region-Adaptive Network for Myotendinous Junction Segmentation in Muscular Ultrasound Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2531-2542. [PMID: 32167889 DOI: 10.1109/tuffc.2020.2979481] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Tracking the myotendinous junction (MTJ) in consecutive ultrasound images is crucial for understanding the mechanics and pathological conditions of the muscle-tendon unit. However, the lack of reliable and efficient identification of the MTJ, due to poor image quality and boundary ambiguity, restricts its application in motion analysis. In recent years, with the rapid development of deep learning, the region-based convolutional neural network (RCNN) has shown great potential in the field of simultaneous object detection and instance segmentation in medical images. This article proposes a region-adaptive network (RAN) to localize the MTJ region and segment it in a single shot. Our model learns the salient information of the MTJ with the help of a composite architecture: a region-based multitask learning network explores the region containing the MTJ, while a parallel end-to-end U-shaped path extracts the MTJ structure from the adaptively selected region to combat data imbalance and boundary ambiguity. On ultrasound images of the gastrocnemius, we showed that the RAN achieves superior segmentation performance compared with the state-of-the-art Mask RCNN method, with an average Dice score of 80.1%. Our proposed method is robust and reliable for advanced muscle and tendon function examinations obtained by ultrasound imaging.
|
47
|
Breast Tumor Classification in Ultrasound Images Using Combined Deep and Handcrafted Features. SENSORS 2020; 20:s20236838. [PMID: 33265900 PMCID: PMC7730057 DOI: 10.3390/s20236838] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 11/19/2020] [Accepted: 11/22/2020] [Indexed: 12/24/2022]
Abstract
This study aims to enable effective breast ultrasound image classification by combining deep features with conventional handcrafted features to classify tumors. In particular, the deep features are extracted from a pre-trained convolutional neural network, namely the VGG19 model, at six different extraction levels. The deep features extracted at each level are analyzed using a feature selection algorithm to identify the deep feature combination that achieves the highest classification performance. Furthermore, the extracted deep features are combined with handcrafted texture and morphological features and processed using feature selection to investigate the possibility of improving the classification performance. The cross-validation analysis, performed on 380 breast ultrasound images, shows that the best combination of deep features is obtained using a feature set, denoted CONV, that includes convolution features extracted from all convolution blocks of the VGG19 model. In particular, the CONV features achieved mean accuracy, sensitivity, and specificity values of 94.2%, 93.3%, and 94.9%, respectively. The analysis also shows that the performance of the CONV features degrades substantially when the feature selection algorithm is not applied. The classification performance of the CONV features is improved by combining them with handcrafted morphological features, achieving mean accuracy, sensitivity, and specificity values of 96.1%, 95.7%, and 96.3%, respectively. Furthermore, the cross-validation analysis demonstrates that the CONV features and the combined CONV and morphological features outperform the handcrafted texture and morphological features as well as the fine-tuned VGG19 model.
The generalization performance of the CONV features and the combined CONV and morphological features was demonstrated by training on the 380 breast ultrasound images and testing on another dataset of 163 images. The results suggest that the combined CONV and morphological features can achieve effective breast ultrasound image classification, increasing the capability of detecting malignant tumors and reducing the potential for misclassifying benign tumors.
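The combine-then-select pipeline, concatenating deep and handcrafted features and keeping only the most discriminative columns, can be sketched with a simple univariate (correlation-based) selector. The feature matrices and the selection rule below are illustrative; the study's actual selection algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def select_top_k(features, labels, k):
    """Univariate feature selection: rank columns by absolute
    correlation with the class label and keep the top k."""
    z = (features - features.mean(0)) / (features.std(0) + 1e-9)
    corr = np.abs(z.T @ (labels - labels.mean())) / len(labels)
    return np.argsort(corr)[-k:]

# Hypothetical feature matrices: 'deep' features from a CNN and a few
# handcrafted morphological features, combined by concatenation.
n = 300
labels = rng.integers(0, 2, n).astype(float)
deep = rng.standard_normal((n, 50))
morph = rng.standard_normal((n, 5))
deep[:, 0] += 1.5 * labels           # one informative deep column
morph[:, 1] += 1.5 * labels          # one informative morphological column
combined = np.hstack([deep, morph])  # columns 0..49 deep, 50..54 morph

keep = select_top_k(combined, labels, k=2)
print(sorted(int(i) for i in keep))  # the informative columns: [0, 51]
```

A classifier trained on only the kept columns then sees the informative deep and morphological features without the noise columns diluting them.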
|
48
|
Xie J, Song X, Zhang W, Dong Q, Wang Y, Li F, Wan C. A novel approach with dual-sampling convolutional neural network for ultrasound image classification of breast tumors. Phys Med Biol 2020; 65. [PMID: 33120380 DOI: 10.1088/1361-6560/abc5c7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 10/29/2020] [Indexed: 12/19/2022]
Abstract
Breast cancer is one of the leading causes of female cancer deaths. Early diagnosis and prophylaxis may improve patients' prognosis. Ultrasound (US) imaging is a popular method in breast cancer diagnosis; however, its accuracy is limited by traditional handcrafted feature methods and operator expertise. A novel method named Dual-Sampling Convolutional Neural Network (DSCNN) is proposed in this paper for the differential diagnosis of breast tumors based on US images. Combining traditional convolutional and residual networks, DSCNN prevents gradient vanishing and degradation. Prediction accuracy is increased by the parallel dual-sampling structure, which can effectively extract potential features from US images. Compared with other advanced deep learning methods and traditional handcrafted feature methods, DSCNN achieved the best performance, with an accuracy of 91.67% and an AUC of 0.939. The robustness of the proposed method was also verified on a public dataset. Moreover, DSCNN was compared with evaluations from three radiologists using US BI-RADS lexicon categories for overall breast tumor assessment. The results demonstrated that the prediction sensitivity, specificity and accuracy of the DSCNN were higher than those of a radiologist with 10 years of experience, suggesting that the DSCNN has the potential to help doctors make judgements in the clinic.
Affiliation(s)
- Jiang Xie
- School of Computer Engineering and Science, Shanghai University, Shanghai, CHINA
| | - Xiangshuai Song
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, CHINA
| | - Wu Zhang
- Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai, CHINA
| | - Qi Dong
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, CHINA
| | - Yan Wang
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, CHINA
| | - Fenghua Li
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, CHINA
| | - Caifeng Wan
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, 200127, CHINA
| |
|
49
|
Wei M, Du Y, Wu X, Su Q, Zhu J, Zheng L, Lv G, Zhuang J. A Benign and Malignant Breast Tumor Classification Method via Efficiently Combining Texture and Morphological Features on Ultrasound Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:5894010. [PMID: 33062038 PMCID: PMC7547332 DOI: 10.1155/2020/5894010] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 09/01/2020] [Accepted: 09/15/2020] [Indexed: 12/14/2022]
Abstract
The classification of breast tumors as benign or malignant on ultrasound images is of great value because breast cancer is an enormous threat to women's health worldwide. Although both texture and morphological features are crucial representations of ultrasound breast tumor images, their straightforward combination does little to improve benign/malignant classification, since the high-dimensional texture features drown out the effect of the low-dimensional morphological features. Therefore, an efficient method for combining texture and morphological features is proposed. Firstly, both texture features (local binary patterns (LBP), histogram of oriented gradients (HOG), and gray-level co-occurrence matrices (GLCM)) and morphological features (shape complexities) of breast ultrasound images are extracted. Secondly, a support vector machine (SVM) classifier working on the texture features is trained, and a naive Bayes (NB) classifier acting on the morphological features is designed, in order to exert the discriminative power of each feature type. Thirdly, the classification scores of the two classifiers (SVM and NB) are fused with weights to obtain the final classification result. The low-dimensional non-parametric NB classifier effectively controls the parameter complexity of the entire classification system when combined with the high-dimensional parametric SVM classifier; consequently, texture and morphological features are efficiently combined. Comprehensive experimental analyses are presented, and the proposed method obtains 91.11% accuracy, 94.34% sensitivity, and 86.49% specificity, outperforming many related benign and malignant breast tumor classification methods.
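The weighted score-level fusion of the two classifiers can be sketched directly. The weight value and the branch scores below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_scores(svm_scores, nb_scores, w=0.7):
    """Weighted late fusion of two classifiers' malignancy scores:
    w weighs the texture (SVM) branch, (1 - w) the morphology (NB) branch."""
    return w * np.asarray(svm_scores) + (1 - w) * np.asarray(nb_scores)

svm = np.array([0.9, 0.2, 0.6])   # texture-branch malignancy scores
nb = np.array([0.4, 0.1, 0.8])    # morphology-branch malignancy scores
fused = fuse_scores(svm, nb, w=0.7)
labels = (fused >= 0.5).astype(int)
print(fused.round(2), labels)     # [0.75 0.17 0.66] [1 0 1]
```

In practice the fusion weight would be tuned on a validation set so that neither branch dominates the decision.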
Affiliation(s)
- Mengwan Wei
- College of Engineering, Huaqiao University, Quanzhou 362021, China
| | - Yongzhao Du
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- School of Medicine, Huaqiao University, Quanzhou 362021, China
- Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, China
| | - Xiuming Wu
- The First Hospital of Quanzhou, Fujian Medical University, Quanzhou 350005, China
| | - Qichen Su
- Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, China
- Department of Medical Ultrasonics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
| | - Jianqing Zhu
- College of Engineering, Huaqiao University, Quanzhou 362021, China
| | - Lixin Zheng
- College of Engineering, Huaqiao University, Quanzhou 362021, China
| | - Guorong Lv
- Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, China
- Department of Medical Ultrasonics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
| | - Jiafu Zhuang
- Quanzhou Institute of Equipment Manufacturing, Haixi Institutes, Chinese Academy of Sciences, 362216 Quanzhou, China
| |
|
50
|
Mao WB, Lyu JY, Vaishnani DK, Lyu YM, Gong W, Xue XL, Shentu YP, Ma J. Application of artificial neural networks in detection and diagnosis of gastrointestinal and liver tumors. World J Clin Cases 2020; 8:3971-3977. [PMID: 33024753 PMCID: PMC7520792 DOI: 10.12998/wjcc.v8.i18.3971] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2020] [Revised: 05/10/2020] [Accepted: 06/28/2020] [Indexed: 02/05/2023] Open
Abstract
As a form of artificial intelligence, artificial neural networks (ANNs) have the advantages of adaptability, parallel processing capabilities, and non-linear processing. They have been widely used in the early detection and diagnosis of tumors. In this article, we introduce the development, working principle, and characteristics of ANNs and review the research progress on the application of ANNs in the detection and diagnosis of gastrointestinal and liver tumors.
Affiliation(s)
- Wei-Bo Mao
- Department of Pathology, Lishui Hospital of Zhejiang University, Lishui Central Hospital, Lishui 323000, Zhejiang Province, China
| | - Jia-Yu Lyu
- Department of Psychiatry, The Affiliated Kangning Hospital of Wenzhou Medical University, Wenzhou 325000, Zhejiang Province, China
| | - Deep K Vaishnani
- School of International Studies, Wenzhou Medical University, Wenzhou 325035, Zhejiang Province, China
| | - Yu-Man Lyu
- College of Civil Engineering and Architecture, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
| | - Wei Gong
- Department of Pathology, Lishui Hospital of Zhejiang University, Lishui Central Hospital, Lishui 323000, Zhejiang Province, China
| | - Xi-Ling Xue
- Department of Psychiatry, The Affiliated Kangning Hospital of Wenzhou Medical University, Wenzhou 325000, Zhejiang Province, China
| | - Yang-Ping Shentu
- Department of Pathology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, Zhejiang, China
| | - Jun Ma
- Department of Pathology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, Zhejiang, China
| |
|