1.
Wu Y, Chen G, Feng Z, Cui H, Rao F, Ni Y, Huang Z, Zhu W. Phase Difference Network for Efficient Differentiation of Hepatic Tumors with Multi-Phase CT. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-5. [PMID: 38083466] [DOI: 10.1109/embc40787.2023.10340090]
Abstract
Liver cancer is one of the leading causes of cancer-related death. Differentiating liver cancers is essential for developing an accurate treatment strategy and raising the survival rate. Multi-phase CT has become the primary examination method for clinical diagnosis, and deep learning techniques based on multi-phase CT have been proposed to distinguish hepatic cancers. However, RNN-based approaches require expensive computation due to their recurrent mechanism, whereas CNN-based models fail to explicitly establish temporal correlations among phases. In this paper, we propose a Phase Difference Network (PDN) to distinguish two liver cancer types, hepatocellular carcinoma and intrahepatic cholangiocarcinoma, from four-phase CT. Specifically, the phase difference is used as inter-phase temporal information in a differential attention module, which enhances the feature representation. Additionally, a transformer-based classification module with multi-head self-attention is employed to explore long-term context and capture the temporal relations between phases. Experiments on clinical datasets compare the proposed strategy against conventional approaches; the results indicate that the proposed method outperforms traditional deep learning methods.
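The differential-attention idea the abstract describes can be sketched as follows. This is an illustrative toy, not the authors' implementation: feature vectors, the sigmoid gating, and the `phase_difference_attention` name are all assumptions, standing in for the paper's learned feature maps.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def phase_difference_attention(feat_prev, feat_curr):
    # The element-wise difference between adjacent CT phases, squashed
    # to (0, 1), acts as an attention map over the current features.
    attn = [sigmoid(c - p) for p, c in zip(feat_prev, feat_curr)]
    # Reweight the current phase's features so regions that changed
    # between phases (e.g. contrast wash-in/wash-out) are amplified.
    return [c * (1.0 + a) for c, a in zip(feat_curr, attn)]

arterial = [0.2, 0.9, 0.1]  # toy feature vector for the arterial phase
portal = [0.2, 0.1, 0.8]    # toy feature vector for the portal-venous phase
enhanced = phase_difference_attention(arterial, portal)
```

In the paper this gating would operate on convolutional feature maps rather than flat vectors, but the principle is the same: inter-phase change drives the attention weights.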
2.
Ferreira MR, Torres HR, Oliveira B, de Araujo ARVF, Morais P, Novais P, Vilaca JL. Deep Learning Networks for Breast Lesion Classification in Ultrasound Images: A Comparative Study. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. [PMID: 38083151] [DOI: 10.1109/embc40787.2023.10340293]
Abstract
Accurate classification of lesions as benign or malignant in breast ultrasound (BUS) images is a critical task that requires experienced radiologists and faces many challenges, such as poor image quality, artifacts, and high lesion variability. Automatic lesion classification may therefore aid professionals in breast cancer diagnosis. In this scope, computer-aided diagnosis systems have been proposed to assist in medical image interpretation, reducing intra- and inter-observer variability. Recently, such systems using convolutional neural networks have demonstrated impressive results in medical image classification tasks. However, the lack of public benchmarks and of a standardized evaluation method hampers the comparison of networks. This work provides a benchmark for lesion classification in BUS images, comparing six state-of-the-art networks: GoogLeNet, InceptionV3, ResNet, DenseNet, MobileNetV2, and EfficientNet. For each network, five input-data variations that include segmentation information were tested to compare their impact on the final performance. The methods were trained on a multi-center BUS dataset (BUSI and UDIAT) and evaluated using the following metrics: precision, sensitivity, F1-score, accuracy, and area under the curve (AUC). Overall, the lesion with a thin border of background provides the best performance. For this input data, EfficientNet obtained the best results: an accuracy of 97.65% and an AUC of 96.30%. Clinical Relevance - This study showed the potential of deep neural networks for clinical breast lesion classification, also suggesting the best model choices.
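The evaluation metrics this benchmark reports (except AUC, which needs per-sample scores) follow the standard confusion-matrix definitions. A minimal sketch, with hypothetical counts not taken from the paper:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # also known as recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, sensitivity, f1, accuracy

# Toy counts for illustration only (not results from the study).
precision, sensitivity, f1, accuracy = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```

AUC would additionally require ranking the model's continuous malignancy scores across all samples, which is why it is omitted from this count-based sketch.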
3.
Ribeiro RF, Torres HR, Oliveira B, Morais P, Vilaca JL. Comparative analysis of deep learning methods for lesion detection on full screening mammography. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. [PMID: 38082575] [DOI: 10.1109/embc40787.2023.10340501]
Abstract
Breast cancer is the most prevalent type of cancer in women. Although mammography is the main imaging modality for diagnosis, robust lesion detection in mammography images is a challenging task due to the poor contrast of lesion boundaries and the widely diverse sizes and shapes of lesions. Deep learning techniques have been explored to facilitate automatic diagnosis and have produced outstanding outcomes on various medical challenges. This study provides a benchmark for breast lesion detection in mammography images. Five state-of-the-art methods were evaluated on 1592 mammograms from a publicly available dataset (CBIS-DDSM) and compared on the following six metrics: i) mean Average Precision (mAP); ii) intersection over union; iii) precision; iv) recall; v) True Positive Rate (TPR); and vi) false positives per image. The CenterNet, YOLOv5, Faster R-CNN, EfficientDet, and RetinaNet architectures were trained with a combination of the L1 and L2 localization losses. Although all evaluated networks achieved mAP scores greater than 60%, two stood out. The results demonstrate the efficiency of CenterNet with an Hourglass-104 backbone and of YOLOv5, which achieved mAP scores of 70.71% and 69.36%, and TPR scores of 96.10% and 92.19%, respectively, outperforming the state-of-the-art models. Clinical Relevance - This study demonstrates the effectiveness of deep learning algorithms for breast lesion detection in mammography, potentially improving the accuracy and efficiency of breast cancer diagnosis.
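Two of the detection metrics above can be made concrete with a short sketch. This is illustrative code under common conventions (axis-aligned `(x1, y1, x2, y2)` boxes, an IoU threshold of 0.5 for a match), not the evaluation code used in the study:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def true_positive_rate(gt_boxes, det_boxes, thr=0.5):
    """Fraction of ground-truth lesions matched by some detection at IoU >= thr."""
    matched = sum(1 for g in gt_boxes if any(iou(g, d) >= thr for d in det_boxes))
    return matched / len(gt_boxes)
```

mAP builds on the same IoU matching but additionally sweeps the detector's confidence threshold and averages precision over recall levels, which is why it is the headline metric for comparing detectors.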
4.
Costa N, Ferreira L, de Araújo ARVF, Oliveira B, Torres HR, Morais P, Alves V, Vilaça JL. Augmented Reality-Assisted Ultrasound Breast Biopsy. Sensors (Basel) 2023;23:1838. [PMID: 36850436] [PMCID: PMC9961993] [DOI: 10.3390/s23041838]
Abstract
Breast cancer is the most prevalent cancer in the world and the fifth-leading cause of cancer-related death. Treatment is effective in the early stages, so screening considerable portions of the population is crucial. When the screening procedure uncovers a suspect lesion, a biopsy is performed to assess its potential for malignancy, usually under real-time ultrasound (US) guidance. This work proposes a visualization system for US breast biopsy. It consists of an application running on AR glasses that interacts with a computer application. The AR glasses track the position of QR codes mounted on a US probe and on a biopsy needle, and US images are shown in the user's field of view with enhanced lesion visualization and the needle trajectory. To validate the system, the latency of US image transmission was evaluated, and a usability assessment compared the proposed prototype with a traditional approach across different users. Needle alignment was more consistent, at 92.67 ± 2.32° with the prototype versus 89.99 ± 37.49° with the traditional system, and users also reached the lesion more accurately. Overall, the proposed solution presents promising results, and the AR glasses performed well as a combined tracking and visualization device.
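An alignment metric like the one reported above reduces to the angle between two tracked 3D directions. The sketch below is a generic illustration, not the paper's pipeline: how the needle and trajectory directions are derived from the QR-code poses is assumed, and the function name is hypothetical.

```python
import math

def alignment_angle_deg(u, v):
    """Angle in degrees between two 3D direction vectors, e.g. the tracked
    needle axis and the planned trajectory toward the lesion."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to guard against floating-point drift outside acos's domain.
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(c))
```

In practice each direction would come from the rigid pose of the QR marker mounted on the needle or probe, so the metric's spread (the ± term in the results) reflects how steadily users held that alignment.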
Affiliation(s)
- Nuno Costa: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal; Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimaraes, Portugal; LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimaraes, Portugal
- Luís Ferreira: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal
- Augusto R. V. F. de Araújo: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal; Institute of Computing, Universidade Federal Fluminense (UFF), Niteroi 24210-310, Brazil
- Bruno Oliveira: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal; Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimaraes, Portugal; LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimaraes, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, 4710-057 Braga, Portugal; ICVS/3B’s—PT Government Associate Laboratory, 4710-057 Braga/Guimaraes, Portugal
- Helena R. Torres: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal; Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimaraes, Portugal; LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimaraes, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, 4710-057 Braga, Portugal; ICVS/3B’s—PT Government Associate Laboratory, 4710-057 Braga/Guimaraes, Portugal
- Pedro Morais: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal
- Victor Alves: Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimaraes, Portugal; LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimaraes, Portugal
- João L. Vilaça: 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal