1. Wang Z, Wang X, Wang T, Qiu J, Lu W. Localization and Risk Stratification of Thyroid Nodules in Ultrasound Images Through Deep Learning. Ultrasound Med Biol 2024;50:882-887. [PMID: 38494413] [DOI: 10.1016/j.ultrasmedbio.2024.02.013]
Abstract
OBJECTIVE: Deep learning algorithms are commonly used for the differential diagnosis of benign and malignant thyroid nodules. The aim of this study was to develop an integrated system that combines a deep learning model with the clinical-standard Thyroid Imaging Reporting and Data System (TI-RADS) for simultaneous segmentation and risk stratification of thyroid nodules.
METHODS: Three hundred four ultrasound images of TI-RADS 4 thyroid nodules were collected from two independent sites. Edge connection and the Criminisi algorithm were used to remove manually induced markers from the ultrasound images. An integrated system based on TI-RADS and a mask region-based convolutional neural network (Mask R-CNN) was proposed to stratify subclasses of TI-RADS 4 thyroid nodules and to segment the nodules in the ultrasound images. Accuracy and the precision-recall curve were used to evaluate stratification performance, and the Dice similarity coefficient (DSC) between the Mask R-CNN segmentation and the radiologist's contour was used to evaluate segmentation performance.
RESULTS: The combined approach significantly enhanced the performance of the integrated system. In the independent test set, the overall stratification accuracy for TI-RADS 4 thyroid nodules, mean average precision and mean DSC were 90.79%, 0.8579 and 0.83, respectively. Stratification accuracy for TI-RADS 4a, 4b and 4c nodules was 95.83%, 84.21% and 77.78%, respectively.
CONCLUSION: An integrated system combining TI-RADS and a deep learning model was developed. The system provides clinicians not only with diagnostic assistance from TI-RADS but also with accurate segmentation of thyroid nodules, improving its applicability in clinical practice.
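The Dice similarity coefficient (DSC) used above to compare the Mask R-CNN output with the radiologist's contour has a simple closed form, DSC = 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (the toy masks and function name are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 4x4 masks: model prediction vs. reference contour
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # → 0.857
```

A reported mean DSC of 0.83 therefore means the predicted and reference contours overlap on roughly 83% of their combined area.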
Affiliation(s)
- Zhipeng Wang
- Department of Radiology, Second Affiliated Hospital of Shandong First Medical University, Tai'an, China; School of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Tai'an, China
- Xiuzhu Wang
- Department of Obstetrics, Tai'an City Central Hospital, Tai'an, China
- Ting Wang
- Department of Ultrasound, Zoucheng Maternity and Child Healthcare Hospital, Jining, China
- Jianfeng Qiu
- Department of Radiology, Second Affiliated Hospital of Shandong First Medical University, Tai'an, China; School of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Tai'an, China
- Weizhao Lu
- Department of Radiology, Second Affiliated Hospital of Shandong First Medical University, Tai'an, China
2. Breast Tumor Ultrasound Image Segmentation Method Based on Improved Residual U-Net Network. Comput Intell Neurosci 2022;2022:3905998. [PMID: 35795762] [PMCID: PMC9252688] [DOI: 10.1155/2022/3905998]
Abstract
To achieve efficient and accurate breast tumor recognition and diagnosis, this paper proposes a breast tumor ultrasound image segmentation method based on the U-Net framework, combined with residual blocks and an attention mechanism. Residual blocks are introduced into the U-Net network to avoid the performance degradation caused by vanishing gradients and to reduce the training difficulty of the deep network. At the same time, a fusion attention mechanism combining spatial and channel attention is introduced into the model to improve its ability to capture feature information from ultrasound images and to achieve accurate recognition and extraction of breast tumors. Experimental results show that the proposed method achieves a Dice index of 0.921, demonstrating excellent segmentation performance.
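The residual (skip) connection that this method introduces into U-Net can be illustrated with a toy example; the block below is a hedged sketch of the general idea, y = F(x) + x, not the paper's actual architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(w2 @ relu(w1 @ x) + x).

    The skip connection (+ x) lets gradients bypass the two
    transforms, which is what mitigates vanishing gradients in
    deep segmentation networks.
    """
    return relu(w2 @ relu(w1 @ x) + x)

x = np.array([1.0, -2.0, 3.0])
identity = np.eye(3)
# With identity weights the block reduces to relu(relu(x) + x)
y = residual_block(x, identity, identity)
print(y)  # → [2. 0. 6.]
```

In the real network the `w1 @` / `w2 @` products are convolutions, but the skip path is the same: even if the learned transform contributes nothing, the input still flows through unchanged.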
3. Li X, Wang Y, Zhao Y, Wei Y. Fast Speckle Noise Suppression Algorithm in Breast Ultrasound Image Using Three-Dimensional Deep Learning. Front Physiol 2022;13:880966. [PMID: 35492597] [PMCID: PMC9043555] [DOI: 10.3389/fphys.2022.880966]
Abstract
The rapid development of ultrasound medical imaging technology has greatly broadened the scope of application of ultrasound, which is now widely used in the screening and diagnosis of breast diseases. However, excessive speckle noise in breast ultrasound images can greatly reduce image resolution and interfere with observation and assessment of the patient's condition, so speckle noise suppression is particularly important to investigate. This paper proposes a fast speckle noise suppression algorithm for breast ultrasound images using three-dimensional (3D) deep learning. First, based on the gray values of the breast ultrasound image, contrast is enhanced using logarithmic and exponential transforms, a guided filter is used to enhance glandular detail, and spatial high-pass filtering is used to suppress over-sharpening, completing the preprocessing and improving image clarity. Second, the preprocessed images are fed into a 3D convolutional cloud neural network model for speckle noise suppression. Finally, an edge-sensitive term is introduced into the network to suppress speckle noise while retaining edge information. Experiments show that the mean square error and false recognition rate both fall below 1.2% by the 100th training iteration, indicating that the network is well trained; the signal-to-noise ratio after speckle suppression exceeds 60 dB, the peak signal-to-noise ratio exceeds 65 dB, and the edge preservation index exceeds the experimental threshold of 0.45. The speckle noise suppression time is low, edge information is well preserved, and image details remain clearly visible, making the method applicable to breast ultrasound diagnosis.
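The logarithmic and exponential contrast transforms mentioned in the preprocessing step can be sketched as follows; the scaling constant and toy image values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def log_transform(img, max_val=255.0):
    """Logarithmic contrast stretch: dark, speckle-prone regions
    are expanded while bright regions are compressed."""
    c = max_val / np.log1p(max_val)  # scale so max_val maps back to max_val
    return c * np.log1p(img.astype(np.float64))

def exp_transform(img, max_val=255.0):
    """Exponential transform, the inverse mapping: compresses
    dark values and expands bright ones instead."""
    c = max_val / np.log1p(max_val)
    return np.expm1(img.astype(np.float64) / c)

img = np.array([[0.0, 10.0],
                [100.0, 255.0]])
stretched = log_transform(img)   # dark pixel 10 is pushed well above 10
restored = exp_transform(stretched)  # exp transform undoes the log stretch
```

The two transforms are exact inverses here, which is why they can be paired: one is applied to brighten shadowed tissue, the other to rein in over-bright regions.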
Affiliation(s)
- Xiaofeng Li
- Department of Information Engineering, Heilongjiang International University, Harbin, China
- Correspondence: Xiaofeng Li,
- Yanwei Wang
- School of Mechanical Engineering, Harbin Institute of Petroleum, Harbin, China
- Yanbo Wei
- School of Automatic Control Engineering, Harbin Institute of Petroleum, Harbin, China
4. Yang R, Yu J, Yin J, Liu K, Xu S. An FA-SegNet Image Segmentation Model Based on Fuzzy Attention and Its Application in Cardiac MRI Segmentation. Int J Comput Intell Syst 2022. [DOI: 10.1007/s44196-022-00080-x]
Abstract
Aiming at medical image segmentation with low recognizability and high background noise, a deep convolutional neural network segmentation model based on a fuzzy attention mechanism, called FA-SegNet, is proposed. It takes SegNet as the basic framework. In the down-sampling module for image feature extraction, a fuzzy channel-attention module is added to strengthen the discrimination of different target regions. In the up-sampling module for image size restoration and multi-scale feature fusion, a fuzzy spatial-attention module is added to reduce the loss of image details and expand the receptive field. Fuzzy cognition is thus introduced into the feature fusion of CNNs: based on the attention mechanism, fuzzy membership is used to recalibrate the importance of pixel values in local regions. This strengthens the discriminative power of image features and the fusion of contextual information, improving segmentation accuracy in the target regions. Taking cardiac MRI segmentation as an experimental example, multiple targets such as the left ventricle, right ventricle and left ventricular myocardium are selected as segmentation targets. The pixel accuracy is 92.47%, the mean intersection over union is 86.18% and the Dice coefficient is 92.44%, all improved compared with other methods. This verifies the accuracy and applicability of the proposed method for medical image segmentation, especially for targets with low recognizability and serious occlusion.
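The fuzzy-membership recalibration described above, where weights are derived from how strongly each pixel belongs to a target prototype, can be illustrated with a Gaussian membership function; the prototype center and width below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_membership(x, center, sigma):
    """Fuzzy membership degree of each value w.r.t. a prototype:
    1.0 at the prototype, falling off smoothly with distance."""
    return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def fuzzy_recalibrate(feat, center, sigma):
    """Re-weight a feature map by its fuzzy membership, so values
    close to the target prototype are kept and ambiguous background
    values are suppressed."""
    mu = gaussian_membership(feat, center, sigma)
    return feat * mu

feat = np.array([0.1, 0.5, 0.9, 1.0])      # toy 1-D feature values
out = fuzzy_recalibrate(feat, center=1.0, sigma=0.5)
# out[3] is unchanged (full membership); out[0] is strongly damped
```

This soft, graded weighting is the key difference from a hard threshold: every pixel keeps a degree of importance rather than being kept or discarded outright.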
5. Yin J, Zhou Z, Xu S, Yang R, Liu K. A 3D Grouped Convolutional Network Fused with Conditional Random Field and Its Application in Image Multi-target Fine Segmentation. Int J Comput Intell Syst 2022. [DOI: 10.1007/s44196-022-00065-w]
Abstract
To exploit the correlation between adjacent slices in multi-target segmentation of 3D image stacks and to optimize the segmentation results, a 3D grouped fully convolutional network fused with conditional random fields (3D-GFCN) is proposed. The model takes a fully convolutional network (FCN) as the segmentation backbone and a fully connected conditional random field (FCCRF) as the post-processing tool. It expands 2D convolutions into 3D operations and uses shortcut connections to fuse features at different levels and scales, realizing fine segmentation of 3D image slices. 3D-GFCN uses 3D convolution kernels to correlate information across adjacent slices, uses the contextual correlation and probabilistic inference of the FCCRF to refine the segmentation, and uses grouped convolutions to reduce the number of model parameters. Dice loss, which ignores the influence of background pixels, is used as the training objective to mitigate the imbalance between background and target pixels. The model automatically learns to focus on target structures of different shapes and sizes and highlights salient features useful for the task. It thereby addresses shortcomings of existing segmentation algorithms, such as weak morphological features of the target, weak spatial correlation and discontinuous segmentation results, and improves both multi-target segmentation accuracy and learning efficiency. Abdominal abnormal-tissue detection and multi-target segmentation on 3D computed tomography (CT) images are used as verification experiments. With a small and unbalanced dataset, the average Dice coefficient is 88.8%, the class pixel accuracy is 95.3% and the intersection over union is 87.8%. Compared with other methods, the evaluation indices and segmentation accuracy are significantly improved, showing that the proposed method is well suited to typical multi-target segmentation problems such as boundary overlap, offset deformation and low contrast.
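The Dice loss used as the training objective can be sketched in a few lines; because only foreground overlap enters the ratio, the abundant background pixels do not dominate the gradient. The probabilities below are illustrative, not from the paper:

```python
import numpy as np

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss on foreground probabilities.

    Loss = 1 - 2|P ∩ T| / (|P| + |T|). Only foreground overlap
    enters the ratio, so the huge number of background pixels does
    not dominate the objective the way pixel-wise cross-entropy can.
    """
    prob = prob.ravel()
    target = target.ravel()
    intersection = (prob * target).sum()
    dice = (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)
    return 1.0 - dice

prob = np.array([0.9, 0.8, 0.1, 0.0])    # predicted foreground probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth mask
loss = dice_loss(prob, target)           # small, since overlap is high
```

A perfect prediction drives the loss to 0 regardless of how many background pixels surround the target, which is exactly the class-imbalance property the abstract relies on.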
6. Yin J, Zhou Z, Xu S, Yang R, Liu K. A Generative Adversarial Network Fused with Dual-Attention Mechanism and Its Application in Multitarget Image Fine Segmentation. Comput Intell Neurosci 2021;2021:2464648. [PMID: 34961814] [PMCID: PMC8710171] [DOI: 10.1155/2021/2464648]
Abstract
Aiming at the problems of weak target morphological features, inaccurate detection and unclear boundaries in small-target regions, and overlapping boundaries in multi-target complex image segmentation, a generative adversarial network fused with an attention mechanism (AM-GAN) is proposed, combining the segmentation mechanism of generative adversarial networks with the feature-enhancement capability of non-local attention. The generator consists of a residual network and a non-local attention module; it uses the feature extraction and multi-scale fusion of the residual network, together with the feature enhancement and global information fusion of non-local spatial-channel dual attention, to enhance target features in the detection area and improve the continuity and clarity of segmentation boundaries. The discriminator is a fully convolutional network that penalizes the loss of information in small-target regions by judging the authenticity of predicted versus label segmentations, improving the model's small-target detection ability and multi-target segmentation accuracy. AM-GAN exploits the GAN's inherent ability to reconstruct and repair high-resolution images, together with the global receptive field of non-local attention, to strengthen detail features: it automatically learns to focus on target structures of different shapes and sizes, highlights salient task-relevant features, reduces the loss of image detail, improves small-target detection accuracy and refines multi-target segmentation boundaries. Medical abdominal MRI segmentation is used as a verification experiment, with multiple targets such as the liver, left/right kidneys and spleen selected for segmentation and abnormal tissue detection. With small and unbalanced sample datasets, the class pixel accuracy reaches 87.37%, the intersection over union is 92.42% and the average Dice coefficient is 93%. Compared with other methods in the experiment, segmentation precision and accuracy are greatly improved, showing that the proposed method is well suited to typical multi-target segmentation problems such as small-target detection, boundary overlap and offset deformation.
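The non-local attention in AM-GAN's generator gives every position a global receptive field by aggregating features from all other positions, weighted by pairwise similarity. A minimal self-attention sketch (learned projections omitted for brevity, so q = k = v = x; this is the generic operation, not the paper's exact module):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_attention(x):
    """Minimal non-local (self-attention) operation.

    Every position aggregates features from ALL positions, weighted
    by pairwise similarity - the global receptive field that lets
    distant context sharpen local boundaries.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)       # pairwise similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ x                  # globally aggregated features

x = np.random.default_rng(0).normal(size=(5, 4))  # 5 positions, 4 channels
y = nonlocal_attention(x)  # same shape, each row now a global mixture
```

In the full module the rows of `x` would be flattened spatial positions of a feature map and q, k, v would come from separate 1x1 convolutions, but the softmax-weighted global aggregation is the same.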
Affiliation(s)
- Jian Yin
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Zhibo Zhou
- Qingdao Ruisi Intelligent Technology Co., Ltd., Qingdao 266590, China
- Shaohua Xu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Ruiping Yang
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Kun Liu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China