1
Guérendel C, Petrychenko L, Chupetlovska K, Bodalal Z, Beets-Tan RGH, Benson S. Generalizability, robustness, and correction bias of segmentations of thoracic organs at risk in CT images. Eur Radiol 2024. [PMID: 39738559] [DOI: 10.1007/s00330-024-11321-2]
Abstract
OBJECTIVE This study aims to assess and compare two state-of-the-art deep learning approaches for segmenting four thoracic organs at risk (OARs), the esophagus, trachea, heart, and aorta, in CT images in the context of radiotherapy planning. MATERIALS AND METHODS We compare a multi-organ segmentation approach with the fusion of multiple single-organ models, each dedicated to one OAR. All were trained using nnU-Net with the default parameters and the full-resolution configuration. We evaluate their robustness to adversarial perturbations and their generalizability on external datasets, and explore potential biases introduced by expert corrections compared with fully manual delineations. RESULTS The two approaches show excellent performance, with an average Dice score of 0.928 for the multi-class setting and 0.930 when fusing the four single-organ models. Evaluation on external datasets and under common procedural adversarial noise demonstrates the good generalizability of these models. In addition, expert corrections of both models show significant bias toward the original automated segmentation. The average Dice score between the two corrections is 0.93, ranging from 0.88 for the trachea to 0.98 for the heart. CONCLUSION Both approaches demonstrate excellent performance and generalizability in segmenting four thoracic OARs, potentially improving efficiency in radiotherapy planning. However, the multi-organ setting proves advantageous for its efficiency, requiring less training time and fewer resources, making it the preferable choice for this task. Moreover, corrections of AI segmentations by clinicians may introduce bias into the evaluation of AI approaches; a manually annotated test set should be used to assess the performance of such methods. KEY POINTS Question: While manual delineation of thoracic organs at risk is labor-intensive, error-prone, and time-consuming, evaluation of AI models performing this task lacks robustness.
Findings: The deep learning model using the nnU-Net framework showed excellent performance, generalizability, and robustness in segmenting thoracic organs in CT, enhancing radiotherapy planning efficiency. Clinical relevance: Automatic segmentation of thoracic organs at risk can save clinicians time without compromising the quality of the delineations, and extensive evaluation across diverse settings demonstrates the potential of integrating such models into clinical practice.
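Several of the abstracts in this list report agreement between segmentations as a Dice score. As an illustration only (not code from the cited study), the metric reduces to a simple overlap ratio between binary masks:

```python
# Illustrative only -- not code from the cited study.
# Dice similarity coefficient between two flat binary (0/1) masks.
def dice(mask_a, mask_b):
    """Dice = 2|A n B| / (|A| + |B|); 1.0 means perfect overlap."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

auto = [0, 1, 1, 1, 0, 0]    # automated segmentation
manual = [0, 1, 1, 0, 0, 0]  # manual reference delineation
print(dice(auto, manual))    # 2*2 / (3+2) = 0.8
```

In practice the masks are 3D voxel arrays, but the definition is identical after flattening.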
Affiliation(s)
- Corentin Guérendel
  - Department of Radiology, Antoni van Leeuwenhoek-The Netherlands Cancer Institute, Amsterdam, The Netherlands
  - GROW-Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Liliana Petrychenko
  - Department of Radiology, Antoni van Leeuwenhoek-The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Kalina Chupetlovska
  - Department of Radiology, Antoni van Leeuwenhoek-The Netherlands Cancer Institute, Amsterdam, The Netherlands
  - University Hospital St. Ivan Rilski, Sofia, Bulgaria
- Zuhir Bodalal
  - Department of Radiology, Antoni van Leeuwenhoek-The Netherlands Cancer Institute, Amsterdam, The Netherlands
  - GROW-Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Regina G H Beets-Tan
  - Department of Radiology, Antoni van Leeuwenhoek-The Netherlands Cancer Institute, Amsterdam, The Netherlands
  - GROW-Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Sean Benson
  - Department of Radiology, Antoni van Leeuwenhoek-The Netherlands Cancer Institute, Amsterdam, The Netherlands
  - Department of Cardiology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
2
Lou X, Zhu J, Yang J, Zhu Y, Shu H, Li B. Enhanced Cross-stage-attention U-Net for esophageal target volume segmentation. BMC Med Imaging 2024; 24:339. [PMID: 39696039] [DOI: 10.1186/s12880-024-01515-x]
Abstract
PURPOSE The segmentation of the target volume and organs at risk (OARs) is a significant part of radiotherapy. Specifically, determining the location and extent of the esophagus in simulated computed tomography images is difficult and time-consuming, primarily because of its complex structure and low contrast with the surrounding tissues. In this study, an Enhanced Cross-stage-attention U-Net is proposed to solve the segmentation problem for the esophageal gross tumor volume (GTV) and clinical target volume (CTV) in CT images. METHODS First, a module based on principal component analysis theory was constructed to pre-extract the features of the input image. Then, a cross-stage feature fusion model was designed to replace the skip concatenation of the original U-Net, composed of a Wide Range Attention (WRA) unit, a Small-kernel Local Attention (SLA) unit, and an Inverted Bottleneck (IBN) unit. WRA was employed to capture global attention, and its large convolution kernel was further decomposed to simplify the computation. SLA was used to complement the local attention to WRA. IBN was constructed to fuse the extracted features, with a global frequency response layer built to redistribute the frequency response of the fused feature maps. RESULTS The proposed method was compared with relevant published esophageal segmentation methods. The proposed network achieved MSD = 2.83 (1.62, 4.76) mm, HD = 11.79 ± 6.02 mm, and DC = 72.45 ± 19.18% for the GTV, and MSD = 5.26 (2.18, 8.82) mm, HD = 16.22 ± 10.01 mm, and DC = 71.06 ± 17.72% for the CTV. CONCLUSION Reconstructing the skip concatenation in U-Net improved performance for esophageal segmentation, and the results showed that the proposed network is more effective for esophageal GTV and CTV segmentation.
Affiliation(s)
- Xiao Lou
  - Laboratory of Image Science and Technology, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Sipailou 2, Nanjing, P.R. China
  - Department of Radiotherapy, Lishui People's Hospital, No. 1188, Liyang Street, Lishui, P.R. China
- Juan Zhu
  - Department of Respiratory Medicine, The People's Hospital of Zhangqiuqu Area, No. 1920, Huiquan Street, Jinan, P.R. China
- Jian Yang
  - Department of Clinical Laboratory, The People's Hospital of Zhangqiuqu Area, No. 1920, Huiquan Street, Jinan, P.R. China
- Youzhe Zhu
  - Laboratory of Image Science and Technology, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Sipailou 2, Nanjing, P.R. China
  - Department of Radiotherapy, Lishui People's Hospital, No. 1188, Liyang Street, Lishui, P.R. China
- Huazhong Shu
  - Laboratory of Image Science and Technology, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Sipailou 2, Nanjing, P.R. China
- Baosheng Li
  - Laboratory of Image Science and Technology, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Sipailou 2, Nanjing, P.R. China
  - Shandong First Medical University and Shandong Academy of Medical Sciences, Shandong Cancer Hospital and Institute, No. 440, Jiyan Street, Jinan, P.R. China
3
Jang DH, Lee J, Jeon YJ, Yoon YE, Ahn H, Kang BK, Choi WS, Oh J, Lee DK. Kidney, ureter, and urinary bladder segmentation based on non-contrast enhanced computed tomography images using modified U-Net. Sci Rep 2024; 14:15325. [PMID: 38961140] [PMCID: PMC11222420] [DOI: 10.1038/s41598-024-66045-6]
Abstract
This study was performed to segment the urinary system on non-contrast computed tomography (CT) as a basis for diagnosing urinary system diseases. The study was conducted with images obtained between January 2016 and December 2020. During the study period, non-contrast abdominopelvic CT scans of patients diagnosed with and treated for urinary stones at the emergency departments of two institutions were collected. Region-of-interest extraction was first performed, and urinary system segmentation was then carried out using a modified U-Net. Thereafter, fivefold cross-validation was performed to evaluate the robustness of the model's performance. In the fivefold cross-validation of urinary system segmentation, the average Dice coefficient was 0.8673, and the Dice coefficients for each class (kidney, ureter, and urinary bladder) were 0.9651, 0.7172, and 0.9196, respectively. On the test dataset, the average Dice coefficient of the best-performing model from fivefold cross-validation was 0.8623 for the whole urinary system, and the Dice coefficients for each class (kidney, ureter, and urinary bladder) were 0.9613, 0.7225, and 0.9032, respectively. Segmentation of the urinary system using the modified U-Net proposed in this study could serve as the basis for machine learning detection of kidney, ureter, and urinary bladder lesions, such as stones and tumours.
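The fivefold cross-validation this abstract describes partitions the dataset into five folds, trains on four, and validates on the held-out fold in turn. A minimal index-splitting sketch (illustrative only; the cited study's actual pipeline and any shuffling or patient-level grouping are not reproduced here):

```python
# Illustrative k-fold split generator, not code from the cited study.
def kfold_indices(n_items, k=5):
    """Yield (train_idx, val_idx) pairs; fold sizes differ by at most 1."""
    indices = list(range(n_items))
    fold_sizes = [n_items // k + (1 if i < n_items % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]          # held-out fold
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

folds = list(kfold_indices(10, k=5))
print(len(folds))    # 5
print(folds[0][1])   # [0, 1]
```

Each image appears in exactly one validation fold, so the five validation scores together cover the whole dataset.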
Affiliation(s)
- Dong-Hyun Jang
  - Department of Public Healthcare Service, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Juncheol Lee
  - Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
- Young Eun Yoon
  - Department of Urology, College of Medicine, Hanyang University, Seoul, Republic of Korea
- Hyungwoo Ahn
  - Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Bo-Kyeong Kang
  - Department of Radiology, College of Medicine, Hanyang University, Seoul, Republic of Korea
- Won Seok Choi
  - Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Jaehoon Oh
  - Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
- Dong Keon Lee
  - Department of Emergency Medicine, Seoul National University Bundang Hospital, 13620, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea
  - Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
4
Du W, Guo H, Chen B, Cui M, Zhang T, Sun D, Ma H. Cascaded-TOARNet: A cascaded framework based on mixed attention and multiscale information for thoracic OARs segmentation. Med Phys 2024; 51:3405-3420. [PMID: 38063140] [DOI: 10.1002/mp.16881]
Abstract
BACKGROUND Accurate and automated segmentation of thoracic organs-at-risk (OARs) is critical for radiotherapy treatment planning of thoracic cancers. However, this has remained a challenging task for four major reasons: (1) thoracic OARs have diverse morphologies; (2) thoracic OARs have low contrast with the background; (3) boundaries of thoracic OARs are blurry; and (4) small organs cause a class-imbalance issue. PURPOSE To overcome the above challenges and achieve accurate and automated segmentation of thoracic OARs on thoracic CT. METHODS We propose a novel cascaded framework based on mixed attention and multiscale information for thoracic OAR segmentation, called Cascaded-TOARNet. The framework comprises two stages: localization and segmentation. During the localization stage, TOARNet locates each organ to crop the regions of interest (ROIs). During the segmentation stage, TOARNet accurately segments the ROIs, and the segmentation results are merged into a complete result. RESULTS We evaluated the proposed method and other common segmentation methods on two public datasets: the AAPM Thoracic Auto-Segmentation Challenge dataset and the Segmentation of Thoracic Organs at Risk (SegTHOR) dataset. Our method demonstrated superior performance, achieving a mean Dice score of 92.6% on the SegTHOR dataset and 90.8% on the AAPM dataset. CONCLUSIONS This segmentation method holds great promise as an essential tool for enhancing the efficiency of thoracic radiotherapy planning.
Affiliation(s)
- Wu Du
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Huimin Guo
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Boyang Chen
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Ming Cui
  - Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of Dalian University of Technology, Shenyang, Liaoning, China
- Teng Zhang
  - Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of Dalian University of Technology, Shenyang, Liaoning, China
- Deyu Sun
  - Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of Dalian University of Technology, Shenyang, Liaoning, China
- He Ma
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, Liaoning, China
5
Huang Y, Jiao J, Yu J, Zheng Y, Wang Y. RsALUNet: A reinforcement supervision U-Net-based framework for multi-ROI segmentation of medical images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104743]
6
Karimi D, Rollins CK, Velasco-Annis C, Ouaalam A, Gholipour A. Learning to segment fetal brain tissue from noisy annotations. Med Image Anal 2023; 85:102731. [PMID: 36608414] [PMCID: PMC9974964] [DOI: 10.1016/j.media.2022.102731]
Abstract
Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage. Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation. However, effective training of a deep learning model to perform this task requires a large number of training images to represent the rapid development of the transient fetal brain structures. On the other hand, manual multi-label segmentation of a large number of 3D images is prohibitive. To address this challenge, we segmented 272 training images, covering 19-39 gestational weeks, using an automatic multi-atlas segmentation strategy based on deformable registration and probabilistic atlas fusion, and manually corrected large errors in those segmentations. Since this process generated a large training dataset with noisy segmentations, we developed a novel label smoothing procedure and a loss function to train a deep learning model with smoothed noisy segmentations. Our proposed methods properly account for the uncertainty in tissue boundaries. We evaluated our method on 23 manually segmented test images from a separate set of fetuses. Results show that our method achieves an average Dice similarity coefficient of 0.893 and 0.916 for the transient structures of younger and older fetuses, respectively. Our method generated results that were significantly more accurate than several state-of-the-art methods, including nnU-Net, which achieved the closest results to ours. Our trained model can serve as a valuable tool to enhance the accuracy and reproducibility of fetal brain analysis in MRI.
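The core idea behind training on "smoothed noisy segmentations" is to replace hard one-hot labels with soft target distributions so the loss does not over-penalize uncertain boundary voxels. The sketch below shows only the generic, uniform form of label smoothing; the cited paper develops its own spatially aware procedure, which is not reproduced here:

```python
# Generic label smoothing, shown only to illustrate training on soft targets.
# This is NOT the cited paper's spatially aware smoothing procedure.
def smooth_one_hot(label, n_classes, eps=0.1):
    """Turn a hard class index into a soft distribution:
    probability 1 - eps on the true class, eps split over the others."""
    off = eps / (n_classes - 1)
    return [1.0 - eps if c == label else off for c in range(n_classes)]

# A voxel annotated (possibly noisily) as class 2 out of 4 tissue classes:
print(smooth_one_hot(2, 4, eps=0.1))
```

Training a cross-entropy loss against such soft targets reduces the gradient contribution of any single (possibly wrong) hard label.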
Affiliation(s)
- Davood Karimi
  - Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Caitlin K Rollins
  - Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Clemente Velasco-Annis
  - Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Abdelhakim Ouaalam
  - Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Ali Gholipour
  - Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
7
Krishnamurthy R, Mummudi N, Goda JS, Chopra S, Heijmen B, Swamidas J. Using Artificial Intelligence for Optimization of the Processes and Resource Utilization in Radiotherapy. JCO Glob Oncol 2022; 8:e2100393. [PMID: 36395438] [PMCID: PMC10166445] [DOI: 10.1200/go.21.00393]
Abstract
The radiotherapy (RT) process from planning to treatment delivery is a multistep, complex operation involving numerous levels of human-machine interaction and requiring high precision. These steps are labor-intensive and time-consuming and require meticulous coordination between professionals with diverse expertise. We reviewed and summarized the current status and prospects of artificial intelligence and machine learning relevant to the various steps in the RT treatment planning and delivery workflow, specifically in low- and middle-income countries (LMICs). We also searched the PubMed database using the search terms (Artificial Intelligence OR Machine Learning OR Deep Learning OR Automation OR knowledge-based planning AND Radiotherapy) AND (list of Low- and Middle-Income Countries as defined by the World Bank at the time of writing this review). The search yielded a total of 90 results, of which those with first authors from LMICs were chosen. The reference lists of retrieved articles were also reviewed to search for more studies. No language restrictions were imposed. A total of 20 research items with unique study objectives, conducted with the aim of enhancing RT processes, were examined in detail. Artificial intelligence and machine learning can improve the overall efficiency of RT processes by reducing human intervention, aiding decision making, and efficiently executing lengthy, repetitive tasks. This improvement could permit the radiation oncologist to redistribute resources and focus on responsibilities such as patient counseling, education, and research, especially in resource-constrained LMICs.
Affiliation(s)
- Revathy Krishnamurthy
  - Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Naveen Mummudi
  - Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Jayant Sastri Goda
  - Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Supriya Chopra
  - Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Ben Heijmen
  - Division of Medical Physics, Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Jamema Swamidas
  - Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
8
Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022; 14:3581. [PMID: 35892839] [PMCID: PMC9332287] [DOI: 10.3390/cancers14153581]
Abstract
Objective: This study aimed to investigate the segmentation accuracy of organs at risk (OARs) when denoised computed tomography (CT) images are used as input data for a deep-learning-based auto-segmentation framework. Methods: We used non-contrast enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input data for the AccuContourTM segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. The accuracy of the segmentation was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC outcomes were higher than 0.80 for all OARs except the esophagus. AccuContourTM-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but showed limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was small but statistically significant, with a better DSC than AccuContourTM-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.
Affiliation(s)
- Jung Ho Im
  - CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
- Ik Jae Lee
  - Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Yeonho Choi
  - Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Jiwon Sung
  - Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Jin Sook Ha
  - Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Ho Lee
  - Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
  - Correspondence: ; Tel.: +82-2-2228-8109; Fax: +82-2-2227-7823
9
Divya S, Padma Suresh L, John A. Enhanced deep-joint segmentation with deep learning networks of glioma tumor for multi-grade classification using MR images. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01064-5]
10
Diniz JOB, Dias Júnior DA, da Cruz LB, da Silva GLF, Ferreira JL, Pontes DBQ, Silva AC, de Paiva AC, Gattas M. Heart segmentation in planning CT using 2.5D U-Net++ with attention gate. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2043779]
Affiliation(s)
- J. O. B. Diniz
  - Laboratory Innovation Factory, Federal Institute of Maranhão (IFMA), Grajaú - MA, Brazil
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- D. A. Dias Júnior
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- L. B. da Cruz
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- G. L. F. da Silva
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
  - Dom Bosco Higher Education Unit (UNDB), São Luís - MA, Brazil
- J. L. Ferreira
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- D. B. Q. Pontes
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- A. C. Silva
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- A. C. de Paiva
  - Applied Computing Group, Federal University of Maranhão (UFMA), São Luís - MA, Brazil
- M. Gattas
  - Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro, RJ, Brazil
11
Araújo JDL, da Cruz LB, Diniz JOB, Ferreira JL, Silva AC, de Paiva AC, Gattass M. Liver segmentation from computed tomography images using cascade deep learning. Comput Biol Med 2022; 140:105095. [PMID: 34902610] [DOI: 10.1016/j.compbiomed.2021.105095]
Abstract
BACKGROUND Liver segmentation is a fundamental step in the treatment planning and diagnosis of liver cancer. However, manual segmentation of the liver is time-consuming because of the large number of slices and the subjectiveness associated with the specialist's experience, which can lead to segmentation errors. Thus, the segmentation process can be automated using computational methods for better time efficiency and accuracy. However, automatic liver segmentation is a challenging task, as the liver varies in shape and may present ill-defined borders and lesions that affect its appearance. We aim to propose an automatic method for liver segmentation using computed tomography (CT) images. METHODS The proposed method, based on deep convolutional neural network models and image processing techniques, comprises four main steps: (1) image preprocessing, (2) initial segmentation, (3) reconstruction, and (4) final segmentation. RESULTS We evaluated the proposed method using 131 CT images from the LiTS image base. An average sensitivity of 95.45%, an average specificity of 99.86%, an average Dice coefficient of 95.64%, an average volumetric overlap error (VOE) of 8.28%, an average relative volume difference (RVD) of -0.41%, and an average Hausdorff distance (HD) of 26.60 mm were achieved. CONCLUSIONS This study demonstrates that liver segmentation, even when lesions are present in CT images, can be efficiently performed using a cascade approach and including a reconstruction step based on deep convolutional neural networks.
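Beyond Dice, this abstract reports volumetric overlap error (VOE) and relative volume difference (RVD). Their standard definitions are simple functions of mask overlap; the sketch below is illustrative only, not the cited paper's code:

```python
# Standard overlap-metric definitions on flat binary masks (illustrative only).
def voe(pred, ref):
    """VOE = 100 * (1 - |A n B| / |A u B|); 0% means perfect overlap."""
    inter = sum(p & r for p, r in zip(pred, ref))
    union = sum(p | r for p, r in zip(pred, ref))
    return 100.0 * (1.0 - inter / union)

def rvd(pred, ref):
    """RVD = 100 * (|A| - |B|) / |B|; negative values mean under-segmentation."""
    return 100.0 * (sum(pred) - sum(ref)) / sum(ref)

pred = [1, 1, 1, 0, 0]  # predicted liver mask
ref = [1, 1, 0, 1, 0]   # reference mask
print(voe(pred, ref))   # 100 * (1 - 2/4) = 50.0
print(rvd(pred, ref))   # 100 * (3-3)/3 = 0.0
```

Note that RVD can be 0% even when the masks disagree voxel-wise, which is why it is reported alongside overlap metrics such as Dice and VOE.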
Affiliation(s)
- José Denes Lima Araújo
  - Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil
- Luana Batista da Cruz
  - Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil
- João Otávio Bandeira Diniz
  - Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil
  - Federal Institute of Maranhão, BR-226, SN, Campus Grajaú, Vila Nova, 65 940-000, Grajaú, MA, Brazil
- Jonnison Lima Ferreira
  - Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil
  - Federal Institute of Amazonas, Rua Santos Dumont, SN, Campus Tabatinga, Vila Verde, 69 640-000, Tabatinga, AM, Brazil
- Aristófanes Corrêa Silva
  - Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil
- Anselmo Cardoso de Paiva
  - Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil
- Marcelo Gattass
  - Pontifical Catholic University of Rio de Janeiro, R. São Vicente, 225, Gávea, 22 453-900, Rio de Janeiro, RJ, Brazil
12
Bandeira Diniz JO, Ferreira JL, Bandeira Diniz PH, Silva AC, Paiva AC. A deep learning method with residual blocks for automatic spinal cord segmentation in planning CT. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103074]
13
Yu X, Tang S, Cheang CF, Yu HH, Choi IC. Multi-Task Model for Esophageal Lesion Analysis Using Endoscopic Images: Classification with Image Retrieval and Segmentation with Attention. Sensors (Basel) 2021; 22:283. [PMID: 35009825] [PMCID: PMC8749873] [DOI: 10.3390/s22010283]
Abstract
The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis, which does not simply replace the role of endoscopists in decision making, because endoscopists are expected to correct false results predicted by the diagnosis system when more supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying the types of lesions, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in determining the locations of esophageal lesions. The proposed model is evaluated and compared with other deep learning models using a dataset of 1003 endoscopic images, including 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for the classification and a Dice coefficient of 82.47% for the segmentation. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists in judging esophageal lesions.
Affiliation(s)
- Xiaoyuan Yu
  - Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Suigu Tang
  - Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Chak Fong Cheang
  - Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
  - Correspondence: (C.F.C.); (H.H.Y.)
- Hon Ho Yu
  - Kiang Wu Hospital, Santo António, Macau
  - Correspondence: (C.F.C.); (H.H.Y.)
14
Dias Júnior DA, da Cruz LB, Bandeira Diniz JO, França da Silva GL, Junior GB, Silva AC, de Paiva AC, Nunes RA, Gattass M. Automatic method for classifying COVID-19 patients based on chest X-ray images, using deep features and PSO-optimized XGBoost. EXPERT SYSTEMS WITH APPLICATIONS 2021; 183:115452. [PMID: 34177133 PMCID: PMC8218245 DOI: 10.1016/j.eswa.2021.115452] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 02/18/2021] [Accepted: 06/14/2021] [Indexed: 05/05/2023]
Abstract
The COVID-19 pandemic, which originated in December 2019 in the city of Wuhan, China, continues to have a devastating effect on the health and well-being of the global population. Currently, approximately 8.8 million people have already been infected and more than 465,740 people have died worldwide. An important step in combating COVID-19 is the screening of infected patients using chest X-ray (CXR) images. However, this task is extremely time-consuming and prone to variability among specialists owing to its heterogeneity. Therefore, the present study aims to assist specialists in identifying COVID-19 patients from their chest radiographs, using automated computational techniques. The proposed method has four main steps: (1) acquisition of the dataset from two public databases; (2) standardization of images through preprocessing; (3) extraction of features using a deep-features-based approach implemented through the networks VGG19, Inception-v3, and ResNet50; (4) classification of images into COVID-19 groups, using eXtreme Gradient Boosting (XGBoost) optimized by particle swarm optimization (PSO). In the best-case scenario, the proposed method achieved an accuracy of 98.71%, a precision of 98.89%, a recall of 99.63%, and an F1-score of 99.25%. In our study, we demonstrated that the problem of classifying CXR images of COVID-19 and non-COVID-19 patients can be solved efficiently by combining a deep-features-based approach with a robust classifier (XGBoost) optimized by an evolutionary algorithm (PSO). The proposed method offers considerable advantages for clinicians seeking to tackle the current COVID-19 pandemic.
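Step (4) above couples a gradient-boosted classifier with PSO for hyperparameter tuning. The following bare-bones PSO loop is an illustrative sketch only: it tunes a hypothetical one-dimensional stand-in for a validation-error surface, not the paper's actual objective, swarm size, or search space.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iters=50, seed=0):
    """Minimal particle swarm optimization over a 1-D box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, n_particles)          # particle positions
    vel = np.zeros(n_particles)                     # particle velocities
    pbest = pos.copy()                              # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]             # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social weights
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest = np.where(improved, pos, pbest)
        pbest_val = np.where(improved, vals, pbest_val)
        gbest = pbest[np.argmin(pbest_val)]
    return float(gbest)

# Hypothetical stand-in: pretend validation error dips at learning_rate = 0.1
error = lambda lr: (lr - 0.1) ** 2 + 0.02
best_lr = pso_minimize(error, bounds=(0.01, 1.0))
print(round(best_lr, 3))
```

In the paper's setting, the objective would instead evaluate XGBoost's cross-validated performance at each candidate hyperparameter setting.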
Affiliation(s)
- Domingos Alves Dias Júnior: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
- Luana Batista da Cruz: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
- João Otávio Bandeira Diniz: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil; Federal Institute of Maranhão, BR-226, SN, Campus Grajaú, Vila Nova 65940-00, Grajaú, MA, Brazil
- Geraldo Braz Junior: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
- Aristófanes Corrêa Silva: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
- Anselmo Cardoso de Paiva: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
- Rodolfo Acatauassú Nunes: Rio de Janeiro State University, Boulevard 28 de Setembro, 77, Vila Isabel 20551-030, Rio de Janeiro, RJ, Brazil
- Marcelo Gattass: Pontifical Catholic University of Rio de Janeiro, R. São Vicente, 225, Gávea, 22453-900, Rio de Janeiro, RJ, Brazil
15
Zhang G, Yang Z, Huo B, Chai S, Jiang S. Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106419. [PMID: 34563895 DOI: 10.1016/j.cmpb.2021.106419] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 09/12/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurately and reliably defining organs at risk (OARs) and tumors is the cornerstone of radiation therapy (RT) treatment planning for lung cancer. Almost all segmentation networks based on deep learning techniques rely on fully annotated data with strong supervision. However, existing public imaging datasets encountered in the RT domain frequently include singly labelled tumors or partially labelled organs, because annotating full OARs and tumors in CT images is both laborious and tedious. To utilize labelled data from different sources, we proposed a dual-path semi-supervised conditional nnU-Net for OARs and tumor segmentation that is trained on a union of partially labelled datasets. METHODS The framework employs nnU-Net as the base model and introduces a conditioning strategy by incorporating auxiliary information as an additional input layer into the decoder. The conditional nnU-Net efficiently leverages prior conditional information to classify the target class at the pixelwise level. Specifically, we employ the uncertainty-aware mean teacher (UA-MT) framework to assist in OARs segmentation, which can effectively leverage unlabelled data (images from a tumor-labelled dataset) by encouraging consistent predictions of the same input under different perturbations. Furthermore, we individually design different combinations of loss functions to optimize the segmentation of OARs (Dice loss and cross-entropy loss) and tumors (Dice loss and focal loss) in a dual path. RESULTS The proposed method is evaluated on two publicly available datasets of the spinal cord, left and right lung, heart, esophagus, and lung tumor, on which satisfactory segmentation performance has been achieved in terms of both the region-based Dice similarity coefficient (DSC) and the boundary-based Hausdorff distance (HD).
CONCLUSIONS The proposed semi-supervised conditional nnU-Net breaks down the barriers between nonoverlapping labelled datasets and further alleviates the problem of "data hunger" and "data waste" in multi-class segmentation. The method has the potential to help radiologists with RT treatment planning in clinical practice.
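The dual-path training above combines a Dice loss with a cross-entropy loss for the OARs path. The following is a minimal numerical sketch of such a combined loss on toy foreground probabilities; the paper's exact loss weighting and multi-class formulation are not reproduced here.

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-6):
    """1 minus the soft Dice overlap between probabilities and binary targets."""
    intersection = (prob * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)

def binary_cross_entropy(prob, target, eps=1e-7):
    """Mean binary cross-entropy, with clipping for numerical stability."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return float(-(target * np.log(prob) + (1 - target) * np.log(1 - prob)).mean())

prob = np.array([0.9, 0.8, 0.2, 0.1])    # predicted foreground probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth voxel labels
combined = soft_dice_loss(prob, target) + binary_cross_entropy(prob, target)
print(round(combined, 3))
```

Combining a region-overlap term (Dice) with a per-voxel term (cross-entropy) is a common way to balance global shape agreement against voxel-wise calibration; the tumor path analogously swaps in a focal loss for the cross-entropy term.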
Affiliation(s)
- Guobin Zhang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Bin Huo: Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shude Chai: Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shan Jiang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
16
Lee S, Summers RM. Clinical Artificial Intelligence Applications in Radiology: Chest and Abdomen. Radiol Clin North Am 2021; 59:987-1002. [PMID: 34689882 DOI: 10.1016/j.rcl.2021.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Organ segmentation, chest radiograph classification, and lung and liver nodule detection are among the most popular artificial intelligence (AI) tasks in chest and abdominal radiology, owing to the wide availability of public datasets. AI algorithms have achieved performance comparable to that of humans, in less time, for several organ segmentation tasks and some lesion detection and classification tasks. This article reviews currently published work on AI applied to chest and abdominal radiology, including organ segmentation, lesion detection, classification, and prognosis prediction.
Affiliation(s)
- Sungwon Lee: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
17
Pecere S, Milluzzo SM, Esposito G, Dilaghi E, Telese A, Eusebi LH. Applications of Artificial Intelligence for the Diagnosis of Gastrointestinal Diseases. Diagnostics (Basel) 2021; 11:diagnostics11091575. [PMID: 34573917 PMCID: PMC8469485 DOI: 10.3390/diagnostics11091575] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 08/20/2021] [Accepted: 08/23/2021] [Indexed: 12/16/2022] Open
Abstract
The development of convolutional neural networks has achieved impressive advances in machine learning in recent years, leading to an increasing use of artificial intelligence (AI) in the field of gastrointestinal (GI) diseases. AI networks have been trained to differentiate benign from malignant lesions, analyze endoscopic and radiological GI images, and assess histological diagnoses, obtaining excellent results and high overall diagnostic accuracy. Nevertheless, data are lacking on the side effects of AI in the gastroenterology field, and high-quality studies comparing the performance of AI networks to health care professionals are still limited. Thus, large, controlled trials in real-time clinical settings are warranted to assess the role of AI in daily clinical practice. This narrative review gives an overview of some of the most relevant potential applications of AI for gastrointestinal diseases, highlighting advantages and main limitations and providing considerations for future development.
Affiliation(s)
- Silvia Pecere (corresponding author): Digestive Endoscopy Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, 00135 Rome, Italy; Center for Endoscopic Research Therapeutics and Training (CERTT), Catholic University, 00168 Rome, Italy
- Sebastian Manuel Milluzzo: Digestive Endoscopy Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, 00135 Rome, Italy; Fondazione Poliambulanza Istituto Ospedaliero, 25121 Brescia, Italy
- Gianluca Esposito: Department of Medical-Surgical Sciences and Translational Medicine, Sant’Andrea Hospital, Sapienza University of Rome, 00168 Rome, Italy
- Emanuele Dilaghi: Department of Medical-Surgical Sciences and Translational Medicine, Sant’Andrea Hospital, Sapienza University of Rome, 00168 Rome, Italy
- Andrea Telese: Department of Gastroenterology, University College London Hospital (UCLH), London NW1 2AF, UK
- Leonardo Henry Eusebi (corresponding author): Division of Gastroenterology and Endoscopy, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40121 Bologna, Italy; Department of Medical and Surgical Sciences, University of Bologna, 40121 Bologna, Italy
18
Esophagus Segmentation in CT Images via Spatial Attention Network and STAPLE Algorithm. SENSORS 2021; 21:s21134556. [PMID: 34283090 PMCID: PMC8271959 DOI: 10.3390/s21134556] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 06/24/2021] [Accepted: 06/28/2021] [Indexed: 11/16/2022]
Abstract
One essential step in radiotherapy treatment planning is organ-at-risk segmentation in computed tomography (CT) images. Many recent studies have focused on organs such as the lung, heart, esophagus, trachea, liver, aorta, kidney, and prostate. Among these, the esophagus is one of the most difficult organs to segment because of its small size, ambiguous boundary, and very low contrast in CT images. To address these challenges, we propose a fully automated framework for esophagus segmentation from CT images. The proposed method processes slice images extracted from the original three-dimensional (3D) volume, so it does not require large computational resources. We employ a spatial attention mechanism with an atrous spatial pyramid pooling module to locate the esophagus effectively, which enhances segmentation performance. To optimize our model, we use group normalization, whose computation is independent of batch size and whose performance is stable. We also use the simultaneous truth and performance level estimation (STAPLE) algorithm to obtain robust segmentation results: the model is first trained with k-fold cross-validation, and the candidate labels generated by each fold are then combined using the STAPLE algorithm, which improves both the Dice and Hausdorff distance scores of our segmentation results. Our method was evaluated on the SegTHOR and StructSeg 2019 datasets, and the experiments show that it outperforms state-of-the-art methods in esophagus segmentation. Our approach thus shows promising results for esophagus segmentation, which remains challenging in medical image analysis.
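The fusion of the k-fold candidate labels can be illustrated with a per-voxel majority vote, a deliberately simplified stand-in for the EM-based rater weighting that STAPLE actually performs:

```python
import numpy as np

def majority_fuse(masks):
    """Fuse binary candidate masks by per-voxel majority vote
    (a simplified stand-in for the EM-based STAPLE algorithm)."""
    stack = np.stack(masks).astype(int)
    # A voxel is foreground if more than half of the candidates agree
    return (stack.sum(axis=0) * 2 > len(masks)).astype(int)

# Three k-fold candidate segmentations of a toy 1-D "scanline"
fold_masks = [
    np.array([1, 1, 0, 0, 1]),
    np.array([1, 0, 0, 1, 1]),
    np.array([1, 1, 0, 0, 0]),
]
fused = majority_fuse(fold_masks)
print(fused.tolist())  # [1, 1, 0, 0, 1]
```

Unlike this majority vote, STAPLE iteratively estimates each candidate's sensitivity and specificity and weights its votes accordingly, which is why it tends to be more robust when the folds differ in quality.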
19
Diniz JOB, Quintanilha DBP, Santos Neto AC, da Silva GLF, Ferreira JL, Netto SMB, Araújo JDL, Da Cruz LB, Silva TFB, da S. Martins CM, Ferreira MM, Rego VG, Boaro JMC, Cipriano CLS, Silva AC, de Paiva AC, Junior GB, de Almeida JDS, Nunes RA, Mogami R, Gattass M. Segmentation and quantification of COVID-19 infections in CT using pulmonary vessels extraction and deep learning. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:29367-29399. [PMID: 34188605 PMCID: PMC8224997 DOI: 10.1007/s11042-021-11153-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 05/26/2021] [Accepted: 06/03/2021] [Indexed: 05/07/2023]
Abstract
At the end of 2019, the World Health Organization (WHO) reported pneumonia that started in Wuhan, China, as a global emergency. Researchers moved quickly to understand COVID-19 and sought solutions for the front-line professionals fighting this fatal disease. One of the tools to aid in the detection, diagnosis, treatment, and prevention of this disease is computed tomography (CT). CT images provide valuable information on how this new disease affects the lungs of patients. However, the analysis of these images is not trivial, especially when quick solutions are needed: detecting and evaluating this disease can be tiring, time-consuming, and susceptible to errors. Thus, in this study, we aim to automatically segment infections caused by COVID-19 and provide quantitative measures of these infections to specialists, serving as a support tool. We use a database of real clinical cases from Pedro Ernesto University Hospital of the State of Rio de Janeiro, Brazil. The method involves five steps: lung segmentation, segmentation and extraction of pulmonary vessels, infection segmentation, infection classification, and infection quantification. For the lung segmentation and infection segmentation tasks, we propose modifications to the traditional U-Net, including batch normalization, leaky ReLU, dropout, and residual blocks, and name it Residual U-Net. The proposed method yields an average Dice value of 77.1% and an average specificity of 99.76%. For quantification of infectious findings, the proposed method achieves results comparable to those of specialists, and no measure presented a value of p < 0.05 in the paired t-test. The results demonstrate the potential of the proposed method as a tool to help medical professionals combat COVID-19.
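The final quantification step can be illustrated by one natural measure, the infected fraction of the segmented lung volume. This toy computation is an assumption about what such a measure looks like, not the paper's exact definition:

```python
import numpy as np

def infection_percentage(lung_mask, infection_mask):
    """Infected fraction of the segmented lung volume, in percent."""
    lung = lung_mask.astype(bool)
    # Only count infection voxels that fall inside the lung mask
    infected = np.logical_and(infection_mask.astype(bool), lung)
    if lung.sum() == 0:
        return 0.0
    return 100.0 * infected.sum() / lung.sum()

lung = np.ones((4, 4), dtype=int)      # toy lung mask: 16 voxels
infection = np.zeros((4, 4), dtype=int)
infection[:2, :2] = 1                  # 4 infected voxels
print(infection_percentage(lung, infection))  # 25.0
```

Restricting the infection mask to the lung mask before counting is what makes the upstream lung segmentation step a prerequisite for reliable quantification.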
Affiliation(s)
- João O. B. Diniz: Federal Institute of Maranhão, BR-226, SN, Campus Grajaú, Vila Nova, Grajaú, MA 65940-00 Brazil; Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Darlan B. P. Quintanilha: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Antonino C. Santos Neto: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Giovanni L. F. da Silva: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil; Dom Bosco Higher Education Unit (UNDB), Av. Colares Moreira, 443 - Jardim Renascença, São Luís, MA 65075-441 Brazil
- Jonnison L. Ferreira: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil; Federal Institute of Amazonas (IFAM), BR-226, SN, Campus Grajaú, Vila Nova, Grajaú, MA 65940-00 Brazil
- Stelmo M. B. Netto: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- José D. L. Araújo: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Luana B. Da Cruz: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Thamila F. B. Silva: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Caio M. da S. Martins: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Marcos M. Ferreira: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Venicius G. Rego: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- José M. C. Boaro: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Carolina L. S. Cipriano: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Aristófanes C. Silva: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Anselmo C. de Paiva: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Geraldo Braz Junior: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- João D. S. de Almeida: Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Rodolfo A. Nunes: Rio de Janeiro State University, Boulevard 28 de Setembro, 77, Vila Isabel, Rio de Janeiro, RJ 20551-030 Brazil
- Roberto Mogami: Rio de Janeiro State University, Boulevard 28 de Setembro, 77, Vila Isabel, Rio de Janeiro, RJ 20551-030 Brazil
- M. Gattass: Pontifical Catholic University of Rio de Janeiro, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ 22453-900 Brazil