1
Khorshidi A. Tumor segmentation via enhanced area growth algorithm for lung CT images. BMC Med Imaging 2023; 23:189. [PMID: 37986046] [PMCID: PMC10662793] [DOI: 10.1186/s12880-023-01126-y]
Abstract
BACKGROUND Because lung tumors are dynamic, studying tumor growth and its changes is of great importance in primary diagnosis. METHODS An enhanced area growth (EAG) algorithm is introduced to segment lung tumors in 2D and 3D on CT images of 60 patients from four different databases, implemented in MATLAB. The early steps of the proposed algorithm are contrast augmentation, determination of color intensity and of the maximum primary tumor radius, thresholding, designation of start and neighbor points in an array, and averaging-based modification of the points in the braid. To determine the new tumor boundaries, the maximum distance from the color-intensity center point of the primary tumor to the modified points is set by considering a larger target region and a new threshold. The tumor center is divided into subsections, and all previous stages are repeated from newly designated points to define multiple candidate boundaries for the tumor. Interpolation between these boundaries creates a new tumor boundary. For the edge-correction phase, lines are drawn from the tumor center at relevant angles and their intersections with the tumor boundaries are fixed. Each new region that meets certain conditions is annexed to the core region to obtain the segmented tumor surface. RESULTS Grouping multiple growth starting points produced the desired precision in delineating the tumor. The proposed algorithm improved tumor identification by more than 16% with a reasonable accuracy acceptance rate, while largely ensuring that the final result is independent of the starting point. The Dice coefficients were 0.80 ± 0.02 and 0.92 ± 0.03 for the primary and enhanced algorithms, respectively (p < 0.05). Lung area determination together with automatic thresholding, growth from several starting points, and edge improvement may reduce human error in radiologists' interpretation of tumor areas and in selection of the algorithm's starting point. CONCLUSIONS The proposed algorithm improved tumor detection by more than 18% with a sufficient accuracy acceptance rate. Since the enhanced algorithm is independent of matrix size and image thickness, it is very likely that it can be readily applied to other contiguous tumor images. TRIAL REGISTRATION PAZHOUHAN, PAZHOUHAN98000032. Registered 4 January 2021, http://pazhouhan.gerums.ac.ir/webreclist/view.action?webreclist_code=19300.
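The abstract describes an enhancement of the classic area/region-growing idea (automatic thresholding, multiple starting points, boundary interpolation, edge correction). For orientation only, the short Python sketch below shows the plain seeded region-growing baseline that such a method builds on; it is not the authors' MATLAB implementation, and the toy image, seed point, and tolerance are assumptions.

```python
# Hedged sketch of plain seeded region growing, the baseline idea that the
# enhanced area growth (EAG) algorithm extends; NOT the authors' MATLAB code.
from collections import deque

import numpy as np

def region_grow(image, seed, tol=20.0):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity lies within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    region_sum, region_n = float(image[seed]), 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - region_sum / region_n) <= tol:
                    mask[ny, nx] = True
                    region_sum += float(image[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return mask

# Toy example: a bright 20 x 20 "tumor" on a dark background.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200
print(region_grow(img, seed=(30, 30)).sum())   # 400 pixels
```

The EAG steps listed in the abstract (contrast augmentation, multi-seed growth, interpolation between candidate boundaries, edge correction) would sit on top of a growth loop of this kind.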
Affiliation(s)
- Abdollah Khorshidi
- School of Paramedical, Gerash University of Medical Sciences, P.O. Box: 7441758666, Gerash, Iran.
2
Kukla P, Maciejewska K, Strojna I, Zapał M, Zwierzchowski G, Bąk B. Extended Reality in Diagnostic Imaging-A Literature Review. Tomography 2023; 9:1071-1082. [PMID: 37368540] [DOI: 10.3390/tomography9030088]
Abstract
The utilization of extended reality (ER) has been increasingly explored in the medical field over the past ten years. A comprehensive analysis of scientific publications was conducted to assess the applications of ER in diagnostic imaging, including ultrasound, interventional radiology, and computed tomography. The study also evaluated the use of ER in patient positioning and medical education, and explored its potential as a replacement for anesthesia and sedation during examinations. The use of ER technologies in medical education has received increased attention in recent years. The technology allows for a more interactive and engaging educational experience, particularly in anatomy and patient positioning, although one may ask whether the technology and its maintenance costs are worth the investment. The results of the analyzed studies suggest that implementing augmented reality in clinical practice is a positive development that expands the capabilities of diagnostic imaging, education, and positioning, and that ER has significant potential to improve the accuracy and efficiency of diagnostic imaging procedures and to enhance the patient experience through better visualization and understanding of medical conditions. Despite these promising advancements, further research is needed to fully realize the potential of ER in the medical field and to address the challenges and limitations associated with its integration into clinical practice.
Affiliation(s)
- Paulina Kukla
- Department of Electroradiology, Poznan University of Medical Sciences, 61-866 Poznan, Poland
- Karolina Maciejewska
- Department of Electroradiology, Poznan University of Medical Sciences, 61-866 Poznan, Poland
- Iga Strojna
- Department of Electroradiology, Poznan University of Medical Sciences, 61-866 Poznan, Poland
- Małgorzata Zapał
- Department of Electroradiology, Poznan University of Medical Sciences, 61-866 Poznan, Poland
- Department of Adult Neurology, Medical University of Gdansk, 80-210 Gdansk, Poland
- Grzegorz Zwierzchowski
- Department of Electroradiology, Poznan University of Medical Sciences, 61-866 Poznan, Poland
- Department of Medical Physics, Greater Poland Cancer Centre, 61-866 Poznan, Poland
- Bartosz Bąk
- Department of Electroradiology, Poznan University of Medical Sciences, 61-866 Poznan, Poland
- Department of Radiotherapy II, Greater Poland Cancer Centre, 61-866 Poznan, Poland
3
Soltani-Nabipour J, Khorshidi A, Noorian B. Lung tumor segmentation using improved region growing algorithm. Nuclear Engineering and Technology 2020. [DOI: 10.1016/j.net.2020.03.011]
4
A novel algorithm for refining cerebral vascular measurements in infants and adults. J Neurosci Methods 2020; 340:108751. [PMID: 32344044] [DOI: 10.1016/j.jneumeth.2020.108751]
Abstract
BACKGROUND Comprehensive quantification of intracranial vascular characteristics by vascular tracing provides an objective clinical assessment of vascular structure. However, weak signal or low contrast in small distal arteries, artifacts due to volitional motion, and vascular pulsation are challenges for accurate vessel tracing from 3D time-of-flight (3D-TOF) magnetic resonance angiography (MRA) images. NEW METHOD A vascular measurement refinement algorithm is developed and validated for robust quantification of intracranial vasculature from 3D-TOF MRA. After automated vascular tracing, centerline positions, lumen radii, and centerline deviations are jointly optimized to restrict traces to within vascular regions in straightened curved planar reformation (CPR) views. The algorithm is validated on simulated vascular images and on repeat 3D-TOF MRA acquired from infants and adults. RESULTS The refinement algorithm reliably estimates vascular radius and corrects deviated centerlines. For the simulated vascular image with a noise level of 1 and a centerline deviation of 3, the mean radius difference is below 15.3% for scan-rescan reliability. Vascular features from repeated clinical scans show significantly improved measurement agreement, with the intra-class correlation coefficient (ICC) improving from 0.55 to 0.70 for infants and from 0.59 to 0.92 for adults. COMPARISON WITH EXISTING METHODS The refinement algorithm is novel because it utilizes straightened CPR views that incorporate information from the entire artery. In addition, the optimization corrects centerline positions, lumen radii, and centerline deviations simultaneously. CONCLUSIONS Intracranial vasculature quantification using a novel refinement algorithm for vascular tracing improves the reliability of vascular feature measurements in both infants and adults.
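To make the lumen-radius ingredient concrete, here is a minimal, hedged sketch that estimates a radius from a straightened cross-sectional intensity profile using a full-width-at-half-maximum rule; the profile, sampling spacing, and threshold rule are assumptions, and the paper's joint optimization of centerline positions, radii, and deviations is not reproduced.

```python
# Hedged sketch: estimate a lumen radius from one straightened cross-sectional
# intensity profile with a full-width-at-half-maximum rule. The Gaussian-like
# profile and 0.1 mm sampling are assumptions, not data from the paper.
import numpy as np

def fwhm_radius(profile, spacing_mm=1.0):
    """Half of the width (in mm) over which intensity exceeds half its range."""
    half = 0.5 * (profile.max() + profile.min())
    return 0.5 * float((profile >= half).sum()) * spacing_mm

x = np.linspace(-5.0, 5.0, 101)            # positions across the vessel, mm
profile = np.exp(-(x / 2.0) ** 2)          # toy vessel intensity profile
print(round(fwhm_radius(profile, spacing_mm=0.1), 2))   # ~1.65 mm
```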
5
Shah A, Abrámoff MD, Wu X. Optimal surface segmentation with convex priors in irregularly sampled space. Med Image Anal 2019; 54:63-75. [PMID: 30836307] [DOI: 10.1016/j.media.2019.02.004]
Abstract
Optimal surface segmentation is a state-of-the-art method for segmenting multiple globally optimal surfaces in volumetric datasets and is widely used in numerous medical image segmentation applications. However, nodes in the graph-based optimal surface segmentation method typically encode uniformly distributed orthogonal voxels of the volume, so the segmentation cannot attain an accuracy finer than a single unit voxel, i.e. the distance between two adjoining nodes in graph space. Accuracy beyond a unit voxel is achievable by exploiting partial volume information in the voxels, which results in non-equidistant spacing between adjoining graph nodes. This paper reports a generalized graph-based multiple surface segmentation method with convex priors which can optimally segment the target surfaces in an irregularly sampled space. The proposed method allows non-equidistant spacing between adjoining graph nodes to achieve subvoxel segmentation accuracy by utilizing the partial volume information in the voxels. This information is exploited by computing a displacement field from the original volume data to identify the subvoxel-accurate centers within each voxel, resulting in non-equidistant spacing between adjoining graph nodes. The smoothness of each surface, modeled as a convex constraint, governs the connectivity and regularity of the surface. We employ an edge-based graph representation to incorporate the necessary constraints, and the globally optimal solution is obtained by computing a minimum s-t cut. The proposed method was validated on 10 intravascular multi-frame ultrasound image datasets for subvoxel segmentation accuracy. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional segmentations.
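The abstract reduces surface segmentation to a minimum s-t cut on a suitably constructed graph. As a toy illustration of that primitive only (the paper's actual construction with column costs, convex smoothness priors, and irregular node spacing is not reproduced), the sketch below computes a minimum cut on a small hand-made graph with networkx; the nodes and capacities are arbitrary assumptions.

```python
# Toy minimum s-t cut with networkx; the graph and capacities are arbitrary
# assumptions used only to illustrate the optimization primitive the paper's
# surface segmentation reduces to.
import networkx as nx

G = nx.DiGraph()
for u, v, cap in [("s", "a", 3.0), ("s", "b", 2.0), ("a", "b", 1.0),
                  ("a", "t", 2.0), ("b", "t", 3.0)]:
    G.add_edge(u, v, capacity=cap)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(cut_value)                        # 5.0: capacity of the cheapest cut
print(sorted(source_side), sorted(sink_side))
```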
Affiliation(s)
- Abhay Shah
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
- Michael D Abrámoff
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Xiaodong Wu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA; Department of Radiation Oncology, University of Iowa, Iowa City, IA, 52242, USA.
6
Shah A, Zhou L, Abrámoff MD, Wu X. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images. Biomed Opt Express 2018; 9:4509-4526. [PMID: 30615698] [PMCID: PMC6157759] [DOI: 10.1364/boe.9.004509]
Abstract
Automated segmentation of object boundaries or surfaces is crucial for quantitative image analysis in numerous biomedical applications. For example, retinal surfaces in optical coherence tomography (OCT) images play a vital role in the diagnosis and management of retinal diseases. Recently, graph based surface segmentation and contour modeling have been developed and optimized for various surface segmentation tasks. These methods require expertly designed, application specific transforms, including cost functions, constraints and model parameters. However, deep learning based methods are able to directly learn the model and features from training data. In this paper, we propose a convolutional neural network (CNN) based framework to segment multiple surfaces simultaneously. We demonstrate the application of the proposed method by training a single CNN to segment three retinal surfaces in two types of OCT images - normal retinas and retinas affected by intermediate age-related macular degeneration (AMD). The trained network directly infers the segmentations for each B-scan in one pass. The proposed method was validated on 50 retinal OCT volumes (3000 B-scans) including 25 normal and 25 intermediate AMD subjects. Our experiment demonstrated statistically significant improvement of segmentation accuracy compared to the optimal surface segmentation method with convex priors (OSCS) and two deep learning based UNET methods for both types of data. The average computation time for segmenting an entire OCT volume (consisting of 60 B-scans each) for the proposed method was 12.3 seconds, demonstrating low computation costs and higher performance compared to the graph based optimal surface segmentation and UNET based methods.
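As a rough, hedged sketch of the general idea (a CNN that maps a B-scan to one height per image column for each surface), the snippet below shows a tiny per-column regression head; the layer sizes, pooling scheme, and the choice of PyTorch are illustrative assumptions and do not reproduce the network described in the paper.

```python
# Hedged sketch of per-column surface regression with a tiny CNN: map an OCT
# B-scan to one height value per column for each of three surfaces. Layer
# sizes and pooling are illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class SurfaceRegressor(nn.Module):
    def __init__(self, n_surfaces=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_surfaces, kernel_size=1)

    def forward(self, bscan):                 # bscan: (batch, 1, depth, width)
        f = self.features(bscan)              # (batch, 32, depth, width)
        f = f.mean(dim=2, keepdim=True)       # pool over the depth axis
        return self.head(f).squeeze(2)        # (batch, n_surfaces, width)

model = SurfaceRegressor()
heights = model(torch.randn(2, 1, 128, 200))  # two toy 128 x 200 B-scans
print(heights.shape)                          # torch.Size([2, 3, 200])
```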
Affiliation(s)
- Abhay Shah
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Leixin Zhou
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Michael D. Abrámoff
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, USA
- Xiaodong Wu
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
7
Lassen-Schmidt BC, Kuhnigk JM, Konrad O, van Ginneken B, van Rikxoort EM. Fast interactive segmentation of the pulmonary lobes from thoracic computed tomography data. Phys Med Biol 2017; 62:6649-6665. [PMID: 28570264] [DOI: 10.1088/1361-6560/aa7674]
Abstract
Automated lung lobe segmentation methods often fail for challenging and clinically relevant cases with incomplete fissures or substantial amounts of pathology. We present a fast and intuitive method to interactively correct a given lung lobe segmentation or to quickly create a lobe segmentation from scratch based on a lung mask. A given lobar boundary is converted into a mesh by principal component analysis of 3D lobar boundary markers to obtain a plane whose nodes correspond to the positions of the markers. An observer can modify the mesh by drawing on 2D slices in arbitrary orientations. After each drawing, the mesh is immediately adapted in a 3D region around the user interaction. For evaluation we participated in the international lung lobe segmentation challenge LObe and Lung Analysis 2011 (LOLA11). Two observers applied the method to correct a given lung lobe segmentation obtained by a fully automatic method for all 55 CT scans of LOLA11. On average, observers 1 and 2 required 8 ± 4 and 25 ± 12 interactions per case and took 1:30 ± 0:34 and 3:19 ± 1:29 min, respectively. The average distances to the reference segmentation improved from an initial 2.68 ± 14.71 mm to 0.89 ± 1.63 mm and 0.74 ± 1.51 mm, respectively. In addition, one observer applied the proposed method to create a segmentation from scratch, which took 3:44 ± 0:58 min per case on average, with an average of 20 ± 3 interactions, reaching an average distance to the reference of 0.77 ± 1.14 mm. Thus, both the interactive corrections and the creation of a segmentation from scratch were feasible in a short time with excellent results and only little interaction. Since the mesh adaptation is independent of image features, the method can successfully handle patients with severe pathologies, provided that the human operator is capable of correctly indicating the lobar boundaries.
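The mesh initialization step mentioned above (principal component analysis of 3D boundary markers to obtain a plane) can be sketched in a few lines; the marker coordinates below are made up, and the mesh construction and interactive adaptation of the method are not reproduced.

```python
# Hedged sketch of the PCA step: fit a plane to 3D lobar-boundary markers so a
# 2D mesh can be parameterised on it. Marker coordinates are toy assumptions.
import numpy as np

def fit_plane_pca(points):
    """Return (centroid, unit normal) of the best-fit plane through Nx3 points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]        # last right-singular vector = least variance

markers = np.array([[0.0, 0.0, 0.1], [10.0, 0.0, -0.2], [0.0, 12.0, 0.0],
                    [10.0, 12.0, 0.3], [5.0, 6.0, -0.1]])
centroid, normal = fit_plane_pca(markers)
print(centroid, normal)
```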
8
Patwardhan KA, Beichel RR, Smith BJ, Mart C, Plichta KA, Chang T, Sonka M, Graham MM, Magnotta V, Casavant T, Buatti JM. Development of a radiobiological evaluation tool to assess the expected clinical impacts of contouring accuracy between manual and semi-automated segmentation algorithms. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:3409-3412. [PMID: 29060629] [DOI: 10.1109/embc.2017.8037588]
Abstract
RADEval is a tool developed to assess the expected clinical impact of contouring accuracy when comparing manual contouring and semi-automated segmentation. The RADEval tool, designed to process large-scale datasets, imported a total of 2,760 segmentation datasets, along with Simultaneous Truth and Performance Level Estimation (STAPLE) volumes to act as ground-truth tumor segmentations. Virtual dose maps were created within RADEval, and two tumor control probability (TCP) values, using a logistic and a Poisson TCP model, were calculated for each STAPLE and each dose map. RADEval also virtually generated a ring of normal tissue. To evaluate clinical impact, two uncomplicated TCP (UTCP) values were calculated by using two TCP-NTCP correlation parameters (δ = 0 and 1). Normal tissue complication probability (NTCP) values showed that semi-automatic segmentation resulted in lower NTCP by an average of 1.5-1.6%, regardless of STAPLE design, even though each normal tissue ring was created from each STAPLE (p < 0.00001). TCP and UTCP showed no statistically significant differences (p ≥ 0.1884). The intra-operator standard deviations (SDs) for TCP, NTCP, and UTCP were significantly lower for the semi-automatic segmentation method regardless of STAPLE design (p < 0.0331). Both intra- and inter-operator SDs of TCP, NTCP, and UTCP were significantly lower for semi-automatic segmentation for the STAPLE 1 design (p < 0.0331). RADEval was able to efficiently process 4,920 datasets across the two STAPLE designs and successfully assess the expected clinical impact of contouring accuracy.
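For readers unfamiliar with the dose-response models mentioned, a minimal sketch of one common logistic TCP parameterization follows; the D50 and gamma50 values are illustrative assumptions, not the parameters used by RADEval, and the Poisson TCP, NTCP, and UTCP calculations are not reproduced.

```python
# Hedged sketch of a common logistic TCP parameterisation for a uniform dose.
# D50 (dose giving 50% control) and gamma50 (normalised slope at D50) are
# illustrative assumptions, not RADEval's parameters.
def logistic_tcp(dose_gy, d50=60.0, gamma50=2.0):
    """Tumor control probability: TCP = 1 / (1 + (D50/D)^(4*gamma50))."""
    return 1.0 / (1.0 + (d50 / dose_gy) ** (4.0 * gamma50))

for d in (50.0, 60.0, 70.0):
    print(d, round(logistic_tcp(d), 3))   # rises through 0.5 at D = D50
```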
9
Beichel RR, Van Tol M, Ulrich EJ, Bauer C, Chang T, Plichta KA, Smith BJ, Sunderland JJ, Graham MM, Sonka M, Buatti JM. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach. Med Phys 2016; 43:2948-2964. [PMID: 27277044] [PMCID: PMC4874930] [DOI: 10.1118/1.4948679]
Abstract
Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure is constructed around a user-provided approximate lesion centerpoint, and a suitable cost function is derived based on local image statistics. To handle frequently occurring ambiguous situations (e.g., lesions adjacent to each other versus a lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and the standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, and treatment outcome prediction.
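The accuracy comparison above is based on the Dice overlap coefficient; a minimal sketch of that measure is shown below, with toy masks that are assumptions rather than data from the study.

```python
# Hedged sketch of the Dice overlap coefficient, 2|A∩B| / (|A| + |B|),
# evaluated on two toy binary masks (assumed values, not study data).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

m1 = np.zeros((32, 32), dtype=bool); m1[8:24, 8:24] = True
m2 = np.zeros((32, 32), dtype=bool); m2[12:28, 12:28] = True
print(round(dice(m1, m2), 3))   # 0.562 for these toy masks
```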
Affiliation(s)
- Reinhard R Beichel
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242; The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242; and Department of Internal Medicine, The University of Iowa, Iowa City, Iowa 52242
- Markus Van Tol
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
- Ethan J Ulrich
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
- Christian Bauer
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
- Tangel Chang
- Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242
- Kristin A Plichta
- Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242
- Brian J Smith
- Department of Biostatistics, The University of Iowa, Iowa City, Iowa 52242
- John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, Iowa 52242
- Michael M Graham
- Department of Radiology, The University of Iowa, Iowa City, Iowa 52242
- Milan Sonka
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242; Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242; and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
- John M Buatti
- Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
10
Park SH, Lee S, Yun ID, Lee SU. Structured patch model for a unified automatic and interactive segmentation framework. Med Image Anal 2015; 24:297-312. [PMID: 25682219] [DOI: 10.1016/j.media.2015.01.003]
Abstract
We present a novel interactive segmentation framework incorporating a priori knowledge learned from training data. The knowledge is learned as a structured patch model (StPM) comprising sets of corresponding local patch priors and their pairwise spatial distribution statistics which represent the local shape and appearance along its boundary and the global shape structure, respectively. When successive user annotations are given, the StPM is appropriately adjusted in the target image and used together with the annotations to guide the segmentation. The StPM reduces the dependency on the placement and quantity of user annotations with little increase in complexity since the time-consuming StPM construction is performed offline. Furthermore, a seamless learning system can be established by directly adding the patch priors and the pairwise statistics of segmentation results to the StPM. The proposed method was evaluated on three datasets, respectively, of 2D chest CT, 3D knee MR, and 3D brain MR. The experimental results demonstrate that within an equal amount of time, the proposed interactive segmentation framework outperforms recent state-of-the-art methods in terms of accuracy, while it requires significantly less computing and editing time to obtain results with comparable accuracy.
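To give one concrete flavor of the "pairwise spatial distribution statistics" in the StPM, the hedged sketch below records the mean and covariance of the offset between two corresponding patch centers across training images; the coordinates are made up, and the full model (local patch priors, adjustment to user annotations, online updating) is not reproduced.

```python
# Hedged sketch: mean and covariance of the spatial offset between two
# corresponding patch centres across training images. Coordinates are toy
# assumptions; this is only one ingredient of the structured patch model.
import numpy as np

# centres[i, j] = (x, y) of patch j in training image i.
centres = np.array([[[10.0, 12.0], [30.0, 14.0]],
                    [[11.0, 13.0], [31.0, 16.0]],
                    [[ 9.0, 11.0], [29.0, 13.0]]])

offsets = centres[:, 1] - centres[:, 0]   # offset of patch 1 relative to patch 0
print(offsets.mean(axis=0))               # mean pairwise offset
print(np.cov(offsets.T))                  # its 2x2 covariance
```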
Affiliation(s)
- Sang Hyun Park
- Department of Electrical Engineering, ASRI, INMC, Seoul National University, Seoul, Republic of Korea.
- Soochahn Lee
- Department of Electronic Engineering, Soonchunhyang University, Asan-si, Republic of Korea.
- Il Dong Yun
- Department of Digital Information Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea.
- Sang Uk Lee
- Department of Electrical Engineering, ASRI, INMC, Seoul National University, Seoul, Republic of Korea.
11
[Current reporting in radiology: what will happen tomorrow?]. Radiologe 2014; 54:45-52. [PMID: 24402724] [DOI: 10.1007/s00117-013-2540-3]
Abstract
CLINICAL/METHODICAL ISSUE Reporting in radiology faces considerable changes in the near future that will be influenced by a broader understanding of the task and by increasing technological possibilities. STANDARD RADIOLOGICAL METHODS Until now, a radiological report could be regarded as a text phrased by a radiologist after viewing imaging data. METHODICAL INNOVATIONS New solutions will be enabled by advances in the visualization of large datasets and in extracting, analyzing, and communicating metadata, as well as by improved integration and interpretation of clinical information. PERFORMANCE Virtual reality, texture analysis, growing networks, semantic annotation, data mining, and context-based presentation have the potential to extensively change the everyday working routine. ACHIEVEMENTS Although many of these developments are still in a laboratory phase, their impact on the process of reporting can already be predicted. PRACTICAL RECOMMENDATIONS As the leading community in information analysis and technology, radiology as a discipline should strive to lead and shape these impending changes.
12
Kockelkorn TTJP, Schaefer-Prokop CM, Bozovic G, Muñoz-Barrutia A, van Rikxoort EM, Brown MS, de Jong PA, Viergever MA, van Ginneken B. Interactive lung segmentation in abnormal human and animal chest CT scans. Med Phys 2014; 41:081915. [DOI: 10.1118/1.4890597]
13
Oguz I, Sonka M. LOGISMOS-B: layered optimal graph image segmentation of multiple objects and surfaces for the brain. IEEE Trans Med Imaging 2014; 33:1220-1235. [PMID: 24760901] [PMCID: PMC4324764] [DOI: 10.1109/tmi.2014.2304499]
Abstract
Automated reconstruction of the cortical surface is one of the most challenging problems in the analysis of human brain magnetic resonance imaging (MRI). A desirable segmentation must be both spatially and topologically accurate, as well as robust and computationally efficient. We propose a novel algorithm, LOGISMOS-B, based on probabilistic tissue classification, generalized gradient vector flows and the LOGISMOS graph segmentation framework. Quantitative results on MRI datasets from both healthy subjects and multiple sclerosis patients using a total of 16,800 manually placed landmarks illustrate the excellent performance of our algorithm with respect to spatial accuracy. Remarkably, the average signed error was only 0.084 mm for the white matter and 0.008 mm for the gray matter, even in the presence of multiple sclerosis lesions. Statistical comparison shows that LOGISMOS-B produces a significantly more accurate cortical reconstruction than FreeSurfer, the current state-of-the-art approach (p << 0.001). Furthermore, LOGISMOS-B enjoys a run time that is less than a third of that of FreeSurfer, which is both substantial, considering the latter takes 10 h/subject on average, and a statistically significant speedup.
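The validation above reports an average signed error between manually placed landmarks and the reconstructed surfaces. As a hedged, simplified sketch of that kind of measure, the snippet below computes the mean signed distance of points to a reference plane along its normal; the plane and points are toy values, whereas LOGISMOS-B evaluates against reconstructed cortical surfaces rather than a plane.

```python
# Hedged sketch of an "average signed error": mean signed distance of landmark
# points to a reference plane along its unit normal (positive on the normal's
# side, negative on the other). Plane and points are toy assumptions.
import numpy as np

def mean_signed_error(points, plane_point, plane_normal):
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.mean((points - plane_point) @ n))

landmarks = np.array([[0.0, 0.0, 0.10], [1.0, 0.0, -0.05], [0.0, 1.0, 0.02]])
print(round(mean_signed_error(landmarks, np.zeros(3),
                              np.array([0.0, 0.0, 1.0])), 3))   # 0.023 mm (toy)
```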
Affiliation(s)
- Ipek Oguz
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242 USA
- Milan Sonka
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242 USA