1. Peres C, Mammano F. A Protocol for the Automated Assessment of Cutaneous Pathology in a Mouse Model of Hemichannel Dysfunction. Methods Mol Biol 2024;2801:177-187. PMID: 38578421. DOI: 10.1007/978-1-0716-3842-2_13.
Abstract
In this chapter, we provide detailed instructions to perform quantitative reflectance imaging in a mouse model of a rare epidermal disorder caused by hyperactive connexin 26 hemichannels. Reflectance imaging is a versatile and powerful tool in dermatology, offering noninvasive, high-resolution insights into skin pathology, which is essential for both clinical practice and research. This approach offers several advantages and applications. Unlike traditional biopsy, reflectance imaging is noninvasive, allowing for real-time, in vivo examination of the skin. This is particularly valuable for monitoring chronic conditions or assessing the efficacy of treatments over time, enabling the detailed examination of skin morphology. This is crucial for identifying features of skin diseases such as cancers, inflammatory conditions, and infections. In therapeutic applications, reflectance imaging can be used to monitor the response of skin lesions to treatments. It can help in identifying the most representative area of a lesion for biopsy, thereby increasing the diagnostic accuracy. Reflectance imaging can also be used to diagnose and monitor inflammatory skin diseases, like psoriasis and eczema, by visualizing changes in skin structure and cellular infiltration. As the technology becomes more accessible, it has potential in telemedicine, allowing for remote diagnosis and monitoring of skin conditions. In academic settings, reflectance imaging can be a powerful research tool, enabling the study of skin pathology and the effects of novel treatments, including the development of monoclonal antibodies for therapeutic applications.
2. Matsumoto A, Yonehara K. Emerging computational motifs: Lessons from the retina. Neurosci Res 2023;196:11-22. PMID: 37352934. DOI: 10.1016/j.neures.2023.06.003.
Abstract
The retinal neuronal circuit is the first stage of visual processing in the central nervous system. The efforts of scientists over the last few decades indicate that the retina is not merely an array of photosensitive cells, but also a processor that performs various computations. Within a thickness of only ∼200 µm, the retina consists of diverse forms of neuronal circuits, each of which encodes different visual features. Since the discovery of direction-selective cells by Horace Barlow and Richard Hill, the mechanisms that generate direction selectivity in the retina have remained a fascinating research topic. This review provides an overview of recent advances in our understanding of direction-selectivity circuits. Beyond the conventional wisdom of direction selectivity, emerging findings indicate that the retina utilizes complicated and sophisticated mechanisms in which excitatory and inhibitory pathways are involved in the efficient encoding of motion information. As will become evident, the discovery of computational motifs in the retina facilitates an understanding of how sensory systems establish feature selectivity.
3. Hayta EN, Rickert CA, Lieleg O. Topography quantifications allow for identifying the contribution of parental strains to physical properties of co-cultured biofilms. Biofilm 2021;3:100044. PMID: 33665611. PMCID: PMC7902895. DOI: 10.1016/j.bioflm.2021.100044.
Abstract
Most biofilm research has so far focused on investigating biofilms generated by single bacterial strains. However, such single-species biofilms are rare in nature, where bacteria typically coexist with other microorganisms. Although the possible interactions occurring between different bacteria are well studied from a biological point of view, little is known about what determines the material properties of a multi-species biofilm. Here, we ask how the co-cultivation of two B. subtilis strains affects important biofilm properties such as surface topography and wetting behavior. We find that, even though each daughter colony typically resembles one of the parent colonies in terms of morphology and wetting, it nevertheless exhibits a significantly different surface topography. Yet, this difference is only detectable via a quantitative metrological analysis of the biofilm surface. Furthermore, we show that this difference is due to the presence of bacteria belonging to the 'other' parent strain, i.e., the one that does not dominate the biofilm features. The findings presented here may pinpoint new strategies for generating biofilms with hybrid properties from two different bacterial strains. In such engineered biofilms, it might be possible to combine desired properties from two strains by co-cultivation.
4. Zhao F, Huang S, Zhang X. High sensitivity and specificity feature detection in liquid chromatography-mass spectrometry data: A deep learning framework. Talanta 2021;222:121580. PMID: 33167267. DOI: 10.1016/j.talanta.2020.121580.
Abstract
Feature detection is a crucial pre-processing step for high-resolution liquid chromatography-mass spectrometry (LC-MS) data analysis. Typical practices based on thresholds or rigid mathematical assumptions perform poorly when detecting low-abundance and non-ideally distributed compounds. We herein introduce a novel deep-learning-based feature detection method, named SeA-M2Net, that treats feature detection as an image-based object detection task. By employing the raw data directly and integrating all related factors (e.g., LC elution, charge state, and isotope distribution) into two-dimensional pseudo-color images used to calculate the probability that a compound is present, low-abundance compounds can be well preserved and observed. More importantly, SeA-M2Net, with its deep multilevel and multiscale structure, learns compound patterns from data instead of assuming a parametric mathematical model. All parameters in SeA-M2Net are learned from data during training, allowing maximum flexibility in accommodating deformations of the pattern distribution. The algorithm is tested on several LC-MS datasets of multiple biological samples obtained from different instruments with varied experimental settings. We demonstrate the superiority of the new approach in handling complex compound patterns (e.g., low abundance, overlapping regions, LC shifts, and missing values). Our experiments indicate that SeA-M2Net outperforms widely used detection methods in terms of detection accuracy.
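The rasterization step that an image-based formulation like SeA-M2Net relies on can be illustrated with a minimal sketch: binning centroided peaks (retention time, m/z, intensity) into a 2D intensity grid that an object detector could consume. The function name and bin counts below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def peaks_to_image(rt, mz, intensity, rt_bins=64, mz_bins=64):
    """Bin centroided LC-MS peaks into a 2D intensity grid.

    rt, mz, intensity: 1D sequences of equal length (one entry per peak).
    Returns a (rt_bins, mz_bins) array; peaks in the same cell accumulate.
    """
    rt = np.asarray(rt, dtype=float)
    mz = np.asarray(mz, dtype=float)
    w = np.asarray(intensity, dtype=float)
    # np.histogram2d sums the weights of all peaks falling into each cell
    img, _, _ = np.histogram2d(rt, mz, bins=(rt_bins, mz_bins), weights=w)
    return img

# Toy example: two co-eluting peaks plus one low-abundance peak
img = peaks_to_image(rt=[10.0, 10.1, 55.0],
                     mz=[400.2, 400.2, 801.5],
                     intensity=[1e5, 2e5, 3e3])
```

A real pipeline would render overlapping tiles of the full map and feed them to the detector; the low-abundance peak survives here because no threshold is applied before rasterization.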
5. Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinformatics 2019;20:281. PMID: 31167642. PMCID: PMC6551243. DOI: 10.1186/s12859-019-2823-4.
Abstract
BACKGROUND The limitations of traditional computer-aided detection (CAD) systems for mammography, the extreme importance of early detection of breast cancer, and the high cost of false diagnoses drive researchers to investigate deep learning (DL) methods for mammograms (MGs). Recent breakthroughs in DL, in particular convolutional neural networks (CNNs), have achieved remarkable advances in the medical field. Specifically, CNNs are used in mammography for lesion localization and detection, risk assessment, image retrieval, and classification tasks. CNNs also help radiologists provide more accurate diagnoses by delivering precise quantitative analysis of suspicious lesions. RESULTS In this survey, we conducted a detailed review of the strengths, limitations, and performance of the most recent CNN applications for analyzing MG images. It summarizes 83 research studies applying CNNs to various tasks in mammography and focuses on identifying the best practices used in these studies to improve diagnostic accuracy. This survey also provides deep insight into the architecture of the CNNs used for various tasks. Furthermore, it describes the most common publicly available MG repositories and highlights their main features and strengths. CONCLUSIONS The mammography research community can use this survey as a basis for current and future studies. The comparison among common publicly available MG repositories guides the community in selecting the most appropriate database for a given application. Moreover, this survey lists best practices that improve the performance of CNNs, including image pre-processing and the use of multi-view images. In addition, techniques such as transfer learning (TL), data augmentation, batch normalization, and dropout are appealing solutions for reducing overfitting and increasing the generalization of CNN models. Finally, this survey identifies research challenges and directions that require further investigation by the community.
6. Jivraj J, Deorajh R, Lai P, Chen C, Nguyen N, Ramjist J, Yang VXD. Robotic laser osteotomy through penscriptive structured light visual servoing. Int J Comput Assist Radiol Surg 2019;14:809-818. PMID: 30730030. DOI: 10.1007/s11548-018-01905-x.
Abstract
PURPOSE Planning osteotomies is a task that surgeons perform as part of the standard surgical workflow. This task, however, becomes more difficult and less intuitive when a robot is tasked with performing the osteotomy. In this study, we aim to provide a new method that allows surgeons highly intuitive trajectory planning, similar to the way an attending surgeon would instruct a junior. METHODS Planning an osteotomy, especially during a craniotomy, is performed intraoperatively using a sterile surgical pen or pencil directly on the exposed bone surface. This paper presents a new method for generating osteotomy trajectories for a multi-DOF robotic manipulator using this same approach, relaying the penscribed cut path to the manipulator as a three-dimensional trajectory. The penscribed cut path is acquired using structured light imaging, and detection, segmentation, optimization, and orientation generation of the Cartesian trajectory are done autonomously after minimal user input. RESULTS A 7-DOF manipulator (KUKA IIWA) is able to follow fully penscribed trajectories with sub-millimeter accuracy in the target plane and perpendicular to it (0.46 mm and 0.36 mm absolute mean error, respectively). CONCLUSIONS The robot is able to precisely follow cut paths drawn by the surgeon directly onto the exposed bony surface of the skull. We demonstrate through this study that the current surgical workflow does not have to be drastically modified to introduce robotic technology into the operating room. We show that it is possible to guide a robot to perform an osteotomy in much the same way a senior surgeon would show a trainee, by using a simple surgical pen or pencil.
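One step in turning a digitized pen stroke into a manipulator trajectory can be sketched simply: resampling the ordered 3D surface points of the stroke at uniform arc length to obtain evenly spaced waypoints. This is a generic illustration of the idea, not the authors' pipeline, and the function name is an assumption.

```python
import numpy as np

def resample_path(points, n_waypoints=50):
    """Resample an ordered polyline of 3D points at uniform arc length."""
    p = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_waypoints)       # uniform spacing
    # Interpolate each coordinate against arc length
    return np.column_stack([np.interp(s_new, s, p[:, k]) for k in range(3)])

# Toy penscribed path: an L-shaped cut on a flat bone surface (z = 0)
path = [(0, 0, 0), (10, 0, 0), (10, 5, 0)]
waypoints = resample_path(path, n_waypoints=16)
```

A full implementation would additionally smooth the stroke and attach a tool orientation (e.g., the surface normal) to each waypoint before sending the Cartesian trajectory to the robot.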
7. Keleş MF, Mongeau JM, Frye MA. Object features and T4/T5 motion detectors modulate the dynamics of bar tracking by Drosophila. J Exp Biol 2019;222:jeb190017. PMID: 30446539. DOI: 10.1242/jeb.190017.
Abstract
Visual objects can be discriminated by static spatial features such as luminance or dynamic features such as relative movement. Flies track a solid dark vertical bar moving on a bright background, a behavioral reaction so strong that for a rigidly tethered fly, the steering trajectory is phase advanced relative to the moving bar, apparently in anticipation of its future position. By contrast, flickering bars that generate no coherent motion or have a surface texture that moves in the direction opposite to the bar generate steering responses that lag behind the stimulus. It remains unclear how the spatial properties of a bar influence behavioral response dynamics. Here, we show that a dark bar defined by its luminance contrast to the uniform background drives a co-directional steering response that is phase advanced relative to the response to a textured bar defined only by its motion relative to a stationary textured background. The textured bar drives an initial contra-directional turn and phase-locked tracking. The qualitatively distinct response dynamics could indicate parallel visual processing of a luminance versus motion-defined object. Calcium imaging shows that T4/T5 motion-detecting neurons are more responsive to a solid dark bar than a motion-defined bar. Genetically blocking T4/T5 neurons eliminates the phase-advanced co-directional response to the luminance-defined bar, leaving the orientation response largely intact. We conclude that T4/T5 neurons mediate a co-directional optomotor response to a luminance-defined bar, thereby driving phase-advanced wing kinematics, whereas separate unknown visual pathways elicit the contra-directional orientation response.
8. Li Z, Lu Y, Guo Y, Cao H, Wang Q, Shui W. Comprehensive evaluation of untargeted metabolomics data processing software in feature detection, quantification and discriminating marker selection. Anal Chim Acta 2018;1029:50-57. PMID: 29907290. DOI: 10.1016/j.aca.2018.05.001.
Abstract
Data analysis represents a key challenge for untargeted metabolomics studies, commonly requiring extensive processing of thousands of metabolite peaks contained in raw high-resolution MS data. Although a number of software packages have been developed to facilitate untargeted data processing, they have not been comprehensively scrutinized for their capabilities in feature detection, quantification, and marker selection using a well-defined benchmark sample set. In this study, we acquired a benchmark dataset from standard mixtures consisting of 1100 compounds with specified concentration ratios, including 130 compounds with significant concentration variation. The five software packages evaluated here (MS-DIAL, MZmine 2, XCMS, MarkerView, and Compound Discoverer) showed similar performance in detecting true features derived from compounds in the mixtures. However, significant differences between the packages were observed in the relative quantification of true features in the benchmark dataset. MZmine 2 outperformed the other software in quantification accuracy, and it reported the most true discriminating markers together with the fewest false markers. Furthermore, we assessed the selection of discriminating markers by the different packages using both the benchmark dataset and a real-case metabolomics dataset, and we propose the combined use of two packages to increase the confidence of biomarker identification. Our findings from this comprehensive evaluation should help guide future improvements of these widely used bioinformatics tools and enable users to properly interpret their metabolomics results.
9. Energy Spectrum CT Image Detection Based Dimensionality Reduction with Phase Congruency. J Med Syst 2018;42:49. PMID: 29374333. DOI: 10.1007/s10916-018-0904-y.
Abstract
Image feature detection is widely used in image registration, image stitching, and object recognition; feature detection algorithms can be applied to artificial images and can likewise be used on energy spectrum CT images. A new phase congruency detection algorithm based on dimensionality reduction is proposed in this paper. We focus on the phase congruency of spectral CT images and use dimensionality reduction to integrate the phase congruency information detected in the image. The experimental results show that the algorithm detects energy spectrum CT images with clear edges and contours, which is beneficial for subsequent processing. The presented algorithm may also help medical professionals diagnose disease more effectively.
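Phase congruency itself can be sketched in one dimension: a feature is marked where band-pass filter responses across scales agree in phase, which for an ideal step yields congruency near 1 at the edge. The log-Gabor filter bank below follows the standard textbook construction (after Kovesi) and is only an illustrative assumption about the detector, not this paper's exact algorithm.

```python
import numpy as np

def phase_congruency_1d(signal, n_scales=4, f0=0.05, mult=2.0, sigma=0.55):
    """1D phase congruency from analytic log-Gabor responses at several scales."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n)                     # signed frequencies
    pos = f > 0                               # analytic filter: positive freqs only
    sum_resp = np.zeros(n, dtype=complex)     # vector sum of complex responses
    sum_amp = np.zeros(n)                     # scalar sum of amplitudes
    for s in range(n_scales):
        fc = f0 * mult ** s                   # center frequency of this scale
        G = np.zeros(n)
        G[pos] = np.exp(-(np.log(f[pos] / fc)) ** 2 / (2 * np.log(sigma) ** 2))
        resp = np.fft.ifft(X * G)             # analytic (complex) response
        sum_resp += resp
        sum_amp += np.abs(resp)
    # Congruency: responses that align in phase give |sum| close to sum of |.|
    return np.abs(sum_resp) / (sum_amp + 1e-9)

# A step edge at index 128: congruency should be high near the step
sig = np.concatenate([np.zeros(128), np.ones(128)])
pc = phase_congruency_1d(sig)
```

The 2D case used for CT images replaces the 1D bank with oriented filters and combines orientations, which is where the paper's dimensionality-reduction step comes in.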
10. Suthar M, Asghari H, Jalali B. Feature Enhancement in Visually Impaired Images. IEEE Access 2017;6:1407-1415. PMID: 30581690. PMCID: PMC6301048. DOI: 10.1109/access.2017.2779107.
Abstract
One of the major open problems in computer vision is feature detection in visually impaired images. In this paper, we describe a potential solution using the Phase Stretch Transform, a new computational approach for image analysis, edge detection, and resolution enhancement that is inspired by the physics of the photonic time stretch technique. We mathematically derive the intrinsic nonlinear transfer function and demonstrate how it leads to (1) superior performance at low contrast levels and (2) a reconfigurable operator for hyper-dimensional classification. We prove that the Phase Stretch Transform equalizes the input image brightness across a range of intensities, resulting in a high dynamic range in visually impaired images. We also show further improvement in the dynamic range by combining our method with conventional techniques. Finally, our results suggest a new paradigm for computing mathematical derivatives via group delay dispersion operations.
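The core mechanics of a phase-stretch-style operator can be sketched compactly: multiply the image spectrum by an all-pass filter whose phase varies with frequency, then take the phase of the output as the feature map. The quadratic phase kernel below is a deliberate simplification chosen for brevity; the published operator uses a specific warped phase profile, so treat this as an illustrative assumption rather than the authors' transform.

```python
import numpy as np

def pst_sketch(image, strength=0.5):
    """Minimal PST-like edge operator: all-pass spectral phase, then angle()."""
    img = np.asarray(image, dtype=float)
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r2 = fx ** 2 + fy ** 2                       # squared radial frequency
    kernel = np.exp(-1j * strength * r2)         # simplified quadratic phase warp
    out = np.fft.ifft2(F * kernel)
    return np.angle(out)                         # output phase = feature map

# A flat image yields (near-)zero phase; a step edge introduces nonzero phase
flat = pst_sketch(np.ones((32, 32)))
step = np.zeros((32, 32)); step[:, 16:] = 1.0
edges = pst_sketch(step)
```

Because the filter is all-pass, no spectral energy is discarded; the phase response concentrates around intensity transitions, which is what gives the method its sensitivity at low contrast.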
11. Rigosi E, Wiederman SD, O'Carroll DC. Photoreceptor signalling is sufficient to explain the detectability threshold of insect aerial pursuers. J Exp Biol 2017;220:4364-4369. PMID: 29187619. DOI: 10.1242/jeb.166207.
Abstract
An essential biological task for many flying insects is the detection of small, moving targets, such as when pursuing prey or conspecifics. Neural pathways underlying such 'target-detecting' behaviours have been investigated for their sensitivity and tuning properties (size, velocity). However, which stage of neuronal processing limits target detection is not yet known. Here, we investigated several skilled, aerial pursuers (males of four insect species), measuring the target-detection limit (signal-to-noise ratio) of light-adapted photoreceptors. We recorded intracellular responses to moving targets of varying size, extended well below the nominal resolution of single ommatidia. We found that the signal detection limit (2× photoreceptor noise) matches physiological or behavioural target-detection thresholds observed in each species. Thus, across a diverse range of flying insects, individual photoreceptor responses to changes in light intensity establish the sensitivity of the feature detection pathway, indicating later stages of processing are dedicated to feature tuning, tracking and selection.
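The 2× photoreceptor-noise criterion used above as the signal detection limit can be written as a one-line test: a response counts as detectable when its peak amplitude exceeds twice the standard deviation of the baseline noise. The sketch below is an illustrative reading of that criterion, not the authors' analysis code, and the example values are invented.

```python
import numpy as np

def is_detectable(response, noise_sd, k=2.0):
    """Signal-detection criterion: peak |response| exceeds k x noise SD."""
    return float(np.max(np.abs(response))) > k * noise_sd

# Hypothetical photoreceptor with 0.5 mV baseline noise SD
noise_sd = 0.5
assert is_detectable([0.1, 1.2, 0.3], noise_sd)        # 1.2 mV > 1.0 mV
assert not is_detectable([0.2, 0.6, 0.4], noise_sd)    # 0.6 mV < 1.0 mV
```

The paper's point is that this photoreceptor-level threshold already matches the behavioural target-detection limit, so later neurons need not improve sensitivity, only tuning.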
12. Lobachev O, Ulrich C, Steiniger BS, Wilhelmi V, Stachniss V, Guthe M. Feature-based multi-resolution registration of immunostained serial sections. Med Image Anal 2016;35:288-302. PMID: 27494805. DOI: 10.1016/j.media.2016.07.010.
Abstract
The form and exact function of the blood vessel network in some human organs, like spleen and bone marrow, are still open research questions in medicine. In this paper, we propose a method to register the immunohistological stainings of serial sections of spleen and bone marrow specimens to enable the visualization and visual inspection of blood vessels. As these vary greatly in caliber, from mesoscopic (millimeter range) to microscopic (a few micrometers, comparable to a single erythrocyte), we need to utilize a multi-resolution approach. Our method is fully automatic; it is based on feature detection and sparse matching. We utilize a rigid alignment and then a non-rigid deformation, iteratively dealing with increasingly smaller features. Our tool pipeline can already handle series of complete scans at extremely high resolution, up to 620 megapixels. The improvement presented here extends the range of represented details down to the smallest capillaries. This paper provides details on the multi-resolution non-rigid registration approach we use. Our application is novel in the way the alignment and subsequent deformations are computed (using features, i.e., "sparse"). The deformations are based on all images in the stack ("global"). We also present volume renderings and a 3D reconstruction of the vascular network in human spleen and bone marrow at a level not possible before. Our registration makes it easy to track even the smallest blood vessels, thus granting experts a better comprehension. A quantitative evaluation of our method and related state-of-the-art approaches with seven different quality measures shows the efficiency of our method. We also provide z-profiles and enlarged volume renderings from three different registrations for visual inspection.
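The rigid-alignment stage of a feature-based pipeline like this can be sketched with the classic least-squares fit of a rotation and translation to matched point pairs (the Kabsch/Procrustes solution); the non-rigid refinement applied afterwards is beyond this sketch, and the function name is illustrative.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.

    src, dst: (N, 2) arrays of matched feature coordinates.
    """
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)       # centroids
    H = (src - cs).T @ (dst - cd)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 30-degree rotation plus shift from 4 matched feature points
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
moved = pts @ R_true.T + np.array([5.0, -2.0])
R, t = fit_rigid(pts, moved)
```

In practice the matches come from a feature detector with outlier rejection (e.g., RANSAC), and the recovered rigid transform then initializes the non-rigid deformation.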
13. Nilse L, Avci D, Heisterkamp P, Serang O, Lemberg MK, Schilling O. Yeast membrane proteomics using leucine metabolic labelling: Bioinformatic data processing and exemplary application to the ER-intramembrane protease Ypf1. Biochim Biophys Acta Proteins Proteom 2016;1864:1363-1371. DOI: 10.1016/j.bbapap.2016.07.002.
Abstract
We describe in detail the usage of leucine metabolic labelling in yeast in order to monitor quantitative proteome alterations, e.g. upon removal of a protease. Since laboratory yeast strains are typically leucine auxotroph, metabolic labelling with trideuterated leucine (d3-leucine) is a straightforward, cost-effective, and ubiquitously applicable strategy for quantitative proteomic studies, similar to the widely used arginine/lysine metabolic labelling method for mammalian cells. We showcase the usage of advanced peptide quantification using the FeatureFinderMultiplex algorithm (part of the OpenMS software package) for robust and reliable quantification. Furthermore, we present an OpenMS bioinformatics data analysis workflow that combines accurate quantification with high proteome coverage. In order to enable visualization, peptide-mapping, and sharing of quantitative proteomic data, especially for membrane-spanning and cell-surface proteins, we further developed the web-application Proteator (http://proteator.appspot.com). Due to its simplicity and robustness, we expect metabolic leucine labelling in yeast to be of great interest to the research community. As an exemplary application, we show the identification of the copper transporter Ctr1 as a putative substrate of the ER-intramembrane protease Ypf1 by yeast membrane proteomics using d3-leucine isotopic labelling.
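The quantification principle behind d3-leucine labelling reduces to simple arithmetic: each leucine contributes a fixed mass shift of roughly 3.0188 Da (three ²H-for-¹H substitutions), so a peptide's heavy and light partners are separated by that shift times the leucine count. The helper below is an illustrative sketch of this mass bookkeeping, not part of the OpenMS workflow; the peptide and mass are invented.

```python
# Mass shift per trideuterated (d3) leucine: 3 x (m(2H) - m(1H))
M_2H, M_1H = 2.014102, 1.007825   # atomic masses in Da
D3_SHIFT = 3 * (M_2H - M_1H)      # ~3.0188 Da per leucine residue

def heavy_mass(light_mass, sequence):
    """Monoisotopic mass of the d3-leucine-labelled partner of a peptide."""
    n_leu = sequence.count("L")   # one-letter code: L = leucine
    return light_mass + n_leu * D3_SHIFT

# A hypothetical peptide with two leucines shifts by ~6.04 Da
m = heavy_mass(1000.0, "ALKLGVR")
```

Quantification tools such as FeatureFinderMultiplex search MS1 data for peak pairs at exactly these predicted spacings and report their intensity ratio.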
14. Kaddi CD, Bennett RV, Paine MRL, Banks MD, Weber AL, Fernández FM, Wang MD. DetectTLC: Automated Reaction Mixture Screening Utilizing Quantitative Mass Spectrometry Image Features. J Am Soc Mass Spectrom 2016;27:359-365. PMID: 26508443. PMCID: PMC5003040. DOI: 10.1007/s13361-015-1293-9.
Abstract
Full characterization of complex reaction mixtures is necessary to understand mechanisms, optimize yields, and elucidate secondary reaction pathways. Molecular-level information for species in such mixtures can be readily obtained by coupling mass spectrometry imaging (MSI) with thin layer chromatography (TLC) separations. User-guided investigation of imaging data for mixture components with known m/z values is generally straightforward; however, spot detection for unknowns is highly tedious, and limits the applicability of MSI in conjunction with TLC. To accelerate imaging data mining, we developed DetectTLC, an approach that automatically identifies m/z values exhibiting TLC spot-like regions in MS molecular images. Furthermore, DetectTLC can also spatially match m/z values for spots acquired during alternating high and low collision-energy scans, pairing product ions with precursors to enhance structural identification. As an example, DetectTLC is applied to the identification and structural confirmation of unknown, yet significant, products of abiotic pyrazinone and aminopyrazine nucleoside analog synthesis.
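The spot-detection idea can be sketched as a two-step rule: threshold an m/z-selected ion image, then keep connected pixel regions large enough to be TLC-spot-like. The pure-Python connected-components pass below is an illustrative stand-in for the paper's detector; the threshold and size limit are assumptions.

```python
import numpy as np

def find_spots(image, threshold, min_pixels=3):
    """Label 4-connected above-threshold regions; return their pixel counts."""
    mask = np.asarray(image) > threshold
    seen = np.zeros_like(mask, dtype=bool)
    spots = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, size = [(i, j)], 0          # flood-fill one region
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_pixels:
                    spots.append(size)
    return spots

# Synthetic ion image: two spots (4 and 6 pixels) plus one isolated noise pixel
img = np.zeros((8, 8))
img[1:3, 1:3] = 10      # 4-pixel spot
img[5:7, 4:7] = 10      # 6-pixel spot
img[0, 7] = 10          # noise, removed by the min_pixels filter
sizes = sorted(find_spots(img, threshold=5))
```

A quantitative detector would additionally score each region's shape and intensity profile against a spot model before reporting the m/z value as spot-like.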
15. Brehler M, Görres J, Franke J, Barth K, Vetter SY, Grützner PA, Meinzer HP, Wolf I, Nabers D. Intra-operative adjustment of standard planes in C-arm CT image data. Int J Comput Assist Radiol Surg 2015;11:495-504. PMID: 26316065. DOI: 10.1007/s11548-015-1281-3.
Abstract
PURPOSE With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm necessitates a time-consuming manual adjustment. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. METHODS We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance with two registration approaches, two resolutions of C-arm images, and two methods for metal artifact reduction. RESULTS For feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the device's standard setting). Our comparison of two different artifact reduction methods and the complete removal of metal from the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. CONCLUSIONS By introducing our fast algorithmic processing pipeline, we have taken the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.
16. Ruiz A, Ujaldon M, Cooper L, Huang K. Non-rigid Registration for Large Sets of Microscopic Images on Graphics Processors. J Signal Process Syst 2009;55:229-250. PMID: 25328635. PMCID: PMC4198069. DOI: 10.1007/s11265-008-0208-4.
Abstract
Microscopic imaging is an important tool for characterizing tissue morphology and pathology. 3D reconstruction and visualization of large sample tissue structure requires registration of large sets of high-resolution images. However, the scale of this problem presents a challenge for automatic registration methods. In this paper we present a novel method for efficient automatic registration using graphics processing units (GPUs) and parallel programming. Comparing a C++ CPU implementation with Compute Unified Device Architecture (CUDA) libraries and pthreads running on a GPU, we achieve a speed-up factor of up to 4.11× with a single GPU and 6.68× with a GPU pair. We present execution times for a benchmark composed of two sets of large-scale images: mouse placenta (16K × 16K pixels) and breast cancer tumors (23K × 62K pixels). The C++ implementation takes more than 12 hours to register a typical sample composed of 500 consecutive slides; this was reduced to less than 2 hours using two GPUs, with very promising scalability for extending those gains on a large number of GPUs in a distributed system.
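The reported gains can be checked with back-of-the-envelope arithmetic: the quoted reduction from over 12 hours to under 2 hours on two GPUs is consistent with the stated 6.68× dual-GPU speed-up. A tiny sketch, assuming the quoted times:

```python
def speedup(t_baseline_hours, t_accelerated_hours):
    """Speed-up factor of an accelerated run over a baseline run."""
    return t_baseline_hours / t_accelerated_hours

# Quoted wall-clock times: >12 h on the CPU, <2 h on two GPUs
s = speedup(12.0, 2.0)
assert s == 6.0                      # consistent with the reported 6.68x

# Parallel efficiency of the GPU pair relative to one GPU at 4.11x
efficiency = 6.68 / (2 * 4.11)       # ~0.81: the second GPU is well utilized
```

An efficiency around 0.8 suggests the two-GPU partitioning adds modest communication overhead, matching the paper's optimism about scaling to more GPUs.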