1. X-ray source motion blur modeling and deblurring with generative diffusion for digital breast tomosynthesis. Phys Med Biol 2024; 69:115003. PMID: 38640913; PMCID: PMC11103667; DOI: 10.1088/1361-6560/ad40f8. Received 12/30/2023; Revised 03/27/2024; Accepted 04/19/2024.
Abstract
Objective. Digital breast tomosynthesis (DBT) has significantly improved the diagnosis of breast cancer due to its high sensitivity and specificity in detecting breast lesions compared to two-dimensional mammography. However, one of the primary challenges in DBT is the image blur resulting from x-ray source motion, particularly in DBT systems with a source in continuous-motion mode. This motion-induced blur can degrade the spatial resolution of DBT images, potentially affecting the visibility of subtle lesions such as microcalcifications. Approach. We addressed this issue by deriving an analytical in-plane source blur kernel for DBT images based on imaging geometry and proposing a post-processing image deblurring method with a generative diffusion model as an image prior. Main results. We showed that the source blur could be approximated by a shift-invariant kernel over the DBT slice at a given height above the detector, and we validated the accuracy of our blur kernel modeling through simulation. We also demonstrated the ability of the diffusion model to generate realistic DBT images. The proposed deblurring method successfully enhanced spatial resolution when applied to DBT images reconstructed with detector blur and correlated noise modeling. Significance. Our study demonstrated the advantages of modeling imaging system components such as source motion blur for improving DBT image quality.
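The diffusion-prior deblurring itself is beyond a snippet, but the abstract's core premise — an in-plane blur well approximated by a shift-invariant kernel, which can then be inverted in the Fourier domain — can be sketched with a plain Wiener filter. All parameters below (kernel size, sigma, noise-to-signal ratio) are illustrative stand-ins, not the paper's values:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Separable 2-D Gaussian blur kernel, normalized to unit sum.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def embed(kernel, shape):
    # Zero-pad the kernel to the image size with its peak wrapped to (0, 0),
    # so FFT multiplication implements circular convolution.
    out = np.zeros(shape)
    r, c = kernel.shape
    out[:r, :c] = kernel
    return np.roll(out, (-(r // 2), -(c // 2)), axis=(0, 1))

def wiener_deblur(blurred, kernel, nsr=1e-6):
    # Wiener deconvolution: X = conj(H) * Y / (|H|^2 + NSR).
    H = np.fft.fft2(embed(kernel, blurred.shape))
    Y = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H)**2 + nsr)))

# Demo: make a smooth synthetic "slice", blur it with the same kernel, deblur.
rng = np.random.default_rng(0)
k = gaussian_kernel(9, 1.5)
H = np.fft.fft2(embed(k, (64, 64)))
img = np.real(np.fft.ifft2(np.fft.fft2(rng.random((64, 64))) * H))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
deblurred = wiener_deblur(blurred, k)
```

The small `nsr` constant plays the role the learned prior plays in the paper: it regularizes frequencies where the blur transfer function is nearly zero.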
2. EXPRESS: Modeling the Impact of Single vs Dual Presentation on Visual Discrimination Across Resolutions. Q J Exp Psychol (Hove) 2024:17470218241255670. PMID: 38714527; DOI: 10.1177/17470218241255670.
Abstract
Visual categorisation relies on our ability to extract useful diagnostic information from complex stimuli. To do this, we can utilise both the 'high-level' and 'low-level' information in a stimulus; however, the extent to which changes in these properties impact the decision-making process is less clear. We manipulated participants' access to high-level category features via gradated reductions in image resolution, while exploring the impact of access to additional category features through dual stimulus presentation compared with single stimulus presentation. Results showed that while increasing image resolution consistently resulted in better choice performance, no benefit was found for dual presentation over single presentation, despite responses for dual presentation being slower than for single presentation. Applying the diffusion decision model revealed increases in drift rate as a function of resolution, but no change in drift rate for single versus dual presentation. The increase in response time for dual presentation was instead accounted for by an increase in response caution for dual presentations. These findings suggest that while increasing access to high-level features (via increased resolution) can improve participants' categorisation performance, increasing access to both high- and low-level features (via an additional stimulus) does not.
3. Super resolution-based methodology for self-supervised segmentation of microscopy images. Front Microbiol 2024; 15:1255850. PMID: 38533330; PMCID: PMC10963421; DOI: 10.3389/fmicb.2024.1255850. Received 07/09/2023; Accepted 02/15/2024.
Abstract
Data-driven Artificial Intelligence (AI)/Machine Learning (ML) image analysis approaches have gained a lot of momentum in analyzing microscopy images in bioengineering, biotechnology, and medicine. The success of these approaches crucially relies on the availability of high-quality microscopy images, which is often a challenge due to the diverse experimental conditions and modes under which these images are obtained. In this study, we propose the use of recent ML-based image super-resolution (SR) techniques to improve the quality of microscopy images, incorporate them into multiple ML-based image analysis tasks, and present a comprehensive study investigating the impact of SR techniques on the segmentation of microscopy images. The impacts of four Generative Adversarial Network (GAN)- and transformer-based SR techniques on microscopy image quality are measured using three well-established quality metrics. These SR techniques are incorporated into multiple deep network pipelines using supervised, contrastive, and non-contrastive self-supervised methods to semantically segment microscopy images from multiple datasets. Our results show that the image quality of microscopy images has a direct influence on ML model performance, and that both supervised and self-supervised network pipelines using SR images perform better by 2%-6% compared to baselines not using SR. Based on our experiments, we also establish that the image quality improvement threshold range [20-64] for the complemented Perception-based Image Quality Evaluator (PIQE) metric can be used as a pre-condition by domain experts to incorporate SR techniques to significantly improve segmentation performance. A plug-and-play software platform developed to integrate SR techniques with various deep networks using supervised and self-supervised learning methods is also presented.
4. PSF-Radon transform algorithm: Measurement of the point-spread function from the Radon transform of the line-spread function. Microsc Res Tech 2024. PMID: 38419356; DOI: 10.1002/jemt.24526. Received 11/28/2023; Revised 01/06/2024; Accepted 02/10/2024.
Abstract
In this article, we present a new method called the point spread function (PSF)-Radon transform algorithm. The algorithm consists of recovering the instrument PSF from the Radon transform (along the line direction axis) of the line spread function (i.e., the image of a line). We present the method and test it with synthetic images as well as real images from a macro lens camera and microscopy. A stand-alone program, along with a tutorial, is available for any interested user in Martinez (PSF-Radon transform algorithm, standalone program). RESEARCH HIGHLIGHTS: Determining the instrument PSF is a key issue, and precise PSF determination is mandatory if image improvement is performed numerically by deconvolution. The method requires much less exposure time to achieve the same performance as a measurement of the PSF from a very small bead, and it does not require fitting the PSF with an analytical function to overcome noise uncertainties.
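The identity underlying the method — the image of a line is, in the direction perpendicular to the line, the projection (one Radon view) of the PSF — can be checked numerically. For an isotropic Gaussian PSF, summing along one axis yields a 1-D Gaussian LSF with the same sigma; the grid and sigma below are illustrative, not taken from the paper:

```python
import numpy as np

sigma = 4.0
ax = np.arange(-50, 51)                     # 101-point grid, unit spacing
xx, yy = np.meshgrid(ax, ax)

# Isotropic, unit-mass 2-D Gaussian PSF.
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# Projecting the PSF along one axis is its Radon transform for a line
# oriented along that axis, i.e. the line-spread function (LSF).
lsf = psf.sum(axis=0)

# For an isotropic Gaussian PSF, the LSF is a 1-D Gaussian with the same sigma.
lsf_analytic = np.exp(-ax**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
```

The paper's algorithm runs this relation in reverse — recovering the 2-D PSF from measured line images — which requires an inversion step not sketched here.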
5. Effect of image resolution on automated classification of chest X-rays. J Med Imaging (Bellingham) 2023; 10:044503. PMID: 37547812; PMCID: PMC10403240; DOI: 10.1117/1.jmi.10.4.044503. Received 03/07/2023; Revised 07/09/2023; Accepted 07/21/2023.
Abstract
Purpose Deep learning (DL) models have received much attention lately for their ability to achieve expert-level performance on the accurate automated analysis of chest X-rays (CXRs). Recently available public CXR datasets include high-resolution images, but state-of-the-art models are trained on reduced-size images due to limitations on graphics processing unit memory and training time. As computing hardware continues to advance, it has become feasible to train deep convolutional neural networks on high-resolution images without sacrificing detail by downscaling. This study examines the effect of increased resolution on CXR classification performance. Approach We used the publicly available MIMIC-CXR-JPG dataset, comprising 377,110 high-resolution CXR images, for this study. We applied image downscaling from native resolution to 2048 × 2048, 1024 × 1024, 512 × 512, and 256 × 256 pixels, and then used the DenseNet121 and EfficientNet-B4 DL models to evaluate clinical task performance at these four downscaled image resolutions. Results We find that while some clinical findings are more reliably labeled using high resolutions, many other findings are actually labeled better using downscaled inputs. We qualitatively verify that tasks requiring a large receptive field are better suited to downscaled low-resolution input images by inspecting effective receptive fields and class activation maps of trained models. Finally, we show that stacking an ensemble across resolutions outperforms each individual learner at all input resolutions while providing interpretable scale weights, indicating that diverse information is extracted across resolutions. Conclusions This study suggests that instead of focusing solely on the finest image resolutions, multi-scale features should be emphasized for information extraction from high-resolution CXRs.
6. Multiplex immunofluorescence staining of coverslip-mounted paraffin-embedded tissue sections. APMIS 2023. PMID: 37211896; DOI: 10.1111/apm.13329. Received 04/17/2023; Accepted 04/27/2023.
Abstract
Animal and human tissues are used extensively in physiological and pathophysiological research. Due to both ethical considerations and low availability, it is essential to maximize the use of these tissues. Therefore, the aim was to develop a new method allowing multiplex immunofluorescence (IF) staining of kidney sections in order to reuse the same tissue section multiple times. Paraffin-embedded kidney sections were placed onto coated coverslips, and multiplex IF staining was performed. Five rounds of staining were performed, where each round consisted of indirect antibody labelling, imaging on a widefield epifluorescence microscope, removal of the antibodies using a stripping buffer, and then re-staining. In the final round, the tissue was stained with hematoxylin/eosin. Using this method, tubular segments in the nephron, blood vessels, and interstitial cells were labelled. Furthermore, by placing the tissue on coverslips, confocal-like resolution was obtained using a conventional widefield epifluorescence microscope and a 60x oil objective. Thus, using standard reagents and equipment, paraffin-embedded tissue was used for multiplex IF staining with increased Z-resolution. In summary, this method offers time-saving multiplex IF staining and allows for the retrieval of both quantitative and spatial expression information for multiple proteins, and subsequently for an assessment of tissue morphology. Due to the simplicity and integrated effectiveness of this multiplex IF protocol, it holds the potential to supplement standard IF staining protocols and maximize the use of tissue.
7. Linearized Analysis of Noise and Resolution for DL-Based Image Generation. IEEE Trans Med Imaging 2023; 42:647-660. PMID: 36227827; PMCID: PMC10132822; DOI: 10.1109/tmi.2022.3214475.
Abstract
Deep-learning (DL)-based CT image generation methods are often evaluated using RMSE and SSIM. By contrast, conventional model-based image reconstruction (MBIR) methods are often evaluated using image properties such as resolution, noise, and bias. Calculating such image properties requires time-consuming Monte Carlo (MC) simulations. For MBIR, linearized analysis using a first-order Taylor expansion has been developed to characterize noise and resolution without MC simulations. This inspired us to investigate whether linearization can be applied to DL networks to enable efficient characterization of resolution and noise. We used FBPConvNet as an example DL network and performed extensive numerical evaluations, including both computer simulations and real CT data. Our results showed that network linearization works well under normal exposure settings; for such applications, linearization can characterize image noise and resolution without running MC simulations. With this work we provide the computational tools to implement network linearization. The efficiency and ease of implementation of network linearization can hopefully popularize physics-related image quality measures for DL applications. Our methodology is general: it allows flexible compositions of DL nonlinear modules and linear operators such as filtered backprojection (FBP). For the latter, we develop a generic method for computing the covariance images that are needed for network linearization.
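The idea of first-order linearization can be illustrated on a toy nonlinear map standing in for a network: estimate the Jacobian J at the operating point, propagate input covariance as J Σ Jᵀ, and compare against Monte Carlo. Everything below (the tanh "network", dimensions, noise level) is a made-up miniature, not FBPConvNet:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.5

def net(x):
    # Toy stand-in for a nonlinear image-generation network.
    return np.tanh(x @ W)

x0 = rng.normal(size=4)

# Finite-difference Jacobian J of net at the operating point x0.
eps = 1e-5
J = np.stack([(net(x0 + eps * e) - net(x0 - eps * e)) / (2 * eps)
              for e in np.eye(4)], axis=1)          # shape (3, 4)

# Linearized output covariance for white input noise Sigma = s^2 * I.
s = 0.01
cov_lin = (s**2) * J @ J.T

# Monte Carlo reference: sample noisy inputs and measure output covariance.
samples = np.array([net(x0 + s * rng.normal(size=4)) for _ in range(20000)])
cov_mc = np.cov(samples, rowvar=False)
```

For small input noise the two covariance estimates agree closely, which is the regime ("normal exposure settings") where the paper reports linearization working well.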
8. Assessing the Impact of Image Resolution on Deep Learning for TB Lesion Segmentation on Frontal Chest X-rays. Diagnostics (Basel) 2023; 13:diagnostics13040747. PMID: 36832235; PMCID: PMC9955202; DOI: 10.3390/diagnostics13040747. Received 01/27/2023; Revised 02/10/2023; Accepted 02/15/2023.
Abstract
Deep learning (DL) models are state-of-the-art in segmenting anatomical and disease regions of interest (ROIs) in medical images. In particular, a large number of DL-based techniques have been reported using chest X-rays (CXRs). However, these models are reportedly trained on reduced image resolutions owing to limited computational resources. The literature is sparse in discussing the optimal image resolution for training these models to segment tuberculosis (TB)-consistent lesions in CXRs. In this study, we investigated performance variations of an Inception-V3 UNet model across image resolutions, with and without lung ROI cropping and aspect ratio adjustments, and identified the optimal image resolution through extensive empirical evaluations to improve TB-consistent lesion segmentation performance. We used the Shenzhen CXR dataset, which includes 326 normal patients and 336 TB patients. We proposed a combinatorial approach consisting of storing model snapshots, optimizing the segmentation threshold, applying test-time augmentation (TTA), and averaging the snapshot predictions to further improve performance at the optimal resolution. Our experimental results demonstrate that higher image resolutions are not always necessary; however, identifying the optimal image resolution is critical to achieving superior performance.
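The TTA-and-average step generalizes beyond any particular network: augment the input, predict on each augmented copy, invert the augmentation on each predicted map, and average before thresholding. The `model` below is a trivial stand-in, not the paper's Inception-V3 UNet, and the flip-only augmentation set is illustrative:

```python
import numpy as np

def model(img):
    # Stand-in "segmenter": a pixelwise soft threshold on intensity.
    return 1 / (1 + np.exp(-(img - 0.5) * 10))

def tta_predict(img):
    # Predict on the identity and horizontally flipped copies,
    # undo each augmentation on the prediction, then average.
    preds = [model(img),
             np.fliplr(model(np.fliplr(img)))]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
img = rng.random((8, 8))
prob = tta_predict(img)
mask = prob > 0.5          # segmentation threshold (itself tunable, per the paper)
```

Snapshot averaging works the same way: replace the augmentation list with predictions from several saved model checkpoints.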
9. At the molecular resolution with MINFLUX? Philos Trans A Math Phys Eng Sci 2022; 380:20200145. PMID: 35152756; PMCID: PMC9653251; DOI: 10.1098/rsta.2020.0145.
Abstract
MINFLUX is purported to be the next revolutionary fluorescence microscopy technique, claiming a spatial resolution in the range of 1-3 nm in fixed and living cells. Though the claim of molecular resolution is attractive, I am concerned whether true 1 nm resolution has been attained. Here, I compare its performance with other super-resolution methods, focusing particularly on spatial resolution claims, subjective filtering of localizations, detection versus labelling efficiency, and the possible limitations when imaging biological samples containing densely labelled structures. I hope the analysis and evaluation parameters presented here are useful not only for future research directions in single-molecule techniques but also for microscope users, developers, and core facility managers when deciding on an investment in the next 'state-of-the-art' instrument. This article is part of the Theo Murphy meeting issue 'Super-resolution structured illumination microscopy (part 2)'.
10. Spatial Image Resolution Assessment by Fourier Analysis (SIRAF). Microsc Microanal 2022; 28:1-9. PMID: 35236536; DOI: 10.1017/s1431927622000228.
Abstract
Determining spatial resolution from images is crucial when optimizing focus, determining the smallest resolvable object, and assessing size measurement uncertainties. However, no standard algorithm exists to measure resolution from electron microscopy (EM) images, though several have been proposed, most of which require user decisions. We present the Spatial Image Resolution Assessment by Fourier analysis (SIRAF) algorithm, which uses fast Fourier transform analysis to estimate resolution directly from a single image without user input. The method is derived from the underlying assumption that objects display intensity transitions resembling a step function blurred by a Gaussian point spread function. This hypothesis is tested and verified on simulated EM images with known resolution. To identify potential pitfalls, the algorithm is also tested on simulated images with a variety of settings and on real SEM images acquired at different magnification and defocus settings. Finally, the versatility of the method is investigated by assessing resolution in images from several microscopy techniques. We conclude that the algorithm can assess resolution from a large selection of image types, thereby providing a measure of this fundamental image parameter. It may also improve autofocus methods and guide the optimization of magnification settings when balancing spatial resolution and field of view.
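The Fourier-domain premise can be illustrated in 1-D: content blurred by a Gaussian of width σ carries a Gaussian envelope on its power spectrum, so the expected log power of blurred white noise falls linearly in f² with slope −4π²σ², and a line fit recovers σ. This is a simplified sketch of the principle, not the published SIRAF algorithm, and the signal length, σ, and fit band are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_true, trials = 4096, 3.0, 200
f = np.fft.rfftfreq(n)                              # cycles/sample
G = np.exp(-2 * np.pi**2 * sigma_true**2 * f**2)    # FT of the Gaussian kernel

# Average periodograms of many blurred white-noise realizations.
power = np.zeros_like(f)
for _ in range(trials):
    X = np.fft.rfft(rng.normal(size=n)) * G         # blur = multiply by G
    power += np.abs(X)**2
power /= trials

# log E|X(f)|^2 = const - 4 pi^2 sigma^2 f^2  ->  linear fit in f^2.
band = (f > 0) & (f < 0.12)       # skip DC and the underflowed high-f tail
slope, _ = np.polyfit(f[band]**2, np.log(power[band]), 1)
sigma_est = np.sqrt(-slope / (4 * np.pi**2))
```

A resolution estimate then follows by converting the recovered σ (in pixels) into a physical length using the pixel size.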
11. Impact of Image Resolution on Deep Learning Performance in Endoscopy Image Classification: An Experimental Study Using a Large Dataset of Endoscopic Images. Diagnostics (Basel) 2021; 11:diagnostics11122183. PMID: 34943421; PMCID: PMC8700246; DOI: 10.3390/diagnostics11122183. Received 10/19/2021; Revised 11/18/2021; Accepted 11/20/2021.
Abstract
Recent trials have evaluated the efficacy of deep convolutional neural network (CNN)-based AI systems to improve lesion detection and characterization in endoscopy. Impressive results have been achieved, but many medical studies use a very small image resolution to save computing resources, at the cost of losing details. Today, no conventions relating resolution to performance exist, and monitoring the performance of various CNN architectures as a function of image resolution provides insights into how the subtleties of different lesions on endoscopy affect performance. This can help set standards for image or video characteristics for future CNN-based models in gastrointestinal (GI) endoscopy. This study examines the performance of CNNs on the HyperKvasir dataset, consisting of 10,662 images from 23 different findings. We evaluate two CNN models for endoscopic image classification under quality distortions, with image resolutions ranging from 32 × 32 to 512 × 512 pixels. Performance is evaluated using two-fold cross-validation with F1-score, maximum Matthews correlation coefficient (MCC), precision, and sensitivity as metrics. Increased performance was observed with higher image resolution for all findings in the dataset; the best MCC for classification over the entire dataset, including all subclasses, was achieved at a resolution of 512 × 512 pixels. The highest performance was observed with an MCC value of 0.9002 when the models were trained and tested on the highest resolution. Different resolutions and their effect on CNNs are explored. We show that image resolution has a clear influence on performance, which calls for standards in the field in the future.
12. A Mosaic Method for Side-Scan Sonar Strip Images Based on Curvelet Transform and Resolution Constraints. Sensors (Basel) 2021; 21:s21186044. PMID: 34577250; PMCID: PMC8471239; DOI: 10.3390/s21186044. Received 07/11/2021; Revised 09/01/2021; Accepted 09/08/2021.
Abstract
Due to the complex marine environment, side-scan sonar signals are unstable, resulting in random non-rigid distortion in side-scan sonar strip images. To reduce the influence of resolution differences in common areas on strip image mosaicking, we proposed a mosaic method for side-scan sonar strip images based on the curvelet transform and resolution constraints. First, image registration was carried out to eliminate dislocation and distortion of the strip images. Then, the resolution vector of the common area in the two strip images was calculated, and a resolution model was created. A curvelet transform was then performed on the images; the resolution fusion rules were used for the coarse-layer coefficients, and maximum-coefficient integration was applied to the detail and fine layers to calculate the fusion coefficients. Last, an inverse curvelet transform was carried out on the fusion coefficients to obtain images in the fusion area. The fused images from the multiple areas were then combined with the registered images to obtain the final image. The experimental results showed that the proposed method had better mosaicking performance than several conventional fusion algorithms.
13. Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias. Perception 2021; 50:328-342. PMID: 33709837; DOI: 10.1177/03010066211000270.
Abstract
We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants' expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced the opposite direction of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rating expression intensity, and qualitatively different influence on face-viewing gaze allocation (decreased gaze at eyes but increased gaze at mouth vs. stronger central fixation bias). These novel findings suggest that in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions which are subject to viewing condition-dependent interpretation bias.
14. Rethinking resolution estimation in fluorescence microscopy: from theoretical resolution criteria to super-resolution microscopy. Sci China Life Sci 2020; 63:1776-1785. PMID: 33351176; DOI: 10.1007/s11427-020-1785-4. Received 09/25/2020; Accepted 10/20/2020.
Abstract
Resolution is undoubtedly the most important parameter in optical microscopy, providing an estimate of the maximum resolving power of a given optical microscope. For centuries, the resolution of an optical microscope has generally been considered to be limited only by the numerical aperture of the optical system and the wavelength of light. However, since the invention and popularization of various advanced fluorescence microscopy techniques, especially super-resolution fluorescence microscopy, many new methods have been proposed for estimating resolution, leading to confusion for researchers who need to quantify the resolution of their fluorescence microscopes. In this paper, we first summarize the early concepts and criteria for predicting the resolution limit of an ideal optical system. Then, we discuss some important factors that deteriorate the resolution of a given fluorescence microscope. Finally, we provide methods and examples for measuring the resolution of a fluorescence microscope from captured fluorescence images. This paper aims to answer, as best as possible, the theoretical and practical issues regarding resolution estimation in fluorescence microscopy.
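The early criteria the review summarizes are the standard textbook ones; for emission wavelength λ and numerical aperture NA they can be written compactly:

```latex
% Abbe diffraction limit (lateral resolution):
d_{\text{Abbe}} = \frac{\lambda}{2\,\text{NA}}

% Rayleigh criterion (distance to the first minimum of the Airy pattern):
d_{\text{Rayleigh}} = \frac{0.61\,\lambda}{\text{NA}}
```

For example, λ = 520 nm and NA = 1.4 give d_Abbe ≈ 186 nm and d_Rayleigh ≈ 227 nm — the familiar ~200 nm diffraction limit that the super-resolution methods discussed in the paper circumvent.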
15. High-Resolution Ultrasound Imaging Enabled by Random Interference and Joint Image Reconstruction. Sensors (Basel) 2020; 20:s20226434. PMID: 33187144; PMCID: PMC7698025; DOI: 10.3390/s20226434. Received 10/10/2020; Revised 11/02/2020; Accepted 11/04/2020.
Abstract
In ultrasound, wave interference is an undesirable effect that degrades the resolution of the images. We have recently shown that a wavefront of random interference can be used to reconstruct high-resolution ultrasound images. In this study, we further improve the resolution of interference-based ultrasound imaging by proposing a joint image reconstruction scheme. The proposed reconstruction scheme utilizes radio frequency (RF) signals from all elements of the sensor array in a joint optimization problem to directly reconstruct the final high-resolution image. By jointly processing array signals, we significantly improved the resolution of interference-based imaging. We compare the proposed joint reconstruction method with popular beamforming techniques and the previously proposed interference-based compound method. The simulation study suggests that, among the different reconstruction methods, the joint reconstruction method has the lowest mean-squared error (MSE), the best peak signal-to-noise ratio (PSNR), and the best signal-to-noise ratio (SNR). Similarly, the joint reconstruction method has an exceptional structural similarity index (SSIM) of 0.998. Experimental studies showed that the quality of images significantly improved when compared to other image reconstruction methods. Furthermore, we share our simulation codes as an open-source repository in support of reproducible research.
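Two of the scalar scores used in the comparison are quick to state in code; MSE and PSNR are defined below with illustrative values (SSIM is omitted, since it requires windowed local statistics):

```python
import numpy as np

def mse(ref, img):
    # Mean-squared error between a reference and a reconstruction.
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=1.0):
    # Peak signal-to-noise ratio in dB, relative to the signal peak.
    return 10 * np.log10(peak**2 / mse(ref, img))

# Toy example: a uniform error of 0.1 on a unit-peak image.
ref = np.zeros((4, 4))
img = ref + 0.1
```

Here MSE = 0.01, so PSNR = 10·log10(1/0.01) = 20 dB; higher PSNR means a reconstruction closer to the reference.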
16. Establishing Standardized Terminology for Digital Image Analysis: A Pilot Study. Radiol Technol 2020; 92:126-134. PMID: 33203769. Received 12/10/2019; Accepted 06/06/2020.
Abstract
PURPOSE To point out the need for standardized terminology for digital image analysis and to collect data by surveying radiologic technology professionals for a more comprehensive, national-breadth study. METHODS A mixed-method pilot study was conducted, in which a survey was emailed to 4 Joint Review Committee on Education in Radiologic Technology-accredited radiography programs in July and August 2019. Eight educators and 28 radiologic technologists responded, and acceptance was evaluated for 3 of the proposed terms: signal, signal value, and signal variance (later changed to signal differences). Quantitative results were analyzed in Microsoft Forms, and percentage acceptance rates were calculated. Respondents who did not accept the proposed terms were asked to provide their reasoning in open-ended responses, which were analyzed using manual coding and categorization. RESULTS The term signal received an 88% acceptance rate among educators and a 96% rate among radiographers. Signal value was accepted by 88% of educators and 79% of radiographers. The lowest acceptance rate was for the term signal variance (educators, 63%; radiographers, 79%). Open-ended responses were categorized into themes revealing respondent concerns about the use of signal value, which might result in forgetting about radiation dose (4 respondents), and how signal value relates to image receptor exposure and exposure indicator value (2 respondents). Concerns about signal variance involved contrast being easier to understand because it is visible (2 respondents), confusion with the usage of the proposed term (2 respondents), and preference for contrast because of its current use (2 respondents). DISCUSSION Recent history indicates confusion regarding which terms effectively describe the new image quality factors that dictate proper use of digital radiography. The proposed terms evaluated in this pilot study received a mean acceptance rate of 83.5%, suggesting understanding of terms related to digital image analysis among participating educators and radiographers. CONCLUSION The findings of this pilot study indicate a need to standardize terminology related to digital image quality factors. However, these preliminary results should be interpreted with caution because of the low response rate. Readers can participate in helping to establish a universal language for digital image analysis by scanning the quick response (QR) code or clicking the link at the end of the article and completing the survey.
17.
Abstract
In natural vision, noisy and distorted visual inputs often change our perceptual strategy in scene perception. However, it is unclear to what extent the affective meaning embedded in degraded natural scenes modulates our scene understanding and the associated eye movements. In this eye-tracking experiment, by presenting natural scene images of different categories and levels of emotional valence (high-positive, medium-positive, neutral/low-positive, medium-negative, and high-negative), we systematically investigated human participants' perceptual sensitivity (image valence categorization and arousal rating) and image-viewing gaze behaviour in response to changes in image resolution. Our analysis revealed that reducing image resolution led to decreased valence recognition and arousal rating, a decreased number of fixations in image viewing but increased individual fixation duration, and a stronger central fixation bias. Furthermore, these distortion effects were modulated by scene valence, with less deterioration impact on the valence categorization of negatively valenced scenes and on gaze behaviour when viewing highly emotionally charged (high-positive and high-negative) scenes. It seems that our visual system shows a valence-modulated susceptibility to image distortions in scene perception.
Collapse
|
18
|
Estimation of Ultrasound Echogenicity Map from B-Mode Images Using Convolutional Neural Network. SENSORS (BASEL, SWITZERLAND) 2020; 20:s20174931. [PMID: 32878199 PMCID: PMC7506733 DOI: 10.3390/s20174931] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 08/24/2020] [Accepted: 08/29/2020] [Indexed: 06/11/2023]
Abstract
In ultrasound B-mode imaging, speckle noise decreases the accuracy with which the tissue echogenicity of imaged targets can be estimated from the amplitude of the echo signals. In addition, since the granular size of the speckle pattern is affected by the point spread function (PSF) of the imaging system, the resolution of the B-mode image remains limited and the boundaries of tissue structures often become blurred. This study proposed a convolutional neural network (CNN) to remove speckle noise and improve spatial resolution so as to reconstruct an ultrasound tissue echogenicity map. The CNN model is trained on an in silico simulation dataset and tested with experimentally acquired images. Results indicate that the proposed CNN method can effectively eliminate the speckle noise in the background of B-mode images while retaining the contours and edges of the tissue structures. The contrast and the contrast-to-noise ratio of the reconstructed echogenicity map increased from 0.22/2.72 to 0.33/44.14, and the lateral and axial resolutions also improved from 5.9/2.4 to 2.9/2.0, respectively. Compared with other post-processing filtering methods, the proposed CNN method provides a better approximation to the original tissue echogenicity, completely removing speckle noise and improving image resolution while remaining capable of real-time implementation.
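The contrast and CNR figures above can be reproduced from region statistics; a minimal sketch, assuming one common pair of definitions (C = |mu_t - mu_b| / mu_b and CNR = |mu_t - mu_b| / sigma_b; the paper's exact definitions are not given here):

```python
import numpy as np

def contrast_and_cnr(target, background):
    """Contrast and contrast-to-noise ratio between two image regions.
    Definitions vary across the ultrasound literature; this uses
    C = |mu_t - mu_b| / mu_b and CNR = |mu_t - mu_b| / sigma_b."""
    mu_t, mu_b = target.mean(), background.mean()
    sigma_b = background.std()
    contrast = abs(mu_t - mu_b) / mu_b
    cnr = abs(mu_t - mu_b) / sigma_b
    return contrast, cnr

# Toy example: a bright inclusion against a weakly fluctuating background.
rng = np.random.default_rng(0)
bg = rng.normal(1.0, 0.05, size=(64, 64))
target = rng.normal(1.4, 0.05, size=(16, 16))
c, cnr = contrast_and_cnr(target, bg)
```

Note how suppressing the background fluctuation (smaller sigma_b) raises CNR much faster than contrast, which matches the large CNR jump reported above.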
Collapse
|
19
|
Gamma Radiation Imaging System via Variable and Time-Multiplexed Pinhole Arrays. SENSORS 2020; 20:s20113013. [PMID: 32466401 PMCID: PMC7313691 DOI: 10.3390/s20113013] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 05/20/2020] [Accepted: 05/21/2020] [Indexed: 11/16/2022]
Abstract
Biomedical planar imaging using gamma radiation is a very important screening tool for medical diagnostics. Since lens imaging is not available at gamma-ray energies, current methods use lead collimators or pinhole techniques to perform imaging. However, because the gamma radiation emitted from the patient's body is used ineffectively and the radioactive dose to the patient is limited, images suffer from poor signal-to-noise ratio (SNR) and long capture times. Furthermore, the resolution is tied to the pinhole diameter, so there is a tradeoff between SNR and resolution. Our objectives are to reduce the radioactive dose given to the patient and to preserve or improve SNR, resolution, and capture time while incorporating three-dimensional capabilities into existing gamma imaging systems. The proposed imaging system is based on super-resolved time-multiplexing methods using both variable and moving pinhole arrays. Simulations were performed in both MATLAB and GEANT4, and gamma single photon emission computed tomography (SPECT) experiments were conducted to support the theory and simulations. The proposed method is able to reduce the radioactive dose and image capture time and to improve SNR and resolution. The results and method enhance the gamma imaging capabilities of current systems while providing three-dimensional data on the object.
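The SNR-resolution tradeoff mentioned above follows from simple geometry: detected counts scale with the pinhole area, while geometric blur scales with its diameter. A toy model, with all constants purely illustrative and not from the paper:

```python
import math

def pinhole_tradeoff(d_mm, base_counts_per_mm2=1000.0):
    """Toy model of the pinhole SNR/resolution tradeoff: geometric efficiency
    (hence detected counts) grows with the pinhole area d^2, Poisson-limited
    SNR grows with sqrt(counts), while the geometric resolution degrades
    roughly linearly with d (unit magnification assumed)."""
    counts = base_counts_per_mm2 * d_mm ** 2
    snr = math.sqrt(counts)
    resolution_mm = d_mm  # geometric blur ~ aperture diameter
    return snr, resolution_mm

# Doubling the pinhole diameter doubles SNR but also doubles the blur:
s1, r1 = pinhole_tradeoff(1.0)
s2, r2 = pinhole_tradeoff(2.0)
```

Time-multiplexing over many pinholes, as proposed above, is one way to accumulate counts (SNR) without opening up any single aperture (resolution).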
Collapse
|
20
|
Evaluation of reconstruction algorithms for a stationary digital breast tomosynthesis system using a carbon nanotube X-ray source array. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:1157-1169. [PMID: 32925159 DOI: 10.3233/xst-200668] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Breast cancer is the most frequently diagnosed cancer in women worldwide. Digital breast tomosynthesis (DBT), which is based on limited-angle tomography, was developed to solve the tissue-overlap problems associated with traditional breast mammography. To address the problems caused by tube movement during data acquisition, stationary DBT (s-DBT) was developed, in which the X-ray source array remains stationary during the DBT scan. In this work, we evaluate four widely used and investigated DBT image reconstruction algorithms, including the commercial Feldkamp-Davis-Kress algorithm (FBP), the simultaneous iterative reconstruction technique (SIRT), the simultaneous algebraic reconstruction technique (SART), and the total variation regularized SART (SART-TV), on an s-DBT imaging system set up in our own laboratory, using a semi-elliptical digital phantom and a rubber breast phantom, to determine which of the four algorithms is best suited to s-DBT image reconstruction. Several quantitative indexes of image quality, including the peak signal-to-noise ratio (PSNR), the root mean square error (RMSE), and the structural similarity (SSIM), are used to identify the best algorithm for our imaging system. Image quality is further assessed via the contrast-to-noise ratio (CNR) and the artefact spread function (ASF). The experimental results show that the SART-TV algorithm gives reconstructed images with the highest PSNR and SSIM values and the lowest RMSE values in terms of image accuracy and similarity, along with the highest CNR values for the selected features and the best ASF curves in terms of image resolution in the horizontal and vertical directions. The SART-TV algorithm is therefore the best of the four for s-DBT image reconstruction in the specific imaging task of our study.
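Two of the quality indexes used above are straightforward to compute; a minimal numpy sketch of RMSE and PSNR (SSIM, CNR, and ASF are omitted for brevity, and the phantom here is a toy, not the study's):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a reconstructed slice."""
    return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    e = rmse(ref, test)
    return np.inf if e == 0 else 20 * np.log10(data_range / e)

# Toy phantom slice plus noise standing in for a reconstruction.
ref = np.zeros((32, 32))
ref[8:24, 8:24] = 1.0
noisy = ref + np.random.default_rng(1).normal(0, 0.05, ref.shape)
print(f"RMSE={rmse(ref, noisy):.3f}, PSNR={psnr(ref, noisy):.1f} dB")
```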
Collapse
|
21
|
Mastcam Image Resolution Enhancement with Application to Disparity Map Generation for Stereo Images with Different Resolutions. SENSORS 2019; 19:s19163526. [PMID: 31409022 PMCID: PMC6720598 DOI: 10.3390/s19163526] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 08/05/2019] [Accepted: 08/09/2019] [Indexed: 12/01/2022]
Abstract
In this paper, we introduce an in-depth application of high-resolution disparity map estimation using stereo images from the Mars Curiosity rover's Mastcams, which comprise two imagers with different resolutions; the left Mastcam has one third the resolution of the right. The left Mastcam image's resolution is first enhanced with three methods: bicubic interpolation, a pansharpening-based method, and a deep-learning super-resolution method. The enhanced left camera image and the right camera image are then used to estimate the disparity map, and the impact of the left camera image enhancement is examined. Comparative performance analyses showed that enhancing the left camera image yields more accurate disparity maps than using the original left Mastcam images for disparity map estimation. The deep-learning-based method provided the best performance of the three for both image enhancement and disparity map estimation accuracy. A high-resolution disparity map, the result of the left camera image enhancement, is anticipated to improve science products derived from Mastcam imagery, such as 3D scene reconstructions, depth maps, and anaglyph images.
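Disparity estimation of the kind described can be sketched with brute-force block matching; this is a minimal illustrative stand-in (the paper's actual disparity estimator is not specified here, and the toy images are our own):

```python
import numpy as np

def disparity_map(left, right, max_disp=8, win=5):
    """Brute-force block matching: for each pixel in the left image, find the
    horizontal shift into the right image minimising the sum of absolute
    differences over a small window."""
    h, w = left.shape
    pad = win // 2
    L = np.pad(left, pad, mode="edge")
    R = np.pad(right, pad, mode="edge")
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Toy pair: the right image is the left shifted horizontally by 2 pixels.
left = np.zeros((10, 20))
left[:, 10:14] = 1.0
right = np.roll(left, -2, axis=1)
disp = disparity_map(left, right)
```

A sharper (enhanced) left image gives the matcher more texture to lock onto, which is the mechanism behind the accuracy gains reported above.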
Collapse
|
22
|
Coherent Multi-Transducer Ultrasound Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2019; 66:1316-1330. [PMID: 31180847 PMCID: PMC7115943 DOI: 10.1109/tuffc.2019.2921103] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
This work extends the effective aperture size by coherently compounding the received radio frequency data from multiple transducers. As a result, it is possible to obtain an improved image with enhanced resolution, an extended field of view (FoV), and high acquisition frame rates. A framework is developed in which an ultrasound imaging system consisting of N synchronized matrix arrays, each with a partly shared FoV, takes turns transmitting plane waves (PWs). Only one transducer transmits at a time while all N transducers receive simultaneously. The subwavelength localization accuracy required to combine information from multiple transducers is achieved without the use of any external tracking device. The method developed in this study is based on analysing the backscattered echoes received by the same transducer from a targeted scatterer point in the medium insonated by the multiple ultrasound probes of the system. The current transducer locations, along with the speed of sound in the medium, are deduced by optimizing the cross correlation between these echoes. The method is demonstrated experimentally in 2-D for two linear arrays using point targets and anechoic lesion phantoms, and the first free-hand experiment is also shown. Results demonstrate that the coherent multi-transducer ultrasound imaging method has the potential to improve ultrasound image quality, improving resolution and target detectability. Compared with coherent PW compounding using a single probe, lateral resolution improved from 1.56 to 0.71 mm with the coherent multi-transducer imaging method without sacrificing acquisition frame rate (5350 Hz).
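The calibration step above optimises the cross correlation between echoes of the same scatterer recorded under insonation by different probes. The core cue, the inter-echo time delay, can be sketched as follows (the pulse shape and shift are illustrative):

```python
import numpy as np

def delay_samples(sig_a, sig_b):
    """Time delay (in samples) of sig_b relative to sig_a, taken from the peak
    of their full cross-correlation: the same cue the calibration optimises."""
    xc = np.correlate(sig_b, sig_a, mode="full")
    return np.argmax(xc) - (len(sig_a) - 1)

# Toy echoes: a short modulated pulse received 15 samples later on the
# second channel.
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 60) / 4.0) ** 2) * np.sin(0.8 * t)
echo_a = pulse
echo_b = np.roll(pulse, 15)
d = delay_samples(echo_a, echo_b)  # → 15
```

In the actual method, many such delays across scatterers and channels jointly constrain the transducer positions and the speed of sound.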
Collapse
|
23
|
Micrometer Scale Resolution Limit of a Fiber-Coupled Electro-Optic Probe. SENSORS 2019; 19:s19132874. [PMID: 31261698 PMCID: PMC6651240 DOI: 10.3390/s19132874] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 06/21/2019] [Accepted: 06/27/2019] [Indexed: 11/17/2022]
Abstract
We present the practical resolution limit of a fiber-coupled electro-optic probing system applied to fine electrical structures. The spatial resolution limit was experimentally evaluated on planar electrical transmission lines at sub-millimeter to micrometer scales. The electrical lines were fabricated to have various potential differences depending on their dimensions and geometry. The electric field between the lines was measured with an electro-optic probe miniaturized down to the scale of a bare optical fiber, so as to investigate the spatial limit of electrical signals with minimal invasiveness. The experimental results show that the technical resolution limit of a fiber-coupled probe can reasonably approach a fraction of the mode field diameter (~10 μm) of the fiber in use.
Collapse
|
24
|
Spatial Resolution and Imaging Encoding fMRI Settings for Optimal Cortical and Subcortical Motor Somatotopy in the Human Brain. Front Neurosci 2019; 13:571. [PMID: 31244595 PMCID: PMC6579882 DOI: 10.3389/fnins.2019.00571] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2018] [Accepted: 05/20/2019] [Indexed: 11/23/2022] Open
Abstract
There is much controversy about the optimal trade-off between blood-oxygen-level-dependent (BOLD) sensitivity and spatial precision in experiments on the brain's topological properties using functional magnetic resonance imaging (fMRI). The sparse empirical evidence and regional specificity of these interactions pose a practical burden for the choice of imaging protocol parameters. Here, we test in a motor somatotopy experiment the impact of fMRI spatial resolution on the differentiation between body part representations in the cortex and subcortical structures. Motor somatotopy patterns were obtained in a block-design paradigm with visually cued movements of the face and upper and lower limbs at 1.5, 2, and 3 mm spatial resolution. The degree of segregation of the body parts' spatial representations was estimated using a pattern component model. In cortical areas, we observed the same level of segregation between somatotopy maps across all three resolutions. In subcortical areas, the degree of effective similarity between spatial representations was significantly affected by image resolution: the 1.5 mm 3D EPI and 3 mm 2D EPI protocols led to higher segregation between motor representations than the 2 mm 3D EPI protocol. This finding could not be attributed to differential BOLD sensitivity or delineation of functional areas alone and suggests a crucial role of the image encoding scheme, i.e., 2D vs. 3D EPI. Our study contributes to the field by providing empirical evidence about the impact of acquisition protocols on the delineation of somatotopic areas in cortical and subcortical brain regions.
Collapse
|
25
|
LED-based indoor positioning system using novel optical pixelation technique. Healthc Technol Lett 2019; 6:76-81. [PMID: 31341632 PMCID: PMC6595537 DOI: 10.1049/htl.2018.5039] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2018] [Revised: 02/20/2019] [Accepted: 04/04/2019] [Indexed: 11/20/2022] Open
Abstract
At present, about 47 million people worldwide have Alzheimer's disease (AD), and because no treatment is currently available to cure AD, ongoing care is the primary intervention for people with AD (PWAD). The estimated cost of care for PWAD in 2016 alone was about $236 billion, which places a huge burden on relatives of PWAD. This work aims to reduce this burden by proposing an inexpensive indoor positioning system that can be used to monitor PWAD. For the positioning, freeform lenses are used to enable a novel optically pixeled LED luminaire (OPLL) that focuses beams from LEDs onto various parts of a room, thereby creating uniquely identifiable regions that improve positioning accuracy. Monte Carlo simulation with the designed OPLL in a room of dimensions 5 m × 5 m × 3 m is used to compute the positioning error, and theoretical analysis and experiments are used to validate the positioning time. Results show that with appropriate LED beam design, the OPLL achieves a positioning error of 0.735 m and a positioning time of 187 ms, which are 55.1% lower and 1.2 times faster, respectively, than existing multiple-LED estimation-model proximity systems.
Collapse
|
26
|
Superresolution method for a single wide-field image deconvolution by superposition of point sources. J Microsc 2019; 275:51-65. [PMID: 31062365 DOI: 10.1111/jmi.12802] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Accepted: 05/03/2019] [Indexed: 01/18/2023]
Abstract
In this work, we present a new algorithm for wide-field fluorescence microscopy deconvolution from a single acquisition without a sparsity prior, which allows retrieval of the target function with superresolution. The approach is simple: the measured data are fitted by the convolution of the point spread function with a superposition of virtual point sources (SUPPOSe) of equal intensity. The cloud of virtual point sources approximates the actual distribution of sources, which can be discrete or continuous; in this manner, only the positions of the sources need to be determined. An upper bound for the uncertainty in the position of the sources was derived, which provides a criterion to distinguish real features from artefacts and distortions. Two very different experimental situations were used for the test (an artificially synthesized image and fluorescence microscopy images), showing excellent reconstructions and agreement with the predicted uncertainties, and achieving up to a fivefold improvement in resolution for the microscope. The method also provides the optimum number of sources to be used for the fit. LAY DESCRIPTION: A new method is presented that allows the reconstruction of an image with superresolution from a single frame taken with a standard fluorescence microscope. An improvement in resolution by a factor of 3 to 5 is achieved, depending on the noise of the measurement and how precisely the instrument response function (point spread function) is measured. The complete mathematical description is presented, showing how to estimate the quality of the reconstruction. The method is based on approximating the actual intensity distribution of the object being measured by a superposition of point sources of equal intensity. The problem is converted from determining the intensity at each point to determining the positions of the virtual sources. The best fit is found using a genetic algorithm.
To validate the method, several results of different kinds are presented, including an artificially generated image, fluorescent beads, and labelled mitochondria. The artificial image provides prior knowledge of the actual system for comparison and validation. The beads were imaged with our highest numerical aperture objective to show the method's capabilities, and were also acquired with a low numerical aperture objective so that the reconstructed image could be compared with one acquired at high numerical aperture. The same strategy was followed with the biological sample to show the method working in real practical situations.
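The SUPPOSe forward model described above, an image expressed as a sum of equal-intensity point sources convolved with the PSF, can be sketched in 1-D (the Gaussian PSF and all parameter values here are illustrative assumptions; any measured PSF works):

```python
import numpy as np

def forward_model(positions, alpha, psf_sigma, grid):
    """SUPPOSe forward model: image = alpha * sum_k PSF(x - x_k).
    A Gaussian PSF is assumed for illustration; only the source positions
    are free parameters, since all sources share intensity alpha."""
    x = grid[:, None]                    # (Npix, 1)
    xk = np.asarray(positions)[None, :]  # (1, Nsources)
    psf = np.exp(-0.5 * ((x - xk) / psf_sigma) ** 2)
    return alpha * psf.sum(axis=1)

grid = np.linspace(0, 10, 101)
measured = forward_model([3.0, 3.2, 7.0], alpha=1.0, psf_sigma=0.8, grid=grid)

# A fit (the paper uses a genetic algorithm) would move trial positions to
# minimise the squared residual against `measured`:
trial = forward_model([3.1, 3.1, 7.0], alpha=1.0, psf_sigma=0.8, grid=grid)
residual = np.sum((measured - trial) ** 2)
```

Because two nearby sources at 3.0 and 3.2 are nearly indistinguishable from two coincident sources at 3.1 under a sigma = 0.8 PSF, the residual is tiny, which illustrates both why the method works and where its position-uncertainty bound comes from.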
Collapse
|
27
|
Investigation of transmission computed tomography (CT) image quality and x-ray dose achievable from an experimental dual-mode benchtop x-ray fluorescence CT and transmission CT system. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2019; 27:431-442. [PMID: 30909268 PMCID: PMC7027361 DOI: 10.3233/xst-180457] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
OBJECTIVE To investigate the image quality and x-ray dose associated with a transmission computed tomography (CT) component implemented within the same platform as an experimental benchtop x-ray fluorescence CT (XFCT) system for multimodal preclinical imaging applications. METHODS Cone-beam CT scans were performed using an experimental benchtop CT + XFCT system and a cylindrical 3D-printed polymethyl methacrylate phantom (3 cm in diameter, 7 cm in height) loaded with various concentrations (0.05-1 wt. %) of gold nanoparticles (GNPs). Two commercial CT quality assurance phantoms containing 3D line-pair (LP) targets and contrast targets were also scanned. X-ray beams of 40 and 62 kVp, both filtered by 0.08 mm Cu and 0.4 mm Al, were used with 17 ms of exposure time per projection at three current settings (2.5, 5, and 10 mA). The ordered-subset simultaneous algebraic reconstruction and total variation-minimization methods were used to reconstruct images. Sparse projections and short scans were considered to reduce the x-ray dose. The contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were calculated. RESULTS The lowest detectable concentration of GNPs (CNR > 5) and the highest spatial resolution (per MTF50%) were 0.10 wt. % and 9.5 LP/cm, respectively, based on the images reconstructed from 360 projections of the 40 kVp beam (or an x-ray dose of 3.44 cGy). The background noise for the image yielding the lowest GNP detection limit was 25 Hounsfield units. CONCLUSION The transmission CT component within the current experimental benchtop CT + XFCT system produced images deemed acceptable for multimodal (CT + XFCT) imaging purposes, with less than 4 cGy of x-ray dose.
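The resolution figure quoted above (9.5 LP/cm per MTF50%) is read off the measured MTF curve; a sketch of that lookup with a synthetic MTF (the Gaussian shape and its constants are illustrative, not from the paper):

```python
import numpy as np

def mtf50(freqs, mtf):
    """Spatial frequency at which the MTF drops to 50%, by linear interpolation.
    freqs in lp/cm, mtf normalised to 1 at zero frequency and monotonically
    decreasing."""
    # np.interp needs an increasing x-axis, so reverse the decreasing MTF.
    return float(np.interp(0.5, mtf[::-1], freqs[::-1]))

# Toy Gaussian-shaped MTF constructed to cross 50% near 9.5 lp/cm,
# mimicking the value reported above.
f = np.linspace(0, 20, 201)
m = np.exp(-(f / 11.41) ** 2)
print(round(mtf50(f, m), 1))  # → 9.5
```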
Collapse
|
28
|
The Fractal Nature of Planetary Landforms and Implications to Geologic Mapping. EARTH AND SPACE SCIENCE (HOBOKEN, N.J.) 2018; 5:211-220. [PMID: 30035188 PMCID: PMC6049887 DOI: 10.1002/2018ea000372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Revised: 03/13/2018] [Accepted: 04/02/2018] [Indexed: 06/08/2023]
Abstract
The primary product of planetary geologic and geomorphologic mapping is a group of lines and polygons that parameterize planetary surfaces and landforms. Many different research fields use those shapes to conduct their own analyses, and some of those analyses require measurement of a shape's perimeter or line length, sometimes relative to a surface area. There is a general lack of discussion in the relevant literature of the fact that the perimeters of many planetary landforms are not easily parameterized by a simple aggregation of lines or even curves; instead, they display complexity across a large range of scale lengths. In short, many planetary landforms are fractals. Because of their fractal nature, instead of morphometric properties converging on a single value, those properties change with the scale used to measure them. Derived properties can therefore change, in some cases by an order of magnitude or more, simply because the measuring length scale is altered, which can result in significantly different interpretations of the features. Conversely, rather than being only a problem, analysis of the fractal properties of some landforms has led to diagnostic criteria that other remote sensing data cannot easily provide. This paper outlines the basic issue of the fractal nature of planetary landforms, gives case studies where the effects become important, and recommends that geologic mappers consider characterizing the fractal dimension of their mapped units via a relatively simple, straightforward calculation.
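The "relatively simple, straightforward calculation" of fractal dimension is typically a box-counting estimate: count the boxes N(s) a curve occupies at each box size s and fit log N against log s. A minimal sketch (the box sizes and test curve are illustrative):

```python
import numpy as np

def box_count_dimension(points, scales):
    """Box-counting estimate of fractal dimension: count occupied boxes N(s)
    at each box size s, then fit log N = -D log s + c and return D."""
    counts = []
    for s in scales:
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# Points densely sampled along a straight segment should give D close to 1;
# a crater rim or coastline-like outline would give D between 1 and 2.
t = np.linspace(0, 1, 5000)
line = np.stack([t, 0.5 * t], axis=1)
D = box_count_dimension(line, scales=[0.02, 0.01, 0.005])
```

The scale dependence the paper warns about is visible directly in the intermediate counts: for a fractal outline, N(s) keeps growing faster than 1/s as s shrinks, so any single "perimeter" number silently encodes the chosen measuring scale.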
Collapse
|
29
|
Robust head CT image registration pipeline for craniosynostosis skull correction surgery. Healthc Technol Lett 2017; 4:174-178. [PMID: 29184660 PMCID: PMC5683203 DOI: 10.1049/htl.2017.0067] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 07/31/2017] [Indexed: 11/19/2022] Open
Abstract
Craniosynostosis is a congenital malformation of the infant skull typically treated via corrective surgery. To accurately quantify the extent of deformation and identify the optimal correction strategy, the patient-specific skull model extracted from a pre-surgical computed tomography (CT) image needs to be registered to an atlas of head CT images representative of normal subjects. Here, the authors present a robust multi-stage, multi-resolution registration pipeline to map a patient-specific CT image to the atlas space of normal CT images. The proposed pipeline first performs an initial optimisation at very low resolution to yield a good initial alignment, which is subsequently refined at high resolution. The authors demonstrate the robustness of the proposed method by evaluating its performance on 560 head CT images of 320 normal subjects and 240 craniosynostosis patients, showing success rates of 92.8% and 94.2%, respectively. The method achieved a mean surface-to-surface distance between the patient and template skull of <2.5 mm in the targeted skull region across both the normal subjects and patients.
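The coarse-to-fine strategy rests on an image pyramid; a minimal sketch of building one by 2x2 block averaging (the actual pipeline's pyramid construction and optimiser are not specified here):

```python
import numpy as np

def pyramid(image, levels=3):
    """Build a coarse-to-fine image pyramid by 2x2 block averaging; this is
    the kind of multi-resolution stack a registration pipeline optimises
    over, coarsest level first for rough alignment, finest last."""
    stack = [image]
    for _ in range(levels - 1):
        im = stack[-1]
        h, w = im.shape[0] // 2 * 2, im.shape[1] // 2 * 2
        im = im[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        stack.append(im)
    return stack[::-1]  # coarsest first

img = np.random.default_rng(2).random((64, 64))
pyr = pyramid(img, levels=3)
```

Optimising at the coarse level first smooths the cost landscape and cheaply removes large misalignments before the expensive full-resolution refinement.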
Collapse
|
30
|
Abstract
Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.
Collapse
|
31
|
Oversampling in the computed tomography measurements applied for bone structure studies as a method of spatial resolution improvement. Pol J Radiol 2012; 77:14-8. [PMID: 22844304 PMCID: PMC3403796 DOI: 10.12659/pjr.882965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2012] [Accepted: 04/19/2012] [Indexed: 11/09/2022] Open
Abstract
BACKGROUND Our purpose was to assess oversampling as a method for improving axial resolution in computed tomography. We propose a method for achieving fine, isotropic resolution when the scanning system has anisotropic resolution; in a typical clinical system, the axial resolution is much lower than the in-plane resolution. The idea relies on scanning with wide, overlapping slices and subsequently recovering resolution at the level of the scanning step. MATERIAL/METHODS Simulated three-dimensional images, as well as real microtomographic images of rat femoral bone, were used to test the proposed solution. Original high-resolution images were virtually scanned with a wide beam and a small step to simulate real measurements. The low-resolution image series were subsequently processed to recover the original fine resolution. The resolutions of the original, virtually scanned, and recovered images were compared using the modulation transfer function (MTF). RESULTS/CONCLUSIONS Oversampling proved effective for resolution recovery, as confirmed by comparing resolving power before and after recovery. The MTF analysis showed resolution improvement, but image noise rose considerably, which is clearly visible in the image histograms. Despite this drawback, the proposed method can be used successfully in practice, especially in trabecular bone studies, because of the high contrast between trabeculae and intertrabecular spaces.
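The recovery step can be illustrated in 1-D: a wide slice profile sampled at a fine step is a convolution with a boxcar, which deconvolution can invert. A sketch using a Wiener filter (one possible recovery filter; the paper does not mandate a specific one, and all sizes here are illustrative):

```python
import numpy as np

def scan_wide_beam(signal, width):
    """Simulate scanning with a wide slice: each sample is the average of
    `width` neighbouring fine-step positions (a boxcar slice profile)."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def wiener_deconvolve(measured, width, nsr=1e-3):
    """Recover the fine-resolution profile by Wiener deconvolution of the
    boxcar slice profile. nsr regularises frequencies the boxcar suppresses,
    which is also where the noise amplification noted above comes from."""
    n = len(measured)
    kernel = np.zeros(n)
    kernel[:width] = 1.0 / width
    kernel = np.roll(kernel, -(width // 2))  # centre the boxcar at index 0
    H = np.fft.fft(kernel)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(measured) * G))

# Toy axial profile: two "trabeculae" of different widths.
truth = np.zeros(128)
truth[40:44] = 1.0
truth[80:90] = 1.0
blurred = scan_wide_beam(truth, width=9)
recovered = wiener_deconvolve(blurred, width=9)
```

The recovered profile is much closer to the truth than the wide-beam scan, at the cost of amplifying any measurement noise near the frequencies the slice profile suppresses, mirroring the noise increase the histograms revealed.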
Collapse
|
32
|
Compton Scattering in Clinical PET/CT With High Resolution Half Ring PET Insert Device. IEEE TRANSACTIONS ON NUCLEAR SCIENCE 2010; 57:1045-1051. [PMID: 21552470 PMCID: PMC3087385 DOI: 10.1109/tns.2010.2046754] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
The integration of a high resolution PET insert into a conventional PET system can significantly improve the resolution and contrast of its images within a reduced imaging field of view. For the rest of the scanner's imaging field of view, however, the insert is a highly attenuating and scattering medium. In order to use all available coincidence events (including coincidences between two detectors in the original scanner, namely the scanner-scanner coincidences), appropriate scatter and attenuation corrections have to be implemented. In this work, we conducted a series of Monte Carlo simulations to estimate the composition of the scattering background and the importance of the scatter correction. We implemented and tested the Single Scatter Simulation (SSS) algorithm for a hypothetical system and show good agreement between the scatter estimated using SSS and the Monte Carlo simulated scatter contribution. We further applied the SSS to estimate the scatter contribution from an existing prototype PET insert for a clinical PET/CT scanner. The results demonstrated the applicability of SSS for estimating the scatter contribution within a clinical PET/CT system even when a high resolution half ring PET insert device is in its imaging field of view.
Collapse
|
33
|
Toward quantitative small animal pinhole SPECT: assessment of quantitation accuracy prior to image compensations. Mol Imaging Biol 2009; 11:195-203. [PMID: 19048346 PMCID: PMC3085830 DOI: 10.1007/s11307-008-0181-0] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2008] [Revised: 06/29/2008] [Accepted: 07/22/2008] [Indexed: 01/03/2023]
Abstract
PURPOSE We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under current preclinical settings, where image compensations are not routinely applied. PROCEDURES The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed on image reconstructions without compensating for attenuation, scatter, or limited system resolution. RESULTS Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to -18% (Tc-99m or In-111) or -41% (I-125). Including scatter effects changed these numbers to -12% (Tc-99m or In-111) and -21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have a greater impact on regional quantitation accuracy for small sphere sources than attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii of rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. CONCLUSIONS We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve quantitation accurate to within 5% of the truth, comprehensive image compensation methods are needed.
Collapse
|
34
|
Comparison between two super-resolution implementations in PET imaging. Med Phys 2009; 36:1370-83. [PMID: 19472644 PMCID: PMC3910148 DOI: 10.1118/1.3090890] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2007] [Revised: 02/05/2009] [Accepted: 02/06/2009] [Indexed: 11/07/2022] Open
Abstract
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POV). In this article, the authors propose a novel implementation of the SR technique whereby the required multiple low-resolution images are generated by shifting the reconstruction pixel grid during the image reconstruction process rather than being acquired from different POVs. The objective of this article is to compare the performance of the two SR implementations through theoretical and experimental studies. A mathematical framework is first provided to support the hypothesis that the two SR implementations have similar performance in current PET/CT scanners that use block detectors. Based on this framework, a simulation study, a point source study, and a NEMA/IEC phantom study were conducted to compare the two SR implementations with respect to contrast, resolution, noise, and SNR. For reference purposes, a comparison with a native reconstruction (NR) image using a high-resolution pixel grid was also performed. The mathematical framework showed that the two SR implementations are expected to achieve similar contrast and resolution but different noise content. These results were confirmed by the simulation and experimental studies. The simulation study showed that the two SR implementations have an average contrast difference of 2.3%, while the point source study showed that their average differences in contrast and resolution were 0.5% and 1.2%, respectively. For the point source study, the NR image exhibited, on average, 30% lower contrast and 8% lower resolution than the SR images. The NEMA/IEC phantom study showed that the three images (two SR and one NR) exhibited different noise structures. The SNR of the new SR implementation was, on average, 21.5% lower than that of the original implementation, largely due to an increase in background noise, while the NR image had, on average, 18.5% lower SNR and 8% lower contrast than the two SR images. The new SR implementation can potentially replace the original SR approach in current PET scanners that use block detectors, maintaining similar contrast and resolution but at a somewhat lower SNR. A major advantage of the new SR implementation is its shorter overall scan duration, which increases scanner throughput and reduces patient motion.
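In the ideal noiseless case, the grid-shifting idea reduces to shift-and-add interpolation: low-resolution samples taken at known subpixel offsets are interleaved onto a finer grid. The toy sketch below illustrates only that combination step; it is not the authors' reconstruction code, which applies the grid shifts during iterative image reconstruction.

```python
import numpy as np

def shift_and_add(low_res_images, shifts, factor=2):
    """Interleave subpixel-shifted low-res images onto a finer grid.
    shifts are (dy, dx) offsets in units of high-res pixels."""
    h, w = low_res_images[0].shape
    hi = np.zeros((h * factor, w * factor))
    for img, (dy, dx) in zip(low_res_images, shifts):
        hi[dy::factor, dx::factor] = img
    return hi

# Toy example: sample a known high-res scene at four half-pixel offsets
truth = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lows = [truth[dy::2, dx::2] for dy, dx in shifts]
recon = shift_and_add(lows, shifts)
assert np.array_equal(recon, truth)  # exact recovery in the noiseless toy case
```

In practice the low-resolution inputs are noisy reconstructions rather than exact samples of the scene, which is why the two implementations can differ in noise content even where contrast and resolution match.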
|
35
|
The Effects of Incorrect Modeling on Noise and Resolution Properties of ML-EM Images. IEEE TRANSACTIONS ON NUCLEAR SCIENCE 2002; 49:768-773. [PMID: 21785511 PMCID: PMC3140698 DOI: 10.1109/tns.2002.1039561] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The effects of incorrect compensation for collimator blur in single-photon emission computed tomography (SPECT) images are studied in terms of the noise and resolution properties of the reconstructed images. Qualitative analysis of Hoffman brain phantom images reconstructed with the nonlinear maximum-likelihood expectation-maximization (ML-EM) algorithm shows longer-range noise correlations in high-pass-filtered images. These qualitative observations are confirmed with more quantitative noise measures. The differences also appear in images reconstructed using linear Landweber iteration. However, the signal-to-noise ratio, in terms of the noise-equivalent quanta, remains largely unchanged. We conclude that the compensation model affects SPECT image properties, though the effect on human task performance remains to be studied.
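The model-mismatch scenario can be sketched in one dimension: a point source is blurred by a Gaussian of one width, while ML-EM reconstructs it assuming a different width. All widths, the grid size, and the iteration count below are illustrative choices, not the paper's settings, and noise is omitted for clarity (the paper's analysis centers on the noise properties of such reconstructions); the sketch shows only the resolution side of the mismatch.

```python
import numpy as np

def gaussian_matrix(n, sigma):
    """Dense 1-D Gaussian blur matrix; each column normalized to sum to 1."""
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return A / A.sum(axis=0)

def ml_em(y, A, n_iter=100):
    """Multiplicative ML-EM update: x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        x *= (A.T @ (y / np.clip(A @ x, 1e-12, None))) / sens
    return x

truth = np.zeros(64)
truth[30] = 100.0                                  # point source
A_true = gaussian_matrix(64, sigma=2.0)            # actual collimator blur
y = A_true @ truth                                 # noiseless projection
x_match = ml_em(y, A_true)                         # correctly modeled blur
x_wrong = ml_em(y, gaussian_matrix(64, sigma=1.0)) # under-modeled blur
# Under-modeling leaves residual blur, so the matched peak is taller/sharper
```

Because ML-EM with this normalization preserves total counts, both reconstructions integrate to the source activity; only the spatial distribution (and, with noisy data, the noise correlation structure) differs between the matched and mismatched models.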
|