1. Improving Medical Photography in a Level 1 Trauma Center by Implementing a Specialized Smartphone-Based App in Comparison to the Usage of Digital Cameras: Prospective Panel Study. JMIR Form Res 2024;8:e47572. PMID: 38271087; PMCID: PMC10853857; DOI: 10.2196/47572.
BACKGROUND: Medical photography plays a pivotal role in modern health care, serving multiple purposes ranging from patient care to medical documentation and education. Specifically, it aids in wound management, surgical planning, and medical training. While digital cameras have traditionally been used, smartphones equipped with specialized apps present an intriguing alternative, offering increased usability and efficiency and the capability to uphold medicolegal standards more effectively and consistently.
OBJECTIVE: This study aimed to assess whether implementing a specialized smartphone app could lead to more frequent and efficient use of medical photography.
METHODS: We carried out a comprehensive single-center panel investigation at a level 1 trauma center, encompassing the emergency department, operating theaters, and surgical wards, over a 6-month period from June to November 2020. Using weekly questionnaires, health care providers were asked about their experiences and preferences when using both digital cameras and smartphones equipped with a specialized medical photography app. Parameters such as frequency of use, time taken for image upload, and general usability were assessed.
RESULTS: A total of 65 questionnaires were assessed for digital camera use and 68 for smartphone use. Usage increased significantly, by 5.4 (SD 1.9) times per week (95% CI 1.7-9.2; P=.005), when the smartphone was used. The time to upload pictures to the clinical picture archiving system was significantly shorter for the app (mean 1.8, SD 1.2 min) than for the camera (mean 14.9, SD 24.0 h; P<.001). The smartphone also outperformed the digital camera in terms of technical failure (4.4% vs 9.7%; P=.04) and in the technical processes of archiving (P<.001) pictures to the picture archiving and communication system (PACS) and displaying images (P<.001) from it. No difference was found with regard to the photographer's intent (P=.31) or reasoning (P=.94) behind the pictures. Additionally, potential concerns regarding data security and patient confidentiality were better addressed by the smartphone app, given its encryption capabilities and password protection.
CONCLUSIONS: Specialized smartphone apps provide a secure, rapid, and user-friendly platform for medical photography, showing significant advantages over traditional digital cameras. This study supports the notion that these apps not only have the potential to improve patient care, particularly in wound management, but also offer substantial medicolegal and economic benefits. Future research should focus on additional aspects such as patient comfort and preference, image resolution, and photograph quality, and should corroborate these findings with a larger sample size.
2. Color Conversion of Wide-Color-Gamut Cameras Using Optimal Training Groups. Sensors (Basel) 2023;23:7186. PMID: 37631723; PMCID: PMC10460023; DOI: 10.3390/s23167186.
The colorimetric conversion of wide-color-gamut cameras plays an important role in the field of wide-color-gamut displays. However, it is difficult to establish conversion models with the desired approximation accuracy over a wide color gamut. In this paper, we propose an optimal method for establishing color conversion models that map the RGB space of cameras to the XYZ space of the CIEXYZ system. The method uses the Pearson correlation coefficient to evaluate the linear correlation between the RGB values and the XYZ values in a training group, so that a training group with optimal linear correlation can be obtained. Using this training group, the color conversion models can be established and the desired conversion accuracy obtained over the whole color space. In the experiments, wide-color-gamut sample groups were designed and divided into different groups according to their hue angles and chromas in the CIE 1976 L*a*b* space, with the Pearson correlation coefficient used to evaluate the linearity between the RGB and XYZ spaces. In particular, two kinds of color conversion models, polynomial formulas with different numbers of terms and a back-propagation artificial neural network (BP-ANN), were trained and tested with the same sample groups. The experimental results show that the color conversion errors (CIE 1976 L*a*b* color difference) of the polynomial transforms can be decreased efficiently when the training groups are divided by hue angle.
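The training-group screening and polynomial mapping described above can be sketched as follows. This is a minimal illustration, not the paper's exact model: the 10-term polynomial basis and the channel-wise Pearson score used to rank candidate training groups are assumed choices.

```python
import numpy as np

def poly_terms(rgb):
    """Expand RGB values to a 10-term polynomial basis (illustrative term set)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2, np.ones_like(r)], axis=1)

def fit_rgb_to_xyz(rgb_train, xyz_train):
    """Least-squares fit of the polynomial model: terms(RGB) -> XYZ."""
    A = poly_terms(rgb_train)
    M, *_ = np.linalg.lstsq(A, xyz_train, rcond=None)
    return M

def apply_model(M, rgb):
    """Convert camera RGB to XYZ with a fitted model matrix M."""
    return poly_terms(rgb) @ M

def pearson_linearity(rgb, xyz):
    """Mean absolute Pearson r between matched RGB and XYZ channels,
    used here as a screening score for candidate training groups."""
    rs = [np.corrcoef(rgb[:, i], xyz[:, i])[0, 1] for i in range(3)]
    return float(np.mean(np.abs(rs)))
```

A training group whose `pearson_linearity` score is highest would then be the one used to fit the conversion model.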
3. Estimating Fluor Emission Spectra Using Digital Image Analysis Compared to Spectrophotometer Measurements. Sensors (Basel) 2023;23:4291. PMID: 37177494; PMCID: PMC10181702; DOI: 10.3390/s23094291.
This paper describes a practical method for obtaining the spectrum of light emitted by a fluor in a liquid scintillator (LS) using a digital camera. The emission wavelength results obtained from digital images were compared with those obtained using a fluorescence spectrophotometer. For general users, conventional spectrophotometers are expensive and difficult to access; moreover, their measurement setup and processes are complicated and require considerable care in handling. To overcome these limitations, a feasibility study was performed to obtain the emission spectrum through image analysis. Specifically, the emission spectrum of a fluor dissolved in a liquid scintillator was obtained using digital image analysis, with an image processing method employed to convert the light captured during camera exposure into wavelengths. Because hue (H) and wavelength (W) are closely related, we obtained an H-W response curve in the 400-450 nm wavelength region using a light-emitting diode. Another relevant advantage of the method is that it is non-invasive for sealed LS samples. Our results showed that this method has the potential to accurately investigate the emission wavelengths of fluors within acceptable uncertainties. We envision the use of this method in chemistry and physics laboratory experiments in the future.
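The hue-to-wavelength step can be sketched as below. The H-W calibration pairs here are hypothetical placeholders; in practice the curve must be measured with an LED source of known peak wavelength, as the abstract describes.

```python
import colorsys
import numpy as np

# Hypothetical H-W calibration pairs in the 400-450 nm region
# (hue in degrees measured from LED images at known peak wavelengths).
CAL_HUE_DEG = np.array([280.0, 265.0, 250.0, 235.0, 220.0])   # assumed values
CAL_WAVELENGTH_NM = np.array([400.0, 412.0, 425.0, 437.0, 450.0])

def hue_deg(r, g, b):
    """Hue of an RGB pixel (0-255 channels) in degrees."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

def hue_to_wavelength(h_deg):
    """Interpolate wavelength from the calibrated H-W curve."""
    order = np.argsort(CAL_HUE_DEG)  # np.interp needs increasing x
    return float(np.interp(h_deg, CAL_HUE_DEG[order], CAL_WAVELENGTH_NM[order]))
```

With a denser, measured calibration table, the same interpolation maps each pixel's hue to an estimated emission wavelength.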
4. Investigation of the Hue-Wavelength Response of a CMOS RGB-Based Image Sensor. Sensors (Basel) 2022;22:9497. PMID: 36502198; PMCID: PMC9739397; DOI: 10.3390/s22239497.
In this study, a non-linear hue-wavelength (H-W) curve was investigated from 400 to 650 nm. To date, no study has reported H-W relationship measurements, especially down to the 400 nm region. A digital camera with complementary metal oxide semiconductor (CMOS) image sensors was used, and the digital images of the sample were analyzed with an RGB-based imaging analysis rather than multispectral or hyperspectral imaging. We focused on the raw image to reconstruct the H-W curve. Several factors affecting the digital image, such as exposure time and ISO sensitivity, were also investigated, and the H-W response was cross-checked using lasers. We expect our method to be useful as an auxiliary means of obtaining fluor emission wavelength information.
5. An automatic fluorescence phenotyping platform to evaluate dynamic infection process of Tobacco mosaic virus-green fluorescent protein in tobacco leaves. Front Plant Sci 2022;13:968855. PMID: 36119566; PMCID: PMC9478445; DOI: 10.3389/fpls.2022.968855.
Tobacco is an economically important crop worldwide, and Tobacco mosaic virus (TMV) seriously affects the yield and quality of tobacco leaves. The expression of TMV in tobacco leaves can be analyzed by detecting green-fluorescence-related traits after inoculation with an infectious clone of TMV-GFP (Tobacco mosaic virus tagged with green fluorescent protein). However, traditional methods for detecting TMV-GFP are time-consuming, laborious, and largely manual. In this study, we developed a low-cost machine-vision-based phenotyping platform for the automatic evaluation of fluorescence-related traits in tobacco leaves using a digital camera and image processing. A dynamic monitoring experiment lasting 7 days was conducted to evaluate the platform using 14 Nicotiana tabacum L. samples, comprising the wild-type strain SR1 and 4 mutant lines generated by RNA interference. Green fluorescence area and brightness generally increased over time, with trends that differed between SR1 and the mutant lines; the maximum and minimum green fluorescence area and brightness were observed in mutant-4 and mutant-1, respectively. In conclusion, the platform can fully automatically extract fluorescence-related traits with low cost and high accuracy, and could be used to detect dynamic changes of TMV-GFP in tobacco leaves.
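The two extracted traits (fluorescence area and brightness) could be computed along these lines. The green-channel threshold and dominance ratio below are assumed values for illustration, not the platform's calibrated parameters.

```python
import numpy as np

def green_fluorescence_traits(img, g_thresh=60, dominance=1.3):
    """Extract fluorescence-related traits from an RGB leaf image
    (uint8, H x W x 3): pixels whose green channel exceeds `g_thresh`
    and dominates red/blue by factor `dominance` count as fluorescent.
    Thresholds are illustrative, not the paper's calibrated values."""
    img = img.astype(np.float32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (g > g_thresh) & (g > dominance * r) & (g > dominance * b)
    area = int(mask.sum())                       # fluorescent pixel count
    brightness = float(g[mask].mean()) if area else 0.0
    return area, brightness
```

Run per image over the 7-day series, these two numbers give the time trends the study reports.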
6. The Spectrum of Light Emitted by LED Using a CMOS Sensor-Based Digital Camera and Its Application. Sensors (Basel) 2022;22:6418. PMID: 36080877; PMCID: PMC9460717; DOI: 10.3390/s22176418.
We introduced a digital photo image analysis in color space to estimate the spectrum of fluor components dissolved in a liquid scintillator sample through the hue-wavelength relationship. Complementary metal oxide semiconductor (CMOS) image sensors with Bayer color filter array (CFA) technology in the digital camera were used to reconstruct and decode color images. Hue and wavelength are closely related, yet to date no literature has reported measurements of this relationship, especially in the blue or near-UV region. The non-linear hue-wavelength relationship in the blue region was therefore investigated using a light-emitting diode source. We focused on this region because the maximum quantum efficiency of a bi-alkali photomultiplier tube (PMT) is around 430 nm, so a good understanding of it is necessary for PMT-based experiments. The CMOS Bayer CFA approach was sufficient to estimate the fluor emission spectrum in the liquid scintillator sample without an expensive spectrophotometer.
7. Visualization of dose distribution and basic study of dose estimation using plastic scintillator and digital camera. Biomed Phys Eng Express 2022;8. PMID: 35764067; DOI: 10.1088/2057-1976/ac7c91.
Radiation can be visualized using a scintillator and a digital camera. If the amount of light emitted by the scintillator increases with dose, the dose can be estimated from the emitted light. In this study, the basic performance of a scintillator and digital camera system was evaluated by measuring the computed tomography dose index (CTDI). A circular plastic scintillator plate was sandwiched between polymethyl methacrylate (PMMA) phantoms and irradiated with X-rays while the X-ray tube rotated, to confirm changes in light emission. In addition, the CTDI was estimated from the light emitted by the scintillator during a helical scan and compared with the value measured with a dosimeter. The scintillator emitted light whose distribution changed with the movement of the X-ray tube. The measured CTDIvol was 33.20 mGy, whereas the CTDIvol estimated from the scintillation light was approximately 46 mGy, about 40% larger. In particular, when the scintillator was directly irradiated, the dose was overestimated compared with the dosimeter value. This overestimation may be attributable to the reproducibility of the positioning, the difference between the light-emission sensitivity of the scintillator and the sensitivity of the dosimeter, and the non-uniform position sensitivity caused by the wide-angle lens.
8. Pixel Image Analysis and Its Application with an Alcohol-Based Liquid Scintillator for Particle Therapy. Sensors (Basel) 2022;22:4876. PMID: 35808370; PMCID: PMC9269500; DOI: 10.3390/s22134876.
We synthesized an alcohol-based liquid scintillator (AbLS) and implemented an auxiliary monitoring system with short calibration intervals for particle therapy. The commercial liquid scintillators used in previous studies did not allow the user to control the chemical composition; in our study, the chemical ratio of the AbLS was freely controlled by mixing water and alcohol, with 2-ethoxyethanol used to make the material equivalent to the human body. There was no significant difference in areal density between the AbLS and water. As an application, the range of an electron beam was measured in an image analysis combining the AbLS and a mobile phone camera. Given a range-energy relationship for electrons expressed as areal density, the electron beam range (cm) in water can be easily estimated. To date, no direct comparison of pixel image analysis with Monte Carlo (MC) simulation has been published. Furthermore, optical tomography of the inverse problem was performed with the AbLS and a mobile phone camera; analysis of the optical tomography images provides deeper insight into the Radon transformation. In addition, a human phantom, which is difficult to instrument with semiconductor diodes, was easily implemented in the image acquisition and analysis system.
9. New device for taking nine-directional ocular photographs: "9Gaze" application. J Eye Mov Res 2022;15(1). PMID: 35444735; PMCID: PMC9015868; DOI: 10.16910/jemr.15.1.5.
This study compared the time required to produce nine-directional ocular photographs using the conventional method versus the newly devised 9Gaze application. In total, 20 healthy adults, 10 adult patients with strabismus, and 10 pediatric patients with amblyopia or strabismus had their ocular photographs taken using a digital camera with PowerPoint 2010, an iPad with 9Gaze, and an iPod touch with 9Gaze. Photographs of 10 of the healthy participants were taken by orthoptists with <1 year of experience; the other participants were photographed by orthoptists with >1 year of experience. The required time was compared between the three devices in all participants, and between the two orthoptist groups (>1 year and <1 year of experience) in the 20 healthy adults. The required times differed significantly between the devices: 515.5 ± 187.0 s with the digital camera, 117.4 ± 17.8 s with the iPad, and 76.3 ± 14.1 s with the iPod touch. The time required with the digital camera also differed significantly between the two orthoptist groups (404.7 ± 150.8 vs 626.3 ± 154.2 s, P=.007). The 9Gaze application shortened the recording time and can be used regardless of the examiner's years of experience.
10. Contactless Vital Signs Monitoring From Videos Recorded With Digital Cameras: An Overview. Front Physiol 2022;13:801709. PMID: 35250612; PMCID: PMC8895203; DOI: 10.3389/fphys.2022.801709.
The measurement of physiological parameters is fundamental to assessing the health status of an individual. Contactless monitoring of vital signs may provide benefits in various fields of application, from healthcare and clinical settings to occupational and sports scenarios. Recent research has focused on the potential of camera-based systems working in the visible range (380-750 nm) to estimate vital signs by capturing subtle color changes or motions caused by physiological activity but invisible to the human eye. These quantities are typically extracted from videos framing exposed body areas (e.g., face, torso, and hands) with adequate post-processing algorithms. In this review, we provide an overview of the physiological and technical aspects behind the estimation of vital signs such as respiratory rate, heart rate, blood oxygen saturation, and blood pressure from digital images, as well as the potential fields of application of these technologies. For each vital sign, we provide the rationale for the measurement, a classification of the different techniques implemented for post-processing the original videos, and the main results obtained in applications or validation studies. The available evidence supports the premise of digital cameras as an unobtrusive and easy-to-use technology for monitoring physiological signs. Further research is needed to promote advancement of the technology, allowing its application in a wide range of populations and in everyday life, fostering a biometrical holistic of the human body (BHOHB) approach.
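As a minimal illustration of the color-based family of techniques surveyed here, heart rate can be estimated from the frame-wise mean green intensity of a skin region via the dominant spectral peak. This is a simplified sketch, not any specific method from the review.

```python
import numpy as np

def heart_rate_from_green(green_means, fps):
    """Estimate heart rate (beats/min) from the frame-wise mean green
    intensity of a skin region, using the dominant spectral peak in a
    plausible physiological band of 0.7-4 Hz (42-240 bpm)."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                        # remove the DC component
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)  # heart-rate band
    f_peak = freqs[band][np.argmax(power[band])]
    return 60.0 * f_peak
```

Real implementations add detrending, band-pass filtering, and motion-robust region tracking on top of this core idea.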
11. Estimation of Fluor Emission Spectrum through Digital Photo Image Analysis with a Water-Based Liquid Scintillator. Sensors (Basel) 2021;21:8483. PMID: 34960580; PMCID: PMC8703946; DOI: 10.3390/s21248483.
In this paper, we performed a feasibility study of a water-based liquid scintillator (WbLS) for imaging analysis with a digital camera. A liquid scintillator (LS) dissolves a scintillating fluor in an organic base solvent to emit light; we synthesized one using water as the solvent, which requires a suitable surfactant to mix the water and oil. As an application of the WbLS, we introduced a digital photo image analysis in color space, briefly describing the demosaicing process used to reconstruct and decode color. We were able to estimate the emission spectrum of the fluor dissolved in the WbLS by analyzing the pixel information stored in the digital image. This technique offers the potential to estimate fluor components in the visible region without an expensive spectrophotometer. In addition, sinogram analysis was performed with the Radon transformation to reconstruct transverse images from longitudinal photo images of the WbLS sample.
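The sinogram-and-reconstruction step can be illustrated with a pure-NumPy sketch: nearest-neighbour rotation, column sums for the projections, and unfiltered backprojection. This is a didactic approximation; real pipelines would use filtered backprojection.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation of a square image about its centre."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    th = np.deg2rad(deg)
    yy, xx = np.mgrid[0:n, 0:n]
    # inverse-map each output pixel back into the source image
    xs = np.cos(th) * (xx - c) + np.sin(th) * (yy - c) + c
    ys = -np.sin(th) * (xx - c) + np.cos(th) * (yy - c) + c
    xi = np.clip(np.rint(xs).astype(int), 0, n - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, n - 1)
    valid = (xs >= 0) & (xs <= n - 1) & (ys >= 0) & (ys <= n - 1)
    return img[yi, xi] * valid  # zero samples that fell outside

def sinogram(image, angles_deg):
    """Radon-transform sketch: one column-sum projection per view angle,
    as with longitudinal photos taken around the scintillator sample."""
    return np.stack([rotate_nn(image, a).sum(axis=0) for a in angles_deg])

def backproject(sino, angles_deg):
    """Unfiltered backprojection: smear each projection across the image
    plane and average (blurred but recognizable reconstruction)."""
    n = sino.shape[1]
    recon = np.zeros((n, n))
    for proj, a in zip(sino, angles_deg):
        recon += rotate_nn(np.tile(proj, (n, 1)), -a)
    return recon / len(angles_deg)
```

A bright region in the sample shows up as a sinusoidal trace in the sinogram and is recovered (blurred) at its original position by backprojection.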
12. DisCaaS: Micro Behavior Analysis on Discussion by Camera as a Sensor. Sensors (Basel) 2021;21:5719. PMID: 34502609; PMCID: PMC8434061; DOI: 10.3390/s21175719.
The emergence of various types of commercial cameras (compact, high-resolution, high-angle-of-view, high-speed, high-dynamic-range, etc.) has contributed significantly to the understanding of human activities. Taking advantage of a high angle of view, this paper demonstrates a system that recognizes micro-behaviors in a small group discussion with a single 360-degree camera, towards quantified meeting analysis. We propose a method that recognizes speaking and nodding, which have often been overlooked in existing research, from a video stream of face images using a random forest classifier. The approach was evaluated on three datasets. For the first two, participants met physically: 16 five-minute recordings from 21 unique participants, and seven 10-minute meetings from 12 unique participants. The experimental results showed that our approach detects speaking and nodding with a macro-average F1-score of 67.9% in 10-fold random-split cross-validation and 62.5% in leave-one-participant-out cross-validation. Considering the increased demand for online meetings due to the COVID-19 pandemic, we also recorded on-screen faces captured by web cameras as a third dataset and discuss the potential and challenges of applying our ideas to virtual video conferences.
13. Basic study of mobile gamma ray imaging using a digital camera and scintillator. Biomed Phys Eng Express 2021;7. PMID: 33752192; DOI: 10.1088/2057-1976/abf0e3.
Gamma cameras are used in nuclear medicine examinations involving radioisotopes; however, they do not provide real-time feedback. We propose a real-time imaging method based on a commercially available digital camera and a scintillator array to provide simple and accurate measurements of radioisotope accumulation and contamination. We evaluated the sensitivity and resolution of the proposed device using X-rays as a proxy for gamma rays, demonstrating its performance with PENTAX KP and ORCA-spark C11440-36U digital cameras. A caesium iodide scintillator array was irradiated with X-rays, with the light emission confirmed in live-view images, and the pixel value was evaluated as a function of dose rate. We also investigated the effect of amplifying the light signal with an image intensifier. For the PENTAX KP, luminescence was observable at a dose rate of approximately 10 mSv/h, improving to 2.1 mSv/h with an image intensifier. Notably, the ORCA-spark detected emission at a dose rate as low as 0.06 mSv/h, although the image intensifier produced noisier images. Therefore, although the ORCA-spark can observe luminescence at dose rates expected in nuclear medicine examinations, a collimator is required to control the spread of gamma rays; because this reduces sensitivity, increasing the amount of light emitted by the scintillator and improving the sensitivity of the camera are vital.
14. Measurement of Water Leaving Reflectance Using a Digital Camera Based on Multiple Reflectance Reference Cards. Sensors (Basel) 2020;20:6580. PMID: 33217939; PMCID: PMC7698626; DOI: 10.3390/s20226580.
With the development of citizen science, digital cameras and smartphones are increasingly utilized in water quality monitoring. The smartphone application HydroColor quantitatively retrieves water quality parameters from digital images. HydroColor assumes a linear relationship between the digital pixel number (DN) and incident radiance and applies a grey reference card to derive water leaving reflectance. However, image DNs change with incident light brightness non-linearly, according to a power function. We developed an improved method for observing and calculating water leaving reflectance from digital images based on multiple reflectance reference cards. The method was applied to acquire water, sky, and reference card images using a Canon 50D digital camera at 31 sampling stations; the results were validated against water leaving reflectance measured synchronously with a field spectrometer. The R² values for the red, green, and blue bands were 0.94, 0.95, and 0.94, and the mean relative errors were 27.6%, 29.8%, and 31.8%, respectively. The validation confirms that this method derives accurate water leaving reflectance, especially compared with HydroColor, which systematically overestimates it. Our results provide a more accurate theoretical foundation for quantitative water quality monitoring using digital and smartphone cameras.
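The power-law DN model underlying the improved method can be sketched as a log-log fit over several reference cards. This simplified version assumes the radiance reaching the camera is proportional to card reflectance under a common illumination and omits the sky-glint correction applied in the full method.

```python
import numpy as np

def fit_dn_response(dn_cards, refl_cards):
    """Fit DN = a * reflectance**gamma from grey reference cards of
    known reflectance photographed under the same illumination
    (log-log least squares). Sky-glint correction omitted here."""
    logr = np.log(np.asarray(refl_cards, dtype=float))
    logd = np.log(np.asarray(dn_cards, dtype=float))
    gamma, loga = np.polyfit(logr, logd, 1)
    return np.exp(loga), gamma

def dn_to_reflectance(dn, a, gamma):
    """Invert the fitted power law to estimate reflectance from a DN."""
    return (np.asarray(dn, dtype=float) / a) ** (1.0 / gamma)
```

Fitting per color band captures the non-linear DN response that HydroColor's single-card linear assumption misses.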
15. Direct Georeferencing for the Images in an Airborne LiDAR System by Automatic Boresight Misalignments Calibration. Sensors (Basel) 2020;20:5056. PMID: 32899588; PMCID: PMC7570596; DOI: 10.3390/s20185056.
An airborne Light Detection and Ranging (LiDAR) system and a digital camera are usually integrated on a flight platform to obtain multi-source data. However, photogrammetric system calibration is often performed independently of the LiDAR system by aerial triangulation, which requires a test field with ground control points. In this paper, we present a method for the direct georeferencing of images collected by a digital camera integrated in an airborne LiDAR system through automatic boresight misalignment calibration aided by the point cloud. The method first uses image matching to generate a tie point set. Space intersection is then performed to obtain the object coordinates of the tie points, with the elevation from space intersection replaced by the value from the LiDAR data, producing a new object point called a Virtual Control Point (VCP). Because boresight misalignments exist, there is a distance between each tie point and the image point of its VCP, computed by the collinearity equations in the image from which the tie point was selected. An iterative process minimizes this distance with boresight corrections in each epoch, stopping when the distance falls below a predefined threshold or the maximum number of epochs is reached. Two datasets from real projects were used to validate the proposed method, and the experimental results, evaluated both quantitatively and visually, demonstrate its effectiveness.
16. Heat Transfer and Temperature Characteristics of a Working Digital Camera. Sensors (Basel) 2020;20:2561. PMID: 32365948; PMCID: PMC7248918; DOI: 10.3390/s20092561.
Digital cameras, typified by industrial cameras, are widely used as image acquisition sensors in image-based mechanics measurement, and their thermal effects inevitably induce thermally induced measurement errors. Understanding these errors requires studying the camera's thermal behavior. This study systematically investigated the heat transfer processes and temperature characteristics of a working digital camera. Based on the temperature distribution of a typical working camera, the heat transfer was investigated, and a model describing the temperature variation and distribution was presented and verified experimentally. With this model, the thermal equilibrium time and thermal equilibrium temperature of the camera system were calculated. The influences of the camera's thermal parameters and the environmental temperature on its temperature characteristics were then simulated and experimentally investigated. The theoretical analysis and experimental results demonstrate that the model accurately describes the temperature characteristics and can be used to calculate the thermal equilibrium state of a working camera, which helps guide mechanics measurement and thermal design based on such camera sensors.
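A lumped-capacitance model of the kind described, with internal heat generation balanced against convective loss, C·dT/dt = P − hA·(T − T_env), can be sketched as follows. All parameter values are illustrative assumptions, not the paper's identified constants.

```python
import math

def camera_temperature(t, power_w=2.0, h_a=0.15, c_th=80.0,
                       t_env=23.0, t0=23.0):
    """Lumped-capacitance warm-up of a working camera:
    C dT/dt = P - hA (T - T_env), solved in closed form.
    power_w: dissipated power (W); h_a: hA product (W/K);
    c_th: thermal capacitance (J/K). Values are illustrative."""
    t_eq = t_env + power_w / h_a    # thermal equilibrium temperature
    tau = c_th / h_a                # thermal time constant (s)
    return t_eq + (t0 - t_eq) * math.exp(-t / tau)
```

The two quantities the study computes, thermal equilibrium temperature and equilibrium time, fall directly out of `t_eq` and a few multiples of `tau`.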
17. Quality and Feasibility of Automated Digital Retinal Imaging in the Emergency Department. J Emerg Med 2019;58:18-24. PMID: 31718881; DOI: 10.1016/j.jemermed.2019.08.034.
BACKGROUND: Emergency physicians (EPs) frequently evaluate patients at risk for sight-threatening conditions but may have difficulty performing direct ophthalmoscopy effectively. Digital fundus photography offers a potential alternative.
OBJECTIVE: We sought to assess the performance of an automated digital retinal imaging platform in a real-world emergency department.
METHODS: We performed a prospective, observational study of emergency department patients at risk for acute, nontraumatic, posterior segment pathology. Photographs were obtained using an automated digital retinal camera and subsequently reviewed by an ophthalmologist. We recorded the number of attempts required, total time required, patient comfort, and findings on EP-performed direct ophthalmoscopy, if performed.
RESULTS: Of 123 participants completing the study, 93 (75.6%) had ≥1 eye with a diagnostically useful image, while 29 (23.6%) had no photographs of diagnostic value. The mean number of attempts required to obtain images was 1.45 (range 1-3), and the mean elapsed time to complete photography was 109.6 s. The mean patient comfort score was 4.6 on a 5-point scale, where 5 was most comfortable. Direct ophthalmoscopy was performed by an emergency department provider for 19 (15.4%) patients. Expert review of the fundus photographs noted acute findings in 14 patients; direct ophthalmoscopy had been performed by an EP in only 2 of these cases, and only 1 finding was ultimately identified correctly.
CONCLUSIONS: Automated digital imaging of the ocular fundus is rapidly performed, is well tolerated by patients, and can be used to obtain diagnostic-quality images without pharmacologic pupillary dilation in most emergency department patients at risk for acute posterior segment pathology.
18
Abstract
BACKGROUND Radiographs of the feet are the reference standard for measuring the hallux valgus angle. However, the availability and use of radiographs are constrained by cost and radiation exposure. Less invasive, nonradiographic assessments have been proposed, although measurement using self-photography has not been reported. OBJECTIVES To determine (1) the reliability of photographic hallux valgus angle (pHVA) measurement using the same photographs of the feet, (2) the reliability of repeated self-photography trials, and (3) the measurement error when the radiographic hallux valgus angle (rHVA) is estimated using the pHVA. METHODS In this reliability study, participants took photographs of their own feet using a digital camera. The intrarater and interrater reliability of pHVA measurements were then assessed using the intraclass correlation coefficient (ICC) and the 95% minimum detectable change (MDC). The participants took photographs twice, and the reliability of repeated self-photography trials was examined. Participants also underwent radiography of their feet, from which the rHVA was measured. The measurement error was assessed using the mean difference and 95% limits of agreement. RESULTS The intrarater and interrater ICCs of pHVA measurement were 0.99, with MDCs less than 2°. The ICC of pHVA measurement for repeated self-photography was 0.96, and the MDC was 6.9°. The pHVA was systematically lower than the rHVA, by 5.3°. CONCLUSION Measurement of the pHVA using self-photography was reproducible, although it underestimated the rHVA. The pHVA can be a useful nonradiographic method for quantifying hallux valgus deformity. J Orthop Sports Phys Ther 2019;49(2):80-86. Epub 12 Sep 2018. doi: 10.2519/jospt.2019.8280.
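As an illustrative aside (not from the article itself): the MDC95 reported in such reliability studies is conventionally derived from the ICC and the between-subject standard deviation via the standard error of measurement (SEM). The SD value below is hypothetical, chosen only to show that an ICC of 0.96 can yield an MDC near the reported 6.9°.

```python
# Hedged sketch: MDC95 from ICC and SD via the standard error of measurement.
# The SD of 12.5 degrees is an assumed, illustrative value.
import math

def mdc95(sd, icc):
    """MDC95 = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - ICC)."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# With ICC = 0.96 (repeated self-photography) and an assumed SD of 12.5 deg,
# the minimum detectable change is approximately 6.9 degrees.
value = mdc95(12.5, 0.96)
```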
19
[Application of near-surface remote sensing in monitoring the dynamics of forest canopy phenology.]. YING YONG SHENG TAI XUE BAO = THE JOURNAL OF APPLIED ECOLOGY 2018; 29:1768-1778. [PMID: 29974684 DOI: 10.13287/j.1001-9332.201806.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Near-surface remote sensing is an important technique for in-situ monitoring of forest phenology and a robust tool for scaling phenology with high temporal resolution and moderate spatial coverage. Here, we first reviewed near-surface remote sensing methods using three major optical sensors (i.e., radiometer, spectrometer, and digital camera) for monitoring forest phenology. Second, we analyzed sources of uncertainty in distinguishing phenophases, using data obtained at the Maoershan flux site in temperate forest. We found that the error was mainly attributable to the extraction method. Third, we analyzed the linkage of near-surface remote sensing with other methods and its intrinsic problems. Finally, we proposed four priorities for research in this field: 1) linking optical (or canopy structural) phenology with functional phenology (physiological and ecological processes); 2) integrating regional networks of canopy phenology for global networked observation and data sharing; 3) integrating multi-source and multi-scale phenological data with the help of near-surface remote sensing; and 4) developing phenology models based on near-surface remote sensing to improve phenology simulation in dynamic global vegetation models.
20
Abstract
Photography has always been an integral part of dentistry. The journey goes back to the time when film photography was used only for documentation and referral purposes, and it has now evolved into digital photography. Its application in dental practice is simple, fast, and extremely useful for documenting procedures, educating patients, and pursuing clinical investigations, thus providing many benefits to dentists and patients. The article describes the added benefits of digital dental photography over film photography, the basic armamentarium for obtaining good photographs, and how digital dental photography is beneficial in the field of prosthodontics.
21
Imaging Analysis by Digital Camera for Separating Broiler Breast Meat with Low Water-Holding Capacity. J Poult Sci 2017; 54:253-261. [PMID: 32908434 PMCID: PMC7477213 DOI: 10.2141/jpsa.0160122] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
To separate breast meat with low water-holding capacity, conformation parameters (thickness, volume, bottom area, and perimeter) and the color of chicken breast meat were measured by direct measurement and by imaging analysis with a digital camera. Samples were obtained from a production line. The L* value was used to separate the samples into three quality classes: dark-colored samples (L*<50), normal-colored samples (50≤L*≤56), and light-colored samples (L*>56). Light-colored samples had higher moisture content, thawing loss, and drip loss, and lower pH, compared with normal- and dark-colored samples. Lower thickness was observed in the light-colored samples compared with the normal- and dark-colored samples. Light- and normal-colored samples had a greater volume of meat than did the dark-colored samples. Imaging analysis showed that light-colored samples had a greater bottom area and perimeter compared with normal- and dark-colored samples. However, these conformation parameters showed low correlation with water-holding capacity, which was determined as thawing and drip loss of the samples. Therefore, the conformation parameters, determined by direct measurement or imaging analysis, could not be used to predict the water-holding capacity of breast meat. Nevertheless, water-holding capacity showed high correlation with the L* value of breast meat. Imaging analysis could thus be used to separate light-colored breast meat, which mostly has low water-holding capacity. The accuracy of determining the characteristics of light-, normal-, and dark-colored samples by imaging analysis was evaluated. The characteristics of light-colored samples were determined with higher accuracy by imaging analysis than were those of normal- and dark-colored samples.
This result indicated that imaging analysis using a digital camera could be used to separate light-colored breast meat with mostly low water-holding capacity from normal- and dark-colored meat.
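As an illustrative aside (not code from the study): the L* thresholds quoted in the abstract amount to a simple three-way classification rule. A minimal sketch, with hypothetical sample values:

```python
# Hedged sketch: classifying breast-meat samples by CIELAB lightness (L*)
# using the thresholds reported in the abstract. Sample values are invented.

def classify_by_lightness(l_star):
    """Return the color class for a sample from its L* value."""
    if l_star < 50:
        return "dark"
    elif l_star <= 56:
        return "normal"
    else:
        return "light"  # associated with mostly low water-holding capacity

samples = [48.2, 53.1, 57.9, 50.0, 56.0]
classes = [classify_by_lightness(v) for v in samples]
# Note the boundary values 50.0 and 56.0 fall inside the inclusive
# normal range (50 <= L* <= 56).
```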
22
A new mobile phone-based tool for assessing energy and certain food intakes in young children: a validation study. JMIR Mhealth Uhealth 2015; 3:e38. [PMID: 25910494 PMCID: PMC4425820 DOI: 10.2196/mhealth.3670] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2014] [Revised: 11/08/2014] [Accepted: 02/25/2015] [Indexed: 11/17/2022] Open
Abstract
Background Childhood obesity is an increasing health problem globally, and obesity may be established already at preschool age. Further research in this area requires accurate and easy-to-use methods for assessing the intake of energy and foods. Traditional methods have limited accuracy and place large demands on study participants and researchers. Mobile phones offer possibilities for methodological advancement in this area since they are readily available, enable instant digitalization of collected data, and contain a camera to photograph pre- and post-meal food items. We have recently developed a new mobile phone-based tool for assessing energy and food intake in children, called the Tool for Energy Balance in Children (TECH). Objective The main aims of our study were to (1) compare energy intake assessed by means of TECH with total energy expenditure (TEE) measured using a criterion method, the doubly labeled water (DLW) method, and (2) compare intakes of fruits and berries, vegetables, juice, and sweetened beverages assessed by means of TECH with intakes obtained using a Web-based food frequency questionnaire (KidMeal-Q) in 3-year-olds. Methods In this study, 30 Swedish 3-year-olds were included. Energy intake using TECH was compared to TEE measured using the DLW method. Intakes of vegetables, fruits and berries, juice, and sweetened beverages were assessed using TECH and compared to the corresponding intakes assessed using KidMeal-Q. The Wilcoxon matched pairs test, Spearman rank order correlations, and the Bland-Altman procedure were applied. Results The mean energy intake assessed by TECH was 5400 kJ/24h (SD 1500). This value was not significantly different (P=.23) from TEE (5070 kJ/24h, SD 600). However, the limits of agreement (2 standard deviations) in the Bland-Altman plot for energy intake estimated using TECH compared to TEE were wide (2990 kJ/24h), and TECH overestimated high and underestimated low energy intakes.
The Bland-Altman plots for foods showed similar patterns. The mean intakes of vegetables, fruits and berries, juice, and sweetened beverages estimated using TECH were not significantly different from the corresponding intakes estimated using KidMeal-Q. Moderate but statistically significant correlations (ρ=.42-.46, P=.01-.02) between TECH and KidMeal-Q were observed for intakes of vegetables, fruits and berries, and juice, but not for sweetened beverages. Conclusion We found that one day of recordings using TECH could not accurately estimate intakes of energy or certain foods in 3-year-old children.
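As an illustrative aside (not from the study): the Bland-Altman procedure used above computes the mean bias between two methods and 95% limits of agreement from the paired differences. A minimal sketch with invented paired data:

```python
# Hedged sketch of the Bland-Altman mean bias and 95% limits of agreement,
# the procedure used to compare TECH energy intake with DLW-measured TEE.
# The paired values below are hypothetical, in kJ/24h.
import statistics

def bland_altman(method_a, method_b):
    """Return (mean difference, lower LoA, upper LoA) for paired data."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

tech = [5400, 4800, 6100, 5200, 5900]  # hypothetical TECH estimates
dlw = [5070, 5000, 5500, 5300, 5600]   # hypothetical DLW TEE values
bias, lo, hi = bland_altman(tech, dlw)
```

Wide limits of agreement, as reported in the abstract, indicate poor individual-level agreement even when the mean bias is small.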
23
Abstract
We introduce the concept, benefits, and general architecture for acquiring, storing, and displaying digital photographs along with medical imaging examinations. We also discuss a specific implementation built around an Android-based system for simultaneously acquiring digital photographs along with portable radiographs. By an innovative application of radiofrequency identification technology to radiographic cassettes, the system is able to maintain a tight relationship between these photographs and the radiographs within the picture archiving and communications system (PACS) environment. We provide a cost analysis demonstrating the economic feasibility of this technology. Since our architecture naturally integrates with patient identification methods, we also address patient privacy issues.
24
Camera-enabled techniques for organic synthesis. Beilstein J Org Chem 2013; 9:1051-72. [PMID: 23766820 PMCID: PMC3678607 DOI: 10.3762/bjoc.9.118] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2013] [Accepted: 05/23/2013] [Indexed: 11/23/2022] Open
Abstract
A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and on labour-intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to the advanced synthesis laboratories of the future.
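As an illustrative aside (not an implementation from the review): one of the simplest camera-enabled checks is frame differencing, which flags a visual change (e.g. a colour-change endpoint) in a monitored vessel. A minimal sketch with frames represented as nested lists of grey values; the threshold is hypothetical:

```python
# Hedged sketch: mean absolute per-pixel difference between two frames, a
# basic building block of camera-based reaction monitoring. Data invented.

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two equal-sized frames."""
    total = count = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    return total / count

before = [[100, 102], [98, 101]]   # 2x2 grey-value frame before the event
after = [[140, 150], [135, 160]]   # frame after a visible change
changed = mean_abs_diff(before, after) > 10.0  # hypothetical threshold
```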
25
Sensor integration in a low cost land mobile mapping system. SENSORS 2012; 12:2935-53. [PMID: 22736985 PMCID: PMC3376609 DOI: 10.3390/s120302935] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/29/2012] [Revised: 02/16/2012] [Accepted: 02/23/2012] [Indexed: 11/28/2022]
Abstract
Mobile mapping is a multidisciplinary technique that requires dedicated equipment, calibration procedures that are as rigorous as possible, time synchronization of all acquired data, and software for data processing and the extraction of additional information. To decrease the cost and complexity of Mobile Mapping Systems (MMS), the use of less expensive sensors and the simplification of procedures for calibration and data acquisition are mandatory. This article addresses the use of MMS technology, focusing on the main aspects that must be handled to guarantee proper data acquisition and describing how those aspects were handled in a terrestrial MMS developed at the University of Porto. In this case the main aim was to implement a low-cost system while maintaining good quality standards for the acquired georeferenced information. The results discussed here show that this goal has been achieved.
26
Range camera self-calibration based on integrated bundle adjustment via joint setup with a 2D digital camera. SENSORS 2011; 11:8721-40. [PMID: 22164102 PMCID: PMC3231487 DOI: 10.3390/s110908721] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2011] [Revised: 08/12/2011] [Accepted: 08/15/2011] [Indexed: 11/16/2022]
Abstract
Time-of-flight cameras, based on Photonic Mixer Device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via a joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. The addition of a digital camera to the calibration process overcomes the limitations of the range camera's small field of view and low pixel resolution. The tests were performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of range error and 72% in the RMS of coordinate residuals, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvements in the RMS of range error and coordinate residuals, respectively, over those obtained by integrated calibration of the single PMD camera.
27
A Three-Channel Spectrometer for Wide-Field Imaging of Anisotropic Plasmonic Nanoparticles. THE JOURNAL OF PHYSICAL CHEMISTRY. C, NANOMATERIALS AND INTERFACES 2011; 115:15933-15937. [PMID: 21927639 PMCID: PMC3171732 DOI: 10.1021/jp206157v] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
A three-channel spectrometer (3CS) based on a commercial digital camera was developed to distinguish among tens of large (>100 nm) anisotropic plasmonic particles with various shapes, orientations, and compositions on a surface simultaneously. Using band pass filters and polarizers, the contrast of 3CS images could be enhanced to identify specific orientation and composition characteristics of gold and gold-silver nanopyramids, as well as the direction of the longest arm of gold nanostars.
28
Application of a hybrid 3D-2D laser scanning system to the characterization of slate slabs. SENSORS 2010; 10:5949-61. [PMID: 22219696 PMCID: PMC3247741 DOI: 10.3390/s100605949] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2010] [Revised: 04/21/2010] [Accepted: 05/11/2010] [Indexed: 11/16/2022]
Abstract
Dimensional control based on 3D laser scanning techniques is widely used in practice. We describe the application of a hybrid 3D-2D laser scanning system to the characterization of slate slabs with structural defects that are difficult for the human eye to characterize objectively. Our study is based on automating the process using a 3D laser scanner and a 2D camera. Our results demonstrate that the application of this hybrid system optimally characterizes slate slabs in terms of the defects described by the Spanish UNE-EN 12326-1 standard.
29
Geometric stability and lens decentering in compact digital cameras. SENSORS 2010; 10:1553-72. [PMID: 22294886 PMCID: PMC3264438 DOI: 10.3390/s100301553] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/25/2009] [Revised: 01/26/2010] [Accepted: 02/20/2010] [Indexed: 11/17/2022]
Abstract
A study of the geometric stability and decentering present in the sensor-lens systems of six identical compact digital cameras has been conducted. With regard to geometric stability, the variation of internal geometry parameters (principal distance, principal point position, and distortion parameters) was considered. With regard to lens decentering, the amount of radial and tangential displacement resulting from decentering distortion was related to the precision of the camera and to the offset of the principal point from the geometric center of the sensor. The study was conducted with data obtained from 372 calibration processes (62 per camera). The tests were performed for each camera in three situations: during continuous use of the cameras, after camera power off/on, and after full extension and retraction of the zoom lens. Additionally, 360 new calibrations were performed to study the variation of the internal geometry when the camera is rotated. The aim of this study was to relate the level of stability and decentering in a camera with the precision and quality that can be obtained. An additional goal was to provide practical recommendations on the photogrammetric use of such cameras.
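As an illustrative aside (not code from the study): the radial and tangential (decentering) displacements discussed above are commonly modeled with the Brown-Conrady distortion equations used throughout photogrammetric calibration. A minimal sketch; all coefficient values are hypothetical:

```python
# Hedged sketch of the Brown-Conrady lens distortion model, with radial
# coefficients k1, k2 and tangential (decentering) coefficients p1, p2.
# Points are in normalized image coordinates; values are illustrative.

def brown_conrady(x, y, k1, k2, p1, p2):
    """Apply radial and tangential distortion to a normalized point (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the point is unchanged; a nonzero k1 shifts it
# radially, the kind of displacement the study relates to camera precision.
undistorted = brown_conrady(0.1, 0.2, 0.0, 0.0, 0.0, 0.0)
```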
30
Diagnostic accuracy of digitized conventional radiographs by camera and scanner in detection of proximal caries. J Dent Res Dent Clin Dent Prospects 2009; 3:126-31. [PMID: 23230500 PMCID: PMC3463097 DOI: 10.5681/joddd.2009.031] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2009] [Accepted: 08/24/2009] [Indexed: 12/03/2022] Open
Abstract
Background and aims
Digital radiographs have some advantages over conventional ones. However, the use of digital receptors is not yet routine; therefore, there is a need to digitize conventional radiographs. The aim of the present study was to compare the diagnostic accuracy of conventional radiographs digitized by scanner and by camera in the detection of proximal caries.
Materials and methods
Three hundred and sixteen surfaces of 158 extracted posterior teeth were radiographed. The radiographs were digitized using a digital camera and a scanner. Five observers scored the images for the presence and depth of caries. Histopathologic sections were the gold standard. Kappa agreement coefficient was used for statistical analysis.
Results
Kappa agreement coefficients between the camera and the scanner, and between each of them and the gold standard, in detecting the depth of caries were 0.504, 0.557, and 0.454, respectively. For the detection of caries, the indexes were 0.571, 0.553, and 0.527, respectively.
Conclusion
The diagnostic accuracy of camera images in caries detection was higher than that of scanned images, but there was moderate consistency between them. The consistency in detecting the presence of caries was greater than that in detecting its depth. It seems that both digital cameras and scanners can be used interchangeably.
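As an illustrative aside (not from the study): the kappa agreement coefficient reported above corrects observed rater agreement for agreement expected by chance. A minimal sketch of unweighted Cohen's kappa with invented binary ratings:

```python
# Hedged sketch of Cohen's kappa for two raters on a binary
# "caries present / sound" call. The label sequences are hypothetical.

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (p_obs - p_exp) / (1.0 - p_exp)

camera = ["caries", "sound", "caries", "sound", "sound", "caries"]
scanner = ["caries", "sound", "sound", "sound", "sound", "caries"]
k = cohens_kappa(camera, scanner)
```

Values around 0.4-0.6, as in the abstract, are conventionally read as moderate agreement.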
31
Metric Potential of a 3D Measurement System Based on Digital Compact Cameras. SENSORS 2009; 9:4178-94. [PMID: 22408520 PMCID: PMC3291905 DOI: 10.3390/s90604178] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2009] [Revised: 05/17/2009] [Accepted: 05/25/2009] [Indexed: 11/17/2022]
Abstract
This paper presents an optical measuring system based on low-cost, high-resolution digital cameras. Once the cameras are synchronised, the portable and adjustable system can be used to observe living beings, bodies in motion, or deformations of very different sizes. Each of the cameras has been modelled individually and studied with regard to the photogrammetric potential of the system. We have investigated the photogrammetric precision obtained from the crossing of rays, the repeatability of results, and the accuracy of the coordinates obtained. Systematic and random errors are identified in assessing the validity of defining the system's precision from the crossing of rays or from marking residuals in images. The results have clearly demonstrated the capability of a low-cost multiple-camera system to measure with sub-millimetre precision.
32
How to optimize radiological images captured from digital cameras, using the Adobe Photoshop 6.0 program. J Digit Imaging 2003; 16:216-29. [PMID: 12964054 PMCID: PMC3046467 DOI: 10.1007/s10278-003-1651-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Over the past decade, the technology that permits images to be digitized and the reduction in the cost of digital equipment have allowed quick digital transfer of any conventional radiological film. Images can then be transferred to a personal computer, and several software programs are available to manipulate their digital appearance. In this article, the fundamentals of digital imaging are discussed, as well as the wide variety of optional adjustments that the Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA) program offers to present radiological images with satisfactory digital image quality.