26
Kim K, Jang KW, Bae SI, Kim HK, Cha Y, Ryu JK, Jo YJ, Jeong KH. Ultrathin arrayed camera for high-contrast near-infrared imaging. Opt Express 2021; 29:1333-1339. [PMID: 33726351] [DOI: 10.1364/oe.409472]
Abstract
We report an ultrathin arrayed camera (UAC) for high-contrast near-infrared (NIR) imaging using microlens arrays with a multilayered light absorber. The UAC consists of a multilayered composite light absorber, inverted microlenses, gap-alumina spacers, and a planar CMOS image sensor. The multilayered light absorber was fabricated through lift-off and repeated photolithography processes. The experimental results demonstrate that eliminating optical noise between microlenses with the light absorber increases the image contrast by 4.48 times and the MTF50 by 2.03 times. NIR imaging with the UAC successfully distinguishes the security strip of an authentic banknote and the blood vessels of a finger. The ultrathin camera offers a new route for diverse applications in biometric, surveillance, and biomedical imaging.
27
Eszes DJ, Szabó DJ, Russell G, Lengyel C, Várkonyi T, Paulik E, Nagymajtényi L, Facskó A, Petrovski G, Petrovski BÉ. Diabetic Retinopathy Screening in Patients with Diabetes Using a Handheld Fundus Camera: The Experience from the South-Eastern Region in Hungary. J Diabetes Res 2021; 2021:6646645. [PMID: 33628836] [PMCID: PMC7884113] [DOI: 10.1155/2021/6646645]
Abstract
PURPOSE Diabetic retinopathy (DR) is the leading cause of vision loss among active adults in industrialized countries. We aimed to investigate the prevalence of diabetes mellitus (DM) and of DR and its different grades among patients with DM in Csongrád County, in the South-Eastern region of Hungary. Furthermore, we aimed to identify risk factors for developing DR, the diabetology/ophthalmology screening patterns and frequencies, and the effect of socioeconomic status- (SES-) related factors on the health and behavior of DM patients. METHODS A cross-sectional study was conducted on adults (>18 years) involving handheld fundus camera screening (Smartscope Pro, Optomed, Finland) and image assessment using the Spectra DR software (Health Intelligence, England). Self-completed questionnaires on self-perceived health status (SPHS) and health behavior, as well as visual acuity, HbA1c level, type of DM, and attendance at healthcare services, were also recorded. RESULTS 787 participants with fundus camera images and complete self-administered questionnaires were included in the study; 46.2% of the images were unassessable. T1D and T2D were present in 13.5% and 86.5% of the participants, respectively. Among the T1D and T2D patients, 25.0% and 33.5% had DR, respectively. SES showed significant differences in DR proportions in the T1D group: lower education was associated with a lower DR rate compared to non-DR (7.7% vs. 40.5%), while bad/very bad perceived financial status was associated with a significantly higher DR proportion compared to non-DR (63.6% vs. 22.2%). Neither SPHS nor health behavior showed a significant relationship with the disease in either DM group. Mild nonproliferative retinopathy without maculopathy (R1M0) was detected in 6% and 23% of the T1D and T2D patients having DR, respectively; R1 with maculopathy (R1M1) was present in 82% and 66% of the T1D and T2D groups, respectively. Both moderate nonproliferative retinopathy with maculopathy (R2M1) and active proliferative retinopathy with maculopathy (R3M1) were detected in 6% and 7% of the T1D and T2D patients having DR, respectively. The level of HbA1c affected attendance at diabetology screening (HbA1c > 7% was associated with >50% of all quarter-yearly attendance in DM patients, and with 10% of diabetology screening nonattendance). CONCLUSION The prevalence of DM and DR in the studied population in Hungary followed the country trend, with a slightly higher rate of sight-threatening DR than the previously reported national average. SES appears to affect the DR rate, in particular for T1D. Although DR screening using handheld cameras is simple and flexible, considerable training and experience, as well as overcoming the issue of decreased optic clarity, are needed to achieve a proper level of image assessability, in particular for use in future telemedicine or artificial intelligence screening programs.
28
Koyama A, Hirata T, Kawahara Y, Iyooka H, Kubozono H, Onikura N, Itaya S, Minagawa T. Habitat suitability maps for juvenile tri-spine horseshoe crabs in Japanese intertidal zones: A model approach using unmanned aerial vehicles and the Structure from Motion technique. PLoS One 2020; 15:e0244494. [PMID: 33362230] [PMCID: PMC7757885] [DOI: 10.1371/journal.pone.0244494]
Abstract
The tri-spine horseshoe crab, Tachypleus tridentatus, is a threatened species that inhabits coastal areas from South to East Asia. A conservation management system is urgently required for managing its nursery habitats, i.e., intertidal flats, especially in Japan. Habitat suitability maps are useful in drafting conservation plans; however, they have rarely been prepared for juvenile T. tridentatus. In this study, we examined the possibility of constructing robust habitat suitability models (HSMs) for juveniles based on topographical data acquired using unmanned aerial vehicles and the Structure from Motion (UAV-SfM) technique. Distribution data for the juveniles in the Tsuyazaki and Imazu intertidal flats from 2017 to 2019 were collected and divided into a training dataset for HSM construction and three test datasets for model evaluation. High-accuracy digital surface models were built for each region using the UAV-SfM technique. Normalized elevation was obtained by converting the topographical models to account for the tidal range in each region, and slope was calculated from these models. Using the training data, HSMs of the juveniles were constructed with normalized elevation and slope as the predictor variables, and were then evaluated using the test data. The results showed that the HSMs exhibited acceptable discrimination performance for each region. Habitat suitability maps were built for the juveniles in each region, and the suitable areas were estimated to be approximately 6.1 ha of the total 19.5 ha in Tsuyazaki and 3.7 ha of the total 7.9 ha in Imazu. In conclusion, our findings support the usefulness of the UAV-SfM technique in constructing HSMs for juvenile T. tridentatus. Monitoring suitable habitat areas for the juveniles using the UAV-SfM technique is expected to reduce survey costs, as it can be conducted by fewer investigators over vast intertidal zones within a short period of time.
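The HSM construction described here, suitability as a function of normalized elevation and slope, can be sketched as a logistic model over grid cells; the coefficients, threshold, and cell size below are illustrative placeholders, not the paper's fitted values:

```python
import math

def suitability(elev_norm, slope_deg, b0=-1.0, b1=4.0, b2=-0.3):
    """Logistic habitat suitability from normalized elevation and slope.
    Coefficients are illustrative, not the study's fitted values."""
    z = b0 + b1 * elev_norm + b2 * slope_deg
    return 1.0 / (1.0 + math.exp(-z))

def suitable_area_ha(cells, cell_size_m=1.0, threshold=0.5):
    """Total area (hectares) of grid cells whose predicted suitability
    meets a presence threshold; cells are (elev_norm, slope_deg) pairs."""
    n = sum(1 for e, s in cells if suitability(e, s) >= threshold)
    return n * cell_size_m ** 2 / 10_000.0
```

Summing thresholded cells in this way is how a continuous suitability surface is turned into the "suitable area" figures an HSM map reports.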
29
Koh W, Khoo D, Pan LTT, Lean LL, Loh MH, Chua TYV, Ti LK. Use of GoPro point-of-view camera in intubation simulation-A randomized controlled trial. PLoS One 2020; 15:e0243217. [PMID: 33259536] [PMCID: PMC7707475] [DOI: 10.1371/journal.pone.0243217]
Abstract
Introduction Teaching endotracheal intubation is uniquely challenging due to its technical, high-stakes, and highly time-sensitive nature. The GoPro is a small, lightweight, high-resolution action camera with a wide-angle field of view that, when worn with a head mount, can encompass both the airway and the procedurist's hands and positioning technique. We aimed to evaluate its effectiveness in improving intubation teaching for novice learners in a simulated setting, via a two-arm, parallel-group, randomized controlled superiority trial with a 1:1 allocation ratio. Methods We recruited Year 4 medical students at the start of their compulsory 2-week Anesthesia posting. Participants underwent a standardized intubation curriculum and a formative assessment, and were then randomized to receive GoPro-led or non-GoPro-led feedback. Three months later, participants were re-assessed in a summative assessment by blinded assessors. Participants were also surveyed on their learning experience for qualitative thematic analysis. The primary outcomes were successful intubation and successful first-pass intubation. Results Seventy-one participants were recruited with no dropouts, and all were included in the analysis. 36 participants received GoPro-led feedback, and 35 received non-GoPro-led feedback. All participants successfully intubated the manikin. No statistically significant difference was found between the GoPro and non-GoPro groups at summative assessment (85.3% vs 90.0%, p = 0.572). Almost all participants surveyed found the GoPro effective for their learning (98.5%). Common themes in the qualitative analysis were improved assessment, greater identification of small details that would otherwise be missed, and the usefulness of the unique point-of-view footage in improving understanding. Conclusions The GoPro is a promising tool for simulation-based intubation teaching, though its implementation requires careful consideration to maximize the learning yield from GoPro-led feedback and training.
30
Liang J, Wang P, Zhu L, Wang LV. Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution. Nat Commun 2020; 11:5252. [PMID: 33067438] [PMCID: PMC7567836] [DOI: 10.1038/s41467-020-19065-5]
Abstract
Simultaneous and efficient ultrafast recording of multiple photon tags contributes to high-dimensional optical imaging and characterization in numerous fields. Existing high-dimensional optical imaging techniques that record space and polarization cannot detect the photon's time of arrival owing to the limited speeds of the state-of-the-art electronic sensors. Here, we overcome this long-standing limitation by implementing stereo-polarimetric compressed ultrafast photography (SP-CUP) to record light-speed high-dimensional events in a single exposure. Synergizing compressed sensing and streak imaging with stereoscopy and polarimetry, SP-CUP enables video-recording of five photon tags (x, y, z: space; t: time of arrival; and ψ: angle of linear polarization) at 100 billion frames per second with a picosecond temporal resolution. We applied SP-CUP to the spatiotemporal characterization of linear polarization dynamics in early-stage plasma emission from laser-induced breakdown. This system also allowed three-dimensional ultrafast imaging of the linear polarization properties of a single ultrashort laser pulse propagating in a scattering medium.
31
Lin TC, Chiang YH, Hsu CL, Liao LS, Chen YY, Chen SJ. Image quality and diagnostic accuracy of a handheld nonmydriatic fundus camera: Feasibility of a telemedical approach in screening retinal diseases. J Chin Med Assoc 2020; 83:962-966. [PMID: 32649414] [PMCID: PMC7526587] [DOI: 10.1097/jcma.0000000000000382]
Abstract
BACKGROUND A suitable fundus camera for telemedicine screening can expand the scale of eye care services. The purpose of this study was to compare a handheld nonmydriatic digital fundus camera and a conventional mydriatic fundus camera in terms of the image quality of their photographs and the usability of those photographs for accurately diagnosing various retinal diseases. METHODS A handheld nonmydriatic fundus camera and a conventional fundus camera were used to take fundus photographs of outpatients at an ophthalmic clinic before and after pupillary dilation. Image quality and diagnostic agreement of the photos were graded by two masked, experienced retinal specialists. RESULTS A total of 867 photographs of 393 eyes of 200 patients were collected. Approximately 80% of photos taken without mydriasis using the handheld nonmydriatic fundus camera had good (55.7%) or excellent (22.7%) image quality. The overall agreement of diagnoses between the doctors was more than 90%. When the handheld nonmydriatic fundus camera was used after mydriasis, the proportion of images with good (45%) or excellent (49.7%) quality reached 94.7%, and diagnostic agreement was 93.4%. Lens opacity was associated with the quality of images obtained using the handheld camera (p = 0.041), and diagnostic disagreement for handheld camera images was associated with a preexisting diabetes diagnosis (p = 0.009). Approximately 40% of patients expressed a preference for the handheld nonmydriatic camera. CONCLUSION This study demonstrated the effectiveness of the handheld nonmydriatic fundus camera in clinical practice and its feasibility for telemedicine screening of retinal diseases.
32
Soranzo A, Bruno N. Nonverbal communication in selfies posted on Instagram: Another look at the effect of gender on vertical camera angle. PLoS One 2020; 15:e0238588. [PMID: 32915837] [PMCID: PMC7485807] [DOI: 10.1371/journal.pone.0238588]
Abstract
Background Selfies are a novel social phenomenon that is gradually beginning to receive attention within the cognitive sciences. Several studies have documented biases that may be related to nonverbal communicative intentions. For instance, in selfies posted on the dating platform Tinder, males but not females prefer camera views from below (Sedgewick, Flath & Elias, 2017). We re-examined this study to assess whether this bias is confined to dating selection contexts and to compare variability between individuals and between genders. Methods Three raters evaluated vertical camera position in 2000 selfies (1000 by males and 1000 by females) posted on Instagram. Results We found that choices of camera angle vary with the context in which the selfies were posted. On Tinder, females appear more likely to choose neutral, frontal presentations than they do on Instagram, whereas males on Tinder appear more likely to opt for camera angles from below than on Instagram. Conclusions This result confirms that the composition of selfies is constrained by factors affecting nonverbal communicative intentions.
33
Pan C, Tan W, Savini G, Hua Y, Ye X, Xu W, Yu J, Wang Q, Huang J. A Comparative Study of Total Corneal Power Using a Ray Tracing Method Obtained from 3 Different Scheimpflug Camera Devices. Am J Ophthalmol 2020; 216:90-98. [PMID: 32277940] [DOI: 10.1016/j.ajo.2020.03.037]
Abstract
PURPOSE We sought to assess the agreement of ray-traced corneal power values from 3 Scheimpflug tomographers and to construct the corresponding arithmetic adjustment factors in comparison with an automated keratometer (IOLMaster) and a conventional Placido-based topographer (Allegro Topolyzer). DESIGN Prospective reliability analysis. METHODS A total of 74 eyes from 74 healthy subjects who underwent corneal power measurements using the Pentacam, Sirius, Galilei, IOLMaster, and Allegro Topolyzer were included. Ray-traced corneal power values, namely total corneal refractive power (TCRP), mean pupil power (MPP), and total corneal power (TCP), as well as mean keratometry (Km) and simulated keratometry (SimK), were recorded and analyzed using one-way analysis of variance (ANOVA) and Bland-Altman plots. RESULTS Among the 3 ray-traced corneal power values, TCRP and MPP did not differ significantly (P = 0.81), whereas TCP was slightly but significantly larger (P < 0.001). Compared with Km or SimK, corneal power measurements obtained by the ray tracing method were significantly lower (P < 0.001). Bland-Altman plots showed that the 3 Scheimpflug tomographers had similar 95% limits of agreement after arithmetic adjustment compared with Km (-0.40 to 0.40 D, -0.39 to 0.39 D, and -0.35 to 0.34 D) or SimK (-0.50 to 0.51 D, -0.43 to 0.42 D, and -0.46 to 0.46 D). CONCLUSIONS Ray-traced corneal power values obtained using the 3 Scheimpflug tomographers with default diameter settings were similar, indicating that they could be used interchangeably in daily clinical practice. The 3 Scheimpflug tomographers showed satisfactory agreement, after arithmetic adjustment, with the conventional automated keratometer and Placido-based topographer.
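The 95% limits of agreement reported in such device comparisons follow the standard Bland-Altman computation: mean of the paired differences ± 1.96 times their standard deviation. A minimal sketch with made-up paired readings, not the study's data:

```python
import math

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement (mean ± 1.96·SD)
    between paired measurements (e.g. diopters) from two devices."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Illustrative corneal-power readings (D) from two hypothetical devices:
mean_diff, lo, hi = bland_altman_limits(
    [43.0, 43.5, 44.0, 43.2], [43.1, 43.4, 44.2, 43.0])
```

An "arithmetic adjustment factor" in this context is simply the mean difference subtracted from one device's readings so the two agree on average.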
34
Lee J, Ahn B. Real-Time Human Action Recognition with a Low-Cost RGB Camera and Mobile Robot Platform. Sensors (Basel) 2020; 20:E2886. [PMID: 32438776] [PMCID: PMC7287597] [DOI: 10.3390/s20102886]
Abstract
Human action recognition is an important research area in the field of computer vision that can be applied in surveillance, assisted living, and robotic systems interacting with people. Although various approaches have been widely explored, recent studies have mainly focused on deep-learning networks using the Kinect camera, which can easily generate skeleton-joint data from depth measurements, and have achieved satisfactory performance. However, these models are deep and complex in order to achieve a higher recognition score, and therefore cannot run on a mobile robot platform equipped with a Kinect camera. To overcome these limitations, we propose a method to classify human actions in real time using a single RGB camera, which can also be applied to the mobile robot platform. We integrated two open-source libraries, OpenPose and 3D-baseline, to extract skeleton joints from RGB images, and classified the actions using convolutional neural networks. Finally, we set up the mobile robot platform, including an NVIDIA Jetson Xavier embedded board and a tracking algorithm, to monitor a person continuously. We achieved an accuracy of 70% on the NTU-RGBD training dataset, and the whole process ran at an average of 15 frames per second (FPS) on the embedded board system.
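A common preprocessing step in skeleton-based classification pipelines like the one described is to make extracted joint coordinates invariant to the person's position and scale in the frame before feeding them to a classifier. The sketch below centers 2D joints on the hip and scales by torso length; the joint indexing is illustrative (not OpenPose's actual keypoint layout), and this is an assumed preprocessing step, not necessarily the authors' exact pipeline:

```python
def normalize_skeleton(joints, hip_idx=0, neck_idx=1):
    """Center 2D joints on the hip and scale by the hip-neck (torso)
    distance, so the classifier sees position- and scale-invariant input.
    `joints` is a list of (x, y) pixel coordinates; indices are illustrative."""
    hx, hy = joints[hip_idx]
    nx, ny = joints[neck_idx]
    torso = ((nx - hx) ** 2 + (ny - hy) ** 2) ** 0.5 or 1.0  # avoid /0
    return [((x - hx) / torso, (y - hy) / torso) for x, y in joints]
```

After normalization, the same pose at different distances from the camera maps to (nearly) the same feature vector, which is what lets a lightweight network generalize.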
35
Bae TW. Image-quality metric system for color filter array evaluation. PLoS One 2020; 15:e0232583. [PMID: 32392215] [PMCID: PMC7213733] [DOI: 10.1371/journal.pone.0232583]
Abstract
A modern color filter array (CFA) output is rendered into the final output image using a demosaicing algorithm. During this process, the rendered image is affected by optical and carrier crosstalk of the CFA pattern and by the demosaicing algorithm. Although many CFA patterns have been proposed, no image-quality (IQ) evaluation system capable of comprehensively evaluating the IQ of each CFA pattern has yet been developed, even though individual IQ evaluation items based on local characteristics or specific domains exist. Hence, we present an IQ metric system to evaluate the IQ performance of CFA patterns. The proposed CFA evaluation system includes newly proposed metrics, such as moiré robustness based on the experimentally determined moiré starting point (MSP) and the achromatic reproduction (AR) error, as well as existing metrics such as color accuracy using CIELAB, color reproduction error using spatial CIELAB, structural information using the structural similarity (SSIM) index, image contrast based on MTF50, structural and color distortion using the mean deviation similarity index (MDSI), and perceptual similarity using the Haar wavelet-based perceptual similarity index (HaarPSI). Through our experiments, we confirmed that the proposed CFA evaluation system can assess the IQ of existing CFAs. Moreover, the proposed system can be used to design or evaluate new CFAs by automatically checking their individual performance on the metrics used.
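Of the existing metrics listed, color accuracy in CIELAB reduces, in its simplest (CIE76) form, to a Euclidean distance between the reference and demosaiced colors in Lab space; a minimal sketch:

```python
def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two colors
    given as (L*, a*, b*) triples. A ΔE near 1 is roughly the threshold
    of a just-noticeable difference."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```

Averaging this per-pixel ΔE between a reference image and the demosaiced rendering of a candidate CFA gives a single color-accuracy score, which is the kind of component an IQ metric suite aggregates.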
36
Nguyen AT, Van Nguyen T, Timmins R, McGowan P, Van Hoang T, Le MD. Efficacy of camera traps in detecting primates in Hue Saola Nature Reserve. Primates 2020; 61:697-705. [PMID: 32383126] [DOI: 10.1007/s10329-020-00823-4]
Abstract
Camera trapping has been demonstrated to be an effective tool in surveying a suite of species, especially elusive mammals in rough terrain. The method has become increasingly common in primate surveys of both ground-dwelling and arboreal taxa in many tropical regions of the world. However, camera trapping has rarely been used to inventory primates in Vietnam, although many species are under severe threat and in critical need of surveying for improved conservation measures. In this study, we employed camera trapping primarily to investigate the possible continued presence of galliform species, but also to opportunistically record primate species, in Hue Saola Nature Reserve in central Vietnam. We documented five primate species: the northern pig-tailed macaque Macaca leonina, the stump-tailed macaque Macaca arctoides, the rhesus macaque Macaca mulatta, the pygmy slow loris Nycticebus pygmaeus, and the red-shanked douc Pygathrix nemaeus, which represent the majority of the primate diversity in the reserve. The results show that camera trapping may be an option for documenting primate diversity and the seasonal and daily activities of ground-dwelling taxa. Our data also suggest that although human disturbance is still rampant in the area, Hue Saola Nature Reserve appears to be reasonably well protected compared with other conservation areas in Indochina. In particular, it is home to several highly threatened primates and therefore plays a crucial role in primate conservation in Vietnam. However, these populations need greater protection, such as more targeted patrols to remove snares and prevent other violations.
37
Li H, Zhu M, Graham DJ, Zhang Y. Are multiple speed cameras more effective than a single one? Causal analysis of the safety impacts of multiple speed cameras. Accid Anal Prev 2020; 139:105488. [PMID: 32126326] [DOI: 10.1016/j.aap.2020.105488]
Abstract
Most previous studies investigate the safety effects of a single speed camera, ignoring the potential impacts of adjacent speed cameras. The mutual influence between two or more adjacent speed cameras is a relevant attribute worth taking into account when evaluating the safety impacts of speed cameras. This paper investigates the safety effects of multiple speed cameras, defined as two or more speed cameras observed within a specific radius. A total of 464 speed cameras at treated sites and 3119 control sites are observed and related to road traffic accident data from 1999 to 2007. The effects of multiple speed cameras are evaluated using pairwise comparisons between treatment units with different doses, based on propensity score methods. The spatial effect of multiple speed cameras is investigated by testing various radii. There are two major findings. First, sites with multiple speed cameras reduce the absolute number of road accidents more than those with a single camera. Second, speed camera sites are found to be most effective within a radius of 200 m. For radii of 200 m and 300 m, the reductions in personal injury collisions from multiple speed cameras are 21.4% and 13.2% greater than from a single camera. Our results also suggest that multiple speed cameras are effective within a small radius (200 m and 300 m).
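The propensity-score comparison of treated and control sites can be illustrated with 1:1 nearest-neighbour matching on an estimated propensity score; the one-covariate logistic model and the toy site data below are purely illustrative, not the paper's specification:

```python
import math

def propensity(x, w0=-1.0, w1=2.0):
    """Illustrative logistic propensity model: probability of a site
    being treated given one covariate x (coefficients are made up)."""
    return 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))

def att_nearest_neighbour(treated, controls):
    """Average treatment effect on the treated (ATT) via 1:1
    nearest-neighbour matching on the propensity score.
    Each unit is a (covariate, outcome) pair, e.g. accident counts."""
    effects = []
    for x_t, y_t in treated:
        p_t = propensity(x_t)
        # match each treated site to the control closest in propensity
        x_c, y_c = min(controls, key=lambda u: abs(propensity(u[0]) - p_t))
        effects.append(y_t - y_c)
    return sum(effects) / len(effects)
```

A negative ATT on accident counts would indicate that treated (camera) sites have fewer accidents than comparable untreated sites; the dose comparison in the paper extends the same idea to single- versus multiple-camera treatment levels.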
38
Singh A, Cheyne K, Wilson G, Sime MJ, Hong SC. On the use of a new monocular-indirect ophthalmoscope for retinal photography in a primary care setting. N Z Med J 2020; 133:31-38. [PMID: 32242176]
Abstract
AIM There is consensus among general practitioners regarding the difficulty of direct ophthalmoscopy. Hence, there is increasing interest in smartphone-based ophthalmoscopes; the New Zealand-made oDocs Nun ophthalmoscope is one such device, released in November 2018. This study aims to subjectively assess the quality of the images captured with it in order to determine the feasibility of its use in a primary care setting. METHOD Twenty-eight general practitioners (GPs) from different practices throughout New Zealand agreed to participate in this prospective observational study and were sent an oDocs Nun ophthalmoscope. Using the device, clinicians took retinal photographs of patients who presented with visual complaints and uploaded one image per eye onto a database. Three hundred and fifty-seven photographs were collated and rated by four professionals (two ophthalmologists and two optometrists) on the basis of image quality and the anatomical features visible. RESULTS On a Likert scale from 1 (poor quality) to 4 (very good quality), the median and mode values for each professional's rating of all photographs were both 2. On average, 94.5% of the photographs were deemed to have visible optic discs and 50.0% to have visible maculae adequate for detecting an abnormality. Pairwise comparison showed 93.7% agreement among the four professionals for optic disc visibility, and 74.2% agreement for macula visibility. CONCLUSION The oDocs Nun is a promising tool which GPs could use to circumvent the challenges associated with direct ophthalmoscopy. With appropriate training to ensure proficiency, it may have a valuable role in telemedicine and tele-referral.
39
Xu Z, Sun L, Wang X, Lei P, He J, Zhou Y. Stereo camera trap for wildlife in situ observations and measurements. Appl Opt 2020; 59:3262-3269. [PMID: 32400611] [DOI: 10.1364/ao.389835]
Abstract
This paper proposes a stereo camera trap that expands the field of view (FOV) of a traditional camera trap and measures wildlife sizes with centimeter-scale accuracy within a detection distance of 10 m. In the method, the FOVs of the two cameras partly overlap, with a 30-cm-long baseline and a posture angle of 100°. Ordinarily only targets lying entirely within the shared FOV can be measured; targets that appear only partially in the shared FOV are difficult to measure, so a part-matching algorithm is provided to handle them. In the proposed camera trap, the central processing unit is realized with a microcontroller, an ARM (advanced RISC machine) processor, and a field-programmable gate array (FPGA); motion sensors trigger the cameras to capture stereo images when animals pass by. In addition, the camera trap switches between daytime and nighttime modes via a photosensitive sensor that perceives ambient light. Finally, the stereo camera trap data are transmitted by a long-term-evolution (LTE) module at scheduled times. Experimental results show that the proposed stereo camera trap can broaden the FOV of a monocular camera by up to 77% at 5 m and estimate feature sizes of targets with centimeter-scale accuracy.
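The size measurement in a stereo trap rests on standard rectified-stereo triangulation, where depth is Z = f·B/disparity and lateral coordinates scale as Z/f. The sketch below uses the paper's 30 cm baseline but an assumed focal length, and ignores the 100° posture angle (a real implementation would rectify the image pair first):

```python
def stereo_point(f_px, baseline_m, xl, xr, y):
    """Triangulate a 3D point (meters) from a rectified stereo pair.
    Depth Z = f·B / disparity; X and Y scale by Z/f."""
    disparity = xl - xr                 # pixel offset between the views
    z = f_px * baseline_m / disparity
    return (xl * z / f_px, y * z / f_px, z)

def feature_size_m(f_px, baseline_m, left_a, right_a, left_b, right_b):
    """Metric distance between two matched features (e.g. the ends of
    an animal's body) given their (x, y) pixel coordinates in both views."""
    ax, ay, az = stereo_point(f_px, baseline_m, left_a[0], right_a[0], left_a[1])
    bx, by, bz = stereo_point(f_px, baseline_m, left_b[0], right_b[0], left_b[1])
    return ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
```

With a 30 cm baseline and a focal length on the order of 1000 px, a one-pixel disparity error at 5 m corresponds to centimeter-scale depth error, which is consistent with the accuracy the abstract claims.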
40
Miles HC, Gunn MD, Coates AJ, Potel M. Seeing Through the "Science Eyes" of the ExoMars Rover. IEEE Comput Graph Appl 2020; 40:71-81. [PMID: 32149612] [DOI: 10.1109/mcg.2020.2970796]
Abstract
The ExoMars rover, due to launch in mid-2020, will travel to Mars in search of signs of past or present habitability. The rover will carry the Panoramic Camera (PanCam), a scientific camera system designed to provide crucial remote-sensing capabilities as mission scientists search for targets of interest. In preparation for mission operations, the visual output of PanCam has been simulated and modeled with a three-dimensional rendering system, allowing the team to investigate the capabilities of the camera system and providing insight into how it may be calibrated and used for engineering tasks during the surface mission.
41
Krtalić A, Bajić M, Ivelja T, Racetin I. The AIDSS Module for Data Acquisition in Crisis Situations and Environmental Protection. Sensors (Basel) 2020; 20:1267. [PMID: 32110938] [PMCID: PMC7085737] [DOI: 10.3390/s20051267]
Abstract
The Toolbox implementation for removal of antipersonnel mines, submunitions and unexploded ordnance (TIRAMISU) Advanced Intelligence Decision Support System is an operational system proposed to Mine Action Centres worldwide for conducting non-technical surveys in humanitarian demining. The system consists of three modules, one of which is the module for data acquisition introduced and described in this study. The module has been designed, produced, improved, used and operationally tested and validated on several platforms (helicopters, remotely piloted aircraft systems (RPAS) and a blimp), with various sensors and acquisition units (Global Positioning System (GPS) and inertial measurement unit) in a variety of combinations for additional data acquisition from deep inside a suspected hazardous area. For the purposes of aerial data acquisition over a suspected hazardous area, the use of multiple sensors such as visible digital cameras and multi-spectral visible, near infrared (VNIR), hyperspectral VNIR and thermal infrared sensors are of benefit, because they display the scene in different ways. Off-the-shelf equipment and software were mostly used, but some specific equipment, such as sensor pods, was developed and also some software solutions for data acquisition and pre-processing (transforming hyperspectral line scanner data into hyperspectral images, and producing hyperspectral cubes). The technical stability and robustness of the module were confirmed by operationally testing and evaluating the systems on the aforementioned platforms and missions in several actual suspected hazardous areas in Croatia and Bosnia and Herzegovina, between 2001 and 2015.
|
42
|
Kritikos J, Zoitaki C, Tzannetos G, Mehmeti A, Douloudi M, Nikolaou G, Alevizopoulos G, Koutsouris D. Comparison between Full Body Motion Recognition Camera Interaction and Hand Controllers Interaction used in Virtual Reality Exposure Therapy for Acrophobia. SENSORS 2020; 20:s20051244. [PMID: 32106452 PMCID: PMC7085665 DOI: 10.3390/s20051244]
Abstract
Virtual Reality has already been proven a useful supplementary treatment tool for anxiety disorders. However, little technological attention has so far been given to how to apply Virtual Reality in a way that properly presents the phobic stimulus and provides the means for a lifelike experience. Thanks to technological advancements, a variety of hardware is now available that can help Virtual Reality systems generate stronger emotions. This study aims to evaluate the feeling of presence under different hardware setups for Virtual Reality Exposure Therapy and, in particular, how the user's interaction with those setups affects their sense of presence during the virtual simulation. An acrophobic virtual scenario was used as a case study by 20 phobic individuals, and the Witmer-Singer presence questionnaire was used by the users of the system for presence evaluation. Statistical analysis of their answers revealed that the proposed full-body Motion Recognition Camera system generates a stronger feeling of presence than the Hand Controllers system. This is because the Motion Recognition Cameras track, and allow display of, the user's entire body within the virtual environment, enabling users to interact with and confront the anxiety-provoking stimulus as in the real world. Further studies are recommended in which the proposed system is used in Virtual Reality Exposure Therapy trials with acrophobic patients, and with other anxiety disorders as well, since it can provide natural interaction in various simulated environments.
|
43
|
Gai W, Qi M, Ma M, Wang L, Yang C, Liu J, Bian Y, de Melo G, Liu S, Meng X. Employing Shadows for Multi-Person Tracking Based on a Single RGB-D Camera. SENSORS 2020; 20:s20041056. [PMID: 32075274 PMCID: PMC7070640 DOI: 10.3390/s20041056]
Abstract
Although there are many algorithms to track people who are walking, existing methods mostly fail to cope with occluded bodies in the setting of multi-person tracking with one camera. In this paper, we propose a method that uses people's shadows as a cue to track them, instead of treating shadows as mere noise. We introduce a novel method to track multiple people by fusing shadow data from the RGB image with skeleton data, both of which are captured by a single RGB-Depth (RGB-D) camera. Skeletal tracking provides the positions of people while they can be captured directly, and their shadows are used to track them when they are no longer visible. Our experiments confirm that this method can efficiently handle full occlusions. It thus has substantial value in resolving the occlusion problem in multi-person tracking, even with other kinds of cameras.
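The core fusion idea in this abstract, falling back to a shadow-derived position when the skeleton is occluded, can be sketched as follows (the `fuse_position` helper and its inputs are hypothetical simplifications; the paper's actual pipeline first detects shadows in the RGB image):

```python
import numpy as np

def fuse_position(skeleton_xy, shadow_xy):
    """Return the tracked position for one person.

    Prefer the skeleton cue while the body is directly visible; fall back
    to the position inferred from the person's shadow during occlusion.
    Each argument is an (x, y) tuple, or None when that cue is unavailable.
    """
    if skeleton_xy is not None:
        return np.asarray(skeleton_xy, dtype=float)   # body visible
    if shadow_xy is not None:
        return np.asarray(shadow_xy, dtype=float)     # body occluded, shadow seen
    return None                                       # neither cue visible

# Visible person: skeleton wins. Fully occluded person: shadow takes over.
p_visible = fuse_position((1.0, 2.0), (1.1, 2.2))
p_occluded = fuse_position(None, (1.1, 2.2))
```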
|
44
|
Maudsley-Barton S, Hoon Yap M, Bukowski A, Mills R, McPhee J. A new process to measure postural sway using a Kinect depth camera during a Sensory Organisation Test. PLoS One 2020; 15:e0227485. [PMID: 32023256 PMCID: PMC7001893 DOI: 10.1371/journal.pone.0227485]
Abstract
Posturography provides quantitative, objective measurements of human balance and postural control for research and clinical use. However, it usually requires access to specialist equipment to measure ground reaction forces, which is not widely available in practice due to its size or cost. In this study, we propose an alternative approach to posturography. It uses the skeletal output of an inexpensive Kinect depth camera to localise the Centre of Mass (CoM) of an upright individual. We demonstrate a pipeline that measures postural sway directly from CoM trajectories, obtained by tracking the relative position of three key joints. In addition, we present the results of a pilot study that compares this method of measuring postural sway to the output of a NeuroCom SMART Balance Master. Fifteen healthy individuals (age: 42.3 ± 20.4 yrs; height: 172 ± 11 cm; weight: 75.1 ± 14.2 kg; male = 11) completed 25 Sensory Organisation Tests (SOT) on a NeuroCom SMART Balance Master. The sessions were simultaneously recorded using custom software developed for this study (CoM path recorder). Postural sway was calculated from the output of both methods and the level of agreement determined using Bland-Altman plots. Good agreement was found for eyes-open tasks with a firm support; the agreement decreased as the SOT tasks became more challenging. The reasons for this discrepancy may lie in the different approaches each method takes to calculate the CoM. The discrepancy warrants further study with a larger cohort, including fall-prone individuals, cross-referenced with a marker-based system. Nevertheless, this pilot study lays the foundation for the development of a portable device that could assess postural control more cost-effectively than existing equipment.
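The measurement described in this abstract, postural sway computed from a CoM trajectory derived from three tracked joints, can be sketched as follows (the joint choice, array shapes, and path-length sway metric are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def com_trajectory(joints):
    """Approximate the Centre of Mass as the mean of three key joints.

    joints: array of shape (frames, 3, 2) holding, per frame, the (x, y)
    positions of three tracked joints (which joints to use is an
    assumption here). Returns an array of shape (frames, 2).
    """
    return joints.mean(axis=1)

def sway_path_length(com):
    """Total path length of the CoM trajectory, a common sway metric."""
    steps = np.diff(com, axis=0)                      # frame-to-frame moves
    return float(np.sum(np.linalg.norm(steps, axis=1)))

# Toy recording: the CoM moves (0,0) -> (1,0) -> (1,1), so path length 2.
joints = np.array([[[0, 0]] * 3, [[1, 0]] * 3, [[1, 1]] * 3], dtype=float)
com = com_trajectory(joints)
path_len = sway_path_length(com)  # -> 2.0
```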
|
45
|
Conti TF, Ohlhausen M, Hom GL, Talcott KE, Golshani C, Choudhry N, Singh RP. Comparison of Widefield Imaging Between Confocal Laser Scanning Ophthalmoscopy and Broad Line Fundus Imaging in Routine Clinical Practice. Ophthalmic Surg Lasers Imaging Retina 2020; 51:89-94. [PMID: 32084281 DOI: 10.3928/23258160-20200129-03]
Abstract
BACKGROUND AND OBJECTIVE The purpose of this study was to evaluate the difference between widefield confocal scanning laser ophthalmoscopy (WSLO) and widefield broad line fundus (WBLF) imaging in their ability to view the peripheral retina in routine clinical practice. PATIENTS AND METHODS A retrospective chart review identified patients in routine clinical practice who were imaged with a WSLO image and with single and montaged WBLF images. The primary outcome was the number of ultra-widefield (UWF) quadrants captured according to the UWF consensus definitions. Secondary outcomes included the area within each quadrant and the differences in clinical grading between modalities. RESULTS More vortex ampullae were identified with WSLO than with either the single or the montaged WBLF image. WSLO captured 116 of the possible 260 vortex ampullae (45%), compared with the WBLF single image (8 of 260; 3%) and the WBLF montage (96 of 260; 37%). Only five eyes from WSLO, and none from the WBLF single image, met the UWF consensus definition in routine clinical practice. The average area per quadrant acquired by the WSLO image was greater than that of the single or montaged WBLF image (781.67 mm2, 433.82 mm2, and 686.03 mm2, respectively; P < .001). Clinical grading of images found substantial inter-rater agreement with both technologies (86% on WSLO; 88% on WBLF). CONCLUSIONS Both systems had a low rate of meeting UWF consensus definitions in routine clinical practice. A single WSLO image acquired a greater area than the WBLF image in both single-image and montage formats. [Ophthalmic Surg Lasers Imaging Retina. 2020;51:89-94.].
|
46
|
Choi MH, Ju YG, Park JH. Holographic near-eye display with continuously expanded eyebox using two-dimensional replication and angular spectrum wrapping. OPTICS EXPRESS 2020; 28:533-547. [PMID: 32118979 DOI: 10.1364/oe.381277]
Abstract
Holographic near-eye displays present true three-dimensional images with full monocular depth cues. In this paper, we propose a technique to expand the eyebox of holographic near-eye displays. The base eyebox of a holographic near-eye display is determined by the space-bandwidth product of its spatial light modulator. The proposed technique replicates and stitches the base eyebox through the combined use of a holographic optical element and the high-order diffractions of the spatial light modulator, achieving a horizontally and vertically expanded eyebox. An angular spectrum wrapping technique is also applied to alleviate the image distortions observed at the boundaries between the replicated base eyeboxes.
|
47
|
Baek JJ, Kim SW, Kim YT. Camera-Integrable Wide-Bandwidth Antenna for Capsule Endoscope. SENSORS 2019; 20:s20010232. [PMID: 31906143 PMCID: PMC6982747 DOI: 10.3390/s20010232]
Abstract
This paper presents a new antenna design for a capsule endoscope. The proposed antenna comprises a camera hole and a meandered line. These features enable the antenna to be integrated on the same side as the camera within the capsule endoscope. Moreover, light-emitting diodes can be mounted on the surface of the antenna for illumination. The antenna achieves a wide bandwidth despite its small size, owing to its meandered-line structure.
|
48
|
Bauer JR, Thomas JB, Hardeberg JY, Verdaasdonk RM. An Evaluation Framework for Spectral Filter Array Cameras to Optimize Skin Diagnosis. SENSORS 2019; 19:E4805. [PMID: 31694239 PMCID: PMC6864639 DOI: 10.3390/s19214805]
Abstract
Comparing and selecting an adequate spectral filter array (SFA) camera is application-specific and usually requires extensive prior measurements. An evaluation framework for SFA cameras is proposed, and three cameras are tested in the context of skin analysis. The proposed framework does not require application-specific measurements; the spectral sensitivities, together with the number of bands, are its main focus. An optical model of skin is used to generate a specialized training set that improves spectral reconstruction. The quantitative comparison of the cameras is based on the reconstruction of measured skin spectra, colorimetric accuracy, and differences in oxygenation level estimation. Specific spectral sensitivity shapes influence the results directly, and a 9-channel camera performed best on the spectral reconstruction metrics. Sensitivities at key wavelengths influence the performance of oxygenation level estimation most strongly. The proposed framework allows spectral filter array cameras to be compared and can guide their application-specific development.
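The training-set-based spectral reconstruction evaluated in this abstract can be sketched as a linear estimator fitted by least squares to simulated camera responses (the sensitivities, array shapes, and use of plain least squares are assumptions for illustration, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 9-channel SFA camera sampling 31 wavelengths.
n_bands, n_channels, n_train = 31, 9, 200
S = rng.random((n_channels, n_bands))      # assumed spectral sensitivities
R_train = rng.random((n_train, n_bands))   # training reflectance spectra
C_train = R_train @ S.T                    # simulated camera responses

# Fit a linear reconstruction operator W so that R ≈ C @ W (least squares).
W, *_ = np.linalg.lstsq(C_train, R_train, rcond=None)

def reconstruct(camera_response):
    """Estimate a reflectance spectrum from a camera response vector."""
    return camera_response @ W

# Reconstruct an unseen spectrum and score it with a spectral RMSE metric.
r_true = rng.random(n_bands)
r_est = reconstruct(r_true @ S.T)
rmse = float(np.sqrt(np.mean((r_est - r_true) ** 2)))
```

With 9 channels and 31 bands the problem is underdetermined, which is exactly why a specialized (e.g. skin-model-based) training set improves the reconstruction.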
|
49
|
Gracia-Cazaña T, García-Malinis AJ, Gilaberte Y. i-Fluorescence: Fluorescence photography with a smartphone. J Am Acad Dermatol 2019; 84:e195-e196. [PMID: 31639415 DOI: 10.1016/j.jaad.2019.10.029]
|
50
|
Phillips M, Marsden H, Jaffe W, Matin RN, Wali GN, Greenhalgh J, McGrath E, James R, Ladoyanni E, Bewley A, Argenziano G, Palamaras I. Assessment of Accuracy of an Artificial Intelligence Algorithm to Detect Melanoma in Images of Skin Lesions. JAMA Netw Open 2019; 2:e1913436. [PMID: 31617929 PMCID: PMC6806667 DOI: 10.1001/jamanetworkopen.2019.13436]
Abstract
Importance A high proportion of suspicious pigmented skin lesions referred for investigation are benign. Techniques to improve the accuracy of melanoma diagnoses throughout the patient pathway are needed to reduce the pressure on secondary care and pathology services. Objective To determine the accuracy of an artificial intelligence algorithm in identifying melanoma in dermoscopic images of lesions taken with smartphone and digital single-lens reflex (DSLR) cameras. Design, Setting, and Participants This prospective, multicenter, single-arm, masked diagnostic trial took place in dermatology and plastic surgery clinics in 7 UK hospitals. Dermoscopic images of suspicious and control skin lesions from 514 patients with at least 1 suspicious pigmented skin lesion scheduled for biopsy were captured on 3 different cameras. Data were collected from January 2017 to July 2018. Clinicians and the Deep Ensemble for Recognition of Malignancy, a deterministic artificial intelligence algorithm trained to identify melanoma in dermoscopic images of pigmented skin lesions using deep learning techniques, assessed the likelihood of melanoma. Initial data analysis was conducted in September 2018; further analysis was conducted from February 2019 to August 2019. Interventions Clinician and algorithmic assessment of melanoma. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of the algorithmic and specialist assessment, determined using histopathology diagnosis as the criterion standard. Results The study population of 514 patients included 279 women (55.7%) and 484 white patients (96.8%), with a mean (SD) age of 52.1 (18.6) years. A total of 1550 images of skin lesions were included in the analysis (551 [35.6%] biopsied lesions; 999 [64.4%] control lesions); 286 images (18.6%) were used to train the algorithm, and a further 849 (54.8%) images were missing or unsuitable for analysis. Of the biopsied lesions that were assessed by the algorithm and specialists, 125 (22.7%) were diagnosed as melanoma. Of these, 77 (16.7%) were used for the primary analysis. The algorithm achieved an AUROC of 90.1% (95% CI, 86.3%-94.0%) for biopsied lesions and 95.8% (95% CI, 94.1%-97.6%) for all lesions using iPhone 6s images; an AUROC of 85.8% (95% CI, 81.0%-90.7%) for biopsied lesions and 93.8% (95% CI, 91.4%-96.2%) for all lesions using Galaxy S6 images; and an AUROC of 86.9% (95% CI, 80.8%-93.0%) for biopsied lesions and 91.8% (95% CI, 87.5%-96.1%) for all lesions using DSLR camera images. At 100% sensitivity, the algorithm achieved a specificity of 64.8% with iPhone 6s images. Specialists achieved an AUROC of 77.8% (95% CI, 72.5%-81.9%) and a specificity of 69.9%. Conclusions and Relevance In this study, the algorithm demonstrated an ability to identify melanoma from dermoscopic images of selected lesions with an accuracy similar to that of specialists.
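The metrics reported in this abstract, AUROC and specificity at 100% sensitivity, can be computed from raw classifier scores as follows (a generic sketch with toy data, not the study's analysis code):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def specificity_at_full_sensitivity(scores, labels):
    """Specificity when the threshold is lowered until every positive is
    detected (i.e. the operating point with 100% sensitivity)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    threshold = scores[labels == 1].min()   # lowest-scoring true positive
    neg = scores[labels == 0]
    return float((neg < threshold).mean())  # negatives correctly rejected

# Toy example: 3 melanomas (label 1) and 3 benign lesions (label 0).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
```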
|