1
Ryu J, Beck D, Park W. A systematic review of camera monitor system display layout designs: Integration of existing knowledge. Appl Ergon 2024; 118:104228. PMID: 38428169. DOI: 10.1016/j.apergo.2024.104228.
Abstract
Despite the growing interest in mirrorless vehicles equipped with a camera monitor system (CMS), the human factors research findings on CMS display layout design have not yet been synthesized, hindering both the application of existing knowledge and the identification of future research directions. To integrate the existing knowledge, this literature review addresses the following research questions: 1) what CMS display layout designs have been considered or developed by academic researchers and by automakers, respectively?; 2) among possible CMS display layout design alternatives, which ones have not yet been examined through human factors evaluation studies?; and 3) how do the existing human factors studies evaluating different CMS display layout designs vary in their research specifics? This review provides significant implications for the ergonomic design of CMS display layouts, including potential design opportunities and future research directions.
Affiliation(s)
- Jungmin Ryu
- Department of Industrial Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Donghyun Beck
- Department of Safety Engineering, Incheon National University, Incheon, 22012, South Korea
- Woojin Park
- Department of Industrial Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea; Institute for Industrial Systems Innovation, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
2
Sebbag L, Ofri R, Arad D, Handel KW, Pe'er O. Using a smartphone-based digital fundus camera for screening of retinal and optic nerve diseases in veterinary medicine: A preliminary investigation. Vet Rec 2024; 194:e4088. PMID: 38637964. DOI: 10.1002/vetr.4088.
Abstract
BACKGROUND Ophthalmoscopy is a valuable tool in clinical practice. We report the use of a novel smartphone-based handheld device for visualisation and photo-documentation of the ocular fundus in veterinary medicine. METHODS Selected veterinary patients of a referral ophthalmology service were included if one or both eyes had clear ocular media, allowing for examination of the fundus. Following pharmacological mydriasis, fundic images were obtained with a handheld fundus camera (Volk VistaView). For comparison, the fundus of a subset of animals was also imaged with a veterinary-specific fundus camera (Optomed Smartscope VET2). RESULTS The large field of view achieved by the Volk VistaView allowed for rapid and thorough observation of the ocular fundus in animals, providing a tool to visualise and record common pathologies of the posterior segment. Captured fundic images were sometimes overexposed, with the tapetal fundus artificially appearing hyperreflective when using the Volk VistaView camera, a finding that was less frequent when activating a 'veterinary mode' that reduced the sensitivity of the camera's sensor. The Volk VistaView compared well with the Optomed Smartscope VET2. LIMITATION The main study limitation was the small sample size. CONCLUSIONS The Volk VistaView camera was easy to use and provided good-quality fundic images in veterinary patients with healthy or diseased eyes, offering a wide field of view that was ideal for screening purposes.
Affiliation(s)
- Lionel Sebbag
- Koret School of Veterinary Medicine, Hebrew University of Jerusalem, Rehovot, Israel
- Ron Ofri
- Koret School of Veterinary Medicine, Hebrew University of Jerusalem, Rehovot, Israel
- Dikla Arad
- Koret School of Veterinary Medicine, Hebrew University of Jerusalem, Rehovot, Israel
- Karin W Handel
- Koret School of Veterinary Medicine, Hebrew University of Jerusalem, Rehovot, Israel
- Oren Pe'er
- Koret School of Veterinary Medicine, Hebrew University of Jerusalem, Rehovot, Israel
3
Tan G, Hiew J, Ferreira I, Shah P, McEvoy M, Manning L, Hamilton EJ, Ting M. Reliability of a Three-Dimensional Wound Camera for Measurement of Diabetes-Related Foot Ulcers in a Clinical Setting. J Diabetes Sci Technol 2024; 18:747-749. PMID: 38400719. DOI: 10.1177/19322968241233547.
Affiliation(s)
- Gabrielle Tan
- Department of Podiatry, Fiona Stanley Hospital, Murdoch, WA, Australia
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Jonathan Hiew
- Department of Podiatry, Fiona Stanley Hospital, Murdoch, WA, Australia
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Ivana Ferreira
- Department of Podiatry, Fiona Stanley Hospital, Murdoch, WA, Australia
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Priyal Shah
- Department of Podiatry, Fiona Stanley Hospital, Murdoch, WA, Australia
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Mahalia McEvoy
- Department of Podiatry, Fiona Stanley Hospital, Murdoch, WA, Australia
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Laurens Manning
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Department of Infectious Diseases, Fiona Stanley Hospital, Murdoch, WA, Australia
- Medical School, The University of Western Australia, Perth, WA, Australia
- Emma J Hamilton
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
- Medical School, The University of Western Australia, Perth, WA, Australia
- Department of Endocrinology and Diabetes, Fiona Stanley Hospital, Murdoch, WA, Australia
- Melissa Ting
- Department of Podiatry, Fiona Stanley Hospital, Murdoch, WA, Australia
- Multidisciplinary Diabetes Foot Unit, Fiona Stanley and Fremantle Hospitals, Murdoch, WA, Australia
4
Garrido-Jurado S, Garrido J, Jurado-Rodríguez D, Vázquez F, Muñoz-Salinas R. Reflection-Aware Generation and Identification of Square Marker Dictionaries. Sensors (Basel) 2022; 22:8548. PMID: 36366245. PMCID: PMC9655742. DOI: 10.3390/s22218548.
Abstract
Square markers are a widespread tool to find correspondences for camera localization because of their robustness, accuracy, and detection speed. Their identification is usually based on a binary encoding that accounts for the different rotations of the marker; however, most systems do not consider the possibility of observing reflected markers. This case is possible in environments containing mirrors or reflective surfaces, and its lack of consideration is a source of detection errors, which is contrary to the robustness expected from square markers. This is the first work in the literature that focuses on reflection-aware square marker dictionaries. We present the derivation of the inter-marker distance of a reflection-aware dictionary and propose new algorithms for generating and identifying such dictionaries. Additionally, part of the proposed method can be used to optimize preexisting dictionaries to take reflection into account. The experimentation carried out demonstrates how our proposal greatly outperforms the most popular predefined dictionaries in terms of inter-marker distance and how the optimization process significantly improves them.
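To make the reflection issue concrete: a marker dictionary guarantees a minimum Hamming distance between any marker and every rotation of every other marker, and a reflection-aware dictionary extends that guarantee to mirrored views. A minimal Python sketch of such a rotation- and reflection-aware distance (illustrative only, not the authors' generation algorithm):

```python
import numpy as np

def marker_distance(a, b):
    """Minimal Hamming distance between square binary marker `a` and the
    eight symmetries of marker `b` (four rotations, each optionally
    mirrored). A reflection-aware dictionary keeps this value large for
    every pair of distinct markers."""
    candidates = []
    for variant in (b, np.fliplr(b)):        # direct view and mirror view
        for k in range(4):                   # four 90-degree rotations
            candidates.append(np.rot90(variant, k))
    return min(int(np.sum(a != c)) for c in candidates)

# A marker seen in a mirror looks like its flipped copy, so a
# reflection-aware distance between the two is 0.
m = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 0]])
print(marker_distance(m, np.fliplr(m)))  # 0
```

A rotation-only identification step would treat the mirrored marker as a different, possibly colliding code; including the mirrored variants in the candidate set is what closes the detection-error loophole the paper targets.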
Affiliation(s)
- Sergio Garrido-Jurado
- Seabery R&D, Aldebarán Building, Córdoba Science and Technology Park, 14014 Córdoba, Spain
- Juan Garrido
- Department of Electrical Engineering and Automation, Rabanales Campus, University of Córdoba, 14071 Córdoba, Spain
- David Jurado-Rodríguez
- Seabery R&D, Aldebarán Building, Córdoba Science and Technology Park, 14014 Córdoba, Spain
- Department of Computer Science and Numerical Analysis, Rabanales Campus, University of Córdoba, 14071 Córdoba, Spain
- Francisco Vázquez
- Department of Electrical Engineering and Automation, Rabanales Campus, University of Córdoba, 14071 Córdoba, Spain
- Rafael Muñoz-Salinas
- Department of Computer Science and Numerical Analysis, Rabanales Campus, University of Córdoba, 14071 Córdoba, Spain
5
Chen Z, Xu Y, Tang X, Shao X, Sun W, He X. Dual stereo-digital image correlation system for simultaneous measurement of overlapped wings with a polarization RGB camera and fluorescent speckle patterns. Opt Express 2022; 30:3345-3357. PMID: 35209594. DOI: 10.1364/oe.446721.
Abstract
Simultaneous monitoring of overlapped multi-wing structures by stereo-digital image correlation (stereo-DIC) can quantify insect motion and deformation. We propose a dual stereo-DIC system based on multispectral imaging with a polarization RGB camera. Different fluorescent speckle patterns were fabricated on the wings; under ultraviolet light, they emit red and blue spectra that were imaged and separated using a polarization RGB camera and auxiliary optical splitting components. The resulting dual stereo-DIC system was validated through translation experiments with transparent sheets and reconstruction of overlapped insect wings (cicadas). Dynamic measurements of the Ruban artificial flier indicate the efficacy of this approach for determining real insect flight behavior.
6
Cervantes LJ, Tallo CA, Lopes CA, Hellier EA, Chu DS. A Novel Virtual Wet Lab-Using a Smartphone Camera Adapter and a Video Conference Platform to Provide Real-Time Surgical Instruction. Cornea 2021; 40:1639-1643. PMID: 34173369. DOI: 10.1097/ico.0000000000002763.
Abstract
PURPOSE Proctored surgical instruction has traditionally been taught through in-person interactions in either the operating room or an improvised wet lab. During the COVID-19 pandemic, live in-person instruction was not feasible owing to social distancing protocols, so a virtual wet lab (VWL) was proposed and implemented. The purpose of this article is to describe our experience with a VWL as a Descemet membrane endothelial keratoplasty (DMEK) skills-transfer course. This is the first time that a VWL environment has been described for the instruction of ophthalmic surgery. METHODS Thirteen participant surgeons took part in VWLs designed for DMEK skills transfer in September and October 2020. A smartphone camera adapter and a video conference software platform were the unique media for the VWL. After a didactic session, participants were divided into breakout rooms where their surgical scope view was broadcast live, allowing instructors to proctor their participants virtually in real time. Participants were surveyed to assess their satisfaction with the course. RESULTS All (100%) participants successfully injected and unfolded their DMEK grafts. Ten of the 13 participants completed the survey. Respondents rated the experience highly favorably. CONCLUSIONS With the use of readily available technology, VWLs can be successfully implemented in lieu of in-person skills-transfer courses. Further development catering to the needs of participants might allow VWLs to serve as a viable option for surgical education, which is currently limited by geographical and social distancing boundaries.
Affiliation(s)
- David S Chu
- Metropolitan Eye Research and Surgery Institute, Palisades Park, NJ
- Institute of Ophthalmology and Visual Sciences, Rutgers University, Newark, NJ
7
Naufal F, Brady CJ, Wolle MA, Saheb Kashaf M, Mkocha H, Bradley C, Kabona G, Ngondi J, Massof RW, West SK. Evaluation of photography using head-mounted display technology (ICAPS) for district Trachoma surveys. PLoS Negl Trop Dis 2021; 15:e0009928. PMID: 34748543. PMCID: PMC8601615. DOI: 10.1371/journal.pntd.0009928.
Abstract
Background As the prevalence of trachoma declines worldwide, it is becoming increasingly expensive and challenging to standardize graders in the field for surveys to document elimination. Photography of the tarsal conjunctiva and remote interpretation may help alleviate these challenges. The purpose of this study was to develop and field-test an Image Capture and Processing System (ICAPS) to acquire hands-free images of the tarsal conjunctiva for upload to a virtual reading center for remote grading. Methodology/Principal findings This observational study was conducted during a district-level prevalence survey for trachomatous inflammation—follicular (TF) in Chamwino, Tanzania. The ICAPS was developed using a Samsung Galaxy S8 smartphone, a Samsung Gear VR headset, a foot pedal trigger and customized software allowing for hands-free photography. After a one-day training course, three trachoma graders used the ICAPS to collect images from 1305 children ages 1–9 years, which were expert-graded remotely for comparison with field grades. In our experience, the ICAPS was successful at scanning and assigning barcodes to images, focusing on the everted eyelid with adequate examiner hand visualization, and capturing images with sufficient detail to grade TF. The percentage of children with TF by photos and by field grade was 5%. Agreement between grading of the images compared to the field grades at the child level was kappa = 0.53 (95%CI = 0.40–0.66). There were ungradable images for at least one eye in 199 children (9.1%), with more occurring in children ages 1–3 (18.5%) than older children ages 4–9 (4.2%) (χ2 = 145.3, p<0.001). Conclusions/Significance The prototype ICAPS device was robust, able to image 1305 children in a district-level survey and transmit images from rural Tanzania to an online grading platform. More work is needed to reduce the percentage of ungradable images and to better understand the causes of disagreement between field and photo grading.
Trachoma is the leading infectious cause of blindness worldwide, caused by the bacterium Chlamydia trachomatis. Programs targeting trachoma elimination in endemic regions largely rely on periodic prevalence surveys to monitor progress, but training field graders requires active cases, which is becoming challenging as prevalence declines. Photography of the tarsal conjunctiva with remote interpretation via telemedicine may serve as a more auditable, effective, and cost-efficient method for surveys. We developed and evaluated the Image Capture and Processing System (ICAPS), a smartphone-based, hands-free, head-mounted camera system (Samsung Galaxy S8 with custom app, Samsung Gear VR headset, and a Bluetooth-linked foot pedal trigger). The ICAPS was easy to use in challenging field conditions, was able to upload images from Tanzania and link images to field data. The percentage of TF was 5% by both field grade and photo grade, with agreement kappa = 0.53. Additional field training and enhanced certification of photographers may help reduce the proportion of ungradable images; further research on reasons for mismatch of grades between field and photo is needed.
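For reference, the child-level agreement statistic quoted above (kappa = 0.53) is Cohen's kappa: observed agreement between the two gradings corrected for the agreement expected by chance. A minimal sketch on made-up binary grades (illustrative only; the study's exact analysis is not reproduced here):

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two binary raters, e.g. field grade vs photo
    grade of TF (1 = TF present, 0 = absent)."""
    n = len(pairs)
    observed = sum(a == b for a, b in pairs) / n
    pos_a = sum(a for a, _ in pairs) / n     # rater A's positive rate
    pos_b = sum(b for _, b in pairs) / n     # rater B's positive rate
    chance = pos_a * pos_b + (1 - pos_a) * (1 - pos_b)
    return (observed - chance) / (1 - chance)

# Toy data: (field grade, photo grade) for eight children.
grades = [(1, 1), (1, 0), (0, 0), (0, 0), (0, 0), (1, 1), (0, 1), (0, 0)]
print(round(cohens_kappa(grades), 2))  # 0.47
```

Because TF prevalence was only 5%, raw percent agreement would look deceptively high; the chance correction is why kappa is the reported measure.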
Affiliation(s)
- Fahd Naufal
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, Maryland, United States of America
- Christopher J. Brady
- Larner College of Medicine, University of Vermont, Burlington, Vermont, United States of America
- Meraf A. Wolle
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, Maryland, United States of America
- Michael Saheb Kashaf
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, Maryland, United States of America
- Christopher Bradley
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, Maryland, United States of America
- George Kabona
- Ministry of Health–Community Development, Gender, Elderly and Children, Dodoma, Tanzania
- Robert W. Massof
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, Maryland, United States of America
- Sheila K. West
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, Maryland, United States of America
8
Krafft L, Gofas-Salas E, Lai-Tim Y, Paques M, Mugnier L, Thouvenin O, Mecê P, Meimon S. Partial-field illumination ophthalmoscope: improving the contrast of a camera-based retinal imager. Appl Opt 2021; 60:9951-9956. PMID: 34807185. DOI: 10.1364/ao.428048.
Abstract
Effective and accurate in vivo diagnosis of retinal pathologies requires high performance imaging devices, combining a large field of view and the ability to discriminate the ballistic signal from the diffuse background in order to provide a highly contrasted image of the retinal structures. Here, we have implemented the partial-field illumination ophthalmoscope, a patterned illumination modality, integrated to a high pixel rate adaptive optics full-field microscope. This non-invasive technique enables us to mitigate the low signal-to-noise ratio, intrinsic of full-field ophthalmoscopes, by partially illuminating the retina with complementary patterns to reconstruct a wide-field image. This new, to the best of our knowledge, modality provides an image contrast spanning from the full-field to the confocal contrast, depending on the pattern size. As a result, it offers various trade-offs in terms of contrast and acquisition speed, guiding the users towards the most efficient system for a particular clinical application.
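The pattern-summation idea can be sketched numerically: complementary illumination patterns partition the field, so the partial captures add back up to a wide-field image while each individual capture collects less diffuse background. A toy example with a checkerboard pair (not the authors' optical implementation; optics, noise, and the adaptive-optics hardware are omitted):

```python
import numpy as np

# Stand-in "retina" and a pair of complementary binary illumination
# patterns that cover the field exactly once between them.
scene = np.arange(16.0).reshape(4, 4)
pattern = (np.indices((4, 4)).sum(axis=0) % 2) == 0   # checkerboard mask
captures = [scene * pattern, scene * ~pattern]        # complementary pair

# Summing the partially illuminated captures restores the full field.
reconstruction = captures[0] + captures[1]
print(np.array_equal(reconstruction, scene))  # True
```

Smaller pattern tiles push the contrast toward the confocal end of the trade-off the authors describe, at the cost of more captures per reconstructed frame.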
9
Chen JS, Coyner AS, Ostmo S, Sonmez K, Bajimaya S, Pradhan E, Valikodath N, Cole ED, Al-Khaled T, Chan RVP, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras. Ophthalmol Retina 2021; 5:1027-1035. PMID: 33561545. PMCID: PMC8364291. DOI: 10.1016/j.oret.2020.12.013.
Abstract
PURPOSE Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 in ROP and to evaluate its generalizability across different populations and camera systems. DESIGN Diagnostic validation study of CNN for stage detection. PARTICIPANTS Retinal fundus images obtained from preterm infants during routine ROP screenings. METHODS Two datasets were used: 5943 fundus images obtained by RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained by 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled based on the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on datasets from North America alone, Nepal alone, and a combined dataset and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively. MAIN OUTCOME MEASURES Convolutional neural network performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity. RESULTS Both the North American- and Nepali-trained models demonstrated high performance on a test set from the same population (AUROC 0.99, AUPRC 0.98, sensitivity 94%; and AUROC 0.97, AUPRC 0.91, sensitivity 73%, respectively). However, the performance of each model decreased to AUROC 0.96 and AUPRC 0.88 (sensitivity 52%) and AUROC 0.62 and AUPRC 0.36 (sensitivity 44%) when evaluated on a test set from the other population. Compared with the models trained on individual datasets, the model trained on the combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set. CONCLUSIONS A CNN can accurately identify the presence of ROP stage in retinal images, but performance depends on the similarity between training and testing populations. We demonstrated that internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
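A note on the headline metric: AUROC equals the probability that a randomly chosen positive image receives a higher score than a randomly chosen negative one (the Mann-Whitney statistic), which is one reason it is reported alongside AUPRC when class balance differs between populations. A minimal sketch on toy scores (illustrative; not the study's evaluation code):

```python
def auroc(labels, scores):
    """Area under the ROC curve computed as the Mann-Whitney statistic:
    the fraction of positive/negative pairs ranked correctly, counting
    ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher score = model more confident that stage is present.
y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.3, 0.4, 0.6, 0.7]
print(round(auroc(y, s), 3))  # 0.889
```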
Affiliation(s)
- Jimmy S Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron S Coyner
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Kemal Sonmez
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon
- Eli Pradhan
- Tilganga Institute of Ophthalmology, Kathmandu, Nepal
- Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Emily D Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
10
Andleeb S, Abbasi WA, Ghulam Mustafa R, Islam GU, Naseer A, Shafique I, Parween A, Shaheen B, Shafiq M, Altaf M, Ali Abbas S. ESIDE: A computationally intelligent method to identify earthworm species (E. fetida) from digital images: Application in taxonomy. PLoS One 2021; 16:e0255674. PMID: 34529673. PMCID: PMC8445633. DOI: 10.1371/journal.pone.0255674.
Abstract
Earthworms (Crassiclitellata), as ecosystem engineers, significantly affect the physical, chemical, and biological properties of the soil by recycling organic material, increasing nutrient availability, and improving soil structure. The ecological efficiency of earthworms varies with species, so taxonomy plays a significant role in earthworm research. Earthworm taxonomy cannot reliably be established through morphological characteristics because the small and simple body plan of the earthworm lacks complex, highly specialized anatomical structures. Recently, molecular techniques have been adopted to classify earthworm species accurately, but these techniques are time-consuming and costly. To address this issue, we propose a machine learning-based earthworm species identification model that uses digital images of earthworms. We performed a stringent performance evaluation not only through 10-fold cross-validation and on an external validation dataset but also in real settings involving an experienced taxonomist. In all evaluation settings, our proposed model gave state-of-the-art performance, justifying its use to aid earthworm taxonomy studies. We made the model openly accessible through a cloud-based webserver, with Python code available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/ESIDE.
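The 10-fold cross-validation mentioned above partitions the image set into ten folds and, in turn, trains on nine while testing on the held-out tenth. A minimal index-splitting sketch (`k_fold_indices` is a hypothetical helper, not part of the authors' released code):

```python
def k_fold_indices(n, k=10):
    """Split sample indices 0..n-1 into k disjoint folds and return
    (train_indices, test_indices) pairs, one per cross-validation round."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(fold)), fold) for fold in folds]

# With 20 images and k=10, each round trains on 18 and tests on 2.
splits = k_fold_indices(20, k=10)
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 10 18 2
```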
Affiliation(s)
- Saiqa Andleeb
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Wajid Arshad Abbasi
- Computational Biology and Data Analysis Laboratory, Department of Computer Sciences & Information Technology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Rozina Ghulam Mustafa
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Ghafoor ul Islam
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Anum Naseer
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Irsa Shafique
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Asma Parween
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Bushra Shaheen
- Biotechnology Laboratory, Department of Zoology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
- Muhamad Shafiq
- Environmental Protection Agency (AJK-EPA), Government of Azad Jammu and Kashmir, Muzaffarabad, AJ&K, Pakistan
- Muhammad Altaf
- Department of Forestry Range and Wildlife Management, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Syed Ali Abbas
- Computational Biology and Data Analysis Laboratory, Department of Computer Sciences & Information Technology, King Abdullah Campus, University of Azad Jammu & Kashmir, Muzaffarabad, AJ&K, Pakistan
11
Zhou H, Li C, Sun G, Yin J, Ren F. Calibration and location analysis of a heterogeneous binocular stereo vision system. Appl Opt 2021; 60:7214-7222. PMID: 34613009. DOI: 10.1364/ao.428054.
Abstract
In the dairy farming industry, a patrol inspection robot can obtain temperature, color, and location information of dairy cows in order to monitor their health status and abnormal behaviors. We build and calibrate a heterogeneous binocular stereo vision (HBSV) system comprising a high-definition color camera and an infrared thermal camera and mount it on a patrol inspection robot. First, based on the traditional chessboard, an easy-to-make calibration board for the HBSV system is designed. Second, an accurate locating and sorting algorithm for the calibration points of the calibration board is designed. Then, the cameras are calibrated and the HBSV system is stereo-calibrated. Finally, target locating is achieved based on the above calibration results and YOLO target detection. Several experiments are carried out from many aspects. The average target-locating error of the HBSV system is 3.11%, which satisfies the needs of the dairy farming environment. The frame rate of video captured with the HBSV system is 7.3 FPS, 78% higher than that obtained with a separate binocular stereo vision system and infrared thermal camera. The results show that the HBSV system has practical application value.
12
Jaber A, Saghir M, Fernandez-Pellon R, Apaydin F. How Should the Cellphone Be Used to Obtain Good Pictures for Rhinoplasty? Plast Reconstr Surg 2021; 148:336e-338e. PMID: 34233343. DOI: 10.1097/prs.0000000000008178.
Affiliation(s)
- Ayman Jaber: Department of Otorhinolaryngology, Ege University Hospital, Izmir, Turkey
- Mishary Saghir: Department of Otorhinolaryngology, Ege University Hospital, Izmir, Turkey
- Rodrigo Fernandez-Pellon: Department of Otolaryngology-Head and Neck Surgery, Division of Facial Plastic Surgery, University of Toronto, Toronto, Ontario, Canada
- Fazil Apaydin: Department of Otorhinolaryngology, Ege University Hospital, Izmir, Turkey
13
Hewitt J, Furxhi O, Renshaw CK, Driggers R. Detection of Burmese pythons in the near-infrared versus visible band. Appl Opt 2021; 60:5066-5073. PMID: 34143081. DOI: 10.1364/ao.419320.
Abstract
Human task performance studies are commonly used for detecting and identifying potential military threats. In this work, these principles are applied to the detection of an environmental threat: the invasive Burmese python. A qualitative detection of Burmese pythons with a visible light camera and an 850 nm near-infrared (NIR) camera was performed against natural Florida backgrounds. The results showed that the difference in reflectivity between the pythons and native foliage was much greater in the NIR, effectively circumventing the python's natural camouflage in the visible band. A comparison of detection performance in the selected near-infrared band versus the visible band was then conducted. Images of foliage backgrounds with and without a python were taken in each band in daylight and at night with illumination. Intensities of these images were calibrated and prepared for a human perception test. Participants were tasked with detecting pythons, and the human perception data were used to compare performance between the bands. The results show that the enhanced contrast in the NIR enabled participants to detect pythons at 20% longer ranges than with visible imagery.
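The band-comparison logic above rests on a contrast measure between target and background reflectivity in each band. A sketch using the Michelson contrast follows; the reflectance values are purely illustrative stand-ins, not the paper's measurements.

```python
import numpy as np

def michelson_contrast(target, background):
    """Michelson contrast between mean target and background intensities."""
    t, b = float(np.mean(target)), float(np.mean(background))
    return abs(t - b) / (t + b)

# Hypothetical mean reflectances (fractions of incident light): pythons and
# foliage are nearly matched in the visible band (camouflage), while pythons
# reflect much more strongly than foliage in the NIR.
vis = michelson_contrast(target=[0.12, 0.13], background=[0.11, 0.12])
nir = michelson_contrast(target=[0.60, 0.62], background=[0.35, 0.33])
print(round(vis, 3), round(nir, 3))  # → 0.042 0.284
```

A larger contrast directly translates into detection at longer range, since the target signal falls toward the background level as angular size shrinks.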
14
Abstract
Spectral response (or sensitivity) functions of a three-color image sensor (or trichromatic camera) allow a mapping from spectral stimuli to RGB color values. Like biological photosensors, digital RGB spectral responses are device-dependent and vary significantly from model to model. Thus, information on the RGB spectral response functions of a specific device is vital in a variety of computer vision as well as mobile health (mHealth) applications. In principle, spectral response functions can be measured directly with sophisticated calibration equipment in a specialized laboratory setting, which is not easily accessible for most application developers. As a result, several mathematical methods have been proposed relying on standard color references. Typical optimization frameworks with constraints are often complicated, requiring a large number of colors. We report a compressive sensing framework in the frequency domain for accurately predicting RGB spectral response functions with only a few primary colors. Using a scientific camera, we first validate the estimation method against direct spectral sensitivity measurements and ensure that the root mean square errors between the ground truth and recovered RGB spectral response functions are negligible. We further recover the RGB spectral response functions of smartphones and validate them with an expanded color checker reference. We expect that this simple yet reliable estimation method of RGB spectral sensitivity can easily be applied for color calibration and standardization in machine vision, hyperspectral filters, and mHealth applications that capitalize on the built-in cameras of smartphones.
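The core idea, fitting a smooth sensitivity curve from a handful of color-patch readings by restricting it to a few low-frequency basis coefficients, can be sketched as follows. The patch spectra, basis size, and Gaussian ground truth below are hypothetical stand-ins for the paper's setup, not its actual data.

```python
import numpy as np

wl = np.linspace(400, 700, 61)               # wavelength grid, nm

def dct_basis(n, m):
    """First m columns of an orthogonal DCT-II basis over n samples."""
    k = np.arange(n)[:, None]
    j = np.arange(m)[None, :]
    B = np.cos(np.pi * (k + 0.5) * j / n)
    return B / np.linalg.norm(B, axis=0)

# Hypothetical ground truth for one channel: a smooth sensitivity bump.
s_true = np.exp(-0.5 * ((wl - 530) / 50.0) ** 2)

# Hypothetical known spectra of 12 color patches, and the camera's
# (noiseless) scalar responses to them.
rng = np.random.default_rng(0)
E = rng.uniform(0.0, 1.0, size=(12, wl.size))
y = E @ s_true

# Recovery: constrain the unknown curve to 6 low-frequency DCT coefficients,
# then fit those coefficients to the 12 patch readings by least squares.
B = dct_basis(wl.size, 6)
coef, *_ = np.linalg.lstsq(E @ B, y, rcond=None)
s_hat = B @ coef

rmse = np.sqrt(np.mean((s_hat - s_true) ** 2))
print(round(float(rmse), 4))
```

The frequency-domain constraint is what makes a few measurements sufficient: a physically smooth sensitivity curve has negligible energy above the first few basis frequencies, so the 61-point curve is effectively a 6-parameter unknown.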
Affiliation(s)
- Yuhyun Ji: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Yunsang Kwak: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Sang Mok Park: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Young L. Kim: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA; Purdue Quantum Science and Engineering Institute, West Lafayette, IN 47907, USA; Regenstrief Center for Healthcare Engineering, West Lafayette, IN 47907, USA; Purdue University Center for Cancer Research, West Lafayette, IN 47907, USA
15
Rahim A, Maqbool A, Rana T. Monitoring social distancing under various low light conditions with deep learning and a single motionless time of flight camera. PLoS One 2021; 16:e0247440. PMID: 33630951. PMCID: PMC7906321. DOI: 10.1371/journal.pone.0247440.
Abstract
The purpose of this work is to provide an effective social distance monitoring solution for low-light environments in a pandemic situation. The raging coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, has brought a global crisis with its deadly spread all over the world. In the absence of an effective treatment and vaccine, efforts to control the pandemic rely strictly on personal preventive actions, e.g., handwashing, face mask usage, environmental cleaning, and, most importantly, social distancing, which is the only expedient approach to cope with this situation. Low-light environments can contribute to the spread of disease because of nighttime gatherings, especially in summer, when temperatures peak. In cities where homes are congested and lack proper cross-ventilation, people often go outside with their families at night for fresh air. In such a situation, it is necessary to take effective measures to monitor the safety distance criteria, to avoid more positive cases and to control the death toll. In this paper, a deep learning-based solution is proposed for the above-stated problem. The proposed framework utilizes the you only look once v4 (YOLO v4) model for real-time object detection, and a social distance measuring approach is introduced with a single motionless time of flight (ToF) camera. A risk factor is indicated based on the calculated distance, and safety distance violations are highlighted. Experimental results show that the proposed model exhibits good performance, with a 97.84% mean average precision (mAP) score, and the observed mean absolute error (MAE) between actual and measured social distance values is 1.01 cm.
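The distance-measuring step, combining detector boxes with per-person depth from the ToF camera, can be sketched with a pinhole back-projection. The intrinsics and detections below are hypothetical, and the detector itself (YOLO v4 in the paper) is abstracted away as a list of boxes.

```python
import itertools
import numpy as np

def to_camera_xyz(box, depth_m, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project the centre of a detection box to camera coordinates
    using the pinhole model and the ToF depth at that pixel."""
    u = (box[0] + box[2]) / 2.0
    v = (box[1] + box[3]) / 2.0
    return np.array([(u - cx) * depth_m / fx, (v - cy) * depth_m / fy, depth_m])

def violations(detections, min_dist_m=2.0):
    """Index pairs of people closer than the safety distance.
    detections: list of ((x1, y1, x2, y2), depth_in_metres)."""
    pts = [to_camera_xyz(b, d) for b, d in detections]
    return [(i, j) for i, j in itertools.combinations(range(len(pts)), 2)
            if np.linalg.norm(pts[i] - pts[j]) < min_dist_m]

# Hypothetical detections with ToF depths: persons 0 and 1 stand close
# together, person 2 is much deeper in the scene.
people = [((100, 80, 180, 400), 3.0),
          ((220, 80, 300, 400), 3.1),
          ((400, 120, 450, 350), 8.0)]
print(violations(people))  # → [(0, 1)]
```

The depth channel is what makes this robust in low light: two people who overlap in the image but stand at very different depths are correctly measured as far apart.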
Affiliation(s)
- Adina Rahim: Department of Computer Software Engineering, NUST, Islamabad, Pakistan
- Ayesha Maqbool: Department of Computer Software Engineering, NUST, Islamabad, Pakistan
- Tauseef Rana: Department of Computer Software Engineering, NUST, Islamabad, Pakistan
16
Egloff-Juras C, Yakavets I, Scherrer V, Francois A, Bezdetnaya L, Lassalle HP, Dolivet G. Validation of a Three-Dimensional Head and Neck Spheroid Model to Evaluate Cameras for NIR Fluorescence-Guided Cancer Surgery. Int J Mol Sci 2021; 22:1966. PMID: 33671198. PMCID: PMC7922741. DOI: 10.3390/ijms22041966.
Abstract
Near-infrared (NIR) fluorescence-guided surgery is an innovative technique for the real-time visualization of resection margins. The aim of this study was to develop a head and neck multicellular tumor spheroid model and to explore the possibilities it offers for evaluating cameras in NIR fluorescence-guided surgery protocols. FaDu spheroids were incubated with indocyanine green (ICG) and then embedded in a tissue-like phantom. To assess the capability of the Fluobeam® NIR camera to detect ICG in tissues, FaDu spheroids exposed to ICG were embedded at 2, 5 or 8 mm of depth in the tissue-like phantom. The fluorescence signal differed significantly between depths of 2, 5 and 8 mm for spheroids treated with more than 5 µg/mL ICG (p < 0.05). The fluorescence intensity correlated positively with the size of the spheroids (p < 0.01), while the correlation with depth in the tissue-like phantom was strongly negative (p < 0.001). This multicellular spheroid model embedded in a tissue-like phantom appears to be a simple and reproducible in vitro tumor model, allowing a comparison of NIR cameras. The ideal configuration seems to be 450 μm FaDu spheroids incubated for 24 h with 0.05 mg/mL of ICG, ensuring the best stability, toxicity profile, incorporation and signal intensity.
Affiliation(s)
- Claire Egloff-Juras (corresponding author): Université de Lorraine, CNRS UMR 7039, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, F-54000 Nancy, France; Institut de Cancérologie de Lorraine, F-54000 Nancy, France; Faculté d’Odontologie de Lorraine, Université de Lorraine, 7 Avenue de la Forêt de Haye, Vandœuvre-lès-Nancy, 54500 Nancy, France
- Ilya Yakavets: Université de Lorraine, CNRS UMR 7039, CRAN, F-54000 Nancy, France; Institut de Cancérologie de Lorraine, F-54000 Nancy, France
- Victoria Scherrer: Institut de Cancérologie de Lorraine, F-54000 Nancy, France
- Aurélie Francois: Institut de Cancérologie de Lorraine, F-54000 Nancy, France
- Lina Bezdetnaya: Université de Lorraine, CNRS UMR 7039, CRAN, F-54000 Nancy, France; Institut de Cancérologie de Lorraine, F-54000 Nancy, France
- Henri-Pierre Lassalle: Université de Lorraine, CNRS UMR 7039, CRAN, F-54000 Nancy, France; Institut de Cancérologie de Lorraine, F-54000 Nancy, France
- Gilles Dolivet: Université de Lorraine, CNRS UMR 7039, CRAN, F-54000 Nancy, France; Institut de Cancérologie de Lorraine, F-54000 Nancy, France
17
Chen W, Chang J, Zhao X, Liu S. Optical design and fabrication of a smartphone fundus camera. Appl Opt 2021; 60:1420-1427. PMID: 33690586. DOI: 10.1364/ao.414325.
Abstract
Fundus examination plays an important part in medical settings. The fundus camera is one of the instruments used to obtain fundus images, which can reflect information about disease and other conditions. However, traditional fundus cameras have many disadvantages with regard to data sharing, image recognition and processing, and doctor-patient communication. In recent years, mobile medical systems have gradually become more prevalent in medical and health system environments. In this paper, we propose a design method for a smartphone fundus camera consisting of an illumination system and an imaging system. The end of the system can be combined with a smartphone to take fundus images directly. We manufactured a prototype, designed an artificial eye model, and carried out a series of experiments. The results show that clear fundus images can be obtained, and the imaging system can correct refractive errors ranging from −8 D to +8 D. The spatial resolution of the system is up to 15 µm. It is a portable device with an overall size of 160 mm × 160 mm × 80 mm and a weight of 540 g. It has the advantages of lower price, simple operation, high resolution, and compact size, making it suitable as a portable ocular monitoring device.
18
Kim K, Jang KW, Bae SI, Kim HK, Cha Y, Ryu JK, Jo YJ, Jeong KH. Ultrathin arrayed camera for high-contrast near-infrared imaging. Opt Express 2021; 29:1333-1339. PMID: 33726351. DOI: 10.1364/oe.409472.
Abstract
We report an ultrathin arrayed camera (UAC) for high-contrast near-infrared (NIR) imaging, using microlens arrays with a multilayered light absorber. The UAC consists of a multilayered composite light absorber, inverted microlenses, gap-alumina spacers, and a planar CMOS image sensor. The multilayered light absorber was fabricated through lift-off and repeated photolithography processes. The experimental results demonstrate that eliminating optical noise between microlenses with the light absorber increases the image contrast by 4.48 times and the MTF50 by 2.03 times. NIR imaging with the UAC successfully distinguishes the security strip of an authentic banknote and the blood vessels of a finger. The ultrathin camera offers a new route for diverse applications in biometric, surveillance, and biomedical imaging.
19
Eszes DJ, Szabó DJ, Russell G, Lengyel C, Várkonyi T, Paulik E, Nagymajtényi L, Facskó A, Petrovski G, Petrovski BÉ. Diabetic Retinopathy Screening in Patients with Diabetes Using a Handheld Fundus Camera: The Experience from the South-Eastern Region in Hungary. J Diabetes Res 2021; 2021:6646645. PMID: 33628836. PMCID: PMC7884113. DOI: 10.1155/2021/6646645.
Abstract
PURPOSE Diabetic retinopathy (DR) is the leading cause of vision loss among active adults in industrialized countries. We aimed to investigate the prevalence of diabetes mellitus (DM) and of DR and its different grades in patients with DM in Csongrád County, in the South-Eastern region of Hungary. Furthermore, we aimed to detect the risk factors for developing DR and the diabetology/ophthalmology screening patterns and frequencies, as well as the effect of socioeconomic status- (SES-) related factors on the health and behavior of DM patients. METHODS A cross-sectional study was conducted on adults (>18 years) involving handheld fundus camera screening (Smartscope Pro Optomed, Finland) and image assessment using the Spectra DR software (Health Intelligence, England). Self-completed questionnaires on self-perceived health status (SPHS) and health behavior, as well as visual acuity, HbA1c level, type of DM, and attendance at healthcare services, were also recorded. RESULTS 787 participants with fundus camera images and full self-administered questionnaires were included in the study; 46.2% of the images were unassessable. T1D and T2D were present in 13.5% and 86.5% of the participants, respectively. Among the T1D and T2D patients, 25.0% and 33.5% had DR, respectively. The SES showed significant proportion differences in the T1D group. Lower education was associated with a lower DR rate compared to non-DR (7.7% vs. 40.5%), while bad/very bad perceived financial status was associated with a significantly higher DR proportion compared to non-DR (63.6% vs. 22.2%). Neither the SPHS nor the health behavior showed a significant relationship with the disease for either DM group. Mild nonproliferative retinopathy without maculopathy (R1M0) was detected in 6% and 23% of the T1D and T2D patients having DR, respectively; R1 with maculopathy (R1M1) was present in 82% and 66% of the T1D and T2D groups, respectively. Both moderate nonproliferative retinopathy with maculopathy (R2M1) and active proliferative retinopathy with maculopathy (R3M1) were detected in 6% and 7% of the T1D and T2D patients having DR, respectively. The level of HbA1c affected attendance at diabetology screening (HbA1c > 7% was associated with >50% of all quarter-yearly attendance in DM patients, and with 10% of diabetology screening nonattendance). CONCLUSION The prevalence of DM and DR in the studied population in Hungary followed the country trend, with slightly more sight-threatening DR than the previously reported national average. SES appears to affect the DR rate, in particular for T1D. Although DR screening using handheld cameras seems simple and dynamic, considerable training and experience, as well as overcoming the issue of decreased optic clarity, are needed to achieve a proper level of image assessability, in particular for use in future telemedicine or artificial intelligence screening programs.
Affiliation(s)
- Dóra Júlia Eszes: Department of Public Health, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Dóra Júlia Szabó: Department of Ophthalmology, Szent-Györgyi Albert Clinical Center, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Greg Russell: Eyenuk Inc., Clinical Development, Woodland Hills, CA, USA
- Csaba Lengyel: Department of Medicine, Medical Faculty, Albert Szent-Györgyi Clinical Center, University of Szeged, Szeged, Hungary
- Tamás Várkonyi: Department of Medicine, Medical Faculty, Albert Szent-Györgyi Clinical Center, University of Szeged, Szeged, Hungary
- Edit Paulik: Department of Public Health, Faculty of Medicine, University of Szeged, Szeged, Hungary
- László Nagymajtényi: Department of Public Health, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Andrea Facskó: Department of Ophthalmology, Szent-Györgyi Albert Clinical Center, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Goran Petrovski: Center for Eye Research, Department of Ophthalmology, Oslo University Hospital and Institute for Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
- Beáta Éva Petrovski: Center for Eye Research, Department of Ophthalmology, Oslo University Hospital and Institute for Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway; The A. I. Evdokimov Moscow State University of Medicine and Dentistry of the Ministry of Healthcare of the Russian Federation, Moscow, Russia
20
Koyama A, Hirata T, Kawahara Y, Iyooka H, Kubozono H, Onikura N, Itaya S, Minagawa T. Habitat suitability maps for juvenile tri-spine horseshoe crabs in Japanese intertidal zones: A model approach using unmanned aerial vehicles and the Structure from Motion technique. PLoS One 2020; 15:e0244494. PMID: 33362230. PMCID: PMC7757885. DOI: 10.1371/journal.pone.0244494.
Abstract
The tri-spine horseshoe crab, Tachypleus tridentatus, is a threatened species that inhabits coastal areas from South to East Asia. A conservation management system is urgently required for managing its nursery habitats, i.e., intertidal flats, especially in Japan. Habitat suitability maps are useful in drafting conservation plans; however, they have rarely been prepared for juvenile T. tridentatus. In this study, we examined the possibility of constructing robust habitat suitability models (HSMs) for juveniles based on topographical data acquired using unmanned aerial vehicles and the Structure from Motion (UAV-SfM) technique. The distribution of the juveniles in the Tsuyazaki and Imazu intertidal flats from 2017 to 2019 was determined. The data were divided into a training dataset for HSM construction and three test datasets for model evaluation. High-accuracy digital surface models were built for each region using the UAV-SfM technique. Normalized elevation was assessed by converting the topographical models to account for the tidal range in each region, and the slope was calculated from these models. Using the training data, HSMs for the juveniles were constructed with normalized elevation and slope as predictor variables, then evaluated against the test data. The results showed that the HSMs exhibited acceptable discrimination performance for each region. Habitat suitability maps were built for the juveniles in each region, and the suitable areas were estimated at approximately 6.1 ha of the total 19.5 ha in Tsuyazaki, and 3.7 ha of the total 7.9 ha in Imazu. In conclusion, our findings support the usefulness of the UAV-SfM technique in constructing HSMs for juvenile T. tridentatus. Monitoring suitable habitat areas for the juveniles using the UAV-SfM technique is expected to reduce survey costs, as it can be conducted with fewer investigators over vast intertidal zones within a short period of time.
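An HSM of this kind is commonly a logistic model of presence probability on the topographic predictors. The following self-contained sketch fits such a model on synthetic data; the coefficients, sample sizes, and example cells are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Plain gradient-descent logistic regression with an intercept."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def suitability(w, X):
    """Predicted probability of juvenile presence for cells in X."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic training cells: presence is more likely at low normalized
# elevation and gentle slope (coefficients are illustrative only).
rng = np.random.default_rng(1)
n = 500
elev = rng.uniform(0, 1, n)       # normalized elevation
slope = rng.uniform(0, 1, n)      # slope, rescaled to [0, 1]
logit = 2.0 - 4.0 * elev - 4.0 * slope
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([elev, slope]), y)

# A flat low-elevation cell vs a steep high-elevation cell.
grid = np.array([[0.1, 0.1], [0.9, 0.75]])
print(np.round(suitability(w, grid), 2))
```

Applied per raster cell of the UAV-SfM surface model, the same prediction step yields the habitat suitability map directly.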
Affiliation(s)
- Akihiko Koyama (corresponding author): Faculty of Advanced Science and Technology, Kumamoto University, Kurokami, Chuo-ku, Kumamoto, Japan
- Taiga Hirata: Department of Civil and Environmental Engineering, Kumamoto University, Kurokami, Chuo-ku, Kumamoto, Japan
- Yuki Kawahara: Department of Civil and Environmental Engineering, Kumamoto University, Kurokami, Chuo-ku, Kumamoto, Japan
- Hiroki Iyooka: Department of Civil Engineering, Fukuoka University, Nanakuma, Jonan-ku, Fukuoka, Japan
- Haruka Kubozono: The 21st Century Program, Kyushu University, Motooka, Nishi-ku, Fukuoka, Japan
- Norio Onikura: Fishery Research Laboratory, Kyushu University, Tsuyazaki, Fukutsu, Japan
- Shinji Itaya: Tsuyazaki Seaside Nature School, Tsuyazaki, Fukutsu, Japan
- Tomoko Minagawa: Faculty of Advanced Science and Technology, Kumamoto University, Kurokami, Chuo-ku, Kumamoto, Japan
21
Koh W, Khoo D, Pan LTT, Lean LL, Loh MH, Chua TYV, Ti LK. Use of GoPro point-of-view camera in intubation simulation-A randomized controlled trial. PLoS One 2020; 15:e0243217. PMID: 33259536. PMCID: PMC7707475. DOI: 10.1371/journal.pone.0243217.
Abstract
Introduction Teaching endotracheal intubation is uniquely challenging due to its technical, high-stakes, and highly time-sensitive nature. The GoPro is a small, lightweight, high-resolution action camera with a wide-angle field of view that, worn with a head mount, can encompass both the airway and the procedurist’s hands and positioning technique. We aimed to evaluate its effectiveness in improving intubation teaching for novice learners in a simulated setting, via a two-arm, parallel-group, randomized controlled superiority trial with a 1:1 allocation ratio. Methods We recruited Year 4 medical students at the start of their compulsory 2-week Anesthesia posting. Participants underwent a standardized intubation curriculum and a formative assessment, and were then randomized to receive GoPro or non-GoPro led feedback. Three months later, participants were re-assessed in a summative assessment by blinded assessors. Participants were also surveyed on their learning experience for a qualitative thematic perspective. The primary outcomes were successful intubation and successful first-pass intubation. Results Seventy-one participants were recruited with no dropouts, and all were included in the analysis. 36 participants received GoPro led feedback, and 35 participants received non-GoPro led feedback. All participants successfully intubated the manikin. No statistically significant differences were found between the GoPro group and the non-GoPro group at summative assessment (85.3% vs 90.0%, p = 0.572). Almost all participants surveyed found the GoPro effective for their learning (98.5%). Common themes in the qualitative analysis were: the ability for an improved assessment, greater identification of small details that would otherwise be missed, and the usefulness of the unique point-of-view footage in improving understanding. Conclusions The GoPro is a promising tool for simulation-based intubation teaching. There are considerations in its implementation to maximize the learning experience and yield from GoPro led feedback and training.
Affiliation(s)
- Wenjun Koh: Department of Anaesthesia, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Deborah Khoo: Department of Anaesthesia, National University Health System, Singapore
- Ling Te Terry Pan: Department of Anaesthesia, National University Health System, Singapore
- Lyn Li Lean: Department of Anaesthesia, National University Health System, Singapore
- May-Han Loh: Department of Anaesthesia, National University Health System, Singapore
- Tze Yuh Vanessa Chua: Department of Anaesthesia, National University Health System, Singapore
- Lian Kah Ti (corresponding author): Department of Anaesthesia, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Anaesthesia, National University Health System, Singapore
22
Liang J, Wang P, Zhu L, Wang LV. Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution. Nat Commun 2020; 11:5252. PMID: 33067438. PMCID: PMC7567836. DOI: 10.1038/s41467-020-19065-5.
Abstract
Simultaneous and efficient ultrafast recording of multiple photon tags contributes to high-dimensional optical imaging and characterization in numerous fields. Existing high-dimensional optical imaging techniques that record space and polarization cannot detect the photon's time of arrival owing to the limited speeds of the state-of-the-art electronic sensors. Here, we overcome this long-standing limitation by implementing stereo-polarimetric compressed ultrafast photography (SP-CUP) to record light-speed high-dimensional events in a single exposure. Synergizing compressed sensing and streak imaging with stereoscopy and polarimetry, SP-CUP enables video-recording of five photon tags (x, y, z: space; t: time of arrival; and ψ: angle of linear polarization) at 100 billion frames per second with a picosecond temporal resolution. We applied SP-CUP to the spatiotemporal characterization of linear polarization dynamics in early-stage plasma emission from laser-induced breakdown. This system also allowed three-dimensional ultrafast imaging of the linear polarization properties of a single ultrashort laser pulse propagating in a scattering medium.
Affiliation(s)
- Jinyang Liang: Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA; Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, QC, J3X1S2, Canada
- Peng Wang: Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
- Liren Zhu: Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
- Lihong V Wang: Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
23
Lin TC, Chiang YH, Hsu CL, Liao LS, Chen YY, Chen SJ. Image quality and diagnostic accuracy of a handheld nonmydriatic fundus camera: Feasibility of a telemedical approach in screening retinal diseases. J Chin Med Assoc 2020; 83:962-966. PMID: 32649414. PMCID: PMC7526587. DOI: 10.1097/jcma.0000000000000382.
Abstract
BACKGROUND A suitable fundus camera for telemedicine screening can expand the scale of eye care service. The purpose of this study was to compare a handheld nonmydriatic digital fundus camera and a conventional mydriatic fundus camera according to the image quality of their photographs and usability of those photographs to accurately diagnose various retinal diseases. METHODS A handheld nonmydriatic fundus camera and conventional fundus camera were used to take fundus photographs of outpatients at an ophthalmic clinic before and after pupillary dilation. Image quality and diagnostic agreement of the photos were graded by two masked and experienced retinal specialists. RESULTS A total of 867 photographs of 393 eyes of 200 patients were collected. Approximately 80% of photos taken under nonmydriasis status using the handheld nonmydriatic fundus camera had good (55.7%) or excellent (22.7%) image quality. The overall agreement of diagnoses between the doctors was more than 90%. When the handheld nonmydriatic fundus camera was used after mydriasis, the proportion of images with good (45%) or excellent (49.7%) quality reached 94.7% and diagnostic agreement was 93.4%. Lens opacity was associated with the quality of images obtained using the handheld camera (p = 0.041), and diagnosis disagreement for handheld camera images was associated with preexisting diabetes diagnosis (p = 0.009). Approximately 40% of patients expressed preference for use of the handheld nonmydriatic camera. CONCLUSION This study demonstrated the effectiveness of the handheld nonmydriatic fundus camera in clinical practice and its feasibility for telemedicine screening of retinal diseases.
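Diagnostic agreement between two graders, as reported above, is typically summarized as percent agreement, optionally chance-corrected with Cohen's kappa. A sketch with hypothetical gradings (not the study's data) follows.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of items on which the two raters gave the same label."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    po = percent_agreement(r1, r2)                    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2  # by chance
    return (po - pe) / (1 - pe)

# Hypothetical diagnoses from two graders for ten fundus photographs.
g1 = ["normal", "DR", "DR", "normal", "AMD", "normal", "DR", "normal", "AMD", "DR"]
g2 = ["normal", "DR", "DR", "normal", "AMD", "DR", "DR", "normal", "AMD", "DR"]
print(percent_agreement(g1, g2))          # → 0.9
print(round(cohens_kappa(g1, g2), 3))     # → 0.844
```

Kappa is the more conservative figure because raw percent agreement is inflated whenever one diagnosis dominates the sample.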
Affiliation(s)
- Tai-Chi Lin: Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Institute of Clinical Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Yueh-Hua Chiang: Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Chih-Lu Hsu: Medimaging Integrated Solution Inc., Hsinchu, Taiwan, ROC
- Yi-Ying Chen: Medimaging Integrated Solution Inc., Hsinchu, Taiwan, ROC
- Shih-Jen Chen (corresponding author): Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; School of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Correspondence: Dr. Shih-Jen Chen, Department of Ophthalmology, Taipei Veterans General Hospital, 201, Section 2, Shi-Pai Road, Taipei 112, Taiwan, ROC
Collapse
24
Soranzo A, Bruno N. Nonverbal communication in selfies posted on Instagram: Another look at the effect of gender on vertical camera angle. PLoS One 2020; 15:e0238588. [PMID: 32915837 PMCID: PMC7485807 DOI: 10.1371/journal.pone.0238588] [Received: 03/30/2020] [Accepted: 08/19/2020]
Abstract
Background Selfies are a novel social phenomenon that is gradually beginning to receive attention within the cognitive sciences. Several studies have documented biases that may be related to nonverbal communicative intentions. For instance, in selfies posted on the dating platform Tinder, males but not females prefer camera views from below (Sedgewick, Flath & Elias, 2017). We re-examined this study to assess whether this bias is confined to dating selection contexts and to compare variability between individuals and between genders. Methods Three raters evaluated vertical camera position in 2000 selfies (1000 by males and 1000 by females) posted on Instagram. Results We found that the choices of camera angle do seem to vary depending on the context in which the selfies were uploaded. On Tinder, females appear more likely to choose neutral, frontal presentations than they do on Instagram, whereas males on Tinder appear more likely to opt for camera angles from below than on Instagram. Conclusions This result confirms that the composition of selfies is constrained by factors affecting nonverbal communicative intentions.
25
Pan C, Tan W, Savini G, Hua Y, Ye X, Xu W, Yu J, Wang Q, Huang J. A Comparative Study of Total Corneal Power Using a Ray Tracing Method Obtained from 3 Different Scheimpflug Camera Devices. Am J Ophthalmol 2020; 216:90-98. [PMID: 32277940 DOI: 10.1016/j.ajo.2020.03.037] [Received: 10/15/2019] [Revised: 03/22/2020] [Accepted: 03/25/2020]
Abstract
PURPOSE We sought to assess the agreement of ray-traced corneal power values from 3 Scheimpflug tomographers and to construct the corresponding arithmetic adjustment factors in comparison with an automated keratometer (IOLMaster) and a conventional Placido-based topographer (Allegro Topolyzer). DESIGN Prospective reliability analysis. METHODS A total of 74 eyes from 74 healthy subjects who underwent corneal power measurements using Pentacam, Sirius, Galilei, IOLMaster, and Allegro Topolyzer were included. Ray-traced corneal power values, namely total corneal refractive power (TCRP), mean pupil power (MPP), and total corneal power (TCP), along with mean keratometry (Km) and simulated keratometry (SimK), were recorded and analyzed using one-way analysis of variance (ANOVA) and Bland-Altman plots. RESULTS Among the 3 ray-traced corneal power values, TCRP and MPP did not differ significantly (P = 0.81), whereas TCP was slightly but significantly larger (P < 0.001). Compared with Km or SimK, corneal power measurements by the ray tracing method exhibited significantly lower values (P < 0.001). Bland-Altman plots disclosed that the 3 Scheimpflug tomographers showed similar 95% limits of agreement after arithmetic adjustment compared with Km (-0.40 to 0.40 D, -0.39 to 0.39 D, and -0.35 to 0.34 D) or SimK (-0.50 to 0.51 D, -0.43 to 0.42 D, and -0.46 to 0.46 D). CONCLUSIONS Ray-traced corneal power values obtained using the 3 Scheimpflug tomographers with default diameter settings were similar, indicating that they could be used interchangeably in daily clinical practice. The 3 Scheimpflug tomographers showed satisfactory agreement after arithmetic adjustment compared with the conventional automated keratometer and Placido-based topographer.
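As an editorial note, the 95% limits of agreement reported in this abstract follow the standard Bland-Altman construction: the mean of the paired differences (the bias) plus or minus 1.96 times their standard deviation. A minimal sketch, using hypothetical paired corneal-power readings in dioptres rather than the study's data:

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """Return (bias, lower, upper): the mean paired difference and
    the 95% limits of agreement (bias +/- 1.96 SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired keratometry readings (dioptres), illustration only
km = [43.1, 43.5, 42.8, 44.0, 43.3]
tcrp = [42.9, 43.2, 42.6, 43.7, 43.1]
bias, lower, upper = bland_altman_limits(km, tcrp)
```

Agreement is judged by whether the interval (lower, upper) is clinically acceptable, not by a p-value.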
Affiliation(s)
- Chao Pan: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China; Hankou Aier Eye Hospital, Jianghan District, Wuhan, China
- Weina Tan: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China; Hankou Aier Eye Hospital, Jianghan District, Wuhan, China
- Yanjun Hua: Department of Ophthalmology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Xiuhong Ye: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Wenjin Xu: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Jinjin Yu: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Qinmei Wang: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Jinhai Huang: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
26
Lee J, Ahn B. Real-Time Human Action Recognition with a Low-Cost RGB Camera and Mobile Robot Platform. Sensors (Basel) 2020; 20:E2886. [PMID: 32438776 PMCID: PMC7287597 DOI: 10.3390/s20102886] [Received: 03/31/2020] [Revised: 05/08/2020] [Accepted: 05/13/2020]
Abstract
Human action recognition is an important research area in the field of computer vision that can be applied in surveillance, assisted living, and robotic systems interacting with people. Although various approaches have been widely used, recent studies have mainly focused on deep-learning networks using the Kinect camera, which can easily generate skeleton-joint data from depth data, and have achieved satisfactory performances. However, these models are deep and complex in order to achieve higher recognition scores, and therefore cannot be applied to a mobile robot platform using a Kinect camera. To overcome these limitations, we suggest a method to classify human actions in real time using a single RGB camera, which can also be applied to the mobile robot platform. We integrated two open-source libraries, i.e., OpenPose and 3D-baseline, to extract skeleton joints from RGB images, and classified the actions using convolutional neural networks. Finally, we set up the mobile robot platform, including an NVIDIA Jetson Xavier embedded board and a tracking algorithm, to monitor a person continuously. We achieved an accuracy of 70% on the NTU-RGBD training dataset, and the whole process ran at an average of 15 frames per second (FPS) on the embedded board system.
Affiliation(s)
- Junwoo Lee: Robotics Group, Korea Institute of Industrial Technology, Ansan 15588, Korea
- Bummo Ahn: Robotics Group, Korea Institute of Industrial Technology, Ansan 15588, Korea; Robotics & Virtual Engineering, KITECH Campus, Ansan 15588, Korea
27
Bae TW. Image-quality metric system for color filter array evaluation. PLoS One 2020; 15:e0232583. [PMID: 32392215 PMCID: PMC7213733 DOI: 10.1371/journal.pone.0232583] [Received: 10/07/2019] [Accepted: 04/17/2020]
Abstract
A modern color filter array (CFA) output is rendered into the final output image using a demosaicing algorithm. During this process, the rendered image is affected by optical and carrier cross talk of the CFA pattern and by the demosaicing algorithm. Although many CFA patterns have been proposed thus far, an image-quality (IQ) evaluation system capable of comprehensively evaluating the IQ of each CFA pattern has yet to be developed, even though individual IQ evaluation items based on local characteristics or specific domains have been created. Hence, we present an IQ metric system to evaluate the IQ performance of CFA patterns. The proposed CFA evaluation system includes newly proposed metrics, such as moiré robustness using the experimentally determined moiré starting point (MSP) and the achromatic reproduction (AR) error, as well as existing metrics such as color accuracy using CIELAB, color reproduction error using spatial CIELAB, structural information using the structural similarity (SSIM) index, image contrast based on MTF50, structural and color distortion using the mean deviation similarity index (MDSI), and perceptual similarity using the Haar wavelet-based perceptual similarity index (HaarPSI). Through our experiment, we confirmed that the proposed CFA evaluation system can assess the IQ of existing CFAs. Moreover, the proposed system can be used to design or evaluate new CFAs by automatically checking individual performance on the metrics used.
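For context, the "color accuracy using CIELAB" item mentioned above is conventionally computed as the CIE76 colour difference ΔE, the Euclidean distance between two points in L*a*b* space. A minimal sketch with hypothetical patch values (the paper's full system combines many more metrics):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.dist(lab1, lab2)

# Hypothetical reference vs. demosaiced patch values (L*, a*, b*)
reference = (53.2, 80.1, 67.2)
rendered = (52.8, 79.0, 68.0)
err = delta_e_cie76(reference, rendered)
```

A ΔE around 1 to 2 is often taken as roughly a just-noticeable difference; per-patch errors are typically averaged over a colour chart.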
Affiliation(s)
- Tae Wuk Bae (corresponding author): Daegu-Gyeongbuk Research Center, Electronics and Telecommunications Research Institute, Daegu, South Korea
28
Nguyen AT, Van Nguyen T, Timmins R, McGowan P, Van Hoang T, Le MD. Efficacy of camera traps in detecting primates in Hue Saola Nature Reserve. Primates 2020; 61:697-705. [PMID: 32383126 DOI: 10.1007/s10329-020-00823-4] [Received: 04/09/2019] [Accepted: 04/22/2020]
Abstract
Camera trapping has been demonstrated to be an effective tool for surveying a suite of species, especially elusive mammals in rough terrain. The method has become increasingly common in primate surveys for both ground-dwelling and arboreal taxa in many tropical regions of the world. However, camera trapping has rarely been used to inventory primates in Vietnam, although many species are under severe threat and in critical need of surveying for improved conservation measures. In this study, we employed camera trapping primarily to investigate the possible continued presence of galliform species, but also to opportunistically record primate species, in Hue Saola Nature Reserve in central Vietnam. We documented five primate species, including the northern pig-tailed macaque Macaca leonina, the stump-tailed macaque Macaca arctoides, the rhesus macaque Macaca mulatta, the pygmy slow loris Nycticebus pygmaeus, and the red-shanked douc Pygathrix nemaeus, which represent the majority of the primate diversity in the reserve. The results show that camera trapping may be an option for documenting primate diversity, and the seasonal and daily activities of ground-dwelling taxa. Our data also suggest that although human disturbance is still rampant in the area, Hue Saola Nature Reserve appears to be reasonably well protected compared to other conservation areas in Indochina. In particular, it is home to several highly threatened primates, and it therefore plays a crucial role in primate conservation in Vietnam. However, these populations are in need of greater protection, such as more targeted patrols to remove snares and prevent other violations.
Affiliation(s)
- Anh Tuan Nguyen: Department of Environmental Ecology, Faculty of Environmental Science, University of Science, Vietnam National University, Hanoi, 334 Nguyen Trai, Thanh Xuan District, Hanoi, Vietnam
- Thanh Van Nguyen: Central Institute for Natural Resources and Environmental Studies, Vietnam National University, Hanoi, 19 Le Thanh Tong, Hanoi, Vietnam; Department of Ecological Dynamics, Leibniz Institute for Zoo and Wildlife Research, Alfred-Kowalke-Straße 17, 10315 Berlin, Germany
- Philip McGowan: School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
- Thang Van Hoang: Central Institute for Natural Resources and Environmental Studies, Vietnam National University, Hanoi, 19 Le Thanh Tong, Hanoi, Vietnam
- Minh Duc Le: Department of Environmental Ecology, Faculty of Environmental Science, University of Science, Vietnam National University, Hanoi, 334 Nguyen Trai, Thanh Xuan District, Hanoi, Vietnam; Central Institute for Natural Resources and Environmental Studies, Vietnam National University, Hanoi, 19 Le Thanh Tong, Hanoi, Vietnam
29
Li H, Zhu M, Graham DJ, Zhang Y. Are multiple speed cameras more effective than a single one? Causal analysis of the safety impacts of multiple speed cameras. Accid Anal Prev 2020; 139:105488. [PMID: 32126326 DOI: 10.1016/j.aap.2020.105488] [Received: 11/06/2019] [Revised: 02/20/2020] [Accepted: 02/24/2020]
Abstract
Most previous studies investigate the safety effects of a single speed camera, ignoring the potential impacts of adjacent speed cameras. The mutual influence between two or more adjacent speed cameras is a relevant attribute worth taking into account when evaluating the safety impacts of speed cameras. This paper investigates the safety effects of two or more speed cameras observed within a specific radius, which are defined as multiple speed cameras. A total of 464 speed cameras at treated sites and 3119 control sites are observed and related to road traffic accident data from 1999 to 2007. The effects of multiple speed cameras are evaluated using pairwise comparisons between treatment units with different doses, based on propensity score methods. The spatial effect of multiple speed cameras is investigated by testing various radii. There are two major findings in this study. First, sites with multiple speed cameras perform better in reducing the absolute number of road accidents than those with a single camera. Second, speed camera sites are found to be most effective with a radius of 200 m. For radii of 200 m and 300 m, the reductions in personal injury collisions from multiple speed cameras are 21.4% and 13.2% greater, respectively, than from a single camera. Our results also suggest that multiple speed cameras are effective within a small radius (200 m to 300 m).
Affiliation(s)
- Haojie Li: School of Transportation, Southeast University, China; Jiangsu Key Laboratory of Urban ITS, China; Jiangsu Province Collaborative Innovation Center of Modern Urban Traffic Technologies, China
- Manman Zhu: School of Transportation, Southeast University, China; Jiangsu Key Laboratory of Urban ITS, China; Jiangsu Province Collaborative Innovation Center of Modern Urban Traffic Technologies, China
- Yingheng Zhang: School of Transportation, Southeast University, China; Jiangsu Key Laboratory of Urban ITS, China; Jiangsu Province Collaborative Innovation Center of Modern Urban Traffic Technologies, China
30
Singh A, Cheyne K, Wilson G, Sime MJ, Hong SC. On the use of a new monocular-indirect ophthalmoscope for retinal photography in a primary care setting. N Z Med J 2020; 133:31-38. [PMID: 32242176]
Abstract
AIM There is consensus among general practitioners regarding the difficulty of direct ophthalmoscopy. Hence, there is increasing interest in smartphone-based ophthalmoscopes; the New Zealand-made oDocs Nun ophthalmoscope is one such device, released in November 2018. This study aims to subjectively assess the quality of the images captured with it in order to determine the feasibility of its use in a primary care setting. METHOD Twenty-eight general practitioners (GPs) from different practices throughout New Zealand agreed to participate in this prospective observational study and were sent an oDocs Nun ophthalmoscope. Using the device, clinicians took retinal photographs of patients who presented with visual complaints and uploaded one image per eye onto a database. Three hundred and fifty-seven photographs were collated and rated by four professionals (two ophthalmologists and two optometrists) on the basis of image quality and the anatomical features visible. RESULTS On a Likert scale from 1 (poor quality) to 4 (very good quality), the median and mode values for each professional's rating of all photographs were both 2. On average, 94.5% of the photographs were deemed to have visible optic discs and 50.0% to have visible maculae adequate for detecting an abnormality. Pairwise comparison showed 93.7% agreement among the four professionals for optic disc visibility, and 74.2% agreement for macula visibility. CONCLUSION The oDocs Nun is a promising tool which GPs could use to circumvent the challenges associated with direct ophthalmoscopy. With appropriate training to ensure proficiency, it may have a valuable role in telemedicine and tele-referral.
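As a methodological aside, the pairwise percent agreement reported above can be computed as the fraction of photographs on which each pair of raters gives the same judgement, averaged over all rater pairs. A minimal sketch with hypothetical binary visibility ratings, not the study's data:

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Mean fraction of items on which each pair of raters agrees.
    `ratings` is a list of equal-length rating sequences, one per rater."""
    n_items = len(ratings[0])
    per_pair = [
        sum(a == b for a, b in zip(r1, r2)) / n_items
        for r1, r2 in combinations(ratings, 2)
    ]
    return sum(per_pair) / len(per_pair)

# Hypothetical optic-disc visibility judgements (1 = visible) from three raters
raters = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 0],
]
agreement = pairwise_agreement(raters)
```

Percent agreement does not correct for chance; a kappa statistic is often reported alongside it.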
Affiliation(s)
- Aqeeda Singh: Student, Dunedin School of Medicine, University of Otago, Dunedin
- Kirsten Cheyne: Research Assistant, Ophthalmology Department, University of Otago, Dunedin
- Graham Wilson: Gisborne Hospital, Tairawhiti District Health Board, Gisborne
- Mary Jane Sime: Dunedin Hospital, Southern District Health Board, Dunedin
- Sheng Chiong Hong: Christchurch Hospital, Canterbury District Health Board, Christchurch
31
Xu Z, Sun L, Wang X, Lei P, He J, Zhou Y. Stereo camera trap for wildlife in situ observations and measurements. Appl Opt 2020; 59:3262-3269. [PMID: 32400611 DOI: 10.1364/ao.389835] [Received: 02/06/2020] [Accepted: 02/18/2020]
Abstract
This paper proposes a stereo camera trap to expand the field of view (FOV) of the traditional camera trap and to measure wildlife sizes with centimeter-scale accuracy within a detection distance of 10 m. In the method, the FOVs of the two cameras partly overlap, with a 30-cm-long baseline and a posture angle of 100°. Typically, only targets in the public (overlapping) FOV can be measured; targets that appear only partly in the public FOV are difficult to measure. To solve this problem, a part-matching algorithm is provided. In the proposed camera trap, the central processing unit is realized by a microcontroller unit, an advanced reduced-instruction-set-computing machine, and a field-programmable gate array, and motion sensors trigger the cameras to capture stereo images when animals pass by. In addition, the camera trap has a daytime mode and a nighttime mode, switched by a photosensitive sensor that perceives ambient light. Finally, the stereo camera trap data are transmitted by a long-term-evolution module at a scheduled time. Experimental results show that the proposed stereo camera trap can broaden the FOV of a monocular camera by up to 77% at 5 m and estimate feature sizes of targets with centimeter-scale accuracy.
32
Miles HC, Gunn MD, Coates AJ, Potel M. Seeing Through the "Science Eyes" of the ExoMars Rover. IEEE Comput Graph Appl 2020; 40:71-81. [PMID: 32149612 DOI: 10.1109/mcg.2020.2970796]
Abstract
The ExoMars rover, due to launch in mid 2020, will travel to Mars in search of signs of past or present habitability. The rover will carry the Panoramic Camera, PanCam, a scientific camera system designed to provide crucial remote sensing capabilities as mission scientists search for targets of interest. In preparation for the mission operations, the visual output of PanCam has been simulated and modeled with a three-dimensional rendering system, allowing the team to investigate the capabilities of the camera system and providing insight into how it may be calibrated and used for engineering tasks during the surface mission.
33
Krtalić A, Bajić M, Ivelja T, Racetin I. The AIDSS Module for Data Acquisition in Crisis Situations and Environmental Protection. Sensors (Basel) 2020; 20:s20051267. [PMID: 32110938 PMCID: PMC7085737 DOI: 10.3390/s20051267] [Received: 01/15/2020] [Revised: 02/17/2020] [Accepted: 02/24/2020]
Abstract
The Toolbox implementation for removal of antipersonnel mines, submunitions and unexploded ordnance (TIRAMISU) Advanced Intelligence Decision Support System is an operational system proposed to Mine Action Centres worldwide for conducting non-technical surveys in humanitarian demining. The system consists of three modules, one of which is the module for data acquisition introduced and described in this study. The module has been designed, produced, improved, used, and operationally tested and validated on several platforms (helicopters, remotely piloted aircraft systems (RPAS) and a blimp), with various sensors and acquisition units (Global Positioning System (GPS) and inertial measurement unit) in a variety of combinations for additional data acquisition from deep inside a suspected hazardous area. For the purposes of aerial data acquisition over a suspected hazardous area, the use of multiple sensors, such as visible digital cameras and multi-spectral visible and near-infrared (VNIR), hyperspectral VNIR, and thermal infrared sensors, is of benefit, because they display the scene in different ways. Off-the-shelf equipment and software were mostly used, but some specific equipment, such as sensor pods, was developed, along with some software solutions for data acquisition and pre-processing (transforming hyperspectral line scanner data into hyperspectral images, and producing hyperspectral cubes). The technical stability and robustness of the module were confirmed by operationally testing and evaluating the system on the aforementioned platforms in missions over several actual suspected hazardous areas in Croatia and Bosnia and Herzegovina between 2001 and 2015.
Affiliation(s)
- Andrija Krtalić (corresponding author; Tel.: +385-1-4639-168): Faculty of Geodesy, University of Zagreb, 10000 Zagreb, Croatia
- Milan Bajić: Scientific Council, HCR–Centre for Testing, Development and Training, 10000 Zagreb, Croatia
- Tamara Ivelja: Zagreb University of Applied Sciences, 10000 Zagreb, Croatia
- Ivan Racetin: Faculty of Civil Engineering, Architecture and Geodesy, University of Split, 21000 Split, Croatia
34
Kritikos J, Zoitaki C, Tzannetos G, Mehmeti A, Douloudi M, Nikolaou G, Alevizopoulos G, Koutsouris D. Comparison between Full Body Motion Recognition Camera Interaction and Hand Controllers Interaction used in Virtual Reality Exposure Therapy for Acrophobia. Sensors (Basel) 2020; 20:s20051244. [PMID: 32106452 PMCID: PMC7085665 DOI: 10.3390/s20051244] [Received: 12/16/2019] [Revised: 02/10/2020] [Accepted: 02/14/2020]
Abstract
Virtual Reality has already been proven to be a useful supplementary treatment tool for anxiety disorders. However, no specific technological attention has been given so far to how to apply Virtual Reality in a way that properly presents the phobic stimulus and provides the necessary means for a lifelike experience. Thanks to technological advancements, there is now a variety of hardware that can help evoke the stronger emotions generated by Virtual Reality systems. This study aims to evaluate the feeling of presence under different hardware setups for Virtual Reality Exposure Therapy and, particularly, how the user's interaction with those setups affects their sense of presence during the virtual simulation. An acrophobic virtual scenario was used as a case study by 20 phobic individuals, and the Witmer-Singer presence questionnaire was used for presence evaluation by the users of the system. Statistical analysis of their answers revealed that the proposed full-body Motion Recognition Camera system generates a better feeling of presence than the Hand Controller system. This is thanks to the Motion Recognition Cameras, which track and allow display of the user's entire body within the virtual environment. Thus, users are enabled to interact with and confront the anxiety-provoking stimulus as in the real world. Further studies are recommended, in which the proposed system could be used in Virtual Reality Exposure Therapy trials with acrophobic patients, and with other anxiety disorders as well, since the proposed system can provide natural interaction in various simulated environments.
Affiliation(s)
- Jacob Kritikos (corresponding author): School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
- Chara Zoitaki: School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
- Giannis Tzannetos: School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
- Anxhelino Mehmeti: Department of Informatics and Telecommunications, National Kapodistrian University of Athens, 15784 Athens, Greece
- Marilina Douloudi: Department of Biology, National Kapodistrian University of Athens, 15784 Athens, Greece
- George Nikolaou: Department of Informatics and Telecommunications, National Kapodistrian University of Athens, 15784 Athens, Greece
- Dimitris Koutsouris (corresponding author): School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
35
Gai W, Qi M, Ma M, Wang L, Yang C, Liu J, Bian Y, de Melo G, Liu S, Meng X. Employing Shadows for Multi-Person Tracking Based on a Single RGB-D Camera. Sensors (Basel) 2020; 20:s20041056. [PMID: 32075274 PMCID: PMC7070640 DOI: 10.3390/s20041056] [Received: 12/31/2019] [Revised: 02/09/2020] [Accepted: 02/13/2020]
Abstract
Although there are many algorithms to track people that are walking, existing methods mostly fail to cope with occluded bodies in the setting of multi-person tracking with one camera. In this paper, we propose a method to use people’s shadows as a clue to track them instead of treating shadows as mere noise. We introduce a novel method to track multiple people by fusing shadow data from the RGB image with skeleton data, both of which are captured by a single RGB Depth (RGB-D) camera. Skeletal tracking provides the positions of people that can be captured directly, while their shadows are used to track them when they are no longer visible. Our experiments confirm that this method can efficiently handle full occlusions. It thus has substantial value in resolving the occlusion problem in multi-person tracking, even with other kinds of cameras.
Affiliation(s)
- Wei Gai: School of Software, Shandong University, Jinan 250101, China
- Meng Qi (corresponding author; Tel.: +86-156-1011-2163): School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China
- Mingcong Ma: School of Software, Shandong University, Jinan 250101, China
- Lu Wang: School of Software, Shandong University, Jinan 250101, China
- Chenglei Yang: School of Software, Shandong University, Jinan 250101, China; Engineering Research Center of Digital Media Technology, MOE, Jinan 250101, China
- Juan Liu: School of Software, Shandong University, Jinan 250101, China
- Yulong Bian: School of Software, Shandong University, Jinan 250101, China
- Gerard de Melo: Department of Computer Science, 110 Frelinghuysen Road, Rutgers, The State University of New Jersey, Piscataway, NJ 08854-8019, USA
- Shijun Liu: School of Software, Shandong University, Jinan 250101, China
- Xiangxu Meng: School of Software, Shandong University, Jinan 250101, China; Engineering Research Center of Digital Media Technology, MOE, Jinan 250101, China
36
Maudsley-Barton S, Hoon Yap M, Bukowski A, Mills R, McPhee J. A new process to measure postural sway using a Kinect depth camera during a Sensory Organisation Test. PLoS One 2020; 15:e0227485. [PMID: 32023256 PMCID: PMC7001893 DOI: 10.1371/journal.pone.0227485] [Received: 04/05/2019] [Accepted: 12/19/2019]
Abstract
Posturography provides quantitative, objective measurements of human balance and postural control for research and clinical use. However, it usually requires access to specialist equipment to measure ground reaction forces, which is not widely available in practice due to its size or cost. In this study, we propose an alternative approach to posturography. It uses the skeletal output of an inexpensive Kinect depth camera to localise the Centre of Mass (CoM) of an upright individual. We demonstrate a pipeline which is able to measure postural sway directly from CoM trajectories, obtained by tracking the relative position of three key joints. In addition, we present the results of a pilot study that compares this method of measuring postural sway to the output of a NeuroCom SMART Balance Master. 15 healthy individuals (age: 42.3 ± 20.4 yrs, height: 172 ± 11 cm, weight: 75.1 ± 14.2 kg, male = 11) completed 25 Sensory Organisation Tests (SOTs) on a NeuroCom SMART Balance Master. Simultaneously, the sessions were recorded using custom software developed for this study (CoM path recorder). Postural sway was calculated from the output of both methods and the level of agreement determined using Bland-Altman plots. Good agreement was found for eyes-open tasks with a firm support; the agreement decreased as the SOT tasks became more challenging. The reasons for this discrepancy may lie in the different approaches that each method takes to calculate CoM. This discrepancy warrants further study with a larger cohort, including fall-prone individuals, cross-referenced with a marker-based system. However, this pilot study lays the foundation for the development of a portable device, which could be used to assess postural control more cost-effectively than existing equipment.
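As background, a common scalar summary of postural sway from a CoM trajectory is the sway path length: the summed frame-to-frame displacement of the horizontal CoM position. A minimal sketch with a hypothetical trace; the study's actual pipeline derives the CoM from three tracked Kinect joints:

```python
import math

def sway_path_length(com_xy):
    """Total distance travelled by the centre of mass over a trial:
    the sum of successive frame-to-frame displacements."""
    return sum(math.dist(p, q) for p, q in zip(com_xy, com_xy[1:]))

# Hypothetical CoM positions (metres) at successive frames
trace = [(0.00, 0.00), (0.01, 0.00), (0.01, 0.02), (0.00, 0.02)]
path = sway_path_length(trace)
```

Related summaries (sway area, RMS displacement) are computed from the same trajectory; path length is the simplest and is sensitive to sampling rate, so traces are usually compared at a fixed frame rate.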
Affiliation(s)
- Sean Maudsley-Barton: Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, United Kingdom
- Moi Hoon Yap: Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, United Kingdom
- Anthony Bukowski: Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, United Kingdom
- Richard Mills: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
- Jamie McPhee: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
37
Conti TF, Ohlhausen M, Hom GL, Talcott KE, Golshani C, Choudhry N, Singh RP. Comparison of Widefield Imaging Between Confocal Laser Scanning Ophthalmoscopy and Broad Line Fundus Imaging in Routine Clinical Practice. Ophthalmic Surg Lasers Imaging Retina 2020; 51:89-94. [PMID: 32084281] [DOI: 10.3928/23258160-20200129-03] [Received: 03/26/2019] [Accepted: 08/27/2019] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND OBJECTIVE The purpose of this study was to evaluate the difference between widefield confocal scanning laser imaging (WSLO) and widefield broad line fundus (WBLF) imaging in their ability to view the peripheral retina in routine clinical practice. PATIENTS AND METHODS A retrospective chart review identified patients within routine clinical practice who were imaged with a WSLO image and with single and montaged WBLF images. The primary outcome was the number of ultra-widefield quadrants captured utilizing the UWF consensus definitions. Secondary outcomes included the area within each quadrant and the differences in clinical grading between modalities. RESULTS More vortex ampullae were identified with the WSLO than with either the single or montage WBLF image. The WSLO captured 116 of the possible 260 vortex ampullae (45%) in comparison to the WBLF single image (8 of 260; 3%) and WBLF montage (96 of 260; 37%). Only five eyes from WSLO and no images from the WBLF single image met the ultra-widefield consensus definition in routine clinical practice. The average area per individual quadrant acquired by the WSLO image was greater than the single or montage WBLF image (781.67 mm2, 433.82 mm2, and 686.03 mm2, respectively; P < .001). Clinical grading of images found substantial inter-rater agreement with both technologies (86% on WSLO; 88% on WBLF). CONCLUSIONS Both systems had a low rate of meeting UWF consensus definitions in routine clinical practice. A single WSLO image acquired a greater area than the WBLF image in both single-image and montage formats. [Ophthalmic Surg Lasers Imaging Retina. 2020;51:89-94.].
38
Choi MH, Ju YG, Park JH. Holographic near-eye display with continuously expanded eyebox using two-dimensional replication and angular spectrum wrapping. Opt Express 2020; 28:533-547. [PMID: 32118979] [DOI: 10.1364/oe.381277] [Received: 10/25/2019] [Accepted: 12/18/2019] [Indexed: 06/10/2023]
Abstract
Holographic near-eye displays present true three-dimensional images with full monocular depth cues. In this paper, we propose a technique to expand the eyebox of holographic near-eye displays. The base eyebox of a holographic near-eye display is determined by the space bandwidth product of the spatial light modulator. The proposed technique replicates and stitches the base eyebox through the combined use of a holographic optical element and the high-order diffractions of the spatial light modulator, achieving a horizontally and vertically expanded eyebox. An angular spectrum wrapping technique is also applied to alleviate image distortions observed at the boundaries between the replicated base eyeboxes.
39
Baek JJ, Kim SW, Kim YT. Camera-Integrable Wide-Bandwidth Antenna for Capsule Endoscope. Sensors (Basel) 2019; 20:s20010232. [PMID: 31906143] [PMCID: PMC6982747] [DOI: 10.3390/s20010232] [Received: 11/21/2019] [Revised: 12/27/2019] [Accepted: 12/29/2019] [Indexed: 11/16/2022]
Abstract
This paper presents a new antenna design for a capsule endoscope. The proposed antenna comprises a camera hole and meandered line. These features enable the antenna to be integrated on the same side as the camera, within the capsule endoscope. Moreover, light-emitting diodes can be mounted on the surface of the antenna for illumination. The antenna achieves a wide bandwidth, despite the small size owing to its meandered line structure.
40
Bauer JR, Thomas JB, Hardeberg JY, Verdaasdonk RM. An Evaluation Framework for Spectral Filter Array Cameras to Optimize Skin Diagnosis. Sensors (Basel) 2019; 19:E4805. [PMID: 31694239] [PMCID: PMC6864639] [DOI: 10.3390/s19214805] [Received: 09/10/2019] [Revised: 10/31/2019] [Accepted: 11/01/2019] [Indexed: 01/02/2023]
Abstract
Comparing and selecting an adequate spectral filter array (SFA) camera is application-specific and usually requires extensive prior measurements. An evaluation framework for SFA cameras is proposed and three cameras are tested in the context of skin analysis. The proposed framework does not require application-specific measurements; its main focus is on spectral sensitivities and the number of bands. An optical model of skin is used to generate a specialized training set to improve spectral reconstruction. The quantitative comparison of the cameras is based on reconstruction of measured skin spectra, colorimetric accuracy, and differences in oxygenation level estimation. Specific spectral sensitivity shapes influence the results directly, and a 9-channel camera performed best on the spectral reconstruction metrics. Sensitivities at key wavelengths influence the performance of oxygenation level estimation most strongly. The proposed framework allows spectral filter array cameras to be compared and can guide their application-specific development.
Affiliation(s)
- Jacob Renzo Bauer: The Norwegian Colour and Visual Computing Laboratory, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Jean-Baptiste Thomas: The Norwegian Colour and Visual Computing Laboratory, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Jon Yngve Hardeberg: The Norwegian Colour and Visual Computing Laboratory, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Rudolf M. Verdaasdonk: Biomedical Photonics and Imaging group, Faculty of Science and Technology, University of Twente, 7522NB Enschede, The Netherlands
41
Gracia-Cazaña T, García-Malinis AJ, Gilaberte Y. i-Fluorescence: Fluorescence photography with a smartphone. J Am Acad Dermatol 2019; 84:e195-e196. [PMID: 31639415] [DOI: 10.1016/j.jaad.2019.10.029] [Received: 08/15/2019] [Revised: 09/26/2019] [Accepted: 10/13/2019] [Indexed: 11/18/2022]
Affiliation(s)
- Yolanda Gilaberte: Dermatology Department, Hospital Universitario Miguel Servet, Zaragoza, Spain
42
Phillips M, Marsden H, Jaffe W, Matin RN, Wali GN, Greenhalgh J, McGrath E, James R, Ladoyanni E, Bewley A, Argenziano G, Palamaras I. Assessment of Accuracy of an Artificial Intelligence Algorithm to Detect Melanoma in Images of Skin Lesions. JAMA Netw Open 2019; 2:e1913436. [PMID: 31617929] [PMCID: PMC6806667] [DOI: 10.1001/jamanetworkopen.2019.13436] [Received: 06/04/2019] [Accepted: 08/27/2019] [Indexed: 01/22/2023]
Abstract
Importance A high proportion of suspicious pigmented skin lesions referred for investigation are benign. Techniques to improve the accuracy of melanoma diagnoses throughout the patient pathway are needed to reduce the pressure on secondary care and pathology services. Objective To determine the accuracy of an artificial intelligence algorithm in identifying melanoma in dermoscopic images of lesions taken with smartphone and digital single-lens reflex (DSLR) cameras. Design, Setting, and Participants This prospective, multicenter, single-arm, masked diagnostic trial took place in dermatology and plastic surgery clinics in 7 UK hospitals. Dermoscopic images of suspicious and control skin lesions from 514 patients with at least 1 suspicious pigmented skin lesion scheduled for biopsy were captured on 3 different cameras. Data were collected from January 2017 to July 2018. Clinicians and the Deep Ensemble for Recognition of Malignancy, a deterministic artificial intelligence algorithm trained to identify melanoma in dermoscopic images of pigmented skin lesions using deep learning techniques, assessed the likelihood of melanoma. Initial data analysis was conducted in September 2018; further analysis was conducted from February 2019 to August 2019. Interventions Clinician and algorithmic assessment of melanoma. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of the algorithmic and specialist assessment, determined using histopathology diagnosis as the criterion standard. Results The study population of 514 patients included 279 women (55.7%) and 484 white patients (96.8%), with a mean (SD) age of 52.1 (18.6) years. A total of 1550 images of skin lesions were included in the analysis (551 [35.6%] biopsied lesions; 999 [64.4%] control lesions); 286 images (18.6%) were used to train the algorithm, and a further 849 (54.8%) images were missing or unsuitable for analysis. Of the biopsied lesions that were assessed by the algorithm and specialists, 125 (22.7%) were diagnosed as melanoma. Of these, 77 (16.7%) were used for the primary analysis. The algorithm achieved an AUROC of 90.1% (95% CI, 86.3%-94.0%) for biopsied lesions and 95.8% (95% CI, 94.1%-97.6%) for all lesions using iPhone 6s images; an AUROC of 85.8% (95% CI, 81.0%-90.7%) for biopsied lesions and 93.8% (95% CI, 91.4%-96.2%) for all lesions using Galaxy S6 images; and an AUROC of 86.9% (95% CI, 80.8%-93.0%) for biopsied lesions and 91.8% (95% CI, 87.5%-96.1%) for all lesions using DSLR camera images. At 100% sensitivity, the algorithm achieved a specificity of 64.8% with iPhone 6s images. Specialists achieved an AUROC of 77.8% (95% CI, 72.5%-81.9%) and a specificity of 69.9%. Conclusions and Relevance In this study, the algorithm demonstrated an ability to identify melanoma from dermoscopic images of selected lesions with an accuracy similar to that of specialists.
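The AUROC figures reported above can be read as the probability that a randomly chosen melanoma receives a higher algorithm score than a randomly chosen benign lesion (the Mann-Whitney formulation). A minimal sketch with invented scores, not the study's data (the function name and values are illustrative):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the pairwise (Mann-Whitney) formulation.

    labels: 1 for melanoma (positive), 0 for benign (negative).
    Counts the fraction of positive/negative pairs ranked correctly,
    with ties counted as half-correct.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical algorithm scores for four lesions (two melanomas, two controls)
scores = [0.9, 0.7, 0.6, 0.2]
labels = [1, 0, 1, 0]
print(auroc(scores, labels))  # 3 of 4 pairs ranked correctly -> 0.75
```

An AUROC of 1.0 means every positive outranks every negative; 0.5 is chance-level ranking.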
Affiliation(s)
- Michael Phillips: Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia; Centre for Medical Research, University of Western Australia, Perth, Western Australia, Australia
- Wayne Jaffe: Royal Stoke University Hospital, University Hospital North Midlands, Stoke, United Kingdom
- Rubeta N. Matin: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Gorav N. Wali: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Emily McGrath: Royal Devon and Exeter NHS Foundation Trust, Exeter, United Kingdom
- Rob James: Royal Devon and Exeter NHS Foundation Trust, Exeter, United Kingdom
- Evmorfia Ladoyanni: Dudley Group NHS Foundation Trust, Corbett Hospital, Stourbridge, United Kingdom
- Anthony Bewley: Barts Health, London, United Kingdom; Queen Mary School of Medicine, University of London, London, United Kingdom
- Ioulios Palamaras: Barnet and Chase Farm Hospitals, Royal Free NHS Foundation Trust, London, United Kingdom
43
Smith C, Galland BC, de Bruin WE, Taylor RW. Feasibility of Automated Cameras to Measure Screen Use in Adolescents. Am J Prev Med 2019; 57:417-424. [PMID: 31377085] [DOI: 10.1016/j.amepre.2019.04.012] [Received: 01/06/2019] [Revised: 04/23/2019] [Accepted: 04/24/2019] [Indexed: 02/03/2023]
Abstract
INTRODUCTION The influence of screens and technology on adolescent well-being is controversial, and there is a need for better methods to measure these behaviors. This study examines the feasibility and acceptability of using automated wearable cameras to measure evening screen use in adolescents. METHODS A convenience sample of adolescents (aged 13-17 years, n=15) wore an automated camera for 3 evenings from 5:00pm to bedtime. The camera (Brinno TLC120) captured an image every 15 seconds. Fieldwork was completed between October and December 2017, and data were analyzed in 2018. Feasibility was examined by the quality of the captured images, wear time, and whether images could be coded in relation to contextual factors (e.g., type of screen and where screen use occurred). Acceptability was examined through participant compliance with the protocol and an exit interview. RESULTS Data from 39 evenings were analyzed (41,734 images), with a median of 268 minutes per evening. The camera was worn for 78% of the evening on Day 1, declining to 51% on Day 3. Nearly half of the images contained a screen in active use (46%), most commonly phones (13.7%), TV (12.6%), and laptops (8.2%). Multiple screen use was evident in 5% of images. In the exit interview, participants raised no major concerns about wearing the camera, and data loss because of deletions or privacy concerns was minimal (mean, 14 minutes; 6%). CONCLUSIONS Automated cameras offer a feasible, acceptable method of measuring prebedtime screen behavior in adolescents, including environmental context and aspects of media multitasking.
Affiliation(s)
- Claire Smith: Department of Women's and Children's Health, University of Otago, Dunedin, New Zealand
- Barbara C Galland: Department of Women's and Children's Health, University of Otago, Dunedin, New Zealand
- Rachael W Taylor: Department of Medicine, University of Otago, Dunedin, New Zealand
44
Cabal Mirabal CA, Berlanga Acosta J, Fernández Montequín J, Oramas Díaz L, González Dalmau E, Herrera Martínez L, Sauri JE, Baldomero Hernández J, Savigne Gutiérrez W, Valdés JL, Tabio Reyes AL, Pérez Pérez SC, Valdés Pérez C, Armstrong AA, Armstrong DG. Quantitative Studies of Diabetic Foot Ulcer Evolution Under Treatment by Digital Stereotactic Photography. J Diabetes Sci Technol 2019; 13:821-826. [PMID: 31195816] [PMCID: PMC6955448] [DOI: 10.1177/1932296819853843] [Indexed: 11/16/2022]
Abstract
BACKGROUND Imaging the lower extremity reproducibly and accurately remains an elusive goal. This is particularly true in the high-risk diabetic foot, where tissue loss, edema, and color changes are often concomitant. The purpose of this study was to evaluate the reproducibility of a novel and inexpensive stereotaxic frame in the assessment of wound healing. METHODS The main idea is to keep the relative position of the extremity and the imaging sensor constant and reproducible across serial studies, using a stereotaxic digital photographic sequence. Ten healthy volunteers were evaluated at 10 different time points to estimate the variation in foot position within the stereotaxic frame. The evolution of 40 DFU patients under treatment was evaluated before and during intralesional epidermal growth factor treatment. RESULTS The wound closing and granulation speeds, and the relative contributions of the contraction and tissue restoration mechanisms, were evaluated by stereotaxic digital photography. CONCLUSIONS The results of this study suggest that the stereotaxic frame is a robust platform for serial study of the evolution of wound healing, allowing consistent information to be obtained from a variety of visible and hyperspectral measurement technologies. New stereotaxic digital photography evidence on the diabetic foot ulcer healing process under treatment is presented.
Affiliation(s)
- Alexandria A. Armstrong: Southwestern Academic Limb Salvage Alliance (SALSA), Keck School of Medicine at the University of Southern California, Los Angeles, CA, USA
- David G. Armstrong: Southwestern Academic Limb Salvage Alliance (SALSA), Keck School of Medicine at the University of Southern California, Los Angeles, CA, USA
- Correspondence: David G. Armstrong, DPM, MD, PhD, Southwestern Academic Limb Salvage Alliance (SALSA), Department of Surgery, Keck Medical Center of USC, 1520 San Pablo St, Ste 4300, Los Angeles, CA 90033, USA
45
Affiliation(s)
- Jung Min Bae: Department of Dermatology, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hyun Jeong Ju: Department of Dermatology, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
46
Abstract
A point cloud obtained by an RGB-D camera will inevitably be affected by outliers that do not belong to the surface of the object, owing to differing viewing angles, light intensities, and reflective characteristics of the object surface, as well as the limitations of the sensors. An effective and fast outlier removal method based on RGB-D information is proposed in this paper. The method aligns the color image to the depth image and converts the color mapping image to an HSV image. The optimal segmentation threshold of the V image, calculated using the Otsu algorithm, is then applied to segment the color mapping image into a binary image, which is used to extract the valid point cloud from the original point cloud with outliers. The robustness of the proposed method to noise type, light intensity, and contrast is evaluated in several experiments; additionally, the method is compared with other filtering methods and applied to independently developed foot scanning equipment. The experimental results show that the proposed method can remove all types of outliers quickly and effectively.
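The thresholding step described above uses the classic Otsu method, which picks the threshold that maximises between-class variance of the intensity histogram. A self-contained sketch on a 1-D list of brightness (V-channel) values, with hypothetical data rather than the paper's images, and assuming the bright class is the object to keep:

```python
def otsu_threshold(values, levels=256):
    """Return the threshold that maximises between-class variance (Otsu)."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    weighted_sum = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0          # running weighted sum of the background class
    w_bg = 0              # running count of background pixels
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (weighted_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical V-channel values: dark background clutter vs. a bright object
v_values = [10, 12, 14, 200, 205, 210]
t = otsu_threshold(v_values)
kept = [v for v in v_values if v > t]  # binary mask keeps the bright class
```

In the paper's pipeline, the resulting binary mask is applied to the aligned depth image, so only depth pixels under the "valid" mask are back-projected into the filtered point cloud; whether the object or the background is the brighter class depends on the scene.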
Affiliation(s)
- Chaochuan Jia: College of Mechanical & Electronic Engineering, Shandong University of Science and Technology, Qingdao, Shandong Province, China; College of Electronics and Information Engineering, West Anhui University, Lu’an, Anhui Province, China
- Ting Yang: College of Electronics and Information Engineering, West Anhui University, Lu’an, Anhui Province, China
- Chuanjiang Wang: College of Mechanical & Electronic Engineering, Shandong University of Science and Technology, Qingdao, Shandong Province, China
- Binghui Fan: College of Mechanical & Electronic Engineering, Shandong University of Science and Technology, Qingdao, Shandong Province, China
- Fugui He: College of Electronics and Information Engineering, West Anhui University, Lu’an, Anhui Province, China
47
Iacovacci V, Blanc A, Huang H, Ricotti L, Schibli R, Menciassi A, Behe M, Pané S, Nelson BJ. High-Resolution SPECT Imaging of Stimuli-Responsive Soft Microrobots. Small 2019; 15:e1900709. [PMID: 31304653] [DOI: 10.1002/smll.201900709] [Received: 02/07/2019] [Revised: 05/07/2019] [Indexed: 06/10/2023]
Abstract
Untethered small-scale robots have great potential for biomedical applications. However, critical barriers to the effective translation of these miniaturized machines into clinical practice remain. High-resolution tracking and imaging in vivo is one of the barriers limiting the use of micro- and nanorobots in clinical applications. Here, the inclusion of radioactive compounds in soft thermoresponsive magnetic microrobots is investigated to enable their single-photon emission computed tomography (SPECT) imaging. Four microrobotic platforms differing in hydrogel structure and four 99mTc-based radioactive compounds are investigated in order to achieve optimal contrast agent retention and optimal imaging. Single-microrobot imaging of structures as small as 100 µm in diameter, as well as tracking of shape switching from tubular to planar configurations by inclusion of 99mTc colloid in the hydrogel structure, is reported.
Affiliation(s)
- Veronica Iacovacci: Institute of Robotics and Intelligent Systems, ETH Zurich, Zurich, CH-8092, Switzerland; The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 50126, Italy
- Alain Blanc: Center for Radiopharmaceutical Sciences, Paul Scherrer Institut, Villigen, CH-5232, Switzerland
- Henwei Huang: Institute of Robotics and Intelligent Systems, ETH Zurich, Zurich, CH-8092, Switzerland
- Leonardo Ricotti: The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 50126, Italy
- Roger Schibli: Center for Radiopharmaceutical Sciences, Paul Scherrer Institut, Villigen, CH-5232, Switzerland; Department of Chemistry and Applied Biosciences, ETH Zurich, 8093, Zurich, Switzerland
- Arianna Menciassi: The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 50126, Italy
- Martin Behe: Center for Radiopharmaceutical Sciences, Paul Scherrer Institut, Villigen, CH-5232, Switzerland
- Salvador Pané: Institute of Robotics and Intelligent Systems, ETH Zurich, Zurich, CH-8092, Switzerland
- Bradley J Nelson: Institute of Robotics and Intelligent Systems, ETH Zurich, Zurich, CH-8092, Switzerland
48
Ferlatte O, Oliffe JL, Salway T, Broom A, Bungay V, Rice S. Using Photovoice to Understand Suicidality Among Gay, Bisexual, and Two-Spirit Men. Arch Sex Behav 2019; 48:1529-1541. [PMID: 31152366] [DOI: 10.1007/s10508-019-1433-6] [Received: 08/09/2018] [Revised: 02/13/2019] [Accepted: 02/25/2019] [Indexed: 05/24/2023]
Abstract
This study explored the drivers of suicidality from the perspectives of gay, bisexual, and two-spirit men (GB2SM) with a history of suicidality. Twenty-one GB2SM participated in this photovoice study, taking photographs to depict and discuss their previous suicidality. Data were collected from in-depth individual interviews in which participants discussed their photographs and in turn offered verbal/narrative accounts of suicidality. Drawing on intersectionality, analyses of the photographs and interview data revealed three interconnected themes. First, adverse childhood events and negative adolescent experiences were described as the root causes of mental health struggles and suicidality. Second, violence and homophobia had disrupted these men's education and employment opportunities, and some participants detailed how their lack of capital and challenges in maintaining employment shaped their suicidality. Third, a sociality of stigma and sense of isolation compounded experiences of suicidality. The three themes overlapped and were shaped by multiple intersectional axes including sexuality, class, ethnicity, and mental health status. The findings have implications for services and health professionals working with GB2SM, who need to thoughtfully consider life-course trajectories and multiple social axes when assessing and treating GB2SM experiencing suicidality. Moreover, because these factors relate to social inequities, structural and policy changes warrant targeted attention.
Affiliation(s)
- Olivier Ferlatte: School of Nursing, University of British Columbia, Vancouver, BC, Canada; British Columbia Centre on Substance Use, 400 - 1045 Howe Street, Vancouver, BC, V6Z 2A9, Canada
- John L Oliffe: School of Nursing, University of British Columbia, Vancouver, BC, Canada
- Travis Salway: British Columbia Center for Disease Control, Vancouver, BC, Canada; School of Public and Population Health, University of British Columbia, Vancouver, BC, Canada
- Alex Broom: School of Social Sciences, University of New South Wales, Sydney, Australia
- Victoria Bungay: School of Nursing, University of British Columbia, Vancouver, BC, Canada
- Simon Rice: Orygen, The National Centre for Excellence in Youth Mental Health, Centre for Youth Mental Health, The University of Melbourne, Melbourne, Australia
49
Abstract
A leading physician in New York during the last quarter of the 19th century, Henry G. Piffard, MD, was a pioneer dermatologist in New York. He had a propensity to invent, and he used that ability to advance the nascent field of instantaneous photography. The recent discovery of a few survivors of Piffard's patented "photogenic (flash) cartridges" prompted an examination of his connection to a leading photographic supply house of his time. The study provided insights into his system and revealed that Piffard had combined the use of his patent with his passion for skin diseases. As a result, Piffard's publications were among the first to document diseases of the skin photographically.
50
Kaczensky P, Khaliun S, Payne J, Boldgiv B, Buuveibaatar B, Walzer C. Through the eye of a Gobi khulan - Application of camera collars for ecological research of far-ranging species in remote and highly variable ecosystems. PLoS One 2019; 14:e0217772. [PMID: 31163047] [PMCID: PMC6548383] [DOI: 10.1371/journal.pone.0217772] [Received: 12/13/2018] [Accepted: 05/17/2019] [Indexed: 11/18/2022]
Abstract
The Mongolian Gobi-Eastern Steppe Ecosystem is one of the largest remaining natural drylands and home to a unique assemblage of migratory ungulates. Connectivity and integrity of this ecosystem are at risk if increasing human activities are not carefully planned and regulated. The Gobi part supports the largest remaining population of the Asiatic wild ass (Equus hemionus; locally called "khulan"). Individual khulan roam over areas of thousands of square kilometers, and the scale of their movements is among the largest described for terrestrial mammals, making them particularly difficult to monitor. Although GPS satellite telemetry makes it possible to track animals in near-real time and remote sensing provides environmental data at the landscape scale, remotely collected data also harbor the risk of missing important abiotic or biotic environmental variables or life history events. We tested the potential of animal-borne camera systems ("camera collars") to improve our understanding of the drivers and limitations of khulan movements. Deployment of a camera collar on an adult khulan mare resulted in 7,881 images over a one-year period. Over half of the images showed other khulan, and 1,630 images showed enough of the collared khulan to classify the behaviour of the animals seen into several main categories. These khulan images: i) provided new insights into important life history events and grouping dynamics, ii) allowed us to calculate time budgets for many more animals than the collared khulan alone, and iii) provided a training dataset for calibrating data from the accelerometer and tilt sensors in the collar. The images also allowed us to document khulan behaviour near infrastructure and to obtain a daytime encounter rate between a specific khulan and semi-nomadic herders and their livestock. Lastly, the images allowed us to ground-truth the availability of water by: i) confirming waterpoints predicted from other analyses, ii) detecting new waterpoints, and iii) comparing precipitation records for rain and snow from landscape-scale climate products with those documented by the camera collar. We discuss the added value of deploying camera collars on a subset of animals in remote, highly variable ecosystems for research and conservation.
Affiliation(s)
- Petra Kaczensky: Norwegian Institute of Nature Research, Trondheim, Norway; Research Institute of Wildlife Ecology, University of Veterinary Medicine Vienna, Vienna, Austria
- Sanchir Khaliun: Ecology Group, Department of Biology, National University of Mongolia, Ulaanbaatar, Mongolia
- John Payne: Research Institute of Wildlife Ecology, University of Veterinary Medicine Vienna, Vienna, Austria; Wildlife Conservation Society, Mongolia Program, Ulaanbaatar, Mongolia
- Bazartseren Boldgiv: Ecology Group, Department of Biology, National University of Mongolia, Ulaanbaatar, Mongolia
- Chris Walzer: Research Institute of Wildlife Ecology, University of Veterinary Medicine Vienna, Vienna, Austria; Wildlife Conservation Society, Mongolia Program, Ulaanbaatar, Mongolia