51
Ferlatte O, Oliffe JL, Salway T, Broom A, Bungay V, Rice S. Using Photovoice to Understand Suicidality Among Gay, Bisexual, and Two-Spirit Men. Arch Sex Behav 2019;48:1529-1541. PMID: 31152366. DOI: 10.1007/s10508-019-1433-6.
Abstract
This study explored the drivers of suicidality from the perspectives of gay, bisexual, and two-spirit men (GB2SM) with a history of suicidality. Twenty-one GB2SM participated in this photovoice study, taking photographs to depict and discuss their previous suicidality. Data were collected through in-depth individual interviews in which participants discussed their photographs and in turn offered verbal/narrative accounts of suicidality. Drawing on intersectionality, analyses of the photographs and interview data revealed three interconnected themes. First, adverse childhood events and negative adolescent experiences were described as the root causes of mental health struggles and suicidality. Second, violence and homophobia had disrupted these men's education and employment opportunities, and some participants detailed how their lack of capital and challenges in maintaining employment shaped their suicidality. Third, a sociality of stigma and a sense of isolation compounded experiences of suicidality. The three themes overlapped and were shaped by multiple intersecting axes, including sexuality, class, ethnicity, and mental health status. The findings have implications for services and health professionals, who need to thoughtfully consider life-course trajectories and multiple social axes when assessing and treating GB2SM experiencing suicidality. Moreover, because these factors relate to social inequities, structural and policy changes warrant targeted attention.
Affiliation(s)
- Olivier Ferlatte
- School of Nursing, University of British Columbia, Vancouver, BC, Canada
- British Columbia Centre on Substance Use, 400-1045 Howe Street, Vancouver, BC, V6Z 2A9, Canada
- John L Oliffe
- School of Nursing, University of British Columbia, Vancouver, BC, Canada
- Travis Salway
- British Columbia Centre for Disease Control, Vancouver, BC, Canada
- School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada
- Alex Broom
- School of Social Sciences, University of New South Wales, Sydney, Australia
- Victoria Bungay
- School of Nursing, University of British Columbia, Vancouver, BC, Canada
- Simon Rice
- Orygen, The National Centre for Excellence in Youth Mental Health, Centre for Youth Mental Health, The University of Melbourne, Melbourne, Australia
52
Abstract
Henry G. Piffard, MD, a leading physician in New York during the last quarter of the 19th century, was a pioneering dermatologist. He had a propensity to invent, and he used that ability to advance the nascent field of instantaneous photography. The recent discovery of a few surviving examples of Piffard's patented "photogenic (flash) cartridges" prompted an examination of his connection to a leading photographic supply house of his time. The study provided insights into his system and revealed that Piffard had combined the use of his patent with his passion for skin diseases. As a result, Piffard's publications were among the first to document diseases of the skin photographically.
53
Kaczensky P, Khaliun S, Payne J, Boldgiv B, Buuveibaatar B, Walzer C. Through the eye of a Gobi khulan - Application of camera collars for ecological research of far-ranging species in remote and highly variable ecosystems. PLoS One 2019;14:e0217772. PMID: 31163047. PMCID: PMC6548383. DOI: 10.1371/journal.pone.0217772.
Abstract
The Mongolian Gobi-Eastern Steppe Ecosystem is one of the largest remaining natural drylands and home to a unique assemblage of migratory ungulates. Connectivity and integrity of this ecosystem are at risk if increasing human activities are not carefully planned and regulated. The Gobi part supports the largest remaining population of the Asiatic wild ass (Equus hemionus; locally called "khulan"). Individual khulan roam over areas of thousands of square kilometers, and the scale of their movements is among the largest described for terrestrial mammals, making them particularly difficult to monitor. Although GPS satellite telemetry makes it possible to track animals in near-real time and remote sensing provides environmental data at the landscape scale, remotely collected data also harbor the risk of missing important abiotic or biotic environmental variables or life history events. We tested the potential of animal-borne camera systems ("camera collars") to improve our understanding of the drivers and limitations of khulan movements. Deployment of a camera collar on an adult khulan mare resulted in 7,881 images over a one-year period. Over half of the images showed other khulan, and 1,630 images showed enough of the collared khulan to classify the behaviour of the animals seen into several main categories. These khulan images i) provided new insights into important life history events and grouping dynamics, ii) allowed us to calculate time budgets for many more animals than the collared khulan alone, and iii) provided a training dataset for calibrating data from the accelerometer and tilt sensors in the collar. The images also allowed us to document khulan behaviour near infrastructure and to obtain a daytime encounter rate between a specific khulan and semi-nomadic herders and their livestock. Lastly, the images allowed us to ground truth the availability of water by: i) confirming waterpoints predicted from other analyses, ii) detecting new waterpoints, and iii) comparing precipitation records for rain and snow from landscape-scale climate products with those documented by the camera collar. We discuss the added value of deploying camera collars on a subset of animals in remote, highly variable ecosystems for research and conservation.
Affiliation(s)
- Petra Kaczensky
- Norwegian Institute of Nature Research, Trondheim, Norway
- Research Institute of Wildlife Ecology, University of Veterinary Medicine Vienna, Vienna, Austria
- Sanchir Khaliun
- Ecology Group, Department of Biology, National University of Mongolia, Ulaanbaatar, Mongolia
- John Payne
- Research Institute of Wildlife Ecology, University of Veterinary Medicine Vienna, Vienna, Austria
- Wildlife Conservation Society, Mongolia Program, Ulaanbaatar, Mongolia
- Bazartseren Boldgiv
- Ecology Group, Department of Biology, National University of Mongolia, Ulaanbaatar, Mongolia
- Chris Walzer
- Research Institute of Wildlife Ecology, University of Veterinary Medicine Vienna, Vienna, Austria
- Wildlife Conservation Society, Mongolia Program, Ulaanbaatar, Mongolia
54
Richardson AD. Tracking seasonal rhythms of plants in diverse ecosystems with digital camera imagery. New Phytol 2019;222:1742-1750. PMID: 30415486. DOI: 10.1111/nph.15591.
Abstract
Global change is shifting the seasonality of vegetation in ecosystems around the globe. High-frequency digital camera imagery, and the vegetation indices derived from that imagery, are facilitating better tracking of phenological responses to environmental variation. This method, commonly referred to as the 'phenocam' approach, is well suited to several specific applications, including: close-up observation of individual organisms; long-term canopy-level monitoring at individual sites; automated phenological monitoring in regional-to-continental scale observatory networks; and tracking responses to experimental treatments. Several camera networks are already well established, and some camera records are more than a decade long. These data can be used to identify the environmental controls on phenology in different ecosystems, which will contribute to the development of improved prognostic phenology models.
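The vegetation indices derived from phenocam imagery are typically chromatic coordinates computed from the red, green, and blue digital numbers of a canopy region of interest. A minimal sketch of the widely used green chromatic coordinate (GCC = G / (R + G + B)); the pixel values below are illustrative, not from any phenocam dataset:

```python
def green_chromatic_coordinate(pixels):
    """Mean green chromatic coordinate GCC = G / (R + G + B) over an
    image region of interest, given as a list of (R, G, B) triples.

    Canopy greenness (GCC) rises with leaf-out and falls with
    senescence, which is what phenocam time series exploit.
    """
    gcc_values = []
    for r, g, b in pixels:
        total = r + g + b
        if total > 0:  # skip black / masked pixels
            gcc_values.append(g / total)
    return sum(gcc_values) / len(gcc_values)

# A green canopy pixel has a higher GCC than a brown (senescent) one:
green_pixel = [(60, 120, 40)]   # GCC = 120/220, about 0.55
brown_pixel = [(120, 90, 50)]   # GCC = 90/260, about 0.35
```

In practice the index is averaged over a fixed region of interest for each image and smoothed over the daily image series before phenological transition dates are extracted.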
Affiliation(s)
- Andrew D Richardson
- School of Informatics, Computing, and Cyber Systems, Northern Arizona University, Flagstaff, AZ, 86011, USA
- Center for Ecosystem Science and Society, Northern Arizona University, Flagstaff, AZ, 86011, USA
55
Li X, Hu H, Xiao D, Wang D, Jiang S. Analysis of the spatial distribution of collectors in dust scrubber based on image processing. J Air Waste Manag Assoc 2019;69:764-777. PMID: 30794110. DOI: 10.1080/10962247.2019.1586012.
Abstract
The spatial distribution of the collectors in a dust scrubber is key in determining the effectiveness of the dust removal process. In the present study, a high-speed camera was used to capture images of the distribution of the collectors. Image information was extracted by image processing, such as the gray mean (GM), the angular second moment (ASM), and the entropy (ENT) from the gray-level co-occurrence matrix of the image. Subsequently, the spatial distribution rules of the collectors were studied by analyzing the spatial proportion, dispersion area, and uniformity and intensiveness of the collectors. This is an intuitive approach and a novel analysis method for the operating state of a dust scrubber. The results show that the spatial distribution of the collectors could be well captured by image processing methods. The dispersion area of the collectors expanded with an increase in the airflow velocity. When the initial liquid level (ILL) was higher, the collectors expanded in an approximately circular shape, and when the ILL was lower the collectors expanded in an approximately sector shape. In general, the spatial proportion increased with ILL and airflow velocity, which is consistent with the uniformity of the collectors. When the liquid level was 0-20 mm and the airflow velocity was greater than 6.5 m/sec, the spatial proportion and uniformity of the collectors reached their highest degree. However, the growth of the spatial proportion and uniformity of the collectors slowed down, and even turned negative, when the ILL was lower and the airflow velocity was higher. The intensiveness of the collectors was high when the ILL was higher, with no apparent dependence on the airflow velocity. When the ILL was lower, the intensiveness of the collectors was poor, increasing as the airflow velocity and ILL increased. When the liquid level was -5 to 10 mm and the airflow velocity was greater than 8 m/sec, the intensiveness of the collectors reached its highest degree, indicating that a liquid level greater than 0 mm and a higher airflow velocity improved the spatial distribution of the collectors. Implications: This paper focuses on the spatial distribution of the collectors in a dust scrubber. Image information, such as the gray mean, the angular second moment, and the entropy from the gray-level co-occurrence matrix, was extracted by image processing. The spatial distribution rules of the collectors were studied by analyzing the spatial proportion, the dispersion area, and the uniformity and intensiveness of the collectors.
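The gray-level co-occurrence matrix (GLCM) texture measures named above have standard definitions: ASM = sum of p(i,j)^2 and ENT = -sum of p(i,j)*log p(i,j), where p is the normalized co-occurrence matrix. A minimal sketch, assuming a horizontal pixel offset of 1 (an illustrative choice, not necessarily the configuration used in the study):

```python
import math

def glcm_features(image, levels):
    """ASM and entropy from a horizontal-offset (dx = 1) GLCM.

    image: 2D list of integer gray levels in [0, levels).
    """
    # Count horizontal co-occurrences of gray-level pairs.
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    # Normalize to joint probabilities p(i, j) and accumulate features.
    asm = 0.0
    ent = 0.0
    for i in range(levels):
        for j in range(levels):
            p = counts[i][j] / total
            asm += p * p                 # angular second moment (uniformity)
            if p > 0:
                ent -= p * math.log(p)   # entropy (randomness)
    return asm, ent
```

A perfectly uniform image gives ASM = 1 and entropy = 0; a more disordered image has lower ASM and higher entropy, which is why these measures can track how concentrated or dispersed the collectors appear in the frames.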
Affiliation(s)
- Xiaochuan Li
- Key Laboratory of Coal Processing and Efficient Utilization, Ministry of Education, Xuzhou, Jiangsu, People's Republic of China
- School of Chemical Engineering and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, People's Republic of China
- Department of Food, Agricultural, and Biological Engineering, The Ohio State University, Columbus, OH, USA
- Haibin Hu
- School of Chemical Engineering and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, People's Republic of China
- Di Xiao
- School of Chemical Engineering and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, People's Republic of China
- DongXue Wang
- School of Chemical Engineering and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, People's Republic of China
- Shuguang Jiang
- School of Safety Engineering, China University of Mining and Technology, Xuzhou, Jiangsu, People's Republic of China
56
Sheahan G. Comparison of Personal Video Technology for Teaching and Assessment of Surgical Skills. J Grad Med Educ 2019;11:328-331. PMID: 31210866. PMCID: PMC6570456. DOI: 10.4300/JGME-D-18-01082.1.
Abstract
BACKGROUND Improvements in personal technology have made video recording for teaching and assessment of surgical skills possible. OBJECTIVE This study compared 5 personal video-recording devices based on their utility (image quality, hardware, mounting options, and accessibility) in recording open surgical procedures. METHODS Open procedures in a simulated setting were recorded using smartphones and tablets (MOB), laptops (LAP), sports cameras such as the GoPro (SC), digital single-lens reflex cameras (DSLR), and spy camera glasses (SPY). Utility was rated by consensus between 2 investigators trained in observation of technology, using a 5-point Likert scale (1, poor, to 5, excellent). RESULTS A total of 150 hours of muted video were reviewed, with a minimum of 1 hour for each device. Image quality was good (3.8) across all devices, although it was influenced by the device-mounting requirements (4.2) and proximity to the area of interest. Device hardware (battery life and storage capacity) was problematic for long procedures (3.8). Availability of devices was high (4.2). CONCLUSIONS Personal video-recording technology can be used for assessment and teaching of open surgical skills. DSLR and SC provide the best images. DSLR provides the best zoom capability from an offset position, while SC can be placed closer to the operative field without impairing sterility. Laptops provide the best overall utility for long procedures due to video file size. All devices require stable recording platforms (eg, bench space, dedicated mounting accessories). Head harnesses (SC, SPY) provide opportunities for "point-of-view" recordings. MOB and LAP can be used for multiple concurrent recordings.
57
La Torre F, Meocci M, Nocentini A. Safety effects of automated section speed control on the Italian motorway network. J Safety Res 2019;69:115-123. PMID: 31235223. DOI: 10.1016/j.jsr.2019.03.006.
Abstract
INTRODUCTION Automated Section Speed Control (ASSC) has been identified as an effective countermeasure to reduce speeds and improve speed limit compliance. METHOD An Empirical Bayes (EB) before-and-after study was performed to evaluate the impact of the ASSC system on the expected crash frequency. The study was carried out on a sample of 125 ASSC sites of the Italian motorway network covering 1252 km, where a total of 21,721 crashes were recorded during a 10-year analysis period from 2004 to 2013. RESULTS Overall, the EB analysis estimated a significant 22% reduction in the expected crash frequency due to the implementation of the ASSC system. The analysis indicated that the effect is slightly larger on property-damage-only (PDO) crashes (-23%) than on fatal-injury (FI) crashes (-18%) and that the largest reductions in crash frequency are expected for multi-vehicle FI crashes (-25%) and multi-vehicle PDO crashes (-31%). Furthermore, the results indicated that the ASSC system is more effective in reducing crash rates as traffic volume increases; it is therefore strongly recommended as a countermeasure to improve safety on high-traffic-volume motorway sections.
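In the Empirical Bayes approach used above, the expected crash frequency at a site is a weighted blend of a safety performance function (SPF) prediction and the observed count, with the weight set by the overdispersion parameter of the SPF's negative binomial model. A simplified sketch of this core step (the numbers are illustrative, not from the study, and a full EB before-and-after evaluation also adjusts for exposure changes and estimation variance):

```python
def eb_expected_crashes(observed, spf_predicted, overdispersion):
    """Empirical Bayes estimate of the expected crash count at a site.

    w = 1 / (1 + k * mu): more weight goes to the SPF prediction when
    the site-level estimate is noisy, more to the observed count
    otherwise (k is the negative binomial overdispersion parameter).
    """
    w = 1.0 / (1.0 + overdispersion * spf_predicted)
    return w * spf_predicted + (1.0 - w) * observed

def crash_modification_factor(observed_after, eb_expected_after):
    """Naive treatment-effect index: values below 1 indicate a
    crash reduction relative to the no-treatment expectation."""
    return observed_after / eb_expected_after

# Illustrative site: 30 crashes observed, SPF predicts 20, k = 0.1.
expected = eb_expected_crashes(30, 20.0, 0.1)  # falls between 20 and 30
```

The EB estimate "shrinks" the observed count toward the SPF prediction, which is what corrects for the regression-to-the-mean bias that plain before-and-after comparisons suffer from.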
Affiliation(s)
- Francesca La Torre
- Civil and Environmental Engineering Department, University of Florence, Via Santa Marta 3, 50139 Firenze, Italy
- Monica Meocci
- Civil and Environmental Engineering Department, University of Florence, Via Santa Marta 3, 50139 Firenze, Italy
58
Dhatchayeny DR, Chung YH. Optical extra-body communication using smartphone cameras for human vital sign transmission. Appl Opt 2019;58:3995-3999. PMID: 31158149. DOI: 10.1364/AO.58.003995.
Abstract
This paper presents an optical extra-body communication (OEBC) scheme for the transmission of human vital signs over an optical camera communication link. The primary vital signs, such as pulse rate, respiratory rate, body temperature, blood pressure, and peripheral capillary oxygen saturation, are captured from the patient's body. The proposed OEBC system has body sensors installed on various parts of the body for detecting, processing, and communicating the vital sign data. A light-emitting diode (LED) hub, a 4×4 red, green, and blue (RGB) LED array, acts as a coordinator that collects the vital sign data from the sensors and transmits them through an optical link, while an Android-based smartphone camera is used as the receiver. The proposed OEBC employs color modulation, which assigns a color to each vital sign and transmits data as RGB color combinations. The experimental and simulation results show that the scheme is able to transmit the vital sign data through the optical link with an acceptable bit error rate of 1.2×10⁻⁴ at a peak signal-to-noise ratio of 15 dB. The proposed OEBC can thus facilitate both reliable and convenient health monitoring in hospital environments.
59
Park S, Mun S, Lee DW, Whang M. IR-camera-based measurements of 2D/3D cognitive fatigue in 2D/3D display system using task-evoked pupillary response. Appl Opt 2019;58:3467-3480. PMID: 31044844. DOI: 10.1364/AO.58.003467.
Abstract
This study evaluated a method for measuring three-dimensional (3D) cognitive fatigue based on the pupillary response. The technique was designed to overcome measurement burdens by using non-contact methods. The pupillary response is related to cognitive function by a neural pathway and may be an indicator of 3D cognitive fatigue. Twenty-six undergraduate students (including 14 women) watched both 2D and 3D versions of a video for 70 min. The participants experienced visual fatigue after viewing the 3D content: measures such as subjective rating, response time, event-related potential latency, heartbeat-evoked potential (HEP) alpha power, and task-evoked pupillary response (TEPR) latency were significantly different between conditions. Multitrait-multimethod matrix analysis indicated that the HEP and TEPR latency measures had stronger reliability and higher correlations with 3D cognitive fatigue than the other measures. TEPR latency may be useful for quantitatively determining 3D visual fatigue, as it can be measured easily and without contact.
60
Driggers R, Furxhi O, Vaca G, Reumers V, Vazimali M, Short R, Agrawal P, Lambrechts A, Charle W, Vunckx K, Arvidson C. Burmese python target reflectivity compared to natural Florida foliage background reflectivity. Appl Opt 2019;58:D98-D104. PMID: 31044871. DOI: 10.1364/AO.58.000D98.
Abstract
The Florida Everglades is infested with Burmese pythons as a result of the release of exotic pets in the 1980s. Current estimates are between 30,000 and 300,000 pythons, and the result is a severe decline in Everglades mammals: 90% reductions in raccoons, opossums, bobcats, and foxes. The marsh rabbits are completely gone. The python population is increasing exponentially, with 20-50 eggs per snake and a life span of up to 20 years. Pythons nearly 6 m long have been captured in the Everglades. Researchers in the state of Florida are concerned that these pythons are (1) permanently damaging the Everglades, (2) migrating further north into populated areas of Florida, and (3) endangering wildlife, pets, and, eventually, people. A number of sensing efforts have been attempted for the large-area detection of pythons, with limited success. For example, infrared sensors have been applied to the problem, but the pythons are cold-blooded, so the infrared bands do not work well. Imec has leveraged its expertise and infrastructure in semiconductor processing to produce highly compact, higher-performance, and relatively inexpensive hyperspectral image sensors and camera systems. In this work, Imec teamed with the University of Florida and Extended Reality Systems to obtain hyperspectral reflectivity measurements of Burmese pythons along with natural Florida background foliage to determine bands or band combinations that may be exploited in the large-area detection of pythons. The bands investigated are the visible-near-infrared (VisNIR) and the shortwave infrared (SWIR) bands. The results show enough differences in the data collection that a single-band, inexpensive VisNIR camera may provide reasonable results, and a two-band VisNIR/SWIR combination may provide higher performance. In this paper, we provide the VisNIR results.
61
Golan O, Piccinini AL, Hwang ES, De Oca Gonzalez IM, Krauthammer M, Khandelwal SS, Smadja D, Randleman JB. Distinguishing Highly Asymmetric Keratoconus Eyes Using Dual Scheimpflug/Placido Analysis. Am J Ophthalmol 2019;201:46-53. PMID: 30721688. DOI: 10.1016/j.ajo.2019.01.023.
Abstract
PURPOSE To identify the metric or combination of metrics providing the highest predictive power for distinguishing normal eyes from the clinically unaffected eyes of patients with highly asymmetric keratoconus, using data from a Dual Scheimpflug/Placido device. DESIGN Retrospective case-control study. METHODS Combined Dual Scheimpflug/Placido imaging was obtained with the Galilei G4 device (Ziemer Ophthalmic Systems AG, Port, Switzerland) in 31 clinically unaffected eyes with highly asymmetric keratoconus and in 178 eyes from 178 patients with bilaterally normal corneal examinations who underwent uneventful LASIK with at least 1 year of follow-up. Receiver operating characteristic (ROC) curves were generated to determine the area under the curve (AUC), sensitivity, and specificity for 87 metrics, and logistic regression modeling was used to determine optimal variable combinations. RESULTS No individual metric achieved an AUC greater than 0.79. A combined model consisting of 9 metrics yielded an AUC of 0.96, with 90.3% sensitivity and 92.6% specificity. Of the 9 metrics included, 5 related to corneal pachymetry: Opposite Sector Index and Anterior Height BFS Z from the anterior surface, Asphericity and Asymmetry Index, Posterior Height BFS Z, and Posterior Height BFS X from the posterior surface. The strongest variable in the model was the thinnest-point location on the horizontal (x) axis. CONCLUSION While individual metrics performed poorly, a combination of metrics from the combined Dual Scheimpflug/Placido device provided a useful model for differentiating normal corneas from the clinically normal eyes of patients with highly asymmetric keratoconus. Pachymetry values were the most impactful metrics.
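The AUC values reported above come from ROC analysis; for a single continuous metric, the AUC equals the probability that a randomly chosen affected eye scores higher than a randomly chosen normal eye, and can be computed directly from pairwise comparisons (the Mann-Whitney formulation). A small sketch with made-up scores, not values from the study:

```python
def roc_auc(positive_scores, negative_scores):
    """AUC as the fraction of (positive, negative) score pairs ranked
    correctly; ties count as half, matching the Mann-Whitney U statistic."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Hypothetical metric values for suspect vs normal eyes:
suspect_eyes = [0.9, 0.3]
normal_eyes = [0.1, 0.4]
auc = roc_auc(suspect_eyes, normal_eyes)  # 3 of 4 pairs correct -> 0.75
```

Combining several weak metrics in a logistic regression, as the study did, amounts to learning a single composite score whose ROC curve can dominate that of any individual metric.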
Affiliation(s)
- Oren Golan
- Keck School of Medicine of the University of Southern California, Los Angeles, California, USA; Tel Aviv Sourasky Medical Center, Tel Aviv University, Tel Aviv, Israel
- Andre L Piccinini
- Keck School of Medicine of the University of Southern California, Los Angeles, California, USA; Sadalla Amin Ghanem Eye Hospital, Joinville, Santa Catarina, Brazil
- Eric S Hwang
- Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Mark Krauthammer
- Tel Aviv Sourasky Medical Center, Tel Aviv University, Tel Aviv, Israel
- David Smadja
- Department of Ophthalmology, Shaare Zedek Medical Center, Jerusalem, Israel
- J Bradley Randleman
- Keck School of Medicine of the University of Southern California, Los Angeles, California, USA; USC Roski Eye Institute, Los Angeles, California, USA
62
Hashemi H, Heydarian S, Khabazkhoob M, Yekta A, Emamian MH, Fotouhi A. Keratometry in children: comparison between auto-refractokeratometer, rotating Scheimpflug imaging, and biograph. J Optom 2019;12:99-110. PMID: 30879970. PMCID: PMC6449769. DOI: 10.1016/j.optom.2018.12.002.
Abstract
PURPOSE To determine the agreement and validity of keratometric measurements in children with the Nidek ARK-510A auto-refractokeratometer compared to rotating Scheimpflug imaging with the Pentacam and the Lenstar LS 900 biograph. METHODS This study was conducted on 5620 schoolchildren aged 6-12 years in Shahroud, Iran. Minimum and maximum keratometry values and corneal astigmatism magnitude were compared between devices by calculating paired differences, intraclass correlation coefficients, and 95% limits of agreement (LoA). RESULTS After applying the exclusion criteria, 4215 right eyes were enrolled in the study. Mean minimum keratometry with the Nidek ARK-510A, Pentacam, and Lenstar was 43.13±1.51, 43.14±1.48, and 42.87±1.46 diopters (D), respectively, and mean maximum keratometry was 43.97±1.59, 44.00±1.56, and 43.75±1.54 D, respectively. The Nidek ARK-510A overestimated minimum and maximum keratometry by 0.25±0.37 and 0.22±0.41 D, respectively, compared to the Pentacam. The LoA between the Nidek ARK-510A and Pentacam for minimum and maximum keratometry were -0.98 to 0.47 D and -1.02 to 0.57 D, respectively. The LoA between the Nidek ARK-510A and Lenstar for minimum and maximum keratometry were -0.70 to 0.72 D and -0.79 to 0.85 D, respectively. Agreement between devices was best in emmetropes and worst in hyperopes. For the astigmatic vector components, agreement between devices was poor overall, with the best agreement between the Nidek ARK-510A and Pentacam. CONCLUSIONS Keratometry measured with the Nidek ARK-510A was not significantly different from the Pentacam and Lenstar, and this device can be used in screening programs in emmetropes.
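The 95% limits of agreement quoted above follow the standard Bland-Altman form: mean difference ± 1.96 × SD of the paired differences. A minimal sketch (the readings below are illustrative, not from the study):

```python
import math

def limits_of_agreement(device_a, device_b):
    """Bland-Altman 95% limits of agreement between paired readings.

    Returns (lower, upper) = mean difference +/- 1.96 * SD(differences).
    """
    diffs = [a - b for a, b in zip(device_a, device_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical keratometry readings (D) from two devices:
k_device1 = [43.0, 43.5, 44.0, 42.8, 43.2]
k_device2 = [42.8, 43.6, 43.9, 42.9, 43.0]
low, high = limits_of_agreement(k_device1, k_device2)
```

Whether a given LoA interval is acceptable is a clinical judgment: a half-diopter spread may be tolerable for screening but not for intraocular lens power calculation, which is why agreement, not correlation, is the relevant statistic here.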
Affiliation(s)
- Hassan Hashemi
- Noor Research Center for Ophthalmic Epidemiology, Noor Eye Hospital, Tehran, Iran
- Samira Heydarian
- Department of Rehabilitation Science, School of Allied Medical Sciences, Mazandaran University of Medical Sciences, Sari, Iran
- Mehdi Khabazkhoob
- Department of Medical Surgical Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Abbasali Yekta
- Refractive Errors Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
- Mohammad Hassan Emamian
- Ophthalmic Epidemiology Research Center, Shahroud University of Medical Sciences, Shahroud, Iran
- Akbar Fotouhi
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
63
Zhao J, Liu H, Cai W. Numerical and experimental validation of a single-camera 3D velocimetry based on endoscopic tomography. Appl Opt 2019;58:1363-1373. PMID: 30874020. DOI: 10.1364/AO.58.001363.
Abstract
Tomographic velocimetry, as a 3D technique, has attracted substantial research interest in recent years due to the pressing need for investigations of complex turbulent flows, which are inherently inhomogeneous. However, tomographic velocimetry usually suffers from high experimental costs, especially the formidable expense of multiple high-speed cameras and the excitation laser source. To overcome this limitation, a cost-effective technique called endoscopic tomographic velocimetry was developed in this work. As a single-camera system, it registers nine projections of the target 3D luminous field at consecutive time instants from different orientations using one camera and customized fiber bundles, whereas a classical tomographic velocimetry system would require the same number of cameras. Extensive numerical simulations were conducted with three inversion algorithms and two velocity calculation methods. According to RMS error analysis, the algebraic reconstruction technique outperformed the other two inversion algorithms, and the 3D optical flow method exhibited better performance than cross-correlation in terms of both accuracy and noise immunity. Proof-of-concept experiments were also performed to validate the developed system. The results suggested that the average reconstruction error of an artificially generated 3D velocity field was less than 6%, indicating good performance of the velocimetry system. Although this technique was demonstrated by reconstructing continuous luminous fields, it can easily be extended to discrete ones, which are typically adopted in particle image velocimetry. This technique is promising not only for flow diagnostics but also for other research areas such as biomedical imaging.
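The algebraic reconstruction technique (ART) mentioned above is essentially Kaczmarz's method: the field estimate is updated one projection ray at a time via x ← x + λ(bᵢ − aᵢ·x)/‖aᵢ‖² · aᵢ. A toy sketch on a two-voxel system (illustrative only, not the authors' implementation, where each row of A would hold the path weights of one camera ray):

```python
def art_solve(A, b, iterations=200, relaxation=1.0):
    """Algebraic reconstruction technique (Kaczmarz iteration): cycle
    through the rows of A, projecting the current estimate onto the
    hyperplane of each ray equation a_i . x = b_i in turn."""
    x = [0.0] * len(A[0])
    for _ in range(iterations):
        for row, bi in zip(A, b):
            dot = sum(a * xi for a, xi in zip(row, x))
            norm_sq = sum(a * a for a in row)
            if norm_sq == 0:
                continue
            scale = relaxation * (bi - dot) / norm_sq
            x = [xi + scale * a for xi, a in zip(x, row)]
    return x

# Toy "tomography": two voxels, a sum projection and a difference
# projection; the true field is x = [2, 1].
A = [[1.0, 1.0],
     [1.0, -1.0]]
b = [3.0, 1.0]
x = art_solve(A, b)
```

Because real tomographic systems are underdetermined and noisy, ART is typically run with a relaxation factor below 1 and stopped early rather than iterated to convergence.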
|
64
|
Abstract
Wheat grain yield (GY) and quality are particularly susceptible to nitrogen (N) fertilizer management. However, in rain-fed Mediterranean environments, crop N requirements might be variable due to the effects of water availability on crop growth. Therefore, in-season assessment of crop N status is needed in order to apply N fertilizer in a cost-effective way while reducing environmental impacts. Remote sensing techniques might be useful for assessing in-season crop N status. In this study, we evaluated the capacity of vegetation indices formulated using blue (B), green (G), red (R) and near-infrared (NIR) bands obtained with a consumer-grade camera to assess wheat N status. Chlorophyll Content Index (CCI) and fractional intercepted PAR (fIPAR) were measured at three phenological stages, and GY and biomass were determined at harvest. Indices formulated using RG bands and the normalized difference vegetation index (NDVI) were significantly correlated with both CCI and fIPAR at the different phenological stages (0.71 < r < 0.81, P < 0.01). Moreover, indices formulated using RG bands were capable of differentiating unfertilized and fertilized plots. In addition, RGB indices and NDVI were found to be related to both crop biomass and GY at harvest, particularly when data were obtained at the initial grain filling stage (r > 0.80, P < 0.01). Finally, RGB indices and NDVI obtained with a consumer-grade camera showed capacity comparable to that of a spectroradiometer at assessing chlorophyll content and predicting both crop biomass and GY at harvest. This study highlights the potential of standard and modified consumer-grade cameras for assessing canopy traits related to crop N status and GY in wheat under Mediterranean conditions.
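Indices of this kind follow the standard normalized-difference form; a minimal sketch using NDVI and the normalized green-red difference index (NGRDI) as an illustrative RGB-band index, which may differ from the exact indices used in the study:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ngrdi(green, red):
    """Normalized green-red difference index from RGB bands: (G - R) / (G + R)."""
    green, red = np.asarray(green, float), np.asarray(red, float)
    return (green - red) / (green + red)

# Toy per-pixel band values (0-255 digital numbers).
red   = np.array([60.0, 90.0])
green = np.array([120.0, 80.0])
nir   = np.array([200.0, 110.0])
print(ndvi(nir, red))    # well-vegetated pixel -> higher NDVI
print(ngrdi(green, red))
```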
Affiliation(s)
- Enric Fernández
- Geomatics division, Centre Tecnològic de Telecomunicacions de Catalunya, Castelldefels, Barcelona, Spain
- Gil Gorchs
- Departament d’Enginyeria Agroalimentària i Biotecnologia, Universitat Politècnica de Catalunya, Castelldefels, Barcelona, Spain
- Lydia Serrano
- Departament d’Enginyeria Agroalimentària i Biotecnologia, Universitat Politècnica de Catalunya, Castelldefels, Barcelona, Spain
|
65
|
Serabyn E. Pupil segmentation in the light-field camera and its relation to 3D object positions and the reconstructed depth of field. Appl Opt 2019; 58:A273-A282. [PMID: 30874008 DOI: 10.1364/ao.58.00a273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Accepted: 01/01/2019] [Indexed: 06/09/2023]
Abstract
A ray-trace simulation of the light-field camera is used to calculate point source responses as a function of 3D source positions. Each point source location yields a unique and well-determined segmented-pupil pattern in the lenslet array's focal plane, with lateral object offsets changing the pattern's location and symmetry, and defocus distances altering the pattern's diameter. Segmented-pupil images can thus be used to infer point sources' 3D locations. Numerical simulations show that the centroids and widths of segmented pupil images can be used to deduce lateral image positions to the size of a detector pixel, and image defocus to the accuracy of the lenslet focal length. In sparse-source cases, such as fluorescence microscopy or particle tracking, 3D point-source locations can thus be accurately determined from the observed point source response patterns. The degree of pupil segmentation also directly constrains the ability to refocus light-field images: for image defocus distances large enough that the number of pupil segments exceeds the number of pixels within a "whole" pupil behind a single lenslet, the image can no longer be brought to focus numerically, thus defining the light-field camera's depth of field. This constraint implies a depth of field larger than the usual imaging depth of focus by a factor of the number of detector pixels per lenslet, consistent with the general expectation.
|
66
|
Nguyen DT, Pham TD, Lee MB, Park KR. Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information. Sensors (Basel) 2019; 19:s19020410. [PMID: 30669531 PMCID: PMC6359417 DOI: 10.3390/s19020410] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Revised: 01/09/2019] [Accepted: 01/17/2019] [Indexed: 12/02/2022]
Abstract
Face-based biometric recognition systems that can recognize human faces are widely employed in places such as airports, immigration offices, and companies, and in applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who might bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection have utilized only spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection method that uses both spatial and temporal information, combining deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) with handcrafted features. Through experiments using two public datasets, we demonstrate that the temporal information is sufficient for detecting attacks using face images. In addition, it is established that the handcrafted image features efficiently enhance the detection performance of deep features, and the proposed method outperforms previous methods.
Affiliation(s)
- Dat Tien Nguyen
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Tuyen Danh Pham
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Min Beom Lee
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
|
67
|
Wang Y, Zhang X, Chen J, Cheng Z, Wang D. Camera sensor-based contamination detection for water environment monitoring. Environ Sci Pollut Res Int 2019; 26:2722-2733. [PMID: 30484049 DOI: 10.1007/s11356-018-3645-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Accepted: 10/30/2018] [Indexed: 06/09/2023]
Abstract
Water environment monitoring is of great importance to human health, ecosystem sustainability, and water transport. Unlike traditional water quality monitoring problems, this paper focuses on visual perception of the water environment. We first introduce the development of a customized aquatic sensor node equipped with an embedded camera sensor. Based on this platform, we present an efficient and holistic contamination detection approach, which can automatically adapt to the detection of floating debris in dynamic waters or the identification of salient regions in static waters. Our approach is specifically designed based on compressed sensing theory to give full consideration to the unique challenges of the water environment and the resource constraints on sensor nodes. Both laboratory and field experiments demonstrate that the proposed method can quickly and accurately detect various types of water pollutants and is a better choice for camera sensor-based water environment monitoring than other methods.
Affiliation(s)
- Yong Wang
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan, 430074, China.
- Xufan Zhang
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan, 430074, China
- Jun Chen
- School of Automation, China University of Geosciences, Wuhan, 430074, China
- Zhuo Cheng
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan, 430074, China
- Dianhong Wang
- School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan, 430074, China
|
68
|
Nitzinger V, Held S, Kevane B, Eudave Y. Latino Health Perceptions in Rural Montana: Engaging Promotores de Salud Using Photovoice Through Facebook. Fam Community Health 2019; 42:150-160. [PMID: 30768480 PMCID: PMC6383787 DOI: 10.1097/fch.0000000000000213] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The primary purposes of this study were to use photovoice with Facebook to explore health perceptions and health needs among promotores living in rural Montana and to build community among geographically dispersed promotores. Seven promotores participated in a photovoice project where they uploaded photographs and shared comments in a private Facebook group. Emergent themes based on the promotores' health perceptions, discussions, and interviews were transcribed and coded. Findings of this study will be used to assess health perceptions and needs of the promotores and Latino community in rural Montana.
Affiliation(s)
- Violeta Nitzinger
- Departments of Health and Human Development (Miss Nitzinger and Eudave and Dr Held) and Letters and Science (Dr Kevane), Montana State University, Bozeman
|
69
|
Bao Z, Sha J, Li X, Hanchiso T, Shifaw E. Monitoring of beach litter by automatic interpretation of unmanned aerial vehicle images using the segmentation threshold method. Mar Pollut Bull 2018; 137:388-398. [PMID: 30503448 DOI: 10.1016/j.marpolbul.2018.08.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Revised: 07/30/2018] [Accepted: 08/01/2018] [Indexed: 06/09/2023]
Abstract
This study was aimed at monitoring beach litter using an unmanned aerial vehicle (UAV) in the coastal city of Fuzhou, China. The data analysis shows that the optical images obtained by digital cameras on the UAV can help to identify and monitor beach litter using remote sensing and GIS technologies. The threshold method can effectively segment the UAV image in the beach area. It is useful for quickly monitoring the distribution of beach litter in the area of interest, and hence it can help to provide effective technical support for the investigation and assessment of coastal beach litter.
Affiliation(s)
- Zhongcong Bao
- State Key Laboratory for Subtropical Mountain Ecology of the Ministry of Science and Technology and Fujian Province, Fujian Normal University, Fuzhou, China; School of Geographical Sciences, Fujian Normal University, Fuzhou, China; Investigation and Surveying Institute, Fuzhou, China
- Jinming Sha
- State Key Laboratory for Subtropical Mountain Ecology of the Ministry of Science and Technology and Fujian Province, Fujian Normal University, Fuzhou, China; School of Geographical Sciences, Fujian Normal University, Fuzhou, China; China-Europe Center for Environment and Landscape Management, Fuzhou, China.
- Xiaomei Li
- College of Environmental Science & Engineering, Fujian Normal University, China.
- Terefe Hanchiso
- State Key Laboratory for Subtropical Mountain Ecology of the Ministry of Science and Technology and Fujian Province, Fujian Normal University, Fuzhou, China; School of Geographical Sciences, Fujian Normal University, Fuzhou, China.
- Eshetu Shifaw
- State Key Laboratory for Subtropical Mountain Ecology of the Ministry of Science and Technology and Fujian Province, Fujian Normal University, Fuzhou, China; School of Geographical Sciences, Fujian Normal University, Fuzhou, China
|
70
|
Marek AJ, Chu EY, Ming ME, Khan ZA, Kovarik CL. Piloting the Use of Smartphones, Reminders, and Accountability Partners to Promote Skin Self-Examinations in Patients with Total Body Photography: A Randomized Controlled Trial. Am J Clin Dermatol 2018; 19:779-785. [PMID: 30062632 DOI: 10.1007/s40257-018-0372-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
OBJECTIVE The aim of this study was to evaluate the use of a mobile application (app) in patients already using total body photography (TBP) to increase skin self-examination (SSE) rates and to pilot the effectiveness of examination reminders and accountability partners. DESIGN Randomized controlled trial with a computer-generated randomization table to allocate interventions. SETTING University of Pennsylvania pigmented lesion clinic. PARTICIPANTS 69 patients aged 18 years or older with an iPhone/iPad, who were already in possession of TBP photographs. INTERVENTION A mobile app loaded with digital TBP photos for all participants, and either (1) the mobile app only, (2) skin examination reminders, (3) an accountability partner, or (4) reminders and an accountability partner. MAIN OUTCOME MEASURE Change in SSE rates as assessed by enrollment and end-of-study surveys 6 months later. RESULTS Eighty-one patients completed informed consent; however, 12 patients did not complete trial enrollment procedures due to device incompatibility, leaving 69 patients who were randomized and analyzed (mean age 54.3 years, standard deviation 13.9). SSE rates increased significantly from 58% at baseline to 83% at 6 months (odds ratio 2.64, 95% confidence interval 1.20-4.09), with no difference among the intervention groups. The group with examination reminders alone reported the highest overall satisfaction (94%), and the group with accountability partners alone reported the lowest (71%). CONCLUSION A mobile app alone, or with reminders and/or accountability partners, was found to be an effective tool that can help to increase SSE rates. Skin examination reminders may help provide a better overall experience for a subset of patients. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT02520622.
Affiliation(s)
- Andrew J Marek
- Department of Dermatology, Johns Hopkins School of Medicine, Baltimore, MD, USA.
- Department of Medicine, MedStar Harbor Hospital, Baltimore, MD, USA.
- Department of Dermatology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA.
- Emily Y Chu
- Department of Dermatology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Michael E Ming
- Department of Dermatology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Zeeshan A Khan
- Department of Medicine, Rowan University School of Osteopathic Medicine, Stratford, NJ, USA
- Carrie L Kovarik
- Department of Dermatology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
|
71
|
Peller J, Farahi F, Trammell SR. Hyperspectral imaging system based on a single-pixel camera design for detecting differences in tissue properties. Appl Opt 2018; 57:7651-7658. [PMID: 30462028 DOI: 10.1364/ao.57.007651] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2018] [Accepted: 05/29/2018] [Indexed: 06/09/2023]
Abstract
Optical spectroscopy can be used to distinguish between healthy and diseased tissue. In this study, the design and testing of a single-pixel hyperspectral imaging (HSI) system that uses autofluorescence emission from collagen (400 nm) and nicotinamide adenine dinucleotide phosphate (475 nm), along with differences in the optical reflectance spectra, to differentiate between healthy and thermally damaged tissue are discussed. The changes in protein autofluorescence and reflectance due to thermal damage are studied in ex vivo porcine tissue models. Thermal lesions were created in porcine skin (n=12) and liver (n=15) samples using an IR laser. The damaged regions were clearly visible in the hyperspectral images. Sizes of the thermally damaged regions as measured via HSI are compared to sizes of these regions as measured in white-light images and via physical measurement. Good agreement among the sizes obtained from the hyperspectral images, white-light imaging, and physical measurements was found. The HSI system can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during surgery/biopsy and cancer diagnosis and staging.
|
72
|
Beltran A, Dadabhoy H, Ryan C, Dholakia R, Jia W, Baranowski J, Sun M, Baranowski T. Dietary Assessment with a Wearable Camera among Children: Feasibility and Intercoder Reliability. J Acad Nutr Diet 2018; 118:2144-2153. [PMID: 30115556 DOI: 10.1016/j.jand.2018.05.013] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2017] [Accepted: 05/14/2018] [Indexed: 11/17/2022]
Abstract
BACKGROUND The eButton, a multisensor device worn on the chest, uses a camera to passively capture images of everything in front of the child throughout the day. These images can be analyzed to provide a passive method of dietary intake assessment. OBJECTIVE This study assessed the eButton's feasibility and intercoder reliability for dietary intake assessment. DESIGN Children were recruited in the summer and fall of 2015, in Houston, TX, to wear the eButton for 2 full days of dietary images, and the child-parent dyad participated in a following-day interview to verify what dietitians recorded from the images. PARTICIPANTS/SETTING Thirty 9- to 13-year-old children participated during days convenient to them. MAIN OUTCOME MEASURES Two dietitians independently and manually reviewed the images to identify eating events, foods in those events, and portion sizes. STATISTICAL ANALYSES PERFORMED Descriptive statistics of agreements and disagreements were calculated between dietitians and with children; t tests and Bland-Altman plots of differences in total kilocalories were calculated between dietitians and between initial dietitian estimates and those finalized after the verification interviews. RESULTS The dietitians agreed on the identity of 60.5% of the 1,026 foods but disagreed on 28.6% of the foods and on the names of 10.8% of the foods. After the verification interviews, the dietitians agreed with the child-parent dyads on the identity of 77.0% of the 921 foods; the child-parent dyad identified 12.4% of the day's foods when images were not available or not clear; the child-parent dyad clarified that 5.4% of the foods identified were not consumed by the child; and the child-parent dyad clarified the identity of 5.2% of the foods. A software-based approach (three-dimensional wire mesh) could be used to estimate portion size for 24% of the foods, and professional judgment was required for 67.8%. Mean caloric intakes per day were not statistically significantly different between dietitians but were different between dietitians and child-parent dyads in total and on day 2. CONCLUSIONS An early test of intercoder reliability of an all-day image method of dietary intake assessment obtained intercoder agreement between the two dietitians processing these images of intraclass correlation coefficient=0.67. A following-day verification interview with the child and parent was necessary to ensure completeness of estimates. Several feasibility problems occurred, which may be remedied with additional participant and dietitian training and further technological development.
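An intraclass correlation coefficient of the kind reported above can be computed, for two raters, with the two-way random, absolute-agreement form ICC(2,1); the kilocalorie values below are hypothetical, not the study's data:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, single rater, absolute agreement."""
    x = np.asarray(ratings, float)       # shape (n_subjects, k_raters)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows (subjects)
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # columns (raters)
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical daily kilocalorie estimates from two dietitians for five days.
kcal = np.array([[1500, 1520], [1800, 1750], [2100, 2180], [1650, 1600], [1950, 2000]])
print(round(icc2_1(kcal), 2))  # 0.98
```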
|
73
|
Nguyen DT, Pham TD, Lee YW, Park KR. Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor. Sensors (Basel) 2018; 18:s18082601. [PMID: 30096832 PMCID: PMC6111611 DOI: 10.3390/s18082601] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/20/2018] [Revised: 08/02/2018] [Accepted: 08/05/2018] [Indexed: 11/27/2022]
Abstract
Iris recognition systems have been used in high-security-level applications because of their high recognition rate and the distinctiveness of iris patterns. However, as reported by recent studies, an iris recognition system can be fooled by the use of artificial iris patterns, leading to a reduction in its security level. The accuracy of previous presentation attack detection research is limited because only features extracted from the global iris region image were used. To overcome this problem, we propose a new presentation attack detection method for iris recognition that combines features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results obtained with each kind of image feature are fused, using two fusion methods (feature level and score level), to enhance the detection ability of each kind of image feature. Through extensive experiments using two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of our proposed method by providing smaller detection errors than those produced by previous studies.
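The two fusion strategies can be sketched generically as follows; the scores, weights, and 0.5 decision threshold are illustrative, not the paper's trained values:

```python
import numpy as np

def score_level_fusion(local_scores, global_scores, w=0.5):
    """Weighted-sum score fusion: combine per-sample attack scores from a
    local-region classifier and a global-region classifier."""
    return w * np.asarray(local_scores) + (1 - w) * np.asarray(global_scores)

def feature_level_fusion(local_feats, global_feats):
    """Feature-level fusion: concatenate the two feature vectors before
    training a single classifier on the combined representation."""
    return np.concatenate([local_feats, global_feats], axis=-1)

# Toy scores in [0, 1]; higher = more likely a presentation attack.
local_s  = np.array([0.9, 0.2, 0.6])
global_s = np.array([0.7, 0.1, 0.8])
fused = score_level_fusion(local_s, global_s)
is_attack = fused > 0.5
print(fused, is_attack)
```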
Affiliation(s)
- Dat Tien Nguyen
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Tuyen Danh Pham
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Young Won Lee
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
|
74
|
Milocco A, Conroy S, Popovichev S, Sergienko G, Huber A. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET). Radiat Prot Dosimetry 2018; 180:109-114. [PMID: 29087509 DOI: 10.1093/rpd/ncx220] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/19/2017] [Accepted: 09/26/2017] [Indexed: 06/07/2023]
Abstract
The neutron and gamma radiation in large fusion reactors is responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful for good practice in the operation of the video systems.
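The 'equivalent 1 MeV neutron fluence in silicon' is a damage-function-weighted sum over energy groups; the group fluences and damage weights below are made-up illustrative numbers, not the tabulated ASTM E722 data:

```python
import numpy as np

# Hypothetical energy-group fluences (n/cm^2) and silicon displacement-damage
# weights relative to the damage at 1 MeV (illustrative values only).
fluence = np.array([1e9, 5e8, 2e8])   # per energy group
damage  = np.array([0.6, 1.0, 1.8])   # relative damage effectiveness

# Equivalent 1 MeV fluence: sum of group fluences weighted by relative damage.
eq_1mev_fluence = np.sum(fluence * damage)
print(f"{eq_1mev_fluence:.3e}")  # 1.460e+09
```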
Affiliation(s)
- Alberto Milocco
- Physics Department 'G. Occhialini', University of Milano-Bicocca, Piazza della Scienza 3, Milan, Italy
- Sean Conroy
- Department of Physics and Astronomy, Uppsala University, Uppsala, Sweden
- Sergey Popovichev
- Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxfordshire, UK
- Gennady Sergienko
- Institut für Energie-und Klimaforschung - Plasmaphysik, Forschungszentrum Jülich GmbH, Jülich, Germany
- Alexander Huber
- Institut für Energie-und Klimaforschung - Plasmaphysik, Forschungszentrum Jülich GmbH, Jülich, Germany
|
75
|
Taniguchi K, Nishikawa A. Mouthwitch: A Novel Head Mount Type Hands-Free Input Device that Uses the Movement of the Temple to Control a Camera. Sensors (Basel) 2018; 18:E2273. [PMID: 30011872 PMCID: PMC6069124 DOI: 10.3390/s18072273] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 07/11/2018] [Accepted: 07/12/2018] [Indexed: 11/30/2022]
Abstract
We have developed an interface (mouthwitch) with which pictures can be taken with a head-mounted camera, hands-free, simply by "opening your mouth continuously for approximately one second and then closing it again". The mouthwitch uses a sensor equipped with an LED and a phototransistor on the temple to optically measure the changes in the form of the temple that occur when the mouth is opened and closed. Eight test subjects (males and females aged between 21 and 44 years old) performed evaluation tests using the mouthwitch when resting, speaking, chewing, walking, and running. The results showed that all test subjects were able to operate the device by opening and closing the mouth, and the measurements of the temple shape changes that occurred at this time were highly reproducible. Additionally, the average accuracy obtained for the eight test subjects through the verification tests was 100% when resting, chewing, or walking, and 99.8% when speaking or running. Similarly, the average precision was 100% for all items, and the average recall was 100% when resting or chewing, 98.8% when speaking, 97.5% when walking, and 87.5% when running.
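The accuracy, precision, and recall figures above follow the usual confusion-matrix definitions; a minimal sketch with hypothetical detections:

```python
import numpy as np

def prf(y_true, y_pred):
    """Accuracy, precision, and recall for binary mouth-opening detections."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)    # true positives
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives (missed events)
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return float(accuracy), float(precision), float(recall)

# 8 hypothetical trials: the detector misses one true mouth-opening event.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0]
print(prf(truth, pred))  # (0.875, 1.0, 0.75)
```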
Affiliation(s)
- Kazuhiro Taniguchi
- Graduate School of Information Sciences, Hiroshima City University, 3-4-1 Ozukahigashi, Asaminami-ku, Hiroshima 731-3194, Japan.
- Atsushi Nishikawa
- Faculty of Textile Science and Technology, Shinshu University, 3-15-1 Tokida, Ueda, Nagano 386-8567, Japan.
- Division of Biological and Medical Fibers, Institute for Fiber Engineering (IFES), Interdisciplinary Cluster for Cutting Edge Research (ICCER), Shinshu University, 3-15-1 Tokida, Ueda, Nagano 386-8567, Japan.
|
76
|
Prakalapakorn SG, Freedman SF, Hutchinson AK, Saehout P, Cetinkaya-Rundel M, Wallace DK, Kulvichit K. Real-World Simulation of an Alternative Retinopathy of Prematurity Screening System in Thailand: A Pilot Study. J Pediatr Ophthalmol Strabismus 2018; 55:245-253. [PMID: 29809267 PMCID: PMC6482815 DOI: 10.3928/01913913-20180327-04] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/07/2017] [Accepted: 09/28/2017] [Indexed: 01/04/2023]
Abstract
PURPOSE To evaluate an alternative retinopathy of prematurity (ROP) screening system that identifies infants meriting examination by an ophthalmologist in a middle-income country. METHODS The authors hypothesized that grading posterior pole images for the presence of pre-plus or plus disease has high sensitivity to identify infants with type 1 ROP that requires treatment. Part 1 of the study evaluated the feasibility of having a non-ophthalmologist health care worker obtain retinal images of prematurely born infants using a non-contact retinal camera (Pictor; Volk Optical, Inc., Mentor, OH) that were of sufficient quality to grade for pre-plus or plus disease. Part 2 investigated the accuracy of grading these images to identify infants with type 1 ROP. The authors prospectively recruited infants at Chulalongkorn University Hospital (Bangkok, Thailand). On days infants underwent routine ROP screening, a trained health care worker imaged their retinas with Pictor. Two ROP experts graded these serial images from a remote location for image gradability and posterior pole disease. RESULTS Fifty-six infants were included. Overall, 69.4% of infant imaging sessions were gradable. Among gradable images, the sensitivity of both graders for identifying an infant with type 1 ROP by grading for the presence of pre-plus or plus disease was 1.0 (95% confidence interval [CI]: 0.31 to 1.0) for grader 1 and 1.0 (95% CI: 0.40 to 1.0) for grader 2. The specificity was 0.93 (95% CI: 0.76 to 0.99) for grader 1 and 0.74 (95% CI: 0.53 to 0.88) for grader 2. CONCLUSIONS It was feasible for a trained non-ophthalmologist health care worker to obtain retinal images of infants using the Pictor that were of sufficient quality to identify infants with type 1 ROP. [J Pediatr Ophthalmol Strabismus. 2018;55(4):245-253.].
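Exact (Clopper-Pearson) confidence intervals of the kind quoted above can be computed from the beta distribution; the counts below are hypothetical, and the study's own interval method may differ:

```python
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a proportion,
    a standard way to bound sensitivity/specificity from small screening samples."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

# Hypothetical: a grader flags all 3 infants with type 1 ROP (sensitivity 3/3)
# and correctly clears 26 of 28 infants without it (specificity 26/28).
print(clopper_pearson(3, 3))    # lower bound ~0.29, upper 1.0
print(clopper_pearson(26, 28))
```

Note how small denominators produce the very wide intervals seen in the abstract.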
Affiliation(s)
- Piyada Saehout
- Department of Ophthalmology, Chulalongkorn University, Bangkok, Thailand
- Kittisak Kulvichit
- Department of Ophthalmology, Chulalongkorn University, Bangkok, Thailand
|
77
|
Lai KHW, Lee RPW, Yiu EPF. Ultrawide-field Retinal Selfie by Smartphone, High-definition Television, and a Novel Clip-On Lens. Ophthalmology 2018; 125:1027. [PMID: 29935662 DOI: 10.1016/j.ophtha.2018.03.027] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2018] [Revised: 03/13/2018] [Accepted: 03/16/2018] [Indexed: 11/29/2022] Open
|
78
|
Nishiguchi S, Wada N, Yamashiro H, Ishibashi H, Takeuchi I. Continuous recordings of the coral bleaching process on Sesoko Island, Okinawa, Japan, over about 50 days using an underwater camera equipped with a lens wiper. Mar Pollut Bull 2018; 131:422-427. [PMID: 29886967 DOI: 10.1016/j.marpolbul.2018.04.020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2017] [Revised: 03/29/2018] [Accepted: 04/10/2018] [Indexed: 06/08/2023]
Abstract
The colours of the hermatypic corals Porites sp. and Acropora cytherea at Sesoko Island, Okinawa, Japan, were photographed continuously, from 19 July to 6 September 2016, by an underwater camera equipped with a lens wiper. The average seawater temperature during the study period was 29.9 °C. The daily average seawater temperature (DAST) was >30.0 °C until 23 August 2016, and a maximum value of 31.2 °C was recorded on 2 August 2016. Red, green, and blue (RGB) values of these corals were analysed based on photographs taken at 14:00. The RGB values of Porites sp. were stable throughout the observation period, while those of A. cytherea gradually increased (i.e. moved toward the "white" end of the spectrum) until the beginning of September. The present study demonstrated the usefulness of RGB analysis of photographs taken by an underwater camera equipped with a lens wiper for monitoring coral bleaching.
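The RGB analysis described above reduces to averaging channel values over a region of interest; a minimal sketch with a synthetic frame (the region coordinates and colour values are illustrative):

```python
import numpy as np

def mean_rgb(image, roi):
    """Average R, G, B values over a rectangular region of interest.
    `image` is an (H, W, 3) array; `roi` is (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    patch = np.asarray(image, float)[r0:r1, c0:c1]
    return patch.reshape(-1, 3).mean(axis=0)

# Toy frame: a pale ("bleached") coral patch on a dark background.
frame = np.zeros((100, 100, 3))
frame[40:60, 40:60] = [230, 225, 220]   # high, near-equal RGB = whitened colony
print(mean_rgb(frame, (40, 60, 40, 60)))  # [230. 225. 220.]
```

Tracking these means over daily frames gives the whitening trend reported for A. cytherea.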
Affiliation(s)
- Shingo Nishiguchi, Graduate School of Agriculture, Ehime University, 3-5-7 Tarumi, Matsuyama, Ehime 790-8566, Japan
- Naohisa Wada, Department of Marine Science and Resources, College of Bioresource Science, Nihon University, 1866 Kameino, Fujisawa, Kanagawa 252-0880, Japan
- Hideyuki Yamashiro, Sesoko Station, Tropical Biosphere Research Center, University of the Ryukyus, 3422 Sesoko, Motobu, Okinawa 905-0227, Japan
- Hiroshi Ishibashi, Graduate School of Agriculture, Ehime University, 3-5-7 Tarumi, Matsuyama, Ehime 790-8566, Japan; Center of Advanced Technology for the Environment, Graduate School of Agriculture, Ehime University, 3-5-7 Tarumi, Matsuyama, Ehime 790-8566, Japan
- Ichiro Takeuchi, Graduate School of Agriculture, Ehime University, 3-5-7 Tarumi, Matsuyama, Ehime 790-8566, Japan; Center of Advanced Technology for the Environment, Graduate School of Agriculture, Ehime University, 3-5-7 Tarumi, Matsuyama, Ehime 790-8566, Japan

79
Grujić D, Vasiljević D, Pantelić D, Tomić L, Stamenković Z, Jelenković B. Infrared camera on a butterfly's wing. Opt Express 2018; 26:14143-14158. [PMID: 29877457 DOI: 10.1364/oe.26.014143] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Accepted: 04/26/2018] [Indexed: 06/08/2023]
Abstract
Thermal cameras were constructed long ago, but their working principles and complex technologies still limit their resolution, total number of pixels, and sensitivity. We address the problem of finding a new sensing mechanism that surpasses the existing limits of thermal radiation detection. Here we reveal such a mechanism on the butterfly wing, whose wing-scales act as pixels of an imaging array on a thermal detector. We observed that the tiniest features of a Morpho butterfly wing-scale match the mean free path of air molecules at atmospheric pressure - a condition under which radiation-induced heating produces an additional, thermophoretic force that deforms the wing-scales. The resulting deformation field was imaged holographically with mK temperature sensitivity and 200 Hz response speed. By imitating butterfly wing-scales, the effect can be further amplified through a suitable choice of material, working pressure, sensor design, and detection method. The technique is universally applicable to any nano-patterned, micro-scale system in other spectral ranges, such as UV and terahertz.
80
Liu J, Yuan Y, Zhou Y, Zhu X, Syed TN. Experiments and Analysis of Close-Shot Identification of On-Branch Citrus Fruit with RealSense. Sensors (Basel) 2018; 18:s18051510. [PMID: 29751594 PMCID: PMC5982123 DOI: 10.3390/s18051510] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Revised: 05/04/2018] [Accepted: 05/08/2018] [Indexed: 12/01/2022]
Abstract
Fruit recognition based on depth information has been a hot topic due to its advantages. However, present equipment and methods cannot meet the requirements of rapid and reliable recognition and location of fruits at close shot for robot harvesting. To solve this problem, we propose a recognition algorithm for citrus fruit based on RealSense. The method effectively utilizes depth-point cloud data in a close-shot range of 160 mm and the different geometric features of fruit and leaf to recognize fruits with an intersection curve cut by the depth-sphere. Experiments on close-shot recognition of six varieties of fruit under different conditions were carried out. Detection rates under slight occlusion and adhesion ranged from 80% to 100%. However, severe occlusion and adhesion still had a great influence on the overall success rate of on-branch fruit recognition, which was 63.8%. The size of the fruit had a more noticeable impact on the success rate of detection. Moreover, because detection uses close-shot near-infrared imaging, there was no obvious difference in recognition between bright and dark conditions. The advantages of close-shot limited target detection with RealSense, fast foreground and background removal, and the simplicity of a high-precision algorithm may contribute to highly real-time vision-servo operations of harvesting robots.
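The recognition step above cuts the close-shot point cloud with a depth-sphere to separate a candidate fruit from the foliage behind it. A simplified sketch of that kind of cut, assuming an N×3 point cloud in millimetres with z as distance from the sensor; the 40 mm radius and the synthetic scene are illustrative, not the authors' parameters:

```python
import numpy as np

def depth_sphere_cut(points, radius_mm=40.0):
    """Keep points inside a sphere centred on the point nearest the camera.

    points: N x 3 array (x, y, z) in mm. A crude stand-in for the
    depth-sphere intersection used to isolate a fruit-sized blob from
    leaves lying farther from the sensor.
    """
    nearest = points[np.argmin(points[:, 2])]
    dist = np.linalg.norm(points - nearest, axis=1)
    return points[dist <= radius_mm]

# Synthetic scene: a fruit-like cluster at z ~ 160 mm, leaves at z ~ 230 mm.
rng = np.random.default_rng(0)
fruit = rng.normal([0, 0, 160], 5, size=(200, 3))
leaves = rng.normal([30, 30, 230], 5, size=(200, 3))
segment = depth_sphere_cut(np.vstack([fruit, leaves]), radius_mm=40.0)
```

With the sphere radius matched to typical fruit size, the cut retains the near cluster and discards the background foliage in one pass.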
Affiliation(s)
- Jizhan Liu, Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China
- Yan Yuan, Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China
- Yao Zhou, College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
- Xinxin Zhu, Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China
- Tabinda Naz Syed, Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China

81
Ennis R, Schiller F, Toscani M, Gegenfurtner KR. Hyperspectral database of fruits and vegetables. J Opt Soc Am A Opt Image Sci Vis 2018; 35:B256-B266. [PMID: 29603941 DOI: 10.1364/josaa.35.00b256] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2017] [Accepted: 02/09/2018] [Indexed: 06/08/2023]
Abstract
We have built a hyperspectral database of 42 fruits and vegetables. Both the outside (skin) and inside of the objects were imaged. We used a Specim VNIR HS-CL-30-V8E-OEM mirror-scanning hyperspectral camera and took pictures at a spatial resolution of ∼57 px/deg by 800 pixels at a wavelength resolution of ∼1.12 nm. A stable, broadband illuminant was used. Images and software are freely available on our webserver (http://www.allpsych.uni-giessen.de/GHIFVD; pronounced "gift"). We performed two kinds of analyses on these images. First, when comparing the insides and outsides of the objects, we observed that the insides were lighter than the skins, and that the hues of the insides and skins were significantly correlated (circular correlation=0.638). Second, we compared the color distribution within each object to corresponding human color discrimination thresholds. We found a significant correlation (0.75) between the orientation of ellipses fit to the chromaticity distributions of our fruits and vegetables with the orientations of interpolated MacAdam discrimination ellipses. This indicates a close relationship between sensory processing and the characteristics of environmental objects.
82
Abstract
PURPOSE Distinguishing neoplasm from normal brain parenchyma intraoperatively is critical for the neurosurgeon. 5-Aminolevulinic acid (5-ALA) has been shown to improve gross total resection and progression-free survival but has limited availability in the USA. Near-infrared (NIR) fluorescence has advantages over visible light fluorescence with greater tissue penetration and reduced background fluorescence. In order to prepare for the increasing number of NIR fluorophores that may be used in molecular imaging trials, we chose to compare a state-of-the-art, neurosurgical microscope (System 1) to one of the commercially available NIR visualization platforms (System 2). PROCEDURES Serial dilutions of indocyanine green (ICG) were imaged with both systems in the same environment. Each system's sensitivity and dynamic range for NIR fluorescence were documented and analyzed. In addition, brain tumors from six patients were imaged with both systems and analyzed. RESULTS In vitro, System 2 demonstrated greater ICG sensitivity and detection range (System 1 1.5-251 μg/l versus System 2 0.99-503 μg/l). Similarly, in vivo, System 2 demonstrated signal-to-background ratio (SBR) of 2.6 ± 0.63 before dura opening, 5.0 ± 1.7 after dura opening, and 6.1 ± 1.9 after tumor exposure. In contrast, System 1 could not easily detect ICG fluorescence prior to dura opening with SBR of 1.2 ± 0.15. After the dura was reflected, SBR increased to 1.4 ± 0.19 and upon exposure of the tumor SBR increased to 1.8 ± 0.26. CONCLUSION Dedicated NIR imaging platforms can outperform conventional microscopes in intraoperative NIR detection. Future microscopes with improved NIR detection capabilities could enhance the use of NIR fluorescence to detect neoplasm and improve patient outcome.
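The signal-to-background ratio (SBR) figures quoted above are ratios of mean fluorescence intensities between a tumor region and surrounding tissue. A minimal sketch of that computation, assuming grayscale NIR frames and boolean ROI masks; the frame and masks below are illustrative, not patient data:

```python
import numpy as np

def signal_to_background(frame, signal_mask, background_mask):
    """SBR = mean fluorescence in the signal ROI / mean in the background ROI."""
    return frame[signal_mask].mean() / frame[background_mask].mean()

# Illustrative 4x4 NIR frame: bright tumor pixels on a dim background.
frame = np.array([[10., 10., 10., 10.],
                  [10., 60., 60., 10.],
                  [10., 60., 60., 10.],
                  [10., 60., 60., 10.]][:4])
frame = np.array([[10., 10., 10., 10.],
                  [10., 60., 60., 10.],
                  [10., 60., 60., 10.],
                  [10., 10., 10., 10.]])
tumor = frame > 30        # boolean mask over the bright region
background = ~tumor
sbr = signal_to_background(frame, tumor, background)
```

Here the tumor region averages 60 and the background 10, giving an SBR of 6.0, on the order of the post-exposure values reported for System 2.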
Affiliation(s)
- Steve S Cho, Department of Neurosurgery, Hospital of the University of Pennsylvania, 235 South Eighth Street, Philadelphia, PA, 19106, USA; Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Ryan Zeh, Department of Neurosurgery, Hospital of the University of Pennsylvania, 235 South Eighth Street, Philadelphia, PA, 19106, USA
- John T Pierce, Department of Neurosurgery, Hospital of the University of Pennsylvania, 235 South Eighth Street, Philadelphia, PA, 19106, USA
- Ryan Salinas, Department of Neurosurgery, Hospital of the University of Pennsylvania, 235 South Eighth Street, Philadelphia, PA, 19106, USA
- Sunil Singhal, Department of Surgery, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- John Y K Lee, Department of Neurosurgery, Hospital of the University of Pennsylvania, 235 South Eighth Street, Philadelphia, PA, 19106, USA

83
Massei G, Coats J, Lambert MS, Pietravalle S, Gill R, Cowan D. Camera traps and activity signs to estimate wild boar density and derive abundance indices. Pest Manag Sci 2018; 74:853-860. [PMID: 29024317 DOI: 10.1002/ps.4763] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/01/2017] [Revised: 10/06/2017] [Accepted: 10/06/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. RESULTS PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km2. Increasing the density of camera traps above nine per km2 did not increase the precision of the estimate of wild boar density. CONCLUSION PAIs based on the number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km2 are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.
Affiliation(s)
- Giovanna Massei, National Wildlife Management Centre, Animal and Plant Health Agency, York, UK
- Julia Coats, National Wildlife Management Centre, Animal and Plant Health Agency, York, UK
- Mark Simon Lambert, National Wildlife Management Centre, Animal and Plant Health Agency, York, UK
- Robin Gill, Centre for Ecosystems, Society and Biosecurity, Forest Research, Farnham, UK
- Dave Cowan, National Wildlife Management Centre, Animal and Plant Health Agency, York, UK

84
Chen L, Parsons AM, Aria AB, Ciurea AM, Patel AB, Chan C, Griffin JR, Nguyen TH, Migden MR. Surgical site identification with personal digital device: A prospective pilot study. J Am Acad Dermatol 2018. [PMID: 29524583 DOI: 10.1016/j.jaad.2018.02.069] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
BACKGROUND Various means to facilitate accurate biopsy site identification have been proposed. OBJECTIVE To determine the accuracy of biopsy site identification using photographs taken with a patient's digital device by a dermatologist versus professional medical photography. METHODS Photographs of circled biopsy sites were taken with personal digital devices by the principal investigator (PI). Another set of photographs was taken by a professional photographer. Secondary photographs were taken of the biopsy site location pointed to by the staff and the PI on the basis of the personal digital device image and the professional medical photography, respectively. On the basis of the secondary photographs, 2 independent dermatologists determined whether the skin biopsy locations pointed out by the staff were consistent with the ones pointed out by the PI. RESULTS Per dermatologist A, the staff correctly identified all 53 biopsy sites. Per dermatologist B, the staff were correct on 51 of 53 observations. Dermatologist C, the final arbiter, concurred with dermatologist A on the 2 cases in which dermatologist B was not certain of the location of the biopsy site. LIMITATIONS The mean interval from initial biopsy to reidentification of the site was 36.2 days. CONCLUSION Utilizing patients' personal digital devices is a cost-effective, Health Insurance Portability and Accountability Act-compliant, and readily available means to identify skin biopsy sites.
Affiliation(s)
- Leon Chen, Department of Dermatology, The University of Texas McGovern Medical School at Houston, Fort Worth, Texas
- Adam M Parsons, Texas Center for Orthopedic and Spinal Disease, Fort Worth, Texas
- Alexander B Aria, Department of Dermatology, The University of Texas McGovern Medical School at Houston, Fort Worth, Texas
- Ana M Ciurea, Department of Dermatology, The University of Texas M.D. Anderson Cancer Center, Houston, Texas
- Anisha B Patel, Department of Dermatology, The University of Texas M.D. Anderson Cancer Center, Houston, Texas
- Christopher Chan, Department of Dermatology, The University of Texas M.D. Anderson Cancer Center, Houston, Texas
- Michael R Migden, Department of Dermatology, The University of Texas M.D. Anderson Cancer Center, Houston, Texas; Department of Head and Neck Surgery, The University of Texas M.D. Anderson Cancer Center, Houston, Texas

85
Raber M, Patterson M, Jia W, Sun M, Baranowski T. Utility of eButton images for identifying food preparation behaviors and meal-related tasks in adolescents. Nutr J 2018; 17:32. [PMID: 29477143 PMCID: PMC6389239 DOI: 10.1186/s12937-018-0341-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2017] [Accepted: 02/15/2018] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND Food preparation skills may encourage healthy eating. Traditional assessment of child food preparation employs self- or parent-proxy reporting methods, which are prone to error. The eButton is a wearable all-day camera that has promise as an objective, passive method for measuring child food preparation practices. PURPOSE This paper explores the feasibility of the eButton to reliably capture home food preparation behaviors and practices in a sample of pre- and early adolescents (ages 9 to 13). METHODS This is a secondary analysis of two eButton pilot projects evaluating the dietary intake of pre- and early adolescents in or around Houston, Texas. Food preparation behaviors were coded into seven major categories: browsing, altering food/adding seasoning, food media, meal-related tasks, prep work, cooking, and observing. Inter-coder reliability was measured using Cohen's kappa and percent agreement. RESULTS Analysis was completed on data for 31 participants. The most common activity was browsing in the pantry or fridge. Few participants demonstrated any food preparation work beyond unwrapping food packages and combining two or more ingredients; actual cutting or measuring of foods was rare. CONCLUSIONS Although previous research suggests children who "help" prepare meals may obtain some dietary benefit, accurate assessment tools for food preparation behavior are lacking. The eButton offers a feasible approach to measuring food preparation behavior among pre- and early adolescents. Follow-up research exploring the validity of this method in a larger sample, and comparisons between cooking behavior and dietary intake, are needed.
Affiliation(s)
- Margaret Raber, Department of Pediatrics Research, University of Texas MD Anderson Cancer Center, Houston, USA
- Monika Patterson, USDA/ARS Children’s Nutrition Research Center, Baylor College of Medicine, Houston, USA
- Wenyan Jia, Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
- Mingui Sun, Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
- Tom Baranowski, USDA/ARS Children’s Nutrition Research Center, Baylor College of Medicine, Houston, USA

86
Yuan X, Pu Y. Parallel lensless compressive imaging via deep convolutional neural networks. Opt Express 2018; 26:1962-1977. [PMID: 29401917 DOI: 10.1364/oe.26.001962] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Accepted: 01/14/2018] [Indexed: 06/07/2023]
Abstract
We report a parallel lensless compressive imaging system that enjoys real-time reconstruction using deep convolutional neural networks. A prototype composed of a low-cost LCD, 16 photodiodes, and isolation chambers has been built. Each of the 16 channels captures a fraction of the scene with 16×16 pixels, and the channels operate in parallel. An efficient inversion algorithm based on deep convolutional neural networks is developed to reconstruct the image. We have demonstrated encouraging results using only 2% of measurements per sensor (relative to the pixel count, e.g. 5 measurements for a 16×16-pixel block) for digits and around 10% of measurements per sensor for facial images.
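Each photodiode channel described above records a small number of coded linear measurements of its 16×16 block, i.e. y = Φx with far fewer rows than pixels. A sketch of that measurement model, using a random sensing matrix purely for illustration; the actual LCD coding patterns and the CNN-based inversion are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
block = rng.random((16, 16))          # one 16x16 block of the scene
x = block.reshape(-1)                 # flatten to a 256-dimensional signal
m = 5                                 # ~2% of 256 measurements, as in the paper
phi = rng.standard_normal((m, 256))   # illustrative random sensing matrix
y = phi @ x                           # the m scalar photodiode readings
```

Reconstruction then amounts to inverting this underdetermined system; the paper trains a convolutional network to perform that inversion in real time across all 16 channels.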
87
Liu S, Xing Z, Wang Z, Tian S, Jahun FR. Development of machine-vision system for gap inspection of muskmelon grafted seedlings. PLoS One 2017; 12:e0189732. [PMID: 29267293 PMCID: PMC5739424 DOI: 10.1371/journal.pone.0189732] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2017] [Accepted: 11/30/2017] [Indexed: 11/19/2022] Open
Abstract
Grafting robots have been developed around the world, but some auxiliary tasks, such as gap inspection of grafted seedlings, still need to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image-acquiring system consists of a CCD camera, a lens, and a front white lighting source. The image of the inspected gap was processed and analyzed with HALCON 12.0 software. The recognition algorithm of the system is based on the principle of deformable template matching. First, a template is created from an image of a qualified grafted seedling gap. Then the gap image of a grafted seedling is compared with the created template to determine their matching degree. Based on the similarity between the gap image and the template, the matching degree ranges from 0 to 1: the less similar the grafted seedling gap is to the template, the smaller the matching degree. Finally, the gap is output as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise it is judged qualified. To test the gap inspection system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision system agreed with human visual inspection on 98% of gap-qualification judgments, and its inspection speed can reach 15 seedlings·min-1. The gap inspection process in grafting can be fully automated with this machine-vision system, which will be a key component of fully automatic grafting robots.
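The threshold rule above can be illustrated with a plain normalized cross-correlation score standing in for HALCON's deformable template matching, which is considerably more sophisticated; the tiny binary patches below are hypothetical:

```python
import numpy as np

def matching_degree(image, template):
    """Normalized cross-correlation of two equally sized grayscale patches.

    Returns a score in [-1, 1]. This NCC score is only a stand-in for the
    0-to-1 similarity to a 'qualified gap' template described in the paper.
    """
    a = image.astype(float) - image.mean()
    b = template.astype(float) - template.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_qualified(image, template, threshold=0.58):
    """The paper's accept/reject rule: below 0.58 (or no match) -> unqualified."""
    return matching_degree(image, template) >= threshold

template = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 'qualified gap' pattern
good_gap = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 1]])  # close to the template
bad_gap = np.array([[1, 0, 1], [0, 0, 0], [1, 0, 1]])   # inverted pattern
```

With these patches, `good_gap` scores well above 0.58 and passes, while the anti-correlated `bad_gap` scores negative and is rejected.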
Affiliation(s)
- Siyao Liu, College of Engineering, Shenyang Agricultural University, Shenyang, China
- Zuochang Xing, College of Engineering, Shenyang Agricultural University, Shenyang, China
- Zifan Wang, College of Engineering, Shenyang Agricultural University, Shenyang, China
- Subo Tian, College of Engineering, Shenyang Agricultural University, Shenyang, China
- Falalu Rabiu Jahun, College of Engineering, Shenyang Agricultural University, Shenyang, China; Department of Agricultural Engineering, Bayero University, Kano, Nigeria

88
Yaghoobi Ershadi N. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera. PLoS One 2017; 12:e0189145. [PMID: 29261719 PMCID: PMC5738070 DOI: 10.1371/journal.pone.0189145] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2017] [Accepted: 11/20/2017] [Indexed: 11/25/2022] Open
Abstract
Traffic surveillance systems interest many researchers seeking to improve traffic control and reduce the risk caused by accidents. In this area, many published works are concerned only with vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement, and detection and tracking of vehicles are very difficult tasks in bad winter weather (snowy, rainy, windy, etc.), in the dusty weather of arid and semi-arid regions, at night, and so on. It is also very important to estimate the speed of vehicles under such complicated weather conditions. In this paper, we improved our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy mixed with extra processing to segment vehicles; here, the extra processing included analysis of headlight size, location, and area. Tracking was done between consecutive frames via a generalized particle filter to detect each vehicle and pair its headlights using connected component analysis, and vehicle counting was performed based on the pairing result. From the centroid of each blob, we calculated the distance travelled between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records in different conditions, such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, in different traffic conditions.
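The speed estimate described above is simply centroid displacement divided by inter-frame time, converted to real-world units. A sketch, assuming a known pixel-to-metre scale from camera calibration; all numbers below are illustrative:

```python
import math

def estimate_speed_kmh(c1, c2, metres_per_pixel, frame_interval_s):
    """Speed of one tracked vehicle from its blob-centroid displacement.

    c1, c2: (x, y) centroids in consecutive frames, in pixels. The
    pixel-to-metre scale and the frame interval come from camera
    calibration and the video timestamps, as in the paper's simple
    distance / time formula.
    """
    pixels = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    metres_per_second = pixels * metres_per_pixel / frame_interval_s
    return metres_per_second * 3.6  # m/s -> km/h

# A headlight-pair centroid that moves 40 px between frames captured
# 1/25 s apart, at an assumed scale of 0.02 m per pixel.
speed = estimate_speed_kmh((100, 200), (140, 200), 0.02, 1 / 25)
```

Here the displacement is 0.8 m in 0.04 s, i.e. 20 m/s or 72 km/h; in practice the calibration scale varies across the image and must come from the scene geometry.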
89
Cohen EJ, Bravi R, Minciacchi D. 3D reconstruction of human movement in a single projection by dynamic marker scaling. PLoS One 2017; 12:e0186443. [PMID: 29045439 PMCID: PMC5646814 DOI: 10.1371/journal.pone.0186443] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 10/02/2017] [Indexed: 11/19/2022] Open
Abstract
Three-dimensional (3D) reconstruction of movement from videos is widely utilized for spatial analysis of movement. Several approaches exist for 3D reconstruction of movement from 2D video projections, most of which require at least two cameras as well as relatively complex algorithms. A few approaches also exist for 3D reconstruction of movement with a single camera, but they are not widely implemented due to tedious and complicated calibration methods. Here we propose a simple method that allows 3D reconstruction of movement using a single projection and three calibration markers. This approach is made possible by tracking the change in diameter of a moving spherical marker within the 2D projection. To test our model, we compared kinematic results obtained with it to those from the commonly used approach of two cameras and Direct Linear Transformation (DLT). Our results show that this approach is in line with the DLT method for 3D reconstruction and kinematic analysis. The simplicity of the method may render it approachable both for clinical use and in uncontrolled environments.
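The dynamic marker scaling idea above follows from the pinhole model: a sphere of known diameter D at depth Z projects to roughly d = f·D/Z pixels, so tracking d frame by frame recovers the out-of-plane coordinate from a single view. A sketch with illustrative numbers (the focal length and marker size are hypothetical):

```python
def depth_from_marker(focal_px, marker_diameter_m, apparent_diameter_px):
    """Depth of a spherical marker from its apparent size (pinhole model).

    A sphere of diameter D imaged at distance Z appears with diameter
    d = f * D / Z pixels, hence Z = f * D / d. This is the core relation
    behind recovering depth from a single projection by marker scaling.
    """
    return focal_px * marker_diameter_m / apparent_diameter_px

# A 4 cm marker imaged with a 1000 px focal length:
z_near = depth_from_marker(1000, 0.04, 50)  # appears 50 px wide
z_far = depth_from_marker(1000, 0.04, 25)   # half the apparent size
```

Halving the apparent diameter doubles the recovered depth, which is why a precise diameter track suffices to reconstruct the third coordinate.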
Affiliation(s)
- Erez James Cohen, Department of Experimental and Clinical Medicine, Physiological Sciences Section, University of Florence, Florence, Italy
- Riccardo Bravi, Department of Experimental and Clinical Medicine, Physiological Sciences Section, University of Florence, Florence, Italy
- Diego Minciacchi, Department of Experimental and Clinical Medicine, Physiological Sciences Section, University of Florence, Florence, Italy

90
Abstract
Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11-33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9-38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design.
Affiliation(s)
- Joseph M. Kolowski, Smithsonian Conservation Biology Institute, National Zoological Park, Front Royal, Virginia, United States of America
- Tavis D. Forrester, Smithsonian Conservation Biology Institute, National Zoological Park, Front Royal, Virginia, United States of America

91
Johnson CA, Thapa S, George Kong YX, Robin AL. Performance of an iPad Application to Detect Moderate and Advanced Visual Field Loss in Nepal. Am J Ophthalmol 2017; 182:147-154. [PMID: 28844641 DOI: 10.1016/j.ajo.2017.08.007] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2017] [Revised: 08/05/2017] [Accepted: 08/12/2017] [Indexed: 11/19/2022]
Abstract
PURPOSE To evaluate the accuracy and efficiency of Visual Fields Easy (VFE), a free iPad app, for performing suprathreshold perimetric screening. DESIGN Prospective, cross-sectional validation study. METHODS We performed screening visual fields using a calibrated iPad 2 with the VFE application on 206 subjects (411 eyes): 210 normal (NL), 183 glaucoma (GL), and 18 diabetic retinopathy (DR) at Tilganga Institute of Ophthalmology, Kathmandu, Nepal. We correlated the results with a Humphrey Field Analyzer using 24-2 SITA Standard tests on 373 of these eyes (198 NL, 160 GL, 15 DR). RESULTS The number of missed locations on the VFE correlated with mean deviation (MD, r = 0.79), pattern standard deviation (PSD, r = 0.60), and number of locations that were worse than the 95% confidence limits for total deviation (r = 0.51) and pattern deviation (r = 0.68) using SITA Standard. iPad suprathreshold perimetry was able to detect most visual field deficits with moderate (MD of -6 to -12 dB) and advanced (MD worse than -12 dB) loss, but had greater difficulty in detecting early (MD better than -6 dB) loss, primarily owing to an elevated false-positive response rate. The average time to perform the Visual Fields Easy test was 3 minutes, 18 seconds (standard deviation = 16.88 seconds). DISCUSSION The Visual Fields Easy test procedure is a portable, fast, effective procedure for detecting moderate and advanced visual field loss. Improvements are currently underway to monitor eye and head tracking during testing, reduce testing time, improve performance, and eliminate the need to touch the video screen surface.
Affiliation(s)
- Chris A Johnson, Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa
- Suman Thapa, Nepal Glaucoma Eye Clinic, Tilganga Institute of Ophthalmology, Kathmandu, Nepal
- Yu Xiang George Kong, Cambridge University Hospital, NHS, Cambridge, United Kingdom; Centre of Eye Research Australia, Department of Ophthalmology, The University of Melbourne, Melbourne, Australia; Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Alan L Robin, Department of Ophthalmology, Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Ophthalmology, University of Michigan, Ann Arbor, Michigan

92
Shetty R, Rao H, Khamar P, Sainani K, Vunnava K, Jayadev C, Kaweri L. Keratoconus Screening Indices and Their Diagnostic Ability to Distinguish Normal From Ectatic Corneas. Am J Ophthalmol 2017; 181:140-148. [PMID: 28687218 DOI: 10.1016/j.ajo.2017.06.031] [Citation(s) in RCA: 92] [Impact Index Per Article: 13.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 06/24/2017] [Accepted: 06/27/2017] [Indexed: 11/27/2022]
Abstract
PURPOSE To compare the diagnostic ability of 3 Scheimpflug devices in differentiating normal from ectatic corneas. DESIGN Comparison of diagnostic instrument accuracy. METHODS This study included 42 normal, 37 subclinical keratoconic, and 51 keratoconic eyes seen in a tertiary eye care institute. Keratoconus screening indices were evaluated using the Pentacam (Oculus, Wetzlar, Germany), Galilei (Ziemer, Biel, Switzerland), and Sirius (Costruzione Strumenti Oftalmici, Florence, Italy). Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. RESULTS The highest sensitivity (100%) to diagnose keratoconus was seen for 6 parameters on Pentacam and 1 on Galilei. None of the indices on Sirius reached 100% sensitivity. For subclinical keratoconus, the highest sensitivity (100%) was seen for 2 parameters on Pentacam but for none on Galilei or Sirius. All parameters differentiated keratoconus strongly (AUC > 0.9). On comparing the best parameters of all 3 machines, the AUC of the Belin/Ambrosio enhanced ectasia total derivation (BAD-D) and the inferior-superior value (ISV) of Pentacam were statistically similar to those of the keratoconus prediction index (KPI) and keratoconus probability (Kprob) of Galilei (P = .27) and the 4.5 mm root mean square per unit area (RMS/A) back of Sirius (P = .55). When differentiating subclinical from normal corneas, BAD-D was similar to the surface regularity index (SRI) of Galilei (P = .78) but was significantly greater than the 8 mm RMS/A back of Sirius (P = .002). CONCLUSION Keratoconus indices measured by all 3 machines can effectively differentiate keratoconus from normal corneas. However, new cutoff values might be needed to differentiate subclinical from normal corneas.
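Sensitivity, specificity, and AUC are the three metrics being compared across devices. A minimal sketch of how each is computed, using the rank-based (Mann-Whitney) identity for AUC; the index values below are hypothetical, not taken from the study:

```python
def sensitivity(tp, fn):
    """True-positive rate: diseased eyes flagged / all diseased eyes."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: normal eyes passed / all normal eyes."""
    return tn / (tn + fp)

def auc(scores_pos, scores_neg):
    """P(random diseased score > random normal score); ties count half."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical BAD-D-like index values for keratoconic vs normal eyes:
kc = [3.1, 4.5, 2.8, 5.0, 3.9]
normal = [0.9, 1.2, 1.5, 1.1, 1.6]
area = auc(kc, normal)  # fully separated groups give AUC = 1.0
```

A 100%-sensitivity parameter corresponds to `sensitivity(tp, fn)` with `fn == 0`; "AUC > 0.9" is the threshold the abstract uses for strong discrimination.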
Affiliation(s)
- Rohit Shetty
- Narayana Nethralaya Eye Institute, Bangalore, India
- Harsha Rao
- Narayana Nethralaya Eye Institute, Bangalore, India
- Pooja Khamar
- Narayana Nethralaya Eye Institute, Bangalore, India
- Luci Kaweri
- Narayana Nethralaya Eye Institute, Bangalore, India
|
93
|
Bombara CB, Dürr S, Machovsky-Capuska GE, Jones PW, Ward MP. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras. PLoS One 2017; 12:e0181859. [PMID: 28750073 PMCID: PMC5547700 DOI: 10.1371/journal.pone.0181859] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Accepted: 07/07/2017] [Indexed: 11/22/2022] Open
Abstract
Information on contacts between individuals within a population is crucial to inform disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and estimate contact rates. We customised miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR lithium-ion batteries, cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive; however, multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts (equating to 8 to 147 contacts per dog per 24 hr) from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) that could be identified based on colour patterns and markings. Most dog activity was observed in urban environments (houses and roads), but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Identified food consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese.
For characterising contacts between FRDDs, this study identified several benefits of analysing videos compared with GPS fixes alone, including visualisation of the nature of the contact between two dogs and inclusion of a greater number of dogs in the study (which do not need to be wearing video or GPS collars). Limitations included visualisation of contacts only during daylight hours; the camera lens being obscured on occasion by the dog's mandible or the dog resting on the camera; an insufficiently wide viewing angle (36°); battery life and robustness of the deployments; high deployment costs; and the analysis of large volumes of often unsteady video footage. This study demonstrates that dog-borne video cameras are a feasible technology for estimating and characterising contacts between FRDDs. Modifying camera specifications and developing new analytical methods will improve the applicability of this technology for monitoring FRDD populations, providing insights into dog-to-dog contacts and therefore how disease might spread within these populations.
Affiliation(s)
- Courtenay B. Bombara
- Sydney School of Veterinary Science, The University of Sydney, Camden, Australia
- Salome Dürr
- Veterinary Public Health Institute, University of Bern, Liebefeld, Switzerland
- Gabriel E. Machovsky-Capuska
- Sydney School of Veterinary Science, The University of Sydney, Camden, Australia
- The Charles Perkins Centre and School of Life and Environmental Sciences, The University of Sydney, Sydney, Australia
- Peter W. Jones
- School of Electrical and Information Engineering, The University of Sydney, Sydney, Australia
- Michael P. Ward
- Sydney School of Veterinary Science, The University of Sydney, Camden, Australia
|
94
|
Jacob J, Paques M, Krivosic V, Dupas B, Erginay A, Tadayoni R, Gaudric A. Comparing Parafoveal Cone Photoreceptor Mosaic Metrics in Younger and Older Age Groups Using an Adaptive Optics Retinal Camera. Ophthalmic Surg Lasers Imaging Retina 2017; 48:45-50. [PMID: 28060393 DOI: 10.3928/23258160-20161219-06] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2016] [Accepted: 11/02/2016] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND OBJECTIVE To analyze cone mosaic metrics on adaptive optics (AO) images as a function of retinal eccentricity in two different age groups using a commercial flood illumination AO device. PATIENTS AND METHODS Fifty-three eyes of 28 healthy subjects divided into two age groups were imaged using an AO flood-illumination camera (rtx1; Imagine Eyes, Orsay, France). A 16° × 4° field was obtained horizontally. Cone-packing metrics were determined in five neighboring 50 µm × 50 µm regions. Both retinal (cones/mm2 and µm) and visual (cones/degrees2 and arcmin) units were computed. RESULTS Results for cone mosaic metrics at 2°, 2.5°, 3°, 4°, and 5° eccentricity were compatible with previous AO scanning laser ophthalmoscopy and histology data. No significant difference was observed between the two age groups. CONCLUSIONS The rtx1 camera enabled reproducible measurements of cone-packing metrics across the extrafoveal retina. These findings may contribute to the development of normative data and act as a reference for future research. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:45-50.].
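The abstract reports cone metrics in both retinal (cones/mm²) and visual (cones/deg²) units. A sketch of that conversion, assuming the conventional emmetropic scaling of roughly 0.291 mm of retina per degree of visual angle (an assumed constant for illustration, not a value from the paper):

```python
MM_PER_DEG = 0.291  # assumed emmetropic retinal magnification, mm per degree

def per_mm2_to_per_deg2(density_mm2, mm_per_deg=MM_PER_DEG):
    """Convert a density in cones/mm^2 to cones/deg^2."""
    return density_mm2 * mm_per_deg ** 2

def per_deg2_to_per_mm2(density_deg2, mm_per_deg=MM_PER_DEG):
    """Inverse conversion: cones/deg^2 back to cones/mm^2."""
    return density_deg2 / mm_per_deg ** 2

d = per_mm2_to_per_deg2(25000.0)  # ~2117 cones/deg^2 for this toy density
```

In practice the magnification factor varies with axial length, which is one reason studies report both unit systems.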
|
95
|
Hernandez-Matas C, Zabulis X, Argyros AA. Retinal image registration through simultaneous camera pose and eye shape estimation. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:3247-3251. [PMID: 28269000 DOI: 10.1109/embc.2016.7591421] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
In this paper, a retinal image registration method is proposed. The approach utilizes keypoint correspondences and assumes that the human eye has a spherical or ellipsoidal shape. The image registration problem amounts to solving a 3D camera pose estimation problem and, simultaneously, a 3D eye shape estimation problem. The camera pose estimation problem is solved by estimating the relative pose between the views from which the images were acquired. The eye shape estimation problem parameterizes the shape and orientation of an ellipsoidal model of the eye. Experimental evaluation shows a 17.91% reduction in registration error and a 47.52% reduction in the error standard deviation over state-of-the-art methods.
|
96
|
Varol HA, Massalin Y. A feasibility study of depth image based intent recognition for lower limb prostheses. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:5055-5058. [PMID: 28269404 DOI: 10.1109/embc.2016.7591863] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
This paper presents our preliminary work on a depth camera based intent recognition system intended for future use in robotic prosthetic legs. The approach infers the subject's activity mode (standing, walking, running, stair ascent, or stair descent) using only data from the depth camera. Depth difference images are also used to increase the performance of the approach by discriminating between static and dynamic instances. After confidence map based filtering, simple features such as the mean, maximum, minimum, and standard deviation are extracted from rectangular regions of the frames. A support vector machine with a cubic kernel is used for the classification task. The classification results are post-processed by a voting filter to increase the robustness of activity mode recognition. Experiments conducted with a healthy subject wearing the depth camera on his lower leg showed the efficacy of the approach. Specifically, the depth camera based recognition system identified 28 activity mode transitions successfully. The only case of incorrect mode switching was an intended run-to-stand transition, where an intermediate transition from run to walk was recognized before transitioning to the intended standing mode.
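The post-processing step named above, a voting filter over per-frame classifier outputs, can be sketched as a sliding majority vote. The labels and window size here are illustrative assumptions, not the paper's parameters:

```python
from collections import Counter

def majority_vote_filter(labels, window=5):
    """Replace each prediction with the majority label in a centered window."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return out

# A single misclassified frame ("run" amid "walk") is voted away,
# suppressing a spurious mode switch before it reaches the prosthesis:
raw = ["walk", "walk", "run", "walk", "walk", "stand", "stand", "stand"]
smoothed = majority_vote_filter(raw, window=3)
```

The per-frame predictions would come from the cubic-kernel SVM the abstract describes; the filter itself is classifier-agnostic.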
|
97
|
Gallo A, Rosenbaum D, Kanagasabapathy C, Girerd X. Effects of carotid baroreceptor stimulation on retinal arteriole remodeling evaluated with adaptive optics camera in resistant hypertensive patients. Ann Cardiol Angeiol (Paris) 2017; 66:165-170. [PMID: 28554698 DOI: 10.1016/j.ancard.2017.04.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2017] [Accepted: 04/27/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND AIM Baroreceptor activation therapy (BAT) leads to a decrease in blood pressure (BP) in patients affected by resistant hypertension (RH) by reducing sympathetic outflow. This study aimed at evaluating the effects of BAT on RH patients' retinal arteriolar microvasculature, a territory devoid of adrenergic innervation. PATIENTS AND METHODS Five patients, defined as affected by RH after exclusion of secondary causes of hypertension and based on the number of antihypertensive treatments, underwent implantation of the Barostim™ neo™. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were assessed by office and 24-hour ambulatory BP monitoring (ABPM). The rtx1® adaptive optics camera (Imagine Eyes, Orsay, France) was used to measure wall thickness (WT), internal diameter (ID), wall cross-sectional area (WCSA), and wall-to-lumen ratio (WLR). A cohort of 21 uncontrolled hypertensive patients matched for age, gender, and follow-up time, undergoing standard antihypertensive therapy changes, was selected as a control group. SBP and DBP were assessed by office and home BP monitoring (HBPM). Evaluations were performed at baseline and after a mean follow-up of 6 months. RESULTS Office SBP decreased by 9.7±12.3% and 29.7±12.4% in the standard-therapy and BAT groups, respectively, while office DBP decreased by 7.6±17.4% and 14.8±15.7%. Concerning ABPM/HBPM, a mean reduction of both SBP and DBP of 7.9±11% was observed for the standard-therapy group, while reductions of 15.8±10.5% and 15.8±5.3% were observed for SBP and DBP, respectively, in the BAT group. While in the standard-therapy group a significant reduction in WLR (-5.9%) due to both ID increase (+2.3%) and WT reduction (-5.7%) was observed, without changes in WCSA (-0.3%), RH patients had a significant reduction in WCSA (-12.1%), due to a trend toward both WT and ID reduction (-6.5% and -1.7%, respectively), without significant changes in WLR (-2%).
CONCLUSION While reverse eutrophic remodeling was observed in patients undergoing standard antihypertensive treatment, hypotrophic changes were found in RH patients undergoing BAT. Despite the lack of adrenergic receptors on retinal vessels, chronic baroreflex stimulation may exert an effect on the retinal microvasculature in RH patients through systemic rather than local mechanisms.
Affiliation(s)
- A Gallo
- Preventive cardiovascular unit, institute of cardiometabolism and nutrition, ICAN, groupe hospitalier universitaire Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris, 75651 Paris cedex 13, France; Inserm 1146, CNRS 7371, laboratoire d'imagerie biomédicale, Sorbonne universités, UPMC université Paris 06, 75013 Paris, France
- D Rosenbaum
- Preventive cardiovascular unit, institute of cardiometabolism and nutrition, ICAN, groupe hospitalier universitaire Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris, 75651 Paris cedex 13, France; Inserm 1146, CNRS 7371, laboratoire d'imagerie biomédicale, Sorbonne universités, UPMC université Paris 06, 75013 Paris, France; Imaging Core Lab, institute of cardiometabolism and nutrition, ICAN, 75651 Paris cedex 13, France
- C Kanagasabapathy
- Preventive cardiovascular unit, institute of cardiometabolism and nutrition, ICAN, groupe hospitalier universitaire Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris, 75651 Paris cedex 13, France
- X Girerd
- Preventive cardiovascular unit, institute of cardiometabolism and nutrition, ICAN, groupe hospitalier universitaire Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris, 75651 Paris cedex 13, France
|
98
|
Mounessa JS, Box NF, Asdigian NL, Braunberger T, Dunnick CA, Crane LA, Dellavalle RR. Portable equipment for taking dramatic sun-damage revealing photos at skin cancer prevention outreach events. Dermatol Online J 2017; 23:13030/qt33j0040b. [PMID: 28537858] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2017] [Accepted: 05/22/2017] [Indexed: 06/07/2023] Open
Abstract
In recent years, appearance-based interventions have gained popularity as a means to improve public awareness about skin cancer and sun protective behaviors. Although numerous reports discuss the use of ultraviolet (UV) camera devices for this purpose, studies on the use of portable imaging devices at community outreach events do not presently exist. In this report, we discuss how we successfully utilize portable imaging devices at community outreach events. We also discuss the advantages and disadvantages of our portable devices in comparison with traditional UV cameras. Portable imaging devices are easy to use and have allowed us to increase our involvement in community outreach events targeting a wide range of participants.
Affiliation(s)
- Robert R Dellavalle
- Department of Dermatology, University of Colorado Anschutz Medical Campus, Aurora, Colorado; Department of Epidemiology, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, Colorado; Dermatology Service, Eastern Colorado Health Care System, US Department of Veteran Affairs, Denver, Colorado
|
99
|
O’Connor KM, Nathan LR, Liberati MR, Tingley MW, Vokoun JC, Rittenhouse TAG. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset. PLoS One 2017; 12:e0175684. [PMID: 28422973 PMCID: PMC5396891 DOI: 10.1371/journal.pone.0175684] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Accepted: 03/29/2017] [Indexed: 11/20/2022] Open
Abstract
Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species, often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1–10 cameras), and (2) by total season length (1–365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. A two-camera array increased survey detection an average of 80% (range 40–128%) over the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multi-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori identify target species for which inference will be made, and then design camera trapping studies around the most difficult to detect of those species.
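A back-of-envelope model of why arrays help (the study's estimates come from parsing empirical data, not from this simplification): if each camera detects a species independently with per-camera probability p, a k-camera array detects it with probability 1 - (1 - p)^k. The probability values below are illustrative assumptions:

```python
def array_detection_prob(p_single, k):
    """Detection probability of a k-camera array, assuming independent cameras."""
    return 1.0 - (1.0 - p_single) ** k

p = 0.30  # illustrative per-camera probability for a hard-to-detect species
two_cams = array_detection_prob(p, 2)    # 0.51, a 70% relative gain
eight_cams = array_detection_prob(p, 8)  # diminishing returns, approaching 0.94
```

The independence assumption mirrors the abstract's qualitative finding: rarely detected species gain the most from added cameras, while a species with high per-camera detectability saturates quickly.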
Affiliation(s)
- Kelly M. O’Connor
- Wildlife and Fisheries Conservation Center, Department of Natural Resources and the Environment, University of Connecticut, Connecticut, United States of America
- Lucas R. Nathan
- Wildlife and Fisheries Conservation Center, Department of Natural Resources and the Environment, University of Connecticut, Connecticut, United States of America
- Marjorie R. Liberati
- Wildlife and Fisheries Conservation Center, Department of Natural Resources and the Environment, University of Connecticut, Connecticut, United States of America
- Morgan W. Tingley
- Ecology & Evolutionary Biology, University of Connecticut, Storrs, Connecticut, United States of America
- Jason C. Vokoun
- Wildlife and Fisheries Conservation Center, Department of Natural Resources and the Environment, University of Connecticut, Connecticut, United States of America
- Tracy A. G. Rittenhouse
- Wildlife and Fisheries Conservation Center, Department of Natural Resources and the Environment, University of Connecticut, Connecticut, United States of America
|
100
|
Masis N, McCaffrey J, Johnson SL, Chapman-Novakofski K. Design and Evaluation of a Training Protocol for a Photographic Method of Visual Estimation of Fruit and Vegetable Intake among Kindergarten Through Second-Grade Students. J Nutr Educ Behav 2017; 49:346-351.e1. [PMID: 28258818 DOI: 10.1016/j.jneb.2017.01.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2016] [Revised: 12/09/2016] [Accepted: 01/04/2017] [Indexed: 06/06/2023]
Abstract
OBJECTIVE To design a replicable training protocol for visual estimation of fruit and vegetable (FV) intake of kindergarten through second-grade students, using digital photography of lunch trays, that results in reliable data for FV served and consumed. METHODS Protocol development through literature and researcher input was followed by 3 laboratory-based trainings of 3 trainees. Lunchroom data collection sessions were conducted at 2 elementary schools for kindergarten through second-graders. Intraclass correlation coefficients (ICCs) were used. RESULTS By the third training, the ICC was substantial for the amounts of FV served and consumed (0.86 and 0.95, respectively; P < .05). The ICC was moderate for the percentage of fruits consumed (0.67; P = .06). In-school estimates of ICCs were all significant for amounts served at school 1 and for the percentage of FV consumed at both schools. CONCLUSIONS AND IMPLICATIONS The protocol resulted in reliable estimation of combined FV served and consumed using digital photography. The ability to estimate FV intake accurately will benefit intervention development and evaluation.
Affiliation(s)
- Natalie Masis
- Division of Nutritional Sciences, University of Illinois at Urbana-Champaign, Urbana, IL.
- Jennifer McCaffrey
- Office of Extension and Outreach, University of Illinois Extension, Urbana, IL
- Susan L Johnson
- Children's Eating Laboratory, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, CO
|