51. Smith C, Galland BC, de Bruin WE, Taylor RW. Feasibility of Automated Cameras to Measure Screen Use in Adolescents. Am J Prev Med 2019; 57:417-424. [PMID: 31377085 DOI: 10.1016/j.amepre.2019.04.012]
Abstract
INTRODUCTION The influence of screens and technology on adolescent well-being is controversial and there is a need to improve methods to measure these behaviors. This study examines the feasibility and acceptability of using automated wearable cameras to measure evening screen use in adolescents. METHODS A convenience sample of adolescents (aged 13-17 years, n=15) wore an automated camera for 3 evenings from 5:00pm to bedtime. The camera (Brinno TLC120) captured an image every 15 seconds. Fieldwork was completed between October and December 2017, and data analyzed in 2018. Feasibility was examined by quality of the captured images, wear time, and whether images could be coded in relation to contextual factors (e.g., type of screen and where screen use occurred). Acceptability was examined by participant compliance to the protocol and from an exit interview. RESULTS Data from 39 evenings were analyzed (41,734 images), with a median of 268 minutes per evening. The camera was worn for 78% of the evening on Day 1, declining to 51% on Day 3. Nearly half of the images contained a screen in active use (46%), most commonly phones (13.7%), TV (12.6%), and laptops (8.2%). Multiple screen use was evident in 5% of images. Within the exit interview, participants raised no major concerns about wearing the camera, and data loss because of deletions or privacy concerns was minimal (mean, 14 minutes, 6%). CONCLUSIONS Automated cameras offer a feasible, acceptable method of measuring prebedtime screen behavior, including environmental context and aspects of media multitasking in adolescents.
52. Cabal Mirabal CA, Berlanga Acosta J, Fernández Montequín J, Oramas Díaz L, González Dalmau E, Herrera Martínez L, Sauri JE, Baldomero Hernández J, Savigne Gutiérrez W, Valdés JL, Tabio Reyes AL, Pérez Pérez SC, Valdés Pérez C, Armstrong AA, Armstrong DG. Quantitative Studies of Diabetic Foot Ulcer Evolution Under Treatment by Digital Stereotactic Photography. J Diabetes Sci Technol 2019; 13:821-826. [PMID: 31195816 PMCID: PMC6955448 DOI: 10.1177/1932296819853843]
Abstract
BACKGROUND Imaging the lower extremity reproducibly and accurately remains an elusive goal. This is particularly true in the high-risk diabetic foot, where tissue loss, edema, and color changes are often concomitant. The purpose of this study was to evaluate the reproducibility of a novel and inexpensive stereotaxic frame in the assessment of wound healing. METHODS The main idea is to keep the relative position of the extremity and the imaging sensor constant and reproducible across a series of stereotaxic digital photographic examinations. Ten healthy volunteers were evaluated at 10 different time points to estimate foot position variation in the stereotaxic frame. The evolution of 40 diabetic foot ulcer (DFU) patients under treatment was evaluated before and during intralesional epidermal growth factor treatment. RESULTS Wound closing and granulation speeds, and the relative contributions of the contraction and tissue restoration mechanisms, were evaluated by stereotaxic digital photography. CONCLUSIONS The results of this study suggest that the stereotaxic frame is a robust platform for serial study of the evolution of wound healing, allowing consistent information to be obtained from a variety of visible and hyperspectral measurement technologies. New stereotaxic digital photography evidence related to the diabetic foot ulcer healing process under treatment has been presented.
53. Bae JM, Ju HJ. Simple cross-polarized photography using a smartphone. J Am Acad Dermatol 2019; 82:e185-e186. [PMID: 31422182 DOI: 10.1016/j.jaad.2019.08.025]
54. Jia C, Yang T, Wang C, Fan B, He F. A new fast filtering algorithm for a 3D point cloud based on RGB-D information. PLoS One 2019; 14:e0220253. [PMID: 31419244 PMCID: PMC6697356 DOI: 10.1371/journal.pone.0220253]
Abstract
A point cloud obtained by an RGB-D camera will inevitably be affected by outliers that do not belong to the surface of the object, owing to differing viewing angles, light intensities, and reflective characteristics of the object surface, as well as the limitations of the sensors. An effective and fast outlier removal method based on RGB-D information is proposed in this paper. The method aligns the color image to the depth image and converts the color mapping image to an HSV image. Then, the optimal segmentation threshold of the V image, calculated using the Otsu algorithm, is applied to segment the color mapping image into a binary image, which is used to extract the valid point cloud from the original point cloud with outliers. The robustness of the proposed method to noise type, light intensity, and contrast is evaluated in several experiments; additionally, the method is compared with other filtering methods and applied to independently developed foot-scanning equipment. The experimental results show that the proposed method can remove all types of outliers quickly and effectively.
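As an illustration of the Otsu-thresholding-and-masking step this abstract describes, a minimal numpy sketch follows. The bin count, the value range, and the keep-above-threshold rule are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability mass of class 0
    m = np.cumsum(p * centers)        # cumulative first moment
    mg = m[-1]                        # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mg * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    # return the upper edge of the optimal class-0 bin, so `> t` selects class 1
    return edges[1:][np.argmax(sigma_b)]

def filter_points(points, v_channel):
    """Keep 3D points whose aligned V (brightness) value lies above the
    Otsu threshold, discarding points masked out as background/outliers."""
    return points[v_channel > otsu_threshold(v_channel)]
```

Here each point carries the V value of its aligned color pixel; in the paper the mask is formed on the image and then used to select the valid point cloud.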
55. Iacovacci V, Blanc A, Huang H, Ricotti L, Schibli R, Menciassi A, Behe M, Pané S, Nelson BJ. High-Resolution SPECT Imaging of Stimuli-Responsive Soft Microrobots. Small 2019; 15:e1900709. [PMID: 31304653 DOI: 10.1002/smll.201900709]
Abstract
Untethered small-scale robots have great potential for biomedical applications. However, critical barriers to effective translation of these miniaturized machines into clinical practice exist. High-resolution tracking and imaging in vivo is one of the barriers that limit the use of micro- and nanorobots in clinical applications. Here, the inclusion of radioactive compounds in soft thermoresponsive magnetic microrobots is investigated to enable their single-photon emission computed tomography (SPECT) imaging. Four microrobotic platforms differing in hydrogel structure and four 99mTc-based radioactive compounds are investigated in order to achieve optimal contrast agent retention and optimal imaging. Single-microrobot imaging of structures as small as 100 µm in diameter, as well as tracking of shape switching from tubular to planar configurations by inclusion of 99mTc colloid in the hydrogel structure, is reported.
56. Ferlatte O, Oliffe JL, Salway T, Broom A, Bungay V, Rice S. Using Photovoice to Understand Suicidality Among Gay, Bisexual, and Two-Spirit Men. Arch Sex Behav 2019; 48:1529-1541. [PMID: 31152366 DOI: 10.1007/s10508-019-1433-6]
Abstract
This study explored the drivers of suicidality from the perspectives of gay, bisexual, and two-spirit men (GB2SM) with a history of suicidality. Twenty-one GB2SM participated in this photovoice study, taking photographs to depict and discuss their previous suicidality. Data were collected from in-depth individual interviews in which participants discussed their photographs and in turn offered verbal/narrative accounts of suicidality. Drawing on intersectionality, analyses of the photographs and interview data revealed three interconnected themes. First, adverse childhood events and negative adolescent experiences were described as the root causes of mental health struggles and suicidality. Second, violence and homophobia had disrupted these men's education and employment opportunities, and some participants detailed how their lack of capital and challenges in maintaining employment shaped their suicidality. Third, a sociality of stigma and sense of isolation compounded experiences of suicidality. The three themes overlapped and were shaped by multiple intersectional axes including sexuality, class, ethnicity, and mental health status. The findings have implications for services and health professionals working with GB2SM, who need to thoughtfully consider life-course trajectories and multiple social axes when assessing and treating GB2SM experiencing suicidality. Moreover, because these factors relate to social inequities, structural and policy changes warrant targeted attention.
57.
Abstract
Henry G. Piffard, MD, a leading physician in New York during the last quarter of the 19th century, was a pioneer dermatologist. He had a propensity to invent, and he used that ability to advance the nascent field of instantaneous photography. The recent discovery of a few survivors of Piffard's patented "photogenic (flash) cartridges" prompted an examination of his connection to a leading photographic supply house of his time. The study provided insights into his system and revealed that Piffard had combined the use of his patent with his passion for skin diseases. As a result, Piffard's publications were among the first to document diseases of the skin photographically.
58. Kaczensky P, Khaliun S, Payne J, Boldgiv B, Buuveibaatar B, Walzer C. Through the eye of a Gobi khulan - Application of camera collars for ecological research of far-ranging species in remote and highly variable ecosystems. PLoS One 2019; 14:e0217772. [PMID: 31163047 PMCID: PMC6548383 DOI: 10.1371/journal.pone.0217772]
Abstract
The Mongolian Gobi-Eastern Steppe Ecosystem is one of the largest remaining natural drylands and home to a unique assemblage of migratory ungulates. Connectivity and integrity of this ecosystem are at risk if increasing human activities are not carefully planned and regulated. The Gobi part supports the largest remaining population of the Asiatic wild ass (Equus hemionus; locally called "khulan"). Individual khulan roam over areas of thousands of square kilometers, and the scale of their movements is among the largest described for terrestrial mammals, making them particularly difficult to monitor. Although GPS satellite telemetry makes it possible to track animals in near-real time and remote sensing provides environmental data at the landscape scale, remotely collected data also harbor the risk of missing important abiotic or biotic environmental variables or life history events. We tested the potential of animal-borne camera systems ("camera collars") to improve our understanding of the drivers and limitations of khulan movements. Deployment of a camera collar on an adult khulan mare resulted in 7,881 images over a one-year period. Over half of the images showed other khulan, and 1,630 images showed enough of the collared khulan to classify the behaviour of the animals seen into several main categories. These khulan images i) provided new insights into important life history events and grouping dynamics, ii) allowed us to calculate time budgets for many more animals than the collared khulan alone, and iii) provided a training dataset for calibrating data from the accelerometer and tilt sensors in the collar. The images also allowed us to document khulan behaviour near infrastructure and to obtain a daytime encounter rate between a specific khulan and semi-nomadic herders and their livestock.
Lastly, the images allowed us to ground-truth the availability of water by: i) confirming waterpoints predicted from other analyses, ii) detecting new waterpoints, and iii) comparing precipitation records for rain and snow from landscape-scale climate products with those documented by the camera collar. We discuss the added value of deploying camera collars on a subset of animals in remote, highly variable ecosystems for research and conservation.
59. Richardson AD. Tracking seasonal rhythms of plants in diverse ecosystems with digital camera imagery. New Phytol 2019; 222:1742-1750. [PMID: 30415486 DOI: 10.1111/nph.15591]
Abstract
Global change is shifting the seasonality of vegetation in ecosystems around the globe. High-frequency digital camera imagery, and vegetation indices derived from that imagery, are facilitating better tracking of phenological responses to environmental variation. This method, commonly referred to as the 'phenocam' approach, is well suited to several specific applications, including: close-up observation of individual organisms; long-term canopy-level monitoring at individual sites; automated phenological monitoring in regional-to-continental-scale observatory networks; and tracking responses to experimental treatments. Several camera networks are already well established, and some camera records are more than a decade long. These data can be used to identify the environmental controls on phenology in different ecosystems, which will contribute to the development of improved prognostic phenology models.
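The abstract does not name a specific vegetation index, but the green chromatic coordinate, GCC = G / (R + G + B), is the index most commonly used in phenocam work; a minimal sketch of computing it over a canopy region of interest (the ROI convention here is an assumption):

```python
import numpy as np

def gcc(image):
    """Per-pixel green chromatic coordinate: G / (R + G + B)."""
    image = image.astype(float)
    total = image.sum(axis=2)
    # guard against division by zero on fully dark pixels
    return np.divide(image[..., 1], total,
                     out=np.zeros_like(total), where=total > 0)

def canopy_gcc(image, roi):
    """Mean GCC over a region of interest (tuple of slices); tracking this
    value through a time series of images reveals green-up and senescence."""
    return float(gcc(image)[roi].mean())
```

Applied to every image in a multi-year archive, the resulting GCC time series is what phenology models are fit against.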
60. Li X, Hu H, Xiao D, Wang D, Jiang S. Analysis of the spatial distribution of collectors in dust scrubber based on image processing. J Air Waste Manag Assoc 2019; 69:764-777. [PMID: 30794110 DOI: 10.1080/10962247.2019.1586012]
Abstract
The spatial distribution of the collectors in a dust scrubber is key in determining the effectiveness of the dust removal process. In the present study, a high-speed camera was used to capture images of the distribution of the collectors. Image information was extracted by image processing, such as the gray mean (GM), the angular second moment (ASM), and the entropy (ENT) from the gray-level co-occurrence matrix of the image. Subsequently, the spatial distribution rules of the collectors were studied by analyzing the spatial proportion, dispersion area, and uniformity and intensiveness of the collectors. This is an intuitive and novel method for analyzing the operating state of a dust scrubber. The results show that the spatial distribution of the collectors is well characterized by image processing methods. The dispersion area of the collectors expanded with an increase in the airflow velocity. When the initial liquid level (ILL) was higher, the collectors expanded in an approximately circular shape, and when the ILL was lower, the collectors expanded in an approximately sector shape. In general, the spatial proportion increased with ILL and airflow velocity, consistent with the uniformity of the collectors. When the liquid level was 0-20 mm and the airflow velocity was greater than 6.5 m/sec, the spatial proportion and uniformity of the collectors reached their highest degree. However, the growth of the spatial proportion and uniformity slowed, and even became negative, when the ILL was lower and the airflow velocity was higher. The intensiveness of the collectors was high when the ILL was higher and was not appreciably influenced by the airflow velocity. However, when the ILL was lower, the intensiveness of the collectors was poor, improving as the airflow velocity and ILL increased.
When the liquid level was -5 to 10 mm and the airflow velocity was greater than 8 m/sec, the intensiveness of the collectors reached its highest degree, indicating that a liquid level greater than 0 mm and a higher airflow velocity improve the spatial distribution of the collectors. Implications: This paper focuses on the spatial distribution of the collectors in a dust scrubber. Image information such as the gray mean, the angular second moment, and the entropy from the gray-level co-occurrence matrix was extracted by image processing. The spatial distribution rules of the collectors were studied by analyzing the spatial proportion, the dispersion area, and the uniformity and intensiveness of the collectors.
61. Sheahan G. Comparison of Personal Video Technology for Teaching and Assessment of Surgical Skills. J Grad Med Educ 2019; 11:328-331. [PMID: 31210866 PMCID: PMC6570456 DOI: 10.4300/jgme-d-18-01082.1]
Abstract
BACKGROUND Improvements in personal technology have made video recording for teaching and assessment of surgical skills possible. OBJECTIVE This study compared 5 personal video-recording devices based on their utility (image quality, hardware, mounting options, and accessibility) in recording open surgical procedures. METHODS Open procedures in a simulated setting were recorded using smartphones and tablets (MOB), laptops (LAP), sports cameras such as GoPro (SC), single-lens reflex cameras (DSLR), and spy camera glasses (SPY). Utility was rated by consensus between 2 investigators trained in observation of technology using a 5-point Likert scale (1, poor, to 5, excellent). RESULTS A total of 150 hours of muted video were reviewed, with a minimum of 1 hour for each device. Image quality was good (3.8) across all devices, although it was influenced by each device's mounting requirements (4.2) and proximity to the area of interest. Device hardware (battery life and storage capacity) was problematic for long procedures (3.8). Availability of devices was high (4.2). CONCLUSIONS Personal video-recording technology can be used for assessment and teaching of open surgical skills. DSLR and SC provide the best images. DSLR provides the best zoom capability from an offset position, while SC can be placed closer to the operative field without impairing sterility. Laptops provide the best overall utility for long procedures due to video file size. All devices require stable recording platforms (eg, bench space, dedicated mounting accessories). Head harnesses (SC, SPY) provide opportunities for "point-of-view" recordings. MOB and LAP can be used for multiple concurrent recordings.
62. La Torre F, Meocci M, Nocentini A. Safety effects of automated section speed control on the Italian motorway network. J Safety Res 2019; 69:115-123. [PMID: 31235223 DOI: 10.1016/j.jsr.2019.03.006]
Abstract
INTRODUCTION Automated Section Speed Control (ASSC) has been identified as an effective countermeasure to reduce speeds and improve speed limit compliance. METHOD An Empirical Bayes (EB) before-and-after study was performed to evaluate the impact of the ASSC system on the expected crash frequency. The study was carried out on a sample of 125 ASSC sites of the Italian motorway network covering 1252 km, where a total of 21,721 crashes were recorded during a 10-year analysis period from 2004 to 2013. RESULTS Overall, the EB analysis estimated a significant 22% reduction in the expected crash frequency due to the implementation of the ASSC system. The analysis indicated that the effect is slightly larger on property-damage-only (PDO) crashes (-23%) than on fatal-injury (FI) crashes (-18%) and that the highest reductions in crash frequency are expected for multi-vehicle FI crashes (-25%) and multi-vehicle PDO crashes (-31%). Furthermore, the results indicated that the ASSC system is more effective in reducing crash rates as traffic volume increases, and it is therefore strongly recommended as a countermeasure to improve safety on high-traffic-volume motorway sections.
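The core EB step behind a before-and-after study like this can be sketched in a few lines. The weight formula w = 1/(1 + k·μ), with overdispersion parameter k and safety-performance-function prediction μ, is the standard form; the numbers in the test are invented, not taken from the study:

```python
def eb_expected(observed, predicted, overdispersion):
    """Empirical Bayes estimate of a site's expected crash count:
    w = 1 / (1 + k * predicted); EB = w * predicted + (1 - w) * observed."""
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

def crash_modification_factor(observed_after, eb_before, adjustment=1.0):
    """Ratio of crashes observed after treatment to the EB estimate of the
    crashes that would have occurred without it; `adjustment` rescales the
    before-period estimate for differences in exposure or duration."""
    return observed_after / (eb_before * adjustment)
```

A CMF of 0.78 corresponds to the 22% reduction reported in the abstract; the full method also propagates the variance of both quantities to test significance.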
63. Dhatchayeny DR, Chung YH. Optical extra-body communication using smartphone cameras for human vital sign transmission. Appl Opt 2019; 58:3995-3999. [PMID: 31158149 DOI: 10.1364/ao.58.003995]
Abstract
This paper presents an optical extra-body communication (OEBC) system for the transmission of human vital signs over an optical camera communication link. The primary vital signs, such as pulse rate, respiratory rate, body temperature, blood pressure, and peripheral capillary oxygen saturation, are captured from the patient's body. The proposed OEBC system has body sensors installed on various parts of the body for detecting, processing, and communicating the vital sign data. A light-emitting diode (LED) hub, a 4×4 red, green, and blue (RGB) LED array, acts as a coordinator to collect the vital sign data from the sensors and transmit them through an optical link, while an Android-based smartphone camera is used as the receiver. The proposed OEBC employs color modulation, which assigns colors to each vital sign and transmits data based on RGB color combinations. The experiment and simulation results show that the scheme is able to transmit the vital sign data through the optical link with an acceptable bit error rate of 1.2×10⁻⁴ at a peak signal-to-noise ratio of 15 dB. The proposed OEBC can thus facilitate both reliable and convenient health monitoring in hospital environments.
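To make the color-modulation idea concrete, here is a deliberately simplified sketch in which each RGB channel is keyed on or off, giving 8 colors and therefore 3 bits per symbol, plus the bit-error-rate computation used to evaluate such a link. This mapping is a hypothetical illustration, not the paper's exact scheme:

```python
def encode(bits):
    """Pad to a multiple of 3 and map each bit triple to an (R, G, B) on/off state."""
    padded = bits + [0] * (-len(bits) % 3)
    return [tuple(padded[i:i + 3]) for i in range(0, len(padded), 3)]

def decode(symbols):
    """Flatten received (R, G, B) states back into a bit stream."""
    return [b for s in symbols for b in s]

def bit_error_rate(sent, received):
    """Fraction of bit positions that differ between sent and received streams."""
    return sum(s != r for s, r in zip(sent, received)) / len(sent)
```

In a real camera link, the decoder would first classify each detected LED color back to its nearest palette entry before the bit comparison.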
64. Park S, Mun S, Lee DW, Whang M. IR-camera-based measurements of 2D/3D cognitive fatigue in 2D/3D display system using task-evoked pupillary response. Appl Opt 2019; 58:3467-3480. [PMID: 31044844 DOI: 10.1364/ao.58.003467]
Abstract
This study evaluated a method for measuring three-dimensional (3D) cognitive fatigue based on the pupillary response. The technique was designed to overcome measurement burdens by using non-contact methods. The pupillary response is related to cognitive function by a neural pathway and may be an indicator of 3D cognitive fatigue. Twenty-six undergraduate students (including 14 women) watched both 2D and 3D versions of a video for 70 min. The participants experienced visual fatigue after viewing the 3D content: measures such as subjective rating, response time, event-related potential latency, heartbeat-evoked potential (HEP) alpha power, and task-evoked pupillary response (TEPR) latency were significantly different between conditions. Multitrait-multimethod matrix analysis indicated that the HEP and TEPR latency measures had stronger reliability and higher correlations with 3D cognitive fatigue than the other measures. TEPR latency may be useful for quantitatively determining 3D visual fatigue, as it can be evaluated easily with a non-contact method and without measurement burden.
65. Driggers R, Furxhi O, Vaca G, Reumers V, Vazimali M, Short R, Agrawal P, Lambrechts A, Charle W, Vunckx K, Arvidson C. Burmese python target reflectivity compared to natural Florida foliage background reflectivity. Appl Opt 2019; 58:D98-D104. [PMID: 31044871 DOI: 10.1364/ao.58.000d98]
Abstract
The Florida Everglades is infested with Burmese pythons as a result of the release of exotic pets in the 1980s. Current estimates are between 30,000 and 300,000 pythons, and the result is a severe decline in Everglades mammals: 90% reductions in raccoons, opossums, bobcats, and foxes, while the marsh rabbits are completely gone. The python population is increasing exponentially, with 20-50 eggs per snake and a life span of up to 20 years. Pythons have been captured in the Everglades with lengths of nearly 6 m. Researchers in the state of Florida are concerned that these pythons are (1) permanently damaging the Everglades, (2) migrating further north into populated areas of Florida, and (3) endangering wildlife, pets, and eventually, people. A number of sensing efforts have been attempted for large-area detection of pythons, with limited success. For example, infrared sensors have been applied to the problem, but the pythons are cold-blooded, so the infrared bands do not work well. Imec has leveraged its expertise and infrastructure in semiconductor processing to produce highly compact, higher-performance, and relatively inexpensive hyperspectral image sensors and camera systems. In this work, Imec teamed with the University of Florida and Extended Reality Systems to obtain hyperspectral reflectivity measurements of Burmese pythons along with natural Florida background foliage to determine bands or band combinations that may be exploited in the large-area detection of pythons. The bands investigated are the visible-near infrared (VisNIR) and shortwave infrared (SWIR) bands. The results show enough differences in the data that a single-band, inexpensive VisNIR camera may provide reasonable results and a two-band VisNIR/SWIR combination may provide higher performance. In this paper, we provide the VisNIR results.
66. Golan O, Piccinini AL, Hwang ES, De Oca Gonzalez IM, Krauthammer M, Khandelwal SS, Smadja D, Randleman JB. Distinguishing Highly Asymmetric Keratoconus Eyes Using Dual Scheimpflug/Placido Analysis. Am J Ophthalmol 2019; 201:46-53. [PMID: 30721688 DOI: 10.1016/j.ajo.2019.01.023]
Abstract
PURPOSE To identify the best metrics or combination of metrics that provide the highest predictive power between normal eyes and the clinically unaffected eye of patients with highly asymmetric keratoconus using data from a Dual Scheimpflug/Placido device. DESIGN Retrospective case-control study. METHODS Combined Dual Scheimpflug/Placido imaging was obtained from the Galilei G4 device (Ziemer Ophthalmic Systems AG, Port, Switzerland) in 31 clinically unaffected eyes with highly asymmetric keratoconus and 178 eyes from 178 patients with bilaterally normal corneal examinations that underwent uneventful LASIK with at least 1 year follow-up. Receiver operating characteristic (ROC) curves were generated to determine area under the curve (AUC), sensitivity, and specificity for 87 metrics, and logistic regression modeling was used to determine optimal variable combinations. RESULTS No individual metric achieved an AUC greater than 0.79. A combined model consisting of 9 metrics yielded an AUC of 0.96, with 90.3% sensitivity and 92.6% specificity. Among those 9 metrics included, 5 related to corneal pachymetry: Opposite Sector Index and Anterior Height BFS Z from the anterior surface, Asphericity and Asymmetry Index, Posterior Height BFS Z, and Posterior Height BFS X from the posterior surface. The strongest variable in the model was the thinnest point location on the horizontal (x) axis. CONCLUSION While individual metrics performed poorly, using a combination of metrics from the combined Dual Scheimpflug/Placido device provided a useful model for differentiating normal corneas from the clinically normal eyes of patients with highly asymmetric keratoconus. Pachymetry values were the most impactful metrics.
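The AUC values that this abstract reports for individual metrics and for the combined logistic model can be computed directly from classifier scores as the Mann-Whitney probability that a randomly chosen positive case outscores a negative one; a minimal sketch (illustrative only, not the authors' statistical pipeline):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(positive score > negative score), with ties counted as 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

In a study like this one, `pos_scores` would be the model outputs for the asymmetric-keratoconus fellow eyes and `neg_scores` those for the bilaterally normal controls.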
67. Hashemi H, Heydarian S, Khabazkhoob M, Yekta A, Emamian MH, Fotouhi A. Keratometry in children: Comparison between auto-refractokeratometer, rotating Scheimpflug imaging, and biograph. J Optom 2019; 12:99-110. [PMID: 30879970 PMCID: PMC6449769 DOI: 10.1016/j.optom.2018.12.002]
Abstract
PURPOSE To determine the agreement and validity of keratometric measurements in children with the Nidek ARK-510A auto-refractokeratometer compared to rotating Scheimpflug imaging with the Pentacam and the Lenstar LS 900 biograph. METHODS This study was conducted on 5620 schoolchildren aged 6-12 years in Shahroud, Iran. Minimum and maximum keratometry values and corneal astigmatism magnitude were compared by calculating the paired difference, the intraclass correlation coefficient, and the 95% limits of agreement (LoA) between devices. RESULTS After applying the exclusion criteria, 4215 right eyes were enrolled into the study. Mean minimum keratometry with the Nidek ARK-510A, Pentacam, and Lenstar was 43.13±1.51, 43.14±1.48, and 42.87±1.46 diopters (D), respectively, and mean maximum keratometry was 43.97±1.59, 44.00±1.56, and 43.75±1.54D, respectively. The Nidek ARK-510A overestimated minimum and maximum keratometry by 0.25±0.37D and 0.22±0.41D, respectively, compared to the Pentacam. The LoA between the Nidek ARK-510A and Pentacam for minimum and maximum keratometry were -0.98 to 0.47D and -1.02 to 0.57D, respectively. The LoA between the Nidek ARK-510A and Lenstar for minimum and maximum keratometry were -0.70 to 0.72D and -0.79 to 0.85D, respectively. The agreement between devices was best in emmetropes and worst in hyperopes. For the astigmatic vector components, agreement between devices was poor, but the best agreement was between the Nidek ARK-510A and the Pentacam. CONCLUSIONS Keratometry measurements with the Nidek ARK-510A were not significantly different from those with the Pentacam and Lenstar, and this device can be used in screening programs in emmetropes.
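The 95% limits of agreement quoted throughout this abstract are the Bland-Altman bounds, computed from the paired differences as mean ± 1.96 × SD; a minimal sketch:

```python
import math

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements:
    mean difference ± 1.96 × SD of the differences (sample SD, n - 1)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```

Here `a` and `b` would be the same eye's keratometry readings from the two devices being compared.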
|
68
|
Zhao J, Liu H, Cai W. Numerical and experimental validation of a single-camera 3D velocimetry based on endoscopic tomography. Applied Optics 2019; 58:1363-1373. [PMID: 30874020 DOI: 10.1364/ao.58.001363] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Tomographic velocimetry as a 3D technique has attracted substantial research interest in recent years due to the pressing need to investigate complex turbulent flows, which are inherently inhomogeneous. However, tomographic velocimetry usually suffers from high experimental costs, especially the formidable expense of multiple high-speed cameras and the excitation laser source. To overcome this limitation, a cost-effective technique called endoscopic tomographic velocimetry has been developed in this work. In this single-camera system, nine projections of the target 3D luminous field at consecutive time instants can be registered from different orientations with one camera and customized fiber bundles, whereas a classical tomographic velocimetry system would require the same number of cameras. Extensive numerical simulations were conducted with three inversion algorithms and two velocity calculation methods. According to RMS error analysis, the algebraic reconstruction technique outperformed the other two inversion algorithms, and the 3D optical flow method exhibited better performance than cross-correlation in terms of both accuracy and noise immunity. Proof-of-concept experiments were also performed to validate the developed system. The results suggested that the average reconstruction error of the artificially generated 3D velocity field was less than 6%, indicating good performance of the velocimetry system. Although this technique was demonstrated by reconstructing continuous luminous fields, it can be easily extended to discrete ones, which are typically adopted in particle image velocimetry. This technique is promising not only for flow diagnostics but also for other research areas such as biomedical imaging.
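The algebraic reconstruction technique that performed best in the simulations is, at its core, the Kaczmarz iteration: the estimate is repeatedly projected onto the hyperplane of each projection equation. A toy sketch on an invented 3-voxel system (not the paper's endoscopic geometry):

```python
import numpy as np

def art_reconstruct(A, p, n_iter=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): for each row i,
    nudge x toward satisfying A[i] @ x = p[i], sweeping all rows repeatedly."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            row = A[i]
            denom = row @ row
            if denom > 0:
                x += relax * (p[i] - row @ x) / denom * row
    return x

# Toy system: three "projections" (line integrals) of a 3-voxel field
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
truth = np.array([2.0, 1.0, 3.0])
p = A @ truth                    # simulated projection data
x = art_reconstruct(A, p)
print(np.round(x, 3))            # converges toward the true field
```

In the paper's setting each row of `A` would encode one line-of-sight integral through the 3D luminous field as seen by one fiber-bundle projection.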
|
69
|
Fernández E, Gorchs G, Serrano L. Use of consumer-grade cameras to assess wheat N status and grain yield. PLoS One 2019; 14:e0211889. [PMID: 30768611 PMCID: PMC6377115 DOI: 10.1371/journal.pone.0211889] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Accepted: 01/23/2019] [Indexed: 11/19/2022] Open
Abstract
Wheat grain yield (GY) and quality are particularly susceptible to nitrogen (N) fertilizer management. However, in rain-fed Mediterranean environments, crop N requirements might be variable due to the effects of water availability on crop growth. Therefore, in-season assessment of crop N status is needed in order to apply N fertilizer in a cost-effective way while reducing environmental impacts. Remote sensing techniques might be useful for assessing in-season crop N status. In this study, we evaluated the capacity of vegetation indices formulated using blue (B), green (G), red (R), and near-infrared (NIR) bands obtained with a consumer-grade camera to assess wheat N status. The chlorophyll content index (CCI) and fractional intercepted PAR (fIPAR) were measured at three phenological stages, and GY and biomass were determined at harvest. Indices formulated using RG bands and the normalized difference vegetation index (NDVI) were significantly correlated with both CCI and fIPAR at the different phenological stages (0.71 < r < 0.81, P < 0.01). Moreover, indices formulated using RG bands were capable of differentiating unfertilized and fertilized plots. In addition, RGB indices and NDVI were related to both crop biomass and GY at harvest, particularly when data were obtained at the initial grain-filling stage (r > 0.80, P < 0.01). Finally, RGB indices and NDVI obtained with a consumer-grade camera showed capacity comparable to that of a spectroradiometer for assessing chlorophyll content and predicting both crop biomass and GY at harvest. This study highlights the potential of standard and modified consumer-grade cameras for assessing canopy traits related to crop N status and GY in wheat under Mediterranean conditions.
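The indices involved can be computed directly from band values. A minimal sketch of NDVI and one common RG-band index (the normalized green-red difference index, NGRDI) on invented per-plot band means; the paper's exact RGB index formulations may differ:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ngrdi(green, red):
    """Normalized green-red difference index, an RGB-only analogue of NDVI
    computable from a standard consumer-grade camera."""
    green, red = np.asarray(green, float), np.asarray(red, float)
    return (green - red) / (green + red)

# Illustrative per-plot band means (arbitrary reflectance units, invented)
red   = np.array([0.08, 0.12])
green = np.array([0.15, 0.10])
nir   = np.array([0.45, 0.30])
print(ndvi(nir, red))     # higher for denser, greener canopy
print(ngrdi(green, red))  # negative when red exceeds green (senescent canopy)
```

Computing NDVI requires a modified camera with a NIR-sensitive band, whereas RG indices like NGRDI need only an unmodified camera, which is the practical appeal the abstract highlights.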
|
70
|
Serabyn E. Pupil segmentation in the light-field camera and its relation to 3D object positions and the reconstructed depth of field. Applied Optics 2019; 58:A273-A282. [PMID: 30874008 DOI: 10.1364/ao.58.00a273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Accepted: 01/01/2019] [Indexed: 06/09/2023]
Abstract
A ray-trace simulation of the light-field camera is used to calculate point source responses as a function of 3D source positions. Each point source location yields a unique and well-determined segmented-pupil pattern in the lenslet array's focal plane, with lateral object offsets changing the pattern's location and symmetry, and defocus distances altering the pattern's diameter. Segmented-pupil images can thus be used to infer point sources' 3D locations. Numerical simulations show that the centroids and widths of segmented-pupil images can be used to deduce lateral image positions to the size of a detector pixel, and image defocus to the accuracy of the lenslet focal length. In sparse-source cases, such as fluorescence microscopy or particle tracking, 3D point-source locations can thus be accurately determined from the observed point source response patterns. The degree of pupil segmentation also directly constrains the ability to refocus light-field images: for image defocus distances large enough that the number of pupil segments exceeds the number of pixels within a "whole" pupil behind a single lenslet, the image can no longer be brought to focus numerically, thus defining the light-field camera's depth of field. This constraint implies a depth of field larger than the usual imaging depth of focus by a factor of the number of detector pixels per lenslet, consistent with the general expectation.
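The closing scaling claim can be illustrated with invented numbers; the lenslet sub-image size and depth of focus below are assumptions for illustration, not values from the paper:

```python
# The reconstructed depth of field scales the ordinary imaging depth of
# focus by the number of detector pixels per lenslet (abstract's claim).
pixels_per_lenslet = 14 * 14      # e.g. a 14x14-pixel sub-image per lenslet (assumed)
depth_of_focus_um = 2.0           # ordinary imaging depth of focus, microns (assumed)
depth_of_field_um = depth_of_focus_um * pixels_per_lenslet
print(depth_of_field_um)          # 392.0
```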
|
71
|
Nguyen DT, Pham TD, Lee MB, Park KR. Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information. Sensors 2019; 19:410. [PMID: 30669531 PMCID: PMC6359417 DOI: 10.3390/s19020410] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Revised: 01/09/2019] [Accepted: 01/17/2019] [Indexed: 12/02/2022]
Abstract
Face-based biometric recognition systems are widely employed in places such as airports, immigration offices, and companies, as well as in applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who might bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection have utilized only spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection method based on both spatial and temporal information, using deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) along with handcrafted features. Through experiments on two public datasets, we demonstrate that the temporal information is sufficient for detecting attacks using face images. In addition, the handcrafted image features efficiently enhance the detection performance of the deep features, and the proposed method outperforms previous methods.
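A minimal sketch of the spatial-plus-temporal idea: per-frame spatial feature vectors pass through a small RNN, and the final hidden state is fused with handcrafted features before classification. Random weights stand in for the trained CNN-RNN described in the paper, so this shows only the data flow, not a working detector:

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_temporal_features(frame_feats, Wx, Wh):
    """Run a minimal tanh RNN over per-frame spatial feature vectors and
    return the final hidden state as the temporal descriptor."""
    h = np.zeros(Wh.shape[0])
    for x in frame_feats:                 # one feature vector per video frame
        h = np.tanh(Wx @ x + Wh @ h)
    return h

# Stand-ins: 10 frames x 8-dim "CNN" features, plus a 4-dim handcrafted vector
frame_feats = rng.normal(size=(10, 8))
handcrafted = rng.normal(size=4)
Wx, Wh = rng.normal(size=(6, 8)), rng.normal(size=(6, 6))

temporal = rnn_temporal_features(frame_feats, Wx, Wh)
fused = np.concatenate([temporal, handcrafted])   # deep + handcrafted features
w = rng.normal(size=fused.size)                   # untrained linear classifier
score = 1.0 / (1.0 + np.exp(-(w @ fused)))        # attack-vs-live probability
print(fused.shape, round(float(score), 3))
```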
|
72
|
Wang Y, Zhang X, Chen J, Cheng Z, Wang D. Camera sensor-based contamination detection for water environment monitoring. Environmental Science and Pollution Research International 2019; 26:2722-2733. [PMID: 30484049 DOI: 10.1007/s11356-018-3645-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Accepted: 10/30/2018] [Indexed: 06/09/2023]
Abstract
Water environment monitoring is of great importance to human health, ecosystem sustainability, and water transport. Unlike traditional water quality monitoring problems, this paper focuses on visual perception of the water environment. We first introduce the development of a customized aquatic sensor node equipped with an embedded camera sensor. Based on this platform, we present an efficient and holistic contamination detection approach, which can automatically adapt to the detection of floating debris in dynamic waters or the identification of salient regions in static waters. Our approach is designed on the basis of compressed sensing theory to give full consideration to the unique challenges of the water environment and the resource constraints on sensor nodes. Both laboratory and field experiments demonstrate that the proposed method can quickly and accurately detect various types of water pollutants, making it a better choice for camera sensor-based water environment monitoring than other methods.
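The paper's detector is built on compressed sensing, which is not reproduced here; as a much simpler stand-in, frame differencing illustrates the generic motion cue for detecting floating debris in dynamic waters:

```python
import numpy as np

def detect_floating_debris(prev, curr, thresh=30):
    """Flag pixels that changed markedly between consecutive frames - a
    generic motion cue for floating debris in dynamic water. (The paper's
    actual detector uses compressed sensing; this is a simplified stand-in.)"""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

prev = np.full((48, 48), 90, dtype=np.uint8)       # calm water background
curr = prev.copy()
curr[20:24, 30:34] = 200                           # drifting debris patch
mask = detect_floating_debris(prev, curr)
print(int(mask.sum()))                              # 16 changed pixels
```

On a resource-constrained sensor node, the appeal of a compressed-sensing formulation over naive per-pixel processing like this is that acquisition and computation can operate on far fewer samples.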
|
73
|
Nitzinger V, Held S, Kevane B, Eudave Y. Latino Health Perceptions in Rural Montana: Engaging Promotores de Salud Using Photovoice Through Facebook. Family & Community Health 2019; 42:150-160. [PMID: 30768480 PMCID: PMC6383787 DOI: 10.1097/fch.0000000000000213] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The primary purposes of this study were to use photovoice with Facebook to explore health perceptions and health needs among promotores living in rural Montana and to build community among geographically dispersed promotores. Seven promotores participated in a photovoice project where they uploaded photographs and shared comments in a private Facebook group. Emergent themes based on the promotores' health perceptions, discussions, and interviews were transcribed and coded. Findings of this study will be used to assess health perceptions and needs of the promotores and Latino community in rural Montana.
|
74
|
Bao Z, Sha J, Li X, Hanchiso T, Shifaw E. Monitoring of beach litter by automatic interpretation of unmanned aerial vehicle images using the segmentation threshold method. Marine Pollution Bulletin 2018; 137:388-398. [PMID: 30503448 DOI: 10.1016/j.marpolbul.2018.08.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Revised: 07/30/2018] [Accepted: 08/01/2018] [Indexed: 06/09/2023]
Abstract
This study was aimed at monitoring beach litter using an unmanned aerial vehicle (UAV) in the coastal city of Fuzhou, China. The data analysis shows that the optical images obtained by digital cameras on the UAV can help to identify and monitor beach litter using remote sensing and GIS technologies. The threshold method can effectively segment the UAV image in the beach area. It is useful for quickly monitoring the distribution of beach litter in the area of interest, and hence it can help to provide effective technical support for the investigation and assessment of coastal beach litter.
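One common way to realize a segmentation threshold on a grayscale UAV image is Otsu's method, which picks the intensity cut that maximizes between-class variance. A sketch on a synthetic "beach" frame; the paper's exact thresholding scheme may differ:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: return the intensity threshold that maximizes the
    between-class variance of the two resulting pixel classes."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = 0.0        # pixel count below/at the candidate threshold
    sum0 = 0.0      # intensity sum of that class
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                          # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)    # mean of the bright class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic "beach": dark sand (~40) with a small bright litter patch (~220)
rng = np.random.default_rng(0)
img = np.clip(rng.normal(40, 5, size=(64, 64)), 0, 255)
img[10:14, 20:24] = 220
t = otsu_threshold(img)
litter_mask = img > t            # pixels classified as litter
print(t, int(litter_mask.sum()))
```

The resulting binary mask is what GIS tools would then use to map the spatial distribution of litter along the beach.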
|
75
|
Marek AJ, Chu EY, Ming ME, Khan ZA, Kovarik CL. Piloting the Use of Smartphones, Reminders, and Accountability Partners to Promote Skin Self-Examinations in Patients with Total Body Photography: A Randomized Controlled Trial. Am J Clin Dermatol 2018; 19:779-785. [PMID: 30062632 DOI: 10.1007/s40257-018-0372-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
OBJECTIVE The aim of this study was to evaluate the use of a mobile application (app) in patients already using total body photography (TBP) to increase skin self-examination (SSE) rates, and to pilot the effectiveness of examination reminders and accountability partners. DESIGN Randomized controlled trial with a computer-generated randomization table to allocate interventions. SETTING University of Pennsylvania pigmented lesion clinic. PARTICIPANTS 69 patients aged 18 years or older with an iPhone/iPad who were already in possession of TBP photographs. INTERVENTION A mobile app loaded with digital TBP photos for all participants, and either (1) the mobile app only, (2) skin examination reminders, (3) an accountability partner, or (4) reminders and an accountability partner. MAIN OUTCOME MEASURE Change in SSE rates as assessed by enrollment and end-of-study surveys 6 months later. RESULTS Eighty-one patients completed informed consent; however, 12 patients did not complete trial enrollment procedures due to device incompatibility, leaving 69 patients who were randomized and analyzed (mean age 54.3 years, standard deviation 13.9 years). SSE rates increased significantly from 58% at baseline to 83% at 6 months (odds ratio 2.64, 95% confidence interval 1.20-4.09), with no difference among the intervention groups. The group with examination reminders alone had the highest overall satisfaction (94%), and the group with accountability partners alone had the lowest (71%). CONCLUSION A mobile app alone, or with reminders and/or accountability partners, was found to be an effective tool that can help increase SSE rates. Skin examination reminders may help provide a better overall experience for a subset of patients. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT02520622.
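The change in SSE rates can be summarized as an odds ratio. A naive unpaired calculation from the reported proportions is sketched below; the paper's 2.64 presumably reflects its paired before/after analysis, so the numbers differ:

```python
def odds_ratio(p1, p2):
    """Odds ratio comparing proportion p2 against p1:
    (p2 / (1 - p2)) / (p1 / (1 - p1))."""
    return (p2 / (1 - p2)) / (p1 / (1 - p1))

# SSE rates from the trial: 58% at baseline, 83% at 6 months
or_naive = odds_ratio(0.58, 0.83)
print(round(or_naive, 2))   # 3.54 - the naive unpaired value, not the
                            # paper's model-based estimate of 2.64
```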
|