76. Peller J, Farahi F, Trammell SR. Hyperspectral imaging system based on a single-pixel camera design for detecting differences in tissue properties. Applied Optics 2018;57:7651-7658. PMID: 30462028. DOI: 10.1364/ao.57.007651.
Abstract
Optical spectroscopy can be used to distinguish between healthy and diseased tissue. This study discusses the design and testing of a single-pixel hyperspectral imaging (HSI) system that uses autofluorescence emission from collagen (400 nm) and nicotinamide adenine dinucleotide phosphate (475 nm), along with differences in optical reflectance spectra, to differentiate between healthy and thermally damaged tissue. The changes in protein autofluorescence and reflectance due to thermal damage were studied in ex vivo porcine tissue models. Thermal lesions were created in porcine skin (n=12) and liver (n=15) samples using an IR laser. The damaged regions were clearly visible in the hyperspectral images. The sizes of the thermally damaged regions measured via HSI were compared to the sizes measured in white-light images and by physical measurement, and good agreement was found among the three. The HSI system can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during surgery or biopsy and cancer diagnosis and staging.
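For readers unfamiliar with the single-pixel architecture, the sketch below illustrates the generic measurement-and-inversion loop such systems rely on; the pattern basis, image size, and scene are illustrative assumptions, not details taken from this paper.

```python
import numpy as np
from scipy.linalg import hadamard

# Generic single-pixel imaging sketch: project N structured patterns,
# record one detector reading per pattern, then invert y = P @ x.
n = 16                      # hypothetical image side length
N = n * n                   # number of patterns = number of pixels
P = hadamard(N)             # +/-1 Hadamard patterns, one per row

x_true = np.random.rand(N)  # stand-in scene (flattened n x n image)
y = P @ x_true              # one photodiode reading per projected pattern

# Hadamard matrices satisfy P @ P.T = N * I, so inversion is a transpose.
x_rec = (P.T @ y) / N
assert np.allclose(x_rec, x_true)
```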
77. Beltran A, Dadabhoy H, Ryan C, Dholakia R, Jia W, Baranowski J, Sun M, Baranowski T. Dietary Assessment with a Wearable Camera among Children: Feasibility and Intercoder Reliability. J Acad Nutr Diet 2018;118:2144-2153. PMID: 30115556. DOI: 10.1016/j.jand.2018.05.013.
Abstract
BACKGROUND The eButton, a multisensor device worn on the chest, uses a camera to passively capture images of everything in front of the child throughout the day. These images can be analyzed to provide a passive method of dietary intake assessment. OBJECTIVE This study assessed the eButton's feasibility and intercoder reliability for dietary intake assessment. DESIGN Children were recruited in the summer and fall of 2015 in Houston, TX, to wear the eButton for 2 full days of dietary images; each child-parent dyad then participated in a following-day interview to verify what dietitians recorded from the images. PARTICIPANTS/SETTING Thirty 9- to 13-year-old children participated on days convenient to them. MAIN OUTCOME MEASURES Two dietitians independently and manually reviewed the images to identify eating events, the foods in those events, and portion sizes. STATISTICAL ANALYSES PERFORMED Descriptive statistics of agreements and disagreements were calculated between dietitians and with children; t tests and Bland-Altman plots of differences in total kilocalories were calculated between dietitians, and between initial dietitian estimates and those finalized after the verification interviews. RESULTS The dietitians agreed on the identity of 60.5% of the 1,026 foods, disagreed on 28.6% of the foods, and disagreed on the names of 10.8%. After the verification interviews, the dietitians agreed with the child-parent dyads on the identity of 77.0% of the 921 foods; the dyads identified 12.4% of the day's foods when images were not available or not clear, clarified that 5.4% of the identified foods were not consumed by the child, and clarified the identity of 5.2% of the foods. A software-based approach (three-dimensional wire mesh) could be used to estimate portion size for 24% of the foods; professional judgment was required for 67.8%. Mean caloric intakes per day were not statistically significantly different between dietitians but did differ between dietitians and child-parent dyads in total and on day 2. CONCLUSIONS This early test of intercoder reliability for an all-day image method of dietary intake assessment obtained an intraclass correlation coefficient of 0.67 between the two dietitians processing the images. A following-day verification interview with the child and parent was necessary to ensure completeness of estimates. Several feasibility problems occurred, which may be remedied with additional participant and dietitian training and further technological development.
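As a rough illustration of the statistics named above, the sketch below computes a paired t test and the Bland-Altman bias and limits of agreement between two coders' daily kilocalorie totals; the numbers are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-child daily energy estimates (kcal) from two dietitians.
kcal_d1 = np.array([1850, 2100, 1675, 1990, 2240, 1760])
kcal_d2 = np.array([1900, 2050, 1700, 2105, 2180, 1820])

# Paired t test: are the two coders' totals systematically different?
t, p = stats.ttest_rel(kcal_d1, kcal_d2)

# Bland-Altman statistics: bias and 95% limits of agreement.
diff = kcal_d1 - kcal_d2
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"t={t:.2f}, p={p:.3f}, bias={bias:.1f} kcal, LoA={loa}")
```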
78. Nguyen DT, Pham TD, Lee YW, Park KR. Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor. Sensors 2018;18:2601. PMID: 30096832. PMCID: PMC6111611. DOI: 10.3390/s18082601.
Abstract
Iris recognition systems have been used in high-security applications because of their high recognition rate and the distinctiveness of iris patterns. However, as recent studies report, an iris recognition system can be fooled by artificial iris patterns, reducing its security level. The accuracy of previous presentation attack detection research is limited because it used only features extracted from the global iris region image. To overcome this problem, we propose a new presentation attack detection method for iris recognition that combines features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results from the two kinds of image features are fused at both the feature level and the score level to enhance overall detection performance. Through extensive experiments on two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of the proposed method, which yields smaller detection errors than those reported in previous studies.
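A minimal sketch of the two fusion strategies described above, assuming hypothetical CNN feature vectors and an illustrative fusion weight (the paper's actual networks, features, and weights are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical CNN feature vectors for the local and global iris regions.
rng = np.random.default_rng(0)
X_local, X_global = rng.normal(size=(200, 64)), rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)           # 0 = bona fide, 1 = attack

svm_local = SVC(probability=True).fit(X_local, y)
svm_global = SVC(probability=True).fit(X_global, y)

# Score-level fusion: weighted sum of the two classifiers' attack scores.
w = 0.5                                    # illustrative weight, not from the paper
s = (w * svm_local.predict_proba(X_local)[:, 1]
     + (1 - w) * svm_global.predict_proba(X_global)[:, 1])
decision = s > 0.5                         # flag presentation attacks

# Feature-level fusion would instead concatenate the two feature vectors
# and train a single SVM on the result:
X_fused = np.hstack([X_local, X_global])
```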
79. Milocco A, Conroy S, Popovichev S, Sergienko G, Huber A. Neutron radiation damage in CCD cameras at the Joint European Torus (JET). Radiation Protection Dosimetry 2018;180:109-114. PMID: 29087509. DOI: 10.1093/rpd/ncx220.
Abstract
The neutron and gamma radiation in large fusion reactors is responsible for damage to the charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations support good practice in the operation of these video systems.
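The ASTM E722 quantity mentioned above folds the measured neutron spectrum with a silicon displacement-damage function and normalizes at 1 MeV; a minimal sketch follows, with a placeholder energy grid, fluences, and damage values rather than the standard's tables.

```python
import numpy as np

# Sketch of the ASTM E722 "equivalent 1 MeV neutron fluence in silicon":
#   phi_eq = sum_i phi(E_i) * F_D(E_i) / F_D(1 MeV)
# where F_D is the tabulated displacement-damage function for Si.
# All numbers below are illustrative placeholders, NOT the standard's table.
E = np.array([0.1, 0.5, 1.0, 2.0, 5.0])           # neutron energy bins (MeV)
phi = np.array([1e8, 5e8, 8e8, 3e8, 1e8])         # group fluence (n/cm^2)
F_D = np.array([20.0, 60.0, 95.0, 120.0, 150.0])  # placeholder damage values
F_D_1MeV = 95.0                                   # placeholder value at 1 MeV

phi_eq = (phi * F_D).sum() / F_D_1MeV
print(f"Equivalent 1 MeV fluence: {phi_eq:.3e} n/cm^2")
```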
80. Taniguchi K, Nishikawa A. Mouthwitch: A Novel Head Mount Type Hands-Free Input Device that Uses the Movement of the Temple to Control a Camera. Sensors 2018;18:2273. PMID: 30011872. PMCID: PMC6069124. DOI: 10.3390/s18072273.
Abstract
We have developed an interface (mouthwitch) with which pictures can be taken with a head-mounted camera, hands-free, simply by opening the mouth continuously for approximately one second and then closing it again. The mouthwitch uses a sensor, equipped with an LED and a phototransistor and placed on the temple, to optically measure the changes in the shape of the temple that occur when the mouth is opened and closed. Eight test subjects (males and females aged between 21 and 44 years) performed evaluation tests using the mouthwitch while resting, speaking, chewing, walking, and running. The results showed that all test subjects were able to open and close the mouth, and the measurements of the accompanying temple shape changes were highly reproducible. In the verification tests, the average accuracy across the eight subjects was 100% when resting, chewing, or walking, and 99.8% when speaking or running. The average precision was 100% for all activities, and the average recall was 100% when resting or chewing, 98.8% when speaking, 97.5% when walking, and 87.5% when running.
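For reference, the reported figures follow the standard definitions of accuracy, precision, and recall; a minimal sketch with hypothetical trial counts:

```python
# Sketch of the reported detection metrics from per-trial outcomes.
# Counts below are hypothetical, not the paper's raw data.
TP, FP, FN, TN = 35, 0, 5, 60    # detected opens, false triggers, misses, quiet

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)       # 100% precision means no false triggers
recall = TP / (TP + FN)          # recall drops when mouth openings are missed
print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, recall={recall:.3f}")
```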
81. Prakalapakorn SG, Freedman SF, Hutchinson AK, Saehout P, Cetinkaya-Rundel M, Wallace DK, Kulvichit K. Real-World Simulation of an Alternative Retinopathy of Prematurity Screening System in Thailand: A Pilot Study. J Pediatr Ophthalmol Strabismus 2018;55:245-253. PMID: 29809267. PMCID: PMC6482815. DOI: 10.3928/01913913-20180327-04.
Abstract
PURPOSE To evaluate an alternative retinopathy of prematurity (ROP) screening system that identifies infants meriting examination by an ophthalmologist in a middle-income country. METHODS The authors hypothesized that grading posterior pole images for the presence of pre-plus or plus disease has high sensitivity for identifying infants with type 1 ROP requiring treatment. Part 1 of the study evaluated the feasibility of having a non-ophthalmologist health care worker obtain retinal images of prematurely born infants, using a non-contact retinal camera (Pictor; Volk Optical, Inc., Mentor, OH), of sufficient quality to grade for pre-plus or plus disease. Part 2 investigated the accuracy of grading these images to identify infants with type 1 ROP. The authors prospectively recruited infants at Chulalongkorn University Hospital (Bangkok, Thailand). On days when infants underwent routine ROP screening, a trained health care worker imaged their retinas with the Pictor. Two ROP experts graded these serial images from a remote location for image gradability and posterior pole disease. RESULTS Fifty-six infants were included. Overall, 69.4% of infant imaging sessions were gradable. Among gradable images, the sensitivity for identifying an infant with type 1 ROP by grading for the presence of pre-plus or plus disease was 1.0 for both graders (95% confidence interval [CI]: 0.31 to 1.0 for grader 1; 95% CI: 0.40 to 1.0 for grader 2). The specificity was 0.93 (95% CI: 0.76 to 0.99) for grader 1 and 0.74 (95% CI: 0.53 to 0.88) for grader 2. CONCLUSIONS It was feasible for a trained non-ophthalmologist health care worker to obtain retinal images of infants, using the Pictor, of sufficient quality to identify infants with type 1 ROP.
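The wide confidence intervals around a sensitivity of 1.0 reflect the small number of type 1 ROP cases; a sketch of an exact (Clopper-Pearson) binomial interval, with a hypothetical case count of the same order as the reported CIs suggest, shows the effect.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI; returns (lo, hi) for k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 3/3 correct detections (hypothetical count) already pins sensitivity at
# 1.0 but leaves a very wide interval:
print(clopper_pearson(3, 3))   # ~ (0.29, 1.0)
```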
82. Lai KHW, Lee RPW, Yiu EPF. Ultrawide-field Retinal Selfie by Smartphone, High-definition Television, and a Novel Clip-On Lens. Ophthalmology 2018;125:1027. PMID: 29935662. DOI: 10.1016/j.ophtha.2018.03.027.
83. Nishiguchi S, Wada N, Yamashiro H, Ishibashi H, Takeuchi I. Continuous recordings of the coral bleaching process on Sesoko Island, Okinawa, Japan, over about 50 days using an underwater camera equipped with a lens wiper. Marine Pollution Bulletin 2018;131:422-427. PMID: 29886967. DOI: 10.1016/j.marpolbul.2018.04.020.
Abstract
The colours of the hermatypic corals Porites sp. and Acropora cytherea at Sesoko Island, Okinawa, Japan, were photographed continuously from 19 July to 6 September 2016 by an underwater camera equipped with a lens wiper. The average seawater temperature during the study period was 29.9 °C. The daily average seawater temperature (DAST) exceeded 30.0 °C until 23 August 2016, with a maximum of 31.2 °C recorded on 2 August 2016. Red, green, and blue (RGB) values of these corals were analysed from the photographs taken at 14:00 each day. The RGB values of Porites sp. were stable throughout the observation period, while those of A. cytherea gradually increased (i.e. shifted toward white) until the beginning of September. The present study demonstrates the usefulness of RGB analysis of photographs taken by an underwater camera equipped with a lens wiper for monitoring coral bleaching.
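A minimal sketch of the RGB analysis described above: average the channel values inside a fixed region of interest for each daily 14:00 frame. The directory name and ROI coordinates are hypothetical.

```python
import numpy as np
from PIL import Image
from pathlib import Path

# Average R, G, B inside a region covering one coral colony, per daily frame.
y0, y1, x0, x1 = 200, 260, 340, 400          # hypothetical ROI (pixels)

series = []
for frame in sorted(Path("frames_1400").glob("*.jpg")):
    rgb = np.asarray(Image.open(frame).convert("RGB"), dtype=float)
    roi = rgb[y0:y1, x0:x1]
    series.append(roi.reshape(-1, 3).mean(axis=0))  # mean R, G, B per day

series = np.array(series)   # rising values = colony shifting toward white
```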
84. Grujić D, Vasiljević D, Pantelić D, Tomić L, Stamenković Z, Jelenković B. Infrared camera on a butterfly's wing. Optics Express 2018;26:14143-14158. PMID: 29877457. DOI: 10.1364/oe.26.014143.
Abstract
Thermal cameras were invented long ago, but their working principles and complex technologies still limit their resolution, total number of pixels, and sensitivity. We address the problem of finding a new sensing mechanism that surpasses the existing limits of thermal radiation detection. Here we reveal such a mechanism on the butterfly wing, whose wing-scales act as pixels of an imaging array on a thermal detector. We observed that the tiniest features of a Morpho butterfly wing-scale match the mean free path of air molecules at atmospheric pressure, a condition under which radiation-induced heating produces an additional, thermophoretic force that deforms the wing-scales. The resulting deformation field was imaged holographically with mK temperature sensitivity and 200 Hz response speed. By imitating butterfly wing-scales, the effect can be further amplified through a suitable choice of material, working pressure, sensor design, and detection method. The technique is universally applicable to any nano-patterned, micro-scale system and to other spectral ranges, such as UV and terahertz.
85. Liu J, Yuan Y, Zhou Y, Zhu X, Syed TN. Experiments and Analysis of Close-Shot Identification of On-Branch Citrus Fruit with RealSense. Sensors 2018;18:1510. PMID: 29751594. PMCID: PMC5982123. DOI: 10.3390/s18051510.
Abstract
Fruit recognition based on depth information has become a popular research topic due to its advantages. However, existing equipment and methods cannot meet the requirements of rapid and reliable close-shot recognition and location of fruits for robotic harvesting. To solve this problem, we propose a recognition algorithm for citrus fruit based on RealSense. The method effectively uses depth point-cloud data in a close-shot range of 160 mm and the different geometric features of fruit and leaf, recognizing fruits from the intersection curve cut by a depth sphere. Experiments on close-shot recognition of six fruit varieties under different conditions were carried out. Detection rates under slight occlusion and adhesion ranged from 80% to 100%. However, severe occlusion and adhesion still strongly affect the overall success rate of on-branch fruit recognition, which was 63.8%. Fruit size has a more noticeable impact on the detection success rate. Moreover, because detection uses close-shot near-infrared sensing, there was no obvious difference in recognition between bright and dark conditions. The advantages of close-shot limited target detection with RealSense, fast foreground and background removal, and the simplicity and high precision of the algorithm may contribute to real-time vision-servo operation of harvesting robots.
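The depth-sphere idea can be sketched as follows: intersect the point cloud with a thin spherical shell centred on the camera, then score how circular the resulting cut is (closed and round for a fruit, open and elongated for a leaf). The radius, tolerance, and threshold below are illustrative, not the paper's values.

```python
import numpy as np

def sphere_cut(points, radius, tol=2.0):
    """points: (N, 3) array in mm, camera at origin; returns shell points."""
    d = np.linalg.norm(points, axis=1)
    return points[np.abs(d - radius) < tol]

def roundness(curve_pts):
    """Ratio of the two principal extents of the cut; ~1 for a circle."""
    spread = curve_pts[:, :2] - curve_pts[:, :2].mean(axis=0)
    s = np.linalg.svd(spread, compute_uv=False)
    return s[1] / s[0]

# cloud = ...  # (N, 3) RealSense points within the ~160 mm close-shot range
# cut = sphere_cut(cloud, radius=140.0)
# is_fruit_like = roundness(cut) > 0.6   # threshold is a placeholder
```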
86. Ennis R, Schiller F, Toscani M, Gegenfurtner KR. Hyperspectral database of fruits and vegetables. Journal of the Optical Society of America A 2018;35:B256-B266. PMID: 29603941. DOI: 10.1364/josaa.35.00b256.
Abstract
We have built a hyperspectral database of 42 fruits and vegetables. Both the outside (skin) and inside of the objects were imaged. We used a Specim VNIR HS-CL-30-V8E-OEM mirror-scanning hyperspectral camera and took pictures at a spatial resolution of ∼57 px/deg by 800 pixels at a wavelength resolution of ∼1.12 nm. A stable, broadband illuminant was used. Images and software are freely available on our webserver (http://www.allpsych.uni-giessen.de/GHIFVD; pronounced "gift"). We performed two kinds of analyses on these images. First, when comparing the insides and outsides of the objects, we observed that the insides were lighter than the skins, and that the hues of the insides and skins were significantly correlated (circular correlation=0.638). Second, we compared the color distribution within each object to corresponding human color discrimination thresholds. We found a significant correlation (0.75) between the orientation of ellipses fit to the chromaticity distributions of our fruits and vegetables with the orientations of interpolated MacAdam discrimination ellipses. This indicates a close relationship between sensory processing and the characteristics of environmental objects.
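The circular correlation reported above accounts for hue being an angular variable; below is a sketch of the Jammalamadaka-SenGupta circular correlation coefficient applied to hypothetical skin and flesh hue angles.

```python
import numpy as np

def circular_corr(a, b):
    """Jammalamadaka-SenGupta circular correlation of two angle arrays (rad)."""
    abar = np.arctan2(np.sin(a).mean(), np.cos(a).mean())   # circular mean
    bbar = np.arctan2(np.sin(b).mean(), np.cos(b).mean())
    sa, sb = np.sin(a - abar), np.sin(b - bbar)
    return (sa * sb).sum() / np.sqrt((sa**2).sum() * (sb**2).sum())

# Hypothetical skin vs flesh hue angles (radians) for 42 objects:
rng = np.random.default_rng(1)
skin = rng.uniform(0, 2 * np.pi, 42)
flesh = (skin + rng.normal(0, 0.5, 42)) % (2 * np.pi)   # correlated hues
print(circular_corr(skin, flesh))
```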
87. Cho SS, Zeh R, Pierce JT, Salinas R, Singhal S, Lee JYK. Comparison of Near-Infrared Imaging Camera Systems for Intracranial Tumor Detection. Mol Imaging Biol 2018;20:213-220. PMID: 28741043. PMCID: PMC11145178. DOI: 10.1007/s11307-017-1107-5.
Abstract
PURPOSE Distinguishing neoplasm from normal brain parenchyma intraoperatively is critical for the neurosurgeon. 5-Aminolevulinic acid (5-ALA) has been shown to improve gross total resection and progression-free survival but has limited availability in the USA. Near-infrared (NIR) fluorescence has advantages over visible-light fluorescence, with greater tissue penetration and reduced background fluorescence. In order to prepare for the increasing number of NIR fluorophores that may be used in molecular imaging trials, we compared a state-of-the-art neurosurgical microscope (System 1) to one of the commercially available NIR visualization platforms (System 2). PROCEDURES Serial dilutions of indocyanine green (ICG) were imaged with both systems in the same environment. Each system's sensitivity and dynamic range for NIR fluorescence were documented and analyzed. In addition, brain tumors from six patients were imaged with both systems and analyzed. RESULTS In vitro, System 2 demonstrated greater ICG sensitivity and detection range (System 1: 1.5-251 μg/l versus System 2: 0.99-503 μg/l). Similarly, in vivo, System 2 demonstrated a signal-to-background ratio (SBR) of 2.6 ± 0.63 before dura opening, 5.0 ± 1.7 after dura opening, and 6.1 ± 1.9 after tumor exposure. In contrast, System 1 could not easily detect ICG fluorescence prior to dura opening, with an SBR of 1.2 ± 0.15. After the dura was reflected, the SBR increased to 1.4 ± 0.19, and upon exposure of the tumor it increased to 1.8 ± 0.26. CONCLUSION Dedicated NIR imaging platforms can outperform conventional microscopes in intraoperative NIR detection. Future microscopes with improved NIR detection capabilities could enhance the use of NIR fluorescence to detect neoplasm and improve patient outcomes.
88. Massei G, Coats J, Lambert MS, Pietravalle S, Gill R, Cowan D. Camera traps and activity signs to estimate wild boar density and derive abundance indices. Pest Management Science 2018;74:853-860. PMID: 29024317. DOI: 10.1002/ps.4763.
Abstract
BACKGROUND Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity, and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. RESULTS PAIs obtained using the different methods showed similar patterns. We found significant between-year differences in wild boar abundance using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km2. Increasing the density of camera traps above nine per km2 did not increase the precision of the density estimate. CONCLUSION PAIs based on the number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km2 are sufficient to estimate mean wild boar density.
89. Chen L, Parsons AM, Aria AB, Ciurea AM, Patel AB, Chan C, Griffin JR, Nguyen TH, Migden MR. Surgical site identification with personal digital device: A prospective pilot study. J Am Acad Dermatol 2018. PMID: 29524583. DOI: 10.1016/j.jaad.2018.02.069.
Abstract
BACKGROUND Various means to facilitate accurate biopsy site identification have been proposed. OBJECTIVE To determine the accuracy of biopsy site identification using photographs taken with a patient's digital device by a dermatologist versus professional medical photography. METHODS Photographs of circled biopsy sites were taken with patients' personal digital devices by the principal investigator (PI). Another set of photographs was taken by a professional photographer. Secondary photographs were then taken of the biopsy site locations pointed to by the staff and by the PI, on the basis of the personal digital device image and the professional medical photography, respectively. From the secondary photographs, 2 independent dermatologists determined whether the skin biopsy locations pointed out by the staff were consistent with those pointed out by the PI. RESULTS Per dermatologist A, the staff correctly identified all 53 biopsy sites. Per dermatologist B, the staff were correct on 51 of 53 observations. Dermatologist C, the final arbiter, concurred with dermatologist A on the 2 cases in which dermatologist B was not certain of the location of the biopsy site. LIMITATIONS The mean interval from initial biopsy to reidentification of the site was 36.2 days. CONCLUSION Utilizing patients' personal digital devices is a cost-effective, Health Insurance Portability and Accountability Act-compliant, and readily available means of identifying skin biopsy sites.
90. Raber M, Patterson M, Jia W, Sun M, Baranowski T. Utility of eButton images for identifying food preparation behaviors and meal-related tasks in adolescents. Nutr J 2018;17:32. PMID: 29477143. PMCID: PMC6389239. DOI: 10.1186/s12937-018-0341-2.
Abstract
BACKGROUND Food preparation skills may encourage healthy eating. Traditional assessment of child food preparation relies on self- or parent-proxy reporting, which is prone to error. The eButton is a wearable all-day camera that holds promise as an objective, passive method for measuring child food preparation practices. PURPOSE This paper explores the feasibility of the eButton for reliably capturing home food preparation behaviors and practices in a sample of pre- and early adolescents (ages 9 to 13). METHODS This is a secondary analysis of two eButton pilot projects evaluating the dietary intake of pre- and early adolescents in or around Houston, Texas. Food preparation behaviors were coded into seven major categories: browsing, altering food/adding seasoning, food media, meal-related tasks, prep work, cooking, and observing. Inter-coder reliability was measured using Cohen's kappa and percent agreement. RESULTS Analysis was completed on data from 31 participants. The most common activity was browsing in the pantry or fridge. Few participants demonstrated any food preparation work beyond unwrapping food packages and combining two or more ingredients; actual cutting or measuring of foods was rare. CONCLUSIONS Although previous research suggests children who "help" prepare meals may obtain some dietary benefit, accurate assessment tools for food preparation behavior are lacking. The eButton offers a feasible approach to measuring food preparation behavior among pre- and early adolescents. Follow-up research exploring the validity of this method in a larger sample and comparing cooking behavior with dietary intake is needed.
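A minimal sketch of the inter-coder reliability computation named above, using hypothetical clip labels over the seven behaviour categories:

```python
from sklearn.metrics import cohen_kappa_score

# Two coders assign each eButton clip one of the seven behaviour categories.
cats = ["browsing", "altering", "media", "meal_task", "prep", "cooking", "observing"]
coder1 = ["browsing", "prep", "browsing", "cooking", "meal_task", "browsing"]
coder2 = ["browsing", "prep", "altering", "cooking", "meal_task", "browsing"]

kappa = cohen_kappa_score(coder1, coder2, labels=cats)
pct = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
print(f"kappa={kappa:.2f}, percent agreement={pct:.0%}")
```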
91. Yuan X, Pu Y. Parallel lensless compressive imaging via deep convolutional neural networks. Optics Express 2018;26:1962-1977. PMID: 29401917. DOI: 10.1364/oe.26.001962.
Abstract
We report a parallel lensless compressive imaging system that enjoys real-time reconstruction using deep convolutional neural networks. A prototype composed of a low-cost LCD, 16 photodiodes, and isolation chambers has been built. Each of the 16 channels captures a 16×16-pixel fraction of the scene, and the channels operate in parallel. An efficient inversion algorithm based on deep convolutional neural networks is developed to reconstruct the image. We demonstrate encouraging results using only 2% of the pixel count in measurements per sensor (e.g., 5 measurements for a block of 16×16 pixels) for digits, and around 10% measurements per sensor for facial images.
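The measurement budget works out as follows: each 16×16 channel has 256 pixels, so a 2% compression ratio means about 5 coded readings per channel. The sketch below uses a generic random binary mask as a stand-in for the LCD patterns; the CNN reconstruction step is not reproduced.

```python
import numpy as np

n = 16 * 16                       # pixels per channel
m = round(0.02 * n)               # 5 measurements (2% compression ratio)

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(m, n)).astype(float)  # LCD on/off patterns
x = rng.random(n)                 # flattened 16x16 scene block
y = A @ x                         # 5 photodiode readings for this channel
# Recovering x from the underdetermined y = A @ x is the ill-posed step
# the paper's CNN learns to invert.
```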
92. Liu S, Xing Z, Wang Z, Tian S, Jahun FR. Development of machine-vision system for gap inspection of muskmelon grafted seedlings. PLoS One 2017;12:e0189732. PMID: 29267293. PMCID: PMC5739424. DOI: 10.1371/journal.pone.0189732.
Abstract
Grafting robots have been developed worldwide, but some auxiliary tasks, such as inspecting the gaps of grafted seedlings, are still done by hand. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image-acquisition system consists of a CCD camera, a lens, and a front white lighting source. Images of the inspected gap were processed and analyzed with HALCON 12.0 software. The recognition algorithm is based on deformable template matching. First, a template is created from an image of a qualified grafted-seedling gap. The gap image of each grafted seedling is then compared with this template to determine their matching degree, which ranges from 0 to 1 according to their similarity: the less similar the gap is to the template, the smaller the matching degree. Finally, the gap is output as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise it is judged qualified. One hundred grafted muskmelon seedlings were inspected to test the system. Results showed that the machine-vision system agreed with human visual inspection on gap qualification in 98% of cases, and its inspection speed reached 15 seedlings per minute. With this system, the gap inspection process in grafting can be fully automated, a key step toward fully automatic grafting robots.
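The authors implemented the matching in HALCON; the sketch below reproduces only the accept/reject logic, using OpenCV's normalized cross-correlation as a stand-in similarity score. File names are hypothetical.

```python
import cv2

# Stand-in for HALCON's deformable template matching: score the seedling
# image against a template of a qualified gap, then apply the paper's rule.
template = cv2.imread("qualified_gap_template.png", cv2.IMREAD_GRAYSCALE)
image = cv2.imread("grafted_seedling.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best, _, _ = cv2.minMaxLoc(scores)   # best match score in [-1, 1]

# Same decision rule as the paper: below 0.58 (or no match) -> unqualified.
print("qualified" if best >= 0.58 else "unqualified")
```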
93. Yaghoobi Ershadi N. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera. PLoS One 2017;12:e0189145. PMID: 29261719. PMCID: PMC5738070. DOI: 10.1371/journal.pone.0189145.
Abstract
Traffic surveillance systems are of interest to many researchers seeking to improve traffic control and reduce the risk posed by accidents. Many published works in this area address vehicle detection only under normal conditions. In practice, the camera may vibrate due to wind or bridge movement, and detecting and tracking vehicles is very difficult in bad winter weather (snow, rain, wind), in the dusty conditions of arid and semi-arid regions, or at night. Estimating vehicle speed under such complicated weather conditions is equally important. In this paper, we improve our method for tracking and counting vehicles in dusty weather with a vibrating camera. We used a background-subtraction strategy combined with extra processing to segment vehicles; here, the extra processing analyzed headlight size, location, and area. Tracking was done between consecutive frames via a generalized particle filter to detect vehicles and pair their headlights using connected-component analysis. Vehicle counting was performed from the pairing result, and speed was estimated by computing the distance traveled by each blob's centroid between two frames and dividing it by the inter-frame time obtained from the video. The proposed method was tested on several video surveillance records under different conditions, such as dusty or foggy weather, a vibrating camera, and roads with medium traffic volumes. The results showed that the new method performed better than our previously published method and other methods, including the Kalman filter and Gaussian-model approaches, across these traffic conditions.
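The speed computation described above reduces to centroid displacement divided by inter-frame time; a sketch follows, where the frame rate and the pixel-to-metre scale (which in practice comes from road calibration) are placeholders.

```python
import numpy as np

def speed_kmh(c1, c2, fps=25.0, metres_per_px=0.05):
    """c1, c2: (x, y) blob centroids in consecutive frames."""
    d_px = np.hypot(c2[0] - c1[0], c2[1] - c1[1])   # displacement in pixels
    mps = d_px * metres_per_px * fps                # distance / (1/fps) s
    return mps * 3.6

print(speed_kmh((120, 300), (128, 296)))  # ~40 km/h at these placeholder settings
```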
94. Cohen EJ, Bravi R, Minciacchi D. 3D reconstruction of human movement in a single projection by dynamic marker scaling. PLoS One 2017;12:e0186443. PMID: 29045439. PMCID: PMC5646814. DOI: 10.1371/journal.pone.0186443.
Abstract
Three-dimensional (3D) reconstruction of movement from video is widely used for spatial analysis of movement. Several approaches exist for 3D reconstruction from 2D video projections; most require at least two cameras and relatively complex algorithms. A few single-camera approaches also exist, but they are not widely adopted because their calibration methods are tedious and complicated. Here we propose a simple method for 3D reconstruction of movement from a single projection and three calibration markers, made possible by tracking the change in diameter of a moving spherical marker within the 2D projection. To test the model, we compared its kinematic results with those of the commonly used two-camera approach with Direct Linear Transformation (DLT). Our results show that the proposed approach is in line with the DLT method for 3D reconstruction and kinematic analysis. Its simplicity may make it suitable both for clinical use and for uncontrolled environments.
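The pinhole-camera relation underlying dynamic marker scaling: a sphere of known diameter D imaged with pixel diameter d lies at depth Z = f·D/d, so tracking d frame by frame recovers motion along the optical axis. The numbers below are hypothetical.

```python
# Depth from the apparent size of a spherical marker (pinhole model).
def marker_depth_m(f_px, D_m, d_px):
    """f_px: focal length in pixels; D_m: true marker diameter in metres;
    d_px: measured marker diameter in the image."""
    return f_px * D_m / d_px

# Hypothetical numbers: 1000 px focal length, 40 mm marker, 25 px image size.
print(marker_depth_m(1000.0, 0.040, 25.0))  # -> 1.6 m from the camera
```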
95. Kolowski JM, Forrester TD. Camera trap placement and the potential for bias due to trails and other features. PLoS One 2017;12:e0186679. PMID: 29045478. PMCID: PMC5646845. DOI: 10.1371/journal.pone.0186679.
Abstract
Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly affect these data, yet the resulting biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and on the capture rate and detection probability of several mammal species, in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or game trails, with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras than at their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera-nights of survey effort, respectively. We detected significant increases (11% to 33%) in detection probability for five species resulting from the presence of game trails. For six species, detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, for which detection probability was substantially higher (24.9-38.2%) at log cameras in all cases. Our results indicate that small-scale factors, including the presence of game trails and other features, can significantly affect species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures or controlled for in study design.
96. Johnson CA, Thapa S, George Kong YX, Robin AL. Performance of an iPad Application to Detect Moderate and Advanced Visual Field Loss in Nepal. Am J Ophthalmol 2017;182:147-154. PMID: 28844641. DOI: 10.1016/j.ajo.2017.08.007.
Abstract
PURPOSE To evaluate the accuracy and efficiency of Visual Fields Easy (VFE), a free iPad app, for performing suprathreshold perimetric screening. DESIGN Prospective, cross-sectional validation study. METHODS We performed screening visual fields using a calibrated iPad 2 with the VFE application on 206 subjects (411 eyes): 210 normal (NL), 183 glaucoma (GL), and 18 diabetic retinopathy (DR) at Tilganga Institute of Ophthalmology, Kathmandu, Nepal. We correlated the results with a Humphrey Field Analyzer using 24-2 SITA Standard tests on 373 of these eyes (198 NL, 160 GL, 15 DR). RESULTS The number of missed locations on the VFE correlated with mean deviation (MD, r = 0.79), pattern standard deviation (PSD, r = 0.60), and number of locations that were worse than the 95% confidence limits for total deviation (r = 0.51) and pattern deviation (r = 0.68) using SITA Standard. iPad suprathreshold perimetry was able to detect most visual field deficits with moderate (MD of -6 to -12 dB) and advanced (MD worse than -12 dB) loss, but had greater difficulty in detecting early (MD better than -6 dB) loss, primarily owing to an elevated false-positive response rate. The average time to perform the Visual Fields Easy test was 3 minutes, 18 seconds (standard deviation = 16.88 seconds). DISCUSSION The Visual Fields Easy test procedure is a portable, fast, effective procedure for detecting moderate and advanced visual field loss. Improvements are currently underway to monitor eye and head tracking during testing, reduce testing time, improve performance, and eliminate the need to touch the video screen surface.
97. Shetty R, Rao H, Khamar P, Sainani K, Vunnava K, Jayadev C, Kaweri L. Keratoconus Screening Indices and Their Diagnostic Ability to Distinguish Normal From Ectatic Corneas. Am J Ophthalmol 2017;181:140-148. PMID: 28687218. DOI: 10.1016/j.ajo.2017.06.031.
Abstract
PURPOSE To compare the diagnostic ability of 3 Scheimpflug devices in differentiating normal from ectatic corneas. DESIGN Comparison of diagnostic instrument accuracy. METHODS This study included 42 normal, 37 subclinical keratoconic, and 51 keratoconic eyes seen at a tertiary eye care institute. Keratoconus screening indices were evaluated using the Pentacam (Oculus, Wetzlar, Germany), Galilei (Ziemer, Biel, Switzerland), and Sirius (Costruzione Strumenti Oftalmici, Florence, Italy). Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. RESULTS The highest sensitivity (100%) for diagnosing keratoconus was seen for 6 parameters on the Pentacam and 1 on the Galilei; none of the Sirius indices reached 100% sensitivity. For subclinical keratoconus, the highest sensitivity (100%) was seen for 2 parameters on the Pentacam but for none on the Galilei or Sirius. All parameters were strong enough to differentiate keratoconus (AUC > 0.9). On comparing the best parameters of all 3 machines, the AUC of the Belin/Ambrosio enhanced ectasia total derivation (BAD-D) and the inferior-superior value (ISV) of the Pentacam were statistically similar to those of the keratoconus prediction index (KPI) and keratoconus probability (Kprob) of the Galilei (P = .27) and the 4.5 mm root mean square per unit area (RMS/A) back of the Sirius (P = .55). In differentiating subclinical keratoconus from normal corneas, BAD-D was similar to the surface regularity index (SRI) of the Galilei (P = .78) but significantly greater than the 8 mm RMS/A back of the Sirius (P = .002). CONCLUSION Keratoconus indices measured by all 3 machines can effectively differentiate keratoconus from normal corneas. However, new cutoff values might be needed to differentiate subclinical keratoconus from normal corneas.
98. Bombara CB, Dürr S, Machovsky-Capuska GE, Jones PW, Ward MP. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras. PLoS One 2017;12:e0181859. PMID: 28750073. PMCID: PMC5547700. DOI: 10.1371/journal.pone.0181859.
Abstract
Information on contacts between individuals within a population is crucial for informing disease control strategies via the parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, both to characterise dog-to-dog contacts and to estimate contact rates. We customized miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached them to nylon dog collars. Using two 3400 mAh NCR lithium-ion batteries, the cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact, including sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive, although multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts (equating to 8 to 147 contacts per dog per 24 hr) from the videos of the five dogs whose camera data could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) that could be identified based on colour patterns and markings. Most dog activity was observed in urban environments (houses and roads), but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses; foods consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese. For characterising contacts between FRDDs, this study identified several benefits of analysing videos over GPS fixes alone, including visualisation of the nature of the contact between two dogs and inclusion of a greater number of dogs in the study (which need not be wearing video or GPS collars). Limitations included visualisation of contacts only during daylight hours; the camera lens being obscured on occasion by the dog's mandible or by the dog resting on the camera; an insufficiently wide viewing angle (36°); the battery life and robustness of the deployments; the high cost of deployment; and the analysis of large volumes of often unsteady video footage. This study demonstrates that dog-borne video cameras are a feasible technology for estimating and characterising contacts between FRDDs. Modifying camera specifications and developing new analytical methods will improve the applicability of this technology for monitoring FRDD populations, providing insights into dog-to-dog contacts and therefore into how disease might spread within these populations.
99. Jacob J, Paques M, Krivosic V, Dupas B, Erginay A, Tadayoni R, Gaudric A. Comparing Parafoveal Cone Photoreceptor Mosaic Metrics in Younger and Older Age Groups Using an Adaptive Optics Retinal Camera. Ophthalmic Surg Lasers Imaging Retina 2017;48:45-50. PMID: 28060393. DOI: 10.3928/23258160-20161219-06.
Abstract
BACKGROUND AND OBJECTIVE To analyze cone mosaic metrics on adaptive optics (AO) images as a function of retinal eccentricity in two different age groups using a commercial flood-illumination AO device. PATIENTS AND METHODS Fifty-three eyes of 28 healthy subjects, divided into two age groups, were imaged using an AO flood-illumination camera (rtx1; Imagine Eyes, Orsay, France). A 16° × 4° horizontal field was obtained. Cone-packing metrics were determined in five neighboring 50 µm × 50 µm regions. Both retinal (cones/mm2 and µm) and visual (cones/deg2 and arcmin) units were computed. RESULTS Results for cone mosaic metrics at 2°, 2.5°, 3°, 4°, and 5° eccentricity were compatible with previous AO scanning laser ophthalmoscopy and histology data. No significant difference was observed between the two age groups. CONCLUSIONS The rtx1 camera enabled reproducible measurements of cone-packing metrics across the extrafoveal retina. These findings may contribute to the development of normative data and serve as a reference for future research.
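Reporting densities in both unit systems requires a retinal magnification factor; below is a sketch of the conversion, assuming a schematic-eye scale of about 0.291 mm/deg (an assumption here; the factor varies with axial length).

```python
# Convert cone density from visual units (cones/deg^2) to retinal units
# (cones/mm^2) via the squared mm-per-degree scale factor.
MM_PER_DEG = 0.291   # schematic-eye value, axial-length dependent

def deg2_to_mm2(density_deg2):
    return density_deg2 / MM_PER_DEG**2

print(deg2_to_mm2(1500))  # ~17,700 cones/mm^2, a plausible parafoveal value
```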
100. Hernandez-Matas C, Zabulis X, Argyros AA. Retinal image registration through simultaneous camera pose and eye shape estimation. Annu Int Conf IEEE Eng Med Biol Soc 2016:3247-3251. PMID: 28269000. DOI: 10.1109/embc.2016.7591421.
Abstract
In this paper, a retinal image registration method is proposed. The approach utilizes keypoint correspondences and assumes that the human eye has a spherical or ellipsoidal shape. The image registration problem then amounts to simultaneously solving a 3D camera pose estimation problem and a 3D eye shape estimation problem. The camera pose problem is solved by estimating the relative pose between the views from which the images were acquired; the eye shape problem parameterizes the shape and orientation of an ellipsoidal eye model. Experimental evaluation shows a 17.91% reduction in registration error and a 47.52% reduction in the error standard deviation over state-of-the-art methods.
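The relative-pose half of the problem can be sketched with standard epipolar geometry; the example below recovers pose from synthetic correspondences on a spherical surface and stands in for only part of the paper's pipeline (the simultaneous ellipsoid-shape estimation is not reproduced). The intrinsics and scene are hypothetical.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

# Synthetic scene: 3D points on a spherical patch ("retina"), two views.
P = rng.normal(size=(100, 3))
P = 12 * P / np.linalg.norm(P, axis=1, keepdims=True)  # sphere, radius 12
P[:, 2] += 40                                          # place in front of camera
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))   # small rotation
t_true = np.array([1.0, 0.0, 0.0])                     # small translation

def project(X, R, t):
    x = (K @ ((R @ X.T).T + t).T).T
    return x[:, :2] / x[:, 2:]

pts1 = project(P, np.eye(3), np.zeros(3))
pts2 = project(P, R_true, t_true)

# Recover relative pose from the keypoint correspondences.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # t recovered up to scale
```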