1. Research on the Effect of Vibrational Micro-Displacement of an Astronomical Camera on Detector Imaging. Sensors (Basel) 2024; 24:1025. PMID: 38339742; PMCID: PMC10857430; DOI: 10.3390/s24031025.
Abstract
Scientific-grade cameras are frequently employed in fields such as spectral imaging, aerospace, medical detection, and astronomy, and are characterized by high precision, high quality, fast speed, and high sensitivity. In astronomy especially, obtaining information about faint light often requires long exposures with high-resolution cameras, which means that external disturbances can destabilize the camera and increase errors in the detection results. This paper investigates the effect of displacement introduced by various vibration sources on the imaging of an astronomical camera during long exposure. The sources of vibration are divided into external and internal vibration. External vibration mainly includes environmental vibration and resonance effects, while internal vibration mainly refers to vibration caused by forces generated by the camera's internal refrigeration module during operation. The cooling module operates in either a water-cooled or an air-cooled mode. Displacement and vibration experiments conducted on the camera show that the air-cooled mode produces greater displacement than the water-cooled mode, blurring the imaging results and lowering the accuracy of astronomical detection. This paper compares the displacement produced by the two methods, fan cooling and water-circulation cooling, and proposes improvements to minimize displacement variation in the camera and improve imaging quality. This study provides a reference for the design of astronomical detection instruments and for identifying the vibration sources of cameras, which helps promote the further development of astronomical detection.
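The link between vibrational displacement and long-exposure blur described above can be sketched with two first-order relations: a relative shift between optics and detector smears the image by that shift, and angular jitter smears it by focal length times angle. The function names and all numeric values below are illustrative assumptions, not figures from the paper.

```python
import math

# Two first-order ways camera vibration smears a long-exposure star image.
# All numeric values are illustrative assumptions.

def smear_from_shift_px(relative_shift_um: float, pixel_pitch_um: float) -> float:
    """A lateral displacement between the optics and the detector (e.g.
    fan-induced vibration of the cooling module) smears a point image by
    that displacement, expressed here in pixels."""
    return relative_shift_um / pixel_pitch_um

def smear_from_tilt_px(tilt_arcsec: float, focal_length_mm: float,
                       pixel_pitch_um: float) -> float:
    """Angular jitter theta of the line of sight shifts a star image by
    f * theta on the focal plane."""
    theta_rad = tilt_arcsec * math.pi / (180.0 * 3600.0)
    shift_um = focal_length_mm * 1000.0 * theta_rad
    return shift_um / pixel_pitch_um

# A 9 um optics-detector displacement on 4.5 um pixels smears 2 pixels;
# 1 arcsec of jitter at 1 m focal length on 4.85 um pixels is ~1 pixel.
shift_smear = smear_from_shift_px(9.0, 4.5)
tilt_smear = smear_from_tilt_px(1.0, 1000.0, 4.85)
```

Either mechanism at the few-micrometre level is enough to spread a point source over multiple pixels during a long exposure, which is why the fan-induced displacement matters.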
2. Eyes on privacy: acceptance of video-based AAL impacted by activities being filmed. Front Public Health 2023; 11:1186944. PMID: 37469701; PMCID: PMC10352951; DOI: 10.3389/fpubh.2023.1186944.
Abstract
Introduction The use of video-based ambient assisted living (AAL) technologies represents an innovative approach to supporting older adults in living as independently and autonomously as possible in their homes. These visual devices have the potential to increase security, perceived safety, and relief for families and caregivers by detecting, among other things, emergencies or serious health situations. Despite these potentials and advantages, using video-based technologies to monitor different activities of everyday life evokes concerns about privacy intrusion and data security. For a sustainable design and adoption of such technical innovations, a detailed analysis of future users' acceptance, including perceived benefits and barriers, is required, and the possible effects and privacy needs of the different activities being filmed should be taken into account. Methods The present study therefore investigated the acceptance and benefit-barrier perception of using video-based AAL technologies for different activities of daily living, based on a scenario-based online survey (N = 146). Results In a first step, the results identified distinct evaluation patterns for 25 activities of daily living, with very high (e.g., changing clothes, showering) and very low privacy needs (e.g., gardening, eating, and drinking). In a second step, three exemplary activity types were compared regarding acceptance, perceived benefits, and barriers. The acceptance and perceived benefits of using video-based AAL technologies proved to be higher for household and social activities than for intimate activities. The strongest barrier perception was found for intimate activities and mainly concerned privacy. Discussion The results can be used to derive design and information recommendations for the conception, development, and communication of video-based AAL technologies in order to meet the requirements and needs of future users.
3. Research Scenarios of Autonomous Vehicles, the Sensors and Measurement Systems Used in Experiments. Sensors (Basel) 2022; 22:6586. PMID: 36081043; PMCID: PMC9460663; DOI: 10.3390/s22176586.
Abstract
Automated and autonomous vehicles are in an intensive development phase, one that requires extensive modelling and experimental research. Experimental research on these vehicles is still in its initial state: there is a lack of findings and standardized recommendations for organizing and creating research scenarios. Creating such scenarios is also difficult, mainly because of the large number of systems that must be checked simultaneously and the very complicated structure of the vehicles. A review of current publications allowed the research scenarios for vehicles and their components, as well as the measurement systems used, to be systematized. These include perception systems, automated responses to threats, and critical situations in the area of road safety. The scenarios analyzed ensure that the planned research tasks can be carried out, including the investigation of systems that enable autonomous driving. The studies use passenger cars equipped with highly sophisticated sensor systems and localization devices. Perception systems are necessary equipment in these studies: they provide recognition of the environment, mainly through vision sensors (cameras) and lidars. The research tasks include autonomous driving along a detected road lane on a curvilinear track, with the effectiveness of keeping the vehicle in that lane being assessed. The locations used are specialized research tracks on which stationary or moving obstacles are often placed.
4. Single-Shot Intrinsic Calibration for Autonomous Driving Applications. Sensors (Basel) 2022; 22:2067. PMID: 35271212; PMCID: PMC8915015; DOI: 10.3390/s22052067.
Abstract
In this paper, we present a first-of-its-kind method for determining clear and repeatable guidelines for single-shot camera intrinsic calibration using multiple checkerboards. With the help of a simulator, we found the position and rotation intervals that allow optimal corner-detector performance. With these intervals defined, we generated thousands of multiple-checkerboard poses and evaluated them against ground-truth values to obtain configurations that lead to accurate camera intrinsic parameters. We used these results to define guidelines for creating multiple-checkerboard setups. We tested and verified the robustness of the guidelines in the simulator, and additionally in the real world with cameras of different focal lengths and distortion profiles, which helps generalize our findings. Finally, we used a 3D LiDAR (Light Detection and Ranging) sensor to project points into the image and confirm the quality of the intrinsic parameters. We found it possible to obtain accurate intrinsic parameters for 3D applications with at least seven checkerboard setups in a single image that follow our positioning guidelines.
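The role the intrinsic parameters play in the LiDAR-based check can be illustrated with a minimal distortion-free pinhole model: calibrated intrinsics map a 3D point in the camera frame to pixel coordinates. The focal lengths and principal point below are made-up values, not results from the paper.

```python
# Minimal pinhole projection: what checkerboard-calibrated intrinsics
# (fx, fy, cx, cy) are used for when projecting a 3D point, e.g. a LiDAR
# return expressed in the camera frame, into pixel coordinates.
# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; lens distortion is ignored here.

def project_point(X: float, Y: float, Z: float,
                  fx: float, fy: float, cx: float, cy: float):
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point 1 m to the left of the optical axis and 10 m ahead of the camera:
u, v = project_point(-1.0, 0.0, 10.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

Inaccurate intrinsics shift every projected LiDAR point in the image, which is why the projection serves as a quality check on the calibration.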
5. Secured Perimeter with Electromagnetic Detection and Tracking with Drone Embedded and Static Cameras. Sensors (Basel) 2021; 21:7379. PMID: 34770685; PMCID: PMC8587886; DOI: 10.3390/s21217379.
Abstract
Perimeter detection systems detect intruders penetrating protected areas, but modern solutions require a combination of smart detectors, information networks, and controlling software to reduce false alarms and extend the detection range. The current solutions for securing a perimeter (infrared and motion sensors, fiber optics, cameras, radar, among others) have several problems, such as sensitivity to weather conditions or a high false-alarm rate that forces the need for human supervision. The system presented in this paper overcomes these problems by combining a perimeter security system based on CEMF (control of electromagnetic fields) sensing technology with a set of video cameras that remain powered off except when an event has been detected, and an autonomous drone that is informed of where the event was initially detected and then uses computer vision to follow the intruder for as long as they remain within the perimeter. This paper gives a detailed view of how all three components cooperate to protect a perimeter effectively, without having to worry about false alarms, blinding due to weather conditions, clearance areas, or privacy issues. The system also provides extra information on where the intruder is or has been at all times, regardless of whether they have mixed with other people during the attack.
6. The Application of Cameras in Precision Pig Farming: An Overview for Swine-Keeping Professionals. Animals (Basel) 2021; 11:2343. PMID: 34438800; PMCID: PMC8388688; DOI: 10.3390/ani11082343.
Abstract
Simple Summary
The preeminent purpose of precision livestock farming (PLF) is to provide affordable and straightforward solutions to severe problems with certainty. Some data collection techniques in PLF, such as RFID, are accurate but not affordable for small- and medium-sized farms. Camera sensors, on the other hand, are cheap, commonly available, and easy to use for collecting information compared with other sensor systems in precision pig farming. Cameras have ample potential to monitor pigs with high precision at an affordable cost. However, the lack of targeted information about the application of cameras in the pig industry is a shortcoming for swine farmers and researchers. This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors, and presents automated approaches for monitoring and investigating pigs' feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors. In addition, the review summarizes the related literature and points out limitations to open up new dimensions for future researchers to explore.
Abstract
Pork is the meat with the second-largest overall consumption, and chicken, pork, and beef together account for 92% of global meat production. Therefore, it is necessary to adopt more progressive methodologies, such as precision livestock farming (PLF), rather than conventional methods to improve production. In recent years, image-based studies have become an efficient solution in various fields such as navigation for unmanned vehicles, human-machine systems, agricultural surveying, and livestock. So far, several studies have been conducted to identify, track, and classify the behaviors of pigs and achieve early detection of disease using 2D/3D cameras. This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors, and presents automated approaches for the monitoring and investigation of pigs' feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors.
7. Noise and landscape features influence habitat use of mammalian herbivores in a natural gas field. J Anim Ecol 2020; 90:875-885. PMID: 33368272; DOI: 10.1111/1365-2656.13416.
Abstract
Anthropogenic noise is a complex disturbance known to elicit a variety of responses in wild animals. Most studies examining the effects of noise on wildlife focus on vocal species, although theory suggests that the acoustic environment influences non-vocal species as well. Common mammalian prey species, like mule deer and hares and rabbits (members of the family Leporidae), rely on acoustic cues for information regarding predation, but the impacts of noise on their behaviour have received little attention. We paired acoustic recorders with camera traps to explore how average daily levels of anthropogenic noise from natural gas activity affected occupancy and detection of mammalian herbivores in an energy field in the production phase of development. We consider the effects of noise in the context of several physical landscape variables associated with natural gas infrastructure that are known to influence habitat use patterns in mule deer. Our results suggest that mule deer detection probability was influenced by the interaction between physical landscape features and anthropogenic noise, with noise strongly reducing habitat use. In contrast, leporid habitat use was not related to noise but was influenced by landscape features. Notably, mule deer showed a stronger predicted negative response to roads with high noise exposure. This study highlights the complex interactions between anthropogenic disturbance and wildlife distribution and presents important evidence that the effects of anthropogenic noise should be considered in research focused on non-vocal specialist species and in management plans for mule deer and other large ungulates.
8. An Acquisition Method for Visible and Near Infrared Images from Single CMYG Color Filter Array-Based Sensor. Sensors (Basel) 2020; 20:5578. PMID: 33003402; PMCID: PMC7582330; DOI: 10.3390/s20195578.
Abstract
Near-infrared (NIR) images are very useful in many image processing applications, including banknote recognition, vein detection, and surveillance, to name a few. To acquire the NIR image together with visible-range signals, an imaging device should be able to capture NIR and visible-range images simultaneously. An implementation with separate sensors for NIR and visible light has practical shortcomings in size and hardware cost. To overcome this, a single-sensor acquisition method is investigated in this paper. The proposed imaging system is equipped with a conventional color filter array of cyan, magenta, yellow, and green, and achieves signal separation by applying a separation matrix derived from a mathematical model of the signal acquisition structure. The elements of the separation matrix are calculated through color space conversion and experimental data. Subsequently, an additional denoising step is applied to enhance the quality of the separated images. Experimental results show that the proposed method successfully separates the acquired mixture of visible and near-infrared signals into individual red, green, and blue (RGB) and NIR images. The separation performance is compared with that of related work in terms of average peak signal-to-noise ratio (PSNR) and color distance. The proposed method attains average PSNR values of 37.04 dB and 33.29 dB for the separated RGB and NIR images, respectively, which are 6.72 dB and 2.55 dB higher than those of the method used for comparison.
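The separation-matrix idea can be sketched in miniature: if each channel records a known linear mixture of visible and NIR signal, multiplying the measurements by the inverse mixing matrix recovers the components. The 2x2 mixing coefficients below are made-up illustrations, not the matrix derived in the paper, and the PSNR helper just shows the scoring metric the abstract cites.

```python
import math

# Toy 2x2 version of the separation-matrix idea: each channel records a
# known mixture of visible and NIR signal, so applying the inverse mixing
# matrix recovers the two components.

def separate(meas_a: float, meas_b: float, mix):
    """Invert [meas_a, meas_b]^T = mix @ [visible, nir]^T for a 2x2 mix."""
    (m00, m01), (m10, m11) = mix
    det = m00 * m11 - m01 * m10
    visible = (m11 * meas_a - m01 * meas_b) / det
    nir = (-m10 * meas_a + m00 * meas_b) / det
    return visible, nir

def psnr_db(mse: float, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio, the measure used to score separation."""
    return 10.0 * math.log10(peak * peak / mse)

MIX = ((1.0, 0.8),   # channel A: visible plus 0.8x NIR leakage (made up)
       (0.2, 1.0))   # channel B: 0.2x visible leakage plus NIR (made up)

# Simulate measurements for visible = 0.5, nir = 0.25, then recover them:
meas_a = 1.0 * 0.5 + 0.8 * 0.25
meas_b = 0.2 * 0.5 + 1.0 * 0.25
vis, nir = separate(meas_a, meas_b, MIX)
```

The real system applies the same inversion per pixel across four CMYG channels rather than two, but the algebra is the same.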
9. Skymask Matching Aided Positioning Using Sky-Pointing Fisheye Camera and 3D City Models in Urban Canyons. Sensors (Basel) 2020; 20:4728. PMID: 32825673; PMCID: PMC7506637; DOI: 10.3390/s20174728.
Abstract
3D-mapping-aided (3DMA) global navigation satellite system (GNSS) positioning, which improves positioning performance in dense urban areas, has been under development in recent years, but it still faces many challenges. This paper details a new algorithm that explores the potential of using building boundaries for positioning and heading estimation. Rather than applying complex simulations to analyze and correct signal reflections from buildings, the approach uses a convolutional neural network to differentiate between sky and buildings in a sky-pointing fisheye image. A new skymask-matching algorithm is then proposed to match the segmented fisheye images with skymasks generated from a 3D building model. Each matched skymask holds a latitude-longitude coordinate and a heading angle that determine the precise location of the fisheye image. The results are compared with smartphone GNSS and advanced 3DMA GNSS positioning methods. The proposed method provides degree-level heading accuracy and positioning accuracy similar to that of other advanced 3DMA GNSS positioning methods in a rich urban environment.
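The matching step can be sketched as a search over pose hypotheses: each candidate pose has a skymask rendered from the 3D building model, and the pose whose mask best agrees with the camera's sky/building segmentation wins. The bit lists and (latitude, longitude, heading) values below are made up for illustration; the paper's masks and scoring details differ.

```python
# Sketch of skymask matching: score candidate skymasks rendered from a 3D
# city model at hypothesised (latitude, longitude, heading) states against
# the sky/building segmentation of the fisheye image, and keep the best.
# Masks are tiny made-up bit lists (1 = sky, 0 = building).

def match_score(segmented, candidate) -> float:
    """Fraction of cells where segmentation and rendered skymask agree."""
    agree = sum(1 for s, c in zip(segmented, candidate) if s == c)
    return agree / len(segmented)

def best_pose(segmented, candidates):
    """candidates maps (lat, lon, heading_deg) -> rendered skymask."""
    return max(candidates, key=lambda pose: match_score(segmented, candidates[pose]))

observed = [1, 1, 0, 0, 1, 0, 1, 1]            # from the CNN segmentation
candidates = {                                  # hypothetical poses
    (22.30100, 114.17200, 0.0):   [1, 1, 0, 0, 1, 0, 1, 1],
    (22.30100, 114.17210, 90.0):  [1, 0, 0, 1, 1, 0, 1, 0],
    (22.30110, 114.17200, 180.0): [0, 0, 1, 1, 0, 1, 0, 0],
}
pose = best_pose(observed, candidates)
```

Because every candidate mask carries its own heading angle, the same search that localises the camera also yields the heading estimate.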
10. Comparison of gait speeds from wearable camera and accelerometer in structured and semi-structured environments. Healthc Technol Lett 2020; 7:25-28. PMID: 32190337; PMCID: PMC7067055; DOI: 10.1049/htl.2019.0015.
Abstract
A feasibility study was conducted to investigate the use of a wearable gait analysis system for classifying gait speed using a low-cost wearable camera in a semi-structured indoor setting. Data were collected from 19 participants who wore the system during indoor walk sequences at varying self-determined speeds (slow, medium, and fast). Gait parameters from this system were compared with parameters obtained from a vest comprising a single triaxial accelerometer and from a marker-based optical motion-capture system. Computer-vision techniques and signal processing methods were used to generate frequency-domain gait parameters from each gait-recording device, and those parameters were analysed to determine the effectiveness of the different measurement systems in discriminating gait speed. Results indicate that the authors' low-cost, portable, vision-based system can be effectively used for in-home gait analysis.
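The shared frequency-domain step can be sketched as follows: take a motion signal from any of the devices, transform it, and report the dominant frequency (walking cadence), which scales with gait speed. The plain DFT and the synthetic 2 Hz signal below are illustrative assumptions; the study's actual feature extraction is device-specific.

```python
import cmath
import math

# Toy frequency-domain gait feature: estimate the dominant frequency
# (cadence) of a motion signal with a plain DFT over the positive bins.

def dominant_frequency_hz(signal, sample_rate_hz: float) -> float:
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]              # drop the DC component
    mags = []
    for k in range(1, n // 2):                        # positive bins only
        coeff = sum(centred[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        mags.append((abs(coeff), k))
    _, k_best = max(mags)                             # strongest bin
    return k_best * sample_rate_hz / n

# A pure 2 Hz oscillation sampled at 32 Hz for 2 s stands in for a
# stepping signal from the camera or accelerometer:
signal = [math.sin(2 * math.pi * 2.0 * t / 32.0) for t in range(64)]
cadence_hz = dominant_frequency_hz(signal, 32.0)
```

Comparing cadence estimates like this across devices is one simple way to check whether a low-cost sensor discriminates slow, medium, and fast walks as well as the reference system.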
11. Learning colon centreline from optical colonoscopy, a new way to generate a map of the internal colon surface. Healthc Technol Lett 2020; 6:187-190. PMID: 32038855; PMCID: PMC6952246; DOI: 10.1049/htl.2019.0073.
Abstract
Optical colonoscopy is the gold-standard screening method for detecting and removing cancerous polyps. During this procedure, some polyps may go undetected because their position keeps them out of the camera's coverage or because they are missed by the surgeon. In this Letter, the authors introduce a novel convolutional neural network (ConvNet) algorithm to map the internal colon surface to a 2D map (visibility map), which can be used to increase clinicians' awareness of areas they might miss. This was achieved by leveraging a colonoscopy simulator to generate a dataset of colonoscopy video frames and their corresponding colon centreline (CCL) points in 3D camera coordinates. A pair of video frames was used as input to a ConvNet, whose output was a point on the CCL and its direction vector. By knowing the CCL for each frame and roughly modelling the colon as a cylinder, frames could be unrolled to build a visibility map. The authors validated their results using both simulated and real colonoscopy frames, showing that learning the CCL from consecutive simulated frames generalises to real colonoscopy video frames for generating a visibility map.
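The "colon as a cylinder" unrolling behind the visibility map can be sketched in a few lines: a wall point expressed in a local frame around the centreline at arc length s maps to flattened coordinates (u, v) = (R * theta, s). The frame construction and radius here are assumptions for illustration, not the paper's exact geometry.

```python
import math

# Rough cylinder unrolling: a point on the colon wall, expressed in the
# local centreline frame at arc length s, maps to the flattened 2D
# visibility map as (u, v) = (radius * theta, s).

def unroll_point(x_local: float, y_local: float, s: float, radius: float):
    theta = math.atan2(y_local, x_local)   # angle around the centreline
    return radius * theta, s               # position on the flattened map

# A point directly "above" the centreline, 12.5 units along the colon:
u, v = unroll_point(0.0, 1.0, 12.5, radius=1.0)
```

Applying this to every visible wall point of every frame, with the CCL supplying the local frames, is what accumulates the unrolled frames into a coverage map.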
12. Effect of Catadioptric Component Postposition on Lens Focal Length and Imaging Surface in a Mirror Binocular System. Sensors (Basel) 2019; 19:5309. PMID: 31810300; PMCID: PMC6929071; DOI: 10.3390/s19235309.
Abstract
The binocular vision system is widely used in three-dimensional measurement, drone navigation, and many other fields. However, due to the high cost, large volume, and inconvenient operation of a two-camera system, it is difficult to meet the weight and load requirements of a UAV system. Therefore, a single-camera mirror binocular system was studied. Existing mirror binocular systems place the catadioptric components in front of the lens, which keeps the measurement system bulky. In this paper, a catadioptric postposition system is designed that places the prism behind the lens to achieve mirror binocular imaging. The influence of the postposed prism on the focal length and imaging surface of the optical system is analyzed. The feasibility of post-mirror binocular imaging is verified by experiments, and it is shown to be reasonable to compensate for the focal length change by shifting the back focal plane. This research lays the foundation for subsequent research on 3D reconstruction with the novel mirror binocular system.
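A first-order reason the postposed prism shifts the imaging surface is the standard plane-parallel-plate result: a glass path of thickness t and index n in a converging beam pushes the focal plane back by t*(n - 1)/n, roughly the amount the back focal plane must move to compensate. The paper's prism geometry is more involved; the thickness and index below are illustrative, not the paper's parameters.

```python
# Longitudinal focus shift caused by a plane-parallel glass path of
# thickness t (mm) and refractive index n placed in a converging beam:
# delta = t * (n - 1) / n (standard first-order optics result).

def focus_shift_mm(thickness_mm: float, refractive_index: float) -> float:
    return thickness_mm * (refractive_index - 1.0) / refractive_index

# A 30 mm path through BK7-like glass (n ~ 1.52) pushes focus back ~10 mm:
shift = focus_shift_mm(30.0, 1.52)
```

This is why repositioning the back focal plane, rather than refocusing the lens, is a reasonable compensation strategy for the glass path the prism adds.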
13. Simultaneous shape and camera-projector parameter estimation for 3D endoscopic system using CNN-based grid-oneshot scan. Healthc Technol Lett 2019; 6:249-254. PMID: 32038866; PMCID: PMC6943237; DOI: 10.1049/htl.2019.0070.
Abstract
For effective in situ endoscopic diagnosis and treatment, measurement of polyp sizes is important. For this purpose, 3D endoscopic systems have been researched. Among such systems, an active stereo technique, which projects a special pattern wherein each feature is coded, is a promising approach because of its simplicity and high precision. However, previous work on this approach has two problems. First, the quality of 3D reconstruction depended on the stability of feature extraction from the images captured by the endoscope camera. Second, due to the limited pattern projection area, the reconstructed region was relatively small. In this Letter, the authors propose a learning-based technique using convolutional neural networks to solve the first problem, and an extended bundle adjustment technique, which integrates multiple shapes into a consistent single shape, to address the second. The effectiveness of the proposed techniques compared to previous techniques was evaluated experimentally.
14. Marker-less real-time intra-operative camera and hand-eye calibration procedure for surgical augmented reality. Healthc Technol Lett 2019; 6:255-260. PMID: 32038867; PMCID: PMC6952262; DOI: 10.1049/htl.2019.0094.
Abstract
Accurate medical augmented reality (AR) rendering requires two calibrations: a camera intrinsic matrix estimation and a hand-eye transformation. We present a unified, practical, marker-less, real-time system to estimate both transformations during surgery. For camera calibration, we perform calibrations at multiple distances from the endoscope pre-operatively to parametrize the camera intrinsic matrix as a function of distance. We then retrieve the camera parameters intra-operatively by estimating the distance of the surgical site from the endoscope in less than 1 s. Unlike prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require the identification of a marker, we make use of a rendered tool-tip in 3D. As the surgeon moves the instrument and observes the offset between the actual and the rendered tool-tip, they can select points of high visual error and manually bring the instrument tip to match the virtual rendered tool-tip. To evaluate the hand-eye calibration, 5 subjects carried out the procedure on a da Vinci robot. An average target registration error of approximately 7 mm was achieved with just three data points.
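The pre-operative parametrisation step can be sketched as: calibrate the focal length at several known distances, fit a simple model fx(d), then look the intrinsics up intra-operatively from the estimated distance. The linear least-squares model and all sample values below are assumptions made for illustration; the paper does not state the model form.

```python
# Sketch of parametrizing an intrinsic parameter (here fx) as a function
# of distance d from pre-operative calibrations, then retrieving it
# intra-operatively from an estimated distance.

def fit_line(distances, focals):
    """Least-squares line fx = a * d + b through (d, fx) samples."""
    n = len(distances)
    mean_d = sum(distances) / n
    mean_f = sum(focals) / n
    a = (sum((d - mean_d) * (f - mean_f) for d, f in zip(distances, focals))
         / sum((d - mean_d) ** 2 for d in distances))
    return a, mean_f - a * mean_d

# Pre-operative calibrations (distance in mm -> fx in px), made-up values:
a, b = fit_line([40.0, 60.0, 80.0, 100.0], [1100.0, 1060.0, 1020.0, 980.0])

# Intra-operatively, an estimated 70 mm working distance yields:
fx_intraop = a * 70.0 + b
```

The same lookup can be repeated for each intrinsic parameter, which is what makes the sub-second intra-operative retrieval possible once the distance is known.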
15. An Indoor Positioning Approach Based on Fusion of Cameras and Infrared Sensors. Sensors (Basel) 2019; 19:2519. PMID: 31159431; PMCID: PMC6603635; DOI: 10.3390/s19112519.
Abstract
A method for fusing infrared sensors and cameras, applied to indoor positioning in intelligent spaces, is proposed in this work. The fused position is obtained with a maximum-likelihood estimator from independent infrared and camera observations. Specific models are proposed for variance propagation from the infrared and camera observations (phase shifts and images, respectively) to their respective position estimates and to the final fused estimate. Model simulations are compared with real measurements in a setup designed to validate the system. The difference between theoretical prediction and real measurements is between 0.4 cm (fusion) and 2.5 cm (camera), within a 95% confidence margin. The positioning precision is at the cm level (the sub-cm level can be achieved at most tested positions) in a 4×3 m locating cell with 5 infrared detectors on the ceiling and a single camera, at distances from the target of up to 5 m and 7 m, respectively. Given the low-cost system design and the results observed, the system is expected to be feasible and scalable to large real spaces.
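For independent Gaussian observations, the maximum-likelihood fused position is the inverse-variance weighted mean, and the fused variance is smaller than either input variance, which is why combining the infrared and camera estimates can beat each sensor alone. The numbers below are illustrative, not the paper's measurements.

```python
# Maximum-likelihood fusion of two independent 1D position estimates
# (infrared and camera), each modelled as Gaussian with known variance.

def fuse(x_ir: float, var_ir: float, x_cam: float, var_cam: float):
    """Inverse-variance weighted mean and the resulting fused variance."""
    w_ir, w_cam = 1.0 / var_ir, 1.0 / var_cam
    x = (w_ir * x_ir + w_cam * x_cam) / (w_ir + w_cam)
    var = 1.0 / (w_ir + w_cam)
    return x, var

# Infrared: 100.0 cm with variance 1.0 cm^2; camera: 103.0 cm with 4.0 cm^2.
x_fused, var_fused = fuse(100.0, 1.0, 103.0, 4.0)
```

The fused estimate lands nearer the lower-variance infrared observation, and its variance (0.8 cm^2 here) is below both inputs; the paper's variance-propagation models supply the per-observation variances that feed this combination.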
16. Usefulness of Wearable Cameras as a Tool to Enhance Chronic Disease Self-Management: Scoping Review. JMIR Mhealth Uhealth 2019; 7:e10371. PMID: 30609985; PMCID: PMC6682294; DOI: 10.2196/10371.
Abstract
Background Self-management is a critical component of chronic disease management and can include a host of activities, such as adhering to prescribed medications, undertaking daily care activities, managing dietary intake and body weight, and proactively contacting medical practitioners. The rise of technologies (mobile phones, wearable cameras) for health care use offers potential support for people to better manage their disease in collaboration with their treating health professionals. Wearable cameras can be used to provide rich contextual data and insight into everyday activities and aid in recall. This information can then be used to prompt memory recall or guide the development of interventions to support self-management. Application of wearable cameras to better understand and augment self-management by people with chronic disease has yet to be investigated. Objective The objective of our review was to ascertain the scope of the literature on the use of wearable cameras for self-management by people with chronic disease and to determine the potential of wearable cameras to assist people to better manage their disease. Methods We conducted a scoping review, which involved a comprehensive electronic literature search of 9 databases in July 2017. The search strategy focused on studies that used wearable cameras to capture one or more modifiable lifestyle risk factors associated with chronic disease or to capture typical self-management behaviors, or studies that involved a chronic disease population. We then categorized and described included studies according to their characteristics (eg, behaviors measured, study design or type, characteristics of the sample). Results We identified 31 studies: 25 involved primary or secondary data analysis, and 6 were review, discussion, or descriptive articles. Wearable cameras were predominantly used to capture dietary intake, physical activity, activities of daily living, and sedentary behavior. Populations studied were predominantly healthy volunteers, school students, and sportspeople, with only 1 study examining an intervention using wearable cameras for people with an acquired brain injury. Most studies highlighted technical or ethical issues associated with using wearable cameras, many of which were overcome. Conclusions This scoping review highlighted the potential of wearable cameras to capture health-related behaviors and risk factors of chronic disease, such as diet, exercise, and sedentary behaviors. Data collected from wearable cameras can be used as an adjunct to traditional data collection methods, such as self-reported diaries, in addition to providing valuable contextual information. While most studies to date have focused on healthy populations, wearable cameras offer promise for better understanding self-management of chronic disease and its context.
17. Augmenting Microsoft's HoloLens with Vuforia tracking for neuronavigation. Healthc Technol Lett 2018; 5:221-225. PMID: 30464854; PMCID: PMC6222243; DOI: 10.1049/htl.2018.5079.
Abstract
Major hurdles for Microsoft's HoloLens as a tool in medicine have been accessing tracking data, as well as a relatively high localisation error of the displayed information, cumulatively resulting in its limited use and minimal quantification. The following work investigates the augmentation of HoloLens with the proprietary image-processing SDK Vuforia, allowing integration of data from its front-facing RGB camera to provide more spatially stable holograms for neuronavigational use. Continuous camera tracking was able to maintain hologram registration with a mean perceived drift of 1.41 mm, as well as a mean sub-2 mm surface point localisation accuracy of 53%, all while allowing the researcher to walk about a test area. This represents a 68% improvement for the latter and a 34% improvement for the former compared with a typical HoloLens deployment used as a control. Both represent a significant improvement on hologram stability given the current state of the art, and to the best of the authors' knowledge are the first example of quantified measurements when augmenting hologram stability using data from the RGB sensor.
18
Methods for the Real-World Evaluation of Fall Detection Technology: A Scoping Review. SENSORS 2018; 18:s18072060. [PMID: 29954155] [PMCID: PMC6068511] [DOI: 10.3390/s18072060] [Received: 05/31/2018] [Revised: 06/18/2018] [Accepted: 06/25/2018] [Indexed: 01/08/2023]
Abstract
Falls in older adults present a major and growing healthcare challenge, and reliable detection of falls is crucial to minimise their consequences. The majority of development and testing has used laboratory simulations. As simulations do not cover the wide range of real-world scenarios, performance is poor when retested using real-world data. There has therefore been a move from the use of simulated falls towards the use of real-world data. This review aims to assess the current methods for real-world evaluation of fall detection systems, identify their limitations and propose improved, robust methods of evaluation. Twenty-two articles met the inclusion criteria and were assessed with regard to the composition of the datasets, data processing methods and the measures of performance. Real-world tests of fall detection technology are inherently challenging, and it is clear the field is in its infancy. Most studies used small datasets, and studies differed on how to quantify the ability to avoid false alarms and how to identify non-falls, a concept which is virtually impossible to define and standardise. To increase robustness and make results comparable, larger standardised datasets are needed containing data from a range of participant groups. Measures that depend on the definition and identification of non-falls should be avoided. Sensitivity, precision and F-measure emerged as the most suitable robust measures for evaluating the real-world performance of fall detection systems.
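The three recommended measures can be computed directly from detection counts without ever defining a "non-fall". A minimal sketch (the counts are invented for illustration, not taken from any of the reviewed studies):

```python
def evaluate(tp: int, fp: int, fn: int) -> dict:
    """Sensitivity (recall), precision and F-measure from raw counts.

    True negatives are deliberately absent: the review argues that
    non-falls cannot be defined or standardised, so measures that
    depend on them (e.g. specificity) should be avoided.
    """
    sensitivity = tp / (tp + fn)   # share of real falls that were detected
    precision = tp / (tp + fp)     # share of alarms that were real falls
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity,
            "precision": precision,
            "f_measure": f_measure}

# Hypothetical real-world trial: 18 true falls detected, 2 missed, 6 false alarms.
metrics = evaluate(tp=18, fp=6, fn=2)
```

Because only true falls and alarms are counted, the same code works regardless of how long the monitoring period was or how "non-fall events" might have been segmented.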
19
FieldSAFE: Dataset for Obstacle Detection in Agriculture. SENSORS 2017; 17:s17112579. [PMID: 29120383] [PMCID: PMC5713196] [DOI: 10.3390/s17112579] [Received: 09/28/2017] [Revised: 11/06/2017] [Accepted: 11/07/2017] [Indexed: 12/01/2022]
Abstract
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
20
CuFusion: Accurate Real-Time Camera Tracking and Volumetric Scene Reconstruction with a Cuboid. SENSORS 2017; 17:s17102260. [PMID: 28974030] [PMCID: PMC5677406] [DOI: 10.3390/s17102260] [Received: 08/06/2017] [Revised: 09/19/2017] [Accepted: 09/27/2017] [Indexed: 12/02/2022]
Abstract
Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we maintain drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset, CU3D, that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open source (https://github.com/zhangxaochen/CuFusion) for other researchers to reproduce and verify our results.
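For context, the "simple moving average" the authors improve upon is the standard weighted running-average TSDF update used in KinectFusion-style pipelines. A minimal sketch of that baseline (variable names and the weight cap are illustrative, not taken from the paper):

```python
def fuse_tsdf(d_old: float, w_old: float, d_new: float,
              w_new: float = 1.0, w_max: float = 100.0) -> tuple:
    """Baseline weighted moving-average update of one TSDF voxel.

    d_* are truncated signed distances, w_* accumulated weights.
    CuFusion replaces this simple average with a prediction-corrected
    strategy to preserve sharp edges; this is only the baseline.
    """
    d = (d_old * w_old + d_new * w_new) / (w_old + w_new)  # weighted average
    w = min(w_old + w_new, w_max)                          # cap the weight
    return d, w

# Fusing a new depth sample into a voxel already seen 4 times:
d, w = fuse_tsdf(d_old=0.02, w_old=4.0, d_new=0.01)
```

Averaging like this smooths noise well on flat surfaces but blurs high-curvature geometry, which is exactly the weakness the prediction-corrected strategy targets.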
21
The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions. EARTH AND SPACE SCIENCE (HOBOKEN, N.J.) 2017; 4:506-539. [PMID: 29098171] [PMCID: PMC5652233] [DOI: 10.1002/2016ea000252] [Received: 01/01/2017] [Revised: 04/18/2017] [Accepted: 06/08/2017] [Indexed: 05/13/2023]
Abstract
The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 (M-34) has an f/8, 34 mm focal length lens, and the Mastcam-100 (M-100) an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from ~1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the ~2 m tall Remote Sensing Mast, have a 360° azimuth and ~180° elevation field of regard. The Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at ~66 cm above the surface. Its fixed-focus lens is in focus from ~2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of ~70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames, or 720p HD at 6 fps. Images can be compressed using lossy Joint Photographic Experts Group (JPEG) or predictive lossless compression.
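The quoted IFOV and FOV values are mutually consistent with the small-angle relation IFOV = pixel pitch / focal length. A quick check, assuming a 7.4 μm pixel pitch (a value implied by the quoted numbers, not stated in the abstract):

```python
import math

def ifov_urad(pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Per-pixel instantaneous field of view in microradians."""
    # pitch (um) / focal length (um) gives radians; scale to urad.
    return pixel_pitch_um / (focal_length_mm * 1000.0) * 1e6

def fov_deg(n_pixels: int, ifov: float) -> float:
    """Small-angle field of view across n_pixels, given IFOV in urad."""
    return math.degrees(n_pixels * ifov * 1e-6)

m34_ifov = ifov_urad(7.4, 34.0)      # ~218 urad, as quoted for the M-34
m100_ifov = ifov_urad(7.4, 100.0)    # 74 urad, as quoted for the M-100
m34_fov_w = fov_deg(1600, m34_ifov)  # ~20 deg across the 1600-pixel width
```

The same arithmetic reproduces the M-100's 6.8° width and MARDI's ~70° FOV from its 0.76 mrad IFOV, so the published figures hang together.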
22
λ = 2.4 - 5 μm spectroscopy with the JWST NIRCam instrument. JOURNAL OF ASTRONOMICAL TELESCOPES, INSTRUMENTS, AND SYSTEMS 2017; 3:035001. [PMID: 29250563] [PMCID: PMC5729281] [DOI: 10.1117/1.jatis.3.3.035001] [Indexed: 05/15/2023]
Abstract
The James Webb Space Telescope near-infrared camera (JWST NIRCam) has two 2.2' × 2.2' fields of view that can be observed with either imaging or spectroscopic modes. Either of two R ∼ 1500 grisms with orthogonal dispersion directions can be used for slitless spectroscopy over λ = 2.4 - 5.0 μm in each module, and shorter-wavelength observations of the same fields can be obtained simultaneously. We describe the design drivers and parameters of the grisms and present the latest predicted spectroscopic sensitivities, saturation limits, resolving powers, and wavelength coverage values. Simultaneous short-wavelength (0.6 - 2.3 μm) imaging observations of the 2.4 - 5.0 μm spectroscopic field can be performed in one of several different filter bands, either in focus or defocused via weak lenses internal to NIRCam. The grisms are available for single-object time-series spectroscopy and wide-field multi-object slitless spectroscopy modes in the first cycle of JWST observations. We present and discuss operational considerations, including subarray sizes and data volume limits. Potential scientific uses of the grisms are illustrated with simulated observations of deep extragalactic fields, dark clouds, and transiting exoplanets. Information needed to plan observations using these spectroscopic modes is also provided.
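The R ∼ 1500 resolving power translates directly into the size of a spectral resolution element via Δλ = λ/R; a one-line check of what that means at the red end of the grism range:

```python
def resolution_element_um(wavelength_um: float, resolving_power: float) -> float:
    """Spectral resolution element: delta-lambda = lambda / R."""
    return wavelength_um / resolving_power

# At 4.0 um with R = 1500, one resolution element is ~0.0027 um (~2.7 nm).
dl = resolution_element_um(4.0, 1500.0)
```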
23
Comprehensive and Highly Accurate Measurements of Crane Runways, Profiles and Fastenings. SENSORS 2017; 17:s17051118. [PMID: 28505076] [PMCID: PMC5470794] [DOI: 10.3390/s17051118] [Received: 03/07/2017] [Revised: 05/04/2017] [Accepted: 05/06/2017] [Indexed: 11/24/2022]
Abstract
The process of surveying crane runways has been continually refined thanks to the competitive situation, modern surveying instruments, additional sensors, accessories and evaluation procedures. Guidelines, such as International Organization for Standardization (ISO) 12488-1, define target values that must be determined by survey. For a crane runway these are, for example, the span and the position and height of the rails. The process has to be objective and reproducible. However, common processes of surveying crane runways do not meet these requirements sufficiently. The evaluation of the protocols, ideally by an expert, requires many years of experience. Additionally, the recording of crucial parameters, e.g., the wear of the rail or the condition of the rail fastenings and rail joints, is not regulated and for that reason is often not considered during the measurement. To address this deficit, the Advanced Rail Track Inspection System (ARTIS) was developed. ARTIS is used to measure the 3D position of crane rails, the cross-section of the crane rails, joints and, for the first time, the (crane-rail) fastenings. The system consists of a monitoring vehicle and an external tracking sensor. This makes kinematic observations with the tracking sensor from outside the rail run, e.g., from the floor beneath an overhead crane runway, possible. In this paper we present stages of the development process of ARTIS, new target values, calibration of sensors and results of a test measurement.
24
Development of a handheld smart dental instrument for root canal imaging. JOURNAL OF BIOMEDICAL OPTICS 2016; 21:114002. [PMID: 27851855] [PMCID: PMC8357325] [DOI: 10.1117/1.jbo.21.11.114002] [Received: 08/29/2016] [Accepted: 10/27/2016] [Indexed: 06/06/2023]
Abstract
Ergonomics and ease of visualization play a major role in the effectiveness of endodontic therapy. Using only commercial off-the-shelf components, we present the pulpascope: a prototype of a compact, handheld, wireless dental instrument for pulp cavity imaging. This instrument addresses the limitations of occupational injuries, size, and cost that exist with current endodontic microscopes used for root canal procedures. Utilizing a 15,000-element coherent imaging fiber bundle along with an integrated illumination source and wireless CMOS sensor, we demonstrate images of various teeth with a resolution of ~48 μm and an angular field of view of 70 deg.
25
The sensory power of cameras and noise meters for protest surveillance in South Korea. SOCIAL STUDIES OF SCIENCE 2016; 46:396-416. [PMID: 28948889] [DOI: 10.1177/0306312716648403] [Indexed: 06/07/2023]
Abstract
This article analyzes sensory aspects of material politics in social movements, focusing on two police tools: evidence-collecting cameras and noise meters for protest surveillance. Through interviews with Korean political activists, this article examines the relationship between power and the senses in the material culture of Korean protests and asks why cameras and noise meters appeared in order to control contemporary peaceful protests in the 2000s. The use of cameras and noise meters in contemporary peaceful protests evidences the exercise of what Michel Foucault calls 'micro-power'. Building on material culture studies, this article also compares the visual power of cameras with the sonic power of noise meters, in terms of a wide variety of issues: the control of things versus words, impacts on protest size, differential effects on organizers and participants, and differences in timing regarding surveillance and punishment.
26
Bridging the gap between real-life data and simulated data by providing a highly realistic fall dataset for evaluating camera-based fall detection algorithms. Healthc Technol Lett 2016; 3:6-11. [PMID: 27222726] [DOI: 10.1049/htl.2015.0047] [Received: 11/12/2015] [Revised: 12/21/2015] [Accepted: 02/02/2016] [Indexed: 11/19/2022]
Abstract
Fall incidents are an important health hazard for older adults. Automatic fall detection systems can reduce the consequences of a fall incident by assuring that timely aid is given. The development of these systems is therefore getting a lot of research attention. Real-life data which could help evaluate the results of this research is, however, sparse. Moreover, research groups that have this type of data are not at liberty to share it. Most research groups thus use simulated datasets. These simulation datasets, however, often do not incorporate the challenges the fall detection system will face when implemented in real life. In this Letter, a more realistic simulation dataset is presented to fill this gap between real-life data and currently available datasets. It was recorded while re-enacting real-life falls recorded during previous studies, and it incorporates the challenges faced by fall detection algorithms in real life. A fall detection algorithm from Debard et al. was evaluated on this dataset. This evaluation showed that the dataset poses extra challenges compared with other publicly available datasets. In this Letter, the dataset is discussed, as well as the results of this preliminary evaluation of the fall detection algorithm. The dataset can be downloaded from www.kuleuven.be/advise/datasets.
27
The Impact of Red Light Cameras on Crashes Within Miami-Dade County, Florida. TRAFFIC INJURY PREVENTION 2015; 16:773-780. [PMID: 25793316] [DOI: 10.1080/15389588.2015.1023896] [Indexed: 06/04/2023]
Abstract
OBJECTIVE To determine the safety effect of red light camera (RLC) programs, this study attempted to estimate their impact on collisions within Miami-Dade County, Florida. METHODS A before-after evaluation using a comparison group along with traffic control correction was employed. Twenty signalized intersections with RLCs that began enforcement on January 1, 2011, were each matched to 2 comparison sites located at least 2 miles from camera sites to minimize spillover effects. An empirical Bayes analysis was used to account for potential regression-to-the-mean effects. An index of effectiveness along with 95% confidence intervals was calculated based on the comparison between the estimated and actual number of crashes in the after period. RESULTS During the first year, RLC sites experienced a marginal decrease in right-angle/turn collisions (-3%), a significant increase in rear-end collisions (+40%), and significant decreases in all injury (-19%) and red-light-running (RLR)-related injury collisions (-24%). An increase in right-angle/turning (+14%) and rear-end (+51%) collisions at the RLC sites was observed after 2 years despite camera enforcement. A significant reduction in RLR-related injury crashes (-17%), however, was still observed after 2 years. A nonsignificant decline in all injury collisions (-12%) was also noted. CONCLUSIONS RLCs showed a benefit in reducing RLR-related injury collisions at camera sites after enforcement commenced, yet the tradeoff was a large increase in rear-end collisions. There was inconclusive evidence on whether RLCs affected right-angle/turning and all injury collisions. Statutory changes in crash reporting during the second year of camera enforcement affected the incidence of right-angle and rear-end collisions; nevertheless, a "novelty effect" could not be ruled out. Future research should consider events such as low frequencies of severe injury/fatal collisions and changes in crash reporting requirements when conducting RLC analyses.
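The "index of effectiveness" in a before-after study with a comparison group is, in its simplest form, the ratio of observed to expected after-period crashes. A sketch with invented counts (the study's actual analysis adds empirical Bayes correction for regression to the mean, which is omitted here):

```python
def effectiveness_index(treat_before: int, treat_after: int,
                        comp_before: int, comp_after: int) -> float:
    """Simplified index of effectiveness for a before-after evaluation.

    The comparison group's before-to-after trend predicts how many
    crashes the treatment sites would have had without cameras; the
    index is observed / expected, so values < 1 suggest a reduction.
    """
    expected_after = treat_before * (comp_after / comp_before)
    return treat_after / expected_after

# Invented counts: camera sites 120 -> 90 crashes, comparison 200 -> 190.
theta = effectiveness_index(120, 90, 200, 190)  # < 1: apparent reduction
```

Confidence intervals on this index (as reported in the study) quantify whether an apparent reduction could be chance.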
28
Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras. Cardiovasc J Afr 2014; 25:44-9. [PMID: 24844547] [PMCID: PMC4026769] [DOI: 10.5830/cvja-2013-082] [Received: 07/03/2013] [Accepted: 11/18/2013] [Indexed: 11/26/2022]
Abstract
OBJECTIVE To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department, and whether the use of different cameras for the acquisition of raw data influences the results. METHODS The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second component, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. RESULTS The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, by operator 2 was ≤ 2.1 and by operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). CONCLUSION Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.
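The "limits of agreement" the conclusion recommends reporting are typically Bland-Altman 95% limits. A minimal sketch with invented paired LVEF estimates (not data from the study):

```python
import statistics

def limits_of_agreement(a: list, b: list) -> tuple:
    """Bland-Altman 95% limits of agreement between two methods.

    Returns (mean difference, lower limit, upper limit). Reporting
    these allows serial studies processed with different packages
    to be interpreted together.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)       # systematic offset between methods
    sd = statistics.stdev(diffs)        # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired LVEF estimates (%) from two packages:
siemens = [62.0, 55.0, 48.0, 70.0, 60.0]
hermes = [58.0, 51.0, 44.0, 65.0, 55.0]
bias, lo, hi = limits_of_agreement(siemens, hermes)
```

A consistent positive bias, as found between the Siemens and Hermes methods, shows up here as a mean difference well above zero with fairly narrow limits.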
29
The effectiveness of red light cameras in the United States-a literature review. TRAFFIC INJURY PREVENTION 2014; 15:542-550. [PMID: 24867566] [DOI: 10.1080/15389588.2013.845751] [Indexed: 06/03/2023]
Abstract
OBJECTIVE To examine the available scientific evidence, based on peer-reviewed publications, concerning the effectiveness of red light cameras (RLCs) within the U.S. traffic system. METHODS Relevant literature published prior to December 2012 was retrieved from the PubMed, Medline, and Engineering Index databases using free-text term queries. Jurisdictions within the United States with either a fixed number of RLCs studied or area-wide programs were included. RLC studies with additional interventions were excluded. Nine RLC studies were extracted and grouped into 3 categories based on outcome measures: violations, crashes, and injuries/fatalities. RESULTS All 9 studies reviewed showed significant reductions in the frequency/rate of violations, crashes, and injuries at intersections after RLC implementation. RLC interventions appear to decrease violations, crashes, and injuries at intersections. CONCLUSIONS Despite the limited peer-reviewed publications available in the literature, it appears that RLCs decrease violations, crashes, and injuries at intersections. Some studies, however, contained methodological shortcomings. Therefore, the apparent effectiveness should be confirmed with stronger methodological approaches. Although spillover effects appeared to be evident, many of the jurisdictions examined were small in area. Thus, it is unknown whether spillover resulting from RLCs would have similar effects in large metropolitan areas. To determine the full public health impact of RLC programs, crashes, injuries, and fatalities should be considered as the primary outcomes of interest. Accomplishing this requires a clear definition of which types of crashes will be included in RLC studies. Lastly, it is unknown whether RLCs would be effective in reducing crashes resulting from distracted or alcohol-impaired driving. Future studies should examine the effects of RLCs by exclusively analyzing these crash types.
30
Sensor for distance measurement using pixel grey-level information. SENSORS 2009; 9:8896-906. [PMID: 22291543] [PMCID: PMC3260620] [DOI: 10.3390/s91108896] [Received: 09/30/2009] [Revised: 10/29/2009] [Accepted: 11/04/2009] [Indexed: 11/30/2022]
Abstract
An alternative method for distance measurement is presented, based on a radiometric approach to the image formation process. The proposed methodology uses images of an infrared emitting diode (IRED) to estimate the distance between the camera and the IRED. Camera output grey-level intensities are a function of the accumulated image irradiance, which is in turn related to the camera-IRED distance by the inverse-square law. The magnitudes that affect image grey-level intensities, and therefore accumulated image irradiance, were analysed as functions of camera-IRED distance and integrated into a differential model, which was calibrated and used for distance estimation over a 200 to 600 cm range. In this preliminary model, the camera and the emitter were aligned.
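The core of the radiometric idea: if accumulated grey level falls off with the inverse square of distance, a single calibration point suffices to invert the model. A deliberately simplified sketch (the paper's actual differential model has more terms; the grey levels, distances, and the constant k here are invented):

```python
import math

def calibrate_k(grey_level: float, distance_cm: float) -> float:
    """One-point calibration of the radiometric constant k, assuming
    the simplified model: grey_level = k / distance**2."""
    return grey_level * distance_cm ** 2

def estimate_distance(grey_level: float, k: float) -> float:
    """Invert the inverse-square model to recover camera-IRED distance."""
    return math.sqrt(k / grey_level)

# Calibrate at a known 200 cm, then estimate from a new (invented) reading:
k = calibrate_k(grey_level=180.0, distance_cm=200.0)
d = estimate_distance(grey_level=45.0, k=k)  # dimmer reading -> farther away
```

In practice k absorbs exposure time, lens aperture and emitter power, which is why the paper integrates those magnitudes into a calibrated differential model rather than relying on a single constant.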
31
Is thermal scanner losing its bite in mass screening of fever due to SARS? Med Phys 2005; 32:93-7. [PMID: 15719959] [PMCID: PMC7168465] [DOI: 10.1118/1.1819532] [Received: 06/28/2004] [Revised: 09/17/2004] [Accepted: 09/21/2004] [Indexed: 12/02/2022]
Abstract
Severe acute respiratory syndrome (SARS) is a highly infectious disease caused by a coronavirus. Screening to detect potentially SARS-infected persons plays an important role in preventing the spread of SARS. The use of infrared thermal imaging cameras has been proposed as a noninvasive, speedy, cost-effective and fairly accurate means for mass blind screening of potentially SARS-infected persons. Infrared thermography provides a digital image showing temperature patterns and has previously been utilized in the detection of inflammation and nerve dysfunction. It is believed that IR cameras can potentially be used to detect subjects with fever, the cardinal symptom of SARS and avian influenza. The accuracy of an infrared system can, however, be affected by human, environmental, and equipment variables. It is also limited by the fact that the thermal imager measures skin temperature and not core body temperature. The body regulates its temperature around a so-called "set point"; fever occurs when the hypothalamus detects pyrogens and raises this set point. The time course of a typical fever can be divided into three stages. When the fever initiates, the body attempts to raise its temperature, and vasoconstriction occurs to prevent heat loss through the skin. For this reason, some individuals at these stages of fever (on the rising slope immediately after the fever begins, or on the falling slope after the fever breaks) will not be detected by the scanner if it is not designed, in particular, to detect subjects at the plateau of the fever (with their high core temperature). This paper aims to study the effectiveness of infrared systems for application in mass blind screening to detect subjects with elevated body temperature. For this application, it is critical for thermal imagers to be able to distinguish febrile from normal subjects accurately.
Minimizing the number of false positive and false negative cases improves the efficiency of the screening stations. False negative results should be avoided at all costs, as letting a SARS-infected person through the screening process may have potentially catastrophic consequences. Various statistical methods, such as linear regression, receiver operating characteristic (ROC) analysis, and neural-network-based classification, were used to analyze the temperature data collected from various sites on the face, on both the frontal and side profiles. Two important conclusions were drawn from the analysis: the best region on the face from which to obtain temperature readings, and the optimal preset threshold temperature for the thermal imager. This research will remain of interest and useful as a reference for both local and overseas manufacturers of thermal scanners, users, and various government and private establishments. As elevation of body temperature is a common presenting symptom of many illnesses, including infectious diseases, thermal imagers are useful tools for mass screening of body temperature not only for SARS but also during other public health crises where widespread transmission of infection is a concern.
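The threshold-selection problem behind the ROC analysis can be sketched as a trade-off between sensitivity and false alarms (all skin temperatures below are invented illustrative values; real thresholds must come from calibrated data):

```python
def screening_rates(temps_febrile: list, temps_normal: list,
                    threshold: float) -> tuple:
    """Sensitivity and false-positive rate of a skin-temperature
    threshold: the trade-off a ROC analysis sweeps over.

    Raising the threshold lowers false alarms but lets more febrile
    subjects through, which the paper argues must be avoided.
    """
    tp = sum(t >= threshold for t in temps_febrile)   # febrile, flagged
    fp = sum(t >= threshold for t in temps_normal)    # normal, flagged
    sensitivity = tp / len(temps_febrile)
    false_positive_rate = fp / len(temps_normal)
    return sensitivity, false_positive_rate

# Invented skin readings (deg C); skin temperature runs below core temperature.
febrile = [36.2, 36.5, 36.8, 37.1, 37.4]
normal = [35.2, 35.5, 35.8, 36.1, 36.4]
sens, fpr = screening_rates(febrile, normal, threshold=36.4)
```

Sweeping `threshold` over a range and plotting sensitivity against false-positive rate yields the ROC curve from which an optimal preset threshold is chosen.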