1. Lessons learned from regulatory submissions involving endogenous therapeutic analyte bioanalysis. Bioanalysis 2024; 16:171-184. PMID: 38088828; DOI: 10.4155/bio-2023-0209.
Abstract
Endogenous therapeutic analytes include hormones, neurotransmitters, vitamins, fatty acids and inorganic elements that are naturally present in the body, either because the body produces them or because they occur in the normal diet. Accurately measuring such analytes is challenging when the administered exogenous therapeutic analyte and its endogenous counterpart cannot be distinguished. This article collects and presents real case examples of endogenous therapeutic analyte bioanalysis conducted during drug development in support of regulatory submissions. It highlights common challenges encountered and lessons learned in the bioanalysis of endogenous therapeutic analytes and provides practical tips and strategies to consider from a regulatory perspective.
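The distinguishability problem described in this abstract is often mitigated by baseline correction: subtract the mean pre-dose (endogenous) concentration from each post-dose measurement to approximate the exogenous contribution. A minimal sketch, using made-up concentrations that are not drawn from the article itself:

```python
# Baseline (pre-dose) correction for an endogenous therapeutic analyte.
# The measured post-dose signal is the sum of the endogenous baseline
# and the administered exogenous drug, which the assay cannot separate.

def baseline_corrected(post_dose, pre_dose_samples):
    """Subtract the mean pre-dose (endogenous) level from each
    post-dose measurement; clamp at zero for physical plausibility."""
    baseline = sum(pre_dose_samples) / len(pre_dose_samples)
    return [max(c - baseline, 0.0) for c in post_dose]

pre = [4.8, 5.1, 5.0]                # ng/mL, three pre-dose draws (hypothetical)
post = [12.0, 9.5, 6.1]              # ng/mL, post-dose time course (hypothetical)
print(baseline_corrected(post, pre))
```

This simple subtraction assumes the endogenous level stays constant after dosing, which real submissions must justify or correct for.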
2. Background optimization of powder electron diffraction for implementation of the e-PDF technique and study of the local structure of iron oxide nanocrystals. Acta Crystallogr A Found Adv 2023; 79:412-426. PMID: 37490406; DOI: 10.1107/s2053273323005107.
Abstract
The local structural characterization of iron oxide nanoparticles is explored using a total scattering analysis method known as pair distribution function (PDF) analysis (also known as reduced density function analysis). The PDF profiles are derived from background-corrected powder electron diffraction patterns (the e-PDF technique). Owing to the strong Coulombic interaction between the electron beam and the sample, electron diffraction generally involves multiple scattering, which redistributes intensity towards higher scattering angles and raises the background of the diffraction profile. In addition, the electron-specimen interaction gives rise to an undesirable inelastic scattering signal that contributes primarily to the background. The present work demonstrates the efficacy of a pre-treatment of the underlying complex background function, which combines incoherent multiple scattering and inelastic scattering and therefore differs between electron beam energies. Accordingly, two different background subtraction approaches are proposed for the electron diffraction patterns acquired at 80 kV and 300 kV beam energies. From least-squares refinement (small-box modelling), both approaches are found to be very promising, leading to a successful implementation of the e-PDF technique for studying the local structure of the considered nanomaterial.
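The background pre-treatment idea can be illustrated in one dimension: fit a smooth, low-order model to a diffraction profile and subtract it, leaving the sharp coherent peaks. The synthetic profile and the polynomial background model below are illustrative assumptions, not the article's actual treatment:

```python
import numpy as np

# Sketch: subtract a slowly varying (inelastic / multiple-scattering)
# background from a 1-D diffraction profile by fitting a low-order
# polynomial, so only the sharp Bragg-like peak survives.

s = np.linspace(0.1, 2.0, 400)                 # scattering variable (a.u.)
background = 3.0 * np.exp(-s) + 0.5            # smooth synthetic background
peak = np.exp(-((s - 0.9) / 0.02) ** 2)        # one narrow peak at s = 0.9
intensity = background + peak

coeffs = np.polyfit(s, intensity, deg=5)       # low-order background fit
corrected = intensity - np.polyval(coeffs, s)  # background-subtracted profile
```

Because the peak is narrow, it barely biases the least-squares fit, and the corrected profile retains the peak on a near-zero baseline; the real e-PDF workflow uses beam-energy-specific background models instead of a single polynomial.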
3. Raman spectroscopy and U-Net deep neural network in antiresorptive drug-related osteonecrosis of the jaw. Oral Dis 2023. PMID: 37650266; DOI: 10.1111/odi.14721.
Abstract
OBJECTIVE: Application of an optical method for the identification of antiresorptive drug-related osteonecrosis of the jaw (ARONJ).
METHODS: We introduce shifted-excitation Raman difference spectroscopy followed by U-Net deep neural network refinement to determine bone tissue viability. The obtained results are validated against established histological methods.
RESULTS: Discrimination of osteonecrosis from physiological tissues was evaluated at 119 distinct measurement loci in 40 surgical specimens from 28 patients. Mean Raman spectra were refined from 11,900 raw spectra, and characteristic peaks were assigned to their respective molecular origins. Following principal component and linear discriminant analyses, osteonecrotic lesions were distinguished from physiological tissue entities, such as viable bone, with a sensitivity, specificity, and overall accuracy of 100%. Moreover, bone mineral content, quality, maturity, and crystallinity were quantified, revealing an increased mineral-to-matrix ratio and a decreased carbonate-to-phosphate ratio in ARONJ lesions compared to physiological bone.
CONCLUSION: The results demonstrate feasibility with high classification accuracy in this collective. The differentiation was driven by the spectral features of the organic and mineral composition of bone. This purely optical, noninvasive technique is a promising candidate to improve both the diagnosis and treatment of ARONJ in the future.
4. Vision-Based In-Flight Collision Avoidance Control Based on Background Subtraction Using Embedded System. Sensors (Basel) 2023; 23:6297. PMID: 37514592; PMCID: PMC10385618; DOI: 10.3390/s23146297.
Abstract
The development of high-performance, low-cost unmanned aerial vehicles (UAVs), paired with rapid progress in vision-based perception systems, heralds a new era of autonomous flight systems with mission-ready capabilities. One of the key features of an autonomous UAV is a robust mid-air collision avoidance strategy. This paper proposes a vision-based in-flight collision avoidance system based on background subtraction using an embedded computing system for UAVs. The pipeline of the proposed system is as follows: (i) subtract the dynamic background to detect moving objects, (ii) denoise using morphology and binarization methods, (iii) cluster the moving objects and remove noise blobs using Euclidean clustering, (iv) distinguish independent objects and track their movement using the Kalman filter, and (v) avoid collisions using the proposed decision-making techniques. This work focuses on the design and demonstration of a vision-based fast-moving object detection and tracking system with decision-making capabilities for evasive maneuvers, replacing a high-end vision system such as an event camera. The novelty of our method lies in the motion-compensating moving object detection framework, which accomplishes background subtraction via a two-dimensional transformation approximation. Clustering and tracking algorithms process the detection data to track independent objects, and stereo-camera-based distance estimation is used to estimate three-dimensional trajectories for the decision-making procedures. The system is examined on a test quadrotor UAV, and appropriate algorithm parameters for various requirements are deduced.
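Steps (i)-(iii) of the pipeline above can be sketched with plain frame differencing and a simple Euclidean clustering of the moving pixels; the synthetic frames, threshold, and clustering radius are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from collections import deque

# Sketch: (i) frame differencing as a stand-in for background
# subtraction, (ii) thresholding, (iii) Euclidean clustering of the
# moving pixels into object blobs via breadth-first grouping.

def moving_blobs(prev, curr, thresh=30, radius=2.0):
    diff = np.abs(curr.astype(int) - prev.astype(int))
    pts = [tuple(p) for p in np.argwhere(diff > thresh)]
    unvisited, blobs = set(pts), []
    while unvisited:
        seed = unvisited.pop()
        blob, queue = [seed], deque([seed])
        while queue:
            y, x = queue.popleft()
            near = [p for p in unvisited
                    if (p[0] - y) ** 2 + (p[1] - x) ** 2 <= radius ** 2]
            for p in near:
                unvisited.remove(p)
                blob.append(p)
                queue.append(p)
        blobs.append(blob)
    return blobs

prev = np.zeros((20, 20), np.uint8)
curr = prev.copy()
curr[3:6, 3:6] = 200       # one moving object
curr[14:16, 12:15] = 180   # another
print(len(moving_blobs(prev, curr)))  # prints 2
```

Each blob's centroid would then be handed to a tracker (the paper uses a Kalman filter) in step (iv).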
5. Wafer Surface Defect Detection Based on Background Subtraction and Faster R-CNN. Micromachines 2023; 14:mi14050905. PMID: 37241529; DOI: 10.3390/mi14050905.
Abstract
Because wafer surface defects are easily confused with the background and are therefore difficult to detect, a new detection method based on background subtraction and Faster R-CNN is proposed. First, an improved spectral analysis method is proposed to measure the period of the image, from which the substructure image is obtained. Next, a local template matching method is adopted to position the substructure image, thereby reconstructing the background image, and the interference of the background is eliminated by an image difference operation. Finally, the difference image is fed into an improved Faster R-CNN network for detection. The proposed method has been validated on a self-developed wafer dataset and compared with other detectors. The experimental results show that, compared with the original Faster R-CNN, the proposed method effectively increases the mAP by 5.2%, meeting the requirements of intelligent manufacturing for high detection accuracy.
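The period-based background elimination can be illustrated in one dimension: estimate the pattern period from the spectrum, shift the signal by one period, and take the difference so that only the defect survives. The signal below is synthetic, and the FFT-based period estimate is a simplification of the article's improved spectral analysis:

```python
import numpy as np

# Sketch: a periodic "wafer row" with one defect. The dominant FFT
# harmonic gives the period; differencing against a one-period shift
# cancels the repetitive background and exposes the defect.

pattern = np.tile(np.array([0.0, 1.0, 4.0, 1.0]), 32)  # period 4, length 128
signal = pattern.copy()
signal[50] += 3.0                                      # the defect

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
k = int(np.argmax(spectrum[1:])) + 1                   # dominant harmonic
period = round(len(signal) / k)

diff = signal - np.roll(signal, period)                # background cancels
print(period, int(np.argmax(np.abs(diff))))            # prints: 4 50
```

In the paper the reconstructed background comes from template matching in 2-D rather than a plain circular shift, but the cancellation principle is the same.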
6. Global Xenobiotic Profiling of Rat Plasma Using Untargeted Metabolomics and Background Subtraction-Based Approaches: Method Evaluation and Comparison. Curr Drug Metab 2023; 24:200-210. PMID: 37157207; DOI: 10.2174/1389200224666230508122240.
Abstract
BACKGROUND: Global xenobiotic profiling (GXP) aims to detect and structurally characterize all xenobiotics in biological samples, mainly using liquid chromatography-high resolution mass spectrometry (LC-HRMS) based methods. GXP is highly needed in drug metabolism studies, food safety testing, forensic chemical analysis, and exposome research. For detecting known or predictable xenobiotics, targeted LC-HRMS data processing methods based on the molecular weights, mass defects, and fragmentations of analytes are routinely employed. For profiling unknown xenobiotics, untargeted LC-HRMS based metabolomics and background subtraction-based approaches are required.
OBJECTIVE: This study aimed to evaluate the effectiveness of untargeted metabolomics and precise and thorough background subtraction (PATBS) in GXP of rat plasma.
METHODS: Rat plasma samples collected after an oral administration of nefazodone (NEF) or Glycyrrhizae Radix et Rhizoma (Gancao, GC) were analyzed by LC-HRMS. NEF metabolites and GC components in rat plasma were thoroughly searched for and characterized by processing the LC-HRMS datasets with targeted and untargeted methods.
RESULTS: PATBS detected 68 NEF metabolites and 63 GC components, while the metabolomic approach (MS-DIAL) found 67 NEF metabolites and 60 GC components in rat plasma. Together, the two methods found 79 NEF metabolites and 80 GC components, with success rates of 96% and 91%, respectively.
CONCLUSION: Metabolomics methods are capable of GXP and of measuring alterations of endogenous metabolites across a group of biological samples, while PATBS is better suited to sensitive GXP of a single biological sample. A combination of the two approaches can produce better results in the untargeted profiling of unknown xenobiotics.
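The background-subtraction idea behind approaches like PATBS can be sketched as a feature-level set difference: a feature observed in the dosed sample is kept only if no control-sample feature matches it within m/z and retention-time tolerances. The feature values and tolerances below are hypothetical:

```python
# Sketch of feature-level background subtraction for LC-HRMS data.
# Each feature is a (m/z, retention time in minutes) pair; features
# also present in the control (pre-dose) sample are treated as
# endogenous background and removed.

def subtract_background(dosed, control, mz_tol=0.01, rt_tol=0.2):
    def matches(f, g):
        return abs(f[0] - g[0]) <= mz_tol and abs(f[1] - g[1]) <= rt_tol
    return [f for f in dosed if not any(matches(f, g) for g in control)]

control = [(301.1410, 5.2), (255.0650, 3.1)]                 # endogenous ions
dosed = [(301.1412, 5.3), (470.2320, 7.8), (255.0648, 3.0)]  # dosed sample
print(subtract_background(dosed, control))  # → [(470.232, 7.8)]
```

The surviving feature would then be a candidate drug-related component for structural characterization; the real PATBS algorithm applies considerably more sophisticated matching than this two-column tolerance check.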
7. Context-Unsupervised Adversarial Network for Video Sensors. Sensors (Basel) 2022; 22:s22093171. PMID: 35590863; PMCID: PMC9102692; DOI: 10.3390/s22093171.
Abstract
Foreground object segmentation is a crucial first step for surveillance systems based on networks of video sensors. The problem has been widely explored for dynamic scenes over the last two decades, but it still has open research questions due to challenges such as strong shadows, background clutter and illumination changes. After years of solid work based on statistical background pixel modeling, most current proposals use convolutional neural networks (CNNs) either to model the background or to make the foreground/background decision. Although these new techniques achieve outstanding results, they usually require specific training for each scene, which is unfeasible if we aim at designing software for embedded video systems and smart cameras. Our approach requires no context- or scene-specific training, and thus no manual labeling. We propose a network that acts as a refinement step on top of conventional state-of-the-art background subtraction systems. Because a statistical technique produces the rough mask, the network does not need to be trained for each scene. The proposed method can take advantage of the specificity of the classic techniques while obtaining the highly accurate segmentation that a deep learning system provides. We also show the advantage of using an adversarial network to improve the generalization ability of the network and produce more consistent results than an equivalent non-adversarial network. The reported results were obtained by training the network on a common database, without fine-tuning for specific scenes. Experiments on the unseen part of the CDNet database yielded an F-score of 0.82, and 0.87 was achieved on LASIESTA, a database unrelated to the training one; on LASIESTA, the results outperformed those in the official table by 8.75%. The results achieved for CDNet are well above those of methods not based on CNNs and, according to the literature, among the best among context-unsupervised CNN systems.
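A conventional statistical first stage of the kind this refinement network builds on can be sketched as an exponential running-average background model with a per-pixel threshold; the frames and parameters below are synthetic assumptions, not the paper's setup:

```python
import numpy as np

# Sketch: learn a per-pixel background by exponential running average,
# then produce a rough foreground mask by thresholding the absolute
# difference between the current frame and the learned background.

def update(bg, frame, alpha=0.05):
    return (1 - alpha) * bg + alpha * frame

def rough_mask(bg, frame, thresh=25):
    return (np.abs(frame.astype(float) - bg) > thresh).astype(np.uint8)

rng = np.random.default_rng(0)
bg = np.full((16, 16), 100.0)
for _ in range(20):                          # learn the static scene
    noisy = np.clip(100 + rng.normal(0, 2, (16, 16)), 0, 255)
    bg = update(bg, noisy)

frame = np.full((16, 16), 100.0)
frame[4:8, 4:8] = 200.0                      # a foreground object appears
mask = rough_mask(bg, frame)
print(int(mask.sum()))                       # → 16
```

In the paper's design, a mask like this is the scene-agnostic input that the adversarially trained network refines into an accurate segmentation.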
8. A Resource-Efficient CNN-Based Method for Moving Vehicle Detection. Sensors (Basel) 2022; 22:s22031193. PMID: 35161938; PMCID: PMC8839159; DOI: 10.3390/s22031193.
Abstract
There has been significant interest in using Convolutional Neural Network (CNN)-based methods for Automated Vehicular Surveillance (AVS) systems. Although these methods provide high accuracy, they are computationally expensive. On the other hand, Background Subtraction (BS)-based approaches are lightweight but provide insufficient information for tasks such as monitoring driving behavior and detecting traffic rule violations. In this paper, we propose a framework to reduce the complexity of CNN-based AVS methods, in which a BS-based module is introduced as a preprocessing step to optimize the number of convolution operations executed by the CNN module. The BS-based module generates image candidates containing only moving objects. A CNN-based detector with the appropriate number of convolutions is then applied to each image candidate to handle the overlapping problem and improve detection performance. Four state-of-the-art CNN-based detection architectures were benchmarked as base models of the detection cores to evaluate the proposed framework. The experiments were conducted using a large-scale dataset. The computational complexity reduction of the proposed framework increases with the complexity of the considered CNN model's architecture (e.g., 30.6% for YOLOv5s with 7.3M parameters; 52.2% for YOLOv5x with 87.7M parameters), without undermining accuracy.
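The BS-based preprocessing can be sketched as extracting the bounding box of moving regions so that the detector processes only a fraction of the frame. For simplicity, the sketch returns one global box around all moving pixels, which is an assumption for illustration, not the paper's exact module:

```python
import numpy as np

# Sketch: frame differencing finds moving pixels; their bounding box
# becomes the image candidate passed to the (expensive) CNN detector,
# so most of the frame is never convolved.

def candidate_boxes(prev, curr, thresh=30):
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(diff)
    if len(ys) == 0:
        return []
    # One global box for simplicity; a real module would box each blob.
    return [(ys.min(), xs.min(), ys.max() + 1, xs.max() + 1)]

prev = np.zeros((100, 100), np.uint8)
curr = prev.copy()
curr[10:30, 40:60] = 255                     # a moving vehicle
(y0, x0, y1, x1), = candidate_boxes(prev, curr)
saved = 1 - (y1 - y0) * (x1 - x0) / curr.size
print(f"{saved:.0%} of pixels skipped")      # → 96% of pixels skipped
```

The fraction skipped is exactly the source of the complexity reduction the paper reports, which is why the saving grows with the base model's cost per pixel.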
9. Fast and Accurate Background Reconstruction Using Background Bootstrapping. J Imaging 2022; 8:jimaging8010009. PMID: 35049850; PMCID: PMC8780815; DOI: 10.3390/jimaging8010009.
Abstract
The goal of background reconstruction is to recover the background image of a scene from a sequence of frames showing that scene cluttered by various moving objects. This task is fundamental in image analysis and is generally the first step before more advanced processing, but it is difficult because there is no formal definition of what should be considered background or foreground, and the results may be severely impacted by challenges such as illumination changes, intermittent object motion, and highly cluttered scenes. We propose a new iterative algorithm for background reconstruction, in which the current estimate of the background is used to guess which image pixels are background pixels, and a new background estimate is then computed using those pixels only. We show that the proposed algorithm, which uses stochastic gradient descent for improved regularization, is more accurate than the state of the art on the challenging SBMnet dataset, especially for short videos with low frame rates. It is also fast, reaching an average of 52 fps on this dataset when parameterized for maximal accuracy, using GPU acceleration and a Python implementation.
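The iterative idea can be sketched as follows: classify a pixel as background when it is close to the current background estimate, then re-estimate the background from those pixels only. The paper itself uses stochastic gradient descent; the plain masked-mean update below is a deliberate simplification on synthetic frames:

```python
import numpy as np

# Sketch: bootstrap a background estimate from a frame stack containing
# a moving object. Each iteration masks out pixels far from the current
# estimate and averages the remaining (presumed background) pixels.

def reconstruct(frames, iters=5, thresh=20):
    bg = np.median(frames, axis=0)                   # initial estimate
    for _ in range(iters):
        masks = [np.abs(f - bg) < thresh for f in frames]
        num = sum(np.where(m, f, 0.0) for f, m in zip(frames, masks))
        den = sum(m.astype(float) for m in masks)
        bg = np.where(den > 0, num / np.maximum(den, 1), bg)
    return bg

frames = [np.full((10, 10), 50.0) for _ in range(5)]
for i, f in enumerate(frames):
    f[2:4, 2 * i:2 * i + 2] = 255.0                  # object sweeps across
bg = reconstruct(np.stack(frames))
print(float(bg[2, 4]), float(bg[5, 5]))              # → 50.0 50.0
```

Even though every pixel in rows 2-3 is occluded in one frame, the masked re-estimation recovers the clean background everywhere.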
10. Saliency Detection with Moving Camera via Background Model Completion. Sensors (Basel) 2021; 21:s21248374. PMID: 34960461; PMCID: PMC8707474; DOI: 10.3390/s21248374.
Abstract
Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency here denotes the significant target(s) in the video; the object of interest is further analyzed in high-level applications. Saliency can be segregated from the background when the two exhibit different visual cues, so saliency detection is often formulated as background subtraction. The task is nevertheless challenging: a dynamic background can produce false positive errors, camouflage can produce false negative errors, and scenes captured by moving cameras are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence. Based on the idea of video completion, a good background frame can be synthesized even in the co-existence of a changing background and moving objects. We adopt a background/foreground segmenter that, although pre-trained on a specific video dataset, can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the segmenter's output deteriorates during the processing of a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results obtained on the pan-tilt-zoom (PTZ) videos show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. On more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.
11. Methodology for the Automated Visual Detection of Bird and Bat Collision Fatalities at Onshore Wind Turbines. J Imaging 2021; 7:jimaging7120272. PMID: 34940738; PMCID: PMC8704095; DOI: 10.3390/jimaging7120272.
Abstract
The number of collision fatalities is one of the main quantification measures in research on wind power impacts on birds and bats. Despite being integral to ongoing investigations as well as regulatory approvals, the state-of-the-art method for detecting fatalities remains a manual search by humans or dogs, which is expensive and time consuming and whose efficiency varies greatly among studies. We therefore developed a methodology for automatic detection using visual/near-infrared cameras for daytime and thermal cameras for nighttime. The cameras can be installed in the nacelle of wind turbines to monitor the area below. The methodology is centered around software that analyzes the images in real time using pixel-wise and region-based methods. We found that structural similarity is the most important measure for the detection decision. Phantom drop tests in an actual wind test field, with the system installed 75 m above the ground, resulted in a sensitivity of 75.6% for nighttime detection and 84.3% for daylight detection. The night camera detected 2.47 false positives per hour using a time window designed for our phantom drop tests; in real applications, this window can be extended to eliminate false positives caused by nocturnally active animals, and excluding these from our data reduced the false positive rate to 0.05. The daylight camera detected 0.20 false positives per hour. Our proposed method has the advantage of being more consistent, more objective, less time consuming, and less expensive than manual search methods.
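The structural-similarity measure singled out above can be computed globally over two patches with the standard SSIM formula (the usual C1/C2 stabilizers for 8-bit data); the patches and the 0.8 decision threshold below are illustrative assumptions:

```python
import numpy as np

# Sketch: global SSIM between two small patches. A score near 1 means
# "unchanged background"; a low score flags a change such as a dropped
# carcass against the remembered background.

def ssim(a, b, L=255):
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

rng = np.random.default_rng(1)
patch = rng.integers(0, 255, (8, 8))
same = patch.copy()
changed = patch.copy()
changed[2:6, 2:6] = 255                      # a new bright object
print(round(float(ssim(patch, same)), 3))    # → 1.0
print(float(ssim(patch, changed)) < 0.8)     # → True
```

Production implementations compute SSIM over a sliding window rather than globally, which localizes the change within the monitored area.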
12. DeepBhvTracking: A Novel Behavior Tracking Method for Laboratory Animals Based on Deep Learning. Front Behav Neurosci 2021; 15:750894. PMID: 34776893; PMCID: PMC8581673; DOI: 10.3389/fnbeh.2021.750894.
Abstract
Behavioral measurement and evaluation are broadly used to understand brain functions in neuroscience, especially in investigations of movement disorders, social deficits, and mental diseases. Numerous commercial software packages and open-source programs have been developed for tracking the movement of laboratory animals, allowing animal behavior to be analyzed digitally. In vivo optical imaging and electrophysiological recording in freely behaving animals are now widely used to understand neural functions in circuits. However, it remains a challenge to accurately track the movement of an animal under complex conditions, due to uneven environmental illumination, variation among animal models, and interference from recording devices and experimenters. To overcome these challenges, we have developed a strategy that tracks the movement of an animal by combining a deep learning technique, the You Only Look Once (YOLO) algorithm, with a background subtraction algorithm, a method we label DeepBhvTracking. In our method, we first train the detector using manually labeled images and a pretrained deep-learning neural network combined with YOLO, then generate bounding boxes of the targets using the trained detector, and finally track the center of each target by calculating its centroid within the bounding box using background subtraction. Using DeepBhvTracking, the movement of animals can be tracked accurately in complex environments and across different behavioral paradigms and animal models. DeepBhvTracking can therefore be broadly used in studies of neuroscience, medicine, and machine learning algorithms.
13. A Temporal Boosted YOLO-Based Model for Birds Detection around Wind Farms. J Imaging 2021; 7:jimaging7110227. PMID: 34821858; PMCID: PMC8617668; DOI: 10.3390/jimaging7110227.
Abstract
Object detection for sky surveillance is a challenging problem due to small objects in a large volume and a constantly changing background, which requires high-resolution frames; an example is detecting flying birds in wind farms to prevent their collision with the wind turbines. This paper proposes a YOLOv4-based ensemble model for bird detection in grayscale videos captured around wind turbines in wind farms. To tackle this problem, we introduce two datasets, (1) Klim and (2) Skagen, collected at two locations in Denmark. We use the Klim training set to train three increasingly capable YOLOv4-based models. Model 1 uses YOLOv4 trained on the Klim dataset, Model 2 introduces tiling to improve small-bird detection, and the last model uses tiling and temporal stacking and achieves the best mAP values on both the Klim and Skagen datasets. We used this model to set up an ensemble detector, which further improves the mAP values on both datasets. The three models achieve testing mAP values of 82%, 88%, and 90% on the Klim dataset; the mAP values for Model 1 and Model 3 on the Skagen dataset are 60% and 92%. Improved detection accuracy could help mitigate bird mortality by informing the siting of wind farms and of individual turbines, and could also improve the collision avoidance systems used in wind energy facilities.
14. Estrus Detection Using Background Image Subtraction Technique in Tie-Stalled Cows. Animals (Basel) 2021; 11:ani11061795. PMID: 34208569; PMCID: PMC8235789; DOI: 10.3390/ani11061795.
Abstract
Simple Summary: With increasing herd sizes and labor costs in recent decades, visual estrus detection by farmers has become more difficult because of the reduced manpower input per cow. To address this problem, various wearable devices have been developed for automatic estrus detection in cows, such as neck- or leg-mounted activity meters that monitor the estrus-associated increase in activity. However, these animal-contact devices have several limitations: attaching or removing a device can be dangerous, and the device can cause discomfort. Recently, a background image subtraction technique has been proposed as a non-contact method for monitoring activity changes in livestock animals. In this study, a new method combining the background subtraction technique with a thresholding method was developed to detect estrus-associated activity increases in tie-stalled cows. Using this method, a substantial increase in activity at estrus was detectable, and the estrus detection sensitivity reached as high as 90% with a precision of 50%, where sensitivity and precision were calculated as (true-positive/[true-positive + false-negative]) × 100% and (true-positive/[true-positive + false-positive]) × 100%, respectively. Activity monitoring using the background subtraction technique thus has the potential to serve as a non-contact estrus detection method in tie-stalled cows.
Abstract: In this study, we determined the applicability of the background image subtraction technique for detecting estrus in tie-stalled cows. To investigate the impact of the camera shooting direction, webcams were set up to capture the front, top, and rear views of a cow simultaneously. Video recording was performed for a total of ten estrous cycles in six cows. Standing estrus was confirmed by testing at 6 h intervals. From the end of estrus, transrectal ultrasonography was performed every 2 h to confirm the ovulation time. Foreground (moving) objects were extracted from the videos using the background subtraction technique, and the pixels were counted in each frame of five-frames-per-second sequences. After calculating hourly averaged pixel counts, the change in values was expressed as the pixel ratio (total value during the last 24 h / total value during the preceding 24 h). The mean pixel ratio gradually increased from approximately 48 h before ovulation, and the highest value was observed at estrus, regardless of the camera shooting direction. Using front-view videos with an appropriate threshold, estrus was detected with 90% sensitivity and 50% precision. The present method has the potential to serve as a non-contact estrus detection method for tie-stalled cows.
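The pixel-ratio index follows directly from its definition: the sum of the last 24 hourly foreground-pixel counts divided by the sum of the preceding 24. The counts and the 1.5 threshold below are made up for illustration:

```python
# Sketch: the estrus-detection activity index. Hourly foreground-pixel
# counts (from background subtraction) are aggregated into a ratio of
# the last 24 h over the preceding 24 h; a ratio above a chosen
# threshold flags an estrus-associated activity increase.

def pixel_ratio(hourly_counts):
    """hourly_counts: at least 48 hourly values, newest last."""
    last24 = sum(hourly_counts[-24:])
    prev24 = sum(hourly_counts[-48:-24])
    return last24 / prev24

quiet = [1000] * 24                 # baseline activity, pixels/hour
restless = [1800] * 24              # estrus-associated increase
ratio = pixel_ratio(quiet + restless)
print(ratio, ratio > 1.5)           # → 1.8 True
```

The threshold trades sensitivity against precision, which is how the study arrives at its reported 90%/50% operating point.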
15. Fall Detection System-Based Posture-Recognition for Indoor Environments. J Imaging 2021; 7:jimaging7030042. PMID: 34460698; PMCID: PMC8321307; DOI: 10.3390/jimaging7030042.
Abstract
The majority of the senior population lives alone at home. Falls can cause serious injuries, such as fractures or head injuries, which can prevent a person from moving around and carrying out their normal daily activities; some of these injuries can lead to a risk of death if not handled urgently. In this paper, we propose a fall detection system for elderly people based on their postures. The postures are recognized from the human silhouette, which has the advantage of preserving the privacy of the elderly. The effectiveness of our approach is demonstrated on two well-known datasets for human posture classification and three public datasets for fall detection, using a Support Vector Machine (SVM) classifier. The experimental results show that our method achieves not only a high fall detection rate but also a low false detection rate.
16. Ghost Detection and Removal Based on Two-Layer Background Model and Histogram Similarity. Sensors (Basel) 2020; 20:s20164558. PMID: 32823909; PMCID: PMC7472150; DOI: 10.3390/s20164558.
Abstract
Detecting and removing ghosts is an important challenge for moving object detection, because ghosts remain forever once formed, degrading the overall detection performance. To deal with this issue, we first classify ghosts into two categories according to the way they are formed. We then propose a sample-based two-layer background model and a histogram similarity measure of ghost areas to detect and remove the two types of ghosts, respectively. Furthermore, three important parameters of the two-layer model, i.e., the distance threshold, the similarity threshold of the local binary similarity pattern (LBSP), and the time sub-sampling factor, are automatically determined from the spatial-temporal information of each pixel in order to adapt rapidly to scene changes. Experimental results on the CDnet 2014 dataset demonstrate that the proposed algorithm not only effectively eliminates ghost areas, but is also superior to state-of-the-art approaches in terms of overall performance.
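The histogram-similarity test for ghost areas can be sketched with a normalized gray-level histogram and histogram intersection: a ghost area looks like the background behind it, so the two histograms closely match, whereas a real object's does not. The toy regions and bin count below are assumptions, not the paper's parameters:

```python
# Sketch: histogram intersection as a ghost test. Intersection near 1
# means the candidate region matches the background behind it (a ghost
# to be absorbed); a low value indicates a genuine foreground object.

def histogram(pixels, bins=8, max_val=256):
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    n = len(pixels)
    return [c / n for c in h]

def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

background = [100, 105, 110, 102, 98, 107, 101, 104]
ghost = [101, 104, 109, 103, 99, 106, 100, 105]   # same appearance
obj = [220, 230, 225, 228, 222, 231, 226, 224]    # genuinely different

print(intersection(histogram(ghost), histogram(background)))  # → 1.0
print(intersection(histogram(obj), histogram(background)))    # → 0.0
```

A real implementation would compare 2-D region histograms with finer bins, but the decision rule is the same thresholded similarity.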
17. Foreground Scattering Elimination by Inverse Lock-in-Like Spatial Modulation. Vision (Basel) 2020; 4:vision4030037. PMID: 32823703; PMCID: PMC7558313; DOI: 10.3390/vision4030037.
Abstract
We describe a simple approach to enhance vision, which is impaired by close range obscuring and/or scattering structures. Such structures may be found on a dirty windscreen of a car, or by tree branches blocking the vision of objects behind. The main idea is to spatially modulate the obscuration, either by periodically moving the detector/eye or by letting the obscuration modulate itself, such as branches swinging in the wind. The approach has similarities to electronic lock-in techniques, where the feature of interest is modulated to enable it to be isolated from the strong perturbing background, but now, we modulate the background instead to isolate the static feature of interest. Thus, the approach can be denoted as “inverse lock-in-like spatial modulation”. We also apply a new digital imaging processing technique based on a combination of the Interframe Difference and Gaussian Mixture models for digital separation between the objects of interest and the background, and make connections to the Gestalt vision psychology field.
|
18
|
Asynchronous Semantic Background Subtraction. J Imaging 2020; 6:50. [PMID: 34460596] [PMCID: PMC8321070] [DOI: 10.3390/jimaging6060050] [Received: 04/30/2020] [Revised: 06/09/2020] [Accepted: 06/13/2020]
Abstract
The method of Semantic Background Subtraction (SBS), which combines semantic segmentation and background subtraction, has recently emerged for the task of segmenting moving objects in video sequences. While SBS has been shown to improve background subtraction, a major difficulty is that it combines two streams generated at different frame rates. This results in SBS operating at the slower frame rate of the two streams, usually that of the semantic segmentation algorithm. We present a method, referred to as "Asynchronous Semantic Background Subtraction" (ASBS), able to combine a semantic segmentation algorithm with any background subtraction algorithm asynchronously. It achieves performance close to that of SBS while operating at the fastest possible frame rate, that of the background subtraction algorithm. Our method consists of analyzing the temporal evolution of pixel features to replicate, where possible, the decisions previously enforced by semantics when no semantic information is computed. We showcase ASBS with several background subtraction algorithms and also add a feedback mechanism that feeds results back into the background model of the background subtraction algorithm to upgrade its updating strategy and, consequently, enhance its decisions. Experiments show that we systematically improve performance, even when the semantic stream has a much slower frame rate than that of the background subtraction algorithm. In addition, we establish that, with the help of ASBS, a real-time background subtraction algorithm such as ViBe stays real time and competes with some of the best non-real-time unsupervised background subtraction algorithms, such as SuBSENSE.
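The replication rule at the heart of ASBS can be sketched as follows: on frames with no semantic output, a pixel reuses the last semantic decision if its feature (here, raw intensity) has barely changed since the frame on which that decision was computed; otherwise the background subtraction result is trusted. The threshold and feature choice are illustrative assumptions:

```python
import numpy as np

def asbs_combine(bgs_mask, frame, last_sem_mask, last_sem_frame, tau=10):
    """Reuse the previous semantic decision where the pixel feature is
    stable; elsewhere trust the background subtraction result."""
    stable = np.abs(frame.astype(int) - last_sem_frame.astype(int)) < tau
    return np.where(stable, last_sem_mask, bgs_mask)

sem_frame = np.full((4, 4), 100, dtype=np.uint8)
sem_mask = np.zeros((4, 4), dtype=bool)
sem_mask[1, 1] = True                      # semantics said: object here

frame = sem_frame.copy()
frame[3, 3] = 220                          # new motion after the semantic frame
bgs_mask = np.zeros((4, 4), dtype=bool)
bgs_mask[3, 3] = True                      # BGS flags the new motion
bgs_mask[0, 0] = True                      # ...and one false positive

out = asbs_combine(bgs_mask, frame, sem_mask, sem_frame)
print(out[1, 1], out[3, 3], out[0, 0])     # True True False
```

Note how the stale false positive at (0, 0) is suppressed by the replicated semantic decision, while genuinely new motion at (3, 3) passes through.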
|
19
|
An FPGA Based Tracking Implementation for Parkinson's Patients. Sensors 2020; 20:3189. [PMID: 32512749] [PMCID: PMC7309050] [DOI: 10.3390/s20113189] [Received: 02/20/2020] [Revised: 05/28/2020] [Accepted: 05/28/2020]
Abstract
This paper presents a study on the optimization of the tracking system designed for patients with Parkinson's disease tested at a day hospital center. The work performed significantly improves the efficiency of the computer vision based system in terms of energy consumption and hardware requirements. More specifically, it optimizes the performance of the background subtraction stage, which segments every frame previously characterized by a Gaussian mixture model (GMM). This module is the most demanding part in terms of computational resources, and therefore this paper proposes a method for its implementation on a low-cost development board based on the Zynq XC7Z020 SoC (system on chip). The platform used is the ZedBoard, which combines an ARM processor unit and an FPGA. It achieves real-time performance and low power consumption while accurately performing the target task. The results and achievements of this study, validated in real medical settings, are discussed and analyzed herein.
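The per-pixel background model that the GMM segmentation stage implements can be illustrated with a single running Gaussian per pixel, a simplified stand-in for the full mixture (learning rate and threshold values are illustrative):

```python
# Simplified sketch of per-pixel background modelling: pixels farther
# than k sigmas from the running mean are foreground; the mean and
# variance are then updated with a small learning rate. A full GMM
# would keep several such Gaussians per pixel.
import numpy as np

def update_and_segment(frame, mean, var, alpha=0.05, k=2.5):
    diff = frame - mean
    foreground = diff**2 > (k**2) * var   # outside k-sigma band
    mean = mean + alpha * diff            # running update
    var = var + alpha * (diff**2 - var)
    return foreground, mean, var

rng = np.random.default_rng(2)
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)
# Frame of background noise, with one pixel jumping far from the model.
frame = mean + rng.normal(0, 2, size=(4, 4))
frame[1, 2] = 200.0
fg, mean, var = update_and_segment(frame, mean, var)
print(fg[1, 2], fg[0, 0])  # True False
```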
|
20
|
Abstract
Habituation is a form of simple memory that suppresses neural activity in response to repeated, neutral stimuli. This process is critical in helping organisms guide attention toward the most salient and novel features in the environment. Here, we follow known circuit mechanisms in the fruit fly olfactory system to derive a simple algorithm for habituation. We show, both empirically and analytically, that this algorithm is able to filter out redundant information, enhance discrimination between odors that share a similar background, and improve detection of novel components in odor mixtures. Overall, we propose an algorithmic perspective on the biological mechanism of habituation and use this perspective to understand how sensory physiology can affect odor perception. Our framework may also help toward understanding the effects of habituation in other more sophisticated neural systems.
|
21
|
Crystallographic Characterisation of Ultra-Thin, or Amorphous Transparent Conducting Oxides-The Case for Raman Spectroscopy. Materials 2020; 13:267. [PMID: 31936137] [PMCID: PMC7013887] [DOI: 10.3390/ma13020267] [Received: 11/12/2019] [Revised: 12/31/2019] [Accepted: 01/02/2020]
Abstract
The electronic and optical properties of transparent conducting oxides (TCOs) are closely linked to their crystallographic structure on a macroscopic (grain sizes) and microscopic (bond structure) level. With the increasing drive towards using reduced film thicknesses in devices and growing interest in amorphous TCOs such as n-type InGaZnO4 (IGZO), ZnSnO3 (ZTO), p-type CuxCrO2, or ZnRh2O4, the task of gaining in-depth knowledge of their crystal structure by conventional X-ray diffraction-based measurements is becoming increasingly difficult. We demonstrate the use of a focal-shift-based background subtraction technique for Raman spectroscopy specifically developed for the case of transparent thin films on amorphous substrates. Using this technique, we demonstrate, for a variety of TCOs (CuO, a-ZTO, ZnO:Al), how changes in local vibrational modes reflect changes in the composition of the TCO and consequently their electronic properties.
|
22
|
Accurate background correction in neutron reflectometry studies of soft condensed matter films in contact with fluid reservoirs. J Appl Crystallogr 2020; 53. [PMID: 34194075] [PMCID: PMC8240731] [DOI: 10.1107/s160057671901481x] [Received: 10/11/2018] [Accepted: 11/01/2019]
Abstract
Neutron reflectometry (NR) is a powerful method for looking at the structures of multilayered thin films, including biomolecules on surfaces, particularly proteins at lipid interfaces. The spatial resolution of the film structure obtained through an NR experiment is limited by the maximum wavevector transfer at which the reflectivity can be measured. This maximum is in turn determined primarily by the scattering background, e.g. from incoherent scattering from a liquid reservoir or inelastic scattering from cell materials. Thus, reduction of scattering background is an important part of improving the spatial resolution attainable in NR measurements. Here, the background field generated by scattering from a thin liquid reservoir on a monochromatic reflectometer is measured and calculated. It is shown that background subtraction utilizing the entire background field improves data modeling and reduces experimental uncertainties associated with localized background subtraction.
|
23
|
An Open Source, Iterative Dual-Tree Wavelet Background Subtraction Method Extended from Automated Diffraction Pattern Analysis to Optical Spectroscopy. Applied Spectroscopy 2019; 73:1370-1379. [PMID: 31397582] [DOI: 10.1177/0003702819871330]
Abstract
Background subtraction is a general problem in spectroscopy often addressed with application-specific techniques, or methods that introduce a variety of implementation barriers such as having to specify peak-free regions of the spectrum. An iterative dual-tree complex wavelet transform-based background subtraction method (DTCWT-IA) was recently developed for the analysis of ultrafast electron diffraction patterns. The method was designed to require minimal user intervention, to support streamlined analysis of many diffraction patterns with complex overlapping peaks and time-varying backgrounds, and is implemented in an open-source computer program. We examined the performance of DTCWT-IA for the analysis of spectra acquired by a range of optical spectroscopies including ultraviolet-visible spectroscopy (UV-Vis), X-ray photoelectron spectroscopy (XPS), and surface-enhanced Raman spectroscopy (SERS). A key benefit of the method is that the user need not specify regions of the spectrum where no peaks are expected to occur. SER spectra were used to investigate the robustness of DTCWT-IA to signal-to-noise levels in the spectrum and to user operation, specifically to two of the algorithm parameter settings: decomposition level and iteration number. The single, general DTCWT-IA implementation performs well in comparison to the different conventional approaches to background subtraction for UV-Vis, XPS, and SERS, while requiring minimal input. The method thus holds the same potential for optical spectroscopy as for ultrafast electron diffraction, namely streamlined analysis of spectra with complex distributions of peaks and varying signal levels, thus supporting real-time spectral analysis or the analysis of data acquired from different sources.
|
24
|
Robust Vehicle Detection and Counting Algorithm Employing a Convolution Neural Network and Optical Flow. Sensors 2019; 19:4588. [PMID: 31652552] [PMCID: PMC6832389] [DOI: 10.3390/s19204588] [Received: 09/24/2019] [Revised: 10/17/2019] [Accepted: 10/18/2019]
Abstract
Automatic vehicle detection and counting are considered vital in improving traffic control and management. This work presents an effective algorithm for vehicle detection and counting in complex traffic scenes by combining both convolutional neural network (CNN) and optical flow feature tracking-based methods. In this algorithm, the detection and tracking procedures are linked together to obtain robust feature points that are updated regularly every fixed number of frames. The proposed algorithm detects moving vehicles based on a background subtraction method using a CNN. Then, the vehicle's robust features are refined and clustered by motion feature point analysis using a combined KLT tracker and K-means clustering technique. Finally, an efficient strategy is presented that uses the detected and tracked point information to associate each vehicle with its corresponding trajectory and count it correctly. The proposed method is evaluated on videos representing challenging environments, and the experimental results showed an average detection and counting precision of 96.3% and 96.8%, respectively, which outperforms other existing approaches.
|
25
|
Development, validation and comparison of four methods for quantifying endogenous 25OH-D3 in human plasma. Biomed Chromatogr 2019; 33:e4691. [PMID: 31452227] [DOI: 10.1002/bmc.4691] [Received: 04/08/2019] [Revised: 08/20/2019] [Accepted: 08/21/2019]
Abstract
To meet the increasing clinical needs for 25-hydroxyvitamin D3 (25OH-D3) detection, the development of an efficient and accurate high-performance liquid chromatography-mass spectrometry (HPLC-MS) method for plasma 25OH-D3 quantitation is important. Since 25OH-D3 is an endogenous compound, the lack of a plasma blank increases the difficulty of quantifying it accurately. Selecting a method suitable for clinical monitoring from among the various methods for endogenous compound quantification is therefore necessary. Methyl tert-butyl ether was chosen as the extraction solvent in a liquid-liquid extraction protocol. Four approaches were designed to address the lack of a plasma blank: water as a blank matrix, 5% human serum albumin in water as a blank matrix, a surrogate analyte, and background subtraction. Four liquid chromatography-tandem mass spectrometry methods were fully validated to characterize their respective advantages and limitations, given the lack of regulatory guidance for endogenous compound validation. All four methods met the acceptance criteria and could be used to monitor clinical samples. Overall, 30 human plasma samples were quantified in parallel using the four methods. The difference between any two methods was <12.6% and the total relative standard deviation was <5.2%. Background subtraction and 5% human serum albumin in water as a blank matrix may be the better choices considering data quality, matrix similarity, cost and practicality.
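The background-subtraction approach for an endogenous analyte can be sketched numerically: the response of the unspiked matrix is subtracted from each spiked calibrator before fitting the standard curve. All concentrations and responses below are illustrative, not data from the study:

```python
# Sketch of background-subtraction calibration for an endogenous analyte.
import numpy as np

spiked_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # ng/mL added
measured = np.array([15.0, 25.1, 34.8, 55.2, 95.3])     # instrument response
baseline = measured[0]                                   # endogenous background
corrected = measured - baseline

# Linear fit through the background-corrected responses.
slope, intercept = np.polyfit(spiked_conc, corrected, 1)

def quantify(response):
    """Added-equivalent concentration from a background-corrected response."""
    return (response - baseline - intercept) / slope

print(round(slope, 2))           # ~1.0 response unit per ng/mL
print(round(quantify(45.0), 1))  # sample at response 45 -> ~30 ng/mL added
```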
|
26
|
Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems. Sensors 2019; 19:3217. [PMID: 31336590] [PMCID: PMC6679522] [DOI: 10.3390/s19143217] [Received: 06/18/2019] [Revised: 07/18/2019] [Accepted: 07/19/2019]
Abstract
Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to classify target objects and background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for the real-time processing of the proposed algorithm. The proposed moving object detector was designed using hardware description language (HDL) and its real-time performance was evaluated using an FPGA based test system. Experimental results demonstrate that our design achieves better detection performance than existing MOD systems. The proposed moving object detector was implemented with 13.2K logic slices, 104 DSP48s, and 163 BRAM and can support real-time processing of 30 fps at an operating frequency of 200 MHz.
|
27
|
Robust Shelf Monitoring Using Supervised Learning for Improving On-Shelf Availability in Retail Stores. Sensors 2019; 19:2722. [PMID: 31213015] [PMCID: PMC6631981] [DOI: 10.3390/s19122722] [Received: 04/28/2019] [Revised: 06/10/2019] [Accepted: 06/14/2019]
Abstract
This paper proposes a method to robustly monitor shelves in retail stores using supervised learning for improving on-shelf availability. To ensure high on-shelf availability, which is a key factor for improving profits in retail stores, we focus on understanding changes in products regarding increases/decreases in product amounts on the shelves. Our method first detects changed regions of products in an image by using background subtraction followed by moving object removal. It then classifies the detected change regions into several classes representing the actual changes on the shelves, such as “product taken (decrease)” and “product replenished/returned (increase)”, by supervised learning using convolutional neural networks. It finally updates the shelf condition representing the presence/absence of products using classification results and computes the product amount visible in the image as on-shelf availability using the updated shelf condition. Three experiments were conducted using two videos captured from a surveillance camera on the ceiling in a real store. Results of the first and second experiments show the effectiveness of the product change classification in our method. Results of the third experiment show that our method achieves a success rate of 89.6% for on-shelf availability when an error margin is within one product. With high accuracy, store clerks can maintain high on-shelf availability, enabling retail stores to increase profits.
|
28
|
Efficient Recognition of Informative Measurement in the RF-Based Device-Free Localization. Sensors (Basel) 2019; 19:1219. [PMID: 30857378] [PMCID: PMC6427128] [DOI: 10.3390/s19051219] [Received: 12/29/2018] [Revised: 03/01/2019] [Accepted: 03/06/2019]
Abstract
Device-Free Localization (DFL) based on the Radio Frequency (RF) is an emerging wireless sensing technology to perceive the position information of the target. To realize the real-time DFL with lower power, Back-projection Radio Tomographic Imaging (BRTI) has been used as a lightweight method to achieve the goal. However, the multipath noise in the RF sensing network may interfere with the measurement and the BRTI reconstruction performance. To resist the multipath interference in the observed data, it is necessary to recognize the informative RF link measurements that are truly affected by the target appearance. However, the existing methods based on the RF link state analysis are limited by the complex distribution of the RF link state and the high time complexity. In this paper, to enhance the performance of RF link state analysis, the RF link state analysis is transformed into a decomposition problem of the RF link state matrix, and an efficient RF link recognition method based on the low-rank and sparse decomposition is proposed to sense the spatiotemporal variation of the RF link state and accurately figure out the target-affected RF links. From the experimental results, the RF links recognized by the proposed method effectively reflect the target-induced RSS measurement variation with less time. Besides, the proposed method by recognizing the informative measurement is helpful to improve the accuracy of BRTI and enhance the efficiency in actual DFL applications.
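The low-rank-plus-sparse idea can be sketched with a crude alternating scheme (rank truncation by SVD plus soft-thresholding of the residual), standing in for the paper's actual decomposition method. The link-state matrix, rank, and threshold are illustrative assumptions:

```python
# Hedged sketch: separate an RF link-state matrix into a low-rank
# background component L plus a sparse, target-induced component S.
import numpy as np

def lowrank_sparse(M, rank=1, lam=0.5, iters=30):
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank step: best rank-r approximation of M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: soft-threshold the residual.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Rank-1 "quiet" RSS matrix (links x time) plus one target-affected entry.
base = np.outer(np.ones(6), np.linspace(10, 11, 8))
M = base.copy()
M[2, 4] += 5.0  # target appearance perturbs link 2 at time 4
L, S = lowrank_sparse(M)
affected = np.argwhere(np.abs(S) > 1.0)
print(affected)  # the (link, time) index of the informative measurement
```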
|
29
|
Extended Codebook with Multispectral Sequences for Background Subtraction. Sensors 2019; 19:703. [PMID: 30744074] [PMCID: PMC6387333] [DOI: 10.3390/s19030703] [Received: 01/06/2019] [Revised: 02/02/2019] [Accepted: 02/04/2019]
Abstract
The Codebook model is one of the popular real-time models for background subtraction. In this paper, we first extend it from traditional Red-Green-Blue (RGB) color model to multispectral sequences. A self-adaptive mechanism is then designed based on the statistical information extracted from the data themselves, with which the performance has been improved, in addition to saving time and effort to search for the appropriate parameters. Furthermore, the Spectral Information Divergence is introduced to evaluate the spectral distance between the current and reference vectors, together with the Brightness and Spectral Distortion. Experiments on five multispectral sequences with different challenges have shown that the multispectral self-adaptive Codebook model is more capable of detecting moving objects than the corresponding RGB sequences. The proposed research framework opens a door for future works for applying multispectral sequences in moving object detection.
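The core Codebook matching logic, stripped down to a single grayscale pixel with brightness bounds only (no color or spectral distortion terms, so far simpler than the model above), can be sketched as:

```python
# Minimal single-pixel sketch of the Codebook idea: each pixel keeps a
# set of codewords learned from training frames; a new value that
# matches no codeword is foreground.
class PixelCodebook:
    def __init__(self, eps=10.0):
        self.eps = eps
        self.codewords = []  # list of [low, high] intensity bounds

    def train(self, value):
        for cw in self.codewords:
            if cw[0] - self.eps <= value <= cw[1] + self.eps:
                cw[0] = min(cw[0], value)  # widen the matching codeword
                cw[1] = max(cw[1], value)
                return
        self.codewords.append([value, value])  # no match: new codeword

    def is_foreground(self, value):
        return not any(cw[0] - self.eps <= value <= cw[1] + self.eps
                       for cw in self.codewords)

cb = PixelCodebook()
for v in [100, 104, 98, 150, 152]:  # e.g., a flickering background pixel
    cb.train(v)
print(cb.is_foreground(101), cb.is_foreground(200))  # False True
```

The multispectral extension would replace the scalar intensity with a vector and add the spectral-distance tests described above.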
|
30
|
Animal Scanner: Software for classifying humans, animals, and empty frames in camera trap images. Ecol Evol 2019; 9:1578-1589. [PMID: 30847057] [PMCID: PMC6392355] [DOI: 10.1002/ece3.4747] [Received: 11/21/2017] [Revised: 10/19/2018] [Accepted: 10/24/2018]
Abstract
Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple photographs for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of photographs per study. The task of converting photographs to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We developed computer vision algorithms to detect and classify moving objects to aid the first step of camera trap image filtering: separating the animal detections from the empty frames and pictures of humans. Our new work couples foreground object segmentation through background subtraction with deep learning classification to provide a fast and accurate scheme for human-animal detection. We provide these programs as both a Matlab GUI and a command prompt tool developed in C++. The software reads folders of camera trap images and outputs images annotated with bounding boxes around moving objects and a text file summary of results. This software maintains high accuracy while reducing the execution time by a factor of 14. It takes about 6 seconds to process a sequence of ten frames (on a 2.6 GHz CPU computer). For cameras with excessive empty frames due to camera malfunction or blowing vegetation, it automatically removes 54% of the false-trigger sequences without influencing the human/animal sequences. We achieve 99.58% on image-level empty versus object classification of the Serengeti dataset. We offer the first computer vision tool for processing camera trap images, providing substantial time savings for processing large image datasets and thus improving our ability to monitor wildlife across large scales with camera traps.
|
31
|
Detection and Tracking of Moving Targets for Thermal Infrared Video Sequences. Sensors 2018; 18:3944. [PMID: 30441869] [PMCID: PMC6263761] [DOI: 10.3390/s18113944] [Received: 09/26/2018] [Revised: 11/04/2018] [Accepted: 11/12/2018]
Abstract
The joint detection and tracking of multiple targets from raw thermal infrared (TIR) image observations plays a significant role in the video surveillance field, and it has broad application prospects and practical value. In this paper, a novel multiple-target track-before-detect (TBD) method, based on background subtraction within the framework of labeled random finite sets (RFS), is presented. First, a background subtraction method based on a random selection strategy is exploited to obtain the foreground probability map from a TIR sequence. Second, in the foreground probability map, the probability of each pixel belonging to a target is calculated by a non-overlapping multi-target likelihood. Finally, a δ-generalized labeled multi-Bernoulli (δ-GLMB) filter is employed to produce the multi-target states along with their labels. Unlike other RFS-based filters, the proposed approach describes the target state by a pixel set instead of a single point. To meet the requirements of practical application, some extra procedures, including pixel sampling and update, target merging and splitting, and new birth target initialization, are incorporated into the algorithm. The experimental results show that the proposed method performs better in multi-target detection than six compared methods. The method is also effective for the continuous tracking of multiple targets.
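The sample-based background subtraction with a random selection strategy that the method builds on can be sketched for a single pixel in a ViBe-like form; the sample count, matching radius, and subsampling factor below are illustrative:

```python
# Sketch of a sample-based background model with random update, for a
# single pixel: the pixel is background if it lies close to at least
# `min_matches` stored samples; background values randomly refresh a
# stored sample (time-subsampled update).
import numpy as np

rng = np.random.default_rng(4)

class SamplePixelModel:
    def __init__(self, init_value, n_samples=20, radius=15, min_matches=2):
        self.samples = np.full(n_samples, float(init_value))
        self.radius = radius
        self.min_matches = min_matches

    def classify_and_update(self, value):
        matches = np.sum(np.abs(self.samples - value) < self.radius)
        is_background = matches >= self.min_matches
        if is_background and rng.random() < 1 / 16:  # random subsampled update
            self.samples[rng.integers(len(self.samples))] = value
        return not is_background  # True -> foreground

model = SamplePixelModel(init_value=100)
print(model.classify_and_update(103))  # near samples -> background (False)
print(model.classify_and_update(180))  # far from samples -> foreground (True)
```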
|
32
|
A Practical Multi-Sensor Cooling Demand Estimation Approach Based on Visual, Indoor and Outdoor Information Sensing. Sensors 2018; 18:3591. [PMID: 30360459] [PMCID: PMC6263512] [DOI: 10.3390/s18113591] [Received: 08/28/2018] [Revised: 10/05/2018] [Accepted: 10/05/2018]
Abstract
The operating efficiency of the heating, ventilation and air conditioning (HVAC) system is critical for building energy performance. Demand-based control is an efficient HVAC operating strategy, which can provide an appropriate level of HVAC services based on the recognition of actual cooling "demand." The cooling demand primarily relies on the accurate detection of occupancy. Current research on demand-based HVAC control tends to detect the occupant count using cameras or other sensors, which often imposes high computation and costs, limiting real-life applications. Instead of detecting the occupant count, this paper proposes to detect the occupancy density. The occupancy density (estimated by image foreground moving pixels), together with indoor and outdoor information (acquired from existing sensors), is used as input to an artificial neural network model for cooling demand estimation. Experiments were conducted in a university design studio. Results show that, by adding the occupancy density, the cooling demand estimation error is greatly reduced, by 67.4%, and the R value is improved from 0.75 to 0.96. The proposed approach is also low-cost, computationally efficient, privacy-friendly, and easy to implement. It shows good application potential and can be readily incorporated into existing building management systems to improve energy efficiency.
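The occupancy-density feature (the fraction of foreground moving pixels) can be sketched with simple frame differencing as a stand-in for the actual foreground extraction; the threshold and frame sizes are illustrative:

```python
# Sketch of the occupancy-density feature: fraction of pixels flagged
# as moving between consecutive frames.
import numpy as np

def occupancy_density(frame, previous, thresh=25):
    # astype(int) avoids uint8 wraparound in the subtraction
    moving = np.abs(frame.astype(int) - previous.astype(int)) > thresh
    return moving.mean()   # fraction of pixels that changed

prev = np.full((60, 80), 120, dtype=np.uint8)
curr = prev.copy()
curr[10:30, 20:40] = 200   # an occupant-sized moving region

density = occupancy_density(curr, prev)
print(round(density, 3))   # 400 moving pixels out of 4800 total
```

This scalar would then be fed, alongside indoor/outdoor sensor readings, into the neural network regressor described above.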
|
33
|
Infrared Thermography Approach for Effective Shielding Area of Field Smoke Based on Background Subtraction and Transmittance Interpolation. Sensors (Basel) 2018; 18:1450. [PMID: 29734796] [PMCID: PMC5982648] [DOI: 10.3390/s18051450] [Received: 04/09/2018] [Revised: 05/03/2018] [Accepted: 05/04/2018]
Abstract
Effective shielding area is a crucial indicator for the evaluation of infrared smoke-obscuring effectiveness on the battlefield. The conventional methods for assessing the shielding area of a smoke screen are time-consuming and labor intensive, in addition to lacking precision. Therefore, an efficient and convincing technique for testing the effective shielding area of the smoke screen has great potential benefits for smoke screen applications in field trials. In this study, a thermal infrared sensor with a mid-wavelength infrared (MWIR) range of 3 to 5 μm was first used to capture the target scene images, through clear air as well as obscuring smoke, at regular intervals. Background subtraction as used in motion detection was then applied to obtain the contour of the smoke cloud at each frame. The smoke transmittance at each pixel within the smoke contour was interpolated based on the data collected from the image. Finally, the smoke effective shielding area was calculated based on the accumulation of the effectively shielding pixel points. One advantage of this approach is that it utilizes only one thermal infrared sensor without any additional equipment in the field trial, which significantly contributes to its efficiency and convenience. Experiments have been carried out to demonstrate that this approach can determine the effective shielding area of field infrared smoke both practically and efficiently.
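The transmittance-interpolation and area-accumulation steps can be sketched as follows, using nearest-sample interpolation in place of the paper's interpolation scheme; the sample values, threshold, and per-pixel area are illustrative assumptions:

```python
# Sketch: interpolate sparse transmittance measurements over the smoke
# region, then accumulate pixels below the effectiveness threshold.
import numpy as np

# Hypothetical sparse transmittance samples (row, col, transmittance).
samples = np.array([[2, 2, 0.1], [2, 7, 0.2], [7, 2, 0.8], [7, 7, 0.9]])

h, w = 10, 10
rows, cols = np.mgrid[0:h, 0:w]
# Nearest-sample interpolation over the (here, full-frame) smoke region.
d2 = (rows[..., None] - samples[:, 0])**2 + (cols[..., None] - samples[:, 1])**2
transmittance = samples[np.argmin(d2, axis=-1), 2]

THRESHOLD = 0.5            # transmittance below this = effectively shielded
PIXEL_AREA_M2 = 0.04       # hypothetical ground area per pixel
shielded_pixels = np.sum(transmittance < THRESHOLD)
print(shielded_pixels * PIXEL_AREA_M2)  # effective shielding area in m^2
```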
|
34
|
Reconstruction-Based Change Detection with Image Completion for a Free-Moving Camera. Sensors 2018; 18:1232. [PMID: 29673193] [PMCID: PMC5948507] [DOI: 10.3390/s18041232] [Received: 03/19/2018] [Revised: 04/12/2018] [Accepted: 04/13/2018]
Abstract
Reconstruction-based change detection methods are robust to camera motion. These methods learn to reconstruct input images based on background images. Foreground regions are detected based on the magnitude of the difference between an input image and its reconstruction. For learning, only background images are used; therefore, foreground regions produce larger differences than background regions. Traditional reconstruction-based methods have two problems. One is over-reconstruction of foreground regions. The other is that the change detection decision depends only on the magnitudes of the differences, and it is difficult to distinguish these magnitudes in foreground regions when the foreground regions are completely reconstructed in patch images. We propose the framework of a reconstruction-based change detection method for a free-moving camera using patch images. To avoid over-reconstruction of foreground regions, our method reconstructs a masked central region in a patch image from the region surrounding it. Differences in foreground regions are enhanced because foreground regions in patch images are removed by the masking procedure. Change detection is learned automatically from a patch image and a reconstructed image. The decision procedure directly uses patch images rather than the differences between them. Our method achieves better accuracy compared to traditional reconstruction-based methods that do not mask patch images.
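The masking idea can be illustrated on a toy patch: the central region is reconstructed only from the surrounding ring (here by predicting the ring's mean, a trivial stand-in for the learned reconstruction), so a foreground object confined to the centre cannot help reconstruct itself and yields a large residual:

```python
# Toy sketch of masked-centre reconstruction for change detection.
import numpy as np

def reconstruct_center(patch, border=2):
    ring = patch.astype(float).copy()
    ring[border:-border, border:-border] = np.nan      # mask the centre
    prediction = np.nanmean(ring)                      # stand-in "decoder"
    center = patch[border:-border, border:-border]
    return np.abs(center - prediction).mean()          # reconstruction residual

background_patch = np.full((8, 8), 50.0)
foreground_patch = background_patch.copy()
foreground_patch[2:6, 2:6] = 200.0   # object occupies the masked centre

print(reconstruct_center(background_patch))  # 0.0   (well reconstructed)
print(reconstruct_center(foreground_patch))  # 150.0 (change detected)
```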
|
35
|
Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors. Sensors 2017; 17:1945. [PMID: 28837112] [PMCID: PMC5621003] [DOI: 10.3390/s17091945] [Received: 07/01/2017] [Revised: 08/07/2017] [Accepted: 08/17/2017]
Abstract
Background subtraction (BS) is one of the most common tasks in video analysis and tracking systems: it separates the foreground (moving objects) from video sequences captured by static imaging sensors. Background subtraction in remote-scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our medium-wave infrared (MWIR) sensor. Each video sequence in the dataset is annotated with the specific BS challenges it presents, and pixel-wise foreground (FG) ground truth is provided for every frame. A series of experiments was conducted to evaluate BS algorithms on this dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared, and appropriate evaluation metrics were employed to assess how well each algorithm handles the different BS challenges represented in the dataset. The results and conclusions provide useful references for developing new BS algorithms for remote-scene IR video; some are not limited to remote scenes or IR sequences but apply to background subtraction in general. The Remote Scene IR Dataset and the foreground masks produced by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR.
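Pixel-wise ground truth makes the standard mask-comparison metrics straightforward to compute. A minimal sketch (binary masks as nested lists; the metric names are the conventional ones, not taken from the paper):

```python
def confusion(gt, pred):
    """Count TP/FP/FN/TN between a ground-truth and a predicted FG mask."""
    tp = fp = fn = tn = 0
    for gt_row, pred_row in zip(gt, pred):
        for g, p in zip(gt_row, pred_row):
            if g and p:
                tp += 1
            elif p:
                fp += 1
            elif g:
                fn += 1
            else:
                tn += 1
    return tp, fp, fn, tn

def f_measure(gt, pred):
    """Harmonic mean of precision and recall over foreground pixels."""
    tp, fp, fn, _ = confusion(gt, pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gt_mask = [[1, 1], [0, 0]]
pred_mask = [[1, 0], [1, 0]]   # one hit, one miss, one false alarm
```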
Collapse
|
36
|
Background Subtraction of Raman Spectra Based on Iterative Polynomial Smoothing. APPLIED SPECTROSCOPY 2017; 71:1169-1179. [PMID: 27694430 DOI: 10.1177/0003702816670915] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
In this paper, a novel background subtraction algorithm is presented that automatically recovers the Raman signal. The algorithm is based on an iterative polynomial smoothing method that greatly reduces the need for experience and a priori knowledge. First, a polynomial filter is applied to smooth the input spectrum (at the first iteration, the input is the original spectrum). The filter's output curve divides the original spectrum into two parts, top and bottom. Second, a proportion is calculated between the lowest point of the signal in the bottom part and the highest point of the signal in the top part; this proportion is the key index that decides whether another iteration is needed. If so, the pointwise minimum of the output curve and the original spectrum forms a new curve, which is fed back into the same filter, and the process repeats until no further iteration is required, finally yielding the background of the original spectrum. Simulation experiments show that the iterative polynomial smoothing algorithm achieves good accuracy of recovery at low processing cost, and that it adapts to different background types over a large signal-to-noise-ratio range. Furthermore, real measured Raman spectra of organic mixtures and inorganic samples are used to demonstrate the application of the algorithm.
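The iteration described above (smooth, take the pointwise minimum with the spectrum, repeat) can be sketched as follows. Assumptions: a plain least-squares polynomial fit stands in for the polynomial filter, and the proportion-based stopping rule is replaced by a fixed iteration count:

```python
import math

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    n = deg + 1
    A = [[x ** j for j in range(n)] for x in xs]
    M = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(n)] for i in range(n)]
    b = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(M[i][j] * coeffs[j] for j in range(i + 1, n))) / M[i][i]
    return coeffs

def estimate_background(spectrum, deg=2, iterations=50):
    """Iteratively clip the spectrum to its polynomial fit so the
    curve slides under the Raman peaks and onto the background."""
    xs = list(range(len(spectrum)))
    cur = list(spectrum)
    for _ in range(iterations):
        c = polyfit(xs, cur, deg)
        fit = [sum(c[j] * x ** j for j in range(len(c))) for x in xs]
        cur = [min(s, f) for s, f in zip(cur, fit)]  # keep curve under the peaks
    return cur

# Synthetic spectrum: linear baseline plus one Gaussian Raman peak at x = 50.
spectrum = [5.0 + 0.1 * x + 50.0 * math.exp(-((x - 50) ** 2) / 20.0) for x in range(100)]
background = estimate_background(spectrum)
corrected = [s - b for s, b in zip(spectrum, background)]
```

The `min` step is what makes the fit converge onto the baseline rather than the peaks.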
Collapse
|
37
|
Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm. SENSORS 2017; 17:s17051177. [PMID: 28531134 PMCID: PMC5470922 DOI: 10.3390/s17051177] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/19/2017] [Revised: 05/12/2017] [Accepted: 05/18/2017] [Indexed: 11/16/2022]
Abstract
Depth-sensing technology has led to broad application of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, which solves many issues encountered in classical color-based segmentation. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation, built on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that eliminates ghosting and black shadows almost completely. Extensive experiments compare the proposed algorithm with conventional RGB-D (Red-Green-Blue and Depth) algorithms; the results suggest that our method extracts foregrounds more effectively and efficiently.
Collapse
|
38
|
DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field. SENSORS (BASEL, SWITZERLAND) 2016; 16:E1904. [PMID: 27845717 PMCID: PMC5134563 DOI: 10.3390/s16111904] [Citation(s) in RCA: 82] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2016] [Revised: 10/26/2016] [Accepted: 11/07/2016] [Indexed: 11/16/2022]
Abstract
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types; they have difficulty detecting distant and heavily occluded objects and are, by definition, incapable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are distinct in appearance from the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection that exploits the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast, state-of-the-art detector for obstacles that are distant, heavily occluded or unknown. DeepAnomaly is compared with state-of-the-art obstacle detectors, including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human-detection test case, we show that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN, while RCNN performs similarly at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (25 ms vs. 182 ms). Together with its high accuracy and low memory footprint, this makes it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
Collapse
|
39
|
Low-Rank Matrix Recovery Approach for Clutter Rejection in Real-Time IR-UWB Radar-Based Moving Target Detection. SENSORS 2016; 16:s16091409. [PMID: 27598159 PMCID: PMC5038687 DOI: 10.3390/s16091409] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Revised: 08/15/2016] [Accepted: 08/26/2016] [Indexed: 11/17/2022]
Abstract
Detecting a moving target with an IR-UWB radar requires separating the waves reflected by the static background from those reflected by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate background and foreground in UWB radar-based moving target detection. Robust PCA (RPCA) models are criticized for being batch-oriented, which makes them inconvenient in realistic environments where frames must be processed as they are recorded, in real time. In this paper, a novel method based on overlapping-window processing is proposed to cope with online operation. The method processes a small batch of frames that is continually updated, without changing its size, as new frames are captured. We show that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) successfully separates the two subspaces, which enhances the accuracy of target detection. The overlapping-window method converges to the same solution as its batch counterpart (i.e., processing all the data at once with RPCA), and both demonstrate the robustness and efficiency of RPCA over classic PCA and the commonly used exponential averaging method.
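The overlapping-window idea — a fixed-size frame buffer updated frame by frame — can be sketched as below. The separation itself is done in the paper with RPCA via IALM; here a per-sample temporal median stands in for it, which already suppresses a static clutter background:

```python
from collections import deque
from statistics import median

class OverlappingWindow:
    """Fixed-size sliding window of radar frames, updated online.
    (The paper separates background/foreground inside the window with
    RPCA/IALM; the temporal median below is a simple stand-in.)"""

    def __init__(self, size):
        self.frames = deque(maxlen=size)   # oldest frame drops automatically

    def push(self, frame):
        self.frames.append(frame)

    def foreground(self):
        """Latest frame minus the static (low-rank-like) background."""
        latest = self.frames[-1]
        background = [median(f[i] for f in self.frames) for i in range(len(latest))]
        return [x - b for x, b in zip(latest, background)]

win = OverlappingWindow(5)
for _ in range(4):
    win.push([5.0, 5.0, 5.0, 5.0])   # static clutter only
win.push([5.0, 5.0, 20.0, 5.0])      # a target appears at range bin 2
fg = win.foreground()
```

Each new `push` slides the window by one frame without changing its size, which is exactly the online-processing pattern the abstract describes.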
Collapse
|
40
|
Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health. Diagnostics (Basel) 2016; 6:E19. [PMID: 27196933 PMCID: PMC4931414 DOI: 10.3390/diagnostics6020019] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2016] [Revised: 04/19/2016] [Accepted: 05/06/2016] [Indexed: 12/20/2022] Open
Abstract
Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt to these settings. These challenges can be mitigated by taking advantage of affordable consumer-electronics components such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications, including detection of microbial toxins such as C. botulinum type A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivity of these devices limits their clinical utility. We have developed several approaches to improve their sensitivity, presented here for webcam-based fluorescence detectors: (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help overcome some of the limitations of other low-cost optical detection technologies, such as CCD- or phone-based detectors (high noise levels, low sensitivity), and support their use for low-cost medical diagnostics in resource-poor settings.
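Of the three approaches, image stacking is the easiest to illustrate: averaging N frames of the same static scene reduces uncorrelated sensor noise by roughly √N. A sketch with simulated Gaussian noise (the numbers are illustrative, not from the paper):

```python
import math
import random

def stack(frames):
    """Pixel-wise average of N frames of the same (static) scene."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)
true_signal = [100.0] * 64          # a flat, dim fluorescence field

def noisy_frame(sigma=10.0):
    """One webcam exposure with additive Gaussian sensor noise."""
    return [s + random.gauss(0.0, sigma) for s in true_signal]

def rms_error(img):
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(img, true_signal)) / len(img))

single = noisy_frame()
stacked = stack([noisy_frame() for _ in range(100)])
# Stacking 100 frames should cut the residual noise roughly 10-fold.
```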
Collapse
|
41
|
Background Subtraction Based on Three-Dimensional Discrete Wavelet Transform. SENSORS 2016; 16:456. [PMID: 27043570 PMCID: PMC4850970 DOI: 10.3390/s16040456] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/08/2016] [Revised: 03/13/2016] [Accepted: 03/23/2016] [Indexed: 12/04/2022]
Abstract
Background subtraction without a separate training phase has become a critical task, because a sufficiently long and clean training sequence is usually unavailable and immediate detection results are expected from the first frame of a video. We propose a background subtraction method, requiring no training phase, based on the three-dimensional (3D) discrete wavelet transform (DWT). Static backgrounds with few variations along the time axis are characterized by temporal consistency of intensity in the 3D space-time domain and hence correspond to low-frequency components in the 3D frequency domain. Motivated by this, we eliminate the low-frequency components that correspond to static backgrounds using the 3D DWT in order to extract moving objects. Owing to the multiscale analysis property of the 3D DWT, eliminating low-frequency components in its sub-bands is equivalent to applying a pyramidal 3D filter, which helps our method preserve the inner parts of detected objects and reduce ringing around object boundaries. Moreover, we use wavelet shrinkage to remove disturbances of the intensity temporal consistency and introduce an adaptive threshold based on the entropy of the histogram to obtain optimal detection results. Experimental results show that our method works effectively in situations lacking training opportunities and outperforms several popular techniques.
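The core observation — static background is low-frequency along the time axis and vanishes in the detail sub-bands — can be shown with a single-level Haar transform in time (the paper uses a full multiscale 3D DWT plus wavelet shrinkage and an entropy-based threshold, omitted in this sketch):

```python
import math

def temporal_haar_detail(frames):
    """Single-level Haar detail coefficients along the time axis:
    (f2 - f1)/sqrt(2) per frame pair. Static pixels cancel to zero;
    moving objects survive as large coefficients."""
    return [[(y - x) / math.sqrt(2) for x, y in zip(a, b)]
            for a, b in zip(frames[0::2], frames[1::2])]

frames = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 90, 10],   # an object passes over pixel 2
    [10, 10, 10, 10],
]
details = temporal_haar_detail(frames)
```

Thresholding the magnitude of `details` then yields the moving-object mask without any background training sequence.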
Collapse
|
42
|
Abstract
Reverse-phase protein arrays (RPPAs) are widely used in biological and biomedical studies. One of the most popular analytic methods for RPPA data is the SuperCurve method, which requires estimation of the background fluorescence level; this estimation is usually inaccurate and suffers from sample bias and spatial bias. Here, we propose a taking-the-difference method to overcome this problem. Briefly, for each pair of consecutive RPPA cycles, we subtract the later cycle from the earlier cycle, transforming the m cycles of data into m-1 cycles. This removes most of the background fluorescence noise. We then fit the m-1 cycles of data to a new model derived from the SuperCurve model. To evaluate the proposed method, we compare the accuracy and precision of our modified model and the original SuperCurve model on both real and simulated datasets; in both cases, the modified model shows improved results. The modified SuperCurve method is easy to perform, and we recommend applying the taking-the-difference idea to all current methods of RPPA data analysis.
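The taking-the-difference step is simple to sketch: any additive background common to consecutive cycles cancels (toy numbers; the subsequent refit with the modified SuperCurve model is not shown):

```python
def take_differences(cycles):
    """m cycles of spot intensities -> m-1 cycles of consecutive
    differences (earlier minus later, as in the abstract)."""
    return [[e - l for e, l in zip(earlier, later)]
            for earlier, later in zip(cycles, cycles[1:])]

background = [7.0, 3.0]                          # unknown per-spot background
signal = [[1.0, 2.0], [2.0, 4.0], [4.0, 8.0]]    # 3 cycles, 2 spots
observed = [[s + b for s, b in zip(cycle, background)] for cycle in signal]
```

Differencing the observed data gives exactly the same result as differencing the background-free signal, which is why the background estimate is no longer needed.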
Collapse
|
43
|
Raster-scanning serial protein crystallography using micro- and nano-focused synchrotron beams. ACTA CRYSTALLOGRAPHICA. SECTION D, BIOLOGICAL CRYSTALLOGRAPHY 2015; 71:1184-96. [PMID: 25945583 PMCID: PMC4427202 DOI: 10.1107/s1399004715004514] [Citation(s) in RCA: 102] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Accepted: 03/04/2015] [Indexed: 01/30/2023]
Abstract
High-resolution structural information was obtained from lysozyme microcrystals (20 µm in the largest dimension) using raster-scanning serial protein crystallography on micro- and nano-focused beamlines at the ESRF. Data were collected at room temperature (RT) from crystals sandwiched between two silicon nitride wafers, which prevented their drying while limiting background scattering and sample consumption. To identify crystal hits, new multi-processing, GUI-driven, Python-based pre-analysis software named NanoPeakCell was developed, capable of reading a variety of crystallographic image formats. Further data processing was carried out with CrystFEL, and the resulting structures were refined to 1.7 Å resolution. The data demonstrate the feasibility of RT raster-scanning serial micro- and nano-protein crystallography at synchrotrons and validate it as an alternative approach for collecting high-resolution structural data from micrometre-sized crystals. Advantages of the proposed approach are its thriftiness, its handling-free nature, the reduced amount of sample required, the adjustable hit rate, the high indexing rate and the minimal background scattering.
Collapse
|
44
|
A new filtering technique for removing anti-Stokes emission background in gated CW-STED microscopy. JOURNAL OF BIOPHOTONICS 2014; 7:376-80. [PMID: 24639427 DOI: 10.1002/jbio.201300208] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2013] [Revised: 02/24/2014] [Accepted: 02/24/2014] [Indexed: 05/26/2023]
Abstract
Stimulated emission depletion (STED) microscopy is a prominent super-resolution optical microscopy approach that allows cellular imaging with, in principle, unlimited spatial resolution. The introduction of time-gated detection in STED microscopy significantly reduces the (instantaneous) intensity required to obtain sub-diffraction spatial resolution. When time-gating is combined with a STED beam operating in continuous wave (CW), a cheap implementation with low labour demands is obtained, the so-called gated CW-STED microscope. However, time-gating also reduces the fluorescence signal which forms the image, so background sources such as fluorescence excited by the STED laser (anti-Stokes fluorescence) can reduce the effective resolution of the system. We propose a straightforward method for subtracting the anti-Stokes background. The method hinges on the uncorrelated nature of the anti-Stokes emission background with respect to the wanted fluorescence signal. The particular importance of the method for combining two-photon excitation with gated CW-STED microscopy is demonstrated.
Collapse
|
45
|
Uncertainties in forces extracted from non-contact atomic force microscopy measurements by fitting of long-range background forces. BEILSTEIN JOURNAL OF NANOTECHNOLOGY 2014; 5:386-93. [PMID: 24778964 PMCID: PMC3999863 DOI: 10.3762/bjnano.5.45] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2013] [Accepted: 02/03/2014] [Indexed: 05/27/2023]
Abstract
In principle, non-contact atomic force microscopy (NC-AFM) now readily allows the measurement of forces with sub-nanonewton precision on the atomic scale. In practice, however, extracting the desired 'short-range' force from the experimental observable (the frequency shift) is often far from trivial. In most cases there is a significant contribution to the total tip-sample force from non-site-specific van der Waals and electrostatic forces. The contribution of these forces must typically be removed before the results of the experiment can be interpreted, often by comparison with density functional theory calculations. In this paper we compare the 'on-minus-off' method for extracting site-specific forces with a commonly used extrapolation method that models the long-range forces with a simple power law. By examining the behaviour of the fitting method for two radically different interaction potentials, we show that the extrapolation method can introduce significant uncertainties into the final extracted forces.
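The extrapolation method being compared can be sketched as follows: fit |F| = C/zⁿ to the long-range tail (linear least squares in log-log space) and subtract the fitted tail everywhere. The synthetic attractive-force data and functional forms below are illustrative assumptions, not values from the paper:

```python
import math

def fit_power_law(zs, forces):
    """Fit |F| = C / z**n by linear least squares in log-log space."""
    xs = [math.log(z) for z in zs]
    ys = [math.log(abs(f)) for f in forces]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope   # C, n

def short_range_force(zs, total_forces, z_fit_from):
    """Fit the tail beyond z_fit_from, then subtract the extrapolated
    long-range force -C/z**n from the measured total force."""
    tail = [(z, f) for z, f in zip(zs, total_forces) if z >= z_fit_from]
    C, n = fit_power_law([z for z, _ in tail], [f for _, f in tail])
    return [f + C / z ** n for z, f in zip(zs, total_forces)]

zs = [1.0 + 0.5 * i for i in range(19)]   # tip-sample distances, 1 to 10 nm
def total_force(z):
    # long-range -2/z^2 plus a short-range term that decays quickly
    return -2.0 / z ** 2 - 3.0 * math.exp(-5.0 * (z - 1.0))
forces = [total_force(z) for z in zs]
residual = short_range_force(zs, forces, z_fit_from=4.0)
```

With clean synthetic data the tail fit is essentially exact; the paper's point is that for realistic potentials the fitted (C, n) — and hence the extracted short-range force — can carry significant uncertainty.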
Collapse
|
46
|
Background subtraction approach based on independent component analysis. SENSORS 2010; 10:6092-114. [PMID: 22219704 PMCID: PMC3247749 DOI: 10.3390/s100606092] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/26/2010] [Revised: 05/16/2010] [Accepted: 05/28/2010] [Indexed: 11/29/2022]
Abstract
In this work, a new approach to background subtraction based on independent component analysis is presented. The approach assumes that background and foreground information are mixed in a given sequence of images; the foreground and background components can then be identified if their probability density functions are separable in the mixed space. Component estimation consists of calculating an unmixing matrix, which is obtained with a fast ICA algorithm implemented as a Newton-Raphson maximization. The motion components are represented by the mid-significant eigenvalues of the unmixing matrix. Finally, the results show that the approach detects motion efficiently in both outdoor and indoor scenarios and is robust to changes in scene luminance conditions.
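A one-unit FastICA iteration of the kind named in the abstract can be sketched for two mixed signals: center the data, whiten it (closed-form 2×2 eigendecomposition), then run the tanh-based fixed point. The "background"/"foreground" sources below are synthetic stand-ins, not image data from the paper:

```python
import math
import random

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def fastica_one_unit(x1, x2, iters=200):
    """One-unit FastICA: center, whiten, then iterate the Newton-type
    fixed point w <- E[z*g(w.z)] - E[g'(w.z)]*w with g = tanh."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [v - m1 for v in x1]
    x2 = [v - m2 for v in x2]
    a = sum(v * v for v in x1) / n
    b = sum(u * v for u, v in zip(x1, x2)) / n
    c = sum(v * v for v in x2) / n
    # eigendecomposition of the 2x2 covariance [[a, b], [b, c]]
    half_tr = (a + c) / 2.0
    disc = math.sqrt(half_tr * half_tr - (a * c - b * b))
    l1, l2 = half_tr + disc, half_tr - disc
    e1, e2 = unit((b, l1 - a)), unit((b, l2 - a))
    z1 = [(e1[0] * u + e1[1] * v) / math.sqrt(l1) for u, v in zip(x1, x2)]
    z2 = [(e2[0] * u + e2[1] * v) / math.sqrt(l2) for u, v in zip(x1, x2)]
    w = unit((1.0, 0.7))
    for _ in range(iters):
        wz = [w[0] * u + w[1] * v for u, v in zip(z1, z2)]
        g = [math.tanh(t) for t in wz]
        gp = sum(1.0 - t * t for t in g) / n
        w = unit((sum(gi * u for gi, u in zip(g, z1)) / n - gp * w[0],
                  sum(gi * v for gi, v in zip(g, z2)) / n - gp * w[1]))
    return [w[0] * u + w[1] * v for u, v in zip(z1, z2)]

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((x - mv) ** 2 for x in v))
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

random.seed(1)
n = 2000
bg = [random.uniform(-1.0, 1.0) for _ in range(n)]         # "background" source
fg = [random.choice([0.0] * 9 + [5.0]) for _ in range(n)]  # sparse "motion" source
mixed1 = [0.8 * s + 0.3 * t for s, t in zip(bg, fg)]
mixed2 = [0.2 * s + 0.7 * t for s, t in zip(bg, fg)]
recovered = fastica_one_unit(mixed1, mixed2)
```

The recovered component matches one of the independent sources up to sign and scale, which is the separability the abstract relies on.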
Collapse
|
47
|
A multiscale region-based motion detection and background subtraction algorithm. SENSORS 2010; 10:1041-61. [PMID: 22205857 PMCID: PMC3244003 DOI: 10.3390/s100201041] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2009] [Revised: 01/15/2010] [Accepted: 01/22/2010] [Indexed: 11/29/2022]
Abstract
This paper presents a region-based method for background subtraction that relies on color histograms, texture information, and successive subdivision of candidate rectangular image regions to model the background and detect motion. Our algorithm combines this principle with Gaussian mixture background modeling to produce a new method that outperforms the classic Gaussian mixture background subtraction method. The method has the advantages of filtering out noise during image differencing and of providing a selectable level of detail for the contours of the moving shapes. The algorithm is tested on various video sequences and shown to outperform state-of-the-art background subtraction methods.
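The region-subdivision half of the method can be sketched as a recursive histogram comparison (grayscale images as nested lists; the Gaussian-mixture per-pixel stage the paper combines this with is omitted, and the 0.05 dissimilarity threshold is an illustrative assumption):

```python
def region_hist(img, x0, y0, w, h, bins=8, maxval=256):
    """Grayscale histogram of the rectangle (x0, y0, w, h)."""
    hist = [0] * bins
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            hist[img[y][x] * bins // maxval] += 1
    return hist

def dissimilar(h1, h2, threshold=0.05):
    """1 - histogram intersection, compared against a threshold."""
    overlap = sum(min(p, q) for p, q in zip(h1, h2)) / sum(h1)
    return (1.0 - overlap) > threshold

def detect(bg, cur, x0, y0, w, h, min_size, out):
    """Recursively split regions whose histograms differ from the
    background, collecting the smallest regions that still differ."""
    if not dissimilar(region_hist(bg, x0, y0, w, h), region_hist(cur, x0, y0, w, h)):
        return
    if w <= min_size or h <= min_size:
        out.append((x0, y0, w, h))
        return
    hw, hh = w // 2, h // 2
    for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
        detect(bg, cur, x0 + dx, y0 + dy, hw, hh, min_size, out)

background = [[50] * 8 for _ in range(8)]
current = [row[:] for row in background]
for y in (0, 1):
    for x in (0, 1):
        current[y][x] = 200      # a small moving object in the corner
found = []
detect(background, current, 0, 0, 8, 8, min_size=2, out=found)
```

Stopping the subdivision at `min_size` is what gives the selectable level of contour detail mentioned in the abstract.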
Collapse
|