1. Song F, Li P. YOLOv5-MS: Real-Time Multi-Surveillance Pedestrian Target Detection Model for Smart Cities. Biomimetics (Basel) 2023; 8:480. PMID: 37887611. PMCID: PMC10604626. DOI: 10.3390/biomimetics8060480.
Abstract
Intelligent video surveillance plays a pivotal role in the infrastructure of smart urban environments. The seamless integration of multi-angled cameras, functioning as perceptive sensors, significantly improves pedestrian detection and augments security measures in smart cities. Nevertheless, current pedestrian-focused target detection suffers from slow detection speeds and high costs. To address these challenges, we introduce YOLOv5-MS, a YOLOv5-based model for target detection. First, we optimize the multi-threaded acquisition of video streams within YOLOv5 to ensure image stability and real-time performance. Next, leveraging reparameterization, we replace the original backbone convolutions with RepVGG blocks and reduce the number of convolutional channels, streamlining the model and improving inference speed. Additionally, incorporating a bioinspired "squeeze-and-excitation" module into the convolutional neural network significantly improves detection accuracy by sharpening the focus on targets and diminishing the influence of irrelevant elements. Furthermore, integrating the K-means algorithm and bioinspired Retinex image augmentation during training effectively enhances the model's detection efficacy. Finally, loss computation adopts the Focal-EIOU approach. Empirical results on our internally developed smart city dataset show that YOLOv5-MS achieves a 96.5% mAP, a 2.0% improvement over YOLOv5s, while average inference speed increases by a notable 21.3%. These results substantiate the model's superiority and its capacity to perform pedestrian detection effectively across an intranet of over 50 video surveillance cameras, meeting our stringent requirements.
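The "squeeze-and-excitation" module referenced above follows the standard channel-attention pattern of Hu et al.; the sketch below is a minimal, generic SE block in PyTorch, not the authors' exact YOLOv5-MS wiring (the reduction ratio of 16 is a common default assumed here).

```python
# Minimal squeeze-and-excitation (SE) channel-attention sketch.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # "squeeze": global average over H x W
        self.fc = nn.Sequential(                 # "excitation": learn per-channel gates
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight channels; irrelevant ones shrink
```

Inserted after a backbone convolution, the learned per-channel weights let the network emphasize feature maps relevant to pedestrians and suppress background channels, which is the "target focusing" effect the abstract describes.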
Affiliation(s)
- Fangzheng Song, College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
- Peng Li, Beijing Institute of Technology, Beijing 100081, China
2. Feng J, Jiang H, Jin Y, Rong S, Wang S, Wang H, Wang L, Xu W, Sun B. A device-independent method for the colorimetric quantification on microfluidic sensors using a color adaptation algorithm. Mikrochim Acta 2023; 190:148. PMID: 36952027. DOI: 10.1007/s00604-023-05731-0.
Abstract
A general and adaptable method is proposed to reliably extract quantitative information from smartphone images of microfluidic sensors. By analyzing and processing the color information of selected standard substances, the influence of lighting conditions, device differences, and human factors can be significantly reduced. Machine learning and multivariate fitting methods proved effective for chroma correction; their key elements were the training sample size and the fitting form, respectively. A custom app was developed and validated using a high-sensitivity chromium ion quantification paper chip. The average chroma deviations under different conditions were reduced by more than 75% in RGB color space, and the concentration test error was reduced by more than half compared with the commonly used method. The proposed approach could be a beneficial supplement to existing and potential colorimetry-based detection methods.
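As a rough illustration of the multivariate-fitting route to chroma correction described above, the sketch below fits an affine map in RGB space from the measured colors of standard patches to their reference values. The paper's actual fitting form and trained models are not specified in the abstract, so this is an assumed minimal variant.

```python
# Affine least-squares color correction from standard-patch measurements.
import numpy as np

def fit_color_correction(measured: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Fit a 3x4 affine matrix M so that M @ [r, g, b, 1]^T approximates the reference RGB.

    measured, reference: (N, 3) arrays of RGB values for the N standard patches.
    """
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])  # append bias column
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)           # least-squares fit, (4, 3)
    return M.T                                                  # (3, 4)

def apply_correction(image: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply the fitted correction to a float (H, W, 3) RGB image with values in [0, 255]."""
    h, w, _ = image.shape
    X = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))])
    return np.clip(X @ M.T, 0, 255).reshape(h, w, 3)
```

Fitting against on-chip standards rather than absolute values is what makes the correction device-independent: each photo carries its own calibration targets.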
Affiliation(s)
- Junjie Feng, Huiyun Jiang, Yan Jin, Shiqiang Wang, Haozhi Wang, Lin Wang, Wei Xu, Bing Sun: SINOPEC Research Institute of Safety Engineering Co., Ltd., State Key Laboratory of Safety and Control for Chemicals, 339 Songling Road, Qingdao 266100, China
- Shenghui Rong: Ocean University of China, School of Electronic Engineering, 238 Songling Road, Qingdao 266100, China
3. Zhou H, Shu D, Wu C, Wang Q, Wang Q. Image Illumination Adaptive Correction Algorithm Based on a Combined Model of Bottom-Hat and Improved Gamma Transformation. Arabian Journal for Science and Engineering 2022. DOI: 10.1007/s13369-022-07368-2.
4. Khan R, Mehmood A, Zheng Z. Robust contrast enhancement method using a retinex model with adaptive brightness for detection applications. Optics Express 2022; 30:37736-37752. PMID: 36258356. DOI: 10.1364/oe.472557.
Abstract
Low-light image enhancement with adaptive brightness, color, and contrast preservation in degraded visual conditions (e.g., extremely dark backgrounds, low light, backlight, mist) is proving more challenging for machine cognition applications than anticipated. A realistic image enhancement framework should preserve brightness and contrast in robust scenarios. Existing direct enhancement methods amplify objectionable structure and texture artifacts, whereas network-based enhancement approaches rely on paired or large-scale training datasets, raising fundamental concerns about their real-world applicability. This paper presents a new framework for getting deep into darkness under degraded visual conditions, following the fundamentals of Retinex-based image decomposition. We separate the reflection and illumination components and perform independent weighted enhancement operations on each component to preserve visual details with a balance of brightness and contrast. A comprehensive weighting strategy is proposed to constrain the image decomposition while suppressing the irregularities of high-frequency reflection and illumination to improve contrast. At the same time, we propose to guide the illumination component with a high-frequency component for structure and texture preservation in degraded visual conditions. Unlike existing approaches, the proposed method works regardless of the training data type (i.e., low light, normal light, or normal-and-low-light pairs). A deep-into-darkness network (D2D-Net) is proposed to maintain the visual balance of smoothness without compromising image quality. We conduct extensive experiments to demonstrate the superiority of the proposed enhancement and test the method on object detection tasks in extremely dark scenarios. Experimental results demonstrate that our method maintains the balance of visual smoothness, making it more viable for future interactive visual applications.
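The Retinex decomposition underlying this framework models an image as the pixelwise product of reflectance and illumination, I = R ⊙ L. The sketch below is a bare single-scale Retinex split with a Gaussian-smoothed illumination estimate, intended only to illustrate the decomposition; the paper's weighting strategy and D2D-Net go well beyond this.

```python
# Single-scale Retinex decomposition: log-reflectance via a smoothed illumination estimate.
import cv2
import numpy as np

def single_scale_retinex(image: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Split off a Gaussian-smoothed illumination estimate and return log-reflectance."""
    img = image.astype(np.float32) + 1.0                 # offset to avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)  # smooth estimate of L
    reflectance = np.log(img) - np.log(illumination)     # R = I / L in the log domain
    # rescale to [0, 1] purely for display
    return (reflectance - reflectance.min()) / (np.ptp(reflectance) + 1e-6)
```

Once the two components are separated like this, each can be reweighted independently, which is the handle the paper uses to balance brightness against contrast.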
5. Sun L, Tang C, Xu M, Lei Z. DIC measurement for large-scale structures based on adaptive warping image stitching. Applied Optics 2022; 61:G28-G37. PMID: 36255861. DOI: 10.1364/ao.455564.
Abstract
As a representative optical, non-interferometric measurement technique, digital image correlation (DIC) is a non-contact optical mechanics method that can measure full-field displacement and deformation. However, when the field to be measured is too large, existing DIC methods cannot measure the full-field strain accurately, which limits the application of DIC to large-scale, wide-field-of-view scenarios. To address this issue, a DIC measurement method for large-scale structures based on adaptive warping image stitching is proposed in this paper. First, multiple adjacent high-resolution images are collected at different locations on the large-scale structure. Second, the collected images are stitched with the adaptive warping image stitching algorithm to obtain a panoramic image. Finally, the DIC algorithm is applied to solve for the whole deformation field. In the experiments, we first verify the feasibility of the proposed method for image matching and fusion through a numerically simulated rigid-body translation experiment. The accuracy and robustness of the proposed method in practical applications are then verified by a rigid-body translation and a three-point bending experiment. The experimental results demonstrate that the adaptive warping image stitching algorithm significantly extends the measurement range of DIC.
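At its core, DIC tracks small image subsets from a reference image into a deformed image by maximizing a correlation score. The sketch below shows only that integer-pixel subset-matching step using OpenCV's normalized cross-correlation; production DIC solvers add subpixel interpolation and subset shape functions, and the paper's stitching stage is a separate preprocessing step.

```python
# Integer-pixel DIC subset matching via zero-normalized cross-correlation.
import cv2
import numpy as np

def match_subset(ref: np.ndarray, deformed: np.ndarray,
                 x: int, y: int, half: int = 15):
    """Track the (2*half+1)-pixel square subset centered at (x, y) in `ref`.

    ref, deformed: single-channel uint8 (or float32) images.
    Returns the integer-pixel displacement (dx, dy) of the subset.
    """
    subset = ref[y - half:y + half + 1, x - half:x + half + 1]
    scores = cv2.matchTemplate(deformed, subset, cv2.TM_CCOEFF_NORMED)  # ZNCC score map
    _, _, _, (bx, by) = cv2.minMaxLoc(scores)       # top-left corner of the best match
    return bx + half - x, by + half - y
```

Repeating this over a grid of subset centers yields the full-field displacement map from which strain is computed, which is why a seamless stitched panorama matters: stitching artifacts would corrupt the correlation scores.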
6.
Abstract
This paper describes an image enhancement method for reliable image feature matching. Image features such as SIFT and SURF have been widely used in various computer vision tasks, including image registration and object recognition. However, reliably extracting such features is difficult in poorly illuminated scenes. One promising approach is to apply an image enhancement method before feature extraction that preserves the original characteristics of the scene. We thus propose using the Multi-Scale Retinex algorithm, which is designed to emulate the human visual system and reveals more information in a poorly illuminated scene. We experimentally assessed various combinations of image enhancement (MSR, gamma correction, histogram equalization, and sharpening) and feature extraction methods (SIFT, SURF, ORB, AKAZE) on images of a wide variety of scenes, demonstrating that the combination of Multi-Scale Retinex and SIFT yields the best results in terms of the number of reliable feature matches.
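To make the MSR-then-SIFT pipeline concrete, here is a minimal OpenCV sketch; the sigma set (15, 80, 250) and the 0.75 ratio-test threshold are common defaults assumed for illustration, not values taken from the paper.

```python
# Multi-Scale Retinex enhancement followed by SIFT matching with a ratio test.
import cv2
import numpy as np

def msr(gray: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """Multi-Scale Retinex: average single-scale Retinex outputs over several sigmas."""
    img = gray.astype(np.float32) + 1.0
    out = sum(np.log(img) - np.log(cv2.GaussianBlur(img, (0, 0), s)) for s in sigmas)
    out /= len(sigmas)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_sift(img1: np.ndarray, img2: np.ndarray):
    """Run SIFT on MSR-enhanced grayscale images and keep ratio-test survivors."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(msr(img1), None)
    k2, d2 = sift.detectAndCompute(msr(img2), None)
    good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]
    return k1, k2, good
```

Because MSR lifts detail out of dark regions without destroying local gradient structure, SIFT finds more repeatable keypoints than it would on the raw low-light images.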