1
Peng Y, Ma C, Li M, Liu Y, Yu J, Pan L, Zhang Z. Intelligent devices for assessing essential tremor: a comprehensive review. J Neurol 2024; 271:4733-4750. PMID: 38816480. DOI: 10.1007/s00415-024-12354-9.
Abstract
Essential tremor (ET) is the most prevalent movement disorder, characterized by rhythmic and involuntary shaking of body parts. An accurate and comprehensive assessment of tremor severity is crucial for effectively diagnosing and managing ET. Traditional methods rely on clinical observation and rating scales, which may introduce subjective biases and hinder continuous evaluation of disease progression. Recent research has explored new approaches to quantifying ET. A promising method involves the use of intelligent devices to facilitate objective and quantitative measurements. These devices include inertial measurement units, electromyography, video equipment, and electronic handwriting boards, among others. Because they are portable and efficient, they enable real-time monitoring of human activity data. This capability allows for more extensive research in this field and supports the shift from in-lab/clinic to in-home monitoring of ET symptoms. Therefore, this review provides an in-depth analysis of the application, current development, potential characteristics, and roles of intelligent devices in assessing ET.
Affiliation(s)
- Yumeng Peng
- Center for Artificial Intelligence in Medicine, Medical Innovation Research Department, PLA General Hospital, Beijing, 100853, China
- Department of Neurology, 923th Hospital of the Joint Logistics Support Force of PLA, Nanning, 530021, China
- Chinese PLA Medical School, Beijing, 100853, China
- Chenbin Ma
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Mengwei Li
- Center for Artificial Intelligence in Medicine, Medical Innovation Research Department, PLA General Hospital, Beijing, 100853, China
- Chinese PLA Medical School, Beijing, 100853, China
- Yunmo Liu
- Chinese PLA Medical School, Beijing, 100853, China
- Jinze Yu
- School of Computer Science and Engineering, Beihang University, Beijing, 100191, China
- Longsheng Pan
- Department of Neurosurgery, First Medical Center, PLA General Hospital, Beijing, 100853, China
- Zhengbo Zhang
- Center for Artificial Intelligence in Medicine, Medical Innovation Research Department, PLA General Hospital, Beijing, 100853, China
2
Dalai R, Senapati KK, Dalai N. Modified U-Net based 3D reconstruction model to estimate volume from multi-view images of a solid object. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2023.2177583.
Affiliation(s)
- Radhamadhab Dalai
- Department of Computer Science & Engineering, BIT Mesra, Ranchi, India
- Nibedita Dalai
- Department of Civil Engineering, PMEC College, Berhampur, India
3
|
Sperling Y, Bartsch J, Gauchan S, Bergmann RB. Extending vision ray calibration by determination of focus distances. Optics Express 2022; 30:47801-47815. PMID: 36558699. DOI: 10.1364/oe.475420.
Abstract
The application of cameras as sensors in optical metrology techniques for three-dimensional topography measurement, such as fringe projection profilometry and deflectometry, presumes knowledge of the metric relationship between image space and object space. This relation is established by camera calibration, for which a variety of techniques are available. Vision ray calibration achieves highly precise camera calibration by employing a display as the calibration target, enabling the use of active patterns in the form of series of phase-shifted sinusoidal fringes. Besides the required spatial coding of the display surface, this procedure yields additional full-field contrast information. Exploiting the relation between full-field contrast and defocus, we present an extension of vision ray calibration that provides the focus distances of the calibrated camera as additional information. In our experiments we achieve a reproducibility of the focus distances on the order of millimetres. Using a modified-Laplacian-based focus determination method, we confirm our focus distance results to within a few millimetres.
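The modified-Laplacian focus measure referred to in the abstract is not spelled out there; a minimal sketch of a standard sum-of-modified-Laplacian measure (an assumption on the exact variant used, with `img` as a plain 2D list of grayscale values) might look like:

```python
def modified_laplacian(img):
    """Sum-of-modified-Laplacian focus measure.

    img: 2D list of grayscale intensities. A higher return value
    indicates sharper edges, i.e. better focus.
    """
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # absolute second differences in x and y, summed separately
            # (unlike the plain Laplacian, opposite-sign terms cannot cancel)
            ml = abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) \
               + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x])
            total += ml
    return total
```

Evaluated over a sweep of display distances, such a measure peaks near the focus distance, which is the kind of signal the contrast-based extension exploits.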
4
Yu J, Liu Y, Zhang Z, Gao F, Gao N, Meng Z, Jiang X. High-accuracy camera calibration method based on coded concentric ring center extraction. Optics Express 2022; 30:42454-42469. PMID: 36366699. DOI: 10.1364/oe.470990.
Abstract
In the field of three-dimensional (3D) metrology based on fringe projection profilometry (FPP), accurate camera calibration is an essential task and a primary requirement. To improve calibration accuracy, the calibration board or target needs to be manufactured with high accuracy, and the marker points in the calibration images need to be located with high accuracy. This paper presents an improved camera calibration method that simultaneously optimizes the camera parameters and the target geometry. Specifically, a set of regularly distributed target markers with a coded concentric-ring pattern is first displayed on a liquid crystal display (LCD) screen. Then, the sub-pixel edges of the radial straight lines of all coded bands are automatically located at several positions of the LCD screen. Finally, the sub-pixel edge point set is mapped into parameter space to form a set of lines, and the intersection of these lines is defined as the center pixel coordinates of each target point to complete the camera calibration. Simulation and experimental results verify that the proposed method is feasible and easy to operate, and that it essentially eliminates the perspective transformation error, improving the accuracy of both the camera parameters and the target geometry.
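The final step described above, defining the marker center as the intersection of a set of lines, reduces to a small least-squares problem; a sketch (assuming, hypothetically, that each line is supplied in the normalized form a·x + b·y = c) could be:

```python
def line_intersection_lsq(lines):
    """Least-squares intersection of lines given as (a, b, c)
    with a*x + b*y = c. Minimizes sum((a*x + b*y - c)^2) by
    solving the 2x2 normal equations directly."""
    saa = sum(a * a for a, b, c in lines)
    sab = sum(a * b for a, b, c in lines)
    sbb = sum(b * b for a, b, c in lines)
    sac = sum(a * c for a, b, c in lines)
    sbc = sum(b * c for a, b, c in lines)
    det = saa * sbb - sab * sab  # zero if all lines are parallel
    x = (sac * sbb - sab * sbc) / det
    y = (saa * sbc - sab * sac) / det
    return x, y
```

With noisy sub-pixel edge fits, the least-squares intersection averages out per-line localization error, which is what makes the radial-line construction robust against perspective distortion of individual ring edges.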
5
Bartsch J, Sperling Y, Bergmann RB. Efficient vision ray calibration of multi-camera systems. Optics Express 2021; 29:17125-17139. PMID: 34154262. DOI: 10.1364/oe.424337.
Abstract
Vision ray calibration provides imaging properties of cameras for application in optical metrology by identifying an independent vision ray for each sensor pixel. Due to this generic description of imaging properties, setups of multiple cameras can be treated as one imaging device. This enables holistic calibration of such setups with the same algorithm that is used for the calibration of a single camera. Obtaining reference points for the calculation of independent vision rays requires knowledge of the parameters of the calibration setup. This is achieved by numerical optimization, which incurs high computational effort due to the large amount of calibration data. Using the collinearity of the reference points corresponding to each sensor pixel as the measure of accuracy of the system parameters, we derived a cost function that does not require explicit calculation of vision rays. We analytically derived formulae for the gradient and Hessian matrix of this cost function to improve the computational efficiency of vision ray calibration. Fringe projection measurements using a holistically vision-ray-calibrated system of two cameras demonstrate the effectiveness of our approach. To the best of our knowledge, neither an explicit description of vision ray calibration calculations nor the application of vision ray calibration to holistic camera system calibration can be found in the literature.
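The collinearity measure at the heart of the cost function can be illustrated with a toy version: the sum of squared distances of one pixel's reference points to their best-fit 3D line (found here by power iteration on the covariance matrix; this is only a didactic stand-in and does not reproduce the paper's analytic gradient and Hessian):

```python
def collinearity_cost(points, iters=100):
    """Sum of squared distances of 3D points to their best-fit line.

    points: list of (x, y, z) tuples. Returns 0 for perfectly
    collinear points; grows as the points scatter off a line.
    """
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]   # centroid
    q = [[p[i] - c[i] for i in range(3)] for p in points]   # centered
    # 3x3 covariance (scatter) matrix
    cov = [[sum(v[i] * v[j] for v in q) for j in range(3)] for i in range(3)]
    d = [1.0, 1.0, 1.0]
    for _ in range(iters):  # power iteration -> principal direction
        d = [sum(cov[i][j] * d[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in d) ** 0.5
        d = [x / norm for x in d]
    total = sum(sum(x * x for x in v) for v in q)           # total scatter
    along = sum(sum(v[i] * d[i] for i in range(3)) ** 2 for v in q)
    return total - along   # scatter orthogonal to the best-fit line
```

Summing such residuals over all pixels gives a scalar that is small exactly when the setup parameters place each pixel's reference points on a straight line, without ever computing the rays explicitly.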
6
Semeniuta O. Subset-based stereo calibration method optimizing triangulation accuracy. PeerJ Comput Sci 2021; 7:e485. PMID: 33977133. PMCID: PMC8064236. DOI: 10.7717/peerj-cs.485.
Abstract
Calibration of vision systems is essential for performing measurement in real-world coordinates. For stereo vision, one performs stereo calibration, the results of which are used for 3D reconstruction of points imaged in the two cameras. A common and flexible technique for such calibration is based on collecting and processing pairs of images of a planar chessboard calibration pattern. The inherent weakness of this approach lies in its reliance on the random nature of data collection, which can lead to better or worse calibration results depending on the collected set of image pairs. In this paper, a subset-based approach to camera and stereo calibration, along with its implementation based on OpenCV, is presented. It performs a series of calibration runs on randomly chosen subsets of the global set of image pairs, with subsequent evaluation of metrics based on triangulating the features in each image pair. The proposed method is evaluated on a collected set of chessboard image pairs obtained with two identical industrial cameras. To highlight the ability of the method to select the best-performing calibration parameters, principal component analysis and clustering of the transformed data were performed, based on the set of metric measurements for each calibration run.
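The subset-selection idea generalizes beyond any particular toolkit; a minimal sketch of the loop, with the actual OpenCV calibration and triangulation calls replaced by caller-supplied `calibrate` and `evaluate` stand-ins (both hypothetical placeholders), could be:

```python
import random

def best_subset_calibration(image_pairs, calibrate, evaluate,
                            n_runs=20, subset_size=10, seed=0):
    """Run calibration on random subsets of image_pairs and keep
    the parameters with the lowest evaluation metric
    (e.g. a triangulation-error statistic over the full set)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_runs):
        subset = rng.sample(image_pairs, min(subset_size, len(image_pairs)))
        params = calibrate(subset)           # e.g. cv2.stereoCalibrate
        score = evaluate(params, image_pairs)  # score on ALL pairs
        if best is None or score < best[0]:
            best = (score, params)
    return best
```

The key design choice is that each candidate is scored on the full image set, not the subset it was fitted on, so a run that happened to sample unlucky (e.g. blurred or degenerate) pairs is filtered out rather than propagated.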
7
Huang Q, Liu J. Practical limitations of lane detection algorithm based on Hough transform in challenging scenarios. Int J Adv Robot Syst 2021. DOI: 10.1177/17298814211008752.
Abstract
The vision-based road lane detection technique plays a key role in driver assistance systems. While existing lane recognition algorithms have demonstrated detection rates above 90%, validation tests are usually conducted on limited scenarios, and significant gaps remain when these algorithms are applied in real-life autonomous driving. The goal of this article was to identify these gaps and to suggest research directions that can bridge them. The straight lane detection algorithm based on the linear Hough transform (HT) was used as an example to evaluate possible perception issues under challenging scenarios, including various road types, different weather conditions and shades, changing lighting conditions, and so on. The study found that the HT-based algorithm achieved an acceptable detection rate against simple backgrounds, such as driving on a highway or under conditions with distinguishable contrast between lane boundaries and their surroundings. However, it failed to recognize road dividing lines under varied lighting conditions. The failure was attributed to the binarization process failing to extract lane features before detection. In addition, the existing HT-based algorithm is susceptible to interference from lane-like structures such as guardrails, railways, bikeways, utility poles, pedestrian sidewalks, and buildings. Overall, these findings support the need for further improvements to make current road lane detection algorithms robust against interference and illumination variations. Moreover, the widely used algorithm has the potential to raise the lane boundary detection rate if an appropriate search range restriction and an illumination classification process are added.
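The linear Hough transform at the core of the evaluated algorithm votes each binarized edge point into a (ρ, θ) accumulator and picks the cell with the most votes; a minimal pure-Python sketch (a didactic stand-in, not the authors' implementation) is:

```python
import math

def hough_best_line(points, width, height, n_theta=180, rho_res=1.0):
    """Minimal Hough transform for lines x*cos(t) + y*sin(t) = rho.

    points: (x, y) edge pixels. Returns (rho, theta, votes) of the
    accumulator cell with the most votes.
    """
    diag = math.hypot(width, height)  # max possible |rho|
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r_bin = int(round((rho + diag) / rho_res))  # shift to >= 0
            acc[(r_bin, t)] = acc.get((r_bin, t), 0) + 1
    (r_bin, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return r_bin * rho_res - diag, t * math.pi / n_theta, votes
```

The failure modes discussed above map directly onto this sketch: if binarization feeds in no lane pixels the accumulator is empty, and any straight lane-like structure (guardrail, pole, curb) produces a competing peak that can outvote the true lane boundary.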
Affiliation(s)
- Qiao Huang
- College of Control Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Jinlong Liu
- Power Machinery and Vehicular Engineering Institute, College of Energy Engineering, Zhejiang University, Hangzhou, Zhejiang, China
8
Abstract
Object localization is an important task in the visual surveillance of scenes, with applications in locating personnel and/or equipment in large open spaces such as a farm or a mine. Traditionally, object localization can be performed with stereo vision: using two fixed cameras for a moving object, or a single moving camera for a stationary object. This research addresses the problem of determining the location of a moving object using only a single moving camera, without any prior information on the type or size of the object. Our technique uses a single camera mounted on a quadrotor drone, which flies in a specific pattern relative to the object in order to remove the depth ambiguity associated with their relative motion. In our previous work, we showed that with three images we can recover the location of an object moving parallel to the direction of motion of the camera. In this research, we find that with four images we can recover the location of an object moving linearly in an arbitrary direction. We evaluated our algorithm on over 70 image sequences of objects moving in various directions, and the results showed a much smaller depth error rate (typically less than 8.0%) than other state-of-the-art algorithms.
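Recovering a location from multiple views ultimately comes down to intersecting back-projected rays; a sketch of the standard two-ray midpoint triangulation (the textbook building block for a static point, not the authors' four-image method for a moving object) might be:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + s*d1 and
    o2 + t*d2 (closest-point triangulation for two camera rays)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    r = [o2[i] - o1[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero for parallel rays
    s = (c * d - b * e) / denom    # parameter along ray 1
    t = (b * d - a * e) / denom    # parameter along ray 2
    p1 = [o1[i] + s * d1[i] for i in range(3)]
    p2 = [o2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]
```

For a moving object the two rays no longer target the same point, which is exactly the ambiguity the flight pattern and the extra images are designed to resolve.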
9
Improved pose estimation of Aruco tags using a novel 3D placement strategy. Sensors (Basel) 2020; 20:4825. PMID: 32858985. PMCID: PMC7506853. DOI: 10.3390/s20174825.
Abstract
This paper extends the topic of monocular pose estimation of an object using Aruco tags imaged by RGB cameras. The accuracy of the OpenCV camera calibration and Aruco pose estimation pipelines is tested in detail by performing standardized tests with multiple Intel RealSense D435 cameras. Analysis of the results led to a way to significantly improve the performance of Aruco tag localization: designing a 3D Aruco board, i.e. a set of Aruco tags placed at angles to each other, and developing a library to combine the pose data from the individual tags for both higher accuracy and stability.
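Combining per-tag pose estimates into one board pose can be sketched, for the translation part, as a weighted mean (the weights here are assumed to come from something like inverse per-tag reprojection error; proper rotation fusion, which needs e.g. quaternion averaging, is deliberately omitted):

```python
def fuse_translations(estimates):
    """estimates: list of ((x, y, z), weight) pairs, one per
    detected tag. Returns the weighted mean translation."""
    wsum = sum(w for _, w in estimates)
    return tuple(
        sum(t[i] * w for t, w in estimates) / wsum
        for i in range(3)
    )
```

Placing the tags at angles to each other helps because a tag viewed near head-on has a poorly constrained out-of-plane pose; fusing tags with different orientations means at least one is always well conditioned.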
10
de Francisco Ortiz Ó, Ortiz I, Bueno A. New global referencing approach in a camera-LCD micro positioning system. Sensors (Basel) 2020; 20:2118. PMID: 32283728. PMCID: PMC7181139. DOI: 10.3390/s20072118.
Abstract
In any precision manufacturing process, positioning systems play a very important role in achieving a quality product. As a new approach to current systems, camera-LCD positioning systems are a technology that can provide substantial improvements in accuracy and repeatability. However, in order to provide stability, the system requires a global positioning reference. This paper presents an improvement of a positioning system based on the treatment of images on an LCD, in which a new algorithm with an absolute reference has been implemented. The method is based on basic geometry and linear algebra applied to computer vision. The algorithm determines the spiral center from an image taken at any point; consequently, the system constantly knows its position and does not lose its reference. Several modifications of the algorithm are proposed and compared. Simulations and tests of the algorithm show an important improvement in the reliability and stability of the positioning system, with errors on the order of microns in the calculation of the global position used by the algorithm.
Collapse
Affiliation(s)
- Óscar de Francisco Ortiz
- Department of Engineering and Applied Technologies, University Center of Defense, San Javier Air Force Base, MDE-UPCT, 30720 Santiago de la Ribera, Spain
- Irene Ortiz
- Department of Science and Computer Science, University Center of Defense, San Javier Air Force Base, MDE-UPCT, 30720 Santiago de la Ribera, Spain
- Antonio Bueno
- Department of Geometry and Topology, University of Granada, 18071 Granada, Spain