1
Brilhault A, Neuenschwander S, Rios RA. A new robust multivariate mode estimator for eye-tracking calibration. Behav Res Methods 2023; 55:516-553. PMID: 35297014. DOI: 10.3758/s13428-022-01809-4. (Accepted: 01/27/2022)
Abstract
In this work, we propose a new method for estimating the main mode of multivariate distributions, with application to eye-tracking calibration. When performing eye-tracking experiments with poorly cooperative subjects, such as infants or monkeys, the calibration data generally suffer from high contamination. Outliers are typically organized in clusters, corresponding to fixations in the time intervals when subjects were not looking at the calibration points. In this type of multimodal distribution, most central tendency measures fail to estimate the principal fixation coordinates (the first mode), resulting in errors and inaccuracies when mapping gaze to screen coordinates. Here, we developed a new algorithm, named BRIL, that identifies the first mode of multivariate distributions through recursive depth-based filtering. This novel approach was tested on artificial mixtures of Gaussian and uniform distributions and compared to existing methods (conventional depth medians, robust estimators of location and scatter, and clustering-based approaches). We obtained outstanding performance, even for distributions containing very high proportions of outliers, both grouped in clusters and randomly distributed. Finally, we demonstrate the strength of our method in a real-world scenario using experimental data from eye-tracking calibrations with capuchin monkeys, especially for highly contaminated distributions where other algorithms typically lack accuracy.
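The recursive depth-based filtering idea in this abstract can be illustrated with a minimal sketch. This is not the authors' BRIL implementation: the Mahalanobis-distance depth proxy, the `keep_frac` halving schedule, and the simulated Gaussian/uniform mixture are all assumptions made here for illustration.

```python
import numpy as np

def deepest_point(points, keep_frac=0.5, min_points=5):
    """Toy recursive depth-based filter: rank points by a simple depth
    proxy (Mahalanobis distance to the current center) and keep only the
    deepest fraction, repeatedly.  Iterating shrinks the sample toward
    its densest region, whose center approximates the first mode."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > min_points:
        center = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)
        cov += 1e-9 * np.eye(pts.shape[1])   # guard against singularity
        inv = np.linalg.inv(cov)
        d = pts - center
        maha = np.einsum('ij,jk,ik->i', d, inv, d)   # squared distances
        keep = np.argsort(maha)[: max(min_points, int(len(pts) * keep_frac))]
        pts = pts[keep]
    return pts.mean(axis=0)

# Simulated calibration data: main fixation mode at (0, 0) with 50%
# contamination, partly clustered (as when the subject fixates elsewhere)
# and partly diffuse.
rng = np.random.default_rng(0)
sample = np.vstack([
    rng.normal(0.0, 0.5, size=(500, 2)),   # principal fixation cluster
    rng.normal(8.0, 0.5, size=(250, 2)),   # clustered outliers
    rng.uniform(-10, 10, size=(250, 2)),   # random outliers
])
mode = deepest_point(sample)
```

A plain mean or coordinate-wise median of `sample` would be pulled toward the outlier cluster; the recursive filter discards it instead.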
Affiliation(s)
- Adrien Brilhault
- Department of Computer Science, Federal University of Bahia, Salvador, Brazil
- Ricardo Araujo Rios
- Department of Computer Science, Federal University of Bahia, Salvador, Brazil
2
de Bataille C, Bernard D, Dumoncel J, Vaysse F, Cussat-Blanc S, Telmon N, Maret D, Monsarrat P. Machine Learning Analysis of the Anatomical Parameters of the Upper Airway Morphology: A Retrospective Study from Cone-Beam CT Examinations in a French Population. J Clin Med 2022; 12:84. PMID: 36614885. PMCID: PMC9820916. DOI: 10.3390/jcm12010084. Open access. (Received: 11/11/2022; Revised: 12/12/2022; Accepted: 12/12/2022)
Abstract
The objective of this study was to assess, using cone-beam CT (CBCT) examinations, the correlation between hard and soft anatomical parameters and their impact on the characteristics of the upper airway, using symbolic regression as a machine learning strategy. Methods: On each CBCT, the upper airway was segmented, and 24 anatomical landmarks were positioned to obtain six angles and 19 distances. Some anatomical landmarks related to soft tissues and others to hard tissues. To explore which variables were the most influential in explaining the morphology of the upper airway, principal component and symbolic regression analyses were conducted. Results: In total, 60 CBCT examinations were analyzed, from subjects with a mean age of 39.5 ± 13.5 years. The intra-observer reproducibility for each variable was between good and excellent. The horizontal soft palate measure contributed most to the reduction of the airway volume and minimal cross-sectional area, with a variable importance of around 50%. The tongue and the position of the hyoid bone were also linked to upper airway morphology. Among hard anatomical structures, the anteroposterior positions of the mandible and the maxilla had some influence. Conclusions: Although the volume of the airway is not accessible on all CBCT scans performed by dental practitioners, this study demonstrates that a small number of anatomical elements may be markers of a reduced upper airway and, potentially, an increased risk of obstructive sleep apnea. This could help the dentist refer the patient to a suitable physician.
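The principal component step described here can be sketched as follows. This is a generic illustration, not the study's pipeline: the simulated measurement table, the number of latent factors, and the variable names are all hypothetical, and the symbolic-regression stage is omitted. Because angles and distances have different units, each column is z-scored before PCA.

```python
import numpy as np

# Hypothetical stand-in for a measurement table: rows are subjects,
# columns are anatomical angles/distances driven by a few latent factors.
rng = np.random.default_rng(1)
n_subjects, n_measures = 60, 6
latent = rng.normal(size=(n_subjects, 2))        # two underlying factors
loadings = rng.normal(size=(2, n_measures))
X = latent @ loadings + 0.1 * rng.normal(size=(n_subjects, n_measures))

# Standardize each measurement (units differ), then PCA via SVD.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xz, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance ratio per principal component
scores = Xz @ Vt.T                # subject coordinates on the components
```

With a rank-2 latent structure, the first two components absorb nearly all the variance, which is the kind of dimensionality reduction that precedes ranking variable importance.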
Affiliation(s)
- Caroline de Bataille
- Laboratoire Centre d’Anthropobiologie et de Génomique de Toulouse, Université Paul Sabatier, 31073 Toulouse, France
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- David Bernard
- Institute of Research in Informatics (IRIT) of Toulouse, CNRS—UMR5505, 31062 Toulouse, France
- RESTORE Research Center, Department of Oral Medicine, Université de Toulouse, INSERM, CNRS, EFS, ENVT, Université P. Sabatier, Toulouse University Hospital (CHU), Batiment INCERE, 4bis Avenue Hubert Curien, 31100 Toulouse, France
- Jean Dumoncel
- Laboratoire Centre d’Anthropobiologie et de Génomique de Toulouse, Université Paul Sabatier, 31073 Toulouse, France
- Frédéric Vaysse
- Laboratoire Centre d’Anthropobiologie et de Génomique de Toulouse, Université Paul Sabatier, 31073 Toulouse, France
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- Sylvain Cussat-Blanc
- Institute of Research in Informatics (IRIT) of Toulouse, CNRS—UMR5505, 31062 Toulouse, France
- Artificial and Natural Intelligence Toulouse Institute ANITI, 31013 Toulouse, France
- Norbert Telmon
- Laboratoire Centre d’Anthropobiologie et de Génomique de Toulouse, Université Paul Sabatier, 31073 Toulouse, France
- Service de Médecine Légale, Centre Hospitalier Universitaire Rangueil, Avenue du Professeur Jean Poulhès, CEDEX 9, 31059 Toulouse, France
- Delphine Maret
- Laboratoire Centre d’Anthropobiologie et de Génomique de Toulouse, Université Paul Sabatier, 31073 Toulouse, France
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- Paul Monsarrat
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- RESTORE Research Center, Department of Oral Medicine, Université de Toulouse, INSERM, CNRS, EFS, ENVT, Université P. Sabatier, Toulouse University Hospital (CHU), Batiment INCERE, 4bis Avenue Hubert Curien, 31100 Toulouse, France
- Artificial and Natural Intelligence Toulouse Institute ANITI, 31013 Toulouse, France
3
Qian K, Arichi T, Price A, Dall'Orso S, Eden J, Noh Y, Rhode K, Burdet E, Neil M, Edwards AD, Hajnal JV. An eye tracking based virtual reality system for use inside magnetic resonance imaging systems. Sci Rep 2021; 11:16301. PMID: 34381099. PMCID: PMC8357830. DOI: 10.1038/s41598-021-95634-y. Open access. (Received: 03/15/2021; Accepted: 06/18/2021)
Abstract
Patients undergoing magnetic resonance imaging (MRI) often experience anxiety, and sometimes distress, prior to and during scanning. Here, a fully MRI-compatible virtual reality (VR) system is described and tested with the aim of creating a radically different experience. Potential benefits could accrue from the strong sense of immersion that VR can create: experiences can be designed to avoid the perception of being enclosed, and new modes of diversion and interaction could make even lengthy MRI examinations much less challenging. Most current VR systems rely on head-mounted displays combined with head-motion tracking to achieve and maintain a visceral sense of a tangible virtual world, but this technology and approach encourage physical motion, which would be unacceptable and physically incompatible with MRI. The proposed VR system instead uses gaze tracking to control and interact with a virtual world. MRI-compatible cameras allow real-time eye tracking, and robust gaze tracking is achieved through an adaptive calibration strategy in which each successive VR interaction initiated by the subject updates the gaze-estimation model. A dedicated VR framework has been developed, including a rich virtual world and gaze-controlled game content. To aid in achieving immersive experiences, physical sensations, including noise, vibration, and the proprioception associated with patient-table movements, have been made congruent with the presented virtual scene. A live video link allows subject-carer interaction, projecting a supportive presence into the virtual world.
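The adaptive calibration strategy described above (each subject-initiated interaction at a known target refines the gaze model) can be sketched as a regularized linear refit. The class name, the affine eye-to-screen model, and the ridge term are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class AdaptiveGazeCalibration:
    """Sketch of an adaptive calibration loop: an affine map from eye
    features to screen coordinates, refit by ridge-regularized least
    squares every time the subject completes a gaze interaction at a
    known target location."""
    def __init__(self, ridge=1e-3):
        self.ridge = ridge
        self.features, self.targets = [], []
        self.W = None   # (3, 2) weights: [eye_x, eye_y, 1] -> screen (x, y)

    def add_interaction(self, eye_xy, screen_xy):
        # Each interaction contributes one (eye features, target) pair.
        self.features.append([eye_xy[0], eye_xy[1], 1.0])
        self.targets.append(list(screen_xy))
        A = np.asarray(self.features)
        B = np.asarray(self.targets)
        # The ridge term keeps the early, data-poor model stable; each
        # new interaction refines the estimate.
        self.W = np.linalg.solve(A.T @ A + self.ridge * np.eye(3), A.T @ B)

    def predict(self, eye_xy):
        return np.array([eye_xy[0], eye_xy[1], 1.0]) @ self.W

# Simulate a true mapping screen = 2*eye + (100, 50), learned online from
# 20 noisy interactions, then measure prediction error at a test point.
cal = AdaptiveGazeCalibration()
rng = np.random.default_rng(2)
for _ in range(20):
    eye = rng.uniform(-1, 1, size=2)
    screen = 2.0 * eye + np.array([100.0, 50.0]) + rng.normal(0, 0.5, size=2)
    cal.add_interaction(eye, screen)
test_eye = np.array([0.25, -0.4])
err = np.linalg.norm(cal.predict(test_eye) - (2.0 * test_eye + np.array([100.0, 50.0])))
```

The key property this sketch captures is that calibration quality improves continuously during the session rather than being fixed by an initial calibration routine.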
Affiliation(s)
- Kun Qian
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK.
- Tomoki Arichi
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Anthony Price
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Sofia Dall'Orso
- Department of Electrical Engineering, Chalmers University of Technology, 412 96, Gothenburg, Sweden
- Jonathan Eden
- Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Yohan Noh
- Department of Mechanical and Aerospace Engineering, Brunel University London, London, UB8 3PN, UK
- Kawal Rhode
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Etienne Burdet
- Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Mark Neil
- Department of Physics, Imperial College London, London, SW7 2AZ, UK
- A David Edwards
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Joseph V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK.
4
Visual Neuroscience Methods for Marmosets: Efficient Receptive Field Mapping and Head-Free Eye Tracking. eNeuro 2021; 8:ENEURO.0489-20.2021. PMID: 33863782. PMCID: PMC8143020. DOI: 10.1523/eneuro.0489-20.2021. Open access. (Received: 11/15/2020; Revised: 02/18/2021; Accepted: 03/25/2021)
Abstract
The marmoset has emerged as a promising primate model system, in particular for visual neuroscience. Many common experimental paradigms rely on head fixation and an extended period of eye fixation during the presentation of salient visual stimuli. Both of these behavioral requirements can be challenging for marmosets. Here, we present two methodological developments, each addressing one of these difficulties. First, we show that it is possible to use a standard eye-tracking system without head fixation to assess visual behavior in the marmoset. Eye-tracking quality from head-free animals is sufficient to obtain precise psychometric functions in a visual acuity task. Second, we introduce a novel method for efficient receptive field (RF) mapping that does not rely on moving stimuli but instead uses rapidly flashed annuli and wedges. We present data recorded during head fixation in areas V1 and V6 and show that RF locations are readily obtained within a short recording time. Thus, the methodological advancements presented in this work will help establish the marmoset as a valuable model in neuroscience.
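The logic of mapping RFs from rapidly flashed stimuli can be sketched as a reverse-correlation analysis. This is a toy illustration, not the paper's pipeline: the wedge count, the Poisson spiking model, and the single-neuron setup are all assumptions made here.

```python
import numpy as np

# Toy reverse-correlation sketch: flash one of n_wedges wedge stimuli per
# frame, record spike counts, and estimate the RF location as the wedge
# with the highest spike-triggered mean response.
rng = np.random.default_rng(3)
n_wedges, n_trials, preferred = 8, 4000, 5
stims = rng.integers(0, n_wedges, size=n_trials)   # flashed wedge per frame

# Simulated Poisson spiking: high rate only when the flashed wedge
# overlaps the neuron's receptive field.
rates = np.where(stims == preferred, 5.0, 0.2)
spikes = rng.poisson(rates)

# Mean spike count per wedge; the argmax localizes the RF angularly.
sta = np.array([spikes[stims == w].mean() for w in range(n_wedges)])
rf_wedge = int(np.argmax(sta))
```

Because every frame carries information regardless of stimulus order, this style of mapping needs far less recording time than sweeping a moving bar across candidate locations, which is the efficiency argument the abstract makes.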