1
Abstract
Symmetry in biological and physical systems is a product of self-organization, driven by evolutionary processes or by mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents of large image databases. Despite significant achievements of artificial intelligence in the recognition and classification of regular patterns, uncertainty remains a major challenge when the data are ambiguous. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit the quantization error in the output of a biologically inspired Self-Organizing Map (SOM-QE) as a neural network metric. Shape pairs with perfect geometric mirror symmetry but a non-homogeneous appearance, caused by local variations in hue, saturation, or lightness within and/or across the shapes of a given pair, produce, as shown here, longer choice response times (RT) for “yes” responses relative to symmetry. These data are consistently mirrored by variations in the SOM-QE obtained from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. This capacity is tightly linked to the metric’s proven selectivity to local contrast and color variations in large and highly complex image data.
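As a rough illustration of how a quantization error can be read out of a self-organizing map, the sketch below trains a minimal SOM and computes the mean distance of the inputs to their best-matching units. The grid size, learning schedule, and function names are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Train a minimal SOM on the row vectors in `data` (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    w = rng.random((n_units, data.shape[1]))
    # 2-D grid coordinates of each map unit, used by the neighborhood function
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                           # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3              # shrinking neighborhood
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)                  # neighborhood update
    return w

def som_qe(w, data):
    """Quantization error: mean distance of inputs to their best-matching unit."""
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

On this toy scale, a nearly homogeneous input set yields a QE close to zero, while local variation across inputs raises it, which is the direction of the effect the abstract reports.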
2
Dresp-Langley B, Nageotte F, Zanne P, de Mathelin M. Correlating Grip Force Signals from Multiple Sensors Highlights Prehensile Control Strategies in a Complex Task-User System. Bioengineering (Basel) 2020; 7:E143. PMID: 33182694; PMCID: PMC7711794; DOI: 10.3390/bioengineering7040143.
Abstract
Wearable sensor systems with transmitting capabilities are currently employed for the biometric screening of exercise activities and other performance data. Such technology is generally wireless and enables the non-invasive monitoring of signals to track and trace user behaviors in real time. Examples include signals relative to hand and finger movements or force control reflected by individual grip force data. As will be shown here, these signals directly translate into task-, skill-, and hand-specific (dominant versus non-dominant hand) grip force profiles for different measurement loci in the fingers and palm of the hand. The present study draws from thousands of such sensor data recorded from multiple spatial locations. The individual grip force profiles of a highly proficient left-hander (expert), a right-handed dominant-hand-trained user, and a right-handed novice performing an image-guided, robot-assisted precision task with the dominant or the non-dominant hand are analyzed. The step-by-step statistical approach follows Tukey's "detective work" principle, guided by explicit functional assumptions relating to somatosensory receptive field organization in the human brain. Correlation analyses (Pearson's product-moment) reveal skill-specific differences in co-variation patterns in the individual grip force profiles. These can be functionally mapped to from-global-to-local coding principles in the brain networks that govern grip force control and its optimization with a specific task expertise. Implications for the real-time monitoring of grip forces and performance training in complex task-user systems are brought forward.
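The correlation step described above (Pearson's product-moment) can be sketched minimally as pairwise correlations between per-sensor force time series. The sensor-location names and data layout here are hypothetical, not the study's actual recordings.

```python
import numpy as np

def gripforce_correlations(profiles):
    """Pairwise Pearson correlations between per-sensor grip-force time series.

    `profiles` maps a (hypothetical) sensor-location name to a 1-D array of
    force samples; returns a dict of pairwise r values.
    """
    names = list(profiles)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # np.corrcoef returns the 2x2 correlation matrix; r is off-diagonal
            out[(a, b)] = np.corrcoef(profiles[a], profiles[b])[0, 1]
    return out
```

High positive r between two loci would indicate that their force signals co-vary over the task, which is the kind of co-variation pattern the abstract interprets functionally.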
Affiliation(s)
- Birgitta Dresp-Langley
- ICube UMR 7357, Centre National de la Recherche Scientifique (CNRS), 75016 Paris, France
- Florent Nageotte
- ICube UMR 7357 Robotics Department, University of Strasbourg, 67081 Strasbourg, France; (F.N.); (P.Z.); (M.d.M.)
- Philippe Zanne
- ICube UMR 7357 Robotics Department, University of Strasbourg, 67081 Strasbourg, France; (F.N.); (P.Z.); (M.d.M.)
- Michel de Mathelin
- ICube UMR 7357 Robotics Department, University of Strasbourg, 67081 Strasbourg, France; (F.N.); (P.Z.); (M.d.M.)
3
Dresp-Langley B, Reeves A. Color for the perceptual organization of the pictorial plane: Victor Vasarely's legacy to Gestalt psychology. Heliyon 2020; 6:e04375. PMID: 32695904; PMCID: PMC7365985; DOI: 10.1016/j.heliyon.2020.e04375.
Abstract
Victor Vasarely's (1906-1997) important legacy to the study of human perception is brought to the forefront and discussed. A large part of his impressive work conveys the appearance of striking three-dimensional shapes and structures in a large-scale pictorial plane. Current perception science explains such effects by invoking brain mechanisms for the processing of monocular (2D) depth cues. In this study, we illustrate and explain local effects of 2D color and contrast cues on perceptual organization in terms of figure-ground assignments, i.e., which local surfaces are likely to be seen as "nearer" or "bigger" in the image plane. Paired configurations are embedded in a larger, structurally ambivalent pictorial context inspired by some of Vasarely's creations. The figure-ground effects these configurations produce reveal a significant correlation between perceptual solutions for "nearer" and "bigger" when other geometric depth cues are missing. Consistent with previous findings on similar, albeit simpler, visual displays, a specific color may compete with luminance contrast to resolve the planar ambiguity of a complex pattern context at a critical point in the hierarchical resolution of figure-ground uncertainty. The potential role of color temperature in this process is brought forward here. Vasarely intuitively understood and successfully exploited the subtle context effects accounted for in this paper, well before empirical investigation had set out to study and explain them in terms of information processing by the visual brain.
Affiliation(s)
- Birgitta Dresp-Langley
- Centre National de la Recherche Scientifique CNRS, ICube UMR 7357 CNRS-Université de Strasbourg, Strasbourg, France
- Adam Reeves
- Northeastern University, Psychology Department, Boston, USA
4
Abstract
Piéron's and Chocholle's seminal psychophysical work predicts that human response time to visual contrast and/or sound frequency decreases as contrast intensity or sound frequency increases. The goal of this study was to bring to the forefront the ability of individuals to use visual contrast intensity and sound frequency in combination for faster perceptual decisions of relative depth (“nearer”) in planar (2D) object configurations based on physical variations in luminance contrast. Computer-controlled images with two abstract patterns of varying contrast intensity, one on the left and one on the right, preceded or not by a pure tone of varying frequency, were shown to healthy young humans in controlled experimental sequences. Their task (two-alternative forced choice) was to decide as quickly as possible which of the two patterns in a given image, the left or the right one, appeared to “stand out as if it were nearer” in terms of apparent (subjective) visual depth. The results show that the combinations of varying relative visual contrast with sounds of varying frequency exploited here produced an additive facilitation effect on choice response times: a stronger visual contrast combined with a higher sound frequency produced shorter forced-choice response times. This new effect is predicted by audio-visual probability summation.
5
de Mathelin M, Nageotte F, Zanne P, Dresp-Langley B. Sensors for Expert Grip Force Profiling: Towards Benchmarking Manual Control of a Robotic Device for Surgical Tool Movements. Sensors (Basel) 2019; 19:4575. PMID: 31640204; PMCID: PMC6848933; DOI: 10.3390/s19204575.
Abstract
STRAS (Single access Transluminal Robotic Assistant for Surgeons) is a new robotic system based on the Anubis® platform of Karl Storz for application to intra-luminal surgical procedures. Pre-clinical testing of STRAS has recently demonstrated major advantages of the system in comparison with classic procedures. Benchmark methods for establishing objective criteria of 'expertise' now need to be worked out to effectively train surgeons on this new system in the near future. STRAS consists of three cable-driven sub-systems: one endoscope serving as guide, and two flexible instruments. The flexible instruments have three degrees of freedom and can be teleoperated by a single user via two specially designed master interfaces. In this study, small force sensors sewn into a wearable glove to ergonomically fit the master handles of the robotic system were employed for monitoring the forces applied by an expert and a trainee (complete novice) during all the steps of surgical task execution in a simulator task (4-step pick-and-drop). Analysis of grip-force profiles is performed sensor by sensor to bring to the fore specific differences in handgrip force profiles at specific sensor locations on anatomically relevant parts of the fingers and hand controlling the master/slave system.
Affiliation(s)
- Michel de Mathelin
- ICube Lab, UMR 7357 CNRS, Robotics Department, University of Strasbourg, 6700 Strasbourg, France.
- Florent Nageotte
- ICube Lab, UMR 7357 CNRS, Robotics Department, University of Strasbourg, 6700 Strasbourg, France.
- Philippe Zanne
- ICube Lab, UMR 7357 CNRS, Robotics Department, University of Strasbourg, 6700 Strasbourg, France.
- Birgitta Dresp-Langley
- ICube Lab, UMR 7357 CNRS, Robotics Department, University of Strasbourg, 6700 Strasbourg, France.
6
Occam’s Razor for Big Data? On Detecting Quality in Large Unstructured Datasets. Applied Sciences (Basel) 2019. DOI: 10.3390/app9153065.
Abstract
Detecting quality in large unstructured datasets requires capacities far beyond the limits of human perception and communicability and, as a result, there is an emerging trend towards increasingly complex analytic solutions in data science to cope with this problem. This new trend towards analytic complexity represents a severe challenge for the principle of parsimony (Occam’s razor) in science. This review article combines insight from various domains such as physics, computational science, data engineering, and cognitive science to review the specific properties of big data. Problems for detecting data quality without losing the principle of parsimony are then highlighted on the basis of specific examples. Computational building-block approaches for data clustering can help to deal with large unstructured datasets in minimized computation time, and meaning can be extracted rapidly and parsimoniously from large sets of unstructured image or video data through relatively simple unsupervised machine learning algorithms. The article then reviews why we still massively lack the expertise to exploit big data wisely in order to extract relevant information for specific tasks, recognize patterns and generate new information, or simply store and further process large amounts of sensor data, and brings forward examples illustrating why we need subjective views and pragmatic methods to analyze big data contents. The review concludes on how cultural differences between East and West are likely to affect the course of big data analytics, and the development of increasingly autonomous artificial intelligence (AI) aimed at coping with the big data deluge in the near future.
7
Abstract
Although symmetry has been discussed as a major law of perceptual organization since the early conceptual efforts of the Gestalt school (Wertheimer, Metzger, Koffka, and others), the first quantitative measurements testing for effects of symmetry on processes of Gestalt formation have appeared only recently. In this study, a psychophysical rating study and a “foreground”-“background” choice response time experiment were run with human observers to test for effects of bilateral symmetry on the perceived strength of figure-ground in triangular Kanizsa configurations. Displays with and without bilateral symmetry, identical physically-specified-to-total contour ratio, and constant local contrast intensity within and across conditions, but variable local contrast polarity and variable orientation in the plane, were presented in random order to human observers. Configurations with bilateral symmetry produced significantly stronger figure-ground percepts, reflected by greater subjective magnitudes and consistently higher percentages of “foreground” judgments accompanied by significantly shorter response times. These effects of symmetry depend neither on the orientation of the axis of symmetry, nor on the contrast polarity of the physical inducers. It is concluded that bilateral symmetry, irrespective of orientation, significantly contributes to the largely sign-invariant visual mechanisms of figure-ground segregation that determine the salience of figure-ground in perceptually ambiguous configurations.
8
Towards Expert-Based Speed–Precision Control in Early Simulator Training for Novice Surgeons. Information 2018. DOI: 10.3390/info9120316.
Abstract
Simulator training for image-guided surgical interventions would benefit from intelligent systems that detect the evolution of task performance, and take control of individual speed–precision strategies by providing effective automatic performance feedback. At the earliest training stages, novices frequently focus on getting faster at the task. This may, as shown here, compromise the evolution of their precision scores, sometimes irreparably, if it is not controlled for as early as possible. Artificial intelligence could help ensure that a trainee reaches her/his optimal individual speed–accuracy trade-off by monitoring individual performance criteria, detecting critical trends at any given moment in time, and alerting the trainee as early as necessary when to slow down and focus on precision, or when to focus on getting faster. It is suggested that, for effective benchmarking, the individual training statistics of novices be compared with the statistics of an expert surgeon. The speed–accuracy functions of novices trained in a large number of experimental sessions reveal differences in individual speed–precision strategies, and clarify why such strategies should be automatically detected and controlled for before further training on specific surgical task models, or clinical models, may be envisaged. How expert benchmark statistics may be exploited for automatic performance control is explained.
9
Dresp-Langley B, Reeves A. Colour for Behavioural Success. Iperception 2018; 9:2041669518767171. PMID: 29770183; PMCID: PMC5946649; DOI: 10.1177/2041669518767171.
Abstract
Colour information not only helps sustain the survival of animal species by guiding sexual selection and foraging behaviour but also is an important factor in the cultural and technological development of our own species. This is illustrated by examples from the visual arts and from state-of-the-art imaging technology, where the strategic use of colour has become a powerful tool for guiding the planning and execution of interventional procedures. The functional role of colour information in terms of its potential benefits to behavioural success across the species is addressed in the introduction here to clarify why colour perception may have evolved to generate behavioural success. It is argued that evolutionary and environmental pressures influence not only colour trait production in the different species but also their ability to process and exploit colour information for goal-specific purposes. We then leap straight to the human primate with insight from current research on the facilitating role of colour cues on performance training with precision technology for image-guided surgical planning and intervention. It is shown that local colour cues in two-dimensional images generated by a surgical fisheye camera help individuals become more precise rapidly across a limited number of trial sets in simulator training for specific manual gestures with a tool. This facilitating effect of a local colour cue on performance evolution in a video-controlled simulator (pick-and-place) task can be explained in terms of colour-based figure-ground segregation facilitating attention to local image parts when more than two layers of subjective surface depth are present, as in all natural and surgical images.
Affiliation(s)
- Birgitta Dresp-Langley
- ICube UMR 7357, Centre National de la Recherche Scientifique, University of Strasbourg, France
- Adam Reeves
- Department of Psychology, Northeastern University, Boston, MA, USA
10
Batmaz AU, de Mathelin M, Dresp-Langley B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS One 2017; 12:e0183789. PMID: 28859092; PMCID: PMC5578485; DOI: 10.1371/journal.pone.0183789.
Abstract
Effects of different visual displays on the time and precision of bare-handed or tool-mediated eye-hand coordination were investigated in a pick-and-place task with complete novices. All of them scored well above average in spatial perspective-taking ability and performed the task with their dominant hand. Two groups of novices, four men and four women in each group, had to place a small object in a precise order on the centre of five targets on a Real-world Action Field (RAF), as swiftly and as precisely as possible, using a tool or not (control). Each individual session consisted of four visual display conditions. The order of conditions was counterbalanced between individuals and sessions. Subjects looked at what their hands were doing 1) directly in front of them (“natural” top-down view), 2) in a top-down 2D fisheye view, 3) in a top-down undistorted 2D view, or 4) in a 3D stereoscopic top-down view (head-mounted OCULUS DK2). It was made sure that object movements in all image conditions matched the real-world movements in time and space. One group was looking at the 2D images with the monitor positioned sideways (sub-optimal); the other group was looking at the monitor placed straight ahead of them (near-optimal). All image viewing conditions had significantly detrimental effects on the time (seconds) and precision (pixels) of task execution when compared with “natural” direct viewing. More importantly, we find significant trade-offs between time and precision between and within groups, and significant interactions between viewing conditions and manipulation conditions. The results shed new light on controversial findings relative to visual display effects on eye-hand coordination, and lead to the conclusion that differences in camera systems and the adaptive strategies of novices are likely to explain them.
Affiliation(s)
- Anil U. Batmaz
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Michel de Mathelin
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Birgitta Dresp-Langley
- ICube Lab Cognitive Science Department, Centre National de la Recherche Scientifique, 1 Place de l'Hôpital, Strasbourg, France
11
Batmaz AU, de Mathelin M, Dresp-Langley B. Getting nowhere fast: trade-off between speed and precision in training to execute image-guided hand-tool movements. BMC Psychol 2016; 4:55. PMID: 27842577; PMCID: PMC5109684; DOI: 10.1186/s40359-016-0161-0.
Abstract
Background
The speed and precision with which objects are moved by hand or by hand-tool interaction under image guidance depend on a specific type of visual and spatial sensorimotor learning. Novices have to learn to optimally control what their hands are doing in a real-world environment while looking at an image representation of the scene on a video monitor. Previous research has shown slower task execution times and lower performance scores under image guidance compared with situations of direct action viewing. The cognitive processes for overcoming this drawback by training are not yet understood.
Methods
We investigated the effects of training on the time and precision of direct-view versus image-guided object positioning on targets of a Real-world Action Field (RAF). Two men and two women had to learn to perform the task as swiftly and as precisely as possible with their dominant hand, using a tool or not and wearing a glove or not. Individuals were trained in sessions of mixed trial blocks with no feedback.
Results
As predicted, image guidance produced significantly slower times and lower precision in all trainees and sessions compared with direct viewing. With training, all trainees became faster in all conditions, but only one of them became reliably more precise in the image-guided conditions. Speed-accuracy trade-offs in the individual performance data show that the highest precision scores and the steepest learning curves, for time and precision, were produced by the slowest starter. Fast starters produced consistently poorer precision scores in all sessions. The fastest starter showed no sign of stable precision learning, even after extended training.
Conclusions
Performance evolution towards optimal precision is compromised when novices start by going as fast as they can. The findings have direct implications for individual skill monitoring in training programmes for image-guided technology applications with human operators.
Affiliation(s)
- Anil Ufuk Batmaz
- Laboratoire ICube UMR 7357 CNRS-University of Strasbourg, 2, rue Boussingault, 67000, Strasbourg, France
- Michel de Mathelin
- Laboratoire ICube UMR 7357 CNRS-University of Strasbourg, 2, rue Boussingault, 67000, Strasbourg, France
- Birgitta Dresp-Langley
- Laboratoire ICube UMR 7357 CNRS-University of Strasbourg, 2, rue Boussingault, 67000, Strasbourg, France.
12
Affine Geometry, Visual Sensation, and Preference for Symmetry of Things in a Thing. Symmetry (Basel) 2016. DOI: 10.3390/sym8110127.
13
Dresp-Langley B, Grossberg S. Neural Computation of Surface Border Ownership and Relative Surface Depth from Ambiguous Contrast Inputs. Front Psychol 2016; 7:1102. PMID: 27516746; PMCID: PMC4963386; DOI: 10.3389/fpsyg.2016.01102.
Abstract
The segregation of image parts into foreground and background is an important aspect of the neural computation of 3D scene perception. To achieve such segregation, the brain needs information about border ownership; that is, the belongingness of a contour to a specific surface represented in the image. This article presents psychophysical data derived from 3D percepts of figure and ground that were generated by presenting 2D images composed of spatially disjoint shapes that pointed inward or outward relative to the continuous boundaries that they induced along their collinear edges. The shapes in some images had the same contrast (black or white) with respect to the background gray. Other images included opposite contrasts along each induced continuous boundary. Psychophysical results demonstrate conditions under which figure-ground judgment probabilities in response to these ambiguous displays are determined by the orientation of contrasts only, not by their relative contrasts, despite the fact that many border ownership cells in cortical area V2 respond to a preferred relative contrast. Studies are also reviewed in which both polarity-specific and polarity-invariant properties obtain. The FACADE and 3D LAMINART models are used to explain these data.
Affiliation(s)
- Birgitta Dresp-Langley
- Centre National de la Recherche Scientifique, ICube UMR 7357, University of Strasbourg, Strasbourg, France
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Boston University, Boston MA, USA