1.
Rea DJ, Young JE. It's not what you think: shaping beliefs about a robot to influence a teleoperator's expectations and behavior. Front Robot AI 2023; 10:1271337. PMID: 38178990; PMCID: PMC10764549; DOI: 10.3389/frobt.2023.1271337.
Abstract
In this paper we present a novel design approach for shaping a teleoperator's expectations and behaviors when operating a robot. Just as people may drive a car differently based on their expectations of it (e.g., that the brakes are poor), we assert that teleoperators may likewise operate a robot differently based on their expectations of its capability and robustness. We present three novel interaction designs that proactively shape teleoperator perceptions, along with results from formal studies demonstrating that these techniques do shape operator perceptions and, in some cases, measures of driving behavior such as collision counts. Our methods shape operator perceptions of a robot's speed, weight, or overall safety, and are designed to encourage operators to drive more safely. This approach shows promise as an avenue for improving teleoperator effectiveness without requiring changes to the robot, novel sensors, algorithms, or other functionality.
Affiliation(s)
- Daniel J. Rea
- Faculty of Computer Science, University of New Brunswick, Fredericton, Canada
- James E. Young
- Department of Computer Science, University of Manitoba, Winnipeg, Canada
2.
Wang J, Shi R, Zheng W, Xie W, Kao D, Liang HN. Effect of Frame Rate on User Experience, Performance, and Simulator Sickness in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2023; PP:2478-2488. PMID: 37027727; DOI: 10.1109/tvcg.2023.3247057.
Abstract
The refresh rate of virtual reality (VR) head-mounted displays (HMDs) has grown rapidly in recent years because of the demand for higher frame rate content, which is often linked with a better experience. Today's HMDs come with refresh rates ranging from 20 Hz to 180 Hz, which determine the maximum frame rate actually perceived by users. VR users and content developers face a choice because high frame rate content, and the hardware that supports it, comes with higher costs and other trade-offs (such as heavier and bulkier HMDs). Both users and developers could choose a suitable frame rate if they were aware of the effects of different frame rates on user experience, performance, and simulator sickness (SS). To our knowledge, limited research on frame rate in VR HMDs is available. In this paper, we aim to fill this gap and report a study with two VR application scenarios that compared four of the most common and highest frame rates currently available (60, 90, 120, and 180 frames per second (fps)) to explore their effect on user experience, performance, and SS symptoms. Our results show that 120 fps is an important threshold for VR: at 120 fps and above, users tend to report fewer SS symptoms without a significant negative effect on their experience. Higher frame rates (e.g., 120 and 180 fps) also ensured better user performance than lower rates. Interestingly, we found that at 60 fps, when faced with fast-moving objects, users tend to compensate for the lack of visual detail by predicting or filling in the gaps to meet performance demands. At higher frame rates, users do not need this compensatory strategy to meet fast-response performance requirements.
3.
Toda S, Fujii H. Projection Mapping-Based Interactive Gimmick Picture Book with Visual Illusion Effects. Journal of Robotics and Mechatronics 2023. DOI: 10.20965/jrm.2023.p0125.
Abstract
In this study, we proposed an electronic gimmick picture book system based on projection mapping for early childhood education in ordinary households. In our system, the visual effects of pop-up gimmicks in books are reproduced electronically by combining projection mapping, three-dimensional expression, and visual illusion effects. In addition, the proposed projector-camera system can provide children with engaging interactive experiences, which may benefit their early childhood education. The performance of the proposed system was validated through projection experiments.
Affiliation(s)
- Sayaka Toda
- Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba 275-0016, Japan
- Hiromitsu Fujii
- Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba 275-0016, Japan
4.
Nawaz R, Wood G, Nisar H, Yap VV. Exploring the Effects of EEG-Based Alpha Neurofeedback on Working Memory Capacity in Healthy Participants. Bioengineering (Basel) 2023; 10:200. PMID: 36829694; PMCID: PMC9952280; DOI: 10.3390/bioengineering10020200.
Abstract
Neurofeedback, an operant-conditioning neuromodulation technique, uses information from brain activity in real time via brain-computer interface (BCI) technology. It has been used to enhance cognitive abilities, including working memory performance. The aims of this study were to investigate how alpha neurofeedback can improve working memory performance in healthy participants and to explore the underlying neural mechanisms in a working memory task before and after neurofeedback. Thirty-six participants, divided into an NFT group and a control group, took part. The study was not blinded: both the participants and the researcher were aware of group assignments. Increased power in the alpha EEG band was used as neurofeedback, in the eyes-open condition, in the NFT group only. Data were collected before and after neurofeedback while participants performed an N-back memory task (N = 1 and N = 2). Both groups showed improvement in working memory performance. There was an enhancement in the power of frontal alpha and beta activity with increased working memory load (i.e., 2-back). The experimental group showed improvements in functional connections between different brain regions at the theta level; this effect was absent in the control group. Furthermore, brain hemispheric lateralization was found during the N-back task, with more intra-hemisphere than inter-hemisphere connections. These results suggest that healthy participants can benefit from neurofeedback and that their brain networks change after the training.
Affiliation(s)
- Rab Nawaz
- Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, Kampar 31900, Malaysia
- Biomedical Engineering Research Division, University of Glasgow, Glasgow G12 8QQ, UK
- Guilherme Wood
- Department of Psychology, University of Graz, Universitaetsplatz 2/III, 8010 Graz, Austria
- BioTechMed-Graz, 8010 Graz, Austria
- Humaira Nisar
- Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, Kampar 31900, Malaysia
- Centre for Healthcare Science and Technology, Universiti Tunku Abdul Rahman, Sungai Long 31900, Malaysia
- Correspondence:
- Vooi Voon Yap
- Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, Kampar 31900, Malaysia
- Department of Computer Science, Aberystwyth University, Penglais SY23 3FL, UK
5.
Washabaugh EP, Shanmugam TA, Ranganathan R, Krishnan C. Comparing the accuracy of open-source pose estimation methods for measuring gait kinematics. Gait Posture 2022; 97:188-195. PMID: 35988434; DOI: 10.1016/j.gaitpost.2022.08.008.
Abstract
BACKGROUND Open-source pose estimation is rapidly reducing the costs associated with motion capture, as machine learning partially eliminates the need for specialized cameras and equipment. This technology could be particularly valuable for clinical gait analysis, which is often performed qualitatively due to the prohibitive cost and setup required for conventional, marker-based motion capture. RESEARCH QUESTION How do open-source pose estimation software packages compare in their ability to measure kinematics and spatiotemporal gait parameters for gait analysis? METHODS This analysis used an existing dataset containing video and synchronous motion capture data from 32 able-bodied participants while walking. Sagittal-plane videos were analyzed using pre-trained algorithms from four open-source pose estimation methods (OpenPose, Tensorflow MoveNet Lightning, Tensorflow MoveNet Thunder, and DeepLabCut) to extract keypoints (i.e., landmarks) and calculate hip and knee kinematics and spatiotemporal gait parameters. The absolute error of each markerless method was computed against conventional marker-based optical motion capture, and errors were compared between pose estimation methods using statistical parametric mapping. RESULTS Pose estimation methods differed in their ability to measure kinematics. OpenPose and Tensorflow MoveNet Thunder were most accurate for hip kinematics, averaging 3.7 ± 1.3 deg and 4.6 ± 1.8 deg (mean ± SD) of error over the gait cycle, respectively. OpenPose was most accurate for knee kinematics, averaging 5.1 ± 2.5 deg of error over the gait cycle. MoveNet Thunder and OpenPose had the lowest errors for spatiotemporal gait parameters and were not statistically different from one another.
SIGNIFICANCE The results indicate that OpenPose significantly outperforms the other platforms for pose estimation of healthy gait kinematics and spatiotemporal gait parameters, and could serve as an alternative to conventional motion capture in clinical and research settings when marker-based systems are not available.
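The error metric described above (mean absolute error of a joint-angle trace relative to marker-based motion capture) can be sketched in a few lines; the function names and toy traces below are illustrative, not taken from the authors' code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c,
    e.g., hip-knee-ankle for a knee angle."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def mean_absolute_error(markerless_deg, mocap_deg):
    """Mean absolute error (deg) between two joint-angle traces
    sampled over the same gait cycle."""
    return float(np.mean(np.abs(np.asarray(markerless_deg) - np.asarray(mocap_deg))))

# Toy example: a short knee-angle trace from pose estimation vs. motion capture.
mocap = [170.0, 150.0, 120.0, 140.0, 165.0]
pose = [168.0, 154.0, 115.0, 143.0, 160.0]
print(mean_absolute_error(pose, mocap))  # 3.8 deg for this toy trace
```

A per-frame version of the same comparison, run over keypoints rather than precomputed angles, is what a statistical parametric mapping analysis would then operate on.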
Affiliation(s)
- Edward P Washabaugh
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA; Department of Physical Medicine and Rehabilitation, Michigan Medicine, Ann Arbor, MI, USA
- Rajiv Ranganathan
- Department of Kinesiology, Michigan State University, East Lansing, MI, USA
| | - Chandramouli Krishnan
- Department of Physical Medicine and Rehabilitation, Michigan Medicine, Ann Arbor, MI, USA; Michigan Robotics Institute, University of Michigan, Ann Arbor, MI, USA; Department of Physical Therapy, College of Health Sciences, University of Michigan-Flint, Flint, MI, USA.
6.
Rea DJ, Seo SH. Still Not Solved: A Call for Renewed Focus on User-Centered Teleoperation Interfaces. Front Robot AI 2022; 9:704225. PMID: 35425813; PMCID: PMC9002051; DOI: 10.3389/frobt.2022.704225.
Abstract
Teleoperation is one of the oldest applications of human-robot interaction, yet decades later, robots are still difficult to control in a variety of situations, especially for non-expert operators. That difficulty has relegated teleoperation to mostly expert-level use cases, though everyday jobs and lives could benefit from teleoperated robots by enabling people to get tasks done remotely. Research has made great progress by improving the capabilities of robots and exploring a variety of interfaces to improve operator performance, but many non-expert applications of teleoperation remain limited by the operator's ability to understand and control the robot effectively. We review the state of the art of user-centered research on teleoperation interfaces, outline the challenges teleoperation researchers face, and discuss how an increased focus on human-centered teleoperation research can help push teleoperation into more everyday situations.
Affiliation(s)
- Daniel J. Rea
- Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada
- Correspondence: Daniel J. Rea
- Stela H. Seo
- Department of Social Informatics, Kyoto University, Kyoto, Japan
7.
Frame ME, Maresca AM, Boydstun AS, Warren R. Impact of video content and resolution on the cognitive dynamics of surveillance decision-making. Journal of Behavioral Decision Making 2021. DOI: 10.1002/bdm.2267.
Affiliation(s)
- Mary E. Frame
- RDT&E Parallax Advanced Research Beavercreek Ohio USA
- Rik Warren
- 711th Human Performance Wing Air Force Research Laboratory Wright Patterson AFB Ohio USA
8.
Martinez-Garcia M, Kalawsky RS, Gordon T, Smith T, Meng Q, Flemisch F. Communication and Interaction With Semiautonomous Ground Vehicles by Force Control Steering. IEEE Transactions on Cybernetics 2021; 51:3913-3924. PMID: 32966229; DOI: 10.1109/tcyb.2020.3020217.
Abstract
While full automation of road vehicles remains a future goal, shared control and semiautonomous driving, involving transitions of control between the human and the machine, are more feasible objectives in the near term. These alternative driving modes will benefit from new research on novel steering control devices, most suitably where machine intelligence only partially controls the vehicle. In this article, it is proposed that when the human shares control of a vehicle with an autonomous or semiautonomous system, a force control, or nondisplacement, steering wheel (i.e., a steering wheel that does not rotate but detects the torque applied by the human driver) can be advantageous under certain schemes: tight-rein or loose-rein modes according to the H-metaphor. We support this proposition with, to the best of our knowledge, the first experiments in which human participants drove in a simulated road scene with a force control steering wheel (FCSW). The experiments showed that humans can adapt promptly to force control steering and are able to control the vehicle smoothly. Different transfer functions were tested, which translate the torque applied at the FCSW to the steering angle at the vehicle's wheels; it is shown that fractional-order transfer functions improve steering stability and control accuracy when using a force control device. Transition-of-control experiments were also performed with both a conventional steering wheel and the FCSW. This prototypical steering system can be realized via steer-by-wire controls, which are already incorporated in commercially available vehicles.
9.
Eftekharifar S, Thaler A, Bebko AO, Troje NF. The role of binocular disparity and active motion parallax in cybersickness. Exp Brain Res 2021; 239:2649-2660. PMID: 34216232; DOI: 10.1007/s00221-021-06124-6.
Abstract
Cybersickness is an enduring problem for users of virtual environments. While it is generally assumed that cybersickness is caused by discrepancies in perceived self-motion between the visual and vestibular systems, little is known about the relative contributions of active motion parallax and binocular disparity to the occurrence of cybersickness. We investigated the role of these two depth cues in cybersickness by simulating a roller-coaster ride using a head-mounted display. Participants could see the tracks via a virtual frame placed at the front of the roller-coaster cart. We manipulated the state of the frame so that it behaved like: (1) a window into the virtual scene, (2) a 2D screen, (3) and (4) a window for one of the two depth cues and a 2D screen for the other. Participants completed the Simulator Sickness Questionnaire before and after the experiment, and verbally reported their level of discomfort at repeated intervals during the ride. Additionally, participants' electrodermal activity (EDA) was recorded. The questionnaire results and the continuous ratings revealed the largest increase in cybersickness when the frame behaved like a window, and the smallest increase when it behaved like a 2D screen. Cybersickness scores were at an intermediate level in the conditions where the frame simulated only one depth cue. This suggests that neither active motion parallax nor binocular disparity had a more prominent effect on the severity of cybersickness. EDA responses increased at about the same rate in all conditions, suggesting that EDA is not necessarily coupled with subjectively experienced cybersickness.
Affiliation(s)
- Anne Thaler
- Centre for Vision Research & Department of Biology, York University, Toronto, ON, Canada
- Adam O Bebko
- Centre for Vision Research & Department of Biology, York University, Toronto, ON, Canada
- Nikolaus F Troje
- Centre for Vision Research & Department of Biology, York University, Toronto, ON, Canada
10.
Martínez-García M, Zhang Y, Gordon T. Memory Pattern Identification for Feedback Tracking Control in Human-Machine Systems. Human Factors 2021; 63:210-226. PMID: 31647885; DOI: 10.1177/0018720819881008.
Abstract
OBJECTIVE The aim of this paper was to identify the characteristics of memory patterns with respect to a visual input, perceived by the human operator during a manual control task, which consisted of following a moving target on a display with a cursor. BACKGROUND Manual control tasks involve nondeclarative memory. The memory encodings of different motor skills have been referred to as procedural memories. Procedural memories have a pattern, which this paper sought to identify for the particular case of a one-dimensional tracking task. Specifically, data recorded from human subjects controlling dynamic systems of different fractional order were investigated. METHOD A finite impulse response (FIR) controller was fitted to the data, and pattern analysis of the fitted parameters was performed. The FIR model was then reduced to a lower-order controller; from the simplified model, a stability analysis of the closed-loop human-machine system was conducted. RESULTS It is shown that the FIR model can be used to identify and represent patterns in human procedural memories during manual control tasks. The obtained procedural memory pattern has a time scale of about 650 ms before decay. Furthermore, the fitted controller is stable for systems with fractional order less than or equal to 1. CONCLUSION For systems of different fractional order, the proposed control scheme, based on an FIR model, can effectively characterize the linear properties of manual control in humans. APPLICATION This research supports a biofidelic approach to modeling human manual control based on visual feedback. Relevant applications include the development of shared-control systems, where a virtual human model assists the human during a control task, and human operator state monitoring.
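Fitting an FIR model to tracking data, as this abstract describes, amounts to a least-squares regression from the recent error history onto the operator's output. A minimal sketch (names and the synthetic data are illustrative, not the authors' implementation):

```python
import numpy as np

def fit_fir_controller(error_signal, control_output, n_taps=20):
    """Fit FIR weights h mapping the recent tracking-error history to the
    operator's control output via least squares: u[t] ~ sum_k h[k] * e[t-k].
    The fitted h is one way to expose a 'memory pattern' in the data."""
    e = np.asarray(error_signal, float)
    u = np.asarray(control_output, float)
    # Each row holds the last n_taps error samples, most recent first.
    X = np.array([e[t - n_taps + 1:t + 1][::-1] for t in range(n_taps - 1, len(e))])
    h, *_ = np.linalg.lstsq(X, u[n_taps - 1:], rcond=None)
    return h

# Synthetic check: data generated by a known 3-tap response is recovered.
rng = np.random.RandomState(0)
e = rng.randn(500)
h_true = np.array([0.5, 0.3, 0.2])
u = np.convolve(e, h_true)[:500]
print(fit_fir_controller(e, u, n_taps=3))  # ~ [0.5, 0.3, 0.2]
```

With real operator data the fit is noisy, and the decay of the fitted taps over lag (e.g., the ~650 ms time scale reported above) is the quantity of interest.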
11.
Farshchi M, Kiba A, Sawada T. Seeing our 3D world while only viewing contour-drawings. PLoS One 2021; 16:e0242581. PMID: 33481778; PMCID: PMC7822326; DOI: 10.1371/journal.pone.0242581.
Abstract
Artists can represent a 3D object using only contours in a 2D drawing. Prior studies have shown that people can use such drawings to perceive 3D shapes reliably, but it is not clear how useful this kind of contour information actually is in a real dynamic scene in which people interact with objects. To address this issue, we developed an Augmented Reality (AR) device that can show a participant a contour-drawing or a grayscale-image of a real dynamic scene in an immersive manner. We compared people's performance in a variety of run-of-the-mill tasks with both contour-drawings and grayscale-images under natural viewing conditions in three behavioral experiments. The results showed that people could perform almost equally well with both types of images. Contour information may thus be sufficient to provide the basis for our visual system to obtain much of the 3D information needed for successful visuomotor interactions in everyday life.
Affiliation(s)
- Maddex Farshchi
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Alexandra Kiba
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Tadamasa Sawada
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
12.
Belinskaya A, Smetanin N, Lebedev MA, Ossadtchi A. Short-delay neurofeedback facilitates training of the parietal alpha rhythm. J Neural Eng 2020; 17. PMID: 33166941; DOI: 10.1088/1741-2552/abc8d7.
Abstract
OBJECTIVE Feedback latency has been shown to be a critical parameter in a range of applications that involve learning. The therapeutic effects of neurofeedback (NFB) remain controversial. We hypothesized that the often-encountered unreliable results of NFB interventions could be associated with large feedback latency values, which are often uncontrolled and may preclude efficient learning. APPROACH We engaged our subjects in a parietal alpha power upregulation paradigm facilitated by visual neurofeedback based on the individually extracted envelope of the alpha rhythm at the P4 electrode. NFB was displayed either as soon as the EEG envelope was processed, or with an extra 250- or 500-ms delay. The feedback training consisted of 15 two-minute blocks interleaved with 15-s pauses. We also recorded two-minute baselines immediately before and after the training. MAIN RESULTS The time course of NFB-induced changes in alpha rhythm power clearly depended on NFB latency, as shown with the adaptive Neyman test. NFB had a strong effect on the alpha-spindle incidence rate, but not on spindle duration or amplitude. The sustained changes in alpha activity measured after the completion of NFB training were negatively correlated with latency, with the maximum change for the shortest tested latency and no change for the longest. SIGNIFICANCE We show here, for the first time, that visual NFB of parietal electroencephalographic (EEG) alpha activity is efficient only when delivered to human subjects at short latency, which guarantees that NFB arrives while an alpha spindle is still ongoing. Such a considerable effect of NFB latency on the temporal structure of alpha activity could explain some previous inconsistent results, where latency was neither controlled nor documented. Clinical practitioners and manufacturers of NFB equipment should add latency to their specifications while enabling latency monitoring and supporting short-latency operation.
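The "individually extracted envelope of the alpha rhythm" can be illustrated with an offline, idealized sketch: band-limit the signal to the alpha range in the frequency domain and take the magnitude of the resulting analytic signal. A real-time NFB loop must instead use causal filtering, which is precisely what introduces the latency the study manipulates. Function and parameter names are illustrative, not from the authors' pipeline.

```python
import numpy as np

def alpha_envelope(eeg, fs, band=(8.0, 12.0)):
    """Offline alpha-envelope sketch: keep only positive-frequency FFT bins
    inside the alpha band (doubled, negative bins zeroed) to form a
    band-limited analytic signal, then return its magnitude."""
    x = np.asarray(eeg, float)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
    analytic = np.zeros_like(X)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    analytic[keep] = 2.0 * X[keep]
    return np.abs(np.fft.ifft(analytic))

# A pure 10 Hz unit-amplitude sine should yield an envelope of ~1 throughout.
fs = 250.0
t = np.arange(0, 4, 1.0 / fs)  # 4 s -> an integer number of 10 Hz cycles
env = alpha_envelope(np.sin(2 * np.pi * 10.0 * t), fs)
print(env.mean())  # ~1.0
```

Because the FFT uses the whole recording, this version is non-causal; the causal filters required online inevitably trade envelope accuracy against the short latencies the study shows to matter.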
Affiliation(s)
- Anastasia Belinskaya
- Centre for Bioelectric Interfaces, National Research University Higher School of Economics, Moscow, Russia
- Nikolai Smetanin
- Centre for Bioelectric Interfaces, National Research University Higher School of Economics, Moscow, Russia
- M A Lebedev
- Centre for Bioelectric Interfaces, National Research University Higher School of Economics, Moscow, Russia
- Alexei Ossadtchi
- Centre for Bioelectric Interfaces, National Research University Higher School of Economics, Moscow, Russia
13.
Abstract
Transferring skills and expertise to remote places without being present is a new challenge for our digitally interconnected society. People can experience and perform actions in distant places through a robotic agent, wearing immersive interfaces to feel physically present there. However, technological contingencies can affect human perception, compromising skill-based performance. Drawing on results from studies of human factors, a set of recommendations for the construction of immersive teleoperation systems is provided, followed by an example of the evaluation methodology. We developed a testbed to study perceptual issues that affect task performance while users manipulated the environment through either traditional or immersive interfaces. The analysis of effects on perception, navigation, and manipulation relies on performance measures and subjective responses. The goal is to mitigate the effect of factors such as system latency, field of view, frame of reference, and frame rate to achieve a sense of telepresence. By decoupling the flows of an immersive teleoperation system, we aim to understand how vision and interaction fidelity affect spatial cognition. Results show that misalignments between the frames of reference for vision and motor action, or the use of tools affecting the sense of body position or movement, have a greater effect on mental workload and spatial cognition.
14.
Attention Distribution While Detecting Conflicts between Converging Objects: An Eye-Tracking Study. Vision (Basel) 2020; 4:34. PMID: 32707819; PMCID: PMC7558710; DOI: 10.3390/vision4030034.
Abstract
In many domains, including air traffic control, observers have to detect conflicts between moving objects. However, it is unclear what the effect of conflict angle is on observers’ conflict detection performance. In addition, it has been speculated that observers use specific viewing techniques while performing a conflict detection task, but evidence for this is lacking. In this study, participants (N = 35) observed two converging objects while their eyes were recorded. They were tasked to continuously indicate whether a conflict between the two objects was present. Independent variables were conflict angle (30, 100, 150 deg), update rate (discrete, continuous), and conflict occurrence. Results showed that 30 deg conflict angles yielded the best performance, and 100 deg conflict angles the worst. For 30 deg conflict angles, participants applied smooth pursuit while attending to the objects. In comparison, for 100 and especially 150 deg conflict angles, participants showed a high fixation rate and glances towards the conflict point. Finally, the continuous update rate was found to yield shorter fixation durations and better performance than the discrete update rate. In conclusion, shallow conflict angles yield the best performance, an effect that can be explained using basic perceptual heuristics, such as the ‘closer is first’ strategy. Displays should provide continuous rather than discrete update rates.
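Conflict between two converging, constant-velocity objects, as in the task above, is commonly formalized via the closest point of approach (CPA): a conflict exists when the predicted minimum separation falls below a threshold. A minimal sketch (the threshold and names are illustrative, not the study's software):

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two constant-velocity
    2D objects; t_cpa is clamped to >= 0 (the future)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    dv2 = float(np.dot(dv, dv))
    t = 0.0 if dv2 == 0.0 else max(0.0, -float(np.dot(dp, dv)) / dv2)
    return t, float(np.linalg.norm(dp + t * dv))

def in_conflict(p1, v1, p2, v2, sep=5.0):
    """Flag a conflict if the predicted minimum separation drops below sep."""
    return closest_approach(p1, v1, p2, v2)[1] < sep

# Two objects converging head-on meet after 5 time units (separation 0).
print(closest_approach((0, 0), (1, 0), (10, 0), (-1, 0)))  # (5.0, 0.0)
```

The conflict angle manipulated in the study corresponds to the angle between the two velocity vectors; the geometry, not the arithmetic, is what makes some angles perceptually harder for observers.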
15.
Morales-Bader D, Castillo RD, Olivares C, Miño F. How Do Object Shape, Semantic Cues, and Apparent Velocity Affect the Attribution of Intentionality to Figures With Different Types of Movements? Front Psychol 2020; 11:935. PMID: 32477225; PMCID: PMC7242622; DOI: 10.3389/fpsyg.2020.00935.
Abstract
A series of experiments shows that the attribution of intentionality to figures depends on the interaction between the type of movement (Theory of Mind, ToM; Goal-Directed, GD; Random, R), the presence of human attributes, the way the figures are labeled, and their apparent velocity. In addition, the effect of these conditions, or their interaction, varies when the use of human nouns in participants' responses is statistically controlled. In Experiment 1, one group of participants observed triangular figures (n = 46) and another observed humanized figures, called Stickman figures (n = 38). For ToM movements, participants attributed more intentionality to triangular figures than to Stickman figures; for R movements, the opposite trend was observed. In Experiment 2 (n = 42), triangular figures were presented as if they were people and compared to the triangular figures of Experiment 1. When the figures were labeled as people, the attribution of intentionality increased only for R and GD movements, not for ToM movements. Finally, in Experiment 3, Stickman figures (n = 45) moved at a higher (unnatural) speed, with more frames per second (fps), than the Stickman figures of Experiment 1. This manipulation decreased the attribution of intentionality for R and GD movements but not for ToM movements. In general terms, human attributes and labels promoted the use of human nouns in participants' responses, while a high apparent speed reduced their use. The use of human nouns was significantly associated with intentionality scores for R movements, but to a lesser extent for GD and ToM movements. We conclude that, although the type of movement is the most important cue in this sort of task, the tendency to attribute intentionality to figures is affected by the interaction between perceptual and semantic cues (figure shape, label, and apparent speed).
Affiliation(s)
- Diego Morales-Bader
- Centro de Investigación en Ciencias Cognitivas, Facultad de Psicología, Universidad de Talca, Talca, Chile
- Ramón D Castillo
- Centro de Investigación en Ciencias Cognitivas, Facultad de Psicología, Universidad de Talca, Talca, Chile
- Charlotte Olivares
- Centro de Investigación en Ciencias Cognitivas, Facultad de Psicología, Universidad de Talca, Talca, Chile
- Francisca Miño
- Centro de Investigación en Ciencias Cognitivas, Facultad de Psicología, Universidad de Talca, Talca, Chile
| |
16
Reichert D, Erkkilä MT, Holst G, Hecker-Denschlag N, Wilzbach M, Hauger C, Drexler W, Gesperger J, Kiesel B, Roetzer T, Unterhuber A, Widhalm G, Leitgeb RA, Andreana M. Towards real-time wide-field fluorescence lifetime imaging of 5-ALA labeled brain tumors with multi-tap CMOS cameras. Biomed Opt Express 2020; 11:1598-1616. PMID: 32206431; PMCID: PMC7075617; DOI: 10.1364/boe.382817.
Abstract
Fluorescence-guided neurosurgery based on 5-aminolevulinic acid (5-ALA) has significantly increased maximal safe resections. Fluorescence lifetime imaging (FLIM) of 5-ALA could further boost this development through its increased sensitivity. However, neurosurgeons require real-time visual feedback, which has so far been limited in dual-tap CMOS camera-based FLIM. By optimizing the number of phase frames required for reconstruction, we here demonstrate real-time 5-ALA FLIM of human high- and low-grade glioma at imaging rates of up to 12 Hz over a wide field of view (11.0 × 11.0 mm). Compared to conventional fluorescence imaging, real-time FLIM offers enhanced contrast for weakly fluorescent tissue.
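The reported speed-up comes from cutting the number of phase frames needed per reconstruction. As background, frequency-domain FLIM recovers the lifetime from the phase shift of the modulated fluorescence signal; a minimal sketch of that standard computation for N equally spaced phase frames (an illustration of the general technique, not the authors' implementation):

```python
import numpy as np

def phase_lifetime(frames, f_mod):
    """Estimate a per-pixel fluorescence lifetime map from N equally
    spaced phase frames (frequency-domain FLIM).

    frames: array of shape (N, H, W), intensity images sampled at phase
            steps 2*pi*k/N for k = 0..N-1
    f_mod:  modulation frequency in Hz
    Returns the phase lifetime tau = tan(phi) / (2*pi*f_mod) in seconds.
    """
    n = frames.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    # First-harmonic DFT coefficients over the phase-step dimension
    g = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
    s = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
    phi = np.arctan2(s, g)  # phase shift of the fluorescence signal
    return np.tan(phi) / (2 * np.pi * f_mod)
```

Fewer phase frames mean fewer camera exposures per lifetime map, which is the lever for reaching real-time rates; three frames is the minimum needed to solve for offset, amplitude, and phase.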
Affiliation(s)
- David Reichert: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria; Medical University of Vienna, Christian Doppler Laboratory OPTRAMED, 1090 Vienna, Austria (these authors contributed equally to this work)
- Mikael T. Erkkilä: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria (these authors contributed equally to this work)
- Gerhard Holst: PCO AG, Science and Research, 93309 Kelheim, Germany
- Wolfgang Drexler: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria
- Johanna Gesperger: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria; General Hospital and Medical University of Vienna, Institute of Neurology, 1090 Vienna, Austria
- Barbara Kiesel: General Hospital and Medical University of Vienna, Department of Neurosurgery, 1090 Vienna, Austria
- Thomas Roetzer: General Hospital and Medical University of Vienna, Institute of Neurology, 1090 Vienna, Austria
- Angelika Unterhuber: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria
- Georg Widhalm: General Hospital and Medical University of Vienna, Department of Neurosurgery, 1090 Vienna, Austria
- Rainer A. Leitgeb: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria; Medical University of Vienna, Christian Doppler Laboratory OPTRAMED, 1090 Vienna, Austria
- Marco Andreana: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, 1090 Vienna, Austria
17
Carlier S, Van der Paelt S, Ongenae F, De Backere F, De Turck F. Empowering Children with ASD and Their Parents: Design of a Serious Game for Anxiety and Stress Reduction. Sensors 2020; 20(4):966. PMID: 32054025; PMCID: PMC7070716; DOI: 10.3390/s20040966.
Abstract
Autism Spectrum Disorder (ASD) is characterized by difficulties with social interaction and communication. Moreover, children with ASD often suffer from co-morbidities such as anxiety and depression, and finding appropriate treatment can be difficult because symptoms of ASD and its co-morbidities often overlap. Due to these challenges, parents of children with ASD often suffer from elevated levels of stress. This research investigates the feasibility of empowering children with ASD and their parents through a serious game for stress and anxiety reduction and a supporting parent application. The New Horizon game and the SpaceControl application were developed together with therapists and according to guidelines for e-health patient empowerment. The game incorporates two mini-games with relaxation techniques. The performance of the game was analyzed, and usability studies with three families were conducted. Parents and children were asked to fill in the Spence Children's Anxiety Scale (SCAS) and its parent version (SCAS-P). The game shows potential for stress and anxiety reduction in children with ASD.
Affiliation(s)
- Stéphanie Carlier (correspondence): IDLab, iGent Tower—Department of Information Technology, Ghent University—imec, Technologiepark-Zwijnaarde 126, B-9052 Ghent, Belgium
- Sara Van der Paelt: Department of Experimental Clinical and Health Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
- Femke Ongenae: IDLab, iGent Tower—Department of Information Technology, Ghent University—imec, Technologiepark-Zwijnaarde 126, B-9052 Ghent, Belgium
- Femke De Backere: IDLab, iGent Tower—Department of Information Technology, Ghent University—imec, Technologiepark-Zwijnaarde 126, B-9052 Ghent, Belgium
- Filip De Turck: IDLab, iGent Tower—Department of Information Technology, Ghent University—imec, Technologiepark-Zwijnaarde 126, B-9052 Ghent, Belgium
18
Chan S, Li P, Locketz G, Salisbury K, Blevins NH. High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery. Comput Assist Surg (Abingdon) 2016; 21:85-101. DOI: 10.1080/24699322.2016.1189966.
Affiliation(s)
- Sonny Chan: Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Peter Li: Department of Otolaryngology – Head and Neck Surgery, Stanford University, Stanford, CA, USA
- Garrett Locketz: Department of Otolaryngology – Head and Neck Surgery, Stanford University, Stanford, CA, USA
- Kenneth Salisbury: Departments of Computer Science and Surgery, Stanford University, Stanford, CA, USA
- Nikolas H. Blevins: Department of Otolaryngology – Head and Neck Surgery, Stanford University, Stanford, CA, USA
19
Tran JJ, Riskin EA, Ladner RE, Wobbrock JO. Evaluating Intelligibility and Battery Drain of Mobile Sign Language Video Transmitted at Low Frame Rates and Bit Rates. ACM Trans Access Comput 2015. DOI: 10.1145/2797142.
Abstract
Mobile sign language video conversations can become unintelligible if high video transmission rates cause network congestion and delayed video. In an effort to understand the perceived lower limits of intelligible sign language video intended for mobile communication, we evaluated sign language video transmitted at four low frame rates (1, 5, 10, and 15 frames per second [fps]) and four low fixed bit rates (15, 30, 60, and 120 kilobits per second [kbps]) at a constant spatial resolution of 320 × 240 pixels. We discovered an "intelligibility ceiling effect," in which increasing the frame rate above 10 fps did not improve perceived intelligibility, and increasing the bit rate above 60 kbps produced diminishing returns. Given the study parameters, our findings suggest that relaxing the recommended frame rate and bit rate to 10 fps at 60 kbps will provide intelligible video conversations while reducing total bandwidth consumption to 25% of the ITU-T standard (at least 25 fps and 100 kbps). As part of this work, we developed the Human Signal Intelligibility Model, a new conceptual model useful for informing evaluations of video intelligibility, and our methodology for creating linguistically accessible web surveys for deaf people. We also conducted a battery-savings experiment quantifying battery drain when sign language video is transmitted at the lower frame rates and bit rates. Results confirmed that increasing the transmission rates monotonically decreased the battery life.
20
He L, Ming X, Liu Q. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices. J Med Syst 2014; 38:44. PMID: 24705800; DOI: 10.1007/s10916-014-0044-y.
Abstract
With computing capability and display size growing, mobile devices are increasingly used to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical application that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. The proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction, and direct volume rendering, to provide shape, brightness, depth, and location information generated from the original sectional images. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Performance issues regarding remote 3D visualization of medical images over the wireless network were also discussed. The results demonstrated that the proposed application provides a smooth interactive experience in WLAN and 3G networks.
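The network-adaptive step can be pictured as a policy that maps measured throughput to remote-render parameters: lower resolution and compression quality on slow links, higher on fast ones. A minimal sketch of such a policy (the thresholds and parameter names are illustrative assumptions, not values from the paper):

```python
def adapt_render_params(throughput_kbps):
    """Map measured network throughput to remote-render parameters.

    A simple threshold scheme: degrade image resolution, JPEG quality,
    and frame-rate cap on congested links, and restore them when the
    measured bandwidth allows. All values are illustrative.
    """
    if throughput_kbps < 500:      # congested 3G-class link
        return {"resolution": 256, "jpeg_quality": 40, "fps_cap": 10}
    elif throughput_kbps < 2000:   # moderate link
        return {"resolution": 512, "jpeg_quality": 60, "fps_cap": 15}
    else:                          # WLAN-class link
        return {"resolution": 1024, "jpeg_quality": 85, "fps_cap": 30}
```

In a remote-rendering loop, the client would periodically measure throughput and send the updated parameters to the render server, trading image fidelity for interactive frame rates when the network degrades.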
Affiliation(s)
- Longjun He: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China
21
Chan S, Conti F, Salisbury K, Blevins NH. Virtual Reality Simulation in Neurosurgery. Neurosurgery 2013; 72 Suppl 1:154-64. DOI: 10.1227/neu.0b013e3182750d26.
22
Chen JYC, Barnes MJ, Harper-Sciarini M. Supervisory Control of Multiple Robots: Human-Performance Issues and User-Interface Design. IEEE Trans Syst Man Cybern C Appl Rev 2011. DOI: 10.1109/tsmcc.2010.2056682.
23
Prewett MS, Johnson RC, Saboe KN, Elliott LR, Coovert MD. Managing workload in human–robot interaction: A review of empirical studies. Comput Human Behav 2010. DOI: 10.1016/j.chb.2010.03.010.
24
V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat Biotechnol 2010; 28:348-53. PMID: 20231818; PMCID: PMC2857929; DOI: 10.1038/nbt.1612.
Abstract
The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. Combined with highly ergonomic features for selecting an X, Y, Z location of an image directly in 3D space and for visualizing overlays of a variety of surface objects, V3D streamlines the on-line analysis, measurement, and proofreading of complicated image patterns. V3D is cross-platform and can be enhanced by plug-ins. We built V3D-Neuron on top of V3D to reconstruct complex 3D neuronal structures from large brain images. V3D-Neuron enables us to precisely digitize the morphology of a single neuron in a fruit fly brain in minutes, with about 17-fold improvement in reliability and 10-fold savings in time compared to other neuron reconstruction tools. Using V3D-Neuron, we demonstrated the feasibility of building a 3D digital atlas of neurite tracts in the fruit fly brain.