1
Wang Y, Zhu R, Wang L, Xu Y, Guo D, Gao S. Improved VIDAR and machine learning-based road obstacle detection method. Array 2023. doi: 10.1016/j.array.2023.100283
2
Norman JF, Baig M, Graham JD, Lewis JL. Aging and the detection of moving objects defined by common fate. Sci Rep 2022; 12:20811. PMID: 36460782; PMCID: PMC9718786; doi: 10.1038/s41598-022-25456-z
Abstract
Grouping by common fate plays an important role in how human observers perceive environmental objects. In this study, the effect of aging upon the ability to utilize common fate was evaluated. Twenty-two younger and older adults (mean ages were 23.4 and 74.7 years, respectively) participated in two experiments. On any given trial, the participants sequentially viewed two apparent motion sequences and were required to indicate which temporal interval contained a coherently moving dotted line embedded in noisy random background motion. In Experiment 1, the number of dots defining the target was varied, while in Experiment 2, the target interpoint spacing was varied. The younger adults outperformed the older adults by 19.4 percent in Experiment 1 and 50.5 percent in Experiment 2. The older and younger adults were similarly affected by variations in the number of target dots and the target interpoint spacing. The individual older participants' object detection accuracies were highly correlated with their individual chronological ages, such that the performance of the younger old participants was much higher than that exhibited by the older old. Increases in age systematically affect the ability of older adults to detect and visually perceive objects defined by common fate.
Affiliation(s)
- J. Farley Norman
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, 1906 College Heights Blvd. #22030, Bowling Green, KY 42101-2030, USA; Center for Applied Science in Health and Aging, Western Kentucky University, Bowling Green, KY 42101-2030, USA
- Maheen Baig
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, KY 42101-2030, USA
- Jiali D. Graham
- Carol Martin Gatton Academy of Mathematics and Science, Bowling Green, KY, USA
- Jessica L. Lewis
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, 1906 College Heights Blvd. #22030, Bowling Green, KY 42101-2030, USA
3
Zhang H, Pan JS. Visual search as an embodied process: The effects of perspective change and external reference on search performance. J Vis 2022; 22:13. PMID: 36107125; PMCID: PMC9483234; doi: 10.1167/jov.22.10.13
Abstract
Traditional visual search tasks in the laboratory typically involve looking for targets in 2D displays with exemplar views of objects. In real life, visual search commonly entails 3D objects in 3D spaces with nonperpendicular viewing and relative motion between observers and search-array items, both of which transform objects' projected images in lawful but unpredictable ways. Furthermore, observers often do not have to memorize a target before searching, but may refer to it while searching, for example, holding a picture of someone while looking for them in a crowd. Extending the traditional visual search task, in this study we investigated the effects on visual search performance of image transformations resulting from perspective change, produced by a discrete viewing-angle change (Experiment 1) or continuous rotation of the search array (Experiment 2), and of having external references. Results showed that when searching among 3D objects with a non-zero viewing angle, performance was similar to searching among 2D exemplar views of objects; when searching for 3D targets in rotating arrays in virtual reality, performance was similar to searching in stationary arrays. In general, discrete or continuous perspective change did not affect the search outcomes in terms of accuracy, response time, and self-rated confidence, or the search process in terms of eye movement patterns. Therefore, visual search does not require an exact match of retinal images. Additionally, being able to see the target during the search improved search accuracy and observers' confidence. It increased search time because, as revealed by the eye movements, observers actively checked back on the reference target. Thus, visual search is an embodied process that involves real-time information exchange between the observers and the environment.
Affiliation(s)
- Huiyuan Zhang
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Jing Samantha Pan
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Guangzhou, China
4
Exploring the Visual Space Structure of Oil Painting Based on Visual Importance. Comput Intell Neurosci 2022; 2022:5112537. PMID: 36017451; PMCID: PMC9398719; doi: 10.1155/2022/5112537
Abstract
Research on visual-spatial language and the mastery of its fundamental concepts is extremely valuable for present and future design. Because a comprehensive research system for investigating the aesthetics of visual form and its significance has not yet been established, the research fields of spatial color and spatial structure relationships remain relatively underdeveloped. The visual forms of design are fundamentally affected by the diverse development of design thinking, and the expansion of design language becomes inevitable. Design thinking and methods are significantly shaped by the emphasis on digital technology in online media. Owing to the visual space of oil paintings, design is no longer restricted to the arrangement of two-dimensional planes: three-dimensional space and temporal organization structures are combined with visual elements, and content layout evolves from static layout to dynamic interactive layout. To better understand visual space and intuition, this study examines the structure of visual space in oil paintings, using art psychology and spatial structure as research cues. The experimental results demonstrate that the depth-aware image quality evaluation algorithm in this study consumes 12 ms and 15 ms more time than the MSEPF+30 and MSEPF+50 algorithms, respectively, while the time consumption per particle increases by 0.08 ms and 0.04 ms, respectively; the algorithm can nevertheless meet real-time requirements. This study is thus expected to provide more comprehensive and detailed data, as well as theoretical underpinnings, for the study of painting art.
5
Norman JF, Shapiro HK, Sanders KN, Sher AF. Aging and the perception of texture-defined form. Vision Res 2021; 187:1-5. PMID: 34091366; doi: 10.1016/j.visres.2021.05.009
Abstract
In this study, 28 younger and older observers discriminated the global shapes of objects that were defined by differences in texture. The judged stimulus patterns were 3-point micropattern textures. On any given trial, a texture-defined shape (either a vertically or horizontally oriented rectangle) was presented; the observers' task was to discriminate between the two rectangles. Task difficulty was manipulated by varying the deviation from colinearity of the individual 3-point texture elements between figure and background (the larger the difference in deviation between figure and ground, the higher the discrimination performance). The results revealed a substantial effect of age: in order for the older observers to reliably discriminate the shape of the target rectangle (with a d' value of 1.5), they needed differences from colinearity that were 54.4 percent larger than those required for the younger observers. While older adults can utilize differences in texture to perceive global shape, their ability is nevertheless significantly impaired.
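The d' threshold cited above comes from signal detection theory. As a hedged illustration (not the authors' actual analysis code), d' for a yes/no discrimination can be computed from hit and false-alarm rates as the difference of their z-transforms:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Rates must lie strictly between 0 and 1; in practice, extreme
    rates are usually nudged (e.g., 1 -> 1 - 1/(2N)) before use.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Example: a hit rate of 0.84 with a false-alarm rate of 0.16 yields
# d' of about 1.99, just above the d' = 1.5 criterion used in the study.
sensitivity = d_prime(0.84, 0.16)
```

An unbiased observer performing at chance (hit rate = false-alarm rate) gets d' = 0, so the 1.5 criterion marks clearly above-chance discrimination.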
Affiliation(s)
- J Farley Norman
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, Kentucky, USA; Center for Applied Science in Health & Aging, Western Kentucky University, Bowling Green, Kentucky, USA.
- Hannah K Shapiro
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, Kentucky, USA
- Karli N Sanders
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, Kentucky, USA
- Abdallah F Sher
- Carol Martin Gatton Academy of Mathematics and Science, Bowling Green, Kentucky, USA
6
Tian Q, Xin G, Lim KS, He Y, Liu J, Ahmad H, Liu X, Yang H. Cascaded Fabry-Perot interferometer-regenerated fiber Bragg grating structure for temperature-strain measurement under extreme temperature conditions. Opt Express 2020; 28:30478-30488. PMID: 33115048; doi: 10.1364/oe.403716
Abstract
We demonstrated an optical fiber sensor based on a cascaded fiber Fabry-Perot interferometer (FPI)-regenerated fiber Bragg grating (RFBG) for simultaneous measurement of temperature and strain in high-temperature environments. The FPI is manufactured from a ∼74 µm long hollow core silica tube (HCST) sandwiched between two single-mode fibers (SMFs). The RFBG is inscribed in one of the SMF arms, which is embedded inside an alundum tube, making it insensitive to strain applied to the fiber sensor; the temperature and strain recovery process can therefore be described using the strain-free RFBG instead of a characteristic dual-parameter matrix. This feature provides thermal compensation for the FPI structure, which is sensitive to both temperature and strain. In the characterization tests, the proposed device exhibited a temperature sensitivity of ∼18.01 pm/°C in the range of 100 °C - 1000 °C and an excellent linear response to strain in the range of 300 °C - 1000 °C. The measured strain sensitivity is as high as ∼2.17 pm/µɛ for a detection range from 0 µɛ to 450 µɛ at 800 °C, which is ∼1.5 times that of an FPI-RFBG without the alundum tube.
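A hedged sketch of the thermal-compensation idea described above: the strain-free RFBG reading yields temperature directly, and that temperature estimate removes the thermal part of the FPI shift, leaving strain. The FPI sensitivities (≈18.01 pm/°C and ≈2.17 pm/µɛ) are taken from the abstract; the RFBG temperature sensitivity used here (15 pm/°C) is an assumed placeholder, not a value from the paper:

```python
# Decoupling temperature and strain from a cascaded FPI-RFBG sensor.
# The RFBG is shielded by the alundum tube, so its wavelength shift
# depends on temperature alone.

S_FPI_T = 18.01   # FPI temperature sensitivity, pm/degC (from the abstract)
S_FPI_E = 2.17    # FPI strain sensitivity, pm/microstrain (from the abstract)
S_RFBG_T = 15.0   # RFBG temperature sensitivity, pm/degC (assumed placeholder)

def recover(d_fpi_pm: float, d_rfbg_pm: float) -> tuple[float, float]:
    """Return (temperature change in degC, strain in microstrain)."""
    delta_t = d_rfbg_pm / S_RFBG_T  # RFBG senses temperature only
    # Subtract the thermal contribution from the FPI shift to isolate strain.
    strain = (d_fpi_pm - S_FPI_T * delta_t) / S_FPI_E
    return delta_t, strain

# A 10 degC rise plus 100 microstrain shifts the FPI by
# 18.01*10 + 2.17*100 = 397.1 pm and the RFBG by 15*10 = 150 pm.
dt, eps = recover(397.1, 150.0)
```

This is the sense in which the strain-free RFBG replaces a dual-parameter sensitivity matrix: one unknown is resolved independently, so no matrix inversion is needed.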
7
Galloway JAM, Green SD, Stevens M, Kelley LA. Finding a signal hidden among noise: how can predators overcome camouflage strategies? Philos Trans R Soc Lond B Biol Sci 2020; 375:20190478. PMID: 32420842; PMCID: PMC7331011; doi: 10.1098/rstb.2019.0478
Abstract
Substantial progress has been made in the past 15 years regarding how prey use a variety of visual camouflage types to exploit both predator visual processing and cognition, including background matching, disruptive coloration, countershading and masquerade. By contrast, much less attention has been paid to how predators might overcome these defences. Such strategies include the evolution of more acute senses, the co-opting of other senses not targeted by camouflage, changes in cognition such as forming search images, and using behaviours that change the relationship between the cryptic individual and the environment or disturb prey and cause movement. Here, we evaluate the methods through which visual camouflage prevents detection and recognition, and discuss if and how predators might evolve, develop or learn counter-adaptations to overcome these. This article is part of the theme issue ‘Signal detection theory in recognition systems: from evolving models to experimental tests'.
Affiliation(s)
- James A M Galloway
- Centre for Ecology and Conservation, University of Exeter (Penryn Campus), Cornwall TR10 9FE, UK
- Samuel D Green
- Centre for Ecology and Conservation, University of Exeter (Penryn Campus), Cornwall TR10 9FE, UK
- Martin Stevens
- Centre for Ecology and Conservation, University of Exeter (Penryn Campus), Cornwall TR10 9FE, UK
- Laura A Kelley
- Centre for Ecology and Conservation, University of Exeter (Penryn Campus), Cornwall TR10 9FE, UK
8
Dutta A, Lev-Ari T, Barzilay O, Mairon R, Wolf A, Ben-Shahar O, Gutfreund Y. Self-motion trajectories can facilitate orientation-based figure-ground segregation. J Neurophysiol 2020; 123:912-926. PMID: 31967932; doi: 10.1152/jn.00439.2019
Abstract
Segregation of objects from the background is a basic and essential property of the visual system. We studied the neural detection of objects defined by orientation difference from the background in barn owls (Tyto alba). We presented wide-field displays of densely packed stripes with a dominant orientation. Visual objects were created by orienting a circular patch differently from the background. In head-fixed conditions, neurons in both the tecto- and thalamofugal visual pathways (optic tectum and visual Wulst) were weakly responsive to these objects in their receptive fields. Notably, however, in freely viewing conditions barn owls occasionally perform peculiar side-to-side head motions (peering) when scanning the environment. In the second part of the study we therefore recorded neural responses from head-fixed owls while the visual displays replicated the peering conditions; i.e., the displays (objects and backgrounds) were shifted along trajectories that induced retinal motion identical to that produced by sampled peering motions during viewing of a static object. These conditions induced dramatic neural responses to the objects, in the very same neurons that were unresponsive to the objects in static displays. By reverting to circular motions of the display, we show that the pattern of the neural response is shaped mostly by the orientation of the background relative to the motion, not by the orientation of the object. Thus our findings provide evidence that peering and/or other self-motions can facilitate orientation-based figure-ground segregation through interaction with inhibition from the surround.

NEW & NOTEWORTHY
Animals frequently move their sensory organs and thereby create motion cues that can enhance object segregation from the background. We address a special example of such active sensing in barn owls. When scanning the environment, barn owls occasionally perform small-amplitude side-to-side head movements called peering. We show that the visual outcome of such peering movements elicits neural detection of objects that are rotated from the dominant orientation of the background scene and which are otherwise mostly undetected. These results suggest a novel role for self-motion in sensing objects that break the regular orientation of elements in the scene.
Affiliation(s)
- Arkadeb Dutta
- The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Haifa, Israel
- Tidhar Lev-Ari
- The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Haifa, Israel
- Ouriel Barzilay
- Faculty of Mechanical Engineering, The Technion, Haifa, Israel
- Rotem Mairon
- Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Alon Wolf
- Faculty of Mechanical Engineering, The Technion, Haifa, Israel
- Ohad Ben-Shahar
- Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- The Zlotowski Center for Neuroscience Research, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yoram Gutfreund
- The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Haifa, Israel
9

10
Yi X, Song G, Derong T, Dong G, Liang S, Yuqiong W. Fast road obstacle detection method based on maximally stable extremal regions. Int J Adv Robot Syst 2018. doi: 10.1177/1729881418759118
Abstract
Road obstacle detection is an important component of advanced driver assistance systems, and improving the speed and accuracy of road obstacle detection methods is a vital task. In this article, a fast image region-matching method based on maximally stable extremal regions (MSER) is proposed to improve the speed of image matching. The theoretical feasibility of a detection method combining a monocular camera with an inertial measurement unit (IMU) is established. A fast MSER-based road obstacle detection method, combining the fast image region-matching method with the vision-IMU-based obstacle detection method, is proposed to bypass obstacle classification and to reduce the time and space complexity of road environment perception. The AdaBoost cascade detector, a speeded-up robust features (SURF)-based obstacle detection method, and the proposed method are used to detect obstacles in outdoor contrast tests. The test results show that the proposed method has higher accuracy and faster processing speed than the other two methods, and the reasons for both advantages are analyzed.
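The abstract does not give implementation details, but the core idea of fast region matching across frames can be sketched as greedy nearest-centroid pairing of detected stable regions, filtered by area similarity. This is an illustrative reconstruction under stated assumptions (each region reduced to a centroid plus area; thresholds chosen arbitrarily), not the authors' algorithm:

```python
import math

# A detected region reduced to (cx, cy, area) -- an assumed simplification.
Region = tuple[float, float, float]

def match_regions(prev: list[Region], curr: list[Region],
                  max_dist: float = 30.0,
                  max_area_ratio: float = 1.5) -> list[tuple[int, int]]:
    """Greedily pair regions across two frames by centroid distance.

    A pair is accepted only if the centroids lie within max_dist pixels
    and the larger area is at most max_area_ratio times the smaller,
    avoiding exhaustive descriptor matching.
    """
    pairs: list[tuple[int, int]] = []
    used: set[int] = set()
    for i, (x1, y1, a1) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (x2, y2, a2) in enumerate(curr):
            if j in used:
                continue
            if max(a1, a2) > max_area_ratio * min(a1, a2):
                continue  # areas too dissimilar to be the same region
            d = math.hypot(x1 - x2, y1 - y2)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

# Two regions drift slightly between frames; a third appears only in frame 2.
frame1 = [(10.0, 10.0, 100.0), (50.0, 40.0, 200.0)]
frame2 = [(12.0, 11.0, 105.0), (52.0, 41.0, 190.0), (90.0, 90.0, 80.0)]
matches = match_regions(frame1, frame2)
```

Matching on a handful of region statistics instead of dense features is what makes this style of region tracking cheap; unmatched regions in the new frame are candidate obstacles for the downstream vision-IMU check.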
Affiliation(s)
- Xu Yi
- School of Transportation and Vehicle Engineering, Shandong University of Technology, Shandong, China
- New Energy Automotive Engineering Research Institute, Shandong University of Technology, Shandong, China
- Gao Song
- School of Transportation and Vehicle Engineering, Shandong University of Technology, Shandong, China
- New Energy Automotive Engineering Research Institute, Shandong University of Technology, Shandong, China
- Tan Derong
- School of Transportation and Vehicle Engineering, Shandong University of Technology, Shandong, China
- Guo Dong
- School of Transportation and Vehicle Engineering, Shandong University of Technology, Shandong, China
- Sun Liang
- School of Transportation and Vehicle Engineering, Shandong University of Technology, Shandong, China
- Wang Yuqiong
- School of Transportation and Vehicle Engineering, Shandong University of Technology, Shandong, China