1. Identifying the “Dangshan” Physiological Disease of Pear Woolliness Response via Feature-Level Fusion of Near-Infrared Spectroscopy and Visual RGB Image. Foods 2023;12:1178. [PMID: 36981105; PMCID: PMC10048714; DOI: 10.3390/foods12061178]
Abstract
The “Dangshan” pear woolliness response is a physiological disease that causes large losses for fruit farmers and nutrient inadequacies. The cause of this disease is predominantly a shortage of boron and calcium in the pear and water loss from the pear. This paper detected the woolliness response disease of “Dangshan” pears using the feature-level fusion of near-infrared spectroscopy (NIRS) and computer vision technology (CVS). NIRS reflects information on organic matter containing hydrogen groups and other components in the various biochemical structures of the sample under test, and CVS captures image information on the disease. This study compares the results of different fusion models. The models combining spectral and image features performed better than other fusion strategies and better than single-feature models, and their performance varied with the image depth features selected for fusion modeling. The results of fusion modeling with different image depth features were therefore compared further, showing that the deeper the network in this study, the better the extracted image features fused with the spectral features. The best combination was the MLP classification model fused with the NIR spectral features and the image features extracted by the Xception convolutional neural classification network: its accuracy (0.972), precision (0.974), recall (0.972), and F1 score (0.972) were the highest of all the models.
This article illustrates that detection accuracy for the “Dangshan” pear woolliness response disease may be considerably enhanced by fusing near-infrared spectra with image-based neural network features. It also provides a theoretical basis for nondestructive detection techniques combining spectra and images.
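As a rough illustration of the feature-level fusion strategy this abstract describes, the sketch below concatenates a spectral feature vector with CNN-derived image features and trains a single MLP classifier on the fused representation. All names, shapes, and values are hypothetical stand-ins (random matrices in place of real NIR spectra and Xception-layer activations), not the paper's pipeline or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 100 fruit samples, a 50-point NIR spectral
# feature vector, and 128 image features from a CNN's deep layers.
nir_features = rng.normal(size=(100, 50))
image_features = rng.normal(size=(100, 128))
labels = rng.integers(0, 2, size=100)  # 0 = healthy, 1 = woolliness response

# Feature-level fusion: concatenate the two feature vectors per sample
# and train one classifier on the fused representation.
fused = np.concatenate([nir_features, image_features], axis=1)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(fused, labels)
print(fused.shape)  # (100, 178)
```

In the paper's best configuration the image features came from an Xception network; here the random matrix merely marks where those extracted features would enter the fusion.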
2. Lauha P, Somervuo P, Lehikoinen P, Geres L, Richter T, Seibold S, Ovaskainen O. Domain-specific neural networks improve automated bird sound recognition already with small amount of local data. Methods Ecol Evol 2022. [DOI: 10.1111/2041-210x.14003]
Affiliation(s)
- Patrik Lauha
- Organismal and Evolutionary Biology Research Programme, Faculty of Biological and Environmental Sciences, University of Helsinki, Helsinki, Finland
- Panu Somervuo
- Organismal and Evolutionary Biology Research Programme, Faculty of Biological and Environmental Sciences, University of Helsinki, Helsinki, Finland
- Petteri Lehikoinen
- Organismal and Evolutionary Biology Research Programme, Faculty of Biological and Environmental Sciences, University of Helsinki, Helsinki, Finland
- Lisa Geres
- Berchtesgaden National Park, Berchtesgaden, Germany
- Goethe University Frankfurt, Faculty of Biological Sciences, Institute for Ecology, Evolution and Diversity, Conservation Biology, Frankfurt am Main, Germany
- Tobias Richter
- Berchtesgaden National Park, Berchtesgaden, Germany
- TUM School of Life Sciences, Ecosystem Dynamics and Forest Management, Technical University of Munich, Freising, Germany
- Sebastian Seibold
- Berchtesgaden National Park, Berchtesgaden, Germany
- TUM School of Life Sciences, Ecosystem Dynamics and Forest Management, Technical University of Munich, Freising, Germany
- Otso Ovaskainen
- Organismal and Evolutionary Biology Research Programme, Faculty of Biological and Environmental Sciences, University of Helsinki, Helsinki, Finland
- Department of Biological and Environmental Science, University of Jyväskylä, Jyväskylä, Finland
- Department of Biology, Centre for Biodiversity Dynamics, Norwegian University of Science and Technology, Trondheim, Norway
3. Matsubayashi S, Nakadai K, Suzuki R, Ura T, Hasebe M, Okuno HG. Auditory Survey of Endangered Eurasian Bittern Using Microphone Arrays and Robot Audition. Front Robot AI 2022;9:854572. [PMID: 35462782; PMCID: PMC9019347; DOI: 10.3389/frobt.2022.854572]
Abstract
Bioacoustic monitoring has become increasingly popular for studying the behavior and ecology of vocalizing birds. This study aims to verify the practical effectiveness of localization technology for auditory monitoring of the endangered Eurasian bittern (Botaurus stellaris), which inhabits wetlands in remote areas with thick vegetation. Their crepuscular and highly secretive nature, except during the breeding season when they vocalize advertisement calls, makes them difficult to monitor. Because of increasing rates of habitat loss, accurately surveying their numbers and their habitat needs are both important conservation tasks. We investigated the feasibility of localizing their booming calls, in a low frequency range of 100–200 Hz, using microphone arrays and the robot audition software HARK (Honda Research Institute Japan Audition for Robots with Kyoto University). We first simulated sound source localization of actual bittern calls for microphone arrays of radii 10 cm, 50 cm, 1 m, and 10 m under different noise levels. Second, we monitored bitterns in an actual field environment using small microphone arrays (height = 12 cm; width = 8 cm) in the Sarobetsu Mire, Hokkaido Island, Japan. The simulation results showed that spectral detectability was higher for larger microphone arrays, whereas temporal detectability was higher for smaller microphone arrays. We identified false detections in the smaller microphone arrays, coincidentally generated in calculations proximate to the transfer function for the opposite side. Despite technical limitations, we successfully localized the booming calls of at least two males in a reverberant wetland surrounded by thick vegetation and riparian trees.
This study is the first case of localizing such rare birds using small microphone arrays in the field, demonstrating how this technology could contribute to auditory surveys of population numbers, behaviors, and microhabitat selection, all of which are difficult to investigate with other observation methods. This methodology is not only useful for better understanding bitterns; it can also be extended to investigate other rare nocturnal birds with low-frequency vocalizations, without direct ringing or tagging. Our results also suggest the future need for a localization system robust to reverberation and echo in the field, which otherwise cause false detections of the target birds.
Affiliation(s)
- Shiho Matsubayashi
- Graduate School of Engineering Science, Osaka University, Toyonaka, Japan
- Kazuhiro Nakadai
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- Honda Research Institute Japan, Wako, Japan
- Reiji Suzuki
- Graduate School of Information Science, Nagoya University, Nagoya, Japan
4. A novel deep transfer learning models for recognition of birds sounds in different environment. Soft Comput 2022. [DOI: 10.1007/s00500-021-06640-1]
5. Zhong M, Taylor R, Bates N, Christey D, Basnet H, Flippin J, Palkovitz S, Dodhia R, Lavista Ferres J. Acoustic detection of regionally rare bird species through deep convolutional neural networks. Ecol Inform 2021. [DOI: 10.1016/j.ecoinf.2021.101333]
6. Rhinehart TA, Chronister LM, Devlin T, Kitzes J. Acoustic localization of terrestrial wildlife: Current practices and future opportunities. Ecol Evol 2020;10:6794-6818. [PMID: 32724552; PMCID: PMC7381569; DOI: 10.1002/ece3.6216]
Abstract
Autonomous acoustic recorders are an increasingly popular method for low-disturbance, large-scale monitoring of sound-producing animals, such as birds, anurans, bats, and other mammals. A specialized use of autonomous recording units (ARUs) is acoustic localization, in which a vocalizing animal is located spatially, usually by quantifying the time delay of arrival of its sound at an array of time-synchronized microphones. To describe trends in the literature, identify considerations for field biologists who wish to use these systems, and suggest advancements that will improve the field of acoustic localization, we comprehensively review published applications of wildlife localization in terrestrial environments. We describe the wide variety of methods used to complete the five steps of acoustic localization: (1) define the research question, (2) obtain or build a time-synchronizing microphone array, (3) deploy the array to record sounds in the field, (4) process recordings captured in the field, and (5) determine animal location using position estimation algorithms. We find eight general purposes in ecology and animal behavior for localization systems: assessing individual animals' positions or movements, localizing multiple individuals simultaneously to study their interactions, determining animals' individual identities, quantifying sound amplitude or directionality, selecting subsets of sounds for further acoustic analysis, calculating species abundance, inferring territory boundaries or habitat use, and separating animal sounds from background noise to improve species classification. We find that the labor-intensive steps of processing recordings and estimating animal positions have not yet been automated. In the near future, we expect that increased availability of recording hardware, development of automated and open-source localization software, and improvement of automated sound classification algorithms will broaden the use of acoustic localization. 
With these three advances, ecologists will be better able to embrace acoustic localization, enabling low-disturbance, large-scale collection of animal position data.
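To make the time-delay-of-arrival idea behind step (5) concrete, here is a minimal, hypothetical sketch: four time-synchronized microphones, exact noise-free delays, and a brute-force grid search for the position whose predicted delays best match the observed ones. The array geometry and source position are made-up numbers; the systems reviewed in this paper estimate delays from cross-correlation of real recordings and use more refined position estimators.

```python
import numpy as np

c = 343.0  # speed of sound, m/s
# Four time-synchronized microphones at the corners of a 10 m square,
# and one vocalizing animal (both positions are illustrative).
mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])

# Time delays of arrival relative to the first microphone (noise-free).
arrival = np.linalg.norm(mics - source, axis=1) / c
tdoa = arrival - arrival[0]

# Position estimation: search a grid of candidate positions for the one
# whose predicted delays best match the observed delays.
xs = np.linspace(0.0, 10.0, 101)
grid = np.array([[gx, gy] for gx in xs for gy in xs])
pred = np.linalg.norm(grid[:, None, :] - mics[None, :, :], axis=2) / c
pred_tdoa = pred - pred[:, :1]
best = grid[np.argmin(np.sum((pred_tdoa - tdoa) ** 2, axis=1))]
print(best)  # recovers roughly [3. 7.]
```

With noisy delays the residual surface flattens, which is why real deployments care about synchronization accuracy and array geometry.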
Affiliation(s)
- Tessa A. Rhinehart
- Department of Biological Sciences, University of Pittsburgh, Pittsburgh, PA, USA
- Trieste Devlin
- Department of Biological Sciences, University of Pittsburgh, Pittsburgh, PA, USA
- Justin Kitzes
- Department of Biological Sciences, University of Pittsburgh, Pittsburgh, PA, USA
7. Gabriel D, Kojima R, Hoshiba K, Itoyama K, Nishida K, Nakadai K. 2D sound source position estimation using microphone arrays and its application to a VR-based bird song analysis system. Adv Robot 2019. [DOI: 10.1080/01691864.2019.1598491]
Affiliation(s)
- D. Gabriel
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- R. Kojima
- Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- K. Hoshiba
- Department of Electrical, Electronics and Information Engineering, Faculty of Engineering, Kanagawa University, Yokohama, Japan
- K. Itoyama
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- K. Nishida
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- K. Nakadai
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- Honda Research Institute Japan Co., Ltd., Wako, Japan
8. Stowell D, Wood MD, Pamuła H, Stylianou Y, Glotin H. Automatic acoustic detection of birds through deep learning: The first Bird Audio Detection challenge. Methods Ecol Evol 2018. [DOI: 10.1111/2041-210x.13103]
Affiliation(s)
- Dan Stowell
- Machine Listening Laboratory, Centre for Digital Music, Queen Mary University of London, London, UK
- Michael D. Wood
- Ecosystems and Environment Research Centre, School of Environment and Life Sciences, University of Salford, Salford, UK
- Hanna Pamuła
- Department of Mechanics and Vibroacoustics, AGH University of Science and Technology, Krakow, Poland
- Hervé Glotin
- University of Toulon, Aix Marseille University, CNRS, LIS, DYNI Team, SABIOD, Marseille, France
9. Suzuki R, Matsubayashi S, Saito F, Murate T, Masuda T, Yamamoto K, Kojima R, Nakadai K, Okuno HG. A spatiotemporal analysis of acoustic interactions between great reed warblers (Acrocephalus arundinaceus) using microphone arrays and robot audition software HARK. Ecol Evol 2018;8:812-825. [PMID: 29321916; PMCID: PMC5756896; DOI: 10.1002/ece3.3645]
Abstract
Acoustic interactions are important for understanding intra- and interspecific communication in songbird communities from the viewpoint of soundscape ecology. It has been suggested that birds may divide up sound space to increase communication efficiency, such that they tend to avoid overlapping with other birds when they sing. We are interested in clarifying the dynamics underlying this process as an example of a complex system based on short-term behavioral plasticity. However, it is very difficult to manually collect spatiotemporal patterns of acoustic events in natural habitats from a standard single-channel recording of several species singing simultaneously. Our purpose here was to investigate fine-scale spatiotemporal acoustic interactions of the great reed warbler. We surveyed spatial and temporal patterns of several vocalizing color-banded great reed warblers (Acrocephalus arundinaceus) using HARK (Honda Research Institute Japan Audition for Robots with Kyoto University), open-source robot audition software, and three new 16-channel, stand-alone, water-resistant microphone arrays, named DACHO, spread out in the birds' habitat. We first show that our system estimated the locations of two color-banded individuals' song posts with a mean error distance of 5.5 ± 4.5 m from the observed song posts. We then evaluated the temporal localization accuracy of the songs by comparing the durations of localized songs around the song posts with those annotated by human observers, obtaining an average accuracy score of 0.89 for one bird that stayed at one song post. We further found significant temporal overlap avoidance and an asymmetric relationship between the songs of the two singing individuals, using transfer entropy. We believe that our system and analytical approach contribute to a better understanding of fine-scale acoustic interactions in time and space in bird communities.
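A minimal sketch of the transfer-entropy measure mentioned above, computed with history length 1 on synthetic binary "singing (1) / silent (0)" series. The toy overlap-avoidance pattern and all variable names are illustrative assumptions, not the study's data or its exact estimator.

```python
import numpy as np
from collections import Counter

# Synthetic series: bird X tends to fall silent right after bird Y sings.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=5000)
x = np.roll(1 - y, 1)
x[0] = 0

def transfer_entropy(x, y):
    """TE(Y -> X) in bits, history length 1, from empirical counts."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_t, x_{t-1}, y_{t-1})
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_xxy = c / n  # p(x_t, x_{t-1}, y_{t-1})
        p_xy = sum(v for (a, b, d), v in triples.items() if b == x0 and d == y0) / n
        p_xx = sum(v for (a, b, d), v in triples.items() if a == x1 and b == x0) / n
        p_x = sum(v for (a, b, d), v in triples.items() if b == x0) / n
        te += p_xxy * np.log2(p_xxy * p_x / (p_xy * p_xx))
    return te

te = transfer_entropy(x, y)
print(te)  # close to 1 bit: Y's song strongly predicts X's next state
```

An asymmetric relationship like the one the study reports would show up here as TE(Y → X) clearly exceeding TE(X → Y).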
Affiliation(s)
- Reiji Suzuki
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Shiho Matsubayashi
- Center for Open Innovation Research and Education, Graduate School of Engineering, Osaka University, Suita, Japan
- Ryosuke Kojima
- Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Kazuhiro Nakadai
- Honda Research Institute Japan Co., Ltd., Wako, Saitama, Japan
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Meguro-ku, Tokyo, Japan
- Hiroshi G. Okuno
- Graduate School of Creative Science and Engineering, Faculty of Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan