1
Unsupervised feature learning for environmental sound classification using Weighted Cycle-Consistent Generative Adversarial Network. Appl Soft Comput 2020. DOI: 10.1016/j.asoc.2019.105912
2
Gontier F, Lagrange M, Aumond P, Can A, Lavandier C. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach. Sensors 2017; 17:2758. DOI: 10.3390/s17122758; PMID: 29186021; PMCID: PMC5751573
Abstract
The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on third octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using for example the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens.
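The third-octave band representation at the heart of this coding scheme can be sketched as follows (a minimal illustration, not the authors' codec; the base-2 band-edge convention, frequency range, and the `third_octave_levels` helper are assumptions):

```python
import numpy as np

def third_octave_centers(f_min=20.0, f_max=12500.0, f_ref=1000.0):
    """Nominal third-octave band centre frequencies f_ref * 2**(n/3)
    covering [f_min, f_max] (base-2 convention, 1 kHz reference)."""
    n_lo = int(np.ceil(3 * np.log2(f_min / f_ref)))
    n_hi = int(np.floor(3 * np.log2(f_max / f_ref)))
    return f_ref * 2.0 ** (np.arange(n_lo, n_hi + 1) / 3.0)

def third_octave_levels(signal, fs):
    """Aggregate the FFT power spectrum into third-octave bands and
    return (centre frequencies, band levels in dB)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centers = third_octave_centers(f_max=fs / 2 / 2 ** (1 / 6))
    levels = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges
        band = power[(freqs >= lo) & (freqs < hi)]
        levels.append(10 * np.log10(band.sum() + 1e-12))
    return centers, np.array(levels)
```

Transmitting only these per-band levels, one short vector per frame, is far cheaper than raw audio, and, as the abstract notes, the waveform reconstructed from them is not intelligible.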
Affiliation(s)
- Félix Gontier
- LS2N, UMR 6004, École Centrale de Nantes, 44300 Nantes, France.
- Pierre Aumond
- LAE, AME, IFSTTAR, 44340 Bouguenais, France.
- ETIS, UMR 8051, Université Paris Seine, Université de Cergy-Pontoise, ENSEA, CNRS, 95000 Cergy-Pontoise, France.
- Arnaud Can
- LAE, AME, IFSTTAR, 44340 Bouguenais, France.
- Catherine Lavandier
- ETIS, UMR 8051, Université Paris Seine, Université de Cergy-Pontoise, ENSEA, CNRS, 95000 Cergy-Pontoise, France.
3
McLoughlin I, Zhang H, Xie Z, Song Y, Xiao W, Phan H. Continuous robust sound event classification using time-frequency features and deep learning. PLoS One 2017; 12:e0182309. DOI: 10.1371/journal.pone.0182309; PMID: 28892478; PMCID: PMC5593179
Abstract
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers, adapted to operate on continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system on the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
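An energy-based event detection front end of the kind the abstract describes can be sketched roughly like this (a simplified stand-in, not the paper's implementation; the frame size, hop, and threshold values are assumptions):

```python
import numpy as np

def energy_event_detector(signal, frame_len=512, hop=256, threshold_db=-30.0):
    """Mark frames whose short-time energy exceeds a threshold relative
    to the loudest frame, then merge runs of active frames into
    (start, end) sample segments that can be passed to a classifier."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    energy = np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                       for i in range(n_frames)])
    rel_db = 10 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    active = rel_db > threshold_db
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i                                   # segment opens
        elif not on and start is not None:
            segments.append((start * hop, i * hop + frame_len))
            start = None                                # segment closes
    if start is not None:                               # still active at end
        segments.append((start * hop, (n_frames - 1) * hop + frame_len))
    return segments
```

Each returned segment is then treated as an isolated recording, which is how a classifier trained on isolated sounds can be evaluated on continuous audio.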
Affiliation(s)
- Ian McLoughlin
- School of Computing, The University of Kent, Medway, Kent, United Kingdom
- National Engineering Laboratory of Speech and Language Information Processing, The University of Science and Technology of China, Hefei, PR China
- Haomin Zhang
- National Engineering Laboratory of Speech and Language Information Processing, The University of Science and Technology of China, Hefei, PR China
- Zhipeng Xie
- National Engineering Laboratory of Speech and Language Information Processing, The University of Science and Technology of China, Hefei, PR China
- Yan Song
- National Engineering Laboratory of Speech and Language Information Processing, The University of Science and Technology of China, Hefei, PR China
- Wei Xiao
- European Research Center, Huawei Technologies Duesseldorf GmbH, Munich, Germany
- Huy Phan
- The Institute for Signal Processing, University of Lübeck, Lübeck, Germany
4
Bach JH, Kollmeier B, Anemüller J. Matching Pursuit Analysis of Auditory Receptive Fields' Spectro-Temporal Properties. Front Syst Neurosci 2017; 11:4. DOI: 10.3389/fnsys.2017.00004; PMID: 28232791; PMCID: PMC5299023
Abstract
Gabor filters have long been proposed as models for spectro-temporal receptive fields (STRFs), with their specific spectral and temporal rate of modulation qualitatively replicating characteristics of STRF filters estimated from responses to auditory stimuli in physiological data. The present study builds on the Gabor-STRF model by proposing a methodology to quantitatively decompose STRFs into a set of optimally matched Gabor filters through matching pursuit, and by quantitatively evaluating spectral and temporal characteristics of STRFs in terms of the derived optimal Gabor-parameters. To summarize a neuron's spectro-temporal characteristics, we introduce a measure for the “diagonality,” i.e., the extent to which an STRF exhibits spectro-temporal transients which cannot be factorized into a product of a spectral and a temporal modulation. With this methodology, it is shown that approximately half of 52 analyzed zebra finch STRFs can each be well approximated by a single Gabor or a linear combination of two Gabor filters. Moreover, the dominant Gabor functions tend to be oriented either in the spectral or in the temporal direction, with truly “diagonal” Gabor functions rarely being necessary for reconstruction of an STRF's main characteristics. As a toy example for the applicability of STRF and Gabor-STRF filters to auditory detection tasks, we use STRF filters as features in an automatic event detection task and compare them to idealized Gabor filters and mel-frequency cepstral coefficients (MFCCs). STRFs classify a set of six everyday sounds with an accuracy similar to reference Gabor features (94% recognition rate). Spectro-temporal STRF and Gabor features outperform reference spectral MFCCs in quiet and in low noise conditions (down to 0 dB signal to noise ratio).
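The decomposition itself, greedy matching pursuit over a dictionary of unit-norm 2-D Gabor atoms, can be sketched as follows (a toy illustration of the general method; the atom parameters and helper names are invented, not the paper's fitted values):

```python
import numpy as np

def gabor_2d(shape, f_spec, f_temp, sigma=3.0):
    """Unit-norm 2-D Gabor atom: Gaussian envelope times a plane wave
    with spectral (f_spec) and temporal (f_temp) modulation frequencies."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    y = y - h / 2.0
    x = x - w / 2.0
    atom = (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * (f_spec * y + f_temp * x)))
    return atom / np.linalg.norm(atom)

def matching_pursuit(strf, dictionary, n_atoms=2):
    """Greedily project the residual onto the dictionary, keep the
    best-matching atom and its coefficient, subtract, and repeat."""
    residual = strf.astype(float).copy()
    chosen = []
    for _ in range(n_atoms):
        coeffs = np.array([np.sum(residual * d) for d in dictionary])
        k = int(np.argmax(np.abs(coeffs)))
        chosen.append((k, coeffs[k]))
        residual = residual - coeffs[k] * dictionary[k]
    return chosen, residual
```

In this sketch an atom with both `f_spec` and `f_temp` nonzero is "diagonal" in the paper's sense; how much of an STRF's energy such atoms capture would feed a diagonality measure.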
Affiliation(s)
- Jörg-Hendrik Bach
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Jörn Anemüller
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Correspondence: Jörn Anemüller
5

6
A Review of Physical and Perceptual Feature Extraction Techniques for Speech, Music and Environmental Sounds. Appl Sci (Basel) 2016. DOI: 10.3390/app6050143
7
Abstract
The development of an Automated Device for Asthma Monitoring (ADAM) is described. The system consists of a consumer-electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device's analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom-detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model-based Viterbi decoder trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of sensitivity and false-alarm rate as determined in a cross-validation test.
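The core of such a detector, Viterbi decoding over per-frame acoustic log-likelihoods, can be sketched as follows (a generic HMM decoder, not ADAM's trained models; the two-state "background"/"cough" probabilities in the usage below are invented for illustration):

```python
import numpy as np

def viterbi(log_obs, log_trans, log_init):
    """Most likely state path given per-frame observation log-likelihoods
    log_obs (T x S), transition log-probabilities log_trans (S x S,
    row = from-state), and initial-state log-probabilities log_init (S,)."""
    T, S = log_obs.shape
    delta = log_init + log_obs[0]            # best score ending in each state
    back = np.zeros((T, S), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (prev state, next state)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_obs[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):            # trace backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky transitions, the decoder smooths over single noisy frames instead of toggling the "cough" state on every local energy spike, which is the point of using an HMM rather than per-frame thresholding.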
8
Smith DV, Shahriar MS. A context aware sound classifier applied to prawn feed monitoring and energy disaggregation. Knowl Based Syst 2013. DOI: 10.1016/j.knosys.2013.05.007
9
ROS open-source audio recognizer: ROAR environmental sound detection tools for robot programming. Auton Robots 2013. DOI: 10.1007/s10514-013-9323-6
10
Ntalampiras S, Potamitis I, Fakotakis N. A Practical System for Acoustic Surveillance of Hazardous Situations. Int J Artif Intell Tools 2011. DOI: 10.1142/s021821301100005x
Abstract
The present study describes a practical methodology for automatic space monitoring based solely on the perceived acoustic information. Our approach is based on a two-stage recognition schema that detects and classifies sound events related to hazardous situations. The main objective is to identify, in time, abnormal events that may lead to life-threatening situations or property damage, and to forward detected sound events to an authorized officer for further evaluation. We consider the case where the atypical situations of screams, explosions or gunshots take place in a metro station environment. For describing the audio signals we constructed a feature vector which includes the Mel-frequency cepstral coefficients and three MPEG-7 low-level descriptors. These are subsequently fed to hidden Markov models to represent each sound category. The accuracy of the proposed method is tested under several SNR conditions.
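A minimal version of the MFCC part of such a feature vector can be sketched as follows (a textbook single-frame computation, not the paper's exact front end; the filter count, coefficient count, and windowing are assumed defaults, and the MPEG-7 descriptors are omitted):

```python
import numpy as np

def mfcc(frame, fs, n_filters=26, n_coeffs=13):
    """Single-frame MFCCs: windowed power spectrum -> triangular mel
    filterbank -> log energies -> DCT-II (first n_coeffs kept)."""
    n = len(frame)
    power = np.abs(np.fft.rfft(frame * np.hamming(n))) ** 2
    n_bins = len(power)

    # Mel scale and its inverse
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # Filter edge frequencies, equally spaced on the mel scale
    edges = inv_mel(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
    bins = np.minimum(np.round(edges * n / fs).astype(int), n_bins - 1)

    # Triangular filters between consecutive edge bins
    fbank = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, c):
            fbank[i, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[i, k] = (hi - k) / max(hi - c, 1)

    log_e = np.log(fbank @ power + 1e-10)

    # DCT-II decorrelates the filterbank energies
    j = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (j + 0.5)) / n_filters)
    return dct @ log_e
```

One such vector per analysis frame, stacked over time, is the kind of observation sequence the hidden Markov models would be trained on.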
Affiliation(s)
- Stavros Ntalampiras
- Electrical and Computer Engineering Department, University of Patras, Patras, Achaia, 26500, Greece
- Ilyas Potamitis
- Department of Music Technology and Acoustics, Technological Educational Institute of Crete, Daskalaki-Perivolia, 74100, Crete, Greece
- Nikos Fakotakis
- Electrical and Computer Engineering Department, University of Patras, Patras, Achaia, 26500, Greece

11
Chu S, Narayanan S, Kuo CCJ. Environmental Sound Recognition With Time–Frequency Audio Features. IEEE Trans Audio Speech Lang Process 2009. DOI: 10.1109/tasl.2009.2017438
12