1. Gajecki T, Nogueira W. A Fused Deep Denoising Sound Coding Strategy for Bilateral Cochlear Implants. IEEE Trans Biomed Eng 2024;71:2232-2242. PMID: 38376983. DOI: 10.1109/tbme.2024.3367530.
Abstract
Cochlear implants (CIs) provide a solution for individuals with severe sensorineural hearing loss to regain their hearing abilities. When someone experiences this form of hearing impairment in both ears, they may be equipped with two separate CI devices, which will typically further improve the CI benefits. This spatial hearing is particularly crucial when tackling the challenge of understanding speech in noisy environments, a common issue CI users face. Currently, extensive research is dedicated to developing algorithms that can autonomously filter out undesired background noises from desired speech signals. At present, some research focuses on achieving end-to-end denoising, either as an integral component of the initial CI signal processing or by fully integrating the denoising process into the CI sound coding strategy. This work is presented in the context of bilateral CI (BiCI) systems, where we propose a deep-learning-based bilateral speech enhancement model that shares information between both hearing sides. Specifically, we connect two monaural end-to-end deep denoising sound coding techniques through intermediary latent fusion layers. These layers amalgamate the latent representations generated by these techniques by multiplying them together, resulting in an enhanced ability to reduce noise and improve learning generalization. The objective instrumental results demonstrate that the proposed fused BiCI sound coding strategy achieves higher interaural coherence, superior noise reduction, and enhanced predicted speech intelligibility scores compared to the baseline methods. Furthermore, our speech-in-noise intelligibility results in BiCI users reveal that the deep denoising sound coding strategy can attain scores similar to those achieved in quiet conditions.
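The fusion idea in this entry, two monaural denoisers whose latent representations are combined by elementwise multiplication in intermediary fusion layers, can be sketched in a few lines of NumPy. Everything below (layer sizes, the linear-plus-ReLU encoder) is an illustrative stand-in, not the authors' architecture:

```python
import numpy as np

def encode(audio, weights):
    """Toy stand-in for a monaural denoising encoder: a linear
    projection followed by a ReLU, yielding a latent representation."""
    return np.maximum(weights @ audio, 0.0)

def fuse_latents(z_left, z_right):
    """Fuse the two sides' latent codes by elementwise multiplication,
    as the abstract describes for the intermediary fusion layers."""
    return z_left * z_right

rng = np.random.default_rng(0)
w_left = rng.standard_normal((16, 64))
w_right = rng.standard_normal((16, 64))
frame = rng.standard_normal(64)       # one frame of raw audio per side

z_l = encode(frame, w_left)
z_r = encode(frame, w_right)
z_fused = fuse_latents(z_l, z_r)      # shared across both sides' decoders
print(z_fused.shape)
```

Because the product is shared, a component suppressed (near zero) on either side is suppressed in both decoders, which is one plausible reading of why the fusion improves noise reduction.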
2. Borjigin A, Kokkinakis K, Bharadwaj HM, Stohl JS. Deep learning restores speech intelligibility in multi-talker interference for cochlear implant users. Sci Rep 2024;14:13241. PMID: 38853168. PMCID: PMC11163011. DOI: 10.1038/s41598-024-63675-8.
Abstract
Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise and are ineffective against realistic, non-stationary noise such as multi-talker interference. Recent developments in deep neural network (DNN) algorithms have achieved noteworthy performance in speech enhancement and separation, especially in removing speech noise. However, more work is needed to investigate the potential of DNN algorithms in removing speech noise when tested with listeners fitted with CIs. Here, we implemented two DNN algorithms that are well suited for applications in speech audio processing: (1) a recurrent neural network (RNN) and (2) SepFormer. The algorithms were trained with a customized dataset (∼30 h) and then tested with thirteen CI listeners. Both the RNN and SepFormer algorithms significantly improved CI listeners' speech intelligibility in noise without compromising the perceived quality of speech overall. These algorithms not only increased intelligibility in stationary non-speech noise but also introduced a substantial improvement in non-stationary noise, where conventional signal processing strategies fall short with little benefit. These results show the promise of DNN algorithms as a solution for listening challenges in multi-talker noise interference.
Affiliation(s)
- Agudemu Borjigin
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA
- North American Research Laboratory, MED-EL Corporation, Durham, NC 27713, USA
- Kostas Kokkinakis
- Concha Labs, San Francisco, CA 94114, USA
- North American Research Laboratory, MED-EL Corporation, Durham, NC 27713, USA
- Hari M Bharadwaj
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Joshua S Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, NC 27713, USA
3. Gaultier C, Goehring T. Recovering speech intelligibility with deep learning and multiple microphones in noisy-reverberant situations for people using cochlear implants. J Acoust Soc Am 2024;155:3833-3847. PMID: 38884525. DOI: 10.1121/10.0026218.
Abstract
For cochlear implant (CI) listeners, holding a conversation in noisy and reverberant environments is often challenging. Deep-learning algorithms can potentially mitigate these difficulties by enhancing speech in everyday listening environments. This study compared several deep-learning algorithms with access to one, two unilateral, or six bilateral microphones that were trained to recover speech signals by jointly removing noise and reverberation. The noisy-reverberant speech and an ideal noise reduction algorithm served as lower and upper references, respectively. Objective signal metrics were compared with results from two listening tests, including 15 typical hearing listeners with CI simulations and 12 CI listeners. Large and statistically significant improvements in speech reception thresholds of 7.4 and 10.3 dB were found for the multi-microphone algorithms. For the single-microphone algorithm, there was an improvement of 2.3 dB but only for the CI listener group. The objective signal metrics correctly predicted the rank order of results for CI listeners, and there was an overall agreement for most effects and variances between results for CI simulations and CI listeners. These algorithms hold promise to improve speech intelligibility for CI listeners in environments with noise and reverberation and benefit from a boost in performance when using features extracted from multiple microphones.
Affiliation(s)
- Clément Gaultier
- Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
- Tobias Goehring
- Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
4. Chang YJ, Han JY, Chu WC, Li LPH, Lai YH. Enhancing music recognition using deep learning-powered source separation technology for cochlear implant users. J Acoust Soc Am 2024;155:1694-1703. PMID: 38426839. DOI: 10.1121/10.0025057.
Abstract
Cochlear implants (CIs) are currently the vital technological devices for helping deaf patients hear sounds, and they greatly enhance the listening experience. Unfortunately, they perform poorly for music listening because of the insufficient number of electrodes and inaccurate identification of music features. Therefore, this study applied source separation technology with a self-adjustment function to enhance the music listening benefits for CI users. In the objective analysis, the source-to-distortion, source-to-interference, and source-to-artifact ratios of the proposed method were 4.88, 5.92, and 15.28 dB, respectively, significantly better than those of the Demucs baseline model. In the subjective analysis, it scored approximately 28.1 and 26.4 points (out of 100) higher than the traditional baseline method VIR6 (vocal to instrument ratio, 6 dB) in the multi-stimulus test with hidden reference and anchor, respectively. The experimental results showed that the proposed method can help CI users identify music in a live concert, and the personal self-fitting signal separation method achieved better results than any of the default baselines (vocal to instrument ratio of 6 dB or 0 dB). These findings suggest that the proposed system is a potential method for enhancing the music listening benefits for CI users.
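The separation metrics reported here can be illustrated with a simplified source-to-distortion ratio. Full BSS-Eval SDR/SIR/SAR further decomposes the error into interference and artifact terms, so this is only a sketch of the overall energy ratio, on made-up signals:

```python
import numpy as np

def sdr_db(reference, estimate):
    """Simplified source-to-distortion ratio in dB: energy of the
    reference over the energy of the estimation error. (BSS-Eval splits
    the error further into interference and artifact components; this
    sketch keeps only the overall ratio.)"""
    err = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

t = np.linspace(0, 1, 8000)
vocal = np.sin(2 * np.pi * 220 * t)                   # stand-in "vocal" source
estimate = vocal + 0.1 * np.sin(2 * np.pi * 90 * t)   # leaked "instrument"
print(round(sdr_db(vocal, estimate), 1))              # roughly 20 dB
```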
Affiliation(s)
- Yuh-Jer Chang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ji-Yan Han
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Chung Chu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Lieber Po-Hung Li
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Otolaryngology, Cheng Hsin General Hospital, Taipei, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
- Institute of Brain Science, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Hui Lai
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Medical Device Innovation Translation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
5. Mohagheghian F, Han D, Ghetia O, Peitzsch A, Nishita N, Pirayesh Shirazi Nejad M, Ding EY, Noorishirazi K, Hamel A, Otabil EM, DiMezza D, Dickson EL, Tran KV, McManus DD, Chon KH. Noise Reduction in Photoplethysmography Signals Using a Convolutional Denoising Autoencoder With Unconventional Training Scheme. IEEE Trans Biomed Eng 2024;71:456-466. PMID: 37682653. DOI: 10.1109/tbme.2023.3307400.
Abstract
OBJECTIVE: We propose an efficient approach based on a convolutional denoising autoencoder (CDA) network to reduce motion and noise artifacts (MNA) from corrupted atrial fibrillation (AF) and non-AF photoplethysmography (PPG) data segments so that an accurate PPG-signal-derived heart rate can be obtained. Our method's main innovation is the optimization of the CDA performance for both rhythms by using more AF than non-AF data to train the AF-specific CDA model, and vice versa for the non-AF CDA network. METHODS: To evaluate this unconventional training scheme, our proposed network was trained and tested on 25-s PPG data segments from 48 subjects from two different databases: the Pulsewatch dataset and Stanford University's publicly available PPG dataset. In total, our dataset contains 10,773 data segments: 7,001 segments for training and 3,772 independent segments from out-of-sample subjects for testing. RESULTS: Using real-life corrupted PPG segments, our approach significantly reduced the average heart rate root mean square error (RMSE) of the reconstructed PPG segments by 45.74% and 23% compared to the corrupted non-AF and AF data, respectively. Further, our approach exhibited lower RMSE and higher sensitivity and positive predictive value for detected peaks compared to the reconstructed data produced by the alternative methods. CONCLUSION: These results show the promise of our approach as a reliable denoising method, which should be applied prior to AF detection algorithms for accurate cardiac health monitoring with wearable devices. SIGNIFICANCE: PPG signals collected from wearables are vulnerable to MNA, which limits their use as a reliable measurement, particularly in uncontrolled real-life environments.
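The "unconventional training scheme" amounts to a biased data mix per rhythm-specific model. A minimal sketch of that mixing step; the 75/25 split, sampling with replacement, and segment labels are illustrative assumptions, not the paper's actual proportions:

```python
import random

def build_training_set(af_segments, non_af_segments,
                       target_rhythm="AF", majority_frac=0.75, seed=0):
    """Sketch of the biased data mix described in the abstract: the
    rhythm-specific denoiser sees more segments of its own rhythm than
    of the other. Segments are drawn with replacement for simplicity."""
    rng = random.Random(seed)
    majority, minority = af_segments, non_af_segments
    if target_rhythm != "AF":
        majority, minority = minority, majority
    n = min(len(majority), len(minority)) * 2
    n_major = int(n * majority_frac)
    batch = rng.choices(majority, k=n_major) + rng.choices(minority, k=n - n_major)
    rng.shuffle(batch)
    return batch

af = [f"af_{i}" for i in range(100)]
naf = [f"naf_{i}" for i in range(100)]
train_af_model = build_training_set(af, naf, target_rhythm="AF")
print(sum(s.startswith("af_") for s in train_af_model), len(train_af_model))
```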
6. Gajecki T, Zhang Y, Nogueira W. A Deep Denoising Sound Coding Strategy for Cochlear Implants. IEEE Trans Biomed Eng 2023;70:2700-2709. PMID: 37030808. DOI: 10.1109/tbme.2023.3262677.
Abstract
Cochlear implants (CIs) have proven to be successful at restoring the sensation of hearing in people who suffer from profound sensorineural hearing loss. CI users generally achieve good speech understanding in quiet acoustic conditions. However, their ability to understand speech degrades drastically when background interfering noise is present. To address this problem, current CI systems are delivered with front-end speech enhancement modules that can aid the listener in noisy environments. However, these only perform well under certain noisy conditions, leaving considerable room for improvement in more challenging circumstances. In this work, we propose replacing the CI sound coding strategy with a deep neural network (DNN) that performs end-to-end speech denoising by taking the raw audio as input and providing a denoised electrodogram, i.e., the electrical stimulation patterns applied to the electrodes across time. We specifically introduce a DNN that emulates a common CI sound coding strategy, the advanced combination encoder (ACE). We refer to the proposed algorithm as 'Deep ACE'. Deep ACE is designed not only to code the acoustic signals accurately, in the same way that ACE would, but also to automatically remove unwanted interfering noises, without adding processing latency. The model was optimized using a CI-specific loss function and evaluated using objective measures as well as listening tests with CI participants. Results show that, based on objective measures, the proposed model achieved higher scores than the baseline algorithms. Likewise, in listening tests with eight CI users, the proposed deep learning-based sound coding strategy yielded the highest speech intelligibility scores.
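The ACE strategy that Deep ACE emulates is an n-of-m maxima-selection coder: per analysis frame, only the channels with the largest envelope amplitudes are stimulated. A minimal sketch of that selection step, using a typical 8-of-22 configuration as an assumption rather than the paper's exact settings:

```python
import numpy as np

def select_n_of_m(envelopes, n=8):
    """ACE-style n-of-m maxima selection: per frame, keep only the n
    channels with the largest envelope amplitudes and zero the rest,
    producing the sparse stimulation pattern of an electrodogram."""
    out = np.zeros_like(envelopes)
    for t in range(envelopes.shape[1]):
        top = np.argsort(envelopes[:, t])[-n:]   # indices of n maxima
        out[top, t] = envelopes[top, t]
    return out

rng = np.random.default_rng(1)
env = rng.random((22, 5))                 # 22 channels x 5 frames
electrodogram = select_n_of_m(env, n=8)
print((electrodogram > 0).sum(axis=0))    # 8 active channels per frame
```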
7. Henry F, Parsi A, Glavin M, Jones E. Experimental Investigation of Acoustic Features to Optimize Intelligibility in Cochlear Implants. Sensors (Basel) 2023;23:7553. PMID: 37688009. PMCID: PMC10490615. DOI: 10.3390/s23177553.
Abstract
Although cochlear implants work well for people with hearing impairment in quiet conditions, it is well-known that they are not as effective in noisy environments. Noise reduction algorithms based on machine learning allied with appropriate speech features can be used to address this problem. The purpose of this study is to investigate the importance of acoustic features in such algorithms. Acoustic features are extracted from speech and noise mixtures and used in conjunction with the ideal binary mask to train a deep neural network to estimate masks for speech synthesis to produce enhanced speech. The intelligibility of this speech is objectively measured using metrics such as Short-time Objective Intelligibility (STOI), Hit Rate minus False Alarm Rate (HIT-FA) and Normalized Covariance Measure (NCM) for both simulated normal-hearing and hearing-impaired scenarios. A wide range of existing features is experimentally evaluated, including features that have not been traditionally applied in this application. The results demonstrate that frequency domain features perform best. In particular, Gammatone features performed best for normal hearing over a range of signal-to-noise ratios and noise types (STOI = 0.7826). Mel spectrogram features exhibited the best overall performance for hearing impairment (NCM = 0.7314). There is a stronger correlation between STOI and NCM than HIT-FA and NCM, suggesting that the former is a better predictor of intelligibility for hearing-impaired listeners. The results of this study may be useful in the design of adaptive intelligibility enhancement systems for cochlear implants based on both the noise level and the nature of the noise (stationary or non-stationary).
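The ideal binary mask used as the training target here has a standard definition: a time-frequency bin is kept when its local SNR exceeds a local criterion. A minimal sketch on toy magnitude spectrograms (the 0 dB criterion is a common choice, not necessarily this study's):

```python
import numpy as np

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """Ideal binary mask over time-frequency magnitudes: a bin is 1
    when the local SNR exceeds the local criterion lc_db, else 0."""
    local_snr_db = 20.0 * np.log10(speech_mag / np.maximum(noise_mag, 1e-12))
    return (local_snr_db > lc_db).astype(float)

speech = np.array([[2.0, 0.1],
                   [0.5, 3.0]])   # toy magnitude spectrogram
noise = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
print(ideal_binary_mask(speech, noise))
```

A DNN trained on acoustic features then estimates this mask for unseen mixtures, and the enhanced speech is resynthesized from the masked spectrogram.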
Affiliation(s)
- Fergal Henry
- Department of Computing and Electronic Engineering, Atlantic Technological University Sligo, Ash Lane, F91 YW50 Sligo, Ireland
- Ashkan Parsi
- Electrical and Electronic Engineering, University of Galway, University Road, H91 TK33 Galway, Ireland
- Martin Glavin
- Electrical and Electronic Engineering, University of Galway, University Road, H91 TK33 Galway, Ireland
- Edward Jones
- Electrical and Electronic Engineering, University of Galway, University Road, H91 TK33 Galway, Ireland
8. Wang B, Saniie J. Massive Ultrasonic Data Compression Using Wavelet Packet Transformation Optimized by Convolutional Autoencoders. IEEE Trans Neural Netw Learn Syst 2023;34:1395-1405. PMID: 34499606. DOI: 10.1109/tnnls.2021.3105367.
Abstract
Ultrasonic signal acquisition platforms generate considerable amounts of data to be stored and processed, especially when multichannel scanning or beamforming is employed. Reducing the mass storage and allowing high-speed data transmissions necessitate the compression of ultrasonic data into a representation with fewer bits. High compression accuracy is crucial in many applications, such as ultrasonic medical imaging and nondestructive testing (NDT). In this study, we present learning models for massive ultrasonic data compression on the order of megabytes. A common and highly efficient compression method for ultrasonic data is signal decomposition and subband elimination using wavelet packet transformation (WPT). We designed an algorithm for finding the wavelet kernel that provides maximum energy compaction and the optimal subband decomposition tree structure for a given ultrasonic signal. Furthermore, the WPT convolutional autoencoder (WPTCAE) compression algorithm is proposed based on the WPT compression tree structure and the use of machine learning for estimating the optimal kernel. To further improve the compression accuracy, an autoencoder (AE) is incorporated into the WPTCAE model to build a hybrid model. The performance of the WPTCAE compression model is examined and benchmarked against other compression algorithms using ultrasonic radio frequency (RF) datasets acquired in NDT and medical imaging applications. The experimental results clearly show that the WPTCAE compression model provides improved compression ratios while maintaining high signal fidelity. The proposed learning models can achieve a compression accuracy of 98% by using only 6% of the original data.
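Compression by subband elimination can be illustrated with a single Haar analysis step. The paper's actual system searches over wavelet kernels and decomposition-tree structures and adds a convolutional autoencoder; this sketch only shows the keep-or-drop subband idea on a toy signal:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: split a signal into approximation
    (pairwise averages) and detail (pairwise differences) subbands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def compress(x, keep=("a",)):
    """Subband elimination: keep only the listed subbands; dropping one
    of the two halves the stored coefficients."""
    a, d = haar_step(x)
    return {"a": a if "a" in keep else None,
            "d": d if "d" in keep else None}

def reconstruct(bands, n):
    """Inverse step; eliminated subbands are treated as zeros."""
    a = bands["a"] if bands["a"] is not None else np.zeros(n // 2)
    d = bands["d"] if bands["d"] is not None else np.zeros(n // 2)
    x = np.empty(n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

pulse = np.repeat([0.0, 1.0, 0.0], 4)    # smooth toy ultrasonic trace
approx_only = reconstruct(compress(pulse, keep=("a",)), pulse.size)
err = np.sqrt(np.mean((pulse - approx_only) ** 2))
print(err)   # smooth signals lose almost nothing when details are dropped
```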
9. Jeon HJ, Lim HG, Shung KK, Lee OJ, Kim MG. Automated cell-type classification combining dilated convolutional neural networks with label-free acoustic sensing. Sci Rep 2022;12:19873. PMID: 36400803. PMCID: PMC9674693. DOI: 10.1038/s41598-022-22075-6.
Abstract
This study aimed to automatically classify live cells based on their cell type by analyzing the patterns of backscattered signals of cells with minimal effect on normal cell physiology and activity. Our previous studies have demonstrated that label-free acoustic sensing using high-frequency ultrasound at a high pulse repetition frequency (PRF) can capture and analyze a single object from a heterogeneous sample. However, eliminating possible errors in the manual setting and time-consuming processes when postprocessing integrated backscattering (IB) coefficients of backscattered signals is crucial. In this study, an automated cell-type classification system that combines a label-free acoustic sensing technique with deep-learning-empowered artificial intelligence models is proposed. We applied a one-dimensional (1D) convolutional autoencoder to denoise the signals and conducted data augmentation based on Gaussian noise injection to enhance the robustness of the proposed classification system to noise. Subsequently, denoised backscattered signals were classified into specific cell types using convolutional neural network (CNN) models for three types of signal data representations, including 1D CNN models for waveform and frequency spectrum analysis and two-dimensional (2D) CNN models for spectrogram analysis. We evaluated the proposed system by classifying two types of cells (RBC and PNT1A) and two types of polystyrene microspheres by analyzing their backscattered signal patterns. We attempted to discover cell physical properties reflected in backscattered signals by controlling experimental variables, such as diameter and structure material. We further evaluated the effectiveness of the neural network models and the efficacy of the data representations by comparing their accuracy with that of baseline methods. Therefore, the proposed system can be used to classify several cell types with different intrinsic physical properties reliably and precisely for personalized cancer medicine development.
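The Gaussian noise injection used for augmentation is straightforward to sketch: each clean segment spawns noisy copies at a chosen SNR. The 20 dB SNR and two copies per segment below are illustrative choices, not the study's settings:

```python
import numpy as np

def augment_with_gaussian_noise(signals, snr_db=20.0, copies=2, seed=0):
    """Gaussian noise injection: for each signal segment, create noisy
    copies at a fixed SNR so the downstream classifier becomes robust
    to measurement noise."""
    rng = np.random.default_rng(seed)
    out = list(signals)
    for x in signals:
        sig_power = np.mean(x ** 2)
        noise_power = sig_power / (10 ** (snr_db / 10))
        for _ in range(copies):
            out.append(x + rng.normal(0.0, np.sqrt(noise_power), x.shape))
    return out

clean = [np.sin(np.linspace(0, 2 * np.pi, 256)) for _ in range(3)]
augmented = augment_with_gaussian_noise(clean)
print(len(augmented))   # 3 originals + 3*2 noisy copies = 9
```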
Affiliation(s)
- Hyeon-Ju Jeon
- Data Assimilation Group, Korea Institute of Atmospheric Prediction Systems, Seoul 07071, Republic of Korea
- Hae Gyun Lim
- Department of Biomedical Engineering, Pukyong National University, Busan 48513, Republic of Korea
- K. Kirk Shung
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, USA
- O-Joun Lee
- Department of Artificial Intelligence, The Catholic University of Korea, Bucheon 14662, Republic of Korea
- Min Gon Kim
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, USA
10. Tseng RY, Wang TW, Fu SW, Lee CY, Tsao Y. A Study of Joint Effect on Denoising Techniques and Visual Cues to Improve Speech Intelligibility in Cochlear Implant Simulation. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2020.3017042.
11. Kang Y, Zheng N, Meng Q. Deep Learning-Based Speech Enhancement With a Loss Trading Off the Speech Distortion and the Noise Residue for Cochlear Implants. Front Med (Lausanne) 2021;8:740123. PMID: 34820392. PMCID: PMC8606413. DOI: 10.3389/fmed.2021.740123.
Abstract
The cochlea plays a key role in the transmission from acoustic vibration to the neural stimulation upon which the brain perceives sound. A cochlear implant (CI) is an auditory prosthesis that replaces the damaged cochlear hair cells to achieve acoustic-to-neural conversion. However, the CI is a very coarse bionic imitation of the normal cochlea. The highly resolved time-frequency-intensity information transmitted by the normal cochlea, which is vital to high-quality auditory perception such as speech perception in challenging environments, cannot be guaranteed by CIs. Although CI recipients with state-of-the-art commercial CI devices achieve good speech perception in quiet backgrounds, they usually suffer from poor speech perception in noisy environments. Therefore, noise suppression or speech enhancement (SE) is one of the most important technologies for CIs. In this study, we introduce recent progress in deep learning (DL)-based, mostly neural network (NN)-based, SE front ends for CIs, and discuss how the hearing properties of CI recipients could be utilized to optimize DL-based SE. In particular, different loss functions are introduced to supervise the NN training, and a set of objective and subjective experiments is presented. Results verify that CI recipients are more sensitive to the residual noise than to the SE-induced speech distortion, which has been common knowledge in CI research. Furthermore, speech reception threshold (SRT) tests in noise demonstrate that the intelligibility of the denoised speech can be significantly improved when the NN is trained with a loss function biased toward more noise suppression, compared with equal attention to noise residue and speech distortion.
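A loss that trades off speech distortion against noise residue can be sketched by penalizing over- and under-estimated spectrogram bins with different weights; with the weight biased above 0.5, residual noise costs more than attenuated speech. The weighting form below is an illustration, not the paper's exact formulation:

```python
import numpy as np

def trade_off_loss(est_mag, clean_mag, alpha=0.8):
    """Bins where the estimate exceeds the clean target count as noise
    residue; bins below it count as speech distortion. alpha > 0.5
    biases training toward suppressing residual noise, to which the
    study found CI listeners are more sensitive."""
    diff = est_mag - clean_mag
    residue = np.sum(np.maximum(diff, 0.0) ** 2)      # over-estimation
    distortion = np.sum(np.maximum(-diff, 0.0) ** 2)  # under-estimation
    return alpha * residue + (1.0 - alpha) * distortion

clean = np.array([1.0, 1.0, 0.0])
noisy_est = np.array([1.0, 1.5, 0.5])    # residual noise left in
muffled_est = np.array([0.5, 1.0, 0.0])  # speech attenuated instead
print(trade_off_loss(noisy_est, clean), trade_off_loss(muffled_est, clean))
```

With alpha = 0.8 the under-suppressed estimate is penalized far more heavily than the over-suppressed one, which is the bias the SRT results favor.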
Affiliation(s)
- Yuyong Kang
- Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China
- Nengheng Zheng
- Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
- Qinglin Meng
- Acoustics Laboratory, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, China
12. Huang EHH, Wu CM, Lin HC. Combination and Comparison of Sound Coding Strategies Using Cochlear Implant Simulation With Mandarin Speech. IEEE Trans Neural Syst Rehabil Eng 2021;29:2407-2416. PMID: 34767509. DOI: 10.1109/tnsre.2021.3128064.
Abstract
Three cochlear implant (CI) sound coding strategies were combined in the same signal processing path and compared for speech intelligibility with vocoded Mandarin sentences. The three CI coding strategies, the biologically-inspired hearing aid algorithm (BioAid), envelope enhancement (EE), and fundamental frequency modulation (F0mod), were combined with the advanced combination encoder (ACE) strategy. Hence, four singular coding strategies and four combinational coding strategies were derived. Mandarin sentences with speech-shaped noise were processed using these coding strategies. Speech understanding of the vocoded Mandarin sentences was evaluated using short-time objective intelligibility (STOI) and subjective sentence recognition tests with normal-hearing listeners. For signal-to-noise ratios at 5 dB or above, the EE strategy had slightly higher average scores in both STOI and listening tests compared to ACE. The addition of EE to BioAid slightly increased the mean scores for BioAid+EE, which was the combination strategy with the highest scores in both objective and subjective speech intelligibility. The benefits of BioAid, F0mod, and the other combinational coding strategies were not observed in CI simulation. The findings of this study may be useful for the future design of coding strategies and related studies with Mandarin.
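The CI simulation (vocoding) behind such listening tests follows a standard channel-vocoder recipe: band-split the signal, extract each band's envelope, and reimpose the envelopes on band-limited noise carriers. A rough FFT-based sketch, with illustrative band edges and envelope smoothing rather than this study's parameters:

```python
import numpy as np

def fft_band(x, fs, lo, hi):
    """Isolate one frequency band with an FFT brick-wall mask."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(spec, x.size)

def envelope(x, win=64):
    """Crude envelope: rectification plus a moving average."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def noise_vocode(x, fs, edges):
    """Channel vocoder used for CI simulation: per band, extract the
    envelope and use it to modulate band-limited noise, then sum."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(fft_band(x, fs, lo, hi))
        carrier = fft_band(rng.standard_normal(x.size), fs, lo, hi)
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
edges = np.linspace(100, 7000, 9)     # 8 analysis bands
vocoded = noise_vocode(speech_like, fs, edges)
print(vocoded.shape)
```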
13. Li LPH, Han JY, Zheng WZ, Huang RJ, Lai YH. Improved Environment-Aware-Based Noise Reduction System for Cochlear Implant Users Based on a Knowledge Transfer Approach: Development and Usability Study. J Med Internet Res 2021;23:e25460. PMID: 34709193. PMCID: PMC8587190. DOI: 10.2196/25460.
Abstract
BACKGROUND: Cochlear implant technology is a well-known approach to help deaf individuals hear speech again and can improve speech intelligibility in quiet conditions; however, it still has room for improvement in noisy conditions. More recently, it has been proven that deep learning-based noise reduction, such as noise classification and deep denoising autoencoder (NC+DDAE), can benefit the intelligibility performance of patients with cochlear implants compared to classical noise reduction algorithms. OBJECTIVE: Following the successful implementation of the NC+DDAE model in our previous study, this study aimed to propose an advanced noise reduction system using knowledge transfer technology, called NC+DDAE_T; examine the proposed NC+DDAE_T noise reduction system using objective evaluations and subjective listening tests; and investigate which layer substitution of the knowledge transfer technology in the NC+DDAE_T noise reduction system provides the best outcome. METHODS: The knowledge transfer technology was adopted to reduce the number of parameters of the NC+DDAE_T compared with the NC+DDAE. We investigated which layer should be substituted using short-time objective intelligibility and perceptual evaluation of speech quality scores as well as t-distributed stochastic neighbor embedding to visualize the features in each model layer. Moreover, we enrolled 10 cochlear implant users in listening tests to evaluate the benefits of the newly developed NC+DDAE_T. RESULTS: The experimental results showed that substituting the middle layer (i.e., the second layer in this study) of the noise-independent DDAE (NI-DDAE) model achieved the best performance gain regarding short-time objective intelligibility and perceptual evaluation of speech quality scores. Therefore, the parameters of layer 3 in the NI-DDAE were chosen to be replaced, thereby establishing the NC+DDAE_T. Both objective and listening test results showed that the proposed NC+DDAE_T noise reduction system achieved performance similar to that of the previous NC+DDAE in several noisy test conditions, while requiring only a quarter of its parameters. CONCLUSIONS: This study demonstrated that knowledge transfer technology can help reduce the number of parameters in an NC+DDAE while keeping similar performance rates. This suggests that the proposed NC+DDAE_T model may reduce the implementation costs of this noise reduction system and provide more benefits for cochlear implant users.
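The layer-substitution step of the knowledge transfer can be sketched as copying one layer's parameters from the large pretrained (noise-independent) teacher into a smaller student network, leaving the student's other, smaller layers intact. Layer names and sizes below are illustrative, not the NC+DDAE_T's actual dimensions:

```python
import numpy as np

def transfer_layer(student, teacher, layer):
    """Replace one layer's weights in the compact student model with
    the corresponding layer from the pretrained teacher."""
    patched = dict(student)
    patched[layer] = teacher[layer]
    return patched

def n_params(model):
    return sum(w.size for w in model.values())

teacher = {f"layer{i}": np.zeros((256, 256)) for i in (1, 2, 3)}
student = {"layer1": np.zeros((64, 64)),
           "layer2": np.zeros((256, 256)),  # sized to accept the transfer
           "layer3": np.zeros((64, 64))}
student_t = transfer_layer(student, teacher, "layer2")
print(n_params(student_t) < n_params(teacher))   # far fewer parameters
```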
Affiliation(s)
- Lieber Po-Hung Li
- Department of Otolaryngology, Cheng Hsin General Hospital, Taipei, Taiwan
- Faculty of Medicine, Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
- Department of Speech Language Pathology and Audiology, College of Health Technology, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Ji-Yan Han
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Zhong Zheng
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ren-Jie Huang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Hui Lai
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
14.
Speech Enhancement for Hearing Impaired Based on Bandpass Filters and a Compound Deep Denoising Autoencoder. Symmetry (Basel) 2021. [DOI: 10.3390/sym13081310]
Abstract
Deep neural networks have been applied effectively to speech enhancement. However, for large variations of speech patterns and noisy environments, an individual neural network with a fixed number of hidden layers suffers strong interference, which can lead to a slow learning process, poor generalisation at unknown signal-to-noise ratios in new inputs, and residual noise in the enhanced output. In this paper, we present a new approach for the hearing impaired based on combining two stages: (1) a set of bandpass filters that splits the signal into eight separate bands, each performing a frequency analysis of the speech signal; (2) multiple deep denoising autoencoder networks, each handling a small, specific enhancement task and learning from a subset of the whole training set. To evaluate the performance of the approach, the hearing-aid speech perception index, the hearing-aid sound quality index, and the perceptual evaluation of speech quality were used. Improvements in speech quality and intelligibility were evaluated using the audiograms of seven subjects with sensorineural hearing loss. We compared the performance of the proposed approach with that of individual denoising autoencoder networks with three and five hidden layers. The experimental results showed that the proposed approach yielded higher quality and more intelligible speech than the three- and five-layer networks.
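The first stage described above, splitting the signal into eight bands before per-band denoising, can be sketched with a scipy filter bank. The band edges and sampling rate below are illustrative assumptions, not the paper's exact values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # assumed sampling rate
# Nine illustrative edges in Hz give eight bands; the paper's edges may differ.
EDGES = [100, 300, 550, 900, 1400, 2100, 3100, 4700, 7000]

def split_into_bands(signal, fs=FS, edges=EDGES):
    """Split a signal into bandpassed components, one per band.
    Each band would then feed its own specialized denoising autoencoder,
    and the enhanced bands would be summed to resynthesize the output."""
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfiltfilt(sos, signal))  # zero-phase filtering
    return bands

t = np.arange(FS) / FS
noisy = (np.sin(2 * np.pi * 440 * t)
         + 0.1 * np.random.default_rng(1).standard_normal(FS))
bands = split_into_bands(noisy)
print(len(bands))
```

A 440 Hz tone in the input should land almost entirely in the 300-550 Hz band, which is easy to verify by comparing band energies.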
15.
Wang NYH, Wang HLS, Wang TW, Fu SW, Lu X, Wang HM, Tsao Y. Improving the Intelligibility of Speech for Simulated Electric and Acoustic Stimulation Using Fully Convolutional Neural Networks. IEEE Trans Neural Syst Rehabil Eng 2020; 29:184-195. [PMID: 33275585] [DOI: 10.1109/tnsre.2020.3042655]
Abstract
Combined electric and acoustic stimulation (EAS) has demonstrated better speech recognition than conventional cochlear implants (CIs) and yielded satisfactory performance under quiet conditions. However, when noise signals are involved, both the electric signal and the acoustic signal may be distorted, resulting in poor recognition performance. To suppress noise effects, speech enhancement (SE) is a necessary unit in EAS devices. Recently, a time-domain speech enhancement algorithm based on fully convolutional neural networks (FCNs) with a short-time objective intelligibility (STOI)-based objective function (termed FCN(S) for short) has received increasing attention due to its simple structure and its effectiveness in restoring clean speech signals from noisy counterparts. With evidence showing the benefits of FCN(S) for normal speech, this study set out to assess its ability to improve the intelligibility of EAS-simulated speech. Objective evaluations and listening tests were conducted to examine the performance of FCN(S) in improving the speech intelligibility of normal and vocoded speech in noisy environments. The experimental results show that, compared with the traditional minimum mean square error SE method and the deep denoising autoencoder SE method, FCN(S) obtains a better gain in speech intelligibility for normal as well as vocoded speech. This study, being the first to evaluate deep learning SE approaches for EAS, confirms that FCN(S) is an effective SE approach that may potentially be integrated into an EAS processor to benefit users in noisy environments.
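The defining property of the time-domain FCN above is that raw waveform goes in and raw waveform comes out through convolutions alone, with no fully connected layers. A minimal numpy sketch of that structure follows; the kernel lengths, depth, and random weights are illustrative assumptions (a real FCN(S) is trained with a STOI-based loss, which is omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, kernel):
    """'Same'-length 1-D convolution with edge padding (odd kernel)."""
    pad = len(kernel) // 2
    return np.convolve(np.pad(x, pad, mode="edge"), kernel, mode="valid")

def fcn_denoise(waveform, kernels):
    """Stack of 1-D convolutions with tanh nonlinearities: waveform in,
    waveform out, no fully connected layers anywhere."""
    y = waveform
    for k in kernels[:-1]:
        y = np.tanh(conv1d_same(y, k))
    return conv1d_same(y, kernels[-1])  # linear output layer

# Three untrained layers with 55-tap kernels, applied to 100 ms at 16 kHz.
kernels = [rng.standard_normal(55) * 0.05 for _ in range(3)]
noisy = rng.standard_normal(1600)
enhanced = fcn_denoise(noisy, kernels)
print(enhanced.shape)
```

Because every layer is a convolution, the same model applies to inputs of any length, which is what makes the fully convolutional design attractive for waveform-level enhancement.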
16.
Tama BA, Kim DH, Kim G, Kim SW, Lee S. Recent Advances in the Application of Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery. Clin Exp Otorhinolaryngol 2020; 13:326-339. [PMID: 32631041] [PMCID: PMC7669308] [DOI: 10.21053/ceo.2020.00654]
Abstract
This study presents an up-to-date survey of the use of artificial intelligence (AI) in the field of otorhinolaryngology, considering opportunities, research challenges, and research directions. We searched PubMed, the Cochrane Central Register of Controlled Trials, Embase, and the Web of Science. We initially retrieved 458 articles. The exclusion of non-English publications and duplicates yielded a total of 90 remaining studies. These 90 studies were divided into those analyzing medical images, voice, medical devices, and clinical diagnoses and treatments. Most studies (42.2%, 38/90) used AI for image-based analysis, followed by clinical diagnoses and treatments (24 studies). Each of the remaining two subcategories included 14 studies. Machine learning and deep learning have been extensively applied in the field of otorhinolaryngology. However, the performance of AI models varies and research challenges remain.
Affiliation(s)
- Bayu Adhi Tama: Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Do Hyun Kim: Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Gyuwon Kim: Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Soo Whan Kim: Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Seungchul Lee: Department of Mechanical Engineering and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Korea
17.
Improving Speech Quality for Hearing Aid Applications Based on Wiener Filter and Composite of Deep Denoising Autoencoders. Signals 2020. [DOI: 10.3390/signals1020008]
Abstract
In hearing aid devices, speech enhancement techniques are a critical component to enable users with hearing loss to attain improved speech quality under noisy conditions. Recently, the deep denoising autoencoder (DDAE) was adopted successfully for recovering the desired speech from noisy observations. However, a single DDAE cannot extract contextual information sufficiently due to the poor generalization in an unknown signal-to-noise ratio (SNR), the local minima, and the fact that the enhanced output shows some residual noise and some level of discontinuity. In this paper, we propose a hybrid approach for hearing aid applications based on two stages: (1) the Wiener filter, which attenuates the noise component and generates a clean speech signal; (2) a composite of three DDAEs with different window lengths, each of which is specialized for a specific enhancement task. Two typical high-frequency hearing loss audiograms were used to test the performance of the approach: Audiogram 1 = (0, 0, 0, 60, 80, 90) and Audiogram 2 = (0, 15, 30, 60, 80, 85). The hearing-aid speech perception index, the hearing-aid speech quality index, and the perceptual evaluation of speech quality were used to evaluate the performance. The experimental results show that the proposed method achieved significantly better results compared with the Wiener filter or a single deep denoising autoencoder alone.
18.
Mourão GL, Costa MH, Paul S. Speech Intelligibility for Cochlear Implant Users with the MMSE Noise-Reduction Time-Frequency Mask. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101982]
19.
Xu D, Zheng D, Chen F. Studying the Effect of Carrier Type on the Perception of Vocoded Stimuli via Mismatch Negativity. Annu Int Conf IEEE Eng Med Biol Soc 2019:3167-3170. [PMID: 31946560] [DOI: 10.1109/embc.2019.8856932]
Abstract
Vocoder processing has long been used in many studies to examine how acoustic cues affect speech understanding and auditory processing. Early behavioral studies have shown that the type of carrier (i.e., pure-tone or noise) used in the vocoding process affects the intelligibility of the perceived speech, and that tone-vocoded stimuli have a perceptual advantage over noise-vocoded stimuli. This work further assessed whether the auditory evoked cortical response could objectively measure the perceptual difference between the two types of vocoded stimuli using an oddball-paradigm event-related potential (ERP) experiment. A vowel stimulus was processed by noise- and tone-vocoding processes, and the processed stimuli were presented to normal-hearing listeners in an ERP experiment. The noise-vocoded and tone-vocoded vowel stimuli served as the deviant stimuli and the non-vocoded vowel stimulus as the standard stimulus. Experimental results showed that the tone-vocoded stimulus evoked a significantly larger mismatch negativity (MMN) amplitude and a significantly shorter MMN peak latency than the noise-vocoded stimulus did. The results suggest that, compared to the noise-vocoded stimulus, the tone-vocoded stimulus had a larger perceptual difference relative to the reference stimulus, and that this effect of the carrier signal is reflected in the MMN response.
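The tone- versus noise-carrier distinction above comes down to what signal the band envelopes modulate during resynthesis. A minimal channel-vocoder sketch follows; the band edges, filter order, and envelope extraction via the Hilbert transform are generic assumptions, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000  # assumed sampling rate

def vocode(signal, edges, carrier="tone", fs=FS):
    """Channel vocoder: per-band temporal envelopes modulate either a
    band-centre sine carrier ('tone') or bandpassed noise ('noise')."""
    rng = np.random.default_rng(0)
    t = np.arange(len(signal)) / fs
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))  # temporal envelope of this band
        if carrier == "tone":
            c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)  # geometric centre
        else:
            c = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += env * c
    return out

edges = [100, 400, 1000, 2400, 6000]  # four illustrative bands
speech = np.sin(2 * np.pi * 500 * np.arange(FS) / FS)  # stand-in signal
tone_voc = vocode(speech, edges, "tone")
noise_voc = vocode(speech, edges, "noise")
print(tone_voc.shape, noise_voc.shape)
```

Both outputs preserve the same band envelopes; only the fine structure of the carrier differs, which is exactly the manipulation whose perceptual effect the MMN experiment probes.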
20.
Machine Learning and Cochlear Implantation-A Structured Review of Opportunities and Challenges. Otol Neurotol 2019; 41:e36-e45. [PMID: 31644477] [DOI: 10.1097/mao.0000000000002440]
Abstract
OBJECTIVE The use of machine learning technology to automate intellectual processes and boost clinical process efficiency in medicine has exploded in the past 5 years. Machine learning excels in automating pattern recognition and in adapting learned representations to new settings. Moreover, machine learning techniques have the advantage of incorporating complexity and are free from many of the limitations of traditional deterministic approaches. Cochlear implants (CI) are a unique fit for machine learning techniques given the need for optimization of signal processing to fit complex environmental scenarios and individual patients' CI MAPping. However, there are many other opportunities where machine learning may assist in CI beyond signal processing. The objective of this review was to synthesize past applications of machine learning technologies for pediatric and adult CI and describe novel opportunities for research and development. DATA SOURCES The PubMed/MEDLINE, EMBASE, Scopus, and ISI Web of Knowledge databases were mined using a directed search strategy to identify the nexus between CI and artificial intelligence/machine learning literature. STUDY SELECTION Non-English language articles, articles without an available abstract or full-text, and nonrelevant articles were manually appraised and excluded. Included articles were evaluated for specific machine learning methodologies, content, and application success. DATA SYNTHESIS The database search identified 298 articles. Two hundred fifty-nine articles (86.9%) were excluded based on the available abstract/full-text, language, and relevance. The remaining 39 articles were included in the review analysis. There was a marked increase in year-over-year publications from 2013 to 2018. 
Applications of machine learning technologies involved speech/signal processing optimization (17 articles; 43.6%), automated evoked potential measurement (6; 15.4%), postoperative performance/efficacy prediction (5; 12.8%), surgical anatomy location prediction (3; 7.7%), and robotics, electrode placement performance, and biomaterials performance (2 each; 5.1%). CONCLUSION The relationship between CI and artificial intelligence is strengthening, with a recent increase in publications reporting successful applications. Considerable effort has been directed toward augmenting signal processing and automating postoperative MAPping using machine learning algorithms. Other promising applications include augmenting CI surgery mechanics and personalized medicine approaches for boosting CI patient performance. Future opportunities include addressing scalability and the research and clinical communities' acceptance of machine learning algorithms as effective techniques.
21.
Multi-objective learning based speech enhancement method to increase speech quality and intelligibility for hearing aid device users. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.09.010]
22.
Lv SX, Peng L, Wang L. Stacked autoencoder with echo-state regression for tourism demand forecasting using search query data. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2018.08.024]
23.
Lai YH, Zheng WZ, Tang ST, Fang SH, Liao WH, Tsao Y. Improving the performance of hearing aids in noisy environments based on deep learning technology. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:404-408. [PMID: 30440419] [DOI: 10.1109/embc.2018.8512277]
Abstract
The performance of a deep-learning-based speech enhancement (SE) technology for hearing aid users, called a deep denoising autoencoder (DDAE), was investigated. The hearing-aid speech perception index (HASPI) and the hearing-aid sound quality index (HASQI), which are two well-known evaluation metrics for speech intelligibility and quality, were used to evaluate the performance of the DDAE SE approach in two typical high-frequency hearing loss (HFHL) audiograms. Our experimental results show that the DDAE SE approach yields higher intelligibility and quality scores than two classical SE approaches. These results suggest that a deep-learning-based SE method could be used to improve speech intelligibility and quality for hearing aid users in noisy environments.
24.
Xu D, Wang L, Chen F. An ERP Study on the Combined-stimulation Advantage in Vocoder Simulations. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:2442-2445. [PMID: 30440901] [DOI: 10.1109/embc.2018.8512890]
Abstract
Electric hearing is presently the only treatment solution for patients with profound-to-severe hearing loss. For those patients also preserving low-frequency residual hearing in the ipsilateral ear, combined electric-and-acoustic stimulation (EAS) can notably improve speech understanding relative to electric-only (E-only) hearing. Early behavioral studies have consistently shown the advantage of combined stimulation. The aim of this work was to objectively examine the advantage of combined stimulation over electric-only hearing using an oddball-paradigm event-related potential (ERP) experiment. The vowel stimulus was processed by vocoding processes simulating the E-only and EAS conditions, and the generated stimuli were presented to normal-hearing listeners in the ERP experiment. Experimental results showed that the mismatch negativity (MMN) response elicited in the combined-stimulation condition featured a smaller peak amplitude and a more delayed peak latency than that in the E-only condition. The MMN results demonstrated that, compared with the ERP response elicited in the E-only condition, the response in the combined-stimulation condition was much closer to that elicited by the full-spectrum stimulus, yielding neurophysiological evidence for the combined-stimulation advantage.
25.
Deep Learning–Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients. Ear Hear 2018; 39:795-809. [DOI: 10.1097/aud.0000000000000537]
26.
Chen F, Chen J. Effects of fundamental frequency contour on understanding Mandarin sentences in bimodal hearing simulations. J Acoust Soc Am 2018; 143:EL354. [PMID: 29857756] [DOI: 10.1121/1.5037720]
Abstract
Fundamental frequency (F0) contour carries important information for understanding a tonal language. The present work assessed the effects of F0 contour on understanding Mandarin sentences in bimodal hearing simulations under three conditions: acoustic-only, electric-only, and combined stimulation. The test stimuli were synthesized Mandarin sentences, with each word carrying a normal, flat, or randomly assigned lexical tone, presented to normal-hearing Mandarin-speaking listeners for recognition. Experimental results showed that changing the F0 contour significantly affected the perception of Mandarin sentences under all three conditions. The combined-stimulation advantage was observed only for test stimuli with the normal F0 contour.
Affiliation(s)
- Fei Chen: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Xueyuan Road 1088#, Xili, Nanshan District, Shenzhen, China
- Jing Chen: Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
27.
Işil Ç, Yorulmaz M, Solmaz B, Turhan AB, Yurdakul C, Ünlü S, Ozbay E, Koç A. Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders. Appl Opt 2018; 57:2545-2552. [PMID: 29714238] [DOI: 10.1364/ao.57.002545]
Abstract
Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve resolution of images of L-shaped nanostructures. During training, our method utilizes microscope image patches and their corresponding manual truth image patches in order to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen input.
28.
Hou JC, Wang SS, Lai YH, Tsao Y, Chang HW, Wang HM. Audio-Visual Speech Enhancement Using Multimodal Deep Convolutional Neural Networks. IEEE Trans Emerg Top Comput Intell 2018. [DOI: 10.1109/tetci.2017.2784878]