1
Dimski T, Brandenburger T, Vollmer C, Kindgen-Milles D. A safe and effective protocol for postdilution hemofiltration with regional citrate anticoagulation. BMC Nephrol 2024; 25:218. PMID: 38982339; PMCID: PMC11234626; DOI: 10.1186/s12882-024-03659-y. Open access.
Abstract
BACKGROUND Regional citrate anticoagulation (RCA) is recommended during continuous renal replacement therapy. Compared to systemic anticoagulation, RCA provides a longer filter lifespan but carries a risk of metabolic alkalosis and impaired calcium homeostasis. Surprisingly, most RCA protocols are designed for continuous veno-venous hemodialysis or hemodiafiltration. Effective protocols for continuous veno-venous hemofiltration (CVVH) are rare, although CVVH is a standard treatment for high-molecular-weight clearance. We therefore evaluated a new RCA protocol for postdilution CVVH. METHODS This was a monocentric prospective interventional study to evaluate a new RCA protocol for postdilution CVVH. We recruited surgical patients with stage III acute kidney injury who needed renal replacement therapy. We recorded dialysis and RCA data as well as hemodynamic and laboratory parameters during treatment sessions of 72 h. The primary endpoint was filter patency at 72 h. The major safety parameters were metabolic alkalosis and severe hypocalcemia at any time. RESULTS We included 38 patients who underwent 66 treatment sessions. The mean filter lifespan was 66 ± 12 h, and 44 of 66 filters (66%) were patent at 72 h. After censoring for non-CVVH-related cessation of treatment, 83% of all filters were patent at 72 h. The delivered dialysis dose was 28 ± 5 ml/kg BW/h. Serum levels of creatinine, urea, and beta2-microglobulin decreased significantly from day 0 to day 3. Metabolic alkalosis occurred in one patient. An ionized calcium (iCa++) below 1.0 mmol/L occurred in four patients. Citrate accumulation did not occur. CONCLUSIONS We describe a safe, effective, and easy-to-use RCA protocol for postdilution CVVH. The protocol provides a long and sustained filter lifespan without serious adverse effects. The risk of metabolic alkalosis and hypocalcemia is low. Using this protocol, the recommended dialysis dose can be delivered safely with effective clearance of low- and middle-molecular-weight molecules.
TRIAL REGISTRATION The study was approved by the medical ethics committee of Heinrich-Heine University Duesseldorf (No. 2018-82KFogU). The trial was registered in the local study register of the university (No: 2018044660) on 07/04/2018 and was retrospectively registered at ClinicalTrials.gov (ClinicalTrials.gov Identifier: NCT03969966) on 31/05/2019.
Affiliation(s)
- Thomas Dimski
- Department of Anesthesiology, University Hospital Duesseldorf, Heinrich-Heine University Duesseldorf, Moorenstr. 5, 40225 Duesseldorf, Germany
- Timo Brandenburger
- Department of Anesthesiology, University Hospital Duesseldorf, Heinrich-Heine University Duesseldorf, Moorenstr. 5, 40225 Duesseldorf, Germany
- Christian Vollmer
- Department of Anesthesiology, University Hospital Duesseldorf, Heinrich-Heine University Duesseldorf, Moorenstr. 5, 40225 Duesseldorf, Germany
- Detlef Kindgen-Milles
- Department of Anesthesiology, University Hospital Duesseldorf, Heinrich-Heine University Duesseldorf, Moorenstr. 5, 40225 Duesseldorf, Germany
2
Pham TD, Holmes SB, Zou L, Patel M, Coulthard P. Diagnosis of pathological speech with streamlined features for long short-term memory learning. Comput Biol Med 2024; 170:107976. PMID: 38219647; DOI: 10.1016/j.compbiomed.2024.107976.
Abstract
BACKGROUND Pathological speech diagnosis is crucial for identifying and treating various speech disorders. Accurate diagnosis aids in developing targeted intervention strategies, improving patients' communication abilities, and enhancing their overall quality of life. With the rising global incidence of speech-related conditions, including those related to oral health, the need for efficient and reliable diagnostic tools has become paramount, emphasizing the significance of advanced research in this field. METHODS This paper introduces novel features for deep learning in the analysis of short voice signals. It proposes the incorporation of time-space and time-frequency features to accurately discern between two distinct groups: individuals exhibiting normal vocal patterns and those manifesting pathological voice conditions. These advancements aim to enhance the precision and reliability of diagnostic procedures, paving the way for more targeted treatment approaches. RESULTS Utilizing a publicly available voice database, this study carried out training and validation using long short-term memory (LSTM) networks trained on the combined features, together with a data balancing strategy. The proposed approach yielded promising performance metrics: 90% accuracy, 93% sensitivity, 87% specificity, 88% precision, an F1 score of 0.90, and an area under the receiver operating characteristic curve of 0.96. The results surpassed those obtained by networks trained on wavelet-time scattering coefficients, as well as several algorithms trained with alternative feature types. CONCLUSIONS The incorporation of time-frequency and time-space features extracted from short segments of voice signals for LSTM learning demonstrates significant promise as an AI tool for the diagnosis of speech pathology. The proposed approach has the potential to enhance accuracy and allow real-time pathological speech assessment, thereby facilitating more targeted and effective therapeutic interventions.
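The core mechanism used above, an LSTM consuming per-frame voice features and emitting a class probability, can be sketched in plain NumPy. This is an illustrative forward pass only, not the authors' implementation: the weights are random stand-ins, and the 13-dimensional frames merely mimic MFCC-like input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, params):
    """Run a single-layer LSTM over a feature sequence x_seq of shape (T, D)."""
    Wx, Wh, b = params["Wx"], params["Wh"], params["b"]
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in x_seq:
        z = x @ Wx + h @ Wh + b            # (4H,) stacked gate pre-activations
        i, f, o, g = np.split(z, 4)        # input, forget, output gates + candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)         # cell-state update
        h = o * np.tanh(c)                 # hidden state carried to the next frame
    return h

rng = np.random.default_rng(0)
T, D, H = 50, 13, 8                        # 50 frames, 13 coefficients, 8 hidden units
params = {
    "Wx": rng.normal(0, 0.1, (D, 4 * H)),
    "Wh": rng.normal(0, 0.1, (H, 4 * H)),
    "b": np.zeros(4 * H),
}
w_out = rng.normal(0, 0.1, H)              # read-out weights for binary classification
h_last = lstm_forward(rng.normal(size=(T, D)), params)
p_pathological = sigmoid(h_last @ w_out)   # probability of the pathological class
```

In practice the parameters would be learned by backpropagation through time on the balanced training set; the sketch only shows how a short feature sequence collapses into one class probability.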
Affiliation(s)
- Tuan D Pham
- Barts and The London Faculty of Medicine and Dentistry, Queen Mary University of London, Turner Street, E1 2AD, London, UK
- Simon B Holmes
- Barts and The London Faculty of Medicine and Dentistry, Queen Mary University of London, Turner Street, E1 2AD, London, UK
- Lifong Zou
- Barts and The London Faculty of Medicine and Dentistry, Queen Mary University of London, Turner Street, E1 2AD, London, UK
- Mangala Patel
- Barts and The London Faculty of Medicine and Dentistry, Queen Mary University of London, Turner Street, E1 2AD, London, UK
- Paul Coulthard
- Barts and The London Faculty of Medicine and Dentistry, Queen Mary University of London, Turner Street, E1 2AD, London, UK
3
Verma V, Benjwal A, Chhabra A, Singh SK, Kumar S, Gupta BB, Arya V, Chui KT. A novel hybrid model integrating MFCC and acoustic parameters for voice disorder detection. Sci Rep 2023; 13:22719. PMID: 38123627; PMCID: PMC10733415; DOI: 10.1038/s41598-023-49869-6. Open access.
Abstract
Voice is an essential component of human communication, serving as a fundamental medium for expressing thoughts, emotions, and ideas. Disruptions in vocal fold vibratory patterns can lead to voice disorders, which can have a profound impact on interpersonal interactions. Early detection of voice disorders is crucial for improving voice health and quality of life. This research proposes a novel methodology called VDDMFS [voice disorder detection using MFCC (Mel-frequency cepstral coefficients), fundamental frequency and spectral centroid], which combines an artificial neural network (ANN) trained on acoustic attributes and a long short-term memory (LSTM) model trained on MFCC attributes. The probabilities generated by the ANN and LSTM models are then stacked and used as input to XGBoost, which classifies a voice as disordered or healthy, resulting in more accurate voice disorder detection. This approach achieved promising results, with an accuracy of 95.67%, sensitivity of 95.36%, specificity of 96.49%, and an F1 score of 96.9%, outperforming existing techniques.
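The stacking step described above, feeding base-model probabilities into a meta-classifier, can be illustrated with a toy example. The paper's meta-learner is XGBoost; the sketch below substitutes a hand-rolled logistic regression so it stays dependency-free, and the ANN/LSTM probabilities are synthetic stand-ins, not real model outputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_meta_learner(p_ann, p_lstm, y, lr=0.5, epochs=500):
    """Fit a logistic meta-learner on stacked base-model probabilities."""
    X = np.column_stack([p_ann, p_lstm])   # (N, 2): one column per base model
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the log-loss w.r.t. w
        b -= lr * np.mean(p - y)
    return w, b

# Toy base-model outputs: disordered voices (y=1) receive higher probabilities.
rng = np.random.default_rng(1)
y = np.array([0] * 50 + [1] * 50)
p_ann = np.clip(0.30 + 0.40 * y + rng.normal(0, 0.1, 100), 0.01, 0.99)
p_lstm = np.clip(0.25 + 0.50 * y + rng.normal(0, 0.1, 100), 0.01, 0.99)

w, b = fit_meta_learner(p_ann, p_lstm, y)
pred = (sigmoid(np.column_stack([p_ann, p_lstm]) @ w + b) > 0.5).astype(int)
accuracy = np.mean(pred == y)
```

The design point is the same as in the paper: the meta-learner sees only the two base probabilities per sample, so it learns how much to trust each model rather than re-learning the features.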
Affiliation(s)
- Vyom Verma
- Department of Computer Science and Engineering, Chandigarh College of Engineering and Technology, Sector-26, Chandigarh, India
- Anish Benjwal
- Department of Computer Science and Engineering, Chandigarh College of Engineering and Technology, Sector-26, Chandigarh, India
- Amit Chhabra
- Department of Computer Science and Engineering, Chandigarh College of Engineering and Technology, Sector-26, Chandigarh, India
- Sunil K Singh
- Department of Computer Science and Engineering, Chandigarh College of Engineering and Technology, Sector-26, Chandigarh, India
- Sudhakar Kumar
- Department of Computer Science and Engineering, Chandigarh College of Engineering and Technology, Sector-26, Chandigarh, India
- Brij B Gupta
- Department of Computer Science and Information Engineering, Asia University, Taichung, 413, Taiwan
- Kyung Hee University, 26 Kyungheedae-ro, Dongdaemun-gu, 02447, Seoul, Korea
- Symbiosis Centre for Information Technology (SCIT), Symbiosis International University, Pune, India
- Department of Electrical and Computer Engineering, Lebanese American University, 1102, Beirut, Lebanon
- Center for Interdisciplinary Research, University of Petroleum and Energy Studies (UPES), Dehradun, India
- Varsha Arya
- Department of Electrical and Computer Engineering, Lebanese American University, 1102, Beirut, Lebanon
- Department of Business Administration, Asia University, Taichung, 413, Taiwan
- Kwok Tai Chui
- Department of Electronic Engineering and Computer Science, School of Science and Technology, Hong Kong Metropolitan University (HKMU), Kowloon, Hong Kong
4
Computational and Mathematical Methods in Medicine. Retracted: An Analytical Study of Speech Pathology Detection Based on MFCC and Deep Neural Networks. Comput Math Methods Med 2023; 2023:9829813. PMID: 38124984; PMCID: PMC10732986; DOI: 10.1155/2023/9829813.
Abstract
[This retracts the article DOI: 10.1155/2022/7814952.].
5
Contreras RC, Viana MS, Fonseca ES, Dos Santos FL, Zanin RB, Guido RC. An Experimental Analysis on Multicepstral Projection Representation Strategies for Dysphonia Detection. Sensors (Basel) 2023; 23:5196. PMID: 37299922; DOI: 10.3390/s23115196.
Abstract
Biometrics-based authentication has become the most well-established form of user recognition in systems that demand a certain level of security, such as access to the workplace or to one's own bank account. Among all biometrics, voice receives special attention due to factors such as ease of collection, the low cost of reading devices, and the large body of literature and software packages available for use. However, the ability of this biometric to represent the individual can be impaired by the phenomenon known as dysphonia, a change in the voice signal caused by a disease acting on the vocal apparatus. As a consequence, for example, a user with the flu may not be properly authenticated by the recognition system. It is therefore important to develop automatic voice dysphonia detection techniques. In this work, we propose a new framework based on representing the voice signal by multiple projections of cepstral coefficients to detect dysphonic alterations in the voice through machine learning techniques. Most of the best-known cepstral coefficient extraction techniques in the literature are mapped and analyzed separately and together with measures related to the fundamental frequency of the voice signal, and their representation capacity is evaluated on three classifiers. Finally, experiments on a subset of the Saarbruecken Voice Database demonstrate the effectiveness of the proposed framework in detecting the presence of dysphonia in the voice.
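Cepstral coefficients of the kind this framework projects can be computed from a voice frame with a few FFT operations. The sketch below computes the real cepstrum of a synthetic voiced frame; it is a generic textbook illustration (signal, frame length, and coefficient count are arbitrary choices), not the paper's extraction pipeline.

```python
import numpy as np

def real_cepstrum(frame, n_coeffs=13):
    """Real cepstrum of a windowed frame: inverse FFT of the log magnitude
    spectrum. The low-order coefficients summarize the spectral envelope,
    which is what dysphonia-sensitive cepstral features try to capture."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-10)   # small offset avoids log(0)
    cepstrum = np.fft.irfft(log_mag)
    return cepstrum[:n_coeffs]

# Synthetic 40 ms "voiced" frame at 16 kHz: a 150 Hz fundamental plus
# decaying harmonics, mimicking a sustained vowel.
fs, f0 = 16000, 150.0
t = np.arange(int(0.04 * fs)) / fs
frame = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))

coeffs = real_cepstrum(frame)
```

A multicepstral strategy in the paper's sense would extract several such coefficient families (e.g. mel-, linear-prediction-, and bark-based variants) per frame and project them jointly; the block shows only the common FFT-based core.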
Affiliation(s)
- Rodrigo Colnago Contreras
- Department of Computer Science and Statistics, Institute of Biosciences, Letters and Exact Sciences, São Paulo State University, São José do Rio Preto 15054-000, SP, Brazil
- Rodrigo Bruno Zanin
- Faculty of Architecture and Engineering, Mato Grosso State University, Cáceres 78217-900, MT, Brazil
- Rodrigo Capobianco Guido
- Department of Computer Science and Statistics, Institute of Biosciences, Letters and Exact Sciences, São Paulo State University, São José do Rio Preto 15054-000, SP, Brazil
6
Donati E, Chousidis C, Ribeiro HDM, Russo N. Classification of Speaking and Singing Voices Using Bioimpedance Measurements and Deep Learning. J Voice 2023:S0892-1997(23)00120-0. PMID: 37156686; DOI: 10.1016/j.jvoice.2023.03.018.
Abstract
The acts of speaking and singing are distinct phenomena displaying different characteristics. The classification and distinction of these voice acts is usually approached using voice audio recordings and microphones. The use of audio recordings, however, can become challenging and computationally expensive due to the complexity of the voice signal. The research presented in this paper addresses this issue by implementing a deep learning classifier of speaking and singing voices based on bioimpedance measurements in place of audio recordings. In addition, the proposed research aims to develop real-time voice act classification for integration with voice-to-MIDI conversion. For these purposes, a system was designed, implemented, and tested using electroglottographic signals, Mel-frequency cepstral coefficients, and a deep neural network. The lack of datasets for training the model was tackled by creating a dedicated dataset of 7200 bioimpedance measurements of both singing and speaking. The use of bioimpedance measurements delivers high classification accuracy while keeping computational needs low for both preprocessing and classification. These characteristics, in turn, allow fast deployment of the system for near-real-time applications. After training, the system was broadly tested, achieving a testing accuracy of 92% to 94%.
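The MFCC features used above begin with a mel-spaced triangular filterbank applied to each frame's magnitude spectrum. The construction below is a generic textbook sketch with arbitrary parameter choices (26 filters, 512-point FFT, 16 kHz), not the authors' configuration for bioimpedance signals.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular mel filterbank mapping an FFT magnitude spectrum onto
    n_filters mel-spaced energy bands, the first stage of MFCC extraction."""
    # Filter edges: equally spaced on the mel scale from 0 Hz to Nyquist.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for j in range(left, center):          # rising edge of the triangle
            fb[i - 1, j] = (j - left) / max(center - left, 1)
        for j in range(center, right):         # falling edge of the triangle
            fb[i - 1, j] = (right - j) / max(right - center, 1)
    return fb

fb = mel_filterbank(n_filters=26, n_fft=512, fs=16000)
# Each row is one triangular filter over the 257 magnitude bins; a frame's
# MFCCs are the DCT of the log of (filterbank @ power_spectrum).
```

Taking the logarithm of the band energies and then a discrete cosine transform yields the cepstral coefficients that feed the deep network.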
Affiliation(s)
- Eugenio Donati
- School of Computing and Engineering, University of West London, London, UK
- Christos Chousidis
- Department of Music and Media, Institute of Sound Recording, University of Surrey, Guildford, UK
- Nicola Russo
- School of Computing and Engineering, University of West London, London, UK