1. Latif S, Qadir J, Qayyum A, Usama M, Younis S. Speech Technology for Healthcare: Opportunities, Challenges, and State of the Art. IEEE Rev Biomed Eng 2021; 14:342-356. [PMID: 32746367] [DOI: 10.1109/rbme.2020.3006860]
Abstract
Speech technology remains underexplored in healthcare, even though modern advances, especially those driven by deep learning (DL), offer unprecedented opportunities for transforming the industry. In this paper, we focus on the enormous potential of speech technology for revolutionising the healthcare domain. More specifically, we review state-of-the-art approaches in automatic speech recognition (ASR), speech synthesis or text-to-speech (TTS), and health detection and monitoring using speech signals. We also present a comprehensive overview of the challenges hindering the growth of speech-based services in healthcare. Finally, we discuss open issues and suggest research directions aimed at fully leveraging complementary technologies to make speech-based healthcare solutions more prevalent and effective.
2.
3. Zech J, Forde J, Titano JJ, Kaji D, Costa A, Oermann EK. Detecting insertion, substitution, and deletion errors in radiology reports using neural sequence-to-sequence models. Ann Transl Med 2019; 7:233. [PMID: 31317003] [DOI: 10.21037/atm.2018.08.11]
Abstract
Background Errors in grammar, spelling, and usage in radiology reports are common. To automatically detect inappropriate insertions, deletions, and substitutions of words in radiology reports, we proposed using a neural sequence-to-sequence (seq2seq) model. Methods Head CT and chest radiograph reports from Mount Sinai Hospital (MSH) (n=61,722 and 818,978, respectively), Mount Sinai Queens (MSQ) (n=30,145 and 194,309, respectively) and MIMIC-III (n=32,259 and 54,685, respectively) were converted into sentences. Insertions, substitutions, and deletions of words were randomly introduced. Seq2seq models were trained using corrupted sentences as input to predict the original, uncorrupted sentences. Three models were trained using head CTs from MSH, chest radiographs from MSH, and head CTs from all three collections. Model performance was assessed across different sites and modalities. A sample of original, uncorrupted sentences was manually reviewed for any error in syntax, usage, or spelling to estimate the real-world proofreading performance of the algorithm. Results Seq2seq detected 90.3% and 88.2% of corrupted sentences with 97.7% and 98.8% specificity in same-site, same-modality test sets for head CTs and chest radiographs, respectively. Manual review of original, uncorrupted same-site, same-modality head CT sentences demonstrated seq2seq positive predictive value (PPV) of 0.393 (157/400; 95% CI, 0.346-0.441) and negative predictive value (NPV) of 0.986 (789/800; 95% CI, 0.976-0.992) for detecting sentences containing real-world errors, with estimated sensitivity of 0.389 (95% CI, 0.267-0.542) and specificity of 0.986 (95% CI, 0.985-0.987) over n=86,211 uncorrupted training examples. Conclusions Seq2seq models can be highly effective at detecting erroneous insertions, deletions, and substitutions of words in radiology reports. To achieve high performance, these models require site- and modality-specific training examples.
Incorporating additional targeted training data could further improve performance in detecting real-world errors in reports.
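The synthetic-error generation described in the Methods above (randomly introducing word insertions, substitutions, and deletions, then training a seq2seq model to restore the original sentence) can be sketched as follows. The vocabulary, the single-edit-per-sentence choice, and the example sentence are illustrative assumptions, not the authors' exact procedure.

```python
import random

def corrupt(sentence: str, rng: random.Random,
            vocab=("mass", "left", "no", "acute")):
    """Apply one random word-level corruption (insert/substitute/delete).

    The substitute vocabulary here is a hypothetical placeholder; the
    study drew corruptions from its own report corpora.
    """
    words = sentence.split()
    op = rng.choice(["insert", "substitute", "delete"])
    i = rng.randrange(len(words))
    if op == "insert":
        words.insert(i, rng.choice(vocab))
    elif op == "substitute":
        words[i] = rng.choice(vocab)
    else:
        del words[i]
    return " ".join(words), op

# A seq2seq model would then be trained on (corrupted, original) pairs,
# learning to map the corrupted sentence back to the uncorrupted one.
rng = random.Random(0)
corrupted, op = corrupt("No acute intracranial hemorrhage .", rng)
print(op, "->", corrupted)
```

At inference time, a sentence the model "corrects" into something different from its input is flagged as potentially erroneous, which is how the PPV/NPV figures above were estimated.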
Affiliation(s)
- John Zech
- Department of Radiology, Icahn School of Medicine, New York, NY, USA
- Joseph J Titano
- Department of Radiology, Icahn School of Medicine, New York, NY, USA
- Deepak Kaji
- Department of Neurosurgery, Icahn School of Medicine, New York, NY, USA
- Anthony Costa
- Department of Neurosurgery, Icahn School of Medicine, New York, NY, USA
- Eric Karl Oermann
- Department of Neurosurgery, Icahn School of Medicine, New York, NY, USA
4. Blackley SV, Huynh J, Wang L, Korach Z, Zhou L. Speech recognition for clinical documentation from 1990 to 2018: a systematic review. J Am Med Inform Assoc 2019; 26:324-338. [PMID: 30753666] [PMCID: PMC7647182] [DOI: 10.1093/jamia/ocy179]
Abstract
OBJECTIVE The study sought to review recent literature regarding use of speech recognition (SR) technology for clinical documentation and to understand the impact of SR on document accuracy, provider efficiency, institutional cost, and more. MATERIALS AND METHODS We searched 10 scientific and medical literature databases to find articles about clinician use of SR for documentation published between January 1, 1990, and October 15, 2018. We annotated included articles with their research topic(s), medical domain(s), and SR system(s) evaluated, and analyzed the results. RESULTS One hundred twenty-two articles were included. Forty-eight (39.3%) involved the radiology department exclusively, 10 (8.2%) involved emergency medicine, and 10 (8.2%) mentioned multiple departments. Forty-eight (39.3%) articles studied productivity; 20 (16.4%) studied the effect of SR on documentation time, with mixed findings. Decreased turnaround time was reported in all 19 (15.6%) studies in which it was evaluated. Twenty-nine (23.8%) studies conducted error analyses, though various evaluation metrics were used. The reported percentage of documents with errors ranged from 4.8% to 71%, and reported word error rates ranged from 7.4% to 38.7%. Seven (5.7%) studies assessed documentation-associated costs; 5 reported decreases and 2 reported increases. Many studies (44.3%) used products by Nuance Communications; other vendors included IBM (9.0%) and Philips (6.6%), and 7 (5.7%) used self-developed systems. CONCLUSION Despite widespread use of SR for clinical documentation, research on this topic remains largely heterogeneous, often using different evaluation metrics with mixed findings. Further, the increasing adoption of SR-assisted documentation in clinical settings beyond radiology warrants investigation of its use and effectiveness in these settings.
Affiliation(s)
- Suzanne V Blackley
- Clinical and Quality Analysis, Information Systems, Partners HealthCare, Boston, Massachusetts, USA
- Jessica Huynh
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Liqin Wang
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Zfania Korach
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Li Zhou
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
5. Kovacs MD, Cho MY, Burchett PF, Trambert M. Benefits of Integrated RIS/PACS/Reporting Due to Automatic Population of Templated Reports. Curr Probl Diagn Radiol 2019; 48:37-39. [DOI: 10.1067/j.cpradiol.2017.12.002]
6. Fernandes J, Brunton I, Strudwick G, Banik S, Strauss J. Physician experience with speech recognition software in psychiatry: usage and perspective. BMC Res Notes 2018; 11:690. [PMID: 30285818] [PMCID: PMC6167903] [DOI: 10.1186/s13104-018-3790-y]
Abstract
Objective The purpose of this paper is to extend a previous study by evaluating the use of speech recognition software in a clinical psychiatry milieu. Physicians (n = 55) at a psychiatric hospital participated in a limited implementation and were provided with training, licenses, and relevant devices. Post-implementation usage data were collected via the software, and a post-implementation survey was distributed 5 months after the technology was introduced. Results In the first month, 45 of 51 (88%) physicians were active users of the technology; however, after the full evaluation period only 53% were still active. The average active-user minutes and lines dictated per month remained consistent throughout the evaluation. The use of speech recognition software within a psychiatric setting is of value to some physicians. Our results indicate a post-implementation reduction in adoption, with stable usage among physicians who remained active users. Future studies to identify characteristics of users and/or technology that contribute to ongoing use would be of value.
Affiliation(s)
- John Fernandes
- Shannon Centennial Informatics Lab, Centre for Addiction and Mental Health, 1001 Queen St W, Toronto, M6J 1H4, Canada
- Gillian Strudwick
- Shannon Centennial Informatics Lab, Centre for Addiction and Mental Health, 1001 Queen St W, Toronto, M6J 1H4, Canada
- University of Toronto, Toronto, Canada
- John Strauss
- Shannon Centennial Informatics Lab, Centre for Addiction and Mental Health, 1001 Queen St W, Toronto, M6J 1H4, Canada
- University of Toronto, Toronto, Canada
7.
Abstract
The widespread use of technology in hospitals and the difficulty of sterilising computer controls has increased opportunities for the spread of pathogens. This leads to an interest in touchless user interfaces for computer systems. We present a review of touchless interaction with computer equipment in the hospital environment, based on a systematic search of the literature. Sterility provides an implied theme and motivation for the field as a whole, but other advantages, such as hands-busy settings, are also proposed. Overcoming hardware restrictions has been a major theme, but in recent research, technical difficulties have receded. Image navigation is the most frequently considered task and the operating room the most frequently considered environment. Gestures have been implemented for input, system and content control. Most of the studies found have small sample sizes and focus on feasibility, acceptability or gesture-recognition accuracy. We conclude this article with an agenda for future work.
8. Hammana I, Lepanto L, Poder T, Bellemare C, Ly MS. Speech recognition in the radiology department: a systematic review. Health Inf Manag 2016; 44:4-10. [PMID: 26157081] [DOI: 10.1177/183335831504400201]
Abstract
OBJECTIVE To conduct a systematic review of the literature describing the impact of speech recognition systems on report error rates and productivity in radiology departments. METHODS The search was conducted for relevant papers published from January 1992 to October 2013. Comparative studies reporting any of the following outcomes were selected: error rates, departmental productivity, and radiologist productivity. The retrieved studies were assessed for quality and risk of bias. RESULTS The literature search identified 85 potentially relevant publications, but, based on the inclusion and exclusion criteria, only 20 were included. Most studies were before and after assessments with no control group. There was a large amount of heterogeneity due to differences in the imaging modalities assessed and the outcomes measured. The percentage of reports containing at least one error varied from 4.8% to 89% for speech recognition, and from 2.1% to 22% for transcription. Departmental productivity was improved with decreases in report turnaround times varying from 35% to 99%. Most studies found a lengthening of radiologist dictation time. CONCLUSION Overall gains in departmental productivity were high, but radiologist productivity, as measured by the time to produce a report, was diminished.
Affiliation(s)
- Imane Hammana
- Centre hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Luigi Lepanto
- Centre hospitalier de l'Université de Montréal, Tour Saint-Antoine, Montréal, Québec, H2X 0A9, Canada
- Thomas Poder
- Centre hospitalier de l'Université de Sherbrooke, Sherbrooke, Québec, Canada
- My-Sandra Ly
- Université de Montréal, École Polytechnique, Montréal, Québec, Canada
9. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software. Ir J Med Sci 2016; 185:921-927. [PMID: 27696148] [DOI: 10.1007/s11845-016-1507-6]
Abstract
BACKGROUND Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. AIM To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. METHODS Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. RESULTS 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant', and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common error sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence, compared with 0.030 for plain film. Longer reports had higher error rates, with reports of more than 25 sentences containing an average of 1.23 errors per report, compared with 0.09 for reports of 0-5 sentences. CONCLUSION These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, some had the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
10. Minn MJ, Zandieh AR, Filice RW. Improving Radiology Report Quality by Rapidly Notifying Radiologist of Report Errors. J Digit Imaging 2015; 28:492-8. [PMID: 25694167] [DOI: 10.1007/s10278-015-9781-9]
Abstract
Radiology report errors occur for many reasons, including the use of pre-filled report templates, wrong-word substitution, nonsensical phrases, and missing words. Reports may also contain clinical errors that are not specific to speech recognition, including wrong laterality and gender-specific discrepancies. Our goal was to create a custom algorithm to detect potential gender and laterality mismatch errors and to notify the interpreting radiologists for rapid correction. A JavaScript algorithm was devised to flag gender and laterality mismatch errors by searching the text of the report for keywords and comparing them to parameters within the study's HL7 metadata (i.e., procedure type, patient sex). The error detection algorithm was retrospectively applied to 82,353 reports 4 months prior to its development and then prospectively to 309,304 reports 15 months after implementation. Flagged reports were reviewed individually by two radiologists for a true gender or laterality error and to determine if the errors were ultimately corrected. There was significant improvement in the number of flagged reports (pre, 198/82,353 [0.24%]; post, 628/309,304 [0.20%]; P = 0.04) and reports containing confirmed gender or laterality errors (pre, 116/82,353 [0.14%]; post, 285/309,304 [0.09%]; P < 0.0001) after implementing our error notification system. The number of flagged reports containing an error that were ultimately corrected improved dramatically after implementing the notification system (pre, 17/116 [15%]; post, 239/285 [84%]; P < 0.0001). We developed a successful automated tool for detecting and notifying radiologists of potential gender and laterality errors, allowing for rapid report correction and reducing the overall rate of report errors.
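The keyword-versus-metadata comparison this abstract describes can be sketched as follows. The original tool was JavaScript operating on HL7 fields; this Python sketch is our illustration, and the keyword lists (GENDER_TERMS, LATERALITY) and function names are hypothetical placeholders, not the study's actual rules.

```python
import re

# Hypothetical keyword lists; the published tool compared report text
# against HL7 metadata (procedure type, patient sex).
GENDER_TERMS = {"M": {"he", "his", "prostate"},
                "F": {"she", "her", "uterus", "ovary"}}
LATERALITY = {"left", "right"}

def flag_report(text: str, patient_sex: str, procedure: str):
    """Return potential mismatch flags for one report.

    patient_sex is 'M' or 'F'; procedure is the study description
    from the order metadata (e.g. 'XR WRIST LEFT').
    """
    words = set(re.findall(r"[a-z]+", text.lower()))
    flags = []
    # Gender mismatch: report uses terms tied to the opposite sex.
    opposite = "F" if patient_sex == "M" else "M"
    if words & GENDER_TERMS[opposite]:
        flags.append("gender mismatch")
    # Laterality mismatch: the procedure names one side but the
    # report text mentions the other side.
    proc_side = {s for s in LATERALITY if s in procedure.lower()}
    if proc_side and (words & LATERALITY) - proc_side:
        flags.append("laterality mismatch")
    return flags

print(flag_report("There is a fracture of the right distal radius.",
                  "F", "XR WRIST LEFT"))
```

In the study, flags like these were pushed to the interpreting radiologist in near real time, which is what drove the jump in the correction rate from 15% to 84%.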
Affiliation(s)
- Matthew J Minn
- MedStar Georgetown University Hospital, 3800 Reservoir Rd, NW, Washington, DC, 20007, USA
11. Stanescu AL, Parisi MT, Weinberger E, Ferguson MR, Otto RK, Iyer RS. Peer Review: Lessons Learned in a Pediatric Radiology Department. Curr Probl Diagn Radiol 2015; 45:139-48. [PMID: 26489791] [DOI: 10.1067/j.cpradiol.2015.09.001]
Abstract
The purpose of this article is to illustrate types of diagnostic errors and feedback given to radiologists, using cases to support and clarify these categories. A comment-enhanced peer review system may be leveraged to generate a comprehensive feedback categorization scheme. These include errors of observation, errors of interpretation, inadequate patient data gathering, errors of communication, interobserver variability, informational feedback, and compliments. Much of this feedback is captured through comments associated with interpretative agreements.
Affiliation(s)
- A Luana Stanescu
- Department of Radiology, Seattle Children's Hospital, University of Washington, Seattle, WA
- Marguerite T Parisi
- Department of Radiology, Seattle Children's Hospital, University of Washington, Seattle, WA
- Edward Weinberger
- Department of Radiology, Seattle Children's Hospital, University of Washington, Seattle, WA
- Mark R Ferguson
- Department of Radiology, Seattle Children's Hospital, University of Washington, Seattle, WA
- Randolph K Otto
- Department of Radiology, Seattle Children's Hospital, University of Washington, Seattle, WA
- Ramesh S Iyer
- Department of Radiology, Seattle Children's Hospital, University of Washington, Seattle, WA
12. du Toit J, Hattingh R, Pitcher R. The accuracy of radiology speech recognition reports in a multilingual South African teaching hospital. BMC Med Imaging 2015; 15:8. [PMID: 25879906] [PMCID: PMC4464850] [DOI: 10.1186/s12880-015-0048-1]
Abstract
Background Speech recognition (SR) technology, the process whereby spoken words are converted to digital text, has been used in radiology reporting since 1981. It was initially anticipated that SR would dominate radiology reporting, with claims of up to 99% accuracy, reduced turnaround times and significant cost savings. However, expectations have not yet been realised. The limited data available suggest SR reports have significantly higher levels of inaccuracy than traditional dictation transcription (DT) reports, as well as incurring greater aggregate costs. There has been little work on the clinical significance of such errors, however, and little is known of the impact of reporter seniority on the generation of errors, or the influence of system familiarity on reducing error rates. Furthermore, there have been conflicting findings on the accuracy of SR amongst users with English as first- and second-language respectively. Methods The aim of the study was to compare the accuracy of SR and DT reports in a resource-limited setting. The first 300 SR and the first 300 DT reports generated during March 2010 were retrieved from the hospital’s PACS, and reviewed by a single observer. Text errors were identified, and then classified as either clinically significant or insignificant based on their potential impact on patient management. In addition, a follow-up analysis was conducted exactly 4 years later. Results Of the original 300 SR reports analysed, 25.6% contained errors, with 9.6% being clinically significant. Only 9.3% of the DT reports contained errors, 2.3% having potential clinical impact. Both the overall difference in SR and DT error rates, and the difference in ‘clinically significant’ error rates (9.6% vs. 2.3%) were statistically significant. In the follow-up study, the overall SR error rate was strikingly similar at 24.3%, 6% being clinically significant. 
Radiologists with second-language English were more likely to generate reports containing errors, but level of seniority had no bearing. Conclusion SR technology consistently increased inaccuracies in Tygerberg Hospital (TBH) radiology reports, thereby potentially compromising patient care. Awareness of increased error rates in SR reports, particularly amongst those transcribing in a second-language, is important for effective implementation of SR in a multilingual healthcare environment.
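The headline comparison above (25.6% of 300 SR reports vs 9.3% of 300 DT reports containing errors, reported as statistically significant) can be checked with a standard two-proportion z-test. This stdlib-only sketch is our illustration of that check, not the authors' actual analysis; the error counts are reconstructed from the stated percentages.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 25.6% of 300 SR reports (~77) vs 9.3% of 300 DT reports (~28) with errors.
z, p = two_proportion_z(round(0.256 * 300), 300, round(0.093 * 300), 300)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With these reconstructed counts the difference is indeed highly significant (z above 5), consistent with the paper's conclusion that SR reports contained more errors than DT reports.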
Affiliation(s)
- Jacqueline du Toit
- Department of Diagnostic Radiology, Tygerberg Academic Hospital, Stellenbosch University, Francie van Zyl Avenue, Cape Town, 7700, South Africa.
- Retha Hattingh
- Department of Diagnostic Radiology, Tygerberg Academic Hospital, Stellenbosch University, Francie van Zyl Avenue, Cape Town, 7700, South Africa.
- Richard Pitcher
- Department of Diagnostic Radiology, Tygerberg Academic Hospital, Stellenbosch University, Francie van Zyl Avenue, Cape Town, 7700, South Africa.
13. Olisemeke B, Chen YF, Hemming K, Girling A. The effectiveness of service delivery initiatives at improving patients' waiting times in clinical radiology departments: a systematic review. J Digit Imaging 2014; 27:751-78. [PMID: 24888629] [PMCID: PMC4391068] [DOI: 10.1007/s10278-014-9706-z]
Abstract
We reviewed the literature for the impact of service delivery initiatives (SDIs) on patients' waiting times within radiology departments. We searched MEDLINE, EMBASE, CINAHL, INSPEC and The Cochrane Library for relevant articles published between 1995 and February 2013. The Cochrane EPOC risk of bias tool was used to assess the risk of bias in studies that met specified design criteria. Fifty-seven studies met the inclusion criteria. The types of SDI implemented included extended scope practice (ESP, three studies), quality management (12 studies), productivity-enhancing technologies (PETs, 29 studies), multiple interventions (11 studies), and outsourcing and pay-for-performance (one study each). The uncontrolled pre- and post-intervention and the post-intervention designs were used in 54 (95%) of the studies. The reporting quality was poor: many of the studies did not test and/or report the statistical significance of their results. The studies were highly heterogeneous, so meta-analysis was inappropriate. The following types of SDI showed promising results: extended scope practice; quality management methodologies including Six Sigma, Lean methodology, and continuous quality improvement; and productivity-enhancing technologies including speech recognition reporting, teleradiology and computerised physician order entry systems. We suggest improved study design and the mapping of definitions of patient waiting times in radiology to generic timelines as a starting point for making it less restrictive to compare and/or pool the results of future studies in a meta-analysis.
Affiliation(s)
- B Olisemeke
- Radiology Department, Heart of England NHS Foundation Trust, Birmingham, UK
14. Chan P, Thyparampil PJ, Chiang MF. Accuracy and speed of electronic health record versus paper-based ophthalmic documentation strategies. Am J Ophthalmol 2013; 156:165-172.e2. [PMID: 23664152] [DOI: 10.1016/j.ajo.2013.02.010]
Abstract
PURPOSE To compare accuracy and speed of keyboard and mouse electronic health record (EHR) documentation strategies with those of a paper documentation strategy. DESIGN Prospective cohort study. METHODS Three documentation strategies were developed: (1) keyboard EHR, (2) mouse EHR, and (3) paper. Ophthalmology trainees recruited for the study were presented with 5 clinical cases and documented findings using each strategy. For each case-strategy pair, findings and documentation time were recorded. Accuracy of each strategy was calculated based on sensitivity (fraction of findings in actual case that were documented by subject) and positive ratio (fraction of findings identified by subject that were present in the actual case). RESULTS Twenty subjects were enrolled. A total of 258 findings were identified in the 5 cases, resulting in 300 case-strategy pairs and 77,400 possible total findings documented. Sensitivity was 89.1% for the keyboard EHR, 87.2% for mouse EHR, and 88.6% for the paper strategy (no statistically significant differences). The positive ratio was 99.4% for the keyboard EHR, 98.9% for mouse EHR, and 99.9% for the paper strategy (P < .001 for mouse EHR vs paper; no significant differences between other pairs). Mean ± standard deviation documentation speed was significantly slower for the keyboard (2.4 ± 1.1 seconds/finding) and mouse (2.2 ± 0.7 seconds/finding) EHR compared with the paper strategy (2.0 ± 0.8 seconds/finding). Documentation speed of the mouse EHR strategy worsened with repetition. CONCLUSIONS No documentation strategy was perfectly accurate in this study. Documentation speed for both EHR strategies was slower than with paper. Further studies involving total physician time requirements for ophthalmic EHRs are required.
Affiliation(s)
- Patrick Chan
- Department of Ophthalmology, Harkness Eye Institute, Columbia University College of Physicians and Surgeons, New York, NY, USA
15. Patel K, Harbord M. Digital dictation and voice transcription software enhances outpatient clinic letter production: a crossover study. Frontline Gastroenterol 2012; 3:162-165. [PMID: 28839659] [PMCID: PMC5517278] [DOI: 10.1136/flgastro-2011-100100]
Abstract
BACKGROUND Digital voice transcription has been introduced widely in the National Health Service (NHS), though primarily in radiology departments. There has been a long-standing problem with recruitment of medical secretaries within the NHS, leading to long delays in the production of correspondence from outpatient clinics. OBJECTIVE To determine whether use of widely available digital transcription software improves efficiency and the time taken to produce correspondence. METHODS The project used a prospective, crossover trial design in a 'real-world' environment. Correspondence from clinics was transcribed after dictation by a secretary using conventional analogue audio tape or the dictation software. After a 2-week washout period the same clinics' dictations were transcribed using the other method to produce identical correspondence. The two sets of letters were compared. RESULTS The mean time for the secretary to produce letters for a complete clinic using digital dictation was 66 min whereas analogue dictation took 121 min (p<0.00002). There was no difference in the number of mistakes per letter (p>0.05). CONCLUSION Voice transcription software significantly decreased the time taken to transcribe outpatient clinic letters with minimal training of secretarial staff, resulting in improved efficiency.
Affiliation(s)
- Kinesh Patel
- Department of Gastroenterology, Chelsea and Westminster NHS Foundation Trust, London, UK
| | - Marcus Harbord
- Department of Gastroenterology, Chelsea and Westminster NHS Foundation Trust, London, UK