1. Clinical Decision Support Principles for Quality Improvement and Research. Hosp Pediatr 2024; 14:e219-e224. [PMID: 38545665; PMCID: PMC10965756; DOI: 10.1542/hpeds.2023-007540]
Abstract
Pediatric hospitalists frequently interact with clinical decision support (CDS) tools in patient care and use these tools for quality improvement or research. In this method/ology paper, we provide an introduction and practical approach to developing and evaluating CDS tools within the electronic health record. First, we define CDS and describe the types of CDS interventions that exist. We then outline a stepwise approach to CDS development, which begins with defining the problem and understanding the system. We present a framework for metric development and then describe tools that can be used for CDS design (eg, 5 Rights of CDS, "10 commandments," usability heuristics, human-centered design) and testing (eg, validation, simulation, usability testing). We review approaches to evaluating CDS tools, which range from randomized studies to traditional quality improvement methods. Lastly, we discuss practical considerations for implementing CDS, including the assessment of a project team's skills and an organization's information technology resources.
2. Development and evaluation of trigger tools to identify pediatric blood management errors. Blood Transfus 2024. [PMID: 38557324; DOI: 10.2450/bloodtransfus.606]
Abstract
BACKGROUND: Pediatric patient blood management (PBM) programs require continuous surveillance of errors and near misses. However, most PBM programs rely on passive surveillance methods. Our objective was to develop and evaluate a set of automated trigger tools for active surveillance of pediatric PBM errors.
MATERIALS AND METHODS: We used the RAND-UCLA method with an expert panel of pediatric transfusion medicine specialists to identify and prioritize candidate trigger tools for all transfused blood products. We then iteratively developed automated queries of electronic health record (EHR) data for the highest-priority triggers. Two physicians manually reviewed a subset of cases meeting trigger tool criteria and estimated each trigger tool's positive predictive value (PPV). We then estimated the rate of PBM errors, whether they reached the patient, and adverse events for each trigger tool across four years in a single pediatric health system.
RESULTS: We identified 28 potential triggers for pediatric PBM errors and developed 5 automated trigger tools (positive patient identification, missing irradiation, unwashed products despite prior anaphylaxis, transfusion lasting >4 hours, over-transfusion by volume). The PPV for ordering errors ranged from 38% to 100%. The most frequently detected near-miss event reaching patients was first transfusions without positive patient identification (estimate 303 per year, 95% CI: 288-318). The only adverse events detected were from over-transfusions by volume, including 4 adverse events detected on manual review that had not been reported in passive surveillance systems.
DISCUSSION: It is feasible to automatically detect pediatric PBM errors using existing data captured in the EHR, enabling active surveillance systems. Over-transfusions may be one of the most frequent causes of harm in the pediatric environment.
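The duration-based trigger described above amounts to a simple query over transfusion start and stop times. A minimal sketch in Python, using hypothetical record fields (the paper's actual EHR queries are not published in the abstract):

```python
from datetime import datetime, timedelta

# Hypothetical transfusion records; a real trigger tool would query EHR tables.
transfusions = [
    {"id": 1, "start": datetime(2024, 1, 1, 8, 0), "end": datetime(2024, 1, 1, 11, 30)},
    {"id": 2, "start": datetime(2024, 1, 2, 9, 0), "end": datetime(2024, 1, 2, 14, 15)},
]

def prolonged_transfusions(records, max_hours=4):
    """Flag transfusions lasting longer than max_hours (one of the 5 trigger tools)."""
    limit = timedelta(hours=max_hours)
    return [r["id"] for r in records if r["end"] - r["start"] > limit]

print(prolonged_transfusions(transfusions))  # [2]: a 5 h 15 min transfusion
```

Flagged cases would then go to manual chart review, as in the study, to estimate the trigger's PPV.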
3. Reducing Therapeutic Duplication in Inpatient Medication Orders. Appl Clin Inform 2023; 14:538-543. [PMID: 37105228; PMCID: PMC10356184; DOI: 10.1055/a-2082-4631]
Abstract
BACKGROUND: Therapeutic duplication, the presence of multiple agents prescribed for the same indication without clarification of when each should be used, can contribute to serious medical errors. Joint Commission standards require that orders contain clarifying information about when each order should be given. In our system, as-needed (PRN) acetaminophen and ibuprofen orders are major contributors to therapeutic duplication.
OBJECTIVE: To design and evaluate the effectiveness of clinical decision support (CDS) in reducing therapeutic duplication with acetaminophen and ibuprofen orders.
METHODS: This study was done in a pediatric health system with three freestanding hospitals. We iteratively designed and implemented two CDS strategies aimed at reducing therapeutic duplication with these agents: (1) an interruptive alert prompting clinicians for clarifying PRN comments at order entry and (2) addition of discrete "first-line" and "second-line" PRN reasons to orders. Therapeutic duplications were measured by manual review of orders for 30-day periods before and after each intervention and 6 months later.
RESULTS: Therapeutic duplications decreased from 1,485 in the 30 days before the first alert implementation to 818 in the 30 days after, but rose back to 1,208 in the 30 days before the second intervention. After discrete reasons were added to the order, therapeutic duplications decreased to 336 in the immediate 30 days and remained at 277 six months later. Alert firing rates decreased from 76.0 per 1,000 PRN acetaminophen or ibuprofen orders to 42.9 after the second intervention.
CONCLUSION: Interruptive alerts may reduce therapeutic duplication but are associated with high rates of user frustration and alert fatigue. Leveraging discrete "first-line" and "second-line" PRN reasons produced a greater reduction in therapeutic duplication, with fewer interruptive alerts and less manual entry for providers.
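The duplication targeted here, multiple active PRN orders for the same indication with no distinguishing reason, can be detected with a simple grouping pass. A hedged sketch with hypothetical order fields:

```python
from collections import defaultdict

def find_therapeutic_duplications(orders):
    """Flag (patient, indication) pairs with more than one active PRN order
    that are not distinguished by discrete PRN reasons."""
    by_key = defaultdict(list)
    for o in orders:
        by_key[(o["patient"], o["indication"])].append(o)
    flagged = []
    for key, group in by_key.items():
        reasons = {o.get("prn_reason") for o in group}
        # Distinct discrete reasons ("first line" vs "second line") clarify sequencing.
        if len(group) > 1 and len(reasons) < len(group):
            flagged.append(key)
    return flagged

orders = [
    {"patient": "A", "drug": "acetaminophen", "indication": "pain", "prn_reason": None},
    {"patient": "A", "drug": "ibuprofen", "indication": "pain", "prn_reason": None},
    {"patient": "B", "drug": "acetaminophen", "indication": "pain", "prn_reason": "first line"},
    {"patient": "B", "drug": "ibuprofen", "indication": "pain", "prn_reason": "second line"},
]
print(find_therapeutic_duplications(orders))  # [('A', 'pain')]
```

Patient B's orders carry distinct discrete reasons and therefore do not count as a duplication, mirroring the study's second intervention.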
4. Integrating structured and unstructured data for timely prediction of bloodstream infection among children. Pediatr Res 2023; 93:969-975. [PMID: 35854085; DOI: 10.1038/s41390-022-02116-6]
Abstract
BACKGROUND: Hospitalized children with central venous lines (CVLs) are at higher risk of hospital-acquired infections. Information in electronic health records (EHRs) can be employed to train deep learning models to predict the onset of these infections. We incorporated clinical notes in addition to structured EHR data to predict serious bloodstream infections, defined as a positive blood culture followed by at least 4 days of new antimicrobial agent administration, among hospitalized children with CVLs.
METHODS: Structured EHR information and clinical notes were extracted for a retrospective cohort including all hospitalized patients with CVLs at a single tertiary care pediatric health system from 2013 to 2018. Deep learning models were trained to determine the added benefit of incorporating the information embedded in clinical notes in predicting serious bloodstream infection.
RESULTS: A total of 24,351 patient encounters met inclusion criteria. The best-performing model restricted to structured EHR data had a specificity of 0.951 and positive predictive value (PPV) of 0.056 when the sensitivity was set to 0.85. The addition of contextualized word embeddings improved the specificity to 0.981 and the PPV to 0.113.
CONCLUSIONS: Integrating clinical notes with structured EHR data improved the prediction of serious bloodstream infections among pediatric patients with CVLs.
IMPACT: We developed an infection prediction model in pediatrics that integrates structured and unstructured EHR data, extracts information from clinical notes to enable timely prediction in a clinical setting, and provides a deep learning framework that can be employed to predict rare events in a complex and dynamic environment.
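The reported operating point (specificity and PPV at a fixed sensitivity of 0.85) can be reproduced from any model's scores. A minimal, library-free sketch of that calculation, on toy data rather than the study's:

```python
def metrics_at_sensitivity(y_true, scores, target_sens=0.85):
    """Pick the highest score threshold achieving the target sensitivity,
    then report specificity and PPV at that operating point."""
    pos = sorted((s for s, y in zip(scores, y_true) if y == 1), reverse=True)
    # Threshold that captures the required fraction of positives.
    k = max(1, int(round(target_sens * len(pos))))
    thresh = pos[k - 1]
    tp = sum(1 for s, y in zip(scores, y_true) if y == 1 and s >= thresh)
    fp = sum(1 for s, y in zip(scores, y_true) if y == 0 and s >= thresh)
    tn = sum(1 for s, y in zip(scores, y_true) if y == 0 and s < thresh)
    return thresh, tn / (tn + fp), tp / (tp + fp)  # threshold, specificity, PPV

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.2, 0.85, 0.1, 0.1, 0.1, 0.1, 0.1]
thresh, spec, ppv = metrics_at_sensitivity(y_true, scores, target_sens=0.75)
print(thresh, round(spec, 3), round(ppv, 3))  # 0.7 0.833 0.75
```

With rare outcomes, even a high specificity leaves the PPV low, which is why adding note embeddings (PPV 0.056 to 0.113) matters clinically.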
5. Clinical Decision Support Stewardship: Best Practices and Techniques to Monitor and Improve Interruptive Alerts. Appl Clin Inform 2022; 13:560-568. [PMID: 35613913; PMCID: PMC9132737; DOI: 10.1055/s-0042-1748856]
Abstract
Interruptive clinical decision support systems, both within and outside of electronic health records, are a resource that should be used sparingly and monitored closely. Excessive use of interruptive alerting can quickly lead to alert fatigue, decreased effectiveness, and ignored alerts. In this review, we discuss the evidence for effective alert stewardship, as well as practices and methods we have found useful to assess interruptive alert burden, reduce excessive firings, optimize alert effectiveness, and establish quality governance at our institutions. We also discuss the importance of a holistic view of the alerting ecosystem beyond the electronic health record.
6. Lessons Learned from OpenNotes Learning Mode and Subsequent Implementation across a Pediatric Health System. Appl Clin Inform 2022; 13:113-122. [PMID: 35081655; PMCID: PMC8791761; DOI: 10.1055/s-0041-1741483]
Abstract
BACKGROUND: The 21st Century Cures Act has accelerated adoption of OpenNotes, providing new opportunities for patient and family engagement in their care. However, these regulations present new challenges, particularly for pediatric health systems aiming to improve information sharing while minimizing risks associated with adolescent confidentiality and safety.
OBJECTIVE: To describe lessons learned preparing for OpenNotes across a pediatric health system during a 4-month trial period (referred to as "Learning Mode") in which clinical notes were not shared by default, but decision support described the upcoming change and physicians could request feedback on complex cases from a multidisciplinary team.
METHODS: During Learning Mode (December 3, 2020-March 9, 2021), implementation included (1) educational text at the top of commonly used note types indicating that notes would soon be shared and providing guidance, (2) a new confidential note type, and (3) a mechanism for physicians to elicit feedback from a multidisciplinary OpenNotes working group for complex cases with questions related to OpenNotes. The working group reviewed lessons learned from this period, as well as the implementation of OpenNotes from March 10, 2021 to June 30, 2021.
RESULTS: During Learning Mode, 779 confidential notes were written across the system. The working group provided feedback on 14 complex cases and also reviewed 7 randomly selected confidential notes. The proportion of physician notes shared with patients increased from 1.3% to 88.4% after default sharing of notes to the patient portal. Key lessons learned included that (1) sensitive information was often present in autopopulated elements, differential diagnoses, and supervising physician note attestations; and (2) clinicians often selected incorrect reasons for withholding notes, though accuracy improved with new designs.
CONCLUSION: While OpenNotes provides an unprecedented opportunity to engage pediatric patients and their families, targeted education and electronic health record designs are needed to mitigate the potential harms of inappropriate disclosures.
7. Alert burden in pediatric hospitals: a cross-sectional analysis of six academic pediatric health systems using novel metrics. J Am Med Inform Assoc 2021; 28:2654-2660. [PMID: 34664664; DOI: 10.1093/jamia/ocab179]
Abstract
BACKGROUND: Excessive electronic health record (EHR) alerts reduce the salience of actionable alerts. Little is known about the frequency of interruptive alerts across health systems and how the choice of metric affects which users appear to have the highest alert burden.
OBJECTIVE: (1) To analyze alert burden by alert type, care setting, provider type, and individual provider across 6 pediatric health systems. (2) To compare alert burden using different metrics.
MATERIALS AND METHODS: We analyzed interruptive alert firings logged in EHR databases at 6 pediatric health systems from 2016 to 2019 using 4 metrics: (1) alerts per patient encounter, (2) alerts per inpatient-day, (3) alerts per 100 orders, and (4) alerts per unique clinician-day (calendar days with at least 1 EHR log in the system). We assessed intra- and interinstitutional variation and how alert burden rankings differed based on the chosen metric.
RESULTS: Alert burden varied widely across institutions, ranging from 0.06 to 0.76 firings per encounter, 0.22 to 1.06 firings per inpatient-day, 0.98 to 17.42 firings per 100 orders, and 0.08 to 3.34 firings per clinician-day logged in the EHR. Custom alerts accounted for the greatest burden at all 6 sites. The rank order of institutions by alert burden was similar regardless of which alert burden metric was chosen. Within institutions, however, the choice of metric substantially affected which provider types and care settings appeared to experience the highest alert burden.
CONCLUSION: Estimates of the clinical areas with the highest alert burden varied substantially by institution and by the metric used.
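The four burden metrics are simple rates over different denominators, which is exactly why they can rank the same users differently. A sketch with illustrative volumes, not the paper's counts:

```python
def alert_burden_metrics(n_alerts, n_encounters, inpatient_days, n_orders, clinician_days):
    """The study's four interruptive-alert burden metrics, as rates."""
    return {
        "alerts_per_encounter": n_alerts / n_encounters,
        "alerts_per_inpatient_day": n_alerts / inpatient_days,
        "alerts_per_100_orders": 100 * n_alerts / n_orders,
        "alerts_per_clinician_day": n_alerts / clinician_days,
    }

# Illustrative annual volumes for one hypothetical institution.
m = alert_burden_metrics(n_alerts=50_000, n_encounters=200_000,
                         inpatient_days=90_000, n_orders=1_200_000,
                         clinician_days=60_000)
print(m["alerts_per_encounter"])  # 0.25
```

Because each denominator counts a different kind of exposure, a clinic with few orders but many log-in days can look high-burden on one metric and low-burden on another.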
8. Deep Learning Model to Predict Serious Infection Among Children With Central Venous Lines. Front Pediatr 2021; 9:726870. [PMID: 34604142; PMCID: PMC8480258; DOI: 10.3389/fped.2021.726870]
Abstract
Objective: To predict the onset of presumed serious infection, defined as a positive blood culture drawn and a new antibiotic course of at least 4 days (PSI*), among pediatric patients with central venous lines (CVLs).
Design: Retrospective cohort study.
Setting: Single academic children's hospital.
Patients: All hospital encounters from January 2013 to December 2018, excluding those without a CVL or with a length of stay shorter than 24 h.
Measurements and Main Results: Clinical features, including demographics, laboratory results, vital signs, characteristics of the CVLs, and medications used, were extracted retrospectively from electronic medical records. Data were aggregated across all hospitals within a single pediatric health system and used to train a deep learning model to predict the occurrence of PSI* during the next 48 h of hospitalization. The model's predictions were compared with prediction of PSI* by a marker of illness severity (PELOD-2). The baseline prevalence of line infections was 0.34% over all segmented 48-h time windows. Events were identified among cases using onset time; all data from admission until onset were used for cases, and all data from admission until discharge were used for controls. Benchmarks were aggregated over all 48-h time windows (N = 748,380, associated with 27,137 patient encounters). The model achieved an area under the receiver operating characteristic curve of 0.993 (95% CI: 0.990-0.996), and its enriched positive predictive value (PPV) was 23 times greater than the base prevalence. By comparison, prediction by PELOD-2 achieved a lower PPV of 1.5% (95% CI: 0.9-2.1%), 5 times the baseline prevalence.
Conclusion: A deep learning model that employs common clinical features in the electronic health record can help predict the onset of central line-associated bloodstream infection (CLABSI) in hospitalized children with central venous lines 48 h before specimen collection.
9. Cost-effectiveness of infant respiratory syncytial virus preventive interventions in Mali: A modeling study to inform policy and investment decisions. Vaccine 2021; 39:5037-5045. [PMID: 34325934; PMCID: PMC8377743; DOI: 10.1016/j.vaccine.2021.06.086]
Abstract
HIGHLIGHTS: New RSV prevention products can substantially reduce disease burden. Longer-acting monoclonal antibodies, priced affordably, are likely cost-effective. Maternal vaccines meeting preferred product characteristics would be cost-effective. RSV prevention products can provide good value in low-income countries.
Importance: Low- and middle-income countries have a high burden of respiratory syncytial virus lower respiratory tract infections. A monoclonal antibody administered monthly is licensed to prevent these infections, but it is cost-prohibitive for most low- and middle-income countries. Long-acting monoclonal antibodies and maternal vaccines against respiratory syncytial virus are under development.
Objective: We estimated the likelihood of respiratory syncytial virus preventive interventions (current monoclonal antibody, long-acting monoclonal antibody, and maternal vaccine) being cost-effective in Mali.
Design: We modeled age-specific and season-specific risks of respiratory syncytial virus lower respiratory tract infections within monthly cohorts of infants from birth to six months. We parameterized the model with respiratory syncytial virus data from Malian cohort studies, as well as product efficacy from clinical trials. Integrating parameter uncertainty, we simulated health and economic outcomes for the status quo without prevention, intra-seasonal monthly administration of the licensed monoclonal antibody, pre-seasonal birth-dose administration of a long-acting monoclonal antibody, and maternal vaccination. We then calculated the incremental cost-effectiveness ratio of each intervention compared with the status quo from the perspectives of the government, donor, and society.
Results: At a price of $3 per dose and from the societal perspective, the current monoclonal antibody, long-acting monoclonal antibody, and maternal vaccine would have incremental cost-effectiveness ratios of $4,280 (95% CI: $1,892 to $122,434), $1,656 (95% CI: $734 to $9,091), and $8,020 (95% CI: $3,501 to $47,047) per disability-adjusted life-year averted, respectively.
Conclusions and Relevance: In Mali, the long-acting monoclonal antibody is likely to be cost-effective from both the government and donor perspectives at $3 per dose. The maternal vaccine would need higher efficacy than that measured in a recent trial to be considered cost-effective.
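The incremental cost-effectiveness ratios above follow the standard definition: incremental cost divided by DALYs averted relative to the status quo. A sketch with illustrative inputs, not the model's outputs:

```python
def icer(cost_intervention, cost_status_quo, dalys_intervention, dalys_status_quo):
    """Incremental cost-effectiveness ratio: extra cost per DALY averted
    relative to the status quo (here, no RSV prevention)."""
    delta_cost = cost_intervention - cost_status_quo
    dalys_averted = dalys_status_quo - dalys_intervention
    return delta_cost / dalys_averted

# Illustrative numbers only: the intervention costs $200,000 more
# and averts 120 DALYs relative to no prevention.
value = icer(cost_intervention=1_200_000, cost_status_quo=1_000_000,
             dalys_intervention=880, dalys_status_quo=1_000)
print(round(value, 2))  # 1666.67 dollars per DALY averted
```

An intervention is then judged cost-effective by comparing this ratio against a country-specific willingness-to-pay threshold.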
10. Evaluation of a Clinical Decision Support Strategy to Increase Seasonal Influenza Vaccination Among Hospitalized Children Before Inpatient Discharge. JAMA Netw Open 2021; 4:e2117809. [PMID: 34292335; PMCID: PMC8299313; DOI: 10.1001/jamanetworkopen.2021.17809]
Abstract
IMPORTANCE: Hospitalized children are at increased risk of influenza-related complications, yet influenza vaccine coverage remains low in this group. Evidence-based strategies for vaccinating vulnerable children during all health care visits are especially important during the COVID-19 pandemic.
OBJECTIVE: To design and evaluate a clinical decision support (CDS) strategy to increase the proportion of eligible hospitalized children who receive a seasonal influenza vaccine prior to inpatient discharge.
DESIGN, SETTING, AND PARTICIPANTS: This quality improvement study was conducted among children eligible for the seasonal influenza vaccine who were hospitalized in a tertiary pediatric health system providing care to more than half a million patients annually in 3 hospitals. The study used a sequential crossover design from control to intervention and compared hospitalizations in the intervention group (2019-2020 season with use of an intervention order set) with concurrent controls (2019-2020 season without use of an intervention order set) and historical controls (2018-2019 season with use of an order set that underwent intervention during the 2019-2020 season).
INTERVENTIONS: A CDS intervention was developed through a user-centered design process, including (1) a default influenza vaccine order placed into admission order sets for eligible patients, (2) a script for offering the vaccine using a presumptive strategy, and (3) just-in-time education for clinicians addressing vaccine eligibility in the influenza order group, with links to further reference material. The intervention was rolled out in a stepwise fashion during the 2019-2020 influenza season.
MAIN OUTCOMES AND MEASURES: Proportion of eligible hospitalizations in which 1 or more influenza vaccines were administered prior to discharge.
RESULTS: Among 17,740 hospitalizations (9,295 boys [52%]), the mean (SD) age was 8.0 (6.0) years; patients were predominantly Black (n = 8,943 [50%]) or White (n = 7,559 [43%]) and mostly had public insurance (n = 11,274 [64%]). There were 10,997 hospitalizations eligible for the influenza vaccine in the 2019-2020 season; of these, 5,449 (50%) were in the intervention group and 5,548 (50%) were concurrent controls. There were 6,743 eligible hospitalizations in 2018-2019 that served as historical controls. Vaccine administration rates were 31% (n = 1,676) in the intervention group, 19% (n = 1,051) in concurrent controls, and 14% (n = 912) in historical controls (P < .001). In adjusted analyses, the odds of receiving the influenza vaccine were 3.25 (95% CI: 2.94-3.59) times higher in the intervention group and 1.28 (95% CI: 1.15-1.42) times higher in concurrent controls than in historical controls.
CONCLUSIONS AND RELEVANCE: This quality improvement study suggests that user-centered CDS may be associated with significantly improved influenza vaccination rates among hospitalized children. Stepwise implementation of CDS interventions was a practical method used to increase quality improvement rigor through comparison with historical and concurrent controls.
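As a sanity check on effect sizes like these, a crude odds ratio can be computed directly from the reported counts. Because it is unadjusted, it differs from the paper's adjusted estimate of 3.25:

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Crude (unadjusted) odds ratio of group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# Reported counts: intervention 1,676/5,449 vaccinated;
# historical controls 912/6,743 vaccinated.
crude_or = odds_ratio(1676, 5449, 912, 6743)
print(round(crude_or, 2))  # 2.84, below the adjusted OR of 3.25
```

The gap between the crude and adjusted estimates reflects covariate adjustment in the paper's regression model, not an arithmetic discrepancy.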
11. Quality Initiative to Reduce High-Flow Nasal Cannula Duration and Length of Stay in Bronchiolitis. Hosp Pediatr 2021; 11:309-318. [PMID: 33753362; DOI: 10.1542/hpeds.2020-005306]
Abstract
OBJECTIVES: High-flow nasal cannula (HFNC) use in bronchiolitis may prolong length of stay (LOS) if weaned more slowly than medically indicated. We aimed to reduce HFNC length of treatment (LOT) and inpatient LOS by 12 hours in 0- to 18-month-old patients with bronchiolitis on the pediatric hospital medicine service.
METHODS: After identifying key drivers of slow weaning, we recruited a multidisciplinary "Wean Team" to provide education and influence provider weaning practices. We then implemented a respiratory therapist-driven weaning protocol with supportive sociotechnical interventions (huddles, standardized orders, simplification of the protocol) to reduce LOT and LOS and promote sustainability.
RESULTS: In total, 283 patients were included: 105 during the baseline period and 178 during the intervention period. LOT and LOS control charts revealed special cause variation at the start of the intervention period; mean LOT decreased from 48.2 to 31.2 hours, and mean LOS decreased from 84.3 to 60.9 hours. LOT and LOS were less variable in the intervention period than in the baseline period. There was no increase in PICU transfers or in 72-hour return or readmission rates.
CONCLUSIONS: We reduced HFNC LOT by 17 hours and LOS by 23 hours for patients with bronchiolitis via multidisciplinary collaboration, education, and a respiratory therapist-driven weaning protocol with supportive interventions. Future steps will focus on more judicious application of HFNC in bronchiolitis.
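Special cause variation on control charts like these is typically flagged with Shewhart rules, for example a point beyond the 3-sigma limits or a run of 8 consecutive points on one side of the centerline. A minimal sketch of those two rules, on illustrative data rather than the study's:

```python
def special_cause_signals(values, mean, sigma):
    """Flag indices violating two common Shewhart rules:
    (1) a point beyond mean +/- 3*sigma,
    (2) the completion of a run of 8 consecutive points on one side of the centerline."""
    signals = []
    run_side, run_len = 0, 0
    for i, v in enumerate(values):
        if abs(v - mean) > 3 * sigma:
            signals.append((i, "beyond 3-sigma"))
        side = 1 if v > mean else -1 if v < mean else 0
        run_len = run_len + 1 if side == run_side and side != 0 else (1 if side != 0 else 0)
        run_side = side
        if run_len == 8:
            signals.append((i, "run of 8"))
    return signals

# Illustrative weekly mean LOT values against a baseline centerline of 48 h.
sigs = special_cause_signals([60, 47, 47, 47, 47, 47, 47, 47, 47], mean=48, sigma=2)
print(sigs)  # [(0, 'beyond 3-sigma'), (8, 'run of 8')]
```

A sustained run below the centerline, like the post-intervention LOT shift reported above, is what justifies recalculating the centerline for the new process.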
12. Predicting presumed serious infection among hospitalized children on central venous lines with machine learning. Comput Biol Med 2021; 132:104289. [PMID: 33667812; DOI: 10.1016/j.compbiomed.2021.104289]
Abstract
BACKGROUND: Presumed serious infection (PSI) is defined as a blood culture drawn and a new antibiotic course of at least 4 days among pediatric patients with central venous lines (CVLs). Early PSI prediction and use of medical interventions can prevent adverse outcomes and improve the quality of care.
METHODS: Clinical features, including demographics, laboratory results, vital signs, characteristics of the CVLs, and medications used, were extracted retrospectively from electronic medical records. Data were aggregated across all hospitals within a single pediatric health system and used to train machine learning models (XGBoost and ElasticNet) to predict the occurrence of PSI 8 h prior to clinical suspicion. Prediction of PSI was benchmarked against PRISM-III.
RESULTS: Our model achieved an area under the receiver operating characteristic curve of 0.84 (95% CI: 0.82-0.85), sensitivity of 0.73 (0.69-0.74), and positive predictive value (PPV) of 0.36 (0.34-0.36). PRISM-III, by contrast, achieved a lower sensitivity of 0.19 (0.16-0.22) and PPV of 0.30 (0.26-0.34) at a cutoff of ≥10. The features with the most impact on PSI prediction were maximum diastolic blood pressure prior to prediction (mean SHAP = 3.4), height (mean SHAP = 3.2), and maximum temperature prior to prediction (mean SHAP = 2.6).
CONCLUSION: A machine learning model using common features in the electronic medical record can predict the onset of serious infection in children with central venous lines at least 8 h before a clinical team draws a blood culture.
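The highest-impact features here are window aggregates over vitals (maximum diastolic blood pressure and maximum temperature prior to the prediction time). A sketch of that feature construction with hypothetical record fields; the XGBoost/ElasticNet training itself is not reproduced:

```python
from datetime import datetime, timedelta

def window_features(vitals, prediction_time, lookback_hours=48):
    """Aggregate vitals over a lookback window ending at the prediction time
    (itself set 8 h before clinical suspicion in the study design)."""
    start = prediction_time - timedelta(hours=lookback_hours)
    in_window = [v for v in vitals if start <= v["time"] <= prediction_time]
    return {
        "max_dbp": max((v["dbp"] for v in in_window), default=None),
        "max_temp": max((v["temp"] for v in in_window), default=None),
    }

vitals = [
    {"time": datetime(2021, 1, 1, 6), "dbp": 55, "temp": 37.1},
    {"time": datetime(2021, 1, 1, 18), "dbp": 62, "temp": 38.4},
    {"time": datetime(2021, 1, 3, 12), "dbp": 70, "temp": 36.9},  # after prediction time
]
feats = window_features(vitals, prediction_time=datetime(2021, 1, 2, 6))
print(feats)  # {'max_dbp': 62, 'max_temp': 38.4}
```

Restricting the window to data available before the prediction time is what keeps such features free of label leakage.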
13. Development and dissemination of clinical decision support across institutions: standardization and sharing of refugee health screening modules. J Am Med Inform Assoc 2021; 26:1515-1524. [PMID: 31373356; DOI: 10.1093/jamia/ocz124]
Abstract
OBJECTIVES: We developed and piloted a process for sharing guideline-based clinical decision support (CDS) across institutions, using health screening of newly arrived refugees as a case example.
MATERIALS AND METHODS: We developed CDS to support the care of newly arrived refugees through a systematic process that included a needs assessment, a 2-phase cognitive task analysis, structured preimplementation testing, local implementation, and staged dissemination. We sought consensus from prospective users on CDS scope, applicable content, basic supported workflows, and final structure. We documented processes and developed sharable artifacts from each phase of development, publicly shared the CDS artifacts through online dissemination platforms, and collected feedback and implementation data from implementation sites.
RESULTS: Responses from 19 organizations demonstrated a need for improved CDS for newly arrived refugee patients. A guided multicenter workflow analysis identified 2 main workflows, used by the organizations, that shared CDS would need to support. We developed CDS through an iterative design process and successfully disseminated it to other sites using online dissemination repositories. Implementation sites reported a small-to-modest analyst time commitment and a good match between the CDS and their workflows.
CONCLUSION: Sharing CDS requires overcoming technical and workflow barriers. We used a guided multicenter workflow analysis and online dissemination repositories to create flexible CDS that has been adapted at 3 sites. Organizations looking to develop sharable CDS should consider evaluating the workflows of multiple institutions and collecting feedback on scope, design, and content in order to create a more generalizable product.
14. Reducing Prescribing Errors in Hospitalized Children on the Ketogenic Diet. Pediatr Neurol 2021; 115:42-47. [PMID: 33333459; DOI: 10.1016/j.pediatrneurol.2020.11.009]
Abstract
BACKGROUND: Children on the ketogenic diet must limit carbohydrate intake to maintain ketosis and reduce seizure burden. Patients on the ketogenic diet are vulnerable to harm in the hospital setting, where carbohydrate-containing medications are commonly prescribed. We developed clinical decision support to reduce inappropriate prescribing of carbohydrate-containing medications in hospitalized children on the ketogenic diet.
METHODS: A clinical decision support alert was developed through formative and summative usability testing. The alert warned prescribers when they entered an order for a carbohydrate-containing medication for a patient on the ketogenic diet. The alert was implemented using a quasi-experimental design with sequential crossover from control to intervention at two tertiary care pediatric hospitals within a single health system. The primary outcome was carbohydrate-containing medication orders per patient-day.
RESULTS: During the study period, there were 280 ketogenic diet patient admissions totaling 1,219 patient-days. The carbohydrate-containing medication order rate declined from 0.69 to 0.35 orders per patient-day (absolute rate reduction 0.34, 95% confidence interval 0.25-0.43), corresponding to 256 inappropriate orders prevented. The alert fired 398 times and was accepted (i.e., the order was removed) 227 times, for an overall acceptance rate of 57%.
CONCLUSIONS: Implementation of a clinical decision support alert at order entry resulted in a sustained reduction in carbohydrate-containing medication orders for hospitalized patients on the ketogenic diet without an increase in alert burden. Clinical decision support developed with user-centered design principles can improve patient safety for children on the ketogenic diet by influencing prescriber behavior.
15. Attributing Patients to Pediatric Residents Using Electronic Health Record Features Augmented with Audit Logs. Appl Clin Inform 2020; 11:442-451. [PMID: 32583389; DOI: 10.1055/s-0040-1713133]
Abstract
OBJECTIVE Patient attribution, or the process of attributing patient-level metrics to specific providers, attempts to capture real-life provider-patient interactions (PPI). Attribution holds wide-ranging importance, particularly for outcomes in graduate medical education, but remains a challenge. We developed and validated an algorithm using EHR data to identify pediatric resident PPIs (rPPIs). METHODS We prospectively surveyed residents in three care settings to collect self-reported rPPIs. Participants were surveyed at the end of primary care clinic, emergency department (ED), and inpatient shifts, shown a patient census list, asked to mark the patients with whom they interacted, and encouraged to provide a short rationale behind the marked interaction. We extracted routine EHR data elements, including audit logs, note contribution, order placement, care team assignment, and chart closure, and applied a logistic regression classifier to the data to predict rPPIs in each care setting. We also performed a comment analysis of the resident-reported rationales in the inpatient care setting to explore perceived patient interactions in a complicated workflow. RESULTS We surveyed 81 residents over 111 shifts and identified 579 patient interactions. Among EHR-extracted data, time-in-chart was the best predictor in all three care settings (primary care clinic: odds ratio [OR] = 19.36, 95% confidence interval [CI]: 4.19-278.56; ED: OR = 19.06, 95% CI: 9.53-41.65; inpatient: OR = 2.95, 95% CI: 2.23-3.97). Primary care clinic and ED-specific models had c-statistic values > 0.98, while the inpatient-specific model had greater variability (c-statistic = 0.89). Of 366 inpatient rPPIs, residents provided rationales for 90.1%, which were focused on direct involvement in a patient's admission or transfer, or care as the front-line ordering clinician (55.6%). CONCLUSION Classification models based on routinely collected EHR data predict resident-defined rPPIs across care settings.
While specific to pediatric residents in this study, the approach may be generalizable to other provider populations and scenarios in which accurate patient attribution is desirable.
Collapse
|
16
|
Influence of simulation on electronic health record use patterns among pediatric residents. J Am Med Inform Assoc 2019; 25:1501-1506. [PMID: 30137348 DOI: 10.1093/jamia/ocy105] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2018] [Accepted: 07/13/2018] [Indexed: 11/12/2022] Open
Abstract
Objective Electronic health record (EHR) simulation with realistic test patients has improved recognition of safety concerns in test environments. We assessed whether simulation affects EHR use patterns in real clinical settings. Materials and Methods We created a 1-hour educational intervention of a simulated admission for pediatric interns. Data visualization and information retrieval tools were introduced to facilitate recognition of the patient's clinical status. Using EHR audit logs, we assessed the frequency with which these tools were accessed by residents prior to simulation exposure (intervention group, pre-simulation), after simulation exposure (intervention group, post-simulation), and among residents who never participated in simulation (control group). Results From July 2015 to February 2017, 57 pediatric residents participated in a simulation and 82 did not. Residents were more likely to use the data visualization tool after simulation (73% in post-simulation weeks vs 47% of combined pre-simulation and control weeks, P < .0001) as well as the information retrieval tool (85% vs 36%, P < .0001). After adjusting for residents' experiences measured in previously completed inpatient weeks of service, simulation remained a significant predictor of using the data visualization (OR 2.8, CI: 2.1-3.9) and information retrieval tools (OR 3.0, CI: 2.0-4.5). Tool use did not decrease in interrupted time-series analysis over a median of 19 (IQR: 8-32) weeks of post-simulation follow-up. Discussion Simulation was associated with persistent changes to EHR use patterns among pediatric residents. Conclusion EHR simulation is an effective educational method that can change participants' use patterns in real clinical settings.
Collapse
|
17
|
Abstract
BACKGROUND Medical errors in blood product orders and administration are common, especially for pediatric patients. A failure modes and effects analysis in our health care system indicated high risk from the electronic blood ordering process. OBJECTIVES This study had two objectives: (1) to describe differences in the design of the original blood product orders and order sets in the system (original design), new orders and order sets designed by an expert committee (DEC), and a third version developed through user-centered design (UCD); and (2) to compare the number and type of ordering errors, task completion rates, time on task, and user preferences between the original design and that developed via UCD. METHODS A multidisciplinary expert committee proposed adjustments to existing blood product order sets, resulting in the DEC order set. When that order set was tested with front-line users, persistent failure modes were detected, so orders and order sets were redesigned again via formative usability testing. Front-line users in their native clinical workspaces were observed ordering blood in realistic simulated scenarios using a think-aloud protocol. Iterative adjustments were made between participants. In summative testing, participants were randomized to use the original design or UCD for five simulated scenarios. We evaluated differences in ordering errors, time on task, and users' design preference with two-sample t-tests. RESULTS Formative usability testing with 27 providers from seven specialties led to 18 changes made to the DEC to produce the UCD. In summative testing, error-free task completion for the original design was 36%, which increased to 66% with the UCD (difference 30%, 95% confidence interval [CI]: 3.9-57%; p = 0.03). Time on task did not vary significantly. CONCLUSION UCD led to substantially different blood product orders and order sets than DEC.
Users made fewer errors when ordering blood products for pediatric patients in simulated scenarios when using the UCD orders and order sets compared with the original design.
Collapse
|
18
|
Pediatric trainees systematically under-report duty hour violations compared to electronic health record defined shifts. PLoS One 2019; 14:e0226493. [PMID: 31830096 PMCID: PMC6907762 DOI: 10.1371/journal.pone.0226493] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 11/27/2019] [Indexed: 01/20/2023] Open
Abstract
Duty hour monitoring is required in accredited training programs; however, trainee self-reporting is onerous and vulnerable to bias. The objectives of this study were to use an automated, validated algorithm to measure duty hour violations of pediatric trainees over a full academic year and to compare them with self-reported violations. Duty hour violations calculated from electronic health record (EHR) logs varied significantly by trainee role and rotation. Block-by-block differences show 36.8% (222/603) of resident-blocks with more EHR-defined violations (EDV) than self-reported violations (SRV), demonstrating systematic under-reporting of duty hour violations. Automated duty hour tracking could provide real-time, objective assessment of the trainee work environment, allowing program directors and accrediting organizations to design and test interventions focused on improving educational quality.
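The block-by-block comparison described above amounts to counting resident-blocks where EHR-defined violations exceed self-reported violations; a minimal sketch, with illustrative data rather than the study's:

```python
def underreported_fraction(blocks):
    """blocks: list of (edv, srv) tuples, one per resident-block,
    where edv = EHR-defined and srv = self-reported violation counts.
    Returns the fraction of blocks with more EDV than SRV."""
    over = sum(1 for edv, srv in blocks if edv > srv)
    return over / len(blocks)

# Toy sample: two of four blocks show under-reporting.
sample = [(3, 1), (0, 0), (2, 2), (5, 2)]
print(underreported_fraction(sample))  # -> 0.5
```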
Collapse
|
19
|
Towards a Maturity Model for Clinical Decision Support Operations. Appl Clin Inform 2019; 10:810-819. [PMID: 31667818 PMCID: PMC6821535 DOI: 10.1055/s-0039-1697905] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Accepted: 08/14/2019] [Indexed: 12/21/2022] Open
Abstract
Clinical decision support (CDS) systems delivered through the electronic health record are an important element of quality and safety initiatives within a health care system. However, managing a large CDS knowledge base can be an overwhelming task for informatics teams. Additionally, it can be difficult for these informatics teams to communicate their goals with external operational stakeholders and define concrete steps for improvement. We aimed to develop a maturity model that describes a roadmap toward organizational functions and processes that help health care systems use CDS more effectively to drive better outcomes. We developed a maturity model for CDS operations through discussions with health care leaders at 80 organizations, iterative model development by four clinical informaticists, and subsequent review with 19 health care organizations. We ceased iterations when feedback from three organizations did not result in any changes to the model. The proposed CDS maturity model includes three main "pillars": "Content Creation," "Analytics and Reporting," and "Governance and Management." Each pillar contains five levels; advancing along each pillar provides CDS teams a deeper understanding of the processes CDS systems are intended to improve. A "roof" represents the CDS functions that become attainable after advancing along each of the pillars. Organizations are not required to advance in order and can develop in one pillar separately from another. However, we hypothesize that optimal deployment of preceding levels and advancing in tandem along the pillars increase the value of organizational investment in higher levels of CDS maturity. In addition to describing the maturity model and its development, we also provide three case studies of health care organizations using the model for self-assessment and to determine next steps in CDS development.
Collapse
|
20
|
Hidden health IT hazards: a qualitative analysis of clinically meaningful documentation discrepancies at transfer out of the pediatric intensive care unit. JAMIA Open 2019; 2:392-398. [PMID: 31984372 PMCID: PMC6951953 DOI: 10.1093/jamiaopen/ooz026] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Accepted: 06/25/2019] [Indexed: 11/14/2022] Open
Abstract
Objective The risk of medical errors increases upon transfer out of the intensive care unit (ICU). Discrepancies in the documented care plan between notes at the time of transfer may contribute to communication errors. We sought to determine the frequency of clinically meaningful discrepancies in the documented care plan for patients transferred from the pediatric ICU to the medical wards and to identify risk factors. Materials and Methods Two physician reviewers independently compared the transfer note and handoff document of 50 randomly selected transfers. Clinically meaningful discrepancies in the care plan between these two documents were identified using a coding procedure adapted from healthcare failure mode and effects analysis. We assessed the influence of risk factors via multivariable regression. Results We identified 34 clinically meaningful discrepancies in 50 patient transfers. Fourteen transfers (28%) had ≥1 discrepancy, and ≥2 were present in 7 transfers (14%). The most common discrepancy categories were differences in situational awareness notifications and documented current therapy. Transfers with handoff document length in the top quartile had 10.6 (95% CI: 1.2-90.2) times more predicted discrepancies than transfers with handoff length in the bottom quartile. Patients receiving more medications in the 24 hours prior to transfer had higher discrepancy counts, with each additional medication increasing the predicted number of discrepancies by 17% (95% CI: 6%-29%). Conclusion Clinically meaningful discrepancies in the documented care plan pose legitimate safety concerns and are common at the time of transfer out of the ICU among complex patients.
Collapse
|
21
|
Automatic Detection of Front-Line Clinician Hospital Shifts: A Novel Use of Electronic Health Record Timestamp Data. Appl Clin Inform 2019; 10:28-37. [PMID: 30625502 DOI: 10.1055/s-0038-1676819] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
OBJECTIVE Excess physician work hours contribute to burnout and medical errors. Self-report of work hours is burdensome and often inaccurate. We aimed to validate a method that automatically determines provider shift duration based on electronic health record (EHR) timestamps across multiple inpatient settings within a single institution. METHODS We developed an algorithm to calculate shift start and end times for inpatient providers based on EHR timestamps. We validated the algorithm based on overlap between calculated shifts and scheduled shifts. We then demonstrated a use case by calculating shifts for pediatric residents on inpatient rotations from July 1, 2015 through June 30, 2016, comparing hours worked and number of shifts by rotation and role. RESULTS We collected 6.3 × 10⁷ EHR timestamps for 144 residents on 771 inpatient rotations, yielding 14,678 EHR-calculated shifts. Validation on a subset of shifts demonstrated 100% shift match and 87.9 ± 0.3% overlap (mean ± standard error [SE]) with scheduled shifts. Senior residents functioning as front-line clinicians worked more hours per 4-week block (mean ± SE: 273.5 ± 1.7) than senior residents in supervisory roles (253 ± 2.3) and junior residents (241 ± 2.5). Junior residents worked more shifts per block (21 ± 0.1) than senior residents (18 ± 0.1). CONCLUSION Automatic calculation of inpatient provider work hours is feasible using EHR timestamps. An algorithm to assess provider work hours demonstrated criterion validity via comparison with scheduled shifts. Differences between junior and senior residents in calculated mean hours worked and number of shifts per 4-week block were also consistent with differences in scheduled shifts and duty-hour restrictions.
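One plausible form of the shift-detection algorithm described above is gap-based clustering: a provider's EHR access timestamps are sorted, and a new shift begins whenever the silence between consecutive events exceeds a threshold. This sketch is an assumption about the approach; the gap threshold and names are illustrative, not the study's published parameters.

```python
from datetime import datetime, timedelta

def infer_shifts(timestamps, max_gap=timedelta(hours=4)):
    """Group timestamps into (start, end) shift tuples, splitting
    wherever the gap between consecutive events exceeds max_gap."""
    ts = sorted(timestamps)
    shifts, start = [], ts[0]
    for prev, curr in zip(ts, ts[1:]):
        if curr - prev > max_gap:      # long silence -> shift boundary
            shifts.append((start, prev))
            start = curr
    shifts.append((start, ts[-1]))
    return shifts

day = datetime(2016, 7, 1)
# EHR events at hours 7-15 on day one, then hours 31-36 (next day).
events = [day + timedelta(hours=h) for h in (7, 9, 12, 15, 31, 33, 36)]
print(len(infer_shifts(events)))  # -> 2 inferred shifts
```

Shift duration then follows directly from each tuple (`end - start`), which is how calculated hours could be compared against scheduled shifts.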
Collapse
|
22
|
|
23
|
A Model for Clinical Informatics Education for Residents: Addressing an Unmet Need. Appl Clin Inform 2018; 9:261-267. [PMID: 29669389 DOI: 10.1055/s-0038-1641735] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
Abstract
Opportunities for education in clinical informatics exist throughout the spectrum of formal education extending from high school to postgraduate training. However, physicians in residency represent an underdeveloped source of potential informaticians. Despite the rapid growth of accredited fellowship programs since clinical informatics became a board-eligible subspecialty in 2011, few resident physicians are aware of their role at the intersection of clinical medicine and health information technology or associated opportunities. In an effort to educate and engage residents in clinical informatics, Children's Hospital of Philadelphia has developed a three-pronged model: (1) an elective rotation with hands-on project experience; (2) a longitudinal experience that offers increased exposure and mentorship; and (3) a resident founded and led working group in clinical informatics. We describe resident participation in these initiatives and lessons learned, as well as resident perceptions of how these components have positively influenced informatics knowledge and career choices. Since inception of this model, five residents have pursued the clinical informatics fellowship. This educational model supports resident involvement in hospital-wide informatics efforts with tangible projects and promotes wider engagement through educational opportunities commensurate with the resident's level of interest.
Collapse
|
24
|
Cost-effectiveness of maternal influenza immunization in Bamako, Mali: A decision analysis. PLoS One 2017; 12:e0171499. [PMID: 28170416 PMCID: PMC5295679 DOI: 10.1371/journal.pone.0171499] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2016] [Accepted: 01/20/2017] [Indexed: 01/15/2023] Open
Abstract
Background Maternal influenza immunization has gained traction as a strategy to diminish maternal and neonatal mortality. However, efforts to vaccinate pregnant women against influenza in developing countries will require substantial investment. We present cost-effectiveness estimates of maternal influenza immunization based on clinical trial data from Bamako, Mali. Methods We parameterized a decision-tree model using prospectively collected trial data on influenza incidence, vaccine efficacy, and direct and indirect influenza-related healthcare expenditures. Since clinical trial participants likely had better access to care than the general Malian population, we also simulated scenarios with poor access to care, including decreased healthcare resource utilization and worse influenza-related outcomes. Results Under base-case assumptions, a maternal influenza immunization program in Mali would cost $857 (95% UI: $188-$2358) per disability-adjusted life year (DALY) saved. Adjusting for poor access to care yielded a cost-effectiveness ratio of $486 (95% UI: $105-$1425) per DALY saved. Cost-effectiveness ratios were most sensitive to changes in the cost of a maternal vaccination program and to the proportion of laboratory-confirmed influenza among infants warranting hospitalization. Mean cost-effectiveness estimates fell below Mali’s GDP per capita when the cost per pregnant woman vaccinated was $1.00 or less with no adjustment for access to care or $1.67 for those with poor access to care. Healthcare expenditures for lab-confirmed influenza were not significantly different from the cost of influenza-like illness. Conclusions Maternal influenza immunization in Mali would be cost-effective in most settings if vaccine can be obtained, managed, and administered for ≤$1.00 per pregnant woman.
Collapse
|
25
|
Maternal immunisation with trivalent inactivated influenza vaccine for prevention of influenza in infants in Mali: a prospective, active-controlled, observer-blind, randomised phase 4 trial. THE LANCET. INFECTIOUS DISEASES 2016; 16:1026-1035. [PMID: 27261067 PMCID: PMC4985566 DOI: 10.1016/s1473-3099(16)30054-8] [Citation(s) in RCA: 174] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 04/13/2016] [Accepted: 04/15/2016] [Indexed: 12/16/2022]
Abstract
Background Despite the heightened risk of serious influenza during infancy, vaccination is not recommended in infants younger than 6 months. We aimed to assess the safety, immunogenicity, and efficacy of maternal immunisation with trivalent inactivated influenza vaccine for protection of infants against a first episode of laboratory-confirmed influenza. Methods We did this prospective, active-controlled, observer-blind, randomised phase 4 trial at six referral centres and community health centres in Bamako, Mali. Third-trimester pregnant women (≥28 weeks' gestation) were randomly assigned (1:1), via a computer-generated, centre-specific list with alternate block sizes of six or 12, to receive either trivalent inactivated influenza vaccine or quadrivalent meningococcal vaccine. Study personnel administering vaccines were not masked to treatment allocation, but allocation was concealed from clinicians, laboratory personnel, and participants. Infants were visited weekly until age 6 months to detect influenza-like illness; laboratory-confirmed influenza was diagnosed by RT-PCR. We assessed two coprimary objectives: vaccine efficacy against laboratory-confirmed influenza in infants born to women immunised any time prepartum (intention-to-treat population), and vaccine efficacy in infants born to women immunised at least 14 days prepartum (per-protocol population). The primary outcome was the occurrence of a first case of laboratory-confirmed influenza by age 6 months. This trial is registered with ClinicalTrials.gov, number NCT01430689. Findings We did this trial from Sept 12, 2011, to Jan 28, 2014. Between Sept 12, 2011, and April 18, 2013, we randomly assigned 4193 women to receive trivalent inactivated influenza vaccine (n=2108) or quadrivalent meningococcal vaccine (n=2085).
There were 4105 livebirths; 1797 (87%) of 2064 infants in the trivalent inactivated influenza vaccine group and 1793 (88%) of 2041 infants in the quadrivalent meningococcal vaccine group were followed up until age 6 months. We recorded 5279 influenza-like illness episodes in 2789 (68%) infants, of which 131 (2%) episodes were laboratory-confirmed influenza. 129 (98%) cases of laboratory-confirmed influenza were first episodes (n=77 in the quadrivalent meningococcal vaccine group vs n=52 in the trivalent inactivated influenza vaccine group). In the intention-to-treat population, overall infant vaccine efficacy was 33·1% (95% CI 3·7–53·9); in the per-protocol population, vaccine efficacy was 37·3% (7·6–57·8). Vaccine efficacy remained robust during the first 4 months of follow-up (67·9% [95% CI 35·1–85·3] by intention to treat and 70·2% [35·7–87·6] by per protocol), before diminishing during the fifth month (57·3% [30·6–74·4] and 60·7% [33·8–77·5], respectively). Adverse event rates in women and infants were similar among groups. Pain at the injection site was more common in women given quadrivalent meningococcal vaccine than in those given trivalent inactivated influenza vaccine (n=253 vs n=132; p<0·0001), although 354 [92%] reactions were mild. Obstetrical and non-obstetrical serious adverse events were reported in 60 (3%) women in the quadrivalent meningococcal vaccine group and 61 (3%) women in the trivalent inactivated influenza vaccine group. Presumed neonatal infection was more common in infants in the trivalent inactivated influenza vaccine group than in those in the quadrivalent meningococcal vaccine group (n=60 vs n=37; p=0·02). No serious adverse events were related to vaccination. Interpretation Vaccination of pregnant women with trivalent inactivated influenza vaccine in Mali—a poorly resourced country with high infant mortality—was technically and logistically feasible and protected infants from laboratory-confirmed influenza for 4 months.
With adequate financing to procure the vaccine, implementation will parallel the access to antenatal care and immunisation coverage of pregnant women with tetanus toxoid. Funding Bill & Melinda Gates Foundation.
Collapse
|
26
|
Potential cost-effectiveness of schistosomiasis treatment for reducing HIV transmission in Africa--the case of Zimbabwean women. PLoS Negl Trop Dis 2013; 7:e2346. [PMID: 23936578 PMCID: PMC3731236 DOI: 10.1371/journal.pntd.0002346] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2012] [Accepted: 06/18/2013] [Indexed: 11/24/2022] Open
Abstract
Background Epidemiological data from Zimbabwe suggests that genital infection with Schistosoma haematobium may increase the risk of HIV infection in young women. Therefore, the treatment of Schistosoma haematobium with praziquantel could be a potential strategy for reducing HIV infection. Here we assess the potential cost-effectiveness of praziquantel as a novel intervention strategy against HIV infection. Methods We developed a mathematical model of female genital schistosomiasis (FGS) and HIV infections in Zimbabwe that we fitted to cross-sectional data of FGS and HIV prevalence of 1999. We validated our epidemic projections using antenatal clinic data on HIV prevalence. We simulated annual praziquantel administration to school-age children. We then used these model predictions to perform a cost-effectiveness analysis of annual administration of praziquantel as a potential measure to reduce the burden of HIV in sub-Saharan Africa. Findings We showed that for a variation of efficacy between 30–70% of mass praziquantel administration for reducing the enhanced risk of HIV transmission per sexual act due to FGS, annual administration of praziquantel to school-age children in Zimbabwe could result in net savings of US$16–101 million compared with no mass treatment of schistosomiasis over a ten-year period. For a variation in efficacy between 30–70% of mass praziquantel administration for reducing the acquisition of FGS, annual administration of praziquantel to school-age children could result in net savings of US$36−92 million over a ten-year period. Conclusions In addition to reducing schistosomiasis burden, mass praziquantel administration may be a highly cost-effective way of reducing HIV infections in sub-Saharan Africa. Program costs per case of HIV averted are similar to, and under some conditions much better than, other interventions that are currently implemented in Africa to reduce HIV transmission. 
As a cost-saving strategy, mass praziquantel administration should be prioritized over other less cost-effective public health interventions. Evidence from epidemiological and clinical studies supports the hypothesis that genital infection with Schistosoma haematobium increases the risk of becoming infected with HIV among women in sub-Saharan Africa. Praziquantel is an oral, nontoxic, inexpensive medication recommended for treatment of schistosomiasis, which might be able to prevent the development of genital schistosomiasis. We constructed a mathematical model of female genital schistosomiasis and HIV infections, which we calibrated using epidemiological data from Zimbabwe. We used this model to investigate the potential cost-effectiveness of mass drug administration with praziquantel as an intervention strategy for reducing HIV transmission in sub-Saharan Africa. We showed that mass drug administration with praziquantel may be a timely, innovative, and cost-saving intervention strategy for HIV prevention in sub-Saharan Africa. Our findings indicate the possible benefit of scaling up schistosomiasis control efforts in sub-Saharan Africa, especially in areas where Schistosoma haematobium and HIV are highly prevalent.
Collapse
|
27
|
Background rates of adverse pregnancy outcomes for assessing the safety of maternal vaccine trials in sub-Saharan Africa. PLoS One 2012; 7:e46638. [PMID: 23056380 PMCID: PMC3464282 DOI: 10.1371/journal.pone.0046638] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2012] [Accepted: 09/02/2012] [Indexed: 11/28/2022] Open
Abstract
Background Maternal immunization has gained traction as a strategy to diminish maternal and young infant mortality attributable to infectious diseases. Background rates of adverse pregnancy outcomes are crucial to interpret results of clinical trials in Sub-Saharan Africa. Methods We developed a mathematical model that calculates a clinical trial's expected number of neonatal and maternal deaths at an interim safety assessment based on the person-time observed during different risk windows. This model was compared to crude multiplication of the maternal mortality ratio and neonatal mortality rate by the number of live births. Systematic reviews of severe acute maternal morbidity (SAMM), low birth weight (LBW), prematurity, and major congenital malformations (MCM) in Sub-Saharan African countries were also performed. Findings Accounting for the person-time observed during different risk periods yields lower, more conservative estimates of expected maternal and neonatal deaths, particularly at an interim safety evaluation soon after a large number of deliveries. Median incidence of SAMM in 16 reports was 40.7 (IQR: 10.6–73.3) per 1,000 total births, and the most common causes were hemorrhage (34%), dystocia (22%), and severe hypertensive disorders of pregnancy (22%). Proportions of liveborn infants who were LBW (median 13.3%, IQR: 9.9–16.4) or premature (median 15.4%, IQR: 10.6–19.1) were similar across geographic region, study design, and institutional setting. The median incidence of MCM per 1,000 live births was 14.4 (IQR: 5.5–17.6), with the musculoskeletal system comprising 30%. Interpretation Some clinical trials assessing whether maternal immunization can improve pregnancy and young infant outcomes in the developing world have made ethics-based decisions not to use a pure placebo control. Consequently, reliable background rates of adverse pregnancy outcomes are necessary to distinguish between vaccine benefits and safety concerns. 
Local studies that quantify population-based background rates of adverse pregnancy outcomes will improve safety assessment of interventions during pregnancy.
Collapse
|
28
|
Treatment outcomes among patients with multidrug-resistant tuberculosis: systematic review and meta-analysis. THE LANCET. INFECTIOUS DISEASES 2009; 9:153-61. [PMID: 19246019 DOI: 10.1016/s1473-3099(09)70041-6] [Citation(s) in RCA: 360] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Multidrug-resistant (MDR) tuberculosis is a growing clinical and public-health concern. To evaluate existing evidence regarding treatment regimens for MDR tuberculosis, we used a Bayesian random-effects meta-analysis of the available therapeutic studies to assess how the reported proportion of patients treated successfully is influenced by differences in treatment regimen design, study methodology, and patient population. Successful treatment outcome was defined as cure or treatment completion. 34 clinical reports with a mean of 250 patients per report met the inclusion criteria. Our analysis shows that the proportion of patients treated successfully improved when treatment duration was at least 18 months, and if patients received directly observed therapy throughout treatment. Studies that combined both factors had significantly higher pooled success proportions (69%, 95% credible interval [CI] 64-73%) than other studies of treatment outcomes (58%, 95% CI 52-64%). Individualised treatment regimens had higher treatment success (64%, 95% CI 59-68%) than standardised regimens (54%, 95% CI 43-68%), although the difference was not significant. Treatment approaches and study methodologies were heterogeneous across studies. Many important variables, including patients' HIV status, were inconsistently reported between studies. These results underscore the importance of strong patient support and treatment follow-up systems to develop successful MDR tuberculosis treatment programmes.
Collapse
|
29
|
Methodologic issues regarding the use of three observational study designs to assess influenza vaccine effectiveness. Int J Epidemiol 2007; 36:623-31. [PMID: 17403908 DOI: 10.1093/ije/dym021] [Citation(s) in RCA: 195] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
BACKGROUND Influenza causes substantial morbidity and annual vaccination is the most important prevention strategy. Accurately measuring vaccine effectiveness (VE) is difficult. The clinical syndrome most closely associated with influenza virus infection, influenza-like illness (ILI), is not specific. In addition, laboratory confirmation is infrequently done, and available rapid diagnostic tests are imperfect. The objective of this study was to estimate the joint impact of rapid diagnostic test sensitivity and specificity on VE for three types of study designs: a cohort study, a traditional case-control study, and a case-control study that used as controls individuals with ILI who tested negative for influenza virus infection. METHODS We developed a mathematical model with five input parameters: true VE, attack rates (ARs) of influenza-ILI and non-influenza-ILI, and the sensitivity and specificity of the diagnostic test. RESULTS With imperfect specificity, estimates from all three designs tended to underestimate true VE, but were similar except when fairly extreme inputs were used. Only if test specificity was 95% or more, or if influenza attack rates were double those of background illness, did the case-control method slightly overestimate VE. The case-control method usually produced the highest and most accurate estimates, followed by the test-negative design. The bias toward underestimating true VE introduced by low test specificity increased as the AR of influenza-ILI relative to non-influenza-ILI decreased and, to a lesser degree, with lower test sensitivity. CONCLUSIONS Demonstration of a high influenza VE using tests with imperfect sensitivity and specificity should provide reassurance that the program has been effective in reducing influenza illnesses, assuming adequate control of confounding factors.
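The cohort-design arm of such a model can be sketched directly from the five named inputs. In the sketch below, vaccination reduces only the influenza-ILI attack rate, and the observed outcome is a positive rapid test (true positives among influenza-ILI plus false positives among non-influenza-ILI); the function name and parameter values are assumed for illustration, not taken from the paper.

```python
def apparent_ve_cohort(true_ve, ar_flu, ar_non, sensitivity, specificity):
    """Apparent VE from a cohort design when influenza is ascertained with
    an imperfect diagnostic test.

    ar_flu, ar_non: attack rates of influenza-ILI and non-influenza-ILI in
    the unvaccinated; vaccination reduces only influenza-ILI.
    """
    # Test-positive rate in each group: true positives + false positives
    pos_vaccinated = sensitivity * ar_flu * (1 - true_ve) + (1 - specificity) * ar_non
    pos_unvaccinated = sensitivity * ar_flu + (1 - specificity) * ar_non
    return 1 - pos_vaccinated / pos_unvaccinated

# With a perfectly specific test, sensitivity cancels and true VE is recovered;
# imperfect specificity biases the estimate toward the null.
recovered = apparent_ve_cohort(0.6, 0.1, 0.2, 0.8, 1.0)  # ≈ 0.6
biased = apparent_ve_cohort(0.6, 0.1, 0.2, 0.8, 0.9)     # ≈ 0.48 (underestimate)
```

This reproduces the abstract's qualitative findings: false positives dilute both arms equally in absolute terms but proportionally more in the vaccinated arm, so low specificity pulls the estimate below true VE, and the bias grows as influenza-ILI becomes rarer relative to non-influenza-ILI.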
Collapse
|
30
|
The epidemiology and burden of rotavirus in China: a review of the literature from 1983 to 2005. Vaccine 2006; 25:406-13. [PMID: 16956700 DOI: 10.1016/j.vaccine.2006.07.054] [Citation(s) in RCA: 52] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2006] [Revised: 07/14/2006] [Accepted: 07/25/2006] [Indexed: 01/10/2023]
Abstract
We reviewed studies of rotavirus in MEDLINE and the Chinese literature to obtain a preliminary estimate of the burden of rotavirus gastroenteritis in China and the epidemiology of the disease. Studies were selected if they were conducted for a period of 1 year or more, had more than 100 patients enrolled, and used an accepted diagnostic test. Overall, in 27 reports of children hospitalized for diarrhea in urban areas and 3 in rural areas, 44 and 33%, respectively, had rotavirus identified as the etiologic agent. Rotavirus was less commonly detected in children with milder illness seen in clinics (26% in urban and 28% in rural areas) and those cared for in the community (9%). The four main strains of rotavirus in circulation worldwide were also found in China, and while G1 was the predominant strain overall, G3 emerged as the most common strain in 9 of the 12 most recent studies. The disease has a distinct winter seasonal pattern and affects most children in their first 2 years of life. Although further studies are required to fully assess the burden of rotavirus diarrhea before decisions can be made about vaccine use, this review suggests that development and implementation of rotavirus vaccines should be a national priority.
Collapse
|