1
The Golden Hour of Casualty Care: Rapid Handoff to Surgical Team is Associated With Improved Survival in War-injured US Service Members. Ann Surg 2024; 279:1-10. PMID: 36728667. DOI: 10.1097/sla.0000000000005787.
Abstract
OBJECTIVE To examine time from injury to initiation of surgical care and its association with survival in US military casualties. BACKGROUND Although the advantage of trauma care within the "golden hour" after an injury is generally accepted, evidence is scarce. METHODS This retrospective, population-based cohort study included US military casualties injured in Afghanistan and Iraq, January 2007 to December 2015, alive at initial request for evacuation, with maximum Abbreviated Injury Scale scores ≥2 and documented 30-day survival status after injury. Two interventions, (1) handoff alive to the surgical team and (2) initiation of first surgery, were analyzed as time-dependent covariates (elapsed time from injury) using sequential Cox proportional hazards regression to assess how intervention timing might affect mortality. Covariates included age, injury year, and injury severity. RESULTS Among 5269 patients (median age, 24 years; 97% male; 68% battle-injured), 728 died within 30 days of injury, 68% within 1 hour and 90% within 4 hours. Only handoffs within 1 hour of injury and the resultant timely initiation of emergency surgery (adjusted also for prior advanced resuscitative interventions) were significantly associated with reduced 24-hour mortality compared with more delayed surgical care (adjusted hazard ratios: 0.34; 95% CI: 0.14-0.82; P = 0.02 and 0.40; 95% CI: 0.20-0.81; P = 0.01, respectively). In-hospital waits for surgery (mean: 1.1 hours; 95% CI: 1.0-1.2) scarcely contributed (P = 0.67). CONCLUSIONS Rapid handoff to the surgical team within 1 hour of injury may reduce mortality by 66% in US military casualties. In the subgroup of casualties with indications for emergency surgery, rapid handoff with timely surgical intervention may reduce mortality by 60%. These findings are pivotal for informing future research and trauma system planning.
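The headline percentages in the conclusion follow directly from the adjusted hazard ratios; a quick sketch of that arithmetic (hazard ratios taken from the abstract; note a hazard reduction only approximates a risk reduction when events are rare):

```python
# Relative reduction implied by a hazard ratio: (1 - HR) * 100%.
hr_handoff = 0.34   # adjusted HR, handoff to surgical team within 1 h of injury
hr_surgery = 0.40   # adjusted HR, timely initiation of emergency surgery

reduction_handoff = (1 - hr_handoff) * 100
reduction_surgery = (1 - hr_surgery) * 100

print(f"handoff <1 h:   {reduction_handoff:.0f}% lower 24-hour mortality hazard")
print(f"timely surgery: {reduction_surgery:.0f}% lower 24-hour mortality hazard")
```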
2
Finding the bleeding edge: 24-hour mortality by unit of blood product transfused in combat casualties from 2002-2020. J Trauma Acute Care Surg 2023; 95:635-641. PMID: 37399037. DOI: 10.1097/ta.0000000000004028.
Abstract
BACKGROUND Transfusion studies in civilian trauma patients have tried to identify a general futility threshold. We hypothesized that in combat settings there is no general threshold at which blood product transfusion stops benefiting survival in hemorrhaging patients. We sought to assess the relationship between the number of units of blood products transfused and 24-hour mortality in combat casualties. METHODS We performed a retrospective analysis of the Department of Defense Trauma Registry, supplemented with data from the Armed Forces Medical Examiner. Combat casualties who received at least one unit of blood product at US military medical treatment facilities (MTFs) in combat settings (2002-2020) were included. The main intervention was the total units of any blood product transfused, measured from the point of injury until 24 hours after admission to the first deployed MTF. The primary outcome was discharge status (alive, dead) at 24 hours from time of injury. RESULTS Of 11,746 patients included, the median age was 24 years, and most patients were male (94.2%) with penetrating injury (84.7%). The median Injury Severity Score was 17, and 783 (6.7%) patients died by 24 hours. The median number of units of blood products transfused was 8. Most blood products transfused were red blood cells (50.2%), followed by plasma (41.1%), platelets (5.5%), and whole blood (3.2%). Among the 10 patients who received the most units of blood product (164 to 290 units), 7 survived to 24 hours. The maximum total amount of blood products transfused to a patient who survived was 276 units. Of the 58 patients who received over 100 units of blood product, 20.7% died by 24 hours. CONCLUSION While civilian trauma studies suggest the possibility of futility with ultra-massive transfusion, we report that the majority (79.3%) of combat casualties who received more than 100 units survived to 24 hours. These results do not support a threshold for futility of blood product transfusion. Further analysis of predictors of mortality will help guide decisions under blood product and resource constraints. LEVEL OF EVIDENCE Prognostic and Epidemiological; Level IV.
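The 79.3% survival figure among ultra-massive transfusion recipients is simply the complement of the reported 24-hour mortality; a sketch of the arithmetic using the abstract's counts:

```python
patients_over_100_units = 58   # casualties receiving >100 units of blood product
died_fraction = 0.207          # 20.7% died by 24 hours

survived_fraction = 1 - died_fraction
survivors = patients_over_100_units * survived_fraction
print(f"~{survivors:.0f} of 58 patients alive at 24 h ({survived_fraction:.1%})")
```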
3
The epidemiology and outcomes of prolonged trauma care (EpiC) study: methodology of a prospective multicenter observational study in the Western Cape of South Africa. Scand J Trauma Resusc Emerg Med 2022; 30:55. PMID: 36253865. PMCID: PMC9574798. DOI: 10.1186/s13049-022-01041-1.
Abstract
Background Deaths due to injuries exceed 4.4 million annually, with over 90% occurring in low- and middle-income countries. A key contributor to high trauma mortality is prolonged trauma-to-treatment time. Earlier receipt of medical care following an injury is critical to better patient outcomes. Trauma epidemiological studies can identify gaps and opportunities to help strengthen emergency care systems globally, especially in lower-income countries and among military personnel wounded in combat. This paper describes the methodology of the "Epidemiology and Outcomes of Prolonged Trauma Care (EpiC)" study, which aims to investigate how the delivery of resuscitative interventions and their timeliness impact the morbidity and mortality outcomes of patients with critical injuries in South Africa. Methods The EpiC study is a prospective, multicenter cohort study that will be implemented over a 6-year period in the Western Cape, South Africa. Data collected will link pre- and in-hospital care with mortuary reports through standardized clinical chart abstraction and will provide longitudinal documentation of the patient's clinical course after injury. The study will enroll an anticipated sample of 14,400 injured adults. Survival and regression analyses will be used to assess the effects of critical early resuscitative interventions (airway, breathing, circulatory, and neurologic) and trauma-to-treatment time on the primary 7-day mortality outcome and on secondary mortality (24-h, 30-day) and morbidity outcomes (need for operative interventions, secondary infections, and organ failure). Discussion This study is the first effort in the Western Cape of South Africa to build a standardized, high-quality, multicenter epidemiologic trauma dataset that links pre- and in-hospital care with mortuary data. In high-income countries and the U.S. military, the introduction of trauma databases and registries has led to interventions that significantly reduce post-injury death and disability. The EpiC study will describe epidemiologic trends over time, and it will enable assessments of how trauma care and system processes directly impact trauma outcomes, to ultimately improve the overall emergency care system. Trial Registration: Not applicable, as this study is not a clinical trial.
4
An adaptive platform trial for evaluating treatments in patients with life-threatening hemorrhage from traumatic injuries: Planning and execution. Transfusion 2022; 62 Suppl 1:S242-S254. PMID: 35748672. DOI: 10.1111/trf.16982.
5
An adaptive platform trial for evaluating treatments in patients with life-threatening hemorrhage from traumatic injuries: Rationale and proposal. Transfusion 2022; 62 Suppl 1:S231-S241. PMID: 35732508. DOI: 10.1111/trf.16957.
6
Whole blood at the tip of the spear: A retrospective cohort analysis of warm fresh whole blood resuscitation versus component therapy in severely injured combat casualties. Surgery 2022; 171:518-525. PMID: 34253322. DOI: 10.1016/j.surg.2021.05.051.
Abstract
BACKGROUND Death from uncontrolled hemorrhage occurs rapidly, particularly among combat casualties. The US military has used warm fresh whole blood during combat operations owing to clinical and operational exigencies, but published outcomes data are limited. We compared early mortality between casualties who received warm fresh whole blood versus no warm fresh whole blood. METHODS Casualties injured in Afghanistan from 2008 to 2014 who received ≥2 red blood cell-containing units were reviewed using records from the Joint Trauma System Role 2 Database. The primary outcome was 6-hour mortality. Patients who received red blood cells solely from component therapy were categorized as the non-warm fresh whole blood group. Non-warm fresh whole blood patients were frequency-matched to warm fresh whole blood patients on identical strata by injury type, patient affiliation, tourniquet use, prehospital transfusion, and average hourly unit red blood cell transfusion rates, creating clinically unique strata. Multilevel mixed effects logistic regression adjusted for the matching, immortal time bias, and other covariates. RESULTS The 1,105 study patients (221 warm fresh whole blood, 884 non-warm fresh whole blood) were classified into 29 unique clinical strata. The adjusted odds ratio of 6-hour mortality was 0.27 (95% confidence interval 0.13-0.58) for the warm fresh whole blood versus non-warm fresh whole blood group. The reduction in mortality increased in magnitude (odds ratio = 0.15, P = .024) among the subgroup of 422 patients with complete data allowing adjustment for seven additional covariates. There was a dose-dependent effect of warm fresh whole blood, with patients receiving a higher warm fresh whole blood dose (>33% of red blood cell-containing units) having significantly lower mortality versus the non-warm fresh whole blood group.
CONCLUSION Warm fresh whole blood resuscitation was associated with a significant reduction in 6-hour mortality versus non-warm fresh whole blood in combat casualties, with a dose-dependent effect. These findings support warm fresh whole blood use for hemorrhage control as well as expanded study in military and civilian trauma settings.
7
Case-control analysis of prehospital death and prolonged field care survival during recent US military combat operations. J Trauma Acute Care Surg 2021; 91:S186-S193. PMID: 34324473. DOI: 10.1097/ta.0000000000003252.
Abstract
BACKGROUND Quantification of medical interventions administered during prolonged field care (PFC) is necessary to inform training and planning. MATERIALS AND METHODS Retrospective cohort study of Department of Defense Trauma Registry casualties with a maximum Abbreviated Injury Scale (MAIS) score of 2 or greater and prehospital records during combat operations from 2007 to 2015; US military nonsurvivors were linked to Armed Forces Medical Examiner System data. Medical interventions administered to survivors of 4 to 72 hours of PFC and to nonsurvivors who died prehospital were compared by frequency-matching on mechanism (explosive, firearm, other), injury type (penetrating, blunt), and injured body regions with MAIS score of 3 or greater. Covariates for adjustment included age, sex, military service, shock, Glasgow Coma Scale score, transport team, MAIS, and Injury Severity Score (ISS). Sensitivity analysis focused on the US military subgroup with AIS/ISS assigned to nonsurvivors after autopsy. RESULTS The total inception cohort included 16,202 casualties (5,269 US military, 10,809 non-US military), 64% in Afghanistan and 36% in Iraq. Among US military casualties, 734 deaths occurred within 30 days; nearly 90% occurred within 4 hours of injury. There were 3,222 casualties (1,111 US military, 2,111 non-US military) with documented prehospital care who either died prehospital (691) or survived 4 to 72 hours of PFC (2,531). Twenty-five percent (815/3,222) received an advanced airway, 18% (583) ventilatory support, and 9% (281) a tourniquet. Twenty-three percent (725) received blood transfusions within 24 hours. In the matched cohort (1,233 survivors, 490 nonsurvivors), differences were observed in care: survivors received more warming, intravenous fluids, sedation, mechanical ventilation, narcotics, and antibiotics; nonsurvivors received more intubations, tourniquets, intraosseous fluids, and cardiopulmonary resuscitation. Sensitivity analysis focused on the US military subgroup (732 survivors, 379 nonsurvivors) showed no significant differences in prehospital interventions. Without autopsy information, the ISS of nonsurvivors significantly underestimated injury severity. CONCLUSION Tourniquets, blood transfusion, airway management, and ventilatory support are frequently required interventions for the seriously injured. Prolonged field care planning should direct resources, technology, and training toward sustained resuscitation, airway, and breathing support in the austere environment. LEVEL OF EVIDENCE Prognostic, Level III.
8
A stepped randomized trial to promote colorectal cancer screening in a nationwide sample of U.S. Veterans. Contemp Clin Trials 2021; 105:106392. PMID: 33823295. DOI: 10.1016/j.cct.2021.106392.
Abstract
BACKGROUND Colorectal cancer (CRC) screening (CRCS) facilitates early detection and lowers CRC mortality. OBJECTIVES To increase CRCS in a randomized trial of stepped interventions. Step 1 compared three modes of delivery of theory-informed minimal cue interventions. Step 2 was designed to more intensively engage those not completing CRCS after Step 1. METHODS Recruitment packets (60,332) were mailed to a random sample of individuals with a record of U.S. military service during the Vietnam-era. Respondents not up-to-date with CRCS were randomized to one of four Step 1 groups: automated telephone, telephone, letter, or survey-only control. Those not completing screening after Step 1 were randomized to one of three Step 2 groups: automated motivational interviewing (MI) call, counselor-delivered MI call, or Step 2 control. Intention-to-treat (ITT) analyses assessed CRCS on follow-up surveys mailed after each step. RESULTS After Step 1 (n = 1784), CRCS was higher in the letter, telephone, and automated telephone groups (by 1%, 5%, 7%) than in survey-only controls (43%), although differences were not statistically significant. After Step 2 (n = 516), there were nonsignificant increases in CRCS in the two intervention groups compared with the controls. CRCS following any combination of stepped interventions overall was 7% higher (P = 0.024) than in survey-only controls (55.6%). CONCLUSIONS In a nationwide study of Veterans, CRCS after each of two stepped interventions of varying modes of delivery did not differ significantly from that in controls. However, combined overall, the sequence of stepped interventions significantly increased CRCS.
9
Abstract
OBJECTIVE To address the clinical and regulatory challenges of choosing optimal primary endpoints for bleeding patients by developing consensus-based recommendations for primary clinical outcomes for pivotal trials in patients within 6 categories of significant bleeding: (1) traumatic injury, (2) intracranial hemorrhage, (3) cardiac surgery, (4) gastrointestinal hemorrhage, (5) inherited bleeding disorders, and (6) hypoproliferative thrombocytopenia. BACKGROUND A standardized primary outcome in clinical trials evaluating hemostatic products and strategies for the treatment of clinically significant bleeding would facilitate the conduct, interpretation, and translation into clinical practice of hemostasis research and support alignment among funders, investigators, clinicians, and regulators. METHODS An international panel of experts was convened by the National Heart, Lung, and Blood Institute and the United States Department of Defense on September 23 and 24, 2019. For patients suffering hemorrhagic shock, the 26 trauma working-group members met for almost a year, using biweekly phone conferences and then an in-person meeting to evaluate the strengths and weaknesses of previous high-quality studies. The selection of the recommended primary outcome was guided by goals of patient-centeredness, expected or demonstrated sensitivity to beneficial treatment effects, biologic plausibility, clinical and logistical feasibility, and broad applicability. CONCLUSIONS For patients suffering hemorrhagic shock, especially from truncal hemorrhage, the recommended primary outcome was 3- to 6-hour all-cause mortality, chosen to coincide with the physiology of hemorrhagic death and to avoid bias from competing risks. Particular attention to injury and treatment time was recommended, as well as robust assessment of multiple safety-related outcomes.
10
Association of time to craniectomy with survival in patients with severe combat-related brain injury. Neurosurg Focus 2019; 45:E2. PMID: 30544314. DOI: 10.3171/2018.9.focus18404.
Abstract
OBJECTIVE In combat and austere environments, evacuation to a location with neurosurgical capability is challenging. A planning target in terms of time to neurosurgery is paramount to inform prepositioning of neurosurgical and transport resources to support a population at risk. This study sought to examine the association of wait time to craniectomy with mortality in patients with severe combat-related brain injury who received decompressive craniectomy. METHODS Patients with combat-related brain injury sustained between 2005 and 2015 who underwent craniectomy at deployed surgical facilities were identified from the Department of Defense Trauma Registry and Joint Trauma System Role 2 Registry. Eligible patients survived transport to a hospital capable of diagnosing the need for craniectomy and performing the surgery. Statistical analyses included unadjusted comparisons of postoperative mortality by elapsed time from injury to start of craniectomy, and Cox proportional hazards modeling adjusting for potential confounders. Time from injury to craniectomy was divided into quintiles and explored in Cox models as a binary variable comparing early versus delayed craniectomy, with cutoffs determined by the maximum value of each quintile (quintile 1 vs 2-5, quintiles 1-2 vs 3-5, etc.). Covariates included location of the facility at which the craniectomy was performed (limited-resource role 2 facility vs neurosurgically capable role 3 facility), use of head CT scan, US military status, age, head Abbreviated Injury Scale score, Injury Severity Score, and injury year. To reduce immortal time bias, time from injury to hospital arrival was included as a covariate, entry into the survival analysis cohort was defined as hospital arrival time, and early versus delayed craniectomy was modeled as a time-dependent covariate.
Follow-up for survival ended at death, hospital discharge, or hospital day 16, whichever occurred first. RESULTS Of 486 patients identified as having undergone craniectomy, 213 (44%) had complete date/time values. Unadjusted postoperative mortality was 23% for quintile 1 (n = 43, time from injury to start of craniectomy 30-152 minutes); 7% for quintile 2 (n = 42, 154-210 minutes); 7% for quintile 3 (n = 43, 212-320 minutes); 19% for quintile 4 (n = 42, 325-639 minutes); and 14% for quintile 5 (n = 43, 665-3885 minutes). In Cox models adjusted for potential confounders and immortal time bias, postoperative mortality was significantly lower when time to craniectomy was within 5.33 hours of injury (quintiles 1-3) relative to longer delays (quintiles 4-5), with an adjusted hazard ratio of 0.28, 95% CI 0.10-0.76 (p = 0.012). CONCLUSIONS Postoperative mortality was significantly lower when craniectomy was initiated within 5.33 hours of injury. Further research to optimize craniectomy timing and mitigate delays is needed. Functional outcomes should also be evaluated.
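The quintile-based search for an early-versus-delayed cutoff can be sketched with synthetic times (the registry data are not public, so the times and the `pandas` workflow here are illustrative; the abstract's quintile 3 upper bound of 320 min ≈ 5.33 h is the eventual cutoff of interest):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical injury-to-craniectomy times (minutes) for 213 patients.
df = pd.DataFrame({"minutes": rng.uniform(30, 3885, size=213)})
df["quintile"] = pd.qcut(df["minutes"], q=5, labels=[1, 2, 3, 4, 5])

# Candidate binary cutoffs: the maximum time observed in quintiles 1..k,
# mirroring the abstract's "quintile 1 vs 2-5, quintiles 1-2 vs 3-5, etc."
for k in range(1, 5):
    cutoff = df.loc[df["quintile"].astype(int) <= k, "minutes"].max()
    df[f"early_q{k}"] = df["minutes"] <= cutoff
    print(f"quintiles 1-{k} vs {k + 1}-5: cutoff = {cutoff / 60:.2f} h")
```

In the study itself, each such early/delayed indicator was then entered into a Cox model as a time-dependent covariate; the k = 3 cutoff (5.33 h) was the one associated with significantly lower adjusted mortality.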
12
Ideal hemoglobin transfusion target for resuscitation of massive-transfusion patients. Surgery 2016; 160:1560-1567. PMID: 27450716. DOI: 10.1016/j.surg.2016.05.022.
Abstract
BACKGROUND Overtransfusion of packed red blood cells is known to increase the risk of death in stable patients. With the delineation of minimum transfusion ratios in hemorrhaging patients complete, attention must be turned to the other end of the massive transfusion spectrum: defining the maximum transfusion of packed red blood cells. We aimed to define the ideal hemoglobin range 24 hours after anatomic hemostasis associated with the lowest mortality. METHODS Massive-transfusion patients (≥10 units of packed red blood cells within 24 hours) were reviewed from 2010 to 2013. The hemoglobin 24 ± 6 hours after anatomic hemostasis was used to stratify patients into undertransfusion (<8.0 g/dL), hemoglobin transfusion target (8.0-11.9 g/dL), and overtransfusion (≥12.0 g/dL) groups; patients not surviving to 24 hours were excluded. RESULTS We identified 418 patients (351 [84%] in the hemoglobin transfusion target group, 38 [9%] in the undertransfusion group, and 29 [7%] in the overtransfusion group) with an overall mortality of 18%. Undertransfusion patients had the greatest risk of death (odds ratio 3.3; 95% confidence interval 1.6-6.7), followed by overtransfusion patients (odds ratio 2.5; 95% confidence interval 1.1-5.6). Though pretransfusion hemoglobin was similar (9.5 ± 2.2 g/dL vs 9.5 ± 2.3 g/dL), overtransfusion patients had greater hemoglobin values during massive transfusion (8.3 ± 3.0 g/dL vs 6.9 ± 1.4 g/dL), persisting until hospital dismissal/death (11.4 ± 2.3 g/dL vs 9.6 ± 1.1 g/dL). In total, 657.4 excess packed red blood cell units were transfused (1.9 ± 1.5 per patient). CONCLUSION Overtransfusion patients had increased mortality, comparable to undertransfusion patients, despite younger age and fewer comorbidities. Shorter massive transfusion durations foster a scenario in which patients are at greater risk of overtransfusion.
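The three strata can be expressed as a tiny classifier; a sketch using the abstract's bounds (the function name is ours, and 12.0 g/dL is treated as the lower edge of overtransfusion since the target band tops out at 11.9 g/dL):

```python
def transfusion_group(hgb_g_dl: float) -> str:
    """Classify the hemoglobin measured 24 +/- 6 h after anatomic hemostasis."""
    if hgb_g_dl < 8.0:
        return "undertransfusion"   # < 8.0 g/dL
    if hgb_g_dl < 12.0:
        return "target"             # 8.0-11.9 g/dL
    return "overtransfusion"        # 12.0 g/dL and above

for hgb in (7.2, 9.6, 12.4):
    print(hgb, "->", transfusion_group(hgb))
```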
13
Abstract
OBJECTIVES Despite a national crisis of increased prevalence of obesity and type 2 diabetes mellitus in adolescents, especially among Hispanics, there is a paucity of data on health indicators among farmworker adolescents and their peers. The main aim of this study was to estimate the prevalence of cardiovascular disease risk factors in a population of Hispanic adolescent students in south Texas. The study also aimed to compare the prevalence of these risk factors between students enrolled in the Migrant Education Program (MEP) and other students, and between boys and girls. METHODS In partnership with the Weslaco (Texas) Independent School District and the Migrant Education Department, a cohort study was conducted from 2007 to 2010 to estimate the prevalence of overall obesity (body mass index ≥85th percentile for age and sex), abdominal obesity (waist circumference ≥75th percentile for age, sex, and ethnicity), acanthosis nigricans (AN), and high blood pressure (HBP; ≥90th percentile for age, height, and sex, or systolic/diastolic BP ≥120/80 mm Hg) among MEP students compared with other students from two south Texas high schools. Multilevel logistic regression was used to assess the relation between sex and our main outcomes of interest while accounting for within-school nesting of participants. RESULTS Among 628 sampled students, 508 (80.9%) completed the consent procedure and participated in the study. Of these, 257 were MEP students and 251 were non-MEP peers. Overall, 96.7% of participants were Hispanic and 50.0% were boys. Analyses of data across the years comparing MEP and non-MEP students showed an average prevalence of 44.8% versus 47.7% for overall obesity, 43.2% versus 43.7% for abdominal obesity, 24.7% versus 24.7% for AN, and 29.2% versus 32.8% for HBP.
Across recruitment and follow-up years, the prevalence of overall obesity, abdominal obesity, and HBP was 1.3 to 1.5, 1.2 to 1.8, and 2.9 to 4.6 times higher in boys than in girls, respectively. In contrast, the prevalence of AN varied little by sex. CONCLUSIONS The high prevalence of cardiovascular risk factors in both groups suggests a compelling need for comprehensive, culturally targeted interventions to prevent future cardiovascular diseases in these high-risk Hispanic adolescents, especially among boys. There were not, however, substantial differences between MEP students and other students. These findings also support the feasibility of conducting future epidemiologic studies among adolescent farmworkers and their families, as well as culturally appropriate school or community-based interventions.
14
A joint latent class model for classifying severely hemorrhaging trauma patients. BMC Res Notes 2015; 8:602. PMID: 26498438. PMCID: PMC4620016. DOI: 10.1186/s13104-015-1563-4.
Abstract
BACKGROUND In trauma research, "massive transfusion" (MT), historically defined as receiving ≥10 units of red blood cells (RBCs) within 24 h of admission, has been routinely used as a "gold standard" for quantifying bleeding severity. Due to early in-hospital mortality, however, MT is subject to survivor bias and is thus a poorly defined criterion for classifying bleeding trauma patients. METHODS Using data from a retrospective trauma transfusion study, we applied a latent-class (LC) mixture model to identify severely hemorrhaging (SH) patients. Based on the joint distribution of cumulative units of RBCs and the binary survival outcome at 24 h of admission, we applied an expectation-maximization (EM) algorithm to obtain model parameters. Estimated posterior probabilities were used for patient classification and compared with the MT rule. To evaluate the predictive performance of the LC-based classification, we examined the role of six clinical variables as predictors using two separate logistic regression models. RESULTS Of 471 trauma patients, 211 (45%) were classified as MT, while our latent SH classifier identified only 127 (27%) as SH. The agreement between the two classification methods was 73%. A non-negligible portion of the patients who died within 24 h (17 of 68, 25%) were not classified as MT, whereas the SH group included 62 of these patients (91%). Our comparison of the predictive models based on MT and SH revealed significant differences between the coefficients of potential predictors of patients who may need activation of the massive transfusion protocol. CONCLUSIONS The traditional MT classification does not adequately reflect transfusion practices and outcomes during the trauma reception and initial resuscitation phase. Although we have demonstrated that joint latent class modeling can correct for potential bias caused by misclassification of severely bleeding patients, the approach could be improved given time-to-event data from prospective studies.
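The joint EM idea can be sketched as a two-class mixture in which cumulative RBC units are Poisson and 24-h death is Bernoulli, conditionally independent given the latent class. This is a deliberate simplification of the paper's model, fit here to simulated data; every number below is invented.

```python
import numpy as np
from math import lgamma

_lgamma = np.vectorize(lgamma)

def _pois_pmf(u, lam):
    # Poisson pmf computed via logs for numerical stability.
    return np.exp(u * np.log(lam) - lam - _lgamma(u + 1.0))

def em_two_class(units, died, n_iter=200):
    """Two-class EM: units ~ Poisson(lam_k), death ~ Bernoulli(p_k).
    Returns (responsibilities, mixing weight, Poisson means, death probs)."""
    units = np.asarray(units, dtype=float)
    died = np.asarray(died, dtype=float)
    hi = units > np.median(units)              # crude initialization
    pi = hi.mean()
    lam = np.array([units[~hi].mean(), units[hi].mean()])
    p = np.clip([died[~hi].mean(), died[hi].mean()], 1e-3, 1 - 1e-3)
    for _ in range(n_iter):
        # E-step: per-class joint likelihood, then posterior responsibility.
        lik = np.stack([
            _pois_pmf(units, lam[k]) * p[k] ** died * (1 - p[k]) ** (1 - died)
            for k in (0, 1)
        ])
        w = np.array([1 - pi, pi])[:, None] * lik
        r = w[1] / w.sum(axis=0)               # P(class 1 | data)
        # M-step: weighted maximum-likelihood updates.
        pi = r.mean()
        lam = np.array([np.average(units, weights=1 - r),
                        np.average(units, weights=r)])
        p = np.clip([np.average(died, weights=1 - r),
                     np.average(died, weights=r)], 1e-3, 1 - 1e-3)
    return r, pi, lam, p

# Simulated cohort (all parameters invented): 30% severely hemorrhaging,
# with higher RBC use and higher 24-h mortality.
rng = np.random.default_rng(42)
z = rng.random(2000) < 0.30
units = rng.poisson(np.where(z, 15.0, 4.0))
died = (rng.random(2000) < np.where(z, 0.30, 0.02)).astype(int)

r, pi, lam, p = em_two_class(units, died)
sh = r > 0.5                                   # posterior classification
print(f"mixing weight ~{pi:.2f}, class means ~{np.round(lam, 1)}, "
      f"agreement with simulated truth {np.mean(sh == z):.1%}")
```

Labeling patients SH when their posterior probability exceeds 0.5 parallels the paper's posterior-probability classification.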
15
Estimating the ratio of multivariate recurrent event rates with application to a blood transfusion study. Stat Methods Med Res 2015; 26:1969-1981. PMID: 26160825. DOI: 10.1177/0962280215593974.
Abstract
In comparative effectiveness studies of multicomponent, sequential interventions like blood product transfusion (plasma, platelets, red blood cells) for trauma and critical care patients, the timing and dynamics of treatment relative to the fragility of a patient's condition is often overlooked and underappreciated. While many hospitals have established massive transfusion protocols to ensure that physiologically optimal combinations of blood products are rapidly available, the period of time required to achieve a specified massive transfusion standard (e.g. a 1:1 or 1:2 ratio of plasma or platelets:red blood cells) has been ignored. To account for the time-varying characteristics of transfusions, we use semiparametric rate models for multivariate recurrent events to estimate blood product ratios. We use latent variables to account for multiple sources of informative censoring (early surgical or endovascular hemorrhage control procedures or death). The major advantage is that the distributions of latent variables and the dependence structure between the multivariate recurrent events and informative censoring need not be specified. Thus, our approach is robust to complex model assumptions. We establish asymptotic properties and evaluate finite sample performance through simulations, and apply the method to data from the PRospective Observational Multicenter Major Trauma Transfusion study.
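The quantity the authors argue is overlooked, the time required to reach a transfusion standard, can be made concrete with a small helper (the function, event encoding, and timings are all hypothetical):

```python
def time_to_ratio(events, target=0.5):
    """Return the first elapsed minute at which the cumulative
    plasma:RBC unit ratio reaches `target` (0.5 encodes a 1:2 standard),
    or None if the standard is never met during the record."""
    plasma = rbc = 0
    for minute, product in events:          # events sorted by time
        if product == "plasma":
            plasma += 1
        else:
            rbc += 1
        if rbc and plasma / rbc >= target:
            return minute
    return None

# One hypothetical patient's unit-by-unit transfusion record.
events = [(5, "rbc"), (12, "rbc"), (20, "plasma"), (28, "rbc"),
          (35, "plasma"), (41, "rbc"), (55, "plasma")]
print(time_to_ratio(events, target=0.5))    # first reaches 1:2 at minute 20
print(time_to_ratio(events, target=1.0))    # 1:1 never reached -> None
```

This returns the first crossing time only; the paper's rate-model approach instead treats each product's transfusions as a recurrent event process, which handles ratios that drift back below the standard and censoring by surgery or death.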
16
Alternative end points for trauma studies: A survey of academic trauma surgeons. Surgery 2015; 158:1291-1296. PMID: 25958063. DOI: 10.1016/j.surg.2015.03.030.
Abstract
INTRODUCTION The changing epidemiology of trauma makes traditional end points like 30-day mortality less than ideal. Many alternative end points have been suggested; however, they are not yet accepted by the trauma community or regulatory bodies. This study characterizes opinions about the adequacy of accepted end points for trauma studies and the appropriateness of several novel end points. METHODS An electronic survey was administered to all members of the American Association for the Surgery of Trauma. Questions involved demographics, research experience, appropriateness of proposed study end points, and the role of nontraditional, surrogate, and composite end points. RESULTS The response rate was 16% (141 of 873), with 74% of respondents practicing at Level 1 trauma centers. The respondents were very experienced: 81% reported >10 years of practice at the attending level, and 87% were actively involved in research. The majority of respondents rated the following end points favorably: 24-hour survival, 30-day survival, and time to control of acute hemorrhage, with approval rates of 82%, 78%, and 76%, respectively. Six-hour survival, intensive care unit-free survival, and days free of multiorgan failure were rated as appropriate or very appropriate less than 66% of the time. Only 45% of respondents judged the currently used trauma end points to be appropriate. More than 80% of respondents disagreed or strongly disagreed that there was no role for surrogate or composite end points in trauma resuscitation research. CONCLUSION There is strong interest in finding efficient end points for trauma research that are both specific and reflective of the changing epidemiology of trauma death. The alternative end points of 24-hour survival and time to control of acute hemorrhage had approval rates similar to that of 30-day mortality.
|
17
|
Collider bias in trauma comparative effectiveness research: the stratification blues for systematic reviews. Injury 2015; 46:775-80. [PMID: 25766096 PMCID: PMC4402274 DOI: 10.1016/j.injury.2015.01.043] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2014] [Revised: 01/02/2015] [Accepted: 01/26/2015] [Indexed: 02/02/2023]
Abstract
BACKGROUND Collider bias, or stratifying data by a covariate that is a consequence rather than a cause (confounder) of treatment and outcome, plagues randomised and observational trauma research. Of the seven trials of prehospital hypertonic saline in dextran (HSD) that have been evaluated in systematic reviews, none found an overall between-group difference in survival, but four reported significant subgroup effects. We hypothesised that an avoidable type of collider bias, often introduced inadvertently into trauma comparative effectiveness research, could explain the incongruous findings. METHODS The two most recent HSD trials, a single-site pilot and a multi-site pivotal study, provided data for a secondary analysis to more closely examine the potential for collider bias. The two trials had followed the a priori statistical analysis plan to subgroup patients by a post-randomisation covariate and well-established surrogate for bleeding severity, massive transfusion (MT), defined as ≥10 units of red blood cells within 24h of admission. Despite favourable HSD effects in the MT subgroup, opposite effects in the non-transfused subgroup halted the pivotal trial early. In addition to analysing the data from the two trials, we constructed causal diagrams and performed a meta-analysis of the results from all seven trials to assess the extent to which collider bias could explain null overall effects with subgroup heterogeneity. RESULTS As in previous trials, HSD induced significantly greater increases in systolic blood pressure (SBP) from prehospital to admission than control crystalloid (p=0.003). Proportionately more HSD than control decedents accrued in the non-transfused subgroup, but with paradoxically longer survival. Despite different study populations and a span of over 20 years across the seven trials, the reported mortality effects were consistently null, summary RR=0.99 (p=0.864, homogeneity p=0.709).
CONCLUSIONS HSD delayed blood transfusion by modifying standard triggers like SBP with no detectable effect on survival. The reported heterogeneous HSD effects in subgroups can be explained by collider bias that trauma researchers can avoid by improved covariate selection and data capture strategies.
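The collider mechanism this abstract describes can be illustrated with a short simulation (a minimal sketch with invented data, not the trial data): when a randomly assigned treatment and an outcome that it does not affect both cause a third, post-treatment variable, stratifying on that variable manufactures a spurious within-stratum association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Treatment assigned at random; outcome generated independently of treatment,
# so the true treatment effect on the outcome is exactly zero.
t = rng.integers(0, 2, n).astype(float)
y = rng.normal(size=n)

# Collider: a post-treatment covariate caused by BOTH treatment and outcome
# (analogous to massive transfusion, driven by bleeding severity and by a
# treatment that modifies transfusion triggers). Coefficients are arbitrary.
c = (0.8 * t + 0.8 * y + rng.normal(size=n)) > 0.5

overall = np.corrcoef(t, y)[0, 1]        # near zero: no true effect
stratum = np.corrcoef(t[c], y[c])[0, 1]  # spurious negative association

print(f"overall r = {overall:.3f}, within-collider-stratum r = {stratum:.3f}")
```

Conditioning on the collider makes treated subjects in the stratum look worse on the outcome even though treatment does nothing, which is the pattern the authors invoke to explain the paradoxical subgroup effects.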
|
18
|
Cellular microparticle and thrombogram phenotypes in the Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study: correlation with coagulopathy. Thromb Res 2014; 134:652-8. [PMID: 25086657 DOI: 10.1016/j.thromres.2014.07.023] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2014] [Revised: 06/26/2014] [Accepted: 07/08/2014] [Indexed: 01/23/2023]
Abstract
BACKGROUND Trauma-induced coagulopathy following severe injury is associated with increased bleeding and mortality. Injury may result in alteration of cellular phenotypes and release of cell-derived microparticles (MP). Circulating MPs are procoagulant and support thrombin generation (TG) and clotting. We evaluated MP and TG phenotypes in severely injured patients at admission, in relation to coagulopathy and bleeding. METHODS As part of the Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study, research blood samples were obtained from 180 trauma patients requiring transfusions at 5 participating centers. Twenty-five healthy controls and 40 minimally injured patients were analyzed for comparison. The laboratory criterion for coagulopathy was an activated partial thromboplastin time (APTT) ≥35 sec. Samples were analyzed by Calibrated Automated Thrombogram to assess TG, and by flow cytometry for MP phenotypes [platelet (PMP), erythrocyte (RMP), leukocyte (LMP), endothelial (EMP), tissue factor (TFMP), and Annexin V positive (AVMP)]. RESULTS Overall, 21.7% of patients were coagulopathic, with a median (IQR) APTT of 44 sec (37, 53) and an Injury Severity Score of 26 (17, 35). Compared to controls, patients had elevated EMP, RMP, LMP, and TFMP (all p<0.001) and enhanced TG (p<0.0001). However, coagulopathic PROMMTT patients had significantly lower PMP, TFMP, and TG, more substantial bleeding, and higher mortality compared with non-coagulopathic patients (all p<0.001). CONCLUSIONS Cellular activation and enhanced TG are predominant after trauma and independent of injury severity. Coagulopathy was associated with a lower thrombin peak and rate compared with non-coagulopathic patients, while lower levels of TF-bearing PMPs were associated with substantial bleeding.
|
19
|
Abstract
Objective. Earlier use of plasma and red blood cells (RBCs) has been associated with improved survival in trauma patients with substantial hemorrhage. We hypothesized that prehospital transfusion (PHT) of thawed plasma and/or RBCs would result in improved patient coagulation status on admission and improved survival. Methods. Adult trauma patient records were reviewed for patient demographics, shock, coagulopathy, outcomes, and blood product utilization from September 2011 to April 2013. Patients arrived by ground or by one of two helicopter services. All patients transfused with blood products (either pre- or in-hospital) were included in the study. One helicopter system (LifeFlight, LF) carried thawed plasma and RBCs, while the other air (OA) and ground transport systems used only crystalloid resuscitation. Patients receiving PHT were compared with all other patients meeting the study's entry criteria. All comparisons were adjusted in multilevel regression models. Results. A total of 8,536 adult trauma patients were admitted during the 20-month study period, of whom 1,677 met inclusion criteria. They represented the most severely injured patients (ISS = 24 and mortality = 26%). There were 792 patients transported by ground, 716 by LF, and 169 by OA. Of the LF patients, 137 (19%) received prehospital transfusion. There were 942 units (244 RBCs and 698 plasma) placed on LF helicopters, with 1.9% wastage. PHT was associated with improved acid-base status on hospital admission, decreased use of blood products over 24 hours, a reduction in the risk of death in the sickest patients over the first 6 hours after admission, and negligible blood product wastage. In this small single-center pilot study, there were no differences in 24-hour (odds ratio 0.57, p = 0.117) or 30-day mortality (odds ratio 0.71, p = 0.441) between LF and OA. Conclusions.
Prehospital plasma and RBC transfusion was associated with improved early outcomes and negligible blood product wastage, but not with an overall survival advantage. Consistent with data published from the ongoing war, improved early outcomes are associated with placing blood products prehospital, allowing earlier infusion of life-saving products to critically injured patients.
|
20
|
Factorial validity and invariance of four psychosocial constructs of colorectal cancer screening: does screening experience matter? Cancer Epidemiol Biomarkers Prev 2013; 22:2295-302. [PMID: 24057575 DOI: 10.1158/1055-9965.epi-13-0565] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND Few studies have examined the psychometric properties and invariance of scales measuring constructs relevant to colorectal cancer screening (CRCS). We sought to: (i) evaluate the factorial validity of four core constructs associated with CRCS (benefits, barriers, self-efficacy, and optimism); and (ii) examine measurement invariance by screening status (currently screened, overdue, never screened). METHODS We used baseline survey data from a longitudinal behavioral intervention trial to increase CRCS among U.S. veterans. Respondents were classified as currently screened (n = 3,498), overdue (n = 418), and never screened (n = 1,277). The measurement model was developed using a random half of the sample and then validated with the second half of the sample and the full baseline sample (n = 5,193). Single- and multi-group confirmatory factor analyses were used to examine measurement invariance by screening status. RESULTS The four-factor measurement model demonstrated good fit. Factor loadings, item intercepts, and residual item variances and covariances were invariant when comparing participants never screened and overdue for CRCS, indicating strict measurement invariance. All factor loadings were invariant among the currently screened and overdue groups. Only the benefits scale was invariant across current screeners and never screeners. Non-invariant items were primarily from the barriers scale. CONCLUSION Our findings provide additional support for the construct validity of scales measuring CRCS benefits, barriers, self-efficacy, and optimism. A greater understanding of the differences between current and never screeners may improve measurement invariance. IMPACT Measures of benefits, barriers, self-efficacy, and optimism may be used to specify intervention targets and effectively assess change pre- and post-intervention across screening groups.
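Nested invariance models like those compared in this study are conventionally judged with a chi-square difference (likelihood-ratio) test between a constrained and a less constrained model. A minimal sketch, using hypothetical fit statistics (the abstract does not report these values):

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_constrained, df_constrained, chisq_free, df_free):
    """Chi-square difference test for nested CFA models: a non-significant
    difference supports holding the extra constraints (e.g. equal loadings)."""
    d_chi = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chi, d_df, chi2.sf(d_chi, d_df)

# Illustrative only: invariant (constrained) vs configural (free) model fit.
d_chi, d_df, p = chisq_diff_test(268.4, 110, 255.9, 104)
print(f"delta-chi2 = {d_chi:.1f} on {d_df} df, p = {p:.3f}")
```

Here a small p-value would argue against the constrained (invariant) model, mirroring how the non-invariant barrier items were flagged.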
|
21
|
Prostate cancer incidence in U.S. Air Force aviators compared with non-aviators. ACTA ACUST UNITED AC 2011; 82:1067-70. [PMID: 22097644 DOI: 10.3357/asem.3090.2011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
INTRODUCTION Several studies investigating whether prostate cancer incidence is elevated in aviators, both in the civilian and military sectors, have yielded inconsistent findings. Most investigations have compared aviators to the general population. Instead, our study compared prostate cancer incidence rates among officer aviators and non-aviators in the U.S. Air Force (USAF) to reduce confounding by socioeconomic status and frequency of medical exams. METHODS This retrospective analysis ascertained prostate cancer cases using the Automated Cancer Tumor Registry of the Department of Defense linked to personnel records from the USAF Personnel Center to identify aviators and non-aviators. Survival analysis using the Cox proportional hazards model allowed comparison of prostate cancer incidence rates in USAF aviators and non-aviators. RESULTS After adjustment for age and race, the hazard ratio for prostate cancer incidence comparing aviators with non-aviators was 1.15 (95% confidence interval, 0.85-1.44). Neither prostate cancer incidence nor time to diagnosis differed significantly between the two groups. CONCLUSION Our study compared prostate cancer rates in aviators with a reference group of non-aviators similar in socioeconomic level and frequency of exams. When compared to this internal reference group, the risk of prostate cancer in USAF officer aviators appeared similar, with no significant excess.
|
22
|
Evaluation metrics for biostatistical and epidemiological collaborations. Stat Med 2011; 30:2767-77. [PMID: 21284015 PMCID: PMC3139813 DOI: 10.1002/sim.4184] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2010] [Accepted: 12/09/2010] [Indexed: 11/07/2022]
Abstract
Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research.
|
23
|
A novel Bayesian algorithm to predict massive transfusion in patients receiving trauma laparotomy. J Am Coll Surg 2011. [DOI: 10.1016/j.jamcollsurg.2011.06.109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
24
|
A CONTINUOUS-TIME MARKOV CHAIN APPROACH ANALYZING THE STAGES OF CHANGE CONSTRUCT FROM A HEALTH PROMOTION INTERVENTION. JP JOURNAL OF BIOSTATISTICS 2010; 4:213-226. [PMID: 23504410 PMCID: PMC3595564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
The Transtheoretical Model of behavior change is often used in longitudinal research of health-related outcomes. This model includes a construct called stage of change, which is a hypothesized concept of progression for individuals trying to modify their behavior. Project HOME (Healthy Outlook on the Mammography Experience), a population-based randomized group intervention trial, sought to identify factors associated with subject changes in stage, from precontemplation to contemplation and from contemplation to action. The aims of this paper are to extend Li and Chan's Markov model approach to handle multiple covariates that include both continuous and binary variables. An empirical study was conducted to evaluate the accuracy of the estimators. The model was then applied to the Project HOME data. Specifically, we present a continuous-time Markov chain approach to examine covariates and their effect on the dynamics of the changes in stage. This model can be used by researchers to more fully describe transitions in data.
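The continuous-time Markov chain machinery behind such stage-transition models can be sketched as follows: given a generator matrix Q of instantaneous transition rates, the stage-transition probabilities over time t are P(t) = expm(Qt). The rates below are purely illustrative, not estimated from Project HOME.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator (Q) matrix for three stages of change:
# precontemplation -> contemplation -> action. Off-diagonal entries are
# transition rates; each row sums to zero, as a CTMC generator requires.
Q = np.array([
    [-0.30,  0.30,  0.00],   # precontemplation
    [ 0.10, -0.35,  0.25],   # contemplation
    [ 0.00,  0.05, -0.05],   # action
])

# Transition probability matrix over t = 2 time units: P(t) = expm(Q * t).
P = expm(Q * 2.0)
print(P.round(3))
```

Covariate effects, as in the extended Li and Chan approach, would enter by letting the rates in Q depend on subject-level predictors; the matrix exponential step is unchanged.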
|
25
|
A pilot study of symptoms of neurotoxicity and injury among adolescent farmworkers in Starr County, Texas. INTERNATIONAL JOURNAL OF OCCUPATIONAL AND ENVIRONMENTAL HEALTH 2010; 16:138-144. [PMID: 20465058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Little is known regarding the relationship between neurotoxicity symptoms and injury, particularly among adolescent farmworkers. This pilot study utilized logistic regression to analyze injury prevalence in relation to self-reported symptoms of neurotoxicity among adolescent farmworkers along the US-Mexico border in Texas. Respondents reporting at least five symptoms had 8.75 (95% CI, 1.89-40.54) times the prevalence of injury compared with those reporting zero or one symptom. Significant associations were observed for six items: trouble remembering things, family noticing memory loss, making notes, irritated for no reason, heart pounding, and tingling. This pilot study suggests a relationship between symptoms of neurotoxicity and injury among adolescent farmworkers, supporting the need for more rigorous investigations.
|
26
|
A Pilot Study of Symptoms of Neurotoxicity and Injury among Adolescent Farmworkers in Starr County, Texas. INTERNATIONAL JOURNAL OF OCCUPATIONAL AND ENVIRONMENTAL HEALTH 2010. [DOI: 10.1179/oeh.2010.16.2.132] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
27
|
Severe back pain among farmworker high school students from Starr County, Texas: baseline results. Ann Epidemiol 2006; 17:132-41. [PMID: 17027295 DOI: 10.1016/j.annepidem.2006.06.011] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2006] [Revised: 06/14/2006] [Accepted: 06/21/2006] [Indexed: 10/24/2022]
Abstract
PURPOSE This cohort study is among the first to estimate the prevalence of and examine potential risk factors for severe back pain (resulting in medical care, 4+ hours of time lost, or pain lasting 1+ weeks) among adolescent farmworkers. These youth often perform tasks requiring bent/stooped postures and heavy lifting. METHODS Of 2536 students who participated (response rates across the three public high schools, 61.2% to 83.9%), 410 students were farmworkers (largely Hispanic and migrant). Students completed a self-administered Web-based survey including farm work/nonfarm work and back-pain items relating to a 9-month period. RESULTS The prevalence of severe back pain was 15.7% among farmworkers and 12.4% among nonworkers. The prevalence increased to 19.1% among farmworkers (n = 131) who also did nonfarm work. Multiple logistic regression among farmworkers showed significantly increased adjusted odds ratios for severe back pain for female sex (4.59); prior accident/back injury (9.04); feeling tense, stressed, or anxious sometimes/often (4.11); lifting/carrying heavy objects not at work (2.98); current tobacco use (2.79); 6+ years involved in migrant farm work (5.02); working with/around knives (3.87); and working on corn crops (3.40). CONCLUSIONS Areas for further research include ergonomic exposure assessments and examining the effects of doing farm work and nonfarm work simultaneously.
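Adjusted odds ratios like those reported here come from exponentiating logistic-regression coefficients, with a Wald confidence interval built on the coefficient scale. A minimal sketch; the standard error below is hypothetical, chosen only to illustrate the transformation:

```python
import numpy as np

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logistic-regression coefficient.

    The CI is computed on the log-odds scale (beta +/- z*se) and then
    exponentiated, which is why OR intervals are asymmetric around the OR.
    """
    return np.exp(beta), np.exp(beta - z * se), np.exp(beta + z * se)

# Illustrative: a coefficient implying OR = 4.59 (as reported for female sex),
# paired with an invented standard error of 0.45.
or_, lo, hi = odds_ratio_ci(np.log(4.59), 0.45)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The asymmetry of the resulting interval (the upper limit sits farther from the OR than the lower limit) is expected whenever a symmetric interval is exponentiated.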
|
28
|
A Cohort Study of Injuries in Migrant Farm Worker Families in South Texas. Ann Epidemiol 2006; 16:313-20. [PMID: 15994097 DOI: 10.1016/j.annepidem.2005.04.004] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2005] [Revised: 03/21/2005] [Accepted: 04/08/2005] [Indexed: 11/19/2022]
Abstract
PURPOSE This cohort study estimated the frequency of and risk factors for work injuries among migrant farmworker families over a two-year period. METHODS The cohort consisted of 267 families. Bilingual interviewers asked mothers to respond for their families, soliciting demographic, psychosocial, employment, and work-related injury information. Cox regression was used to examine risk factors for first injury events. RESULTS Of the 267 families, nearly 60% migrated, and 96% of these completed the follow-up interviews. These families represented about 310 individuals each year who had participated in farm work on average 6 days a week, 10 hours a day, for 2.7 months in the past year. Twenty-five work-related injuries were reported, an overall rate of 12.5/100 FTE (95% CI, 8.6-19.0). Working for a contractor increased the hazard ratio, whereas use of car seat belts and working for more than one employer during the season decreased it. CONCLUSIONS When person-time at risk for injuries is taken into account, the reported injuries are substantial. Because the injuries were quite diverse, specific interventions may have to focus on improved working conditions (physical and economic), ergonomic modifications, and enhanced enforcement of existing regulations.
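Incidence rates per 100 FTE like the one above are event counts divided by person-time, with an exact Poisson confidence interval obtainable from chi-square quantiles. A minimal sketch assuming 200 FTE of follow-up (an invented figure chosen only so the point estimate matches the reported 12.5/100 FTE; the study's own CI will differ because its person-time and method differ):

```python
from scipy.stats import chi2

def poisson_rate_ci(events, person_time, level=0.95):
    """Incidence rate per 100 units of person-time with an exact Poisson CI
    (Garwood-style limits via the chi-square/Poisson relationship)."""
    a = 1 - level
    lo = chi2.ppf(a / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - a / 2, 2 * (events + 1)) / 2
    scale = 100 / person_time
    return events * scale, lo * scale, hi * scale

# 25 injuries over an assumed 200 FTE of person-time (illustrative only).
rate, lo, hi = poisson_rate_ci(25, 200)
print(f"rate = {rate:.1f}/100 FTE (95% CI {lo:.1f}-{hi:.1f})")
```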
|