ORAL PRESENTATIONS:
Control Number: 3157
Title: Development Of An Emergency Department Trigger Tool Using A Systematic Search And Modified Delphi Process
Topic: Health Services Research
Author Block: Richard T. Griffey1, Ryan M. Schneider1, Lee M. Adler2, Roberta Capp3, Christopher R. Carpenter1, Brenna M. Farmer4, Kathryn Y. Groner5, Sheridan Hodgkins3, Craig A. McCammon6, Jonathan T. Powell5, Jonathan E. Sather7, Jeremiah D. Schuur8, Marc J. Shapiro7, Brian R. Sharp9, Arjun K. Venkatesh7, Marie C. Vrablik10, Jennifer A. Wiler3. 1Washington University School of Medicine, St. Louis, MO; 2University of Central Florida, Orlando, FL; 3University of Colorado School of Medicine, Denver, CO; 4Weill Cornell School of Medicine, New York, NY; 5Christiana Care Health System, Wilmington, DE; 6Barnes-Jewish Hospital, St. Louis, MO; 7Yale University School of Medicine, New Haven, CT; 8Harvard Medical School, Boston, MA; 9University of Wisconsin School of Medicine and Public Health, Madison, WI; 10University of Washington School of Medicine, Seattle, WA
Abstract:
Background: Typical ED quality and safety surveillance consists of a review of cases meeting blunt criteria (e.g., deaths within 24 hours, returns within 72 hours). This approach is low-yield for identifying adverse events (AEs), relying on notoriously poor indicators of quality. A robust, efficient, and reliable method is needed to detect AEs, direct resources to high-risk processes, and identify changes in AE rates over time. Trigger tools, developed for a number of clinical settings, outperform traditional methods for detecting harm. These consist of a first-level rapid review of a random sample of records for any of 40-50 ‘triggers’ and, if a trigger is present, a detailed review for AEs. A second-level physician review is performed to confirm AEs when detected.
Objectives: To develop a consensus-derived ED trigger tool.
Methods: A multidisciplinary group of experts in safety, pharmacy, nursing, critical care, toxicology, geriatrics, and infectious disease from 10 geographically diverse academic EDs conducted a modified Delphi process (IRB exempt). This was done in 4 stages over 4 months: 1) a systematic literature search and review, using standard key words and databases and >35% independent oversampling for inclusion; 2) solicitation of empiric triggers from participants; 3) web-based survey scoring of triggers on Face Validity and Utility using 3-level scales and Fidelity (sensitivity and specificity) using a ‘Goldilocks’ scale, allowing participants to flag triggers for discussion; and 4) an in-person consensus meeting (Fig. 1).
Results: In-depth review was performed for 94 of the 804 unique manuscripts returned by our search (inter-rater reliability; k=0.80), yielding 56 triggers. Participants submitted 58 triggers, totaling 114 candidate triggers for scoring. Mean scores for Face Validity, Utility and Fidelity ranged respectively: 1.0-2.81; 1.29-2.75; and 1.12-2.31. At the consensus meeting, we reviewed the 50 top-ranked triggers (based on summed scores for Face Validity and Utility) and 21 triggers flagged by >2 participants. There was consensus for inclusion for 41 triggers; 5 of 7 triggers requiring subsequent voting were adopted, for a total of 46 final triggers (Table 1).
Conclusion: We identified 46 consensus-derived triggers for the detection of AEs among ED patients. This ED trigger tool requires pilot testing to quantify individual and collective performance.
Control Number: 3162
Title: Telemedicine Provides Non-Inferior Research Informed Consent for Remote Enrollment in an Emergency Department-Based Clinical Trial
Topic: Research Design/Methodology/Statistics
Author Block: Morgan Bobb, Paul Van Heukelom, Brett Faine, Azeemuddin Ahmed, Jeffrey Messerly, Gregory Bell, Karisa Harland, Christian Simon, Nicholas M. Mohr. University of Iowa, Iowa City, IA
Abstract:
Background: Telemedicine through audio-visual conferencing can pair emergency physicians in a tertiary medical center with rural emergency departments (EDs). Telemedicine networks are beginning to provide an avenue for rural health research, but using telemedicine to conduct research informed consent is unproven.
Objectives: To test the hypothesis that patient comprehension of telemedicine-enabled research informed consent is not inferior to standard face-to-face research informed consent.
Methods: A prospective, open-label randomized controlled trial was performed in a 65,000-visit Midwestern academic ED to test the effectiveness of a single dose of oral chlorhexidine gluconate 0.12% in preventing hospital-acquired pneumonia among adult patients with expected hospital admission. Prior to informed consent, potential participants were randomized in a 1:1 allocation ratio to standard face-to-face consent vs. consent by audio-visual telemedicine. Telemedicine was provided using a commercially available interface (REACH platform, Vidyo Inc., Hackensack, NJ). Comprehension of research consent (primary outcome) was measured using the modified Quality of Informed Consent (QuIC), a validated tool for measuring research informed consent comprehension. Sample size was estimated to require 100 completed surveys using a non-inferiority design with a 5-point non-inferiority margin (α = 0.05, power = 80%). Consent rate was a secondary outcome. Statistical comparisons were conducted with the t-test, Mann-Whitney U test, and chi-squared test, and statistical significance was defined by a two-tailed α < 0.05.
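The non-inferiority logic above can be sketched from summary statistics alone. This is an illustrative reconstruction, not the authors' analysis code: the group split (67 telemedicine, 34 face-to-face out of 101 surveys) is inferred from the abstract, and the normal-approximation confidence interval is an assumption.

```python
import math

def noninferiority_ci(mean_new, sd_new, n_new, mean_std, sd_std, n_std,
                      margin=5.0, z=1.96):
    """Two-sided 95% CI for the mean difference (new - standard).
    Non-inferiority is supported when the lower CI bound lies
    above -margin."""
    diff = mean_new - mean_std
    se = math.sqrt(sd_new ** 2 / n_new + sd_std ** 2 / n_std)
    lo, hi = diff - z * se, diff + z * se
    return diff, (lo, hi), lo > -margin

# Reported QuIC scores: telemedicine 74.4 +/- 8.1 (n = 67),
# face-to-face 74.4 +/- 6.9 (n assumed to be the remaining 34 surveys)
diff, ci, noninferior = noninferiority_ci(74.4, 8.1, 67, 74.4, 6.9, 34)
```

With the reported values the two-sided 95% CI for the difference is roughly (-3.0, 3.0); because the lower bound clears the -5-point margin, non-inferiority is supported.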
Results: Patients (N=131) were randomized, and 101 QuIC surveys completed (n = 67, telemedicine). Comprehension of research informed consent using telemedicine was not inferior to face-to-face consent (QuIC scores 74.4 ± 8.1 vs. 74.4 ± 6.9 on a 100-point scale, p = 0.999). Subjective consent understanding (p=0.194) and consent rates (56% vs. 69%, p = 0.142) were similar.
Conclusion: Telemedicine is non-inferior to face-to-face consent for delivering research informed consent, with similar research comprehension and patient-reported understanding. This study will inform design of future telemedicine-enabled clinical trials.
Control Number: 3234
Title: Interruptions in the Emergency Department and their Impact on Emergency Physicians
Topic: Health Services Research
Author Block: Hunter Hawthorne, Tara Cohen, Wesley Cammon, M. Fernanda Bellolio, Michon Dohlman, David Nestler, Thomas Hellmich, Erik Hess, Mustafa Sir, Susan Hallbeck, Kalyan Pasupathy, Renaldo Blocker. Mayo Clinic, Rochester, MN
Abstract:
Background: The Emergency Department (ED) is a dynamic environment characterized by unpredictable workloads with intermittent time-critical activities, high uncertainty, and concurrent management of multiple patients. While interaction with other staff members is necessary for patient care, these interactions, among others, may interrupt physicians from their current task and lead to unfavorable outcomes.
Objectives: To identify the impact of interruptions on emergency physicians’ (EPs) shift and to capture their perceived workload in a large tertiary care academic center.
Methods: This is an ongoing prospective direct-observation study. EPs were observed during their clinical shifts (9 hours) over a one-month period. Healthcare systems engineering researchers collected data on shift time and, for each interruption, its type, duration, location, priority level, and impact on the current task. Additionally, at mid-shift and end-shift, EPs completed a modified NASA-TLX survey to capture their perceived workload. Data were analyzed using descriptive statistics and MANOVAs.
Results: Preliminary results revealed 743 interruptions across the 10 shifts observed. EPs most frequently encountered the following types of interruptions: face-to-face (FTF) nurse (32%), FTF doctor (29%), pager (13%), and environmental (12%). Most interruptions caused a break-in-task (54%), while only 4% caused a complete end to the current task.
On average, EPs experienced roughly 9 interruptions every hour, with each interruption lasting about 30 seconds (M = 29.12 s, SD = 32.24).
EPs experienced the same types of interruptions during different shift times, but MANOVAs indicated significant differences between the shift times and interruption duration (p=0.008), location (p=0.025), priority level (p=0.046), and the impact to current task (p=0.0006). Additionally, the average NASA-TLX scores showed an increase in all six subscales, comparing mid-shift to end-shift.
Conclusion: Findings suggest that most interruptions that EPs encounter were face-to-face interactions, with the majority causing a break in their current task. EPs believed their workload increases from mid-shift to end-shift. This ongoing study will examine if there’s a correlation between interruptions and workload.
Control Number: 3179
Title: The Impact Of Self-interest On Ed Patients’ Triage Decisions And Perception Of The Triage Process
Topic: Ethics
Author Block: Natasha Wheaton, Scott Pierce, Mark Graber. University of Iowa Carver College of Medicine, Iowa City, IA
Abstract:
Background: The triage process is an important tool utilized by emergency departments in prioritizing patient evaluation and maintaining the flow of patients through the department. However, triage decisions can conflict with patient autonomy and patients’ ideas of “basic fairness”. The goal of this study was to examine the effect of self-interest on theoretical patient triage decisions amongst a group of patients presenting to the ED.
Objectives: To test the hypothesis that Emergency Department patients, when asked to make a triage decision involving two theoretical cases, will be influenced by self-interest and triage themselves first if they are included in the scenario.
Methods: 264 ED patients interviewed for this study were randomized into two groups. Participants evaluated 17 different cases of two patient presentations to determine who they thought should be seen first. In one group, the scenarios were theoretical, placing the participant as a third-person observer. In the second group, the same questions were asked, but the participant was asked to see himself or herself as one of the patients being triaged. The average visual analog scale score for each case was compared between the two groups. Qualitative analysis was done on study participants’ beliefs regarding the triage process.
Results: There was no statistical difference between the two groups for 16 of the 17 cases in their decision making. There was good concordance between subjects as to which patient should be seen first for each case, regardless of which group the subject was placed in. However, case 6 showed a statistical difference between the two groups in visual analog scale scores.
Conclusion: Self-interest does not appear to play a significant role in how patients make theoretical triage decisions. In addition, both groups had generally good concordance for which patient should be seen first in the majority of cases (14 of 17). For three of the 17 cases, both groups were unable to confidently decide which patient should be seen first. Although one case did show a statistical difference between how the two study groups answered, it was not clinically significant, as the outcome would have been the same, with the same theoretical patient being seen first.
Control Number: 3205
Title: Electrolyte Abnormalities And Antiarrhythmic Use In Patients Presenting To The ED With Atrial Fibrillation
Topic: Cardiovascular – Clinical Research
Author Block: Rajat N. Moman, Shawna D. Bellew, Christine M. Lohse, Erik P. Hess, M. Fernanda Bellolio. Mayo Clinic, Rochester, MN
Abstract:
Background: Atrial fibrillation (AF) represents 0.5% of all emergency department (ED) visits. Electrolyte imbalances, particularly hypokalemia and hypomagnesemia, are a predisposing factor for AF and guidelines recommend obtaining serum electrolyte panels in all patients. The incidence of these abnormalities is unknown and the use of antiarrhythmics for cardioversion in AF is common.
Objectives: To identify the incidence of electrolyte abnormalities in ED patients diagnosed with AF and evaluate the relationship between electrolyte abnormalities and number of doses of antiarrhythmic administered.
Methods: Cross-sectional study of consecutive adult patients who presented to an academic tertiary care ED from January 2011 to March 2014 with a final diagnosis of AF. Medical records were reviewed for demographics, vital signs, electrolyte values within 24 hours of presentation, electrolyte replacement therapy (ERT), and medication types and doses administered. Continuous values and cut-offs were analyzed. Odds ratios were calculated for associations with electrolyte replacement and between electrolyte levels and the number of antiarrhythmic doses given. The Wilcoxon test was used to compare medians. Abnormal levels were defined as in the table.
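As a rough sketch of the odds-ratio calculations described in the Methods, the Woolf (log) method for a 2×2 table can be computed directly. The counts below are hypothetical, chosen only to illustrate the arithmetic; the abstract's ORs (e.g., 2.3, 95% CI 1.6-3.5) come from the actual patient-level data.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Woolf (log) odds ratio with 95% CI for a 2x2 table:
    a = exposed, outcome present;   b = exposed, outcome absent;
    c = unexposed, outcome present; d = unexposed, outcome absent."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio(40, 60, 300, 1000)
```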
Results: 1964 patients were included. Electrolyte level abnormalities are described in the table. 106 (5.4%) patients received ERT while in the ED; these patients were more likely to receive two or more doses (OR 2.3, 95% CI 1.6-3.5, p<0.0001) and more classes of antiarrhythmics (mean 2.5 +/- 1.6 vs. 1.7 +/- 1.7, p<0.0001). Patients with a lower potassium had an increased number of antiarrhythmic doses (p=0.049) and different classes of antiarrhythmics administered (p=0.041).
Conclusion: Significant electrolyte abnormalities are present in 4.4% of patients who present to the ED with AF and 5.4% of patients received ERT while in the ED. Patients with AF who require ERT in the ED are more likely to require an increased number of doses of antiarrhythmics. In particular, lower potassium levels are associated with increased medication requirements. Electrolyte deficiencies should be corrected aggressively in the ED to facilitate the management of atrial fibrillation.
Control Number: 3228
Title: Difference In The Incidence Of Adverse Events In Pediatric Procedural Sedation In The Emergency Department Between Observational And Randomized Controlled Trials
Topic: Airway/Anesthesia/Analgesia
Author Block: Patricia Barrionuevo1, Henrique A. Puls2, Ana Castaneda-Guarderas1, Waqas I. Gilani1, Jana L. Anderson1, Patricia J. Erwin1, M. Hassan Murad1, Erik P. Hess1, M. Fernanda Bellolio1. 1Mayo Clinic, Rochester, MN; 2Federal University of Health Sciences Of Porto Alegre, Porto Alegre, Brazil
Abstract:
Background: Observational studies rely on chart review and may underreport adverse events and present confounded estimates. However, they are typically larger than randomized trials and can provide more precise and robust estimates when adverse events are rare.
Objectives: We conducted a systematic review and meta-analysis to compare the incidence of adverse events during procedural sedation in children as reported by randomized controlled trials versus observational studies.
Methods: We searched multiple electronic databases including MEDLINE, EMBASE, EBSCO, CINAHL, CENTRAL, Cochrane Database of Systematic Reviews, Web of Science and Scopus without language restrictions. Randomized controlled trials and observational studies of procedural sedations in the ED were included. Meta-analysis was performed using a random-effects model and reported as incidence rate per 1,000 sedations and 95% confidence intervals (CI).
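The random-effects pooling of incidence rates per 1,000 sedations can be sketched with a DerSimonian-Laird estimator on raw proportions. This is a simplified illustration with hypothetical study counts; the published analysis may have pooled transformed proportions or used dedicated meta-analysis software.

```python
import math

def dl_pool(events, totals, z=1.96):
    """DerSimonian-Laird random-effects pooling of raw proportions,
    reported per 1,000 (a simplified sketch)."""
    p = [e / n for e, n in zip(events, totals)]
    # Binomial within-study variances (small constant guards zero counts)
    v = [pi * (1 - pi) / n + 1e-9 for pi, n in zip(p, totals)]
    w = [1 / vi for vi in v]                       # fixed-effect weights
    p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)        # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]           # random-effects weights
    p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return 1000 * p_re, (1000 * (p_re - z * se), 1000 * (p_re + z * se))

# Hypothetical (events, sedations) per study, for illustration
rate, ci = dl_pool([12, 30, 8], [400, 900, 350])
```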
Results: A total of 1,177 studies were retrieved for title and abstract screening, and 258 were selected for full-text review. Forty-two studies reporting on 13,975 procedural sedations were included. The incidence of all events was higher in randomized trials than in observational studies: hypoxia 36.6 per 1,000 (95% CI 20.6 to 52.6) vs. 10.9 (6.6 to 15.3), agitation 64.6 (32.9 to 96.2) vs. 12.2 (7.5 to 16.9), apnea 14.0 (3.3 to 24.7) vs. 6.2 (1.8 to 10.6), bag-valve-mask ventilation 12.1 (3.0 to 21.3) vs. 4.4 (1.5 to 7.2), hypotension 94.6 (19.0 to 170.2) vs. 2.2 (0 to 4.6), laryngospasm 5.9 (0 to 14.4) vs. 2.8 (0.8 to 4.8), and vomiting 99.5 (70.6 to 128.3) vs. 44.3 (33.6 to 55.0). There were no intubations among the 576 patients in RCTs; there were 4 among 8,560 patients in observational studies, 0.3 (0 to 0.7). The figure shows the outcome of hypoxia by study type.
Conclusion: Observational studies report a lower incidence of adverse events than RCTs. Meta-analysis of RCTs is more informative for decision-making, with the exception of very rare events such as intubation, which require evidence derived from observational studies.
Control Number: 3207
Title: Are Residents Satisfied With The Current ACGME Duty Hour Regulations?
Topic: Education
Author Block: Diana M. Shewmaker, Damian V. Baalmann, Benjamin J. Sandefur, Steven H. Rose, James E. Colletti. Mayo Clinic, Rochester, MN
Abstract:
Background: The Institute of Medicine released its landmark report “To Err is Human” in 1999 which highlighted medical errors as a leading cause of patient morbidity and mortality. The ACGME subsequently instituted regulations on resident duty hours, implemented in 2003 and revised in 2011, which were intended to improve patient safety and resident wellbeing. Following those revisions, few studies on resident attitudes and opinions about duty hour regulations have been published. Existing studies are subspecialty specific and conflict in their conclusions.
Objectives: We report the attitudes and opinions about the current ACGME resident duty hour regulations from a single center, multispecialty cohort of residents.
Methods: During the fall of 2014, we anonymously surveyed all residents in ACGME-accredited residency programs at Mayo Clinic in Rochester, MN. We asked 41 questions pertaining to residents’ attitudes and opinions about duty hour regulations. We analyzed descriptive statistics based upon the survey results.
Results: 736 residents representing all 24 residency programs were surveyed. The response rate was 67%, and 53% of surveys were completed in their entirety. Survey findings on resident attitudes and opinions of the ACGME duty hours are detailed below:
•78% of residents agree or strongly agree that they are satisfied with current duty hour regulations
•73% believe the regulations positively affect their overall training
•42% believe the regulations diminish clinical educational experiences
•89% believe resident fatigue contributes to adverse events
•80% believe the regulations reduce resident fatigue, 74% believe they decrease the frequency of performing clinical duties while fatigued
•37% believe the regulations decrease medical errors, 44% believe they have no effect
•41% believe handovers contribute more to medical errors than fatigue
•62% believe the regulations diminish patient familiarity and continuity of care.
Conclusion: The majority of a multidisciplinary cohort of residents reported satisfaction with the ACGME duty hour regulations, and believe they positively affect their training. Perceived negative effects of the regulations are diminished clinical experiences, more handovers of care, and decreased continuity of care and patient familiarity.
Control Number: 3212
Title: ST-segment Changes in Left Bundle Branch Block with Acute Coronary Occlusion: ST Concordance has High Specificity and Proportionally Excessive ST Discordance has High Sensitivity
Topic: Cardiovascular – Clinical Research
Author Block: Kenneth W. Dodd1, Kendra D. Elm2, Stephen W. Smith1. 1Hennepin County Medical Center, Minneapolis, MN; 2University of Minnesota Medical School, Minneapolis, MN
Abstract:
Background: Historically, the baseline ST-segment changes in left bundle branch block (LBBB) have made diagnosis of acute myocardial infarction difficult. However, the diagnosis of acute coronary occlusion (ACO; “STEMI” equivalent) may be obvious if the rule of appropriate discordance in LBBB is kept in mind: the ST-segment changes should be in the opposite direction (discordant) to the majority of the QRS complex. Concordance has been shown to be a specific marker of ACO in LBBB. The three Sgarbossa criteria include two concordance rules [concordant ST elevation (STE) ≥ 1 mm and concordant ST depression (STD) ≥ 1 mm in V1-V3] and one discordance rule [discordant STE ≥ 5 mm]. These criteria are suggested for the diagnosis of myocardial infarction in LBBB. While Sgarbossa’s concordance rules with a cutoff of 1 mm have a high specificity for ACO in LBBB, we hypothesize that even less concordance would also have a high specificity. Furthermore, we have previously shown that excessively discordant ST-segment changes must be assessed as a proportion of the preceding S- or R-wave (ST/S ratio) and that an ST/S ratio ≤ -0.25 is significantly more sensitive than Sgarbossa’s 5 mm STE cutoff (79% vs 33%, p < 0.05).
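The proportional-discordance rule described above (ST/S ratio ≤ -0.25) reduces to a simple signed-ratio check. A minimal sketch, with sign conventions assumed (ST deviation at the J-point and QRS amplitude in mm, discordance giving opposite signs):

```python
def excessive_discordance(st_j_mm, qrs_mm):
    """Return True when discordant ST deviation is proportionally
    excessive: ST/S ratio <= -0.25, i.e. the ST deviation is opposite
    in sign to, and at least 25% of, the relevant QRS amplitude."""
    if qrs_mm == 0:
        return False
    return st_j_mm / qrs_mm <= -0.25

# e.g. 5 mm discordant STE over a 15 mm deep S-wave: ratio = -0.33
print(excessive_discordance(5.0, -15.0))  # True
```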
Objectives: To compare cutoffs for ST-segment concordance as well as updated rules for diagnosis of ACO in LBBB.
Methods: In this retrospective study, the study group consisted of ED patients with LBBB and angiographically-proven complete ACO or culprit lesion and troponin I ≥ 10 ng/ml. The control group consisted of consecutive ED patients with LBBB and ischemic symptoms but no evidence of recent ACO. “NSTEMI” patients were included as controls if ACO could be excluded. Measurements included ST segment at the J-point as well as S- and R-wave amplitude to the nearest 0.5 mm. Statistics were by McNemar’s test.
Results: The study and control groups consisted of 33 and 129 patients, respectively. Both of Sgarbossa’s concordance rules (with a cutoff ≥ 0.5 mm), as well as a rule of ≥ 1 mm concordant STE or STD in any lead, all had > 90% specificity for ACO in LBBB (see Dodd Table 1).
Conclusion: In LBBB, Sgarbossa’s concordance rules with a ≥ 0.5 mm cutoff or ≥ 1 mm of concordance in any lead have a high specificity for ACO in LBBB. Proportionally excessive discordance has a high sensitivity for ACO in LBBB. Rules for diagnosis of ACO in LBBB must keep these principles in mind.
Control Number: 3235
Title: Youth-Size ATV Seat Design: Variability and Lack of Consistent Changes in Vehicles Designated for Different Ages Demonstrates Need for Evidence-Based Standardization
Topic: Disease/Injury Prevention
Author Block: Charles A. Jennissen, Claire Castaneda, Alvin Long, Gerene Denning. University of Iowa Department of Emergency Medicine, Iowa City, IA
Abstract:
Background: Carrying passengers is an independent risk factor for crash and injury on all-terrain vehicles (ATVs). Optimal seat design would allow for safe vehicle operation while decreasing the likelihood of multiple riders and use by underage operators. A previous study of adult-size ATVs found a wide variability in seat length and placement among manufacturers and between sport (mean 31.3 in.) and utility ATVs (mean 26.1 in.). Seat lengths overall ranged from 19.8-37.0 inches. Many models had seats long enough to accommodate multiple riders. There are no published studies related to youth-size ATV seat design.
Objectives: To determine the variability in seat length characteristics among youth-size ATV models (Y6 for youth ≥6 yrs, Y10 for youth ≥10 yrs, Y12 for youth ≥12 yrs, and Y14+ for youth ≥14 yrs) from major manufacturers.
Methods: Measurements of 37 models were performed using an image-based method previously validated that utilizes tools from Adobe Photoshop. Seat characteristics were compared by model age designation, manufacturer, and by ATV type (sport vs. utility).
Results: Seat lengths ranged from 20.5-30.4 inches with a mean of 24.6 inches. The difference in the seat length of the average Y6 model and the average Y14+ model was only 1.4 inches. Youth utility models (eight) had an average seat length of 25.7 inches (range 23.1-27.9 in.) which was similar to sport models for youth 10 years and older (mean 25.0 in., range 22.1-30.4 in.). Y6 sport utility models had an average seat length of 21.5 inches (range 20.5-25.8 in.). The seat front to handle grip distance ranged from 2.7-10.4 inches with a mean of 6.3 inches. The difference in this average distance between Y6 and Y14+ models was only 1 inch. Variability was noted in seat length and in seat front to handle grip distance among manufacturers for ATV models designated for the same aged youth.
Conclusion: The seat lengths of youth-size ATVs are very similar to those of adult models, and there was little difference in seat length and placement among youth models designated for various ages. It is likely that these seat lengths allow and potentially encourage the carrying of passengers. The seat front to handle grip distance was quite short for many youth models and may allow use of these vehicles by children younger than those for whom they are designated. Regulations are needed to standardize safe seat design for ATVs.
Control Number: 3233
Title: Selective Spinal Immobilization Protocol For Prehospital Providers: Effects On Practice And Patient Outcomes
Topic: EMS/Out-of-Hospital – Non-Cardiac Arrest
Author Block: Nathan Miller, Kari Harland, Joshua Stilley. The University of Iowa, Iowa City, IA
Abstract:
Background: In January 2015, a Selective Spinal Immobilization Protocol for prehospital providers was adopted for use throughout the state of Iowa. The previous protocol prescribed near-uniform application of a cervical collar and long backboard to all trauma patients with a concerning mechanism of injury. The new protocol limits immobilization criteria by adopting a combination of recent NAEMSP/ACS-COT recommendations and the NASEMSO National Model EMS Clinical Guidelines.
Objectives: To determine if a Selective Spinal Immobilization Protocol will result in a reduced rate of prehospital spinal immobilization.
Methods: We conducted a retrospective chart review of all 239 patients arriving at the University of Iowa Hospitals and Clinics by ground or air ambulance during April 2014 (pre-implementation of protocol) and April 2015 (post-implementation of protocol) who met National Trauma Data Standard Patient Inclusion Criteria. All demographics, injury severity, and visit characteristic data were collected from the Iowa Trauma Registry database. Spinal immobilization status was identified using EMS patient care records, nursing flowsheets, and clinician notes.
Results: From April 2014 to April 2015, there was a statistically significant decrease (P=0.0001) in the percentage of patients with any spinal immobilization in place upon arrival to our emergency department. There was also a significant decrease in the use of the cervical collar and backboard together as a method of spinal immobilization. Even after controlling for patient age and injury severity, the adjusted odds ratio for spinal immobilization in April 2015 was 0.38 (95% CI 0.19-0.75). After controlling for patient age, head abbreviated injury score, and whether a patient had spinal immobilization, there was no difference in the odds of spinal cord or vertebral injury (aOR = 0.71, 95% CI 0.31-1.62).
Conclusion: The data demonstrate that a selective spinal immobilization protocol allowed prehospital providers to correctly identify patients at risk for spinal injury. With the decrease in the spinal immobilization rate and no change in the odds of spinal cord or vertebral injury after implementation, the 2015 Selective Spinal Immobilization Protocol reduced the rate of inappropriate spinal immobilization.
Control Number: 3157
Title: Development Of An Emergency Department Trigger Tool Using A Systematic Search And Modified Delphi Process
Topic: Health Services Research
Author Block: Richard T. Griffey1, Ryan M. Schneider1, Lee M. Adler2, Roberta Capp3, Christopher R. Carpenter1, Brenna M. Farmer4, Kathryn Y. Groner5, Sheridan Hodgkins3, Craig A. McCammon6, Jonathan T. Powell5, Jonathan E. Sather7, Jeremiah D. Schuur8, Marc J. Shapiro7, Brian R. Sharp9, Arjun K. Venkatesh7, Marie C. Vrablik10, Jennifer A. Wiler3. 1Washington University School of Medicine, St. Louis, MO; 2University of Central Florida, Orlando, FL; 3University of Colorado School of Medicine, Denver, CO; 4Weil-Cornell School of Medicine, New York, NY; 5Christiana Care Health System, Wilmington, DE; 6Barnes-Jewish Hospital, St. Louis, MO; 7Yale University School of Medicine, New Haven, CT; 8Harvard Medical School, Boston, MA; 9University of Wisconsin School of Medicine and Public Health, Madison, WI; 10University of Washington School of Medicine, Seattle, WA
Abstract:
Background: Typical ED quality and safety surveillance consists of a review of cases meeting blunt criteria (e.g. deaths within 24 hours, returns within 72 hours, etc.). This approach is low-yield for identifying adverse events (AEs), using notoriously poor indicators of quality. A robust, efficient and reliable method is needed to detect AEs, direct resources to high-risk processes and identify changes in AE rates over time. Trigger tools, developed for a number of clinical settings, outperform traditional methods for detecting harm. These consist of a 1st-level rapid review of a random sample of records for any of 40-50 ‘triggers’ and if present, a detailed review for AEs. A 2nd-level physician review is performed to confirm AEs when detected.
Objectives: To develop a consensus-derived ED trigger tool.
Methods: A multidisciplinary group of experts in safety, pharmacy, nursing, critical care, toxicology, geriatrics, infectious disease from 10 geographically diverse academic EDs conducted a modified Delphi process (IRB exempt). This was done in 4 stages over 4 months: 1) a systematic literature search and review, using standard key words and databases and >35% independent oversampling for inclusion; 2) solicitation of empiric triggers from participants; 3) web-based survey scoring of triggers on Face Validity and Utility using 3-level scales and Fidelity (sensitivity and specificity) using a ‘Goldilocks’ scale, and allowing participants to flag triggers for discussion; and 4) a in-person consensus meeting (Fig. 1).
Results: In-depth review was performed for 94 of the 804 unique manuscripts returned by our search (inter-rater reliability; k=0.80), yielding 56 triggers. Participants submitted 58 triggers, totaling 114 candidate triggers for scoring. Mean scores for Face Validity, Utility and Fidelity ranged respectively: 1.0-2.81; 1.29-2.75; and 1.12-2.31. At the consensus meeting, we reviewed the 50 top-ranked triggers (based on summed scores for Face Validity and Utility) and 21 triggers flagged by >2 participants. There was consensus for inclusion for 41 triggers; 5 of 7 triggers requiring subsequent voting were adopted, for a total of 46 final triggers (Table 1).
Conclusion: We identified 46 consensus-derived triggers for the detection of AEs among ED patients. This ED trigger tool requires pilot testing to quantify individual and collective performance.
Control Number: 3162
Title: Telemedicine Provides Non-Inferior Research Informed Consent for Remote Enrollment in an Emergency Department-Based Clinical Trial
Topic: Research Design/Methodology/Statistics
Author Block: Morgan Bobb, Paul Van Heukelom, Brett Faine, Azeemuddin Ahmed, Jeffrey Messerly, Gregory Bell, Karisa Harland, Christian Simon, Nicholas M. Mohr. University of Iowa, Iowa City, IA
Abstract:
Background: Telemedicine through audio-visual conferencing can pair emergency physicians in a tertiary medical center with rural emergency departments (EDs). Telemedicine networks are beginning to provide an avenue for rural health research, but using telemedicine to conduct research informed consent is unproven.
Objectives: To test the hypothesis that patient comprehension of telemedicine-enabled research informed consent is not inferior to standard face-to-face research informed consent.
Methods: A prospective, open-label randomized control trial was performed in a 65,000-visit Midwestern academic ED to test effectiveness of a single dose of oral chlorhexidine gluconate 0.12% in preventing hospital-acquired pneumonia among adult patients with expected hospital admission. Prior to informed consent, potential participants were randomized in a 1:1 allocation ratio to standard face-to-face consent vs. consent by audio-visual telemedicine. Telemedicine was provided using a commercially available interface (REACH platform, Vidyo Inc., Hackensack, NJ). Comprehension of research consent (primary outcome) was measured using the modified Quality of Informed Consent (QuIC), a validated tool for measuring research informed consent comprehension. Sample size was estimated to require 100 completed surveys using a non-inferiority design with a 5 point non-inferiority margin (α = 0.05, power = 80%). Consent rate was a secondary outcome. Statistical comparisons were conducted with t-test, Mann-Whitney U test, and chi-squared test and statistical significance was defined by threshold α < 0.05 using two-tailed tests.
Results: Patients (N=131) were randomized, and 101 QuIC surveys were completed (n = 67, telemedicine). Comprehension of research informed consent using telemedicine was not inferior to face-to-face consent (QuIC scores 74.4 ± 8.1 vs. 74.4 ± 6.9 on a 100-point scale, p = 0.999). Subjective consent understanding (p = 0.194) and consent rates (56% vs. 69%, p = 0.142) were similar.
Conclusion: Telemedicine is non-inferior to face-to-face consent for delivering research informed consent, with similar research comprehension and patient-reported understanding. This study will inform design of future telemedicine-enabled clinical trials.
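The sample-size estimate in the Methods follows the standard non-inferiority calculation for comparing two means. The sketch below is illustrative only: the abstract does not report the assumed standard deviation or the sidedness of the test, so the SD of 8 QuIC points and the one-sided α are assumptions, not the authors' inputs.

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n_per_group(sd: float, margin: float,
                               alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-group non-inferiority comparison
    of means, assuming equal true means and a common SD (assumed here)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # one-sided test (assumption)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / margin ** 2
    return ceil(n)

# Hypothetical inputs: SD of 8 QuIC points, 5-point margin, 80% power
print(noninferiority_n_per_group(sd=8, margin=5))
```

Under these assumed inputs the formula yields roughly 32 participants per group; the authors' target of 100 completed surveys presumably reflects different variance assumptions or an allowance for incomplete surveys.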
Control Number: 3234
Title: Interruptions in the Emergency Department and their Impact on Emergency Physicians
Topic: Health Services Research
Author Block: Hunter Hawthorne, Tara Cohen, Wesley Cammon, M. Fernanda Bellolio, Michon Dohlman, David Nestler, Thomas Hellmich, Erik Hess, Mustafa Sir, Susan Hallbeck, Kalyan Pasupathy, Renaldo Blocker. Mayo Clinic, Rochester, MN
Abstract:
Background: The Emergency Department (ED) is a dynamic environment characterized by unpredictable workloads with intermittent time-critical activities, high uncertainty, and concurrent management of multiple patients. While interaction with other staff members is necessary for patient care, these interactions, among other events, may interrupt physicians during their current task and lead to unfavorable outcomes.
Objectives: To identify the impact of interruptions on emergency physicians’ (EPs) shifts and to capture their perceived workload in a large tertiary care academic center.
Methods: This is an ongoing prospective direct-observation study. EPs were observed during their clinical shifts (9 hours) over a one-month period. Healthcare systems engineering researchers collected data on shift time, interruption type, duration, location, priority level, and impact on the current task. Additionally, at mid-shift and end-shift, EPs completed a modified NASA-TLX survey to capture their perceived workload. Data were analyzed using descriptive statistics and MANOVAs.
Results: Preliminary results revealed 743 interruptions across the 10 shifts. EPs most frequently encountered the following types of interruptions: face-to-face (FTF) nurse (32%), FTF doctor (29%), pager (13%), and environmental (12%). Most interruptions caused a break-in-task (54%), while only 4% of interruptions caused a complete end to the current task.
On average, EPs experienced roughly 9 interruptions per hour, with each interruption lasting about 30 seconds (M = 29.12 s, SD = 32.24).
EPs experienced the same types of interruptions during different shift times, but MANOVAs indicated significant differences between shift times in interruption duration (p=0.008), location (p=0.025), priority level (p=0.046), and impact on the current task (p=0.0006). Additionally, the average NASA-TLX scores increased on all six subscales from mid-shift to end-shift.
Conclusion: Findings suggest that most interruptions that EPs encounter were face-to-face interactions, with the majority causing a break in their current task. EPs believed their workload increased from mid-shift to end-shift. This ongoing study will examine whether there is a correlation between interruptions and workload.
Control Number: 3179
Title: The Impact Of Self-interest On ED Patients’ Triage Decisions And Perception Of The Triage Process
Topic: Ethics
Author Block: Natasha Wheaton, Scott Pierce, Mark Graber. University of Iowa Carver College of Medicine, Iowa City, IA
Abstract:
Background: The triage process is an important tool utilized by emergency departments in prioritizing patient evaluation and maintaining the flow of patients through the department. However, triage decisions can conflict with patient autonomy and patients’ ideas of “basic fairness”. The goal of this study was to examine the effect of self-interest on theoretical patient triage decisions amongst a group of patients presenting to the ED.
Objectives: To test the hypothesis that Emergency Department patients, when asked to make a triage decision involving two theoretical cases, will be influenced by self-interest and triage themselves first if they are included in the scenario.
Methods: 264 ED patients were interviewed for this study and randomized into two groups. Participants evaluated 17 cases, each presenting two patients, to determine who they thought should be seen first. In one group, the scenarios were purely theoretical, placing the participant as a third-person observer. In the second group, the same questions were asked, but the participant was asked to imagine themselves as one of the two patients being triaged. The average visual analog scale score for each case was compared between the two groups. Qualitative analysis was performed on study participants’ beliefs regarding the triage process.
Results: The study findings revealed that there was no statistical difference between the two groups for 16 out of the 17 cases in their decision making. There was good concordance between subjects as to which patient should be seen first for each case regardless of which group the subject was placed in. However, case 6 showed a statistical difference between the two groups in the visual analog scale number.
Conclusion: Self-interest does not appear to play a significant role in how patients make theoretical triage decisions. In addition, both groups had generally good concordance on which patient should be seen first in the majority of cases (14 out of 17). For three of the 17 cases, both groups were unable to confidently decide which patient should be seen first. Although one case did show a statistical difference between how the two study groups answered, it was not clinically significant, as the outcome would have been the same, with the same theoretical patient being seen first.
Control Number: 3205
Title: Electrolyte Abnormalities And Antiarrhythmic Use In Patients Presenting To The ED With Atrial Fibrillation
Topic: Cardiovascular – Clinical Research
Author Block: Rajat N. Moman, Shawna D. Bellew, Christine M. Lohse, Erik P. Hess, M. Fernanda Bellolio. Mayo Clinic, Rochester, MN
Abstract:
Background: Atrial fibrillation (AF) represents 0.5% of all emergency department (ED) visits. Electrolyte imbalances, particularly hypokalemia and hypomagnesemia, are a predisposing factor for AF, and guidelines recommend obtaining serum electrolyte panels in all patients. However, the incidence of these abnormalities in ED patients with AF is unknown, while the use of antiarrhythmics for cardioversion in AF is common.
Objectives: To identify the incidence of electrolyte abnormalities in ED patients diagnosed with AF and evaluate the relationship between electrolyte abnormalities and number of doses of antiarrhythmic administered.
Methods: Cross-sectional study of consecutive adult patients who presented to an academic tertiary care ED from January 2011 to March 2014 with a final diagnosis of AF. Medical records were reviewed for demographics, vital signs, electrolyte values within 24 hours of presentation, electrolyte replacement therapy (ERT), and medication types and doses administered. Continuous values and cut-offs were analyzed. Odds ratios were calculated for associations with electrolyte replacement and for associations between electrolyte levels and the number of antiarrhythmic doses given. The Wilcoxon test was used to compare medians. Abnormal levels were defined as in the table.
Results: 1964 patients were included. Electrolyte level abnormalities are described in the table. 106 (5.4%) patients received ERT while in the ED; these patients were more likely to receive two or more doses (OR 2.3, 95% CI 1.6-3.5, p<0.0001) and more classes of antiarrhythmics (mean 2.5 +/- 1.6 vs. 1.7 +/- 1.7, p<0.0001). Patients with a lower potassium had an increased number of antiarrhythmic doses (p=0.049) and different classes of antiarrhythmics administered (p=0.041).
Conclusion: Significant electrolyte abnormalities are present in 4.4% of patients who present to the ED with AF and 5.4% of patients received ERT while in the ED. Patients with AF who require ERT in the ED are more likely to require an increased number of doses of antiarrhythmics. In particular, lower potassium levels are associated with increased medication requirements. Electrolyte deficiencies should be corrected aggressively in the ED to facilitate the management of atrial fibrillation.
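The associations reported in this abstract rest on the usual 2×2 odds-ratio computation. A minimal sketch, using hypothetical counts and a Wald log-scale confidence interval (the abstract does not state which CI method the authors used):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def odds_ratio_ci(a: int, b: int, c: int, d: int, conf: float = 0.95):
    """Odds ratio and Wald CI for a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # SE of log(OR)
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only (not the study's data)
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The Wald interval requires all four cells to be nonzero; sparse tables would need an exact method or a continuity correction.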
Control Number: 3228
Title: Difference In The Incidence Of Adverse Events In Pediatric Procedural Sedation In The Emergency Department Between Observational And Randomized Controlled Trials
Topic: Airway/Anesthesia/Analgesia
Author Block: Patricia Barrionuevo1, Henrique A. Puls2, Ana Castaneda-Guarderas1, Waqas I. Gilani1, Jana L. Anderson1, Patricia J. Erwin1, M. Hassan Murad1, Erik P. Hess1, M. Fernanda Bellolio1. 1Mayo Clinic, Rochester, MN; 2Federal University of Health Sciences Of Porto Alegre, Porto Alegre, Brazil
Abstract:
Background: Observational studies rely on chart review and may underreport adverse events and present confounded estimates. However, they are typically larger than randomized trials and can provide more precise and robust estimates when adverse events are rare.
Objectives: We conducted a systematic review and meta-analysis to compare the incidence of adverse events during procedural sedation in children as reported by randomized controlled trials versus observational studies.
Methods: We searched multiple electronic databases including MEDLINE, EMBASE, EBSCO, CINAHL, CENTRAL, Cochrane Database of Systematic Reviews, Web of Science and Scopus without language restrictions. Randomized controlled trials and observational studies of procedural sedations in the ED were included. Meta-analysis was performed using a random-effects model and reported as incidence rate per 1,000 sedations and 95% confidence intervals (CI).
Results: A total of 1,177 studies were retrieved for title and abstract screening, and 258 of them were selected for full-text review. Forty-two studies reporting on 13,975 procedural sedations were included. The incidence of all events was higher in randomized trials than in observational studies: hypoxia 36.6 per 1,000 (95% CI 20.6 to 52.6) vs. 10.9 (6.6 to 15.3), agitation 64.6 (32.9 to 96.2) vs. 12.2 (7.5 to 16.9), apnea 14.0 (3.3 to 24.7) vs. 6.2 (1.8 to 10.6), bag-valve-mask (BVM) ventilation 12.1 (3.0 to 21.3) vs. 4.4 (1.5 to 7.2), hypotension 94.6 (19.0 to 170.2) vs. 2.2 (0 to 4.6), laryngospasm 5.9 (0 to 14.4) vs. 2.8 (0.8 to 4.8), and vomiting 99.5 (70.6 to 128.3) vs. 44.3 (33.6 to 55.0). There were no intubations among the 576 patients in RCTs, and there were 4 among 8,560 in observational studies (0.3, 0 to 0.7). The figure shows the outcome of hypoxia by study type.
Conclusion: Observational studies report a lower incidence of adverse events than RCTs. Meta-analysis of RCTs is more informative for decision-making, with the exception of very rare events such as intubation, which require evidence derived from observational studies.
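The random-effects pooling of event rates described in the Methods can be sketched with the DerSimonian-Laird estimator. This is an illustrative implementation only: the authors' exact model, software, and handling of zero-event studies are not specified in the abstract, and the study counts in the example are hypothetical.

```python
from math import sqrt

def dl_pooled_rate(events_n):
    """DerSimonian-Laird random-effects pooling of raw proportions.
    events_n: list of (events, sedations) per study, two or more studies,
    each with at least one event (zero-event studies would need a
    continuity correction). Returns (rate, lo, hi) per 1,000 sedations."""
    ys, vs = [], []
    for e, n in events_n:
        p = e / n
        ys.append(p)
        vs.append(p * (1 - p) / n)                 # binomial variance of p
    w = [1 / v for v in vs]                        # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)       # between-study variance
    ws = [1 / (v + tau2) for v in vs]              # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(ws, ys)) / sum(ws)
    se = sqrt(1 / sum(ws))
    return (1000 * pooled,
            1000 * (pooled - 1.96 * se),
            1000 * (pooled + 1.96 * se))

# Hypothetical studies: (adverse events, total sedations)
rate, lo, hi = dl_pooled_rate([(5, 100), (12, 200), (2, 80)])
print(f"{rate:.1f} per 1,000 (95% CI {lo:.1f} to {hi:.1f})")
```

The pooled rate always falls within the range of the individual study rates; when between-study heterogeneity (τ²) is zero, the estimator reduces to the fixed-effect inverse-variance pool.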
Control Number: 3207
Title: Are Residents Satisfied With The Current ACGME Duty Hour Regulations?
Topic: Education
Author Block: Diana M. Shewmaker, Damian V. Baalmann, Benjamin J. Sandefur, Steven H. Rose, James E. Colletti. Mayo Clinic, Rochester, MN
Abstract:
Background: The Institute of Medicine released its landmark report “To Err is Human” in 1999 which highlighted medical errors as a leading cause of patient morbidity and mortality. The ACGME subsequently instituted regulations on resident duty hours, implemented in 2003 and revised in 2011, which were intended to improve patient safety and resident wellbeing. Following those revisions, few studies on resident attitudes and opinions about duty hour regulations have been published. Existing studies are subspecialty specific and conflict in their conclusions.
Objectives: We report the attitudes and opinions about the current ACGME resident duty hour regulations from a single center, multispecialty cohort of residents.
Methods: During the fall of 2014, we anonymously surveyed all residents in ACGME-accredited residency programs at Mayo Clinic in Rochester, MN. We asked 41 questions pertaining to residents’ attitudes and opinions about duty hour regulations. We analyzed descriptive statistics based upon the survey results.
Results: 736 residents representing all 24 residency programs were surveyed. The response rate was 67%, and 53% of surveys were completed in their entirety. Survey findings on resident attitudes and opinions of the ACGME duty hours are detailed below:
•78% of residents agree or strongly agree that they are satisfied with current duty hour regulations
•73% believe the regulations positively affect their overall training
•42% believe the regulations diminish clinical educational experiences
•89% believe resident fatigue contributes to adverse events
•80% believe the regulations reduce resident fatigue, 74% believe they decrease the frequency of performing clinical duties while fatigued
•37% believe the regulations decrease medical errors, 44% believe they have no effect
•41% believe handovers contribute more to medical errors than fatigue
•62% believe the regulations diminish patient familiarity and continuity of care.
Conclusion: The majority of a multidisciplinary cohort of residents reported satisfaction with the ACGME duty hour regulations, and believe they positively affect their training. Perceived negative effects of the regulations are diminished clinical experiences, more handovers of care, and decreased continuity of care and patient familiarity.
Control Number: 3212
Title: ST-segment Changes in Left Bundle Branch Block with Acute Coronary Occlusion: ST Concordance has High Specificity and Proportionally Excessive ST Discordance has High Sensitivity
Topic: Cardiovascular – Clinical Research
Author Block: Kenneth W. Dodd1, Kendra D. Elm2, Stephen W. Smith1. 1Hennepin County Medical Center, Minneapolis, MN; 2University of Minnesota Medical School, Minneapolis, MN
Abstract:
Background: Historically, the baseline ST-segment changes in left bundle branch block (LBBB) have made the diagnosis of acute myocardial infarction difficult. However, the diagnosis of acute coronary occlusion (ACO; a “STEMI” equivalent) may be obvious if the rule of appropriate discordance in LBBB is kept in mind: the ST-segment changes should be in the opposite direction (discordant) to the majority of the QRS complex. Concordance has been shown to be a specific marker of ACO in LBBB. The three Sgarbossa criteria include two concordance rules [concordant ST elevation (STE) ≥ 1 mm, and concordant ST depression (STD) ≥ 1 mm in V1-V3] and one discordance rule [discordant STE ≥ 5 mm]. These criteria are suggested for the diagnosis of myocardial infarction in LBBB. While Sgarbossa’s concordance rules with a cutoff of 1 mm have a high specificity for ACO in LBBB, we hypothesize that even less concordance would also have a high specificity. Furthermore, we have previously shown that excessively discordant ST-segment changes must be assessed as a proportion of the preceding S- or R-wave (the ST/S ratio) and that an ST/S ratio ≤ -0.25 is significantly more sensitive than Sgarbossa’s 5 mm STE cutoff (79% vs. 33%, p < 0.05).
Objectives: To compare cutoffs for ST-segment concordance as well as updated rules for diagnosis of ACO in LBBB.
Methods: In this retrospective study, the study group consisted of ED patients with LBBB and either angiographically proven complete ACO or a culprit lesion with troponin I ≥ 10 ng/ml. The control group consisted of consecutive ED patients with LBBB and ischemic symptoms but no evidence of recent ACO. “NSTEMI” patients were included as controls if ACO could be excluded. Measurements included ST-segment deviation at the J-point as well as S- and R-wave amplitudes, each to the nearest 0.5 mm. Statistical comparisons were performed with McNemar’s test.
Results: The study and control groups consisted of 33 and 129 patients, respectively. Both of Sgarbossa’s concordance rules (with a cutoff ≥ 0.5 mm), as well as a rule of ≥ 1 mm concordant STE or STD in any lead, all had > 90% specificity for ACO in LBBB (see Dodd Table 1).
Conclusion: In LBBB, Sgarbossa’s concordance rules with a ≥ 0.5 mm cutoff or ≥ 1 mm of concordance in any lead have a high specificity for ACO in LBBB. Proportionally excessive discordance has a high sensitivity for ACO in LBBB. Rules for diagnosis of ACO in LBBB must keep these principles in mind.
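The decision rules compared in this abstract can be expressed compactly in code. The sketch below is a simplified reading of the criteria, evaluating each lead's J-point ST deviation (in mm) against the dominant QRS deflection; the `Lead` structure and the exact per-lead handling are assumptions for illustration, and the authors' lead-by-lead definitions may differ.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str       # e.g. "V2" (hypothetical structure, for illustration)
    st_mm: float    # ST deviation at the J-point, mm (+ = elevation)
    r_mm: float     # R-wave amplitude, mm (>= 0)
    s_mm: float     # S-wave amplitude, mm (<= 0)

def meets_criteria(lead: Lead, concordance_cutoff: float = 1.0,
                   ratio_cutoff: float = -0.25) -> bool:
    """Flag a single lead under concordance rules plus the
    proportional-discordance (ST/S ratio) rule described above."""
    qrs_positive = lead.r_mm >= abs(lead.s_mm)
    # Concordant STE with a mainly positive QRS
    if qrs_positive and lead.st_mm >= concordance_cutoff:
        return True
    # Concordant STD in V1-V3 with a mainly negative QRS
    if (not qrs_positive and lead.name in ("V1", "V2", "V3")
            and lead.st_mm <= -concordance_cutoff):
        return True
    # Proportionally excessive discordance: ST/S (or ST/R) ratio <= -0.25
    if not qrs_positive and lead.st_mm > 0 and lead.s_mm < 0:
        if lead.st_mm / lead.s_mm <= ratio_cutoff:
            return True
    if qrs_positive and lead.st_mm < 0 and lead.r_mm > 0:
        if lead.st_mm / lead.r_mm <= ratio_cutoff:
            return True
    return False

# 3 mm discordant STE over a 20 mm S-wave: ratio -0.15, appropriately
# discordant, so not flagged; 6 mm over the same S-wave (-0.30) is flagged.
print(meets_criteria(Lead("V2", 3.0, 2.0, -20.0)),
      meets_criteria(Lead("V2", 6.0, 2.0, -20.0)))
```

Passing `concordance_cutoff=0.5` corresponds to the lower-threshold concordance rule evaluated in the abstract.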
Control Number: 3235
Title: Youth-Size ATV Seat Design: Variability and Lack of Consistent Changes in Vehicles Designated for Different Ages Demonstrates Need for Evidence-Based Standardization
Topic: Disease/Injury Prevention
Author Block: Charles A. Jennissen, Claire Castaneda, Alvin Long, Gerene Denning. University of Iowa Department of Emergency Medicine, Iowa City, IA
Abstract:
Background: Carrying passengers is an independent risk factor for crash and injury on all-terrain vehicles (ATVs). Optimal seat design would allow for safe vehicle operation while decreasing the likelihood of multiple riders and use by underage operators. A previous study of adult-size ATVs found wide variability in seat length and placement among manufacturers and between sport (mean 31.3 in.) and utility ATVs (mean 26.1 in.). Seat lengths overall ranged from 19.8 to 37.0 inches. Many models had seats long enough to accommodate multiple riders. There are no published studies related to youth-size ATV seat design.
Objectives: To determine the variability in seat length characteristics among youth-size ATV models (Y6 for youth ≥6 yrs, Y10 for youth ≥10 yrs, Y12 for youth ≥12 yrs, and Y14+ for youth ≥14 yrs) from major manufacturers.
Methods: Measurements of 37 models were performed using a previously validated image-based method that utilizes tools from Adobe Photoshop. Seat characteristics were compared by model age designation, manufacturer, and ATV type (sport vs. utility).
Results: Seat lengths ranged from 20.5-30.4 inches with a mean of 24.6 inches. The difference in the seat length of the average Y6 model and the average Y14+ model was only 1.4 inches. Youth utility models (eight) had an average seat length of 25.7 inches (range 23.1-27.9 in.) which was similar to sport models for youth 10 years and older (mean 25.0 in., range 22.1-30.4 in.). Y6 sport utility models had an average seat length of 21.5 inches (range 20.5-25.8 in.). The seat front to handle grip distance ranged from 2.7-10.4 inches with a mean of 6.3 inches. The difference in this average distance between Y6 and Y14+ models was only 1 inch. Variability was noted in seat length and in seat front to handle grip distance among manufacturers for ATV models designated for the same aged youth.
Conclusion: The seat lengths of youth-size ATVs are very similar to those of adult models, and there was little difference in seat length and placement among youth models designated for various ages of children. It is likely that these seat lengths allow, and potentially encourage, the carrying of passengers. The seat front to handle grip distance was quite short for many youth models and may allow the use of these vehicles by children younger than those for whom they are designated. Regulations are needed to standardize safe seat design for ATVs.
Control Number: 3233
Title: Selective Spinal Immobilization Protocol For Prehospital Providers: Effects On Practice And Patient Outcomes
Topic: EMS/Out-of-Hospital – Non-Cardiac Arrest
Author Block: Nathan Miller, Kari Harland, Joshua Stilley. The University of Iowa, Iowa City, IA
Abstract:
Background: In January 2015, a Selective Spinal Immobilization Protocol for prehospital providers was adopted for use throughout the state of Iowa. The previous protocol prescribed near-uniform application of a cervical collar and long backboard to all trauma patients with a concerning mechanism of injury. The new protocol limits immobilization criteria by adopting a combination of recent NAEMSP/ACS-COT recommendations and the NASEMSO National Model EMS Clinical Guidelines.
Objectives: To determine if a Selective Spinal Immobilization Protocol will result in a reduced rate of prehospital spinal immobilization.
Methods: We conducted a retrospective chart review of all 239 patients arriving at the University of Iowa Hospitals and Clinics by ground or air ambulance during April 2014 (pre-implementation of protocol) and April 2015 (post-implementation of protocol) who met National Trauma Data Standard Patient Inclusion Criteria. All demographics, injury severity, and visit characteristic data were collected from the Iowa Trauma Registry database. Spinal immobilization status was identified using EMS patient care records, nursing flowsheets, and clinician notes.
Results: From April 2014 to April 2015, the data show a statistically significant decrease (p=0.0001) in the percentage of patients with any spinal immobilization in place upon arrival to our emergency department. There was also a significant decrease in the use of the cervical collar and backboard together as a method of spinal immobilization. Even after controlling for patient age and injury severity, the adjusted odds ratio for spinal immobilization in April 2015 was 0.38 (95% CI 0.19-0.75). After controlling for patient age, head Abbreviated Injury Scale score, and whether a patient had spinal immobilization, there was no difference in the odds of spinal cord or vertebral injury (aOR = 0.71, 95% CI 0.31-1.62).
Conclusion: The data demonstrate that a selective spinal immobilization protocol allowed prehospital providers to correctly identify patients at risk for spinal injury. With the decrease in the spinal immobilization rate and no change in the odds of spinal cord or vertebral injury after implementation, the 2015 Selective Spinal Immobilization Protocol reduced the rate of inappropriate spinal immobilization.