Address correspondence to Mark V. Mai, MD, MHS, Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, 3401 Civic Center Blvd, Philadelphia, PA 19146
Training disruptions, such as planned curricular adjustments or unplanned global pandemics, impact residency training in ways that are difficult to quantify. Informatics-based medical education tools can help measure these impacts. We tested the ability of a software platform driven by electronic health record data to quantify anticipated changes in trainee clinical experiences during the COVID-19 pandemic.
Methods
We previously developed and validated the Trainee Individualized Learning System (TRAILS) to identify pediatric resident clinical experiences (ie, shifts, resident provider-patient interactions [rPPIs], and diagnoses). We used TRAILS to perform a year-over-year analysis comparing pediatric residents at a large academic children's hospital during March 15–June 15 in 2018 (Control #1), 2019 (Control #2), and 2020 (Exposure).
Results
Residents in the exposure cohort had fewer shifts than those in both control cohorts (P < .05). rPPIs decreased an average of 43% across all PGY levels, with interns experiencing a 78% decrease in Continuity Clinic. Patient continuity decreased from 23% to 11%. rPPIs with common clinic and emergency department diagnoses decreased substantially during the exposure period.
Conclusions
Informatics tools like TRAILS may help program directors understand the impact of training disruptions on resident clinical experiences and target interventions to learners’ needs and development.
Program directors and trainees lack means of measuring the effect of training disruptions on clinical experiences. To test the ability of a system using EHR-based attribution to quantify clinical experiences, we studied pediatric resident clinical experiences surrounding the COVID-19 pandemic.
Currently, program directors and trainees lack effective methods to track the volume and variety of trainee experiences in the setting of disruptions, whether intentional, as with the implementation of a night float system, or unintentional, as with a global pandemic.
Given the prominent role of the electronic health record (EHR) in the provision of patient care, informatics-based tools may provide a solution. With support from the Association of Pediatric Program Directors, we previously developed and validated an automated, scalable, and portable system, called the TRAinee Individualized Learning System (TRAILS), that uses common EHR metadata to identify clinically meaningful interactions between resident providers and patients.
Briefly, we trained a classification model using EHR metadata extracted from the Epic Clarity database (Epic Systems, Verona, Wisconsin, United States) associated with clinical activity (eg, note authorship, order placement, care team, and chart closure). Time-in-chart was estimated from audit log events, as previously described.
Using these data elements as predictors and resident-identified resident provider-patient interactions (rPPIs) as labels, we developed logistic regression classification models, which have been shown to identify pediatric resident patient experiences with high accuracy across 3 care settings (Continuity Clinic, Emergency Department, and Inpatient).
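The classification step can be illustrated with a minimal sketch. The feature names, weights, and intercept below are hypothetical placeholders for illustration only; the actual TRAILS models were trained on institutional EHR metadata, and their coefficients are not reproduced here.

```python
import math

# Hypothetical coefficients -- NOT the trained TRAILS model weights.
WEIGHTS = {
    "authored_note": 2.1,      # resident authored a note on the encounter
    "placed_order": 1.4,       # resident placed at least one order
    "on_care_team": 0.9,       # resident listed on the care team
    "closed_chart": 0.7,       # resident closed the encounter chart
    "time_in_chart_min": 0.05, # estimated minutes in chart (from audit logs)
}
INTERCEPT = -4.0

def rppi_probability(encounter: dict) -> float:
    """Logistic regression score: P(this encounter is an rPPI)."""
    z = INTERCEPT + sum(w * encounter.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def is_rppi(encounter: dict, threshold: float = 0.5) -> bool:
    return rppi_probability(encounter) >= threshold
```

An encounter with strong activity signals (note authorship, orders, substantial time in chart) scores high, while sparse chart access scores near zero, mirroring how the model separates meaningful interactions from incidental chart views.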
In this study, we aimed to generate additional construct validity evidence, based on the Messick framework of validity, by testing whether the TRAILS system could measure expected changes in pediatric resident clinical training experiences during a known disruption in training. Based on reported COVID-19 pandemic-driven curricular changes and decreased healthcare utilization, we hypothesized that the TRAILS system would detect a decrease in resident shifts across all contexts and all training levels, resulting in fewer rPPIs and decreased exposure across predictable diagnosis categories.
Methods
We performed a single-center retrospective cohort study incorporating year-over-year analysis of rPPIs. The study was approved by the Institutional Review Board at the Children's Hospital of Philadelphia.
Setting
Children's Hospital of Philadelphia is a large urban freestanding children's hospital with a pediatrics residency program comprising 155 residents. Residents provide patient care in three care settings: Continuity Clinic (three primary care clinics), Emergency Department (four teams), and Inpatient (fifteen teams).
Curricular Changes
In preparation for the regional spread of COVID-19, stepwise curricular changes were implemented in the pediatric residency program. In mid-March 2020, the program suspended rotations that did not provide required staffing for the hospital. Half-day continuity clinic sessions were suspended to decrease inpatient handoffs. Month-long primary care resident rotations continued in-person, and residents could complete a 1- or 2-week primary care elective if interested. Telemedicine elective opportunities were added to the alternative elective options in mid-April.
Residents continued to participate in emergency department (ED) and inpatient rotations. As non-pediatric rotating residents left to fill staffing needs at their home institutions, some pediatrics residents spent an extra 1-2 weeks in the ED during their elective blocks. Lower ED volumes led to some previously scheduled residents serving in a “backup” role. Similarly, due to a persistently low inpatient census, the residency program down-staffed to allow two residents to remotely support each in-person team.
By late May 2020, after the initial wave of regional COVID-19 cases subsided and as hospital ramp-up efforts were initiated, inpatient teams returned to normal staffing and several suspended rotations were restarted. By the start of the new academic year in late June 2020, all rotations returned to their prepandemic state.
Participants
We included pediatric residents who completed at least one shift in any care setting from March 15 to June 15 across three time windows: 2018 (Control #1), 2019 (Control #2), and 2020 (Exposure). We excluded residents if no EHR access log entries were present. While individual residents may have been present in multiple cohorts (eg, a PGY-1 resident during the Control #1 time window would have been a PGY-3 resident during the Exposure time window), we treated each time window cohort independently and did not link residents longitudinally.
Data Sources
We identified resident shifts and rPPIs from the Epic Clarity relational database (Epic Systems, Verona, Wis) using previously described automated methods.
Briefly, we extracted EHR access log entries from institution-managed computers, but not personal or mobile devices, and used timestamps to calculate resident shifts and identify candidate encounters.
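One way to sketch the timestamp-to-shift calculation is to sessionize access-log events with a maximum gap threshold: consecutive events separated by less than the gap belong to the same shift. The 4-hour gap below is an assumption for illustration; the actual sessionization logic and threshold are described in the prior TRAILS publications.

```python
from datetime import datetime, timedelta

# Assumed session gap for illustration; the real threshold is not stated here.
GAP = timedelta(hours=4)

def sessions_to_shifts(timestamps):
    """Group access-log timestamps into shifts. A new shift starts whenever
    the gap since the previous event exceeds GAP; each shift is a
    [start, end] pair of datetimes."""
    shifts = []
    for t in sorted(timestamps):
        if shifts and t - shifts[-1][1] <= GAP:
            shifts[-1][1] = t          # extend the current shift
        else:
            shifts.append([t, t])      # start a new shift
    return shifts
```

A post-filter dropping shifts calculated at under 1 hour, as described in the Data Analysis section, would follow naturally on the returned [start, end] pairs.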
Additional predictor variables included EHR data elements such as note authorship, order placement, care team membership, and chart closure. Encounters were labeled as Continuity Clinic, Emergency Department, or Inpatient based on the encounter department and the location of the patient at the time of EHR access.
To predict rPPIs, we applied a logistic regression classifier that had been previously trained and validated in a population of pediatric residents at the same institution. For encounters identified as rPPIs, patient encounter and admission (if applicable) diagnoses were recorded as outcome variables. We linked EHR diagnoses to International Classification of Diseases, Tenth Revision (ICD-10) codes, as well as Clinical Classifications Software Refined (CCSR) codes developed with support from the Agency for Healthcare Research and Quality,
to obtain clinically meaningful diagnosis categories. We defined a Continuity Clinic rPPI as having “patient continuity” if the same patient had been seen by the same resident provider prior to the current visit, during the same academic year.
Data Analysis
We excluded resident shifts calculated to be <1 hour in duration, as these are typically sparse access log entries unrelated to patient care.
When reporting shift counts, durations, and rPPIs per resident, residents were counted in the denominator only if they had ≥1 shift during the time window. Shift totals were compared with Pearson's chi-squared test across PGY level and time window. Continuous variables were compared with one-way ANOVA by time window, with post hoc pairwise comparisons using Tukey's multiple comparison of means. Cohen's d was calculated to estimate effect sizes for differences in shift counts, shift durations, and rPPIs between each control and the Exposure time window. Diagnoses were compared among time windows with Pearson's chi-squared test. All extraction and analysis code was written in R using RStudio (RStudio, PBC, Boston, MA).
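As one illustration of the effect-size calculation, Cohen's d with a pooled standard deviation can be computed as below. The study's analysis was performed in R; this Python sketch uses fabricated per-resident shift counts purely for demonstration.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d with pooled (sample) standard deviation, as used to compare
    shift counts, shift durations, and rPPIs between time windows."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Fabricated example data (NOT study values): shifts per resident.
control = [20, 22, 19, 21, 23, 20]
exposure = [14, 15, 13, 16, 14, 15]
```

By the conventional thresholds, d near 0.5 is a moderate effect and d above 0.8 is large, which is the scale on which the shift-count decreases in the Results are reported.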
Results
A total of 270 unique residents (Control #1: 155; Control #2: 158; Exposure: 161) were included. Two residents from Control #1 and ten residents from Control #2 were excluded because all of their calculated shifts lasted <1 hour; the majority were medicine-pediatrics residents scheduled at another hospital during the entire study period.
There was a significant relationship between time window and shift count. In the Exposure cohort, residents completed fewer shifts than those in both Control cohorts in pairwise comparison (P < .01 across all PGY levels). The magnitude of these differences corresponded to moderate to large decreases in shifts per resident: an average of 29% fewer shifts per PGY-1 (Cohen's d: 0.75–0.98), 18% fewer per PGY-2 (Cohen's d: 0.49–0.55), and 44% fewer per PGY-3 (Cohen's d: 0.81–1.17). While shift counts decreased, shift duration in the Exposure cohort significantly increased among PGY-1 residents (mean 11.2 hrs, SD 4.0) compared to a mean of 10.2–10.3 hours in both Control cohorts (P < .001), although effect sizes were relatively small (Cohen's d < 0.3).
Figure 1 shows the count of total rPPIs for each cohort subset by care setting, and includes a 1-month lead-in. Total rPPI count in the Exposure cohort decreased dramatically following March 15, 2020 (Week 12) and remained lower across all care settings through the duration of the Exposure time window. Separating by PGY level and clinical context demonstrates substantial decreases across all levels when comparing Exposure to both Control time windows by post hoc pairwise comparison. PGY-1 residents demonstrated the largest absolute decrease in mean rPPIs per resident (Exposure: 111, Control #1: 222, Control #2: 206), reflecting an average 48.2% decrease (Cohen's d: 1.0–1.8). Among clinical contexts, Continuity Clinic showed the largest relative decrease, with a mean of only 8.2 rPPIs per resident (SD 23.5) during the Exposure time window compared with 36.5–36.7 during the Control time windows, an average decrease of 77.6% (Cohen's d > 1.1).
Figure 1. Resident provider-patient interactions decreased across all clinical contexts after March 16, 2020 (Week 12) compared to interactions during the same timeframe in 2018 and 2019. The World Health Organization declared COVID-19 a global pandemic on March 11, 2020 (Week 11). Due to missing access log data, 6 days were excluded from the 2018 timeframe and 4 days were excluded from the 2019 timeframe. rPPI: resident provider-patient interaction.
In addition to a decrease in absolute rPPIs among Continuity Clinic encounters, the percentage of clinic visits with provider-patient continuity decreased from 23.4% in both Control cohorts to 11.1% in the Exposure cohort (P < .001 by Pearson's chi-squared test). Patient age was significantly lower among encounters labeled Continuity Clinic in the Exposure cohort (mean: 3.6 years, SD: 4.9) compared to both Control #1 (mean: 5.5 years, SD: 5.2, Cohen's d: −0.4) and Control #2 (mean: 5.4 years, SD: 5.2, Cohen's d: −0.4).
Among outpatient encounters, diagnosis groupings were significantly different when comparing Exposure and Control cohorts. For encounters labeled Continuity Clinic, Figure 2a presents the ten most frequently appearing CCSR diagnosis categories for which a Pearson's chi-squared test demonstrated a significant relationship between diagnosis count and time window (P < .05). Trends during the Exposure time window include a decrease in the percentage of encounters with well-visit diagnoses ("Medical evaluation") as well as those with respiratory, infectious disease, growth, and mental illness diagnoses. In a similar analysis of encounters labeled Emergency Department (Fig. 2b), most CCSR categories significantly decreased in the Exposure period compared to Control periods. Of note, categories associated with respiratory illness showed a significant reduction.
Figure 2(A) Common continuity clinic encounter diagnoses significantly decreased during the 2020 timeframe compared to previous years. (B) Common ED encounter diagnoses significantly decreased during the 2020 timeframe compared to previous years. The top 10 CCSR diagnosis categories for each context are shown that demonstrated statistical significance (P < .05) across timeframes.
The regional arrival of the COVID-19 pandemic and subsequent curricular changes presented an opportunity to test the ability of the TRAILS system to measure the expected impact on pediatric resident clinical experiences. Via automated extraction of resident clinical training experiences from EHR data, the TRAILS system supported our hypothesis that during the pandemic residents worked fewer shifts with many fewer resident provider-patient interactions. TRAILS detected an increase in shift duration among PGY-1 residents in the Exposure cohort, which was anticipated, as inpatient teams were staffed with fewer residents working longer shifts to minimize COVID-19 exposure. Using TRAILS, we measured the extent to which clinical experiences were impacted: the mean number of rPPIs decreased significantly across all 3 care settings, with decreases ranging from 33% to over 75% compared to prepandemic levels, particularly affecting PGY-1 residents. Furthermore, for the Exposure cohort the system detected that patient encounters associated with well-child visit diagnoses and respiratory diagnoses decreased in the Continuity Clinic setting, while diagnoses associated with respiratory illnesses decreased in the Emergency Department setting.
These expected findings support the construct validity of using TRAILS to measure variations in pediatric resident clinical training experiences around disruptions in training, as the system detected the expected finding of decreased clinical exposure during the pandemic compared to prepandemic years.
These findings are further supported by both national and international reports of significantly decreased healthcare utilization for non-COVID-19 illnesses, particularly in children, following the implementation of stay-at-home orders and social distancing due to COVID-19.
Such drops in healthcare utilization may translate into fewer opportunities for residents to engage in concrete patient experiences and learn from the spectrum of disease presentations.
Informatics-based systems like TRAILS can complement and perhaps illuminate qualitative data from residents and program directors in the setting of external influences on trainee education. Because clinical experiences can be resolved to the individual resident level, such systems could allow program directors to be even more individualized and flexible in their approach to determining resident competency.
For example, program directors might use a system like TRAILS in near real-time to understand the impact of programmatic changes on residents and to identify any unintended training gaps that can be addressed. Such data may be valuable to program directors seeking to target educational experiences.
Our study is limited to data collected by TRAILS at a single residency program, thus limiting the generalizability of our findings during this phase of early testing. Following validation of TRAILS at collaborating sites, a repeat analysis of residents’ experiences during the pandemic would provide further construct validity evidence for TRAILS, as recent literature suggests other institutions may have had similar experiences.
We acknowledge the support of Dr. Bryan Wolf and the Department of Biomedical and Health Informatics for providing development support for this project. We thank the pediatric residency program, especially Dr. Jeanine Ronan, program director, for support of this work. With gratitude, we would like to acknowledge the pediatrics residents at the Children's Hospital of Philadelphia for all they do in the care of their patients and each other. We would like to thank our external collaborators – Drs. Andrea Leep Hunderfund, Brad Sobolewski, and Evan Orenstein; TRAILS is available to partnering researchers and readers can reach out to the authors for additional details.
Funding for this project was provided by the Association of Pediatric Program Directors (APPD) through the Special Projects Grant (Principal Investigators: Adam Dziorny and Mark Mai) and the American Medical Association (AMA) through the Accelerating Change in Medical Education Innovation Grant (Principal Investigator: Adam Dziorny). The APPD and AMA had no involvement at any stage of the production of this manuscript.
References
Lieu TA, Forrest CB, Blum NJ, et al. Effects of a night-float system on resident activities and parent satisfaction.
Conflicts of Interest: Drs. Mark Mai and Adam Dziorny are co-founders of the Trainee Individualized Learning System (TRAILS), the intellectual property of which is owned by the Children's Hospital of Philadelphia. They have not received financial compensation for this work. The remaining authors have no relevant conflicts of interest to declare.