Creating a culture of psychological safety, feedback and learning through a formalized interdisciplinary training program for health professionals

Abstract

Working in the health care field requires a high level of individual training, largely focused on clinical skills. Additional training in interprofessional and interpersonal skills has gained increased attention in recent years, with many institutions encouraging health professionals to engage in supplementary programs to develop leadership capacity. This study aims to assess the impact of one such program, the Relational Leadership Initiative (RLI), taught at the University of Utah. Through qualitative interviews with RLI participants, researchers assessed the impact of the program and its teachings on the work and wellbeing of health professionals. Three major themes emerged: (1) creating psychological safety, (2) fostering a culture of feedback, and (3) learning about one’s communication style. As institutions and health executives encourage such training among their teams of health professionals, those teams can work more closely together to improve the care of their patients and communities.

Introduction

Health professions training primarily focuses on clinical and didactic experiences deemed essential to patient care and positive outcomes. Over the last decade, we have witnessed increased value placed on imparting the “softer” skills within health professions programming and education, specifically an emphasis on intrapersonal and interpersonal communication to bolster leadership capacity.1–3 Notably, the Interprofessional Education Collaborative lists “interprofessional communication practices” as one of its primary competency domains.4 These skills have been linked to improved patient outcomes; one study found that after providers underwent a communication training course, medication adherence, patient self-efficacy, and hypertension outcomes improved.5 As such, many health professionals seek additional training to improve their team-based skills at different points in their careers for personal and professional development.6,7

Historically, leadership programming in health systems has emphasized sustainable growth strategies for individual providers rather than for the team as a unit.8,9 In recent years, however, the need for team training has become increasingly recognized.2 As such, academic institutions and health care organizations have developed home-grown leadership programs and communication seminars to encourage their teams to become more cohesive and high functioning.9,10

While team training and cohesive collaboration are often the focus, a critical and often neglected piece of such training is ensuring that a sense of psychological safety is embedded in the programming. Indeed, the growth of an individual on an interprofessional team hinges on the group’s ability to create a safe space for its members to flourish.11 Psychological safety, as described by Amy Edmondson, is “a shared belief held by members of a team that the team is safe for interpersonal risk taking.”12 It is a necessary foundation when bringing together health care workers from different professions to care for patients and improve outcomes. The classic landscape of psychological safety and communication shifted notably with the COVID-19 pandemic, amplifying the need for people to collaborate while maintaining the highest level of care for their patients.13,14 Given the growing prevalence and value of leadership training, there is a need to identify and evaluate current programming that fosters psychological safety on health care teams and to capture its effects on patient outcomes in this novel health landscape.

Relational leadership, a theory that aims to bring together a cross-generational, interprofessional, diverse group of health professionals, was initially developed in 2015.15 Since then, programs have incorporated its evidence-based principles; in the health care space, a leader in the movement is the Relational Leadership Initiative (RLI). RLI began as a collaborative partnership between Oregon Health & Science University (OHSU) and Intend Health Strategies, a clinician-founded non-profit focusing on communication and quality outcomes.15 Its programming started in 2017 and evolved to emphasize the development of leadership and advocacy skills within the greater health community via a longitudinal, seminar-style curriculum emphasizing four domains – self-management, coaching and mentorship, leading change, and teamwork. Each content module is designed to contribute to the larger goal of helping health professionals create authentic connections and cultivate relationships, thus enhancing collaboration.15 In 2019, RLI expanded its partnerships to include the University of North Carolina at Chapel Hill and the University of Utah. The program enjoyed immediate success, championed by a team of interprofessional faculty and housed in Health Science Education. In the spirit of quality improvement, the RLI core team at the University of Utah endeavored to understand the reasons for the program’s appeal.

Methods

Conceptual Framework

This study is a qualitative analysis of how health professionals at the University of Utah implemented and responded to the RLI programming from Fall 2019 to Fall 2020, and which aspects they have been able to integrate into their respective health care teams. Seventeen interviews were performed and transcribed, and then analyzed using Strauss’ Grounded Theory as the framework.16

Study Design

Three independent cohorts of 20 to 30 participants each completed the RLI programming at the University of Utah in Fall 2019, Spring 2020, and Fall 2020. Participants within each cohort were contacted via email between 4 and 14 months after completing the program and were asked to participate voluntarily in a 30-minute interview to discuss their perception and application of the RLI course to date. Forty-two participants were contacted, and 17 live interviews were conducted over Zoom video conference software. To reduce study bias, some participants were excluded from interviews because they did not complete the entire program or because they were part of this or another analysis of the program. Others did not respond to the inquiry to participate.

The study utilized a guided narrative strategy to build on questions written by the national RLI group. See Supplement A for the complete interview guide. All interviews were recorded and transcribed for data analysis, then uploaded to Atlas.ti, a HIPAA-compliant qualitative analysis software package, for coding. Codes were identified, defined, and organized into themes following the parameters set by grounded theory.16 Dual coders were utilized to increase the rigor and reproducibility of results. Given the quality improvement nature of this study, it was deemed exempt by the University of Utah’s Institutional Review Board (IRB 00134146).

Results

Interview Themes and Responses

Three separate themes emerged from the interview topics, including the importance of (1) creating psychological safety, (2) fostering a culture of feedback, and (3) learning about one’s communication style.

  1. Creating Psychological Safety

“It is so different than what we are trained to do, especially in the health care profession. You never really talk about people’s lives outside of the home. You don’t ever really talk about your story. We so often just see each other in this one sphere of what we do.”

Psychological safety was a concept multiple participants appreciated and revisited throughout the training program.12 Psychological safety, centered on communicating in a safe space, reflects a group’s ability and insight to work together as individuals and as a collective. For example, in discussing how to manage self-identity, the role of “teaming,” and communicating in conflict, participants found psychological safety to be the primary determinant of the success of small groups and even of the larger cohort. Participants shared how the RLI program modeled how a safe space is developed and expressed increased comfort in practicing difficult conversations. One participant shared how they “definitely felt very safe to be vulnerable in that setting, very supported by coparticipants and facilitators as well.” Others described how setting group norms at the beginning of the training, and revisiting those norms to set expectations for working together, created room for participants to develop their own ideas of what would be discussed.

A necessary condition for psychological safety is the involvement of individuals from multiple training backgrounds and subspecialties within the health professions. Diversity within the group mirrors the interprofessional communities often seen in the health care setting. While reflecting on the training, one participant shared that “learning that everyone has a different background and different experience, and a different reality and perception of reality is something I knew I had a grasp of.”

  2. Fostering a Culture of Feedback

“Creating a culture of feedback was incredibly useful. I feel like that is what I want our culture to be like. People feel like they can engage and provide constructive criticism when needed or give people praise when needed.”

Participants identified the integration of regular feedback into their personal and professional lives as a skill they will continue to apply going forward. Such conversations can be difficult, especially when feedback runs contrary to a unit’s culture. However, as health care leaders learn to value feedback and feel more comfortable discussing it with colleagues, communication patterns can improve. Participants shared that through the program, they discovered “not to feel defensive about the feedback” and to be “more open to it, to try and look at [oneself] in a more honest way, trying to have a growth mindset.”

  3. Learning about One’s Communication Style

“I spent a lot of time watching how people facilitated, even though I was there to gain skills. I also was trying to learn and model off of them…that’s something I have become more self-aware of.”

Participants appreciated the facilitators, or group leaders, who modeled many of the concepts throughout the learning process before participants practiced them. As facilitators modeled coaching and vulnerability, participants felt safe to share their own stories.

Another concept modeled was “one-on-ones,” or discussions between two individuals. Participants had the opportunity to role-play difficult conversations they may experience in the workplace, bringing their concerns to the RLI group. Role-playing helped participants work through problems and helped build relationships within the RLI cohort early on. For example, one shared that “one-on-ones helped me a lot…even if we didn’t solve the problems of the world, we could both relate to one another.”

Translating these softer skills into the workplace proved harder than initially anticipated. People came to the RLI training ready to learn and open themselves up to the training group; however, participants shared that it was harder to model these behaviors within their workplaces.

Discussion

As health professionals participated in a training program with a focus on relational leadership, they were able to broaden their perspective regarding psychological safety and learn together as a group. This study supports the need for continuing education and training programs on topics of communication and teamwork. Interviews with RLI participants provide a wealth of knowledge regarding how the perspectives of health care professionals changed after program participation. Participation in leadership training programs helps health care workers broaden their perspective as they are exposed to different situations they may experience in practice. The study highlights how a program such as RLI may be central to this learning and shift in perspective while fostering a psychologically safe space.

Participants felt that the overall RLI programming helped them become more comfortable creating relationships on teams with diverse groups of people. In addition, the individual modules provided valuable opportunities to collaborate, practice difficult conversations, and flourish in their leadership abilities.

Research on teaching concepts of psychological safety to health professionals is often front-loaded in initial clinical training; it is less often seen in continuing education or professional development courses after graduation.17 In this way, the RLI program presents a novel, interdisciplinary, cohort-style learning experience, inviting health care professionals from diverse careers and statuses to come together and discuss complex topics in the spirit of creating a psychologically safe space.

As health professionals work together in a psychologically safe space, with less of a hierarchical lens, initiative and creativity can be encouraged. Mistakes can be seen as opportunities to improve, and teams can work more closely together with patient safety as a primary focus.18

As increasing numbers of health care workers seek out supplementary leadership training, organizations should consider offering such programs. As this study suggests, health professionals who participate in these programs and improve their ability to communicate with other members of the interprofessional team will likely be better equipped to work collaboratively. As teamwork and leadership among health professionals improve, the level of care provided to patients will likely improve as well.19

Future opportunities for this research include a broader study involving participants from multiple sites at different follow-up times upon completion of the program. In addition, quantitative studies are concurrently underway that evaluate participant demographics, as well as their response to and application of the training.

Limitations

This study had multiple limitations. First, the sampling of individual participants came from one training location, the University of Utah, while the same program was conducted at other sites. Study subjects also participated in the program in multiple ways – some entirely in person, some in a hybrid model of in-person and online, and others solely online due to restrictions of the COVID-19 pandemic. Interviews with previous participants may be susceptible to response bias, as those more satisfied with the program may have been more likely to respond to the interview request. Finally, the application of the study results is limited to programs similar to the one described above.

This study’s strengths include the rigorous use of qualitative methods, including a semi-structured interview format and coding based on Strauss’ Grounded Theory.16 In addition, participants shared positive and negative feedback from the program, which will allow RLI leaders to adapt programming for future participants.

Conclusion

Through the integration of leadership training programs, like RLI, there is an opportunity for health executives and leaders to identify better ways to integrate teaming and leadership skills within their organizations to help their health professionals work together as they care for patients. This study offers a balanced perspective of positive and negative feedback from the program, which will allow RLI leaders to adapt the content for future participants. Most importantly, participants appreciated a psychologically safe space where they could experience self-growth and practice conversations they may have with others within health care or their patients. Such efforts open the door for health professionals to work more closely together in teams to improve the care of their patients and communities.

References

1.        Back AL, Fromme EK, Meier DE. Training Clinicians with Communication Skills Needed to Match Medical Treatments to Patient Values. J Am Geriatr Soc. 2019;67(S2):S435-S441. doi:10.1111/jgs.15709

2.        Patel S, Pelletier-Bui A, Smith S, et al. Curricula for empathy and compassion training in medical education: A systematic review. PLoS One. 2019;14(8):e0221412. doi:10.1371/journal.pone.0221412

3.        van Diggele C, Burgess A, Roberts C, Mellis C. Leadership in healthcare education. BMC Med Educ. 2020;20(Suppl 2):456. doi:10.1186/s12909-020-02288-x

4.        Core Competencies for Interprofessional Collaborative Practice: 2016 Update. Interprofessional Education Collaborative; 2016.

5.        Tavakoly Sany SB, Behzhad F, Ferns G, Peyman N. Communication skills training for physicians improves health literacy and medical outcomes among patients with hypertension: a randomized controlled trial. BMC Health Serv Res. 2020;20(1):60. doi:10.1186/s12913-020-4901-8

6.        Davila L. An Absence of Essential Skills in the Current Healthcare Landscape. Pharmacy Times.

7.        Mata ÁN de S, de Azevedo KPM, Braga LP, et al. Training in communication skills for self-efficacy of health professionals: a systematic review. Hum Resour Health. 2021;19(1):30. doi:10.1186/s12960-021-00574-3

8.        Leggat SG. Effective healthcare teams require effective team members: defining teamwork competencies. BMC Health Serv Res. 2007;7(1):17. doi:10.1186/1472-6963-7-17

9.        Zajac S, Woods A, Tannenbaum S, Salas E, Holladay CL. Overcoming Challenges to Teamwork in Healthcare: A Team Effectiveness Framework and Evidence-Based Guidance. Front Commun (Lausanne). 2021;6. doi:10.3389/fcomm.2021.606445

10.      Rosen MA, DiazGranados D, Dietz AS, et al. Teamwork in healthcare: Key discoveries enabling safer, high-quality care. Am Psychol. 2018;73(4):433-450. doi:10.1037/amp0000298

11.      Appelbaum NP, Lockeman KS, Orr S, et al. Perceived influence of power distance, psychological safety, and team cohesion on team effectiveness. J Interprof Care. 2020;34(1):20-26. doi:10.1080/13561820.2019.1633290

12.      Edmondson A. Psychological Safety and Learning Behavior in Work Teams. Adm Sci Q. 1999;44(2):350-383. doi:10.2307/2666999

13.      Shanafelt T, Ripp J, Trockel M. Understanding and Addressing Sources of Anxiety Among Health Care Professionals During the COVID-19 Pandemic. JAMA. 2020;323(21):2133. doi:10.1001/jama.2020.5893

14.      Back A, Tulsky JA, Arnold RM. Communication Skills in the Age of COVID-19. Ann Intern Med. 2020;172(11):759-760. doi:10.7326/M20-1376

15.      Relational Leadership Institute – Intend Health Strategies. Published 2022. Accessed April 4, 2022. https://www.intendhealth.org/strategy-pages/the-relational-leadership-institute-rli

16.      Strauss AL, Glaser B. The Discovery of Grounded Theory. Aldine de Gruyter; 1967.

17.      Tsuei SHT, Lee D, Ho C, Regehr G, Nimmon L. Exploring the Construct of Psychological Safety in Medical Education. Acad Med. 2019;94(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 58th Annual Research in Medical Education Sessions):S28-S35. doi:10.1097/ACM.0000000000002897

18.      Whitelaw S, Kalra A, van Spall HGC. Flattening the hierarchies in academic medicine: the importance of diversity in leadership, contribution, and thought. Eur Heart J. 2020;41(1):9-10. doi:10.1093/eurheartj/ehz886

19.      Babiker A, el Husseini M, al Nemri A, et al. Health care professional development: Working as a team to improve patient care. Sudan J Paediatr. 2014;14(2):9-16.

Coding and Themes from Participant Interviews

Appendix A: Interview Guide

“Dear Program Director”: An Analysis of Gender Bias in Internal Medicine Letters of Recommendation from 2009 and 2019

Abstract

Purpose

The majority of United States Internal Medicine (IM) programs use letters of recommendation (LOR) as part of their holistic review of applicants in the residency selection process. It is important to determine whether there are gender differences in the frequency of agentic (e.g., assertive, confident) and communal (e.g., compassionate, kind) descriptors in IM LOR.

Methods

The authors retrospectively reviewed LOR from University of Utah IM matched applicants in 2009 and 2019. Text analysis was used to determine agentic and communal descriptor frequency, which was compared by applicant gender, letter writer gender, and year with ANOVAs.

Results

Letter writers used more communal terms in men applicants’ LOR relative to women applicants’ LOR in 2009, F(1,158) = 9.80, P = 0.001, ηp² = 0.06, and there was more communal presence in women writers’ LOR relative to men writers’ in 2009, F(1,158) = 8.97, P = 0.003, ηp² = 0.04, which did not persist in 2019. Agentic terms were used more often in 2019 relative to 2009, F(1,383) = 4.49, P = 0.035, ηp² = 0.01, as were communal terms, F(1,383) = 28.07, P < 0.001, ηp² = 0.07.

Conclusion

It is unclear how the equivalent use of descriptors in LOR impacts women in the residency selection process.  Further research is needed to understand how IM residency programs use LOR for resident selection and how this could be impacted by gender bias. 

Keywords: bias; residents; letters of recommendation; gender; text analysis

Introduction

According to the 2018 National Resident Matching Program (NRMP), United States Internal Medicine (IM) program director survey, letters of recommendation (LOR) are used by 74% of program directors in selecting applicants to interview.1  Recently, the Invitational Conference on United States Medical Licensing Examination Scoring called for a more holistic review of applicants, which takes information regarding an applicant’s attributes and future potential, such as those mentioned in LOR, into account. The goal of holistic review is to address biases and avoid over-reliance on USMLE step scores.2  Yet, if elements of the application like LOR are biased themselves, using them may not help address bias in the residency selection process.  Given the potential for implicit bias, some specialties have transitioned to a standardized letter of recommendation (SLOR), which includes standard evaluative and comparative data in an effort to reduce gender-based differences in LOR.3,4  To date, no studies have investigated how the content of LOR may vary by gender of IM applicants.  Understanding if there is gender bias in LOR and whether this has changed over time is essential for the fair assessment of students’ abilities during the IM residency selection process.

One way to evaluate for gender differences in LOR has been to look at how agentic and communal terms describe applicants. Agentic descriptors (i.e., assertive, confident, dominant, aggressive) are more often associated with men as a man’s social role has historically been to be strong, dominant, and self-reliant.5 Contrastingly, women are often described in communal terms (i.e., affectionate, kind, compassionate) that focus on the welfare of others because traditionally, a primary role of women is to care for others.5,6

The study of communal and agentic descriptors within academic medicine has demonstrated varying differences between how men and women are described, particularly in how standout adjectives and agentic terms are used. Recent studies of LOR for urology and general surgery residency, as well as transplant surgery fellowship, concluded that standout adjectives and/or agentic terms were used to describe men applicants more often than women applicants, who were more likely to be described in communal terms.7–9 By contrast, in one general surgery residency program, LOR for women applicants were more likely to contain standout adjectives than those for men, though overall the letters contained similar descriptors for both genders.10 In radiology, LOR for women were more likely to include agentic descriptors than LOR for men.11

Given the paucity of data in IM, we studied LOR for men and women residents accepted to our IM residency program in 2009 and 2019.  We chose to evaluate LOR over a 10-year span to determine if growing recognition of gender bias12 has impacted the language used to describe applicants.  Based on prior research showing gender bias in LOR7–9,13, we hypothesized that IM residency program LOR for men applicants relative to women applicants would contain more agentic terms, but that the difference would decline from 2009 to 2019 with increasing awareness of gender bias. 

Methods

Participants and Design

This was a text analysis study of LOR entered in the Electronic Residency Application Service (ERAS) for all categorical and preliminary applicants who matched at the University of Utah IM residency program in 2009 and 2019. The LOR for matched applicants represented all Association of American Medical Colleges Group on Education Affairs United States geographic regions: 35% (135) were from the central region, 27% (106) from the western region, 21% (82) from the southern region, 6% (22) from the northeast region, and 11% (42) from international graduates. For each LOR, the gender of the letter writer, gender of the applicant, length of letter as measured by word count, and year of application were recorded by a research assistant who was unaware of the study hypotheses. If there were multiple letter writers, only the gender of the first letter writer listed was included, as it was assumed this was the primary author. While the authors acknowledge that gender is not a binary construct, because ERAS only allows applicants to select from one of two options, we used a binary approach to assign gender in this study. Since ERAS does not require letter writers to identify their gender when submitting a LOR, we assigned the gender of each letter writer based on a name’s historical association with a man or woman. If the gender was not apparent, an Internet search of the faculty member was conducted. In this paper, we use the terms man and woman to refer to the residency applicants and letter writers because we are exploring the potential effect of gender bias, rather than sex, on applicants.14 LOR were missing for one applicant in 2009.

Letter of Recommendation Analysis

A communal and agentic dictionary of terms was created based upon previously defined lists of agentic and communal words in LOR for surgery and radiology applicants.9,11 In addition, we used the software program R (v2.6.2)15 to capture the 20 most frequent terms that were used to describe our Internal Medicine residency applicants in 2009 and 2019 and reviewed these terms for inclusion as agentic or communal descriptors. Two researchers independently reviewed these terms and categorized each as communal or agentic and then met to review and resolve differences.  Through this process we identified 18 additional terms that were used frequently in internal medicine LOR of applicants applying to our institution.  Our initial set of terms included 63 agentic terms and 52 communal terms. (Appendix)
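As a minimal sketch of this frequent-term capture step (written in Python here rather than the R workflow the authors describe), the snippet below tallies the most common tokens in a de-identified LOR corpus and flags any that fall in the agentic or communal dictionaries. The file name, the stop-word list, and the small dictionary subsets are illustrative assumptions, not the study's actual word lists.

import re
from collections import Counter

STOP_WORDS = {"the", "and", "a", "an", "of", "to", "in", "is", "was", "for", "with", "he", "she"}

# Illustrative subsets only; the study's full dictionary contained
# 63 agentic and 52 communal terms (see Appendix).
AGENTIC = {"assertive", "confident", "dominant", "independent"}
COMMUNAL = {"kind", "compassionate", "warm", "helpful"}

def top_terms(text, n=20):
    """Return the n most frequent non-stop-word tokens in the corpus."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS).most_common(n)

with open("lor_deidentified.txt", encoding="utf-8") as fh:  # assumed file name
    corpus = fh.read()

for term, count in top_terms(corpus):
    label = "agentic" if term in AGENTIC else "communal" if term in COMMUNAL else "unclassified"
    print(f"{term:15s} {count:5d}  {label}")

Terms surfacing as "unclassified" in such an output would correspond to the candidates the two researchers reviewed independently for inclusion as agentic or communal descriptors.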

To ensure accuracy and appropriate use of context, three researchers were each assigned one third of the terms and performed a manual review of the LOR in the de-identified file to ensure the agentic and communal terms were used in the appropriate context.  A term was deemed appropriate if it was a direct descriptor of the applicant or a description of an applicant’s attributes or skills in the past, present, or future.  If the term did not meet the preceding criteria, it was removed and not included in the final analysis.  For example, the term “aggressive” used in the context of “he handled a situation with an aggressive patient” was excluded from the analysis as it referenced an attribute of a patient and not the applicant.  The three researchers calibrated their shared definition for appropriate use before reviewing their assigned terms.  If a researcher had a question regarding whether a term met the defined criteria, it was discussed amongst all three researchers for agreement.  The data was corrected based on manual review before any analyses. 

Statistical analysis

Frequencies and percentages were computed for demographic variables, and the average word count was computed for each LOR and compared between years with the Mann-Whitney U test. An average agentic percentage and an average communal percentage were computed by determining whether each agentic and communal term was present or absent in a LOR; presence counts were then averaged across all agentic words and all communal words, respectively, for each LOR.
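A small sketch of the presence-average computation described above, under the assumption that "presence" is scored per term as present (1) or absent (0) in a letter and then averaged across the dictionary; the term sets here are illustrative, not the study's dictionary.

import re

AGENTIC = {"assertive", "confident", "dominant", "independent"}   # illustrative
COMMUNAL = {"kind", "compassionate", "warm", "caring"}            # illustrative

def presence_average(letter_text, terms):
    """Fraction of dictionary terms appearing at least once in the letter."""
    tokens = set(re.findall(r"[a-z]+", letter_text.lower()))
    return sum(term in tokens for term in terms) / len(terms)

letter = "She is a confident, compassionate, and independent clinician."
print(presence_average(letter, AGENTIC))    # 0.5  (2 of 4 agentic terms present)
print(presence_average(letter, COMMUNAL))   # 0.25 (1 of 4 communal terms present)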

To determine if the presence of each word type varied by applicant gender and application year, 2 (applicant gender: woman, man) x 2 (year: 2009, 2019) ANOVAs were run on agentic presence averages and communal presence averages. To determine if the presence of each word type varied by the letter writer’s gender, 2 (letter writer gender: woman, man) x 2 (applicant gender: woman, man) ANOVAs were run on agentic presence averages and communal presence averages in 2009 and in 2019. ANOVAs were run separately for 2009 and 2019 due to the small numbers of women letter writers for each level of applicant gender. This study was deemed exempt by the University of Utah Spencer Fox Eccles School of Medicine Institutional Review Board.
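As a hedged sketch of one of these models, the snippet below fits a 2 (applicant gender) x 2 (year) ANOVA on communal presence averages with statsmodels and derives partial eta squared (ηp²) from the sums of squares; the CSV file and column names are assumptions about how a per-LOR analysis dataset might be laid out, not the authors' actual pipeline.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("lor_presence.csv")  # assumed layout: one row per LOR

model = ols("communal_presence ~ C(applicant_gender) * C(year)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares

# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual)
ss_resid = anova.loc["Residual", "sum_sq"]
anova["eta_p_sq"] = anova["sum_sq"] / (anova["sum_sq"] + ss_resid)
print(anova)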

Results

A total of 387 LOR were analyzed: 146 LOR (58 women and 88 men applicants) from 2009 and 241 LOR (104 women and 137 men applicants) from 2019.  Table 1 provides letter writer gender, number of applicants, average number of LOR per applicant, and average word count per LOR.  The average word count per LOR increased by 102 words (CI 58-146) from 2009 to 2019, P < 0.001.

After manual review, the percentage of agentic terms used in the appropriate context was 91% (4,062/4,484) and the percentage of communal terms used in the appropriate context was 79% (1,551/1,972), for an overall accuracy rate of 87% (5,613/6,456).

Table 2 provides the average presence of agentic and communal terms by applicant gender, letter writer gender, and year. There was 2% more communal presence (CI 0.005–3.0%) in men applicants’ LOR relative to women applicants’ LOR in 2009, F(1,158) = 9.80, P = 0.001, ηp² = 0.06, and 2% more communal presence (CI 0.4–4%) in women writers’ LOR relative to men writers’ LOR in 2009, F(1,158) = 8.97, P = 0.003, ηp² = 0.04. There was 1% more agentic presence (CI 0.2–3%) in 2019 relative to 2009, F(1,383) = 4.49, P = 0.035, ηp² = 0.01, and 2% more communal presence (CI 1–3%) in 2019 relative to 2009, F(1,383) = 28.07, P < 0.001, ηp² = 0.07.

Discussion

To date, this is the first study using text analysis to examine potential gender differences in LOR for IM residency applicants.  Overall, there was an increase in the presence of agentic and communal terms used in LOR from 2009 to 2019, irrespective of gender.  There was also an increase in word count for LORs from 2009 to 2019.   We found no difference in the presence of agentic terms by gender in 2009 or 2019.  Finally, communal terms were used more often to describe men applicants in 2009, in comparison to women applicants and, more specifically, these terms were used more often by women letter writers to describe men applicants, in comparison to men letter writers.  There was no difference in the use of communal terms to describe men and women applicants in 2019.

Our results show that over the last 10 years IM applicants’ LOR have increased in length and contain more agentic and communal descriptors, irrespective of applicant gender. It is unclear what accounts for the increased use of agentic and communal terms. While we did not look at faculty rank in association with frequency of term use, Grimm et al. found that junior faculty were more likely to use agentic and communal terms.11 It is possible that with the rise in the number of hospitalist providers, whose median age is 41, over the last 20 years,16 more junior faculty are being asked to write LOR, as students frequently interact with these providers on their inpatient clerkship and sub-internship rotations. Without clear guidelines for the structure and content of LOR in IM, junior faculty, with fewer years of experience and less comparative performance data on residents, may be more likely to use agentic and communal descriptors for all applicants.

We found that communal terms were used more often by women letter writers to describe men applicants in 2009, a finding that was not seen in 2019 LOR. This is in contrast to prior work evaluating medical student performance evaluations (MSPE), which found that women authors used fewer “positive emotion” words to describe men students.17 However, studies looking at LOR for radiology and general surgery applicants found that women LOR writers are more likely to use agentic and communal terms than men letter writers, irrespective of applicant gender.10,11 Given that our findings differ from surgical-based specialties, this raises the possibility of specialty-specific differences in the use of and value placed on agentic and communal terms in LOR. This may depend upon the percentage of women practicing in a specialty, as agentic traits are valued in male-dominated fields and communal traits are valued in female-dominated fields, and these values may change over time as more women enter a specialty.18,19 These differences highlight that the use of agentic and communal descriptors may depend upon specialty and local or institutional cultural norms. Further research is needed to explore how letter writers’ demographics (i.e., location, rank, age, gender) and institutional gender bias training impact LOR.

It should be noted that our manual review highlighted a lower accuracy rate for communal terms relative to agentic terms using the text analysis approach. Other studies assessing communal and agentic terms in residency LOR do not comment on the context of communal and agentic term use or whether it pertains directly to the applicant.7–9,11,20,21 Future studies should report whether communal terms describe the applicant or features of something or someone else, like the patient.

The Invitational Conference on USMLE Scoring called for a holistic approach in the review of residency applicants to best identify applicants who align with individual residency programs’ strengths and guiding principles.2 One of the goals of holistic review is to address potential biases, including gender bias. Given the potential for implicit bias in narrative LOR, Emergency Medicine (EM) residency programs implemented a SLOR in 1997 to improve resident selection based on evaluative and comparative data.3 Other residency programs, mainly the surgical subspecialties, have followed suit. SLOR contain standard evaluative and comparative data in addition to a short narrative component, and, in several studies, they have shown little to no gender-based differences in comparison to narrative LOR.4,20,21 In May 2020, the Alliance for Academic Internal Medicine (AAIM) released recommendations for Department of Medicine summary letters to follow a format similar to SLOR in order for program directors to have “standardized, objective data to facilitate holistic review.”22 It is unclear what impact the recommended SLOR format will have on the use of agentic and communal descriptors for candidates.

Several limitations of this study should be considered. First, it is a single-center study with a small data set consisting of LOR from applicants who matched at our program. Second, while LOR represent one resource in the residency selection process, we only analyzed LOR of applicants who matched into our residency program. Excluding non-matched applicants may have inherently created bias and limited the generalization of the results to IM matched applicants. We purposely limited the sample to matched IM applicants because a sample of unmatched and matched applicants would have had a broader range of abilities, making it more difficult to know if the results were really due to gender bias or performance characteristics. Third, much of the prior work on gender bias in letters of recommendation has relied on Linguistic Inquiry and Word Count, which has a predefined dictionary, whereas we created a dictionary of terms from several recent sources, which may not be exhaustive; our manual review, however, allowed for confirmation that terms were used in the context of the applicant. In addition, while the frequency of agentic and communal terms has been widely used in studies to evaluate for gender bias, it represents only two linguistic domains; thus, it is possible that our methods were insufficient to detect all possible types of bias.6,8–11 Finally, we describe the differences and trends in agentic and communal word use in LOR over time, but in this retrospective study we were unable to ascertain how these words were interpreted and acted on by the reader.

Conclusion

While we found no difference in the frequency of agentic terms used to describe women and men applicants, the presence of agentic and communal terms in LOR increased significantly from 2009 to 2019. Despite the lack of gender difference in the frequency of agentic terms, it is unclear if equivalent use affects women in the residency selection process, as prior studies have shown that women are less likely to get male-stereotyped jobs when their qualifications are equivalent to their male counterparts’.23 Further research is needed to understand how IM residency programs use LOR in the residency selection process and how this could be impacted by gender bias. In addition, institutions should review narrative components of the residency application, like the LOR, to determine if bias exists, to better focus faculty development efforts for letter writers and those reviewing LOR to make residency selection decisions.

The Thruple of Self-Directed Learning: Marrying Student-led Community Outreach Clinics, Problem-Based Learning, and Professionalism Mentorship in Medical Education

As a student about to graduate from medical school who completed a graduate degree prior to entering medical school, I’ve developed opinions about various aspects of my medical education. I’ve had many profound and life-changing educational experiences, and I have also jumped through a lot of hoops. I’ve had excellent teachers and mentors guide me, and I can recount experiences where I felt demeaned or overlooked. I have been included and heard, but sometimes I’ve just had to put my head down and go with the flow. As I’ve had the opportunity to research medical education and reflect on my own experiences, I’ve come to believe that the best way forward in medical education involves embracing every student’s ownership of their own learning1. While medical education has continued to develop and progress throughout the last few decades, there are still many opportunities for improvement to nurture and mold the kind of physicians that we want for the future2. Three important, interrelated tenets of self-directed learning include 1) early and longitudinal student-led community outreach clinic involvement, 2) problem-based learning (PBL), and 3) professionalism mentorship, ideally within houses (i.e., small learning communities) or a similar system.

Student-directed learning is often proverbially and colloquially discussed on many levels within medical education. However, few institutions truly have a curriculum that fosters student ownership of learning. As part of a curriculum reinvention initiative at the University of Utah entitled MedEdMorphosis, we are designing and workshopping specific aspects of a newly envisioned educational structure that will ultimately foster greater ownership of learning and more impactful mentoring relationships between students and upperclassmen, residents, and master clinician educators. While it has been a long-term and complex process to imagine, design, workshop, and ultimately implement significant changes to our medical school structure and curriculum, I would like to focus this opinion piece on a few aspects of our plan that I am most passionate about and that I believe have the greatest opportunity to evolve medical education in Utah.

Student-Led Community Outreach Clinics

As a first- and second-year medical student, I had the opportunity to volunteer at a community clinic. I still vividly remember many of the experiences I had there, so many of which involved a moment in which something that I had learned in the lecture hall finally took real shape and form when I interviewed or examined a patient, or when I talked to my attending about a treatment plan. I remember the first time I observed costovertebral angle (CVA) tenderness in a patient with pyelonephritis. I remember doing a point-of-care (POC) glucose test myself and explaining a diabetic diet to my patient in Spanish. My confidence as a budding physician first began to blossom in this community clinic. I could see for the first time how my classroom learning might apply, and how it couldn’t. Looking back, some of my clinical experiences during official clerkships comprised a more observational, passive role, but in the student-run volunteer clinic, I got my first taste of what it felt like to help someone as a physician. It was a powerful and humbling role.

MedEdMorphosis hinges on our belief that early, consistent, longitudinal exposure to hands-on patient care, beginning as early as the first week of medical school, leads to an exceptional learning environment where students take ownership of their knowledge and skills from the beginning. They can contextualize the knowledge they gain and build a framework for how to improve their skills and hone their understanding of scientific concepts. Nothing is learned in a vacuum. Students acquire knowledge and skills with the patient (and likely specific patients) in mind. Students will come to intimately understand the needs of underserved communities and develop greater empathy through actual experience rather than relying on vacuous lecture material. They will come to understand the roles of other healthcare professionals without having to take a separate course as they work closely with nurses, advanced practice providers such as nurse practitioners and physician assistants, therapists, social workers, and medical assistants weekly and longitudinally4. Student involvement can also add significant value to patient care, the hospital system, and the educational system5,6.

While at the outset this approach might appear to overwhelm the beginning student, the layered nature of student involvement in the student-led clinics will facilitate a stepwise approach to mastery of clinical skills, medical knowledge, and professionalism. In our current vision of how these student-led community outreach clinics will function, students in Phase 1 will apprentice and work closely with students in Phase 3 and beyond, along with supportive faculty. These students within houses will develop long-term, and hopefully close and trusting, relationships with each other. These relationships will be cultivated and nourished within the house learning communities and in longitudinal clinical work.

Problem-Based Learning

Having completed a PhD before attending medical school, I will admit that starting medical school felt like joining a herd and was far removed from the self-directed learning guided by specific aims that I experienced as a graduate student. In some ways, medical school felt like a step backward in my education rather than a step forward towards my future career. As I progressed in my medical education and began to work with patients, my favorite learning moments in medical school were those where I was given a relevant clinical question to answer myself. The importance of asking the right question and finding an applicable answer cannot be overstated.  

Multiple sources of evidence suggest that problem-based learning (PBL) is superior to traditional lecture-based formats, as evidenced by test scores as well as student preference7. PBL in its most traditional format would require students to bring problems and questions from their clinical experiences to a group of their peers and mentors, and then to discuss and research clinical approaches and solutions together. Though this may initially appear a haphazard approach, we envision a competency-based list of problems and subjects that should be covered in some way during Phase 1. An experienced, primary care-based clinician educator would be present to guide student inquiries and supplement the PBL discussions with relevant clinical experience and materials. We also envision that a library of asynchronous resources would be made available to students in addition to sources such as UpToDate, DynaMed, and other databases. Traditional PBL may also be supplemented by team-based learning (TBL) and case-based learning (CBL) separately, or elements of these learning modalities may be incorporated into PBL group activities. Successful mentorship and teaching within PBL groups will require highly experienced clinician educators who are invested in the learning outcomes of their students. These leaders will be ethical and open-minded, will seek genuine connection with their learners, and will create an environment of curiosity, psychological safety, and passion for science and quality patient care.

Longitudinal Mentorship within Houses

We believe that trusting relationships with mentors and a strong sense of community within medicine will produce the best doctors8. We plan to achieve these outcomes through implementing a house system where we foster close professional communities of students from all levels, residents, highly experienced clinician educators, and staff within the school of medicine. Students will receive increased support to network and explore specialties. They will have close interactions with residents and be better prepared for clinical clerkships, residency, and ultimately successful careers as community-oriented physicians. Students will also have important leadership opportunities as mentors and participants in student-led boards that will prepare them for future leadership roles in medicine. We believe that students will achieve their highest potentials when adequately and longitudinally supported in close-knit groups that incorporate many levels of trainees and highly experienced clinician educators.

In recent years, there has been significant discussion about how professionalism is portrayed, exemplified, and taught to medical students. Professionalism is sometimes invoked as a suppressive mantle to encourage lecture attendance or discourage changes from the heteronormative superstructures of medicine9,10. Sometimes medical professionalism is talked about as part of the “hidden curriculum” of medicine that teaches hierarchically appropriate behaviors11. However, we believe that professionalism is best defined and taught within relationships—trusting relationships where students are treated as humans and as capable, eager learners. Mistreatment has been another hot topic within the medical education literature12, but we further posit that mistreatment would require less attention if professionalism were taught compassionately within trusting mentorship relationships, such as those provided within houses. How can we encourage, teach, and measure the development of positive mentorship relationships within the house model? We hope to increase connection between students and mentors by training clinical educators who value and emphasize psychological safety13, appropriate feedback, empathy, and educational alliance14, as posited by the Connection Index (CI12) for measurement of connection between trainees and educators15. In a longitudinal study using a validated scoring system (the CI12), higher connection scores between educators and trainees were linearly associated with greater supervision attendance, higher personal achievement, and less negative emotional experience15. We hope to implement the CI12 or a similar objective scoring system to identify faculty who require greater development while also prioritizing teaching assignments and reallocating resources to those educators who excel at connecting with their trainees. We hope that these core principles of psychological safety, appropriate feedback, empathy, and educational alliance can become core tenets of the house system, so that we accelerate learning and community building amongst medical students and their mentors/educators.

Conclusion

The University of Utah is passionate about the upcoming changes proposed by MedEdMorphosis. As a graduating student, I am excited about rooting the educational experience of future students in longitudinal student-led community outreach clinics, PBL, and mentoring relationships within houses. As we envision the future that we want for medicine, we believe that this “thruple” will produce the physicians that we need and hope for by increasing student ownership of learning. Future students will have the experiences to anchor and contextualize their medical knowledge at student-led community outreach clinics. They will have the skills to identify questions and find clinically relevant answers by participating in PBL. They will understand professionalism in a non-toxic, holistic way through genuine connection and nurturing mentorship within their house relationships. We believe that medical education can and should be different, and we are boldly moving in the direction of the future that we want for our profession.

References

  1. Wu, J. H., Gruppuso, P. A., & Adashi, E. Y. (2021). The self-directed medical student curriculum. JAMA, 326(20), 2005. https://doi.org/10.1001/jama.2021.16312
  2. Pock, A.R., Durning, S.J., Gilliland, W.R. et al. Post-Carnegie II curricular reform: a north American survey of emerging trends & challenges. BMC Med Educ 19, 260 (2019). https://doi.org/10.1186/s12909-019-1680-1
  3. Modi, A., Fascelli, M., Daitch, Z., & Hojat, M. (2016). Evaluating the relationship between participation in student-run free clinics and changes in empathy in medical students. Journal of Primary Care & Community Health, 8(3), 122–126. https://doi.org/10.1177/2150131916685199
  4. Farlow, J. L., Goodwin, C., & Sevilla, J. (2015). Interprofessional Education Through Service-Learning: Lessons from a student-led free clinic. Journal of Interprofessional Care, 29(3), 263–264. https://doi.org/10.3109/13561820.2014.936372
  5. Gonzalo, J. D., Lucey, C., Wolpaw, T., & Chang, A. (2017). Value-added clinical systems learning roles for medical students that transform education and health. Academic Medicine, 92(5), 602–607. https://doi.org/10.1097/acm.0000000000001346
  6. Gonzalo, J. D., Dekhtyar, M., Hawkins, R. E., & Wolpaw, D. R. (2017). How can medical students add value? identifying roles, barriers, and strategies to advance the value of undergraduate medical education to patient care and the health system. Academic Medicine, 92(9), 1294–1301. https://doi.org/10.1097/acm.0000000000001662
  7. Trullàs, J.C., Blay, C., Sarri, E. et al. Effectiveness of problem-based learning methodology in undergraduate medical education: a scoping review. BMC Med Educ 22, 104 (2022). https://doi.org/10.1186/s12909-022-03154-8
  8. Sklar DP, McMahon GT. Trust Between Teachers and Learners. JAMA. 2019;321(22):2157–2158. doi:10.1001/jama.2018.22130
  9. Hafferty, Frederic W. PhD; O’Brien, Bridget C. PhD; Tilburt, Jon C. MD Beyond High-Stakes Testing: Learner Trust, Educational Commodification, and the Loss of Medical School Professionalism, Academic Medicine: June 2020 – Volume 95 – Issue 6 – p 833-837 doi: 10.1097/ACM.0000000000003193
  10. Lee JH. The weaponization of medical professionalism. Acad Med. 2017;92:579–580.
  11. Azmand, S., Ebrahimi, S., Iman, M., & Asemani, O. (2018). Learning professionalism through hidden curriculum: Iranian medical students’ perspective. Journal of Medical Ethics and History of Medicine, 11, 10.
  12. Cook, A. F., Arora, V. M., Rasinski, K. A., Curlin, F. A., & Yoon, J. D. (2014). The prevalence of medical student mistreatment and its association with burnout. Academic Medicine: Journal of the Association of American Medical Colleges, 89(5), 749–754. https://doi.org/10.1097/ACM.0000000000000204
  13. Torralba KD, Loo LK, Byrne JM, Baz S, Cannon GW, Keitz SA, Wicker AB, Henley SS, Kashner TM. Does Psychological Safety Impact the Clinical Learning Environment for Resident Physicians? Results From the VA’s Learners’ Perceptions Survey. J Grad Med Educ. 2016 Dec;8(5):699-707. doi: 10.4300/JGME-D-15-00719.1. PMID: 28018534; PMCID: PMC5180524.
  14. Telio, Summer MD; Ajjawi, Rola PhD; Regehr, Glenn PhD The “Educational Alliance” as a Framework for Reconceptualizing Feedback in Medical Education, Academic Medicine: May 2015 – Volume 90 – Issue 5 – p 609-614. doi: 10.1097/ACM.0000000000000560
  15. Puder D, Dominguez C, Borecky A, Ing A, Ing K, Martinez AE, Pereau M, Kashner TM. Assessing Interpersonal Relationships in Medical Education: the Connection Index. Acad Psychiatry. 2022 Jan 22. doi: 10.1007/s40596-021-01574-0. Epub ahead of print. PMID: 35064549.

Identifying Key Items in Systemic Lupus Erythematosus for Undergraduate Medical Education: A Consensus Study

Abstract

Objective:  The purpose of this study is to use consensus methodology involving various educators to identify key curricular items about systemic lupus erythematosus that medical students should learn about during undergraduate medical education.

Methods:  86 faculty and housestaff members were invited to participate in a 3-step Delphi consensus process. Step 1 involved reviewing the current items in the curriculum and requesting suggestions for additional items. In step 2, each participant rated every item’s importance for inclusion in the medical school curriculum using a 5-point Likert scale (1 = not at all important; 5 = extremely important). In step 3, participants were given the group’s mean and mode rating for each item, reminded of their own initial rating, and asked to make a final 5-point rating. After the final step, items rated ≥4 (“very important” or “extremely important”) by at least 80% of participants were retained.

Results: 44 participants accepted the invitation to join our consensus project (51%); 31 participants completed all steps (37% of invited members, 70% of accepted participants).

In step 1, 61 items were added as suggestions to the curriculum, leading to a total of 82 items to be rated. The consensus process eliminated 50 items, leaving 32 in the final list of key teaching elements for lupus.

Conclusion: Using a systematic consensus exercise, a diverse group of educators participating in the lupus consensus project identified key teaching items to prioritize during medical school education. 

Introduction

Creating a curriculum that adequately teaches systemic lupus erythematosus (SLE) can be overwhelming for both the medical student and the teacher. Systemic lupus, or simply “lupus” for this publication, is a complex multi-organ disease entity with multiple clinical and laboratory features, as displayed in the 2019 EULAR/ACR Classification Criteria(1). When learning about lupus, students can feel confused by this complex disease given the multiple facets involved. Mok and colleagues found that students grade their confidence in their knowledge of lupus as neutral(2), but these authors also demonstrate that a large proportion of students use external curricular resources such as the internet and materials provided by professional societies. Kerezoudis et al. report that students believe lupus represents a “model autoimmune systemic disease that can provide the opportunity for deeper understanding of other systemic autoimmune diseases”(3). Medical students also perceive lupus as more life-threatening, more burdensome, and as having worse consequences than lupus patients themselves do, given the possible complications(4).

Because SLE is recognized as a rheumatologic disease, it is reasonable that the responsibility for the undergraduate medical curriculum in lupus is primarily entrusted to rheumatology faculty. However, lupus spans multiple complex systems; thus, medical educators outside of rheumatology also hold a stake in determining what elements of lupus are appropriate for undergraduate medical education (UME). Although rheumatology faculty may be identified as curriculum leaders for SLE, faculty in other disciplines relevant to lupus (e.g., nephrology, pulmonology, cardiology, dermatology) may have varying—even conflicting—opinions regarding specific content or learning objectives. Students also recognize that SLE needs a multidisciplinary approach (5). These factors add further complexity to curricular design. Recent literature stresses the importance of teaching epidemiology, pathogenesis, clinical manifestations, management, and treatment but offers little guidance in determining what the essential teaching items should be in UME (2,3).

The aim of this study was to use consensus methodology involving various stakeholders to identify key lupus curricular items that medical students should learn during their four-year education. Identifying specific key items will help streamline medical education for lupus and uniformly prepare graduating students to recognize key aspects of this complex disease.

Materials and Methods

Recruitment

Key stakeholders were identified through review of all faculty engaged across the range of rheumatology educational experiences at the University of Utah School of Medicine (UUSOM). This included all adult and pediatric rheumatology faculty and fellows, internal medicine chief residents, all content experts in UME, members of the curriculum evaluation committee, all deans, internal medicine clerkship directors, and preclinical curriculum course directors. Content experts included PhD, MD, and DO faculty assigned by UUSOM as designated advisors in specific areas of interest. Designated content experts were identified in all subjects, including Gross Anatomy, Allergy and Immunology, Cardiology, Dermatology, Community Engaged Learning, Embryology, Endocrine, Genetics, Gastroenterology, Histology, Hematology/Oncology, Human Behavior, Infectious Disease, Metabolism and Nutrition, Pathology, Pulmonary, Rheumatology, and Renal (https://medicine.utah.edu/students/programs/md/curriculum/core-educator-program/domain-experts.php).

 A total of 86 faculty and housestaff members from the UUSOM were identified as stakeholders and invited by email to participate in the consensus process.

Consensus Process

Initial review of the four-year UME curriculum at UUSOM identified 21 content items currently included in lupus instruction across all subject areas. This initial review was conducted by the designated rheumatology content expert (JKT) for UUSOM.

A 3-step Delphi consensus process was conducted. In step 1, invited participants reviewed these 21 items in the current curriculum and suggested additional items. In step 2, each participant rated each item’s importance to be included in the medical school curriculum using a 5-point Likert scale (1 = not at all important; 5 = extremely important); items suggested in step 1 were also included. In step 3, participants were given the group’s mean and mode rating for each item from step 2, reminded of their own initial rating, and asked to make a final rating for each item. Individual responses to each step remained anonymous. All communication, including solicitation and the rating procedures, occurred by email.

Statistical analysis

We analyzed responses to each survey item by calculating the mean, mode, and standard deviation. After the final step, individual items rated ≥4 (“very important” or “extremely important”) by at least 80% of participants were retained; these items defined the set of lupus curricular elements representing the consensus of the group. This study was deemed IRB exempt.
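For illustration, this retention rule can be expressed compactly; below is a minimal sketch in R using made-up ratings, where the data frame, item names, and values are hypothetical stand-ins rather than study data.

# Hypothetical final-round ratings: one row per participant, one column per item.
ratings <- data.frame(
  item_01 = c(5, 4, 4, 5, 3),
  item_02 = c(2, 3, 4, 3, 2),
  item_03 = c(5, 5, 4, 4, 4)
)

# Summary statistics of the kind fed back to participants between rounds.
item_mean <- sapply(ratings, mean)
item_sd   <- sapply(ratings, sd)
item_mode <- sapply(ratings, function(x) as.numeric(names(which.max(table(x)))))

# Retention rule: keep an item when at least 80% of participants rated it >= 4
# ("very important" or "extremely important").
prop_high <- sapply(ratings, function(x) mean(x >= 4))
retained  <- names(ratings)[prop_high >= 0.80]
retained  # item_01 and item_03 are retained; item_02 is eliminated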

Results

Forty-four participants accepted the invitation to join our consensus project (51% of total invited members). Thirty-one participants completed all steps of the consensus project (37% of invited members, 70% of participants who accepted). The majority were rheumatologists (16, 52% of the total), 4 participants were from internal medicine, and 11 participants were medical school educators (6 of these also had internal medicine backgrounds). Three medical school deans participated. Participants also included designated content experts in Gross Anatomy, Community Engaged Learning, Embryology, Endocrine, Histology, Hematology/Oncology, Human Behavior, Metabolism and Nutrition, Pathology, Pulmonary, Physiology, and Rheumatology.

In step 1, 61 items were suggested as additions to the curriculum, leading to a total of 82 items to be rated and reviewed (see Supplement 1). Additional items included knowledge of innate and adaptive immunity, cardiac manifestations, and antiphospholipid antibody syndrome. Other additions included learning about social determinants of health and how these impact diagnosis, treatment, and prognosis.

The consensus process eliminated 50 of the 82 items. Thirty-two items remained in the final list of key teaching elements for lupus (see Table 1). In a post-hoc analysis of ratings provided by rheumatologists, 9 additional elements were identified, though these did not meet total group consensus (see Table 2). Seventeen of the original 21 items remained after completion of this study.

Table 1: Identified Key Teaching Items meeting >80% total group consensus. * Additional items proposed by consensus group; DAH – diffuse alveolar hemorrhage
Table 2: Additional identified key items recommended by rheumatologists but not meeting total group consensus. * Additional items proposed by consensus group; CAD – coronary arterial disease; CVA – cerebrovascular accident

Discussion

Through consensus methodology, this study identified 32 key lupus items to prioritize in the UME curriculum. Multiple original items were eliminated or modified during this process, including items detailing pathophysiology such as identifying the specific type of hypersensitivity reaction and the pathologic identification of lupus nephritis. These items may be too granular for medical student education. Multiple items were added, including discussion of epidemiology and risks for increased morbidity and mortality. Interestingly, nine additional items were recommended by rheumatologists in the post-hoc analysis. These items reflected more detailed knowledge of clinical features, laboratory abnormalities, and the need for treatment beyond steroids (i.e., steroid-sparing agents). Rheumatologists likely placed higher value on medical student awareness of variable lupus phenotypes and the range of management options.

Recent rheumatology publications support the idea that early recognition of lupus symptoms leads to earlier diagnosis and potentially improved outcomes. Kernder et al. illustrated that a delayed diagnosis of lupus is associated with worse outcomes (6). Oglesby et al. demonstrated that “patients diagnosed with SLE sooner may experience lower flare rates, less healthcare utilization, and lower costs” (7). Thus, teaching graduating medical students the varied clinical manifestations and laboratory abnormalities may shorten time to diagnosis and allow earlier initiation of steroid-sparing agents, which may, in part, improve outcomes for lupus patients.

UME is undergoing curriculum reform with an increasing desire for interdisciplinary overlap in education. Lupus is a prototype condition that requires layered and overlapping knowledge of basic science, clinical immunology, and pathology (3). Engaging stakeholders from various disciplines in a systematic consensus process facilitates integration of many perspectives into a more comprehensive curriculum. Curriculum integration creates new collaborations within the community of teachers at an academic center and more robust, impactful educational experiences for learners. A multidisciplinary fourth-year elective called “Understanding Lupus” successfully framed lupus as a chronic disease process that ties multiple interdisciplinary objectives together with basic science (5). The same concept may be applied when developing teaching sessions at the start of medical education, creating a framework for the preclinical curriculum that uses these key items as building blocks.

Limitations of this study include conducting the consensus process with stakeholders at a single academic center. Another limitation is the low percentage of invited participants who completed all three steps of the project (37%). The study also did not include persons uninvolved in medical education, and thus did not capture data from rheumatologists in non-academic centers.

The next step involves using these key items to develop learning objectives. These objectives can then serve as a roadmap for curriculum design to educate students about this complex disease process.

References

  1. Aringer M, Costenbader K, Daikh D, Brinks R, Mosca M, Ramsey-Goldman R, et al. European League Against Rheumatism/American College of Rheumatology Classification Criteria for Systemic Lupus Erythematosus. Arthritis Rheumatol. 2019;71:1400-1412.
  2. Mok M, Lo Y, Lau C. A Needs Assessment and Review of Curriculum Content of Teaching on Systemic Lupus Erythematosus (abstract). Arthritis Rheumatol. 2013.
  3. Kerezoudis P, Lontos K, Apostolopoulou A, Christofides A, Banos A, Leventis D, et al. Lupus in medical education: student awareness of basic, clinical, and interdisciplinary aspects of complex diseases. J Contemp Med Educ. 2016;4:97-106.
  4. Nowicka-Sauer K, Pietrzykowska M, Banaszkiewicz D, Hajduk A, Czuszyńska Z, Smoleńska Ż. How do patients and doctors-to-be perceive systemic lupus erythematosus? Rheumatol Int. 2016;36:725-9.
  5. Nambudiri VE, Newman LR, Haynes HA, Schur P, Vleugels RA. Creation of a novel, interdisciplinary, multisite clerkship: “understanding lupus”. Acad Med. 2014;89:404-9.
  6. Kernder A, Richter JG, Fischer-Betz R, Winkler-Rohlfing B, Brinks R, Aringer M, et al. Delayed diagnosis adversely affects outcome in systemic lupus erythematosus: Cross sectional analysis of the LuLa cohort. Lupus. 2021;30:431-438.
  7. Oglesby A, Korves C, Laliberté F, Dennis G, Rao S, Suthoff E, et al. Impact of early versus late systemic lupus erythematosus diagnosis on clinical and economic outcomes. Appl Health Econ Health Policy. 2014;12:179-190.

2022 Journal of the Academy of Health Sciences: A Pre-Print Repository

Measuring the Learning Orientation Fostered by Pediatric Residency Programs: Adapting an Instrument Developed for UME

Abstract

Background: An essential component of an educational program’s learning environment is its learning orientation. Several instruments attempt to measure the learning environment in graduate medical education (GME) but none focus on learning orientation. Thus, it is challenging to know if competency-based educational interventions, such as Education in Pediatrics Across the Continuum (EPAC), are achieving their objective of supporting mastery learning.

Objective: To revise Krupat’s Educational Climate Inventory (ECI), originally designed for medical students, and determine the modified instrument’s psychometric properties when used to measure the learning orientation in GME programs.

Methods: We included 12 items from the original ECI in our GME Learning Environment Inventory (GME-LEI) and added 10 additional items. We hypothesized a three-factor structure consisting of two sub-scales from the original ECI (Centrality of Learning, also known as Learning Orientation, and Competitiveness and Stress) and a new Support of Learning sub-scale. We administered the GME-LEI electronically to all residents (EPAC and traditional) in 4 pediatric GME programs across the United States. We performed confirmatory factor and parallel factor analyses, calculated Cronbach’s alpha for each sub-scale, and compared mean sub-scale scores between EPAC and traditional residents using a two-way analysis of variance.

Results: A total of 127 pediatric residents from 4 participating GME programs completed the GME-LEI. The final three-factor model was an acceptable fit to the data (CFI 0.86, RMSEA 0.07). Cronbach’s alpha for each sub-scale was acceptable (Centrality: 0.87, 95% CI 0.83, 0.9; Stress: 0.73, 95% CI 0.66, 0.8; Support: 0.77, 95% CI 0.71, 0.83). Mean scores on each sub-scale varied by program type (EPAC vs traditional), with EPAC residents reporting significantly higher scores on the Centrality of Learning sub-scale (2.03, SD 0.30, vs 1.79, SD 0.42; p=.023).

Conclusions: Our analysis suggests that the GME-LEI reliably measures three distinct aspects of the GME learning environment. GME-LEI scores are sensitive to detecting differences in the centrality of learning, or learning orientation, of EPAC and traditional pediatric residents. The GME-LEI can be used to identify areas of improvement in GME and help programs better support mastery learning of residents.

Introduction

The learning environment, or the setting in which learning happens, powerfully influences learners’ educational experience. Perceptions of an environment that supports learning are associated with higher levels of learner achievement,1 greater motivation for learning,2 lower rates of learner burnout,3-5 and improved patient experiences.6 The critical influence of the learning environment is noted by accrediting bodies such as the Liaison Committee on Medical Education7 and the Accreditation Council for Graduate Medical Education.8,9 Both require medical education programs to monitor their learning environments.

An essential component of the learning environment is the learning orientation that the program fosters.1 One way to conceptualize learning orientation is through Dweck’s framework of a growth versus a fixed mindset.10,11 A growth mindset, or mastery orientation, focuses on increasing learners’ knowledge and skills, critical thinking, and self-directed learning, all of which are essential to the practice of medicine.12 Uncertainty and learning from mistakes are valued, and growth is fostered through constructive feedback. In contrast, a fixed mindset, or performance orientation, focuses on measuring and documenting learners’ achievement. Mistakes are viewed as failures, and feedback is avoided because it critiques performance.

In undergraduate medical education (UME), a variety of instruments exist to assess the learning environment;13-19 however, many of these instruments are of mixed quality and are often not informed by conceptual or theoretical frameworks.20,21 Furthermore, only one of these instruments, the Educational Climate Inventory (ECI) developed by Krupat et al,16 focuses specifically on a mastery versus performance orientation. With its UME focus, we will refer to this instrument as the UME-ECI. The UME-ECI allows educators to assess students’ perceptions of the learning orientation fostered by medical school and per the authors “has potential as an evaluation instrument to determine the efficacy of attempts to move health professions education toward learning and mastery”.16 Importantly, unlike many other instruments which measure elements of the UME learning environment, the UME-ECI meets many of the criteria for learning environment assessment.20

Compared to UME, fewer instruments exist to measure the learning environment in graduate medical education (GME).22-26 Two of the most frequently used instruments, developed by Boor et al.22 and Roff et al.,25 are well validated but focus on teacher-learner relationships and supervision of learners without addressing the learning orientation. Given the importance of learning orientation as part of an institution’s learning environment, and the lack of validated instruments measuring learning orientation in GME, we adapted the UME-ECI for the GME setting and created a new instrument, the GME Learning Environment Inventory (GME-LEI). We set out to collect reliability and validity evidence for its use in GME and to answer two research questions:

1. Will the GME-LEI prove to have similar psychometric properties as the UME-ECI, including sub-scale structure and sub-scale reliability?

2. Do resident physicians enrolled in the Education in Pediatrics Across the Continuum (EPAC) program, a competency-based medical education program that is known to foster a mastery learning orientation, score differently on the GME-LEI compared to residents in traditional pediatric residencies?

In this article, we describe the development of the GME-LEI and present the outcomes of the data we collected using the adapted instrument to address our research questions. 

Methods

Instrument Development and Administration:

To measure the learning orientation within GME programs consisting of both EPAC and traditional pediatric residents, we revised the UME-ECI in a stepwise fashion. We solicited the input of the EPAC steering committee, which consists of 4 to 5 representatives of each EPAC school and 4 to 6 consultants funded to guide EPAC by the Association of American Medical Colleges (AAMC), during semi-annual face-to-face meetings. Based on the iterative discussions of this group, we retained most items for 2 of the 3 UME-ECI sub-scales to be used in the GME-LEI: Centrality of Learning (7 of 10 items) and Competitiveness and Stress in the Learning Environment (5 of 6 items).

Centrality of Learning focuses on the learning orientation (i.e., mastery vs performance) of the residency program, while Competitiveness and Stress in the Learning Environment focuses on how programs create stress and competitiveness in the learning environment. We omitted all items in the UME-ECI’s Passivity and Memorization sub-scale, as they were oriented toward classroom or preclinical learning and we perceived them to have little relevance to learning in the clinical environment. We replaced this sub-scale with a new Support of Learning sub-scale and included 6 items pertaining to supervisory relationships, autonomy, and longitudinal experiences, elements we determined to be important in the context of the clinical environment. For example, we asked about residents’ ability to assume higher levels of responsibility when they feel ready and about the program fostering long-term relationships with attendings.

Members of the national EPAC steering committee reviewed each item in the GME-LEI to ensure clarity of the language for residents and face validity when used in a GME setting. During this review we replaced, for example, ‘faculty’ with ‘supervisor’ and ‘students’ with ‘residents’. We also added 2 items to the Centrality of Learning sub-scale that make explicit reference to trust and partnering within interprofessional healthcare teams. The final GME-LEI instrument contained 22 items organized into the 3 sub-scales (Centrality of Learning = 11, Competitiveness and Stress in the Learning Environment = 6, Support of Learning = 5). Participants responded to each item on a four-point Likert scale ranging from “strongly agree” to “strongly disagree.”

Participants:

We sent the GME-LEI electronically to all residents (EPAC and traditional) in 4 pediatric GME programs across the United States. EPAC is a pilot program designed to advance trainees through medical school and pediatric residency training based on competence, as opposed to time-in-training. What sets EPAC apart is its use of Entrustable Professional Activities (EPAs)27,28 as an assessment framework, along with entrustment supervision scales to track trainees’ progress and determine their readiness for advancement. Specific details of EPAC’s assessment process have been reported previously.28 Trainees enrolled in EPAC also take part in additional longitudinal experiences compared to trainees in traditional UME and GME programs. These additional longitudinal experiences, combined with the EPA assessment framework, are intended to help foster a mastery orientation in their respective programs.29

Instrument Administration and Data Collection:

We administered the GME-LEI electronically between December 2019 and April 2020 using the Association of Pediatric Program Directors (APPD) LEARN Coordinating Center,30 an existing platform that was part of a larger pediatrics educational research network. We assigned each respondent a unique identifier so that responses were anonymous but linked to the individual’s participating institution, year of training, and EPAC status. Because responses to the survey were part of a program evaluation, all residents were expected to participate, but anonymously, so that a decision not to participate could not be tracked and could not result in any negative consequence. Distribution of the instrument varied by program: some programs provided an electronic survey link to each resident during the semi-annual meeting with the residency program director, while others sent the survey link to all residents via email. All responses were stored in the APPD LEARN Coordinating Center database. Data were collected as part of an EPAC program evaluation, and thus the data collection and analysis were determined by the IRB at each participating institution to be exempt human subjects research.

Data Analysis:

We first performed a confirmatory factor analysis (CFA) on the items retained from the UME-ECI to ensure that the expected two-factor structure was present. We then fitted a three-factor CFA model to all GME-LEI responses. We allowed the 2 additional Centrality of Learning items to load on any of the three factors, hypothesizing that they would load only on Centrality of Learning. We assigned the new Support of Learning items to a third factor and examined modification indices to determine whether any of the Support of Learning items would be better placed in one of the original factors than in the new factor. Finally, we performed a check for unidimensionality on each sub-scale by conducting a parallel exploratory factor analysis to evaluate the fit of a single underlying factor for each sub-scale, and then calculated Cronbach’s alpha for each sub-scale to summarize its interitem reliability.
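As a concrete illustration of this pipeline, the sketch below uses R (the environment the authors report using) with the lavaan and psych packages; the simulated responses, item names, and sub-scale assignments are hypothetical placeholders, not the study instrument or data.

set.seed(1)
n <- 127  # same order of magnitude as the respondent pool

# Simulate placeholder responses in which each sub-scale's items share one latent factor.
sim_scale <- function(n_items, prefix) {
  latent <- rnorm(n)
  items <- sapply(seq_len(n_items), function(i)
    pmin(pmax(round(2.5 + latent + rnorm(n, sd = 0.8)), 1), 4))
  colnames(items) <- paste0(prefix, seq_len(n_items))
  items
}
responses <- data.frame(sim_scale(11, "cen_"), sim_scale(6, "str_"), sim_scale(5, "sup_"))

library(lavaan)  # confirmatory factor analysis
library(psych)   # parallel analysis and Cronbach's alpha

model <- '
  centrality =~ cen_1 + cen_2 + cen_3 + cen_4 + cen_5 + cen_6 +
                cen_7 + cen_8 + cen_9 + cen_10 + cen_11
  stress     =~ str_1 + str_2 + str_3 + str_4 + str_5 + str_6
  support    =~ sup_1 + sup_2 + sup_3 + sup_4 + sup_5
'
fit <- cfa(model, data = responses)
fitMeasures(fit, c("cfi", "rmsea"))  # the fit indices reported in this paper
modindices(fit)                      # flags items that would fit better on another factor

# Unidimensionality check and interitem reliability, shown for one sub-scale.
cen_items <- responses[, paste0("cen_", 1:11)]
fa.parallel(cen_items, fa = "fa")    # parallel analysis
psych::alpha(cen_items)              # Cronbach's alpha with 95% CI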

The second part of the analysis addressed the second research question. We calculated separate mean scores and standard deviations for residents at each program site and in each program type (EPAC vs traditional). Within the Competitiveness and Stress sub-scale, one positively worded item was reverse scored, yielding a scale on which higher scores indicated more competitiveness and stress. One negatively worded item in the Support of Learning sub-scale was likewise reverse scored, so that higher scores indicated more support. All items in the Centrality of Learning sub-scale were positively worded, so higher scores indicated a more mastery-focused learning orientation.
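On a scale coded 1 through 4, reverse scoring maps a response x to min + max - x, i.e., 5 - x, so the two poles of the scale swap. A one-line sketch follows, assuming that coding (the instrument's actual numeric coding is an assumption here).

# Reverse score a 4-point item coded 1-4: min + max - x swaps the scale's poles.
reverse_4pt <- function(x) 5 - x
reverse_4pt(c(1, 2, 3, 4))  # returns 4 3 2 1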

We compared mean scores across groups using a two-way analysis of variance. Data analysis was conducted using R 3.6 (R Core Team, Vienna, Austria). We used two-sided hypothesis tests and considered p values less than .05 significant.
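For illustration, here is a minimal sketch of such a two-way analysis of variance in R; the simulated scores borrow only the EPAC and traditional group means and SDs reported in the Results, while the group sizes and site assignments are hypothetical.

set.seed(1)
# Hypothetical per-resident Centrality of Learning scores: EPAC simulated at
# mean 2.03 (SD 0.30), traditional at mean 1.79 (SD 0.42); 127 residents total.
scores <- data.frame(
  centrality = c(rnorm(30, 2.03, 0.30), rnorm(97, 1.79, 0.42)),
  type = factor(rep(c("EPAC", "traditional"), c(30, 97))),
  site = factor(sample(LETTERS[1:4], 127, replace = TRUE))
)
fit_aov <- aov(centrality ~ type + site, data = scores)
summary(fit_aov)  # two-sided tests; p < .05 treated as significant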

Results

A total of 127 pediatric residents from 4 participating GME programs completed the GME-LEI. The response rates of program sites A-D were 53%, 18%, 35%, and 67%, respectively, with a total response rate of 40% across sites. Respondents’ characteristics are described in Table 1; two responses did not include program information and thus were not used in the CFA.

Table 1- Respondent Characteristics

In response to our first research question, the items from the UME-ECI were well characterized by a two-factor model (CFI 0.94, RMSEA 0.06, Supplementary Appendix 1). In the three-factor model, the two new GME-LEI items addressing relationships with interprofessional healthcare team members loaded significantly only onto the Centrality of Learning factor (Supplementary Appendix 2). All but one of the new items in the GME-LEI’s Support of Learning sub-scale loaded significantly onto a third factor, and model fit could not be improved by moving them to any other sub-scale. The final three-factor model was an acceptable fit to the data (CFI 0.86, RMSEA 0.07, Figure 1).

Figure 1 – Fitted Three-Factor Model of GME-LEI Responses

A parallel factor analysis of each sub-scale suggested that each was best interpreted as unidimensional (Supplementary Appendix 3). Cronbach’s alpha for each sub-scale was acceptable (Centrality: 0.87, 95% CI 0.83, 0.9; Stress: 0.73, 95% CI 0.66, 0.8; Support: 0.77, 95% CI 0.71, 0.83).

In response to research question 2, respondents’ mean score on each sub-scale varied by program site (Table 2) and program type, EPAC vs traditional (Table 3). EPAC residents had significantly higher scores on the Centrality of Learning sub-scale (2.03, SD 0.30, vs 1.79, SD 0.42; p=.023) suggesting a more mastery focused learning orientation compared to residents in traditional programs.

Table 2- Mean GME-LEI Scores by Program Site
Table 3- Mean GME-LEI Scores by Program Type

Discussion

In this paper, we describe our revision of an existing instrument used to measure the learning orientation in UME programs. We also describe our effort to provide reliability and validity evidence for this revised instrument, the GME-LEI, which measures the learning environment in GME programs and attends to learning orientation. The need for an instrument that measures learning orientation is pressing as enthusiasm and evidence for mastery orientation grow and as the learning environment remains pivotal to national accreditation. Our analysis suggests that the GME-LEI reliably measures three distinct aspects of the learning environment in GME programs: Centrality of Learning, Competitiveness and Stress in the Learning Environment, and Support of Learning. We confirmed through a CFA that the items in the GME-LEI were well characterized by this three-factor model, and we calculated Cronbach alpha values that support the instrument’s reliability. With its focus on learning orientation, the GME-LEI is unique among learning environment instruments in GME and helps to fill a gap in the literature and practice of measuring GME learning environments.

Our analysis found that EPAC residents tended to have higher mean scores than traditional residents on the Centrality of Learning sub-scale, suggesting that the GME-LEI may be sensitive to differences in the learning orientation of EPAC and traditional pediatric residents. EPAC intentionally advances trainees based on competency rather than time-in-training. In the EPAC program, students exhibited self-directed learning (e.g., actively seeking opportunities to engage in patient care) and action-oriented discernment (e.g., actively seeking constructive feedback).29 These behaviors, together with the unadjusted difference detected by the GME-LEI, suggest that EPAC may foster a learning environment that supports mastery learning. Thus, the GME-LEI could be a useful tool for further elucidating the impact of competency-based medical education on micro-levels of learning.31

Limitations

Our study has several important limitations, including the relatively small sample size and the variable numbers of respondents at the four program sites. Both factors limited our ability to adjust for potential confounders that may have affected mean GME-LEI scores. The GME-LEI was also administered over a period of 5 months; learning environments are not static, and residents’ perceptions in December may have differed from their perceptions in April. Finally, we revised the existing UME-ECI primarily for program evaluation, not research, so the GME-LEI was not extensively piloted prior to administration.

Conclusion

The GME-LEI reliably measures three distinct aspects of the GME learning environment. As its scores may be sensitive to detecting differences in learning orientation, the GME-LEI could be used to identify areas of improvement and help GME programs better support mastery learning.

References

  1. Genn JM. AMEE Medical Education Guide No. 23 (Part 2): Curriculum, environment, climate, quality and change in medical education – a unifying perspective. Med Teach. 2001;23(5):445-454.
  2. Delva MD, Kirby J, Schultz K, Godwin M. Assessing the relationship of learning approaches to workplace climate in clerkship and residency. Acad Med. 2004;79(11):1120-1126.
  3. Billings ME, Lazarus ME, Wenrich M, Curtis JR, Engelberg RA. The effect of the hidden curriculum on resident burnout and cynicism. J Grad Med Educ. 2011;3(4):503-510.
  4. Llera J, Durante E. Correlation between the educational environment and burn-out syndrome in residency programs at a university hospital. Arch Argent Pediatr. 2014;112(1):6-11.
  5. Sum MY, Chew QH, Sim K. Perceptions of the Learning Environment on the Relationship Between Stress and Burnout for Residents in an ACGME-I Accredited National Psychiatry Residency Program. J Grad Med Educ. 2019;11(4 Suppl):85-90.
  6. Smirnova A, Arah OA, Stalmeijer RE, Lombarts K, van der Vleuten CPM. The Association Between Residency Learning Climate and Inpatient Care Experience in Clinical Teaching Departments in the Netherlands. Acad Med. 2019;94(3):419-426.
  7. Liaison Committee on Medical Education. Functions and structure of a medical school: standards for accreditation of medical education programs leading to the MD degree. 2019. http://lcme.org/wp-content/uploads/filebase/standards/2020-21_Functions-and-Structure_2019-05-01.docx.
  8. Weiss KB, Bagian JP, Wagner R. CLER Pathways to Excellence: Expectations for an Optimal Clinical Learning Environment (Executive Summary). J Grad Med Educ. 2014;6(3):610-611.
  9. Weiss KB, Bagian JP, Wagner R, Nasca TJ. Introducing the CLER Pathways to Excellence: A New Way of Viewing Clinical Learning Environments. J Grad Med Educ. 2014;6(3):608-609.
  10. Dweck CS, Mangels JA, Good C. Motivational effects on attention, cognition, and performance. Motivation, emotion, and cognition: Integrative perspectives on intellectual functioning and development. Mahwah, NJ, US: Lawrence Erlbaum Associates Publishers; 2004:41-55.
  11. Wolcott MD, McLaughlin JE, Hann A, et al. A review to characterise and map the growth mindset theory in health professions education. Med Educ. Apr 2021;55(4):430-440. doi:10.1111/medu.14381
  12. Richardson D, Kinnear B, Hauer KE, et al. Growth mindset in competency-based medical education. Med Teach. Jul 2021;43(7):751-757. doi:10.1080/0142159x.2021.1928036
  13. Chan CYW, Sum MY, Tan GMY, Tor PC, Sim K. Adoption and correlates of the Dundee Ready Educational Environment Measure (DREEM) in the evaluation of undergraduate learning environments – a systematic review. Med Teach. Dec 2018;40(12):1240-1247. doi:10.1080/0142159x.2018.1426842
  14. Eggleton K, Goodyear-Smith F, Henning M, Jones R, Shulruf B. A psychometric evaluation of the University of Auckland General Practice Report of Educational Environment: UAGREE. Educ Prim Care. Mar 2017;28(2):86-93. doi:10.1080/14739879.2016.1268934
  15. Irby DM, O’Brien BC, Stenfors T, Palmgren PJ. Selecting Instruments for Measuring the Clinical Learning Environment of Medical Education: A 4-Domain Framework. Acad Med. Feb 1 2021;96(2):218-225. doi:10.1097/acm.0000000000003551
  16. Krupat E, Borges NJ, Brower RD, et al. The Educational Climate Inventory: Measuring Students’ Perceptions of the Preclerkship and Clerkship Settings. Acad Med. Dec 2017;92(12):1757-1764. doi:10.1097/acm.0000000000001730
  17. Marshall RE. Measuring the medical school learning environment. J Med Educ. Feb 1978;53(2):98-104. doi:10.1097/00001888-197802000-00003
  18. Pololi L, Price J. Validation and use of an instrument to measure the learning environment as perceived by medical students. Teach Learn Med. Fall 2000;12(4):201-7. doi:10.1207/s15328015tlm1204_7
  19. Shochet RB, Colbert-Getz JM, Wright SM. The Johns Hopkins learning environment scale: measuring medical students’ perceptions of the processes supporting professional formation. Acad Med. Jun 2015;90(6):810-8. doi:10.1097/acm.0000000000000706
  20. Schönrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. Key elements in assessing the educational environment: where is the theory? Adv Health Sci Educ Theory Pract. 2012;17(5):727-742.
  21. Boor K, Van Der Vleuten C, Teunissen P, Scherpbier A, Scheele F. Development and analysis of D-RECT, an instrument measuring residents’ learning climate. Med Teach. 2011;33(10):820-827.
  22. Cannon GW, Keitz SA, Holland GJ, et al. Factors determining medical students’ and residents’ satisfaction during VA-based training: findings from the VA Learners’ Perceptions Survey. Acad Med. Jun 2008;83(6):611-20. doi:10.1097/ACM.0b013e3181722e97
  23. Pololi LH, Evans AT, Civian JT, Shea S, Brennan RT. Assessing the Culture of Residency Using the C – Change Resident Survey: Validity Evidence in 34 U.S. Residency Programs. J Gen Intern Med. Jul 2017;32(7):783-789. doi:10.1007/s11606-017-4038-6
  24. Roff S, McAleer S, Skinner A. Development and validation of an instrument to measure the postgraduate clinical learning and teaching educational environment for hospital-based junior doctors in the UK. Med Teach. Jun 2005;27(4):326-31. doi:10.1080/01421590500150874
  25. Schönrock-Adema J, Visscher M, Raat AN, Brand PL. Development and validation of the Scan of Postgraduate Educational Environment Domains (SPEED): a brief instrument to assess the educational environment in postgraduate medical education. PLoS One. 2015.
  26. Andrews JS, Bale JF, Jr., Soep JB, et al. Education in Pediatrics Across the Continuum (EPAC): First Steps Toward Realizing the Dream of Competency-Based Education. Acad Med. Mar 2018;93(3):414-420. doi:10.1097/acm.0000000000002020
  27. Murray KE, Lane JL, Carraccio C, et al. Crossing the Gap: Using Competency-Based Assessment to Determine Whether Learners Are Ready for the Undergraduate-to-Graduate Transition. Acad Med. Mar 2019;94(3):338-345. doi:10.1097/acm.0000000000002535
  28. Caro Monroig AM, Chen HC, Carraccio C, Richards BF, Ten Cate O, Balmer DF. Medical Students’ Perspectives on Entrustment Decision-Making in an EPA Assessment Framework: A Secondary Data Analysis. Acad Med. 2020.
  29. Schwartz A, Young R, Hicks PJ; for APPD LEARN. Medical education practice-based research networks: facilitating collaborative research. Med Teach. 2016;38(1):64-74.
  30. Ten Cate O, Balmer DF, Caretta-Weyer H, Hatala R, Hennus MP, West DC. Entrustable Professional Activities and Entrustment Decision Making: A Development and Research Agenda for the Next Decade. Acad Med. 2021 Jul 1;96(7S):S96-S104. doi: 10.1097/ACM.0000000000004106. PMID: 34183610.