The Influence of Revising an Online Gerontology Program on the Student Experience

Acknowledgements

We acknowledge the support of the University of Utah Teaching and Learning Technologies, the University of Utah College of Nursing, and the University of Utah Consortium for Families and Health Research.

Funding

Program revisions were funded through a University of Utah Teaching and Learning Technologies Online Program Development Grant.

Declaration of Interest

We have no conflicts of interest to declare.

Abstract

The recent adoption of gerontology competencies for undergraduate and graduate education reflects a need for national standards developed to enhance and unify the field of gerontology. The Gerontology Interdisciplinary Program at the University of Utah revised all of its gerontology course offerings to align with the Association for Gerontology in Higher Education's (AGHE) Gerontology Competencies for Undergraduate and Graduate Education (2014), while also making improvements in distance instructional design. In this study, we examined student course evaluation scores and written comments in six Master of Science in Gerontology core courses (at both the 5000 and 6000 levels) prior to and following alignment with AGHE competencies and online design changes. Data included evaluations from the two semesters prior to and the two semesters following course revisions and were assessed using paired t-tests and thematic analysis. No statistically significant differences were found between pre- and post-revision evaluations. Post-revision qualitative comments did show an increased focus on interactive and engaging technology. These findings will be used for course and program quality improvement initiatives, including enhanced approaches to documenting and assessing competency-based education.

Keywords

Competency-based education, course evaluation, course revision, distance education

Background

Competency-based education (CBE) is growing in popularity and demand (Burnette, 2016; McClarty & Gaertner, 2015). Gerontology curriculum development has moved toward CBE, with national standards developed to enhance and unify the field of gerontology (Association for Gerontology in Higher Education [AGHE], 2014; Damron-Rodriguez et al., 2019). AGHE approved the Gerontology Competencies for Undergraduate and Graduate Education (AGHE, 2014), designed to serve as a curricular guide for undergraduate (i.e., majors, minors, certificates) and master's degree level programs. Benefits of using competencies for curricular revisions include: shifting focus to measurable outcomes (Burnette, 2016; Damron-Rodriguez et al., 2019; Wendt, Peterson, & Douglass, 1993), increasing program accountability for learning outcomes (Burnette, 2016; Damron-Rodriguez et al., 2019; McClarty & Gaertner, 2015), preparing students to graduate with necessary skills (McClarty & Gaertner, 2015), and training the gerontological workforce by bridging the gaps between aging services and gerontology education (Applebaum & Leek, 2008; Damron-Rodriguez et al., 2019).

As CBE has grown, online teaching and learning have also become more accessible and in demand (Means, Toyama, Murphy, Bakia, & Jones, 2010; Woldeab, Yawson, & Osafo, 2020). For programs looking to enhance curriculum and program accessibility, both CBE and distance course design are vital considerations. Quality design for courses incorporating CBE emphasizes opportunities for student application and practice, active learning strategies, and timely instructor response and feedback (Krause, Dias, & Schedler, 2015). In a previous paper (Dassel, Eaton, & Felsted, 2019), we described our approach to program-wide revisions intended to align with the AGHE competencies and to meet current recommendations in cyber-pedagogy. The University of Utah Gerontology Interdisciplinary Program (GIP) was in a position to make revisions that enhance both CBE and online instructional design using a course/credit model, which embeds competencies within a traditional approach to higher education that offers credit hours toward a degree (Council of Regional Accrediting Commissions [C-RAC], 2015). The University's Teaching and Learning Technologies (TLT) office released a funding opportunity for programs wanting to move completely online. The GIP applied for these funds with two purposes: 1) transition the Master of Science program to a completely online format, and 2) improve the quality and consistency of existing gerontology courses through a full curriculum review by the experts at TLT. The goal was to make the fully online transition in a manner that allowed for dynamic online learning and to incorporate CBE within the program. In 2015, the GIP began the work of revising all program courses to meet best practices of online learning and mapping program curricula to the national Gerontology Competencies for Undergraduate and Graduate Education (AGHE, 2014).

Course revisions were complete in 2017. We then applied for and received official UOnline Program status at the University of Utah and accreditation as a fully online program through the Northwest Commission on Colleges and Universities (2020). The University is a member of the State Authorization Reciprocity Agreement (SARA), which reduces the number of individual state regulations to monitor and makes the authorization process more efficient. Through SARA, the GIP is able to offer and expand certain educational opportunities to students in and out of the state of Utah (National Council for State Authorization Reciprocity Agreements [NC-SARA], 2020). In 2017, we were also awarded Program of Merit (POM) status from AGHE at the master's degree level. The process of curriculum review, competency mapping, and online revision and planning facilitated our application, review, and award of the POM.

Course revision and development followed a model that incorporated best practices in teaching pedagogy and online learning. These incorporated Fink's (2003) approach to designing college courses, using the DREAM exercise, situational factors exercise, course alignment grid, and taxonomy of significant learning. A backward design approach (Wiggins & McTighe, 2005) helped faculty begin with competencies and learning objectives, followed by identifying assessments to measure those objectives. Bloom's (1984) taxonomy was used to design assessments that accurately evaluate the learning experiences, and active learning principles (Bonwell & Eison, 1991; Prince, 2004) guided choices to facilitate dynamic online learning. Instructional designers met individually with instructors to work through, enhance, and redesign courses to facilitate this work.

Upon completion, the program continued to assess student learning using individual course assessments, grades, progress toward graduation, annual and exit student interviews, and alumni surveys. However, we wondered about the student experience of, and reaction to, the changes before and after revision of the entire curriculum. Because this process spanned four years and multiple courses, it became of interest to see whether existing data might facilitate a better understanding of the student experience pre- compared to post-program revision.

The purpose of this paper is to compare student course evaluations from six core courses of the Master of Science in Gerontology program before and after alignment with AGHE competencies and online design changes. The objective of this study is to analyze pre- and post-revision qualitative and quantitative student evaluations in order to assess indicators of program quality and improvement. We hypothesized that course evaluations would improve from pre- to post-revision. Testing of this hypothesis occurred through two aims:

Aim 1: Assess the changes in numerical course ratings provided by students pre to post course revision.

Aim 2: Assess the changes, pre to post course revision, in student open-ended feedback submitted with course evaluations.

Methods

Course Selection

For the purpose of the current study, we compared de-identified, anonymous student course evaluations in six of our Master of Science core courses before and after the course revision and alignment. The six core courses required in our Master of Science program are: 1) GERON 5001/6001: Introduction to Aging, 2) GERON 5370/6370: Health and Optimal Aging, 3) GERON 5002/6002: Services Agencies and Programs for Older Adults, 4) GERON 5500/6500: Social and Public Policy in Aging, 5) GERON 5604/6604: Physiology and Psychology of Aging, and 6) GERON 5003/6003: Research Methods in Aging (Note. 5000- and 6000-level courses are considered graduate level by the University of Utah). Two additional core courses, GERON 5990/6990: Gerontology Practicum and GERON 6970/6975: Gerontology Thesis/Project, were omitted from this study because they were newly created in an online format, are mentor-based (one instructor to one or two students), and do not receive evaluations due to the small course size.

These six core courses underwent significant redesign across three consecutive semesters. Each instructor worked one-on-one with an instructional designer provided through the UOnline grant mechanism. Instructional designers, associated with the University of Utah's TLT, aided course instructors in updating their courses with the latest technological media to provide online content in innovative and effective ways.

Course Evaluations

Faculty were guided in assessing and revising courses through the use of the AGHE competencies (2014) and Fink's (2003) and Bloom's (1984) taxonomies. AGHE competencies were first mapped across all gerontology courses, identifying redundancy, overlap, and missing content. This process is described in detail in Dassel et al. (2019). Faculty noted recommended revisions based on the competencies, specific to course objectives and content. These recommendations were incorporated as faculty worked with instructional designers on their assigned courses. Next, instructors used the framework of the taxonomies to redesign the student learning experience for an active online format. Fink's taxonomy is a non-hierarchical model that defines six major domains that need to be present for a complete learning experience: foundational knowledge, application, integration, human dimensions, caring, and learning how to learn (Fink, 2003). Bloom's taxonomy, revised by a group of cognitive psychologists in 2001, is a hierarchical model that defines and distinguishes six categories of learning (Bloom, 1984; Anderson & Krathwohl, 2001). Bloom's six categories, each intended to be mastered before moving to the next, are remember, understand, apply, analyze, evaluate, and create. These designations allow the accompanying assessments to be designed to accurately evaluate the learning experience by level.

A request to analyze student course evaluations was submitted to and reviewed by the Institutional Review Board (IRB) at the University of Utah. The IRB determined oversight was not required because this work does not meet the definition of Human Subjects Research. All student evaluations are completed anonymously. Evaluations are used as a quality improvement tool to assess course outcomes and faculty instruction. To obtain a representative sample of student evaluations, we assessed evaluations from the two consecutive semesters immediately prior to the course revision and the two consecutive semesters immediately following the course revision.

Course evaluations were emailed to students during the last month of the semester. Students were asked to voluntarily complete the anonymous course evaluations. The data, consisting of numerical scaled responses and open-ended comments, were summarized and provided to course instructors at the end of the semester once grades had been submitted. From the full list of course evaluation questions, we selected 10 quantitative questions that we felt were most relevant to course revision. The questions selected include: 1) Overall course evaluation, 2) The course objectives were clearly stated, 3) The course objectives were met, 4) The course content was well organized, 5) The course materials were helpful in meeting course objectives, 6) Assignments and exams reflected what was covered in the course, 7) I learned a great deal in this course, 8) As a result of this course, my interest in the subject increased, 9) Course requirements and grading criteria were clear, and 10) I gained an excellent understanding of concepts in this field. Response options were on a 6-point Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Open-ended questions asked students to comment on: 1) course effectiveness, 2) the online components of the course, and 3) comments intended for the instructor.

Data Analysis

Data analysis occurred in two phases. Phase one focused on quantitative data from the course evaluations. Pre- and post-revision data were aggregated for each course. Because students do not take a course multiple times, analyzing pre- to post-revision data by individual student was not possible. Rather than focus on the individual student as the unit of analysis, we assessed pre- and post-revision evaluations using the course as the unit of analysis. The mean of each sample was calculated for each of the course evaluation questions (e.g., overall course rating, course objectives) as a proxy for evaluating the effectiveness of curriculum revision and course mapping. We used univariate statistics to describe frequencies and mean responses for each evaluation question. Paired-samples t-tests were conducted on the course means to examine score changes from pre- to post-course revision. Each course was compared separately, and data were then pooled across all courses to assess program change over time. For the qualitative portion of this study, we compiled and organized all of the open-ended student responses from the course evaluations by course and semester. Data were uploaded into NVivo (QSR International, 2018) and assessed in a two-phase process. First, each comment was read and coded into one of four a priori codes: 1) pre-commendations, 2) pre-recommendations, 3) post-commendations, and 4) post-recommendations. The second phase of coding used thematic analysis to assess the main themes presented by students (Saldaña, 2009). This allowed us to assess potential changes in student thoughts from pre- to post-revision.
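To make the quantitative comparison concrete, the sketch below illustrates a paired-samples t-test in R with the course as the unit of analysis. It is an illustrative sketch only; the data frame, column names, and mean scores are hypothetical placeholders, not data from the study.

    # Illustrative sketch only: course-level means below are hypothetical, not study data.
    eval_means <- data.frame(
      course    = c("GERON 6001", "GERON 6370", "GERON 6002", "GERON 6604", "GERON 6003"),
      pre_mean  = c(5.1, 5.3, 4.9, 5.2, 4.8),   # hypothetical pre-revision course means
      post_mean = c(5.4, 5.2, 5.1, 5.5, 5.0)    # hypothetical post-revision course means
    )

    # Paired-samples t-test comparing pre- and post-revision means, course as unit of analysis
    t.test(eval_means$post_mean, eval_means$pre_mean, paired = TRUE)

The same call can be repeated on per-question means pooled across courses to mirror the question-level comparisons described above.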

Results

Data are anonymous, and demographics were not gathered as part of the student evaluations. However, we do have a general picture of student demographics within the GIP. During a recent fall semester, we had 189 unique students enrolled in gerontology courses. Students represented 6 master's degree programs and 3 doctoral programs; 9 students were undeclared and 4 were nonmatriculated. The average age of students was 29 years; 137 were female (72.5%), 50 male (26.5%), and 2 unknown (1.05%). The majority of students were white (67.72%), with others identifying as Hispanic/Latino (13.76%), Asian (7.40%), unknown ethnicity (4.23%), multi-racial (3.70%), international (1.59%), Black/African American (1.05%), and Native Hawaiian or other Pacific Islander (0.53%).

A summary of the t-test results is found in Table 1. Some data were unavailable due to too few responses. One course, GERON 5500/6500, did not have sufficient data for separate analysis (fewer than two observations per class), as it was a newly developed course and lacked sufficient pre-revision data; this course was retained in the pooled pre-to-post comparison. Paired t-tests comparing overall course ratings pre- and post-course revision revealed a trend toward improvement in the GERON 5001/6001: Introduction to Aging course (t=4.09; p=.05). Examination of aggregate data from all of the courses in relation to individual course evaluation questions showed trends toward improvement in two areas: 1) "The course objectives were met" (t=1.47; p=.09), and 2) "I learned a great deal in this course" (t=1.36; p=.09). There were no statistically significant differences on overall or individual course evaluation questions from pre- to post-course revision.

Table 1. Assessment of Course Evaluation Questions Pre- to Post-Revision

Open-Ended Student Comments

Qualitative analysis summarized both the overall numbers of commendations and recommendations and the content of comments to assess change from pre- to post-revision. A total of 298 codes were documented pre-revision (see Table 2). Of these, 71% were commendations, focusing on positive feedback about course content, online teaching, and instructor efficacy. Comments focusing on recommendations for change comprised 29% of the total pre-revision codes. These recommendations centered on issues with course content, technology, and instruction. Comments in the recommendations category included both negative reviews and constructive ideas for change. Post-revision comments were coded 257 times; 73% of these were commendations and 27% were recommendations (Table 2). The percentages are very similar pre to post, demonstrating that the overall balance of positive and negative comments changed little from pre- to post-revision.

Table 2. Overall Pre to Post Coding of Course Evaluation Qualitative Comments

The second phase of qualitative analysis assessed the content of the comments to understand the topics focused on pre- to post-revision. Student comments were evaluated for each course; pre-revision comments were analyzed first, followed by the post-revision comments. After identifying themes within pre-revision comments, a summary was written of the main ideas. Post-revision comments were then read and coded for the same course, and a summary was written of the main themes in the post-revision codes. Representative quotes were included in each summary to present examples of themes. The pre- and post-revision summaries were then compared for each course, and any major thematic changes were noted in a final course comparison summary. Once this process was complete for each course, all course comparison summaries were re-read and coded for similarities and differences across the group of courses. Table 3 includes a summary of each course, including representative quotes.

Table 3. Analysis of Student Comments by Class

Summary of Qualitative Comments Pre to Post Revision

The following summarizes overall findings from qualitative analysis of student open-ended course evaluation comments. Student comments increased in two main areas post-revision when compared to pre-revision: 1) connection to the instructor, and 2) organized content.

Connection to the Instructor. Students expressed not wanting all of the extra technological features integrated into courses, such as screen- and video-recorded PowerPoint lectures, interactive quizzes, and movie-creation apps. A variety of apps (e.g., Flipgrid, Lucidchart, Pathbrite) led to confusion and overwhelmed students. However, students emphasized the importance of technology in helping them maintain a connection with the instructor. For example, one student stated, "I especially liked the introduction videos before each module because it felt like the instructor was in constant communication with the class." The adoption of video was particularly useful in helping students feel this connection.

Organized Content. Comments emphasized the importance of balancing assignments, content, and the amount of work. Students noted that spreading assignments throughout the semester helped them disperse their stress; this was most often mentioned when a course had multiple assignments due the last week of the semester. One student commented, "Assign one of the larger projects to be due at mid-term, to space out the stress." Students value learning, and in an online environment this requires incorporating moments of accountability that help students interact with the content. Students emphasized wanting these opportunities for accountability, and when a course lacked them, they acknowledged their own limited interaction with the course: "I have mixed feelings about the assignments. On the one hand, I feel that the small amount of assignments was nice, but also allowed for me to be less involved in the course than perhaps I should have."

Discussion

In this mixed-methods, multi-year study examining student evaluations from pre- to post-course revision, quantitative analysis did not produce statistically significant differences in mean course evaluation scores. This may be attributed to the small sample size, use of aggregate rather than individual data points, missing data, and little variation in scores, with most courses receiving high mean scores. The qualitative analysis of student evaluations, however, yielded useful information. We found that students value technology that augments their connection to the instructor and to course organization. Some students do not want all the extra features that come with a wide variety of technology (e.g., external sites to create blogs, mini podcasts, video creation). Students noticed video introductions, video lectures, and video summaries, often stating these made them feel connected to the instructor. This aligns with the quality indicators for CBE online courses, which identify technology and navigation as one of seven recommended areas for measurement (Krause et al., 2015).

Students want to learn, and learning online necessitates incorporating one or more forms of accountability, which students themselves requested. In addition, students desire forms of accountability throughout the semester rather than only at semester's end. The balance of assignments, content, and amount of work matters to students, and instructional design is vital to quality online courses. Accountability should be an area on which faculty and instructional designers collaborate to enhance quality in online CBE; two related quality indicators are 1) assessment and evaluation, and 2) competence and learning activities (Krause et al., 2015). We also observed an increase in student comments specific to a certain topic each time a major adjustment occurred, whether pre- or post-revision. This could be an outcome of the "growing pains" related to trying something new. Similar to piloting research, faculty pilot-testing teaching strategies often need student feedback to refine changes in a manner that actually works for students. Checking in with students demonstrates the quality indicator of learner support and allows faculty to assess and evaluate their course as part of quality assurance (Krause et al., 2015).

The information obtained from this study is relevant to course and program quality improvement. Strengths include the mixed-methods format and multi-year analysis. Limitations include the inability to compare pre- and post-revision data from the same students, as students cannot be required to take a course twice. In some cases, there were not sufficient data for analysis, as t-tests require at least two observations per class (e.g., GERON 5500/6500). This insufficient data was attributed to new course development and to changes in the student evaluation questions that occurred across the University of Utah, which meant that questions differed from pre- to post-revision for some courses. In addition, conducting a technology revision simultaneously with competency revisions makes it difficult to tease out changes due to course format versus curriculum. Instructors need to remind students which competencies are being covered and how they will be expected to interact with this content during the course. Clear learning outcomes and student comprehension of the proficiencies they are working on enhance CBE (Burnette, 2016).

Mapping the entire GIP curriculum to the AGHE competency guidelines (Dassel et al., 2019) prepared us to apply for and receive Program of Merit designation through AGHE. This Program of Merit status has provided the foundation for future application for accreditation through the Accreditation for Gerontology Education Council (AGEC), which requires that the programs under review align with the AGHE Gerontology Competencies for Undergraduate and Graduate Education (2014). Students from all health science disciplines participate in undergraduate and graduate level certificates available through our program. Improving program quality and demonstrating the efficacy of such changes should strengthen the ability of students to work with older adults in community and health care settings.

Programs should build on CBE by developing measures to assess student achievement of competencies. This process can be used to improve the quality of the student learning experience (Damron-Rodriguez et al., 2019; McClarty & Gaertner, 2015). Our program is developing a tool that will allow faculty to assess program learning outcomes and AGHE competencies within each class. Data will be gathered every three years to track progress at both the course and program levels. Tools such as this can be shared in an effort to develop tool-kits for other gerontology programs to build quality models of competency-based education (Damron-Rodriguez et al., 2019). It is our goal to enhance the ability of graduates to demonstrate the competencies and skills they have gained through high-quality gerontology education as they work with employers and older adults. We will enhance our approach to CBE by assessing the paths alumni take and their use of competencies to communicate their knowledge, skills, and contributions within the workforce. Advancing CBE in gerontology needs to happen through organizational leadership (Damron-Rodriguez et al., 2019). Our program benefits from being housed within a College of Nursing that follows a CBE model and process for accreditation. We can learn from this process of documentation, tracking, assessment, and quality improvement to enhance the rigor of our approach to CBE in gerontology programs. Finally, we plan to share our CBE strategies, assessment tools, and models with other gerontology programs in the Utah State Gerontology Collaborative.

The results of this study have implications beyond the Gerontology Interdisciplinary Program to the larger Health Sciences campus where our program and college are housed. Many interprofessional health science students enroll in our courses. Thus, improving program quality and demonstrating efficacy ultimately strengthens students’ ability to work effectively with older adults in a variety of settings.

References

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Applebaum, R. & Leek, J. (2008). Bridging the academic/practice gap in gerontology and geriatrics: Mapping a route to mutual success. Annual Review of Gerontology and Geriatrics, 28, 131-148. doi: 10.1891/0198-8794.28.131

Association for Gerontology in Higher Education [AGHE] (2014). Gerontology competencies for undergraduate and graduate education. Washington, DC: Association for Gerontology in Higher Education. Retrieved from: https://www.geron.org/images/gsa/AGHE/gerontology_competencies.pdf

Bloom, B. S. (1984). Taxonomy of educational objectives: The classification of educational goals. New York: Longman.

Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. ASH-ERIC Higher Education Report. Washington, DC: School of Education and Human Development, George Washington University.

Burnette, D. M. (2016). The renewal of competency-based education: A review of the literature. The Journal of Continuing Higher Education, 64, 84-93. doi: 10.1080/07377363.2016.1177704

Council of Regional Accrediting Commissions [C-RAC]. (2015, June 2). Framework for competency-based education [Press release]. Retrieved from https://download.hlcommission.org/C-RAC_CBE_Statement_6_2_2015.pdf

Damron-Rodriguez, J., Frank, J. C., Maiden, R. J., Abushakrah, J., Jukema, J. S., Pianosi, B., & Sterns, H. L. (2019). Gerontology competencies: Construction, consensus and contribution. Gerontology & Geriatrics Education, 40(4), 409-431. doi: 10.1080/02701960.2019.1647835

Dassel, K., Eaton, J., & Felsted, K. (2019). Navigating the future of gerontology education: Curriculum mapping to the AGHE competencies. Gerontology & Geriatrics Education, 40(1), 132-138.

Fink, L.D. (2003) Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey‐Bass.

Krause, J., Dias, L. P., & Schedler, C. (2015). Competency-based education: A framework for measuring quality courses. Online Journal of Distance Learning Administration, 18(1). Retrieved from https://www.westga.edu/~distance/ojdla/spring181/krause_dias_schedler181.html

McClarty, K. L. & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. AEI Series on Competency-Based Higher Education. Washington, DC: Center on Higher Education Reform & American Enterprise Institute for Public Policy Research.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Dept. of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service website. Retrieved from https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

National Council for State Authorization Reciprocity Agreements [NC-SARA]. (2020). About NC-SARA. Retrieved from https://nc-sara.org/about-nc-sara

Northwest Commission on Colleges and Universities. (2020). Accreditation. Retrieved from https://www.nwccu.org/accreditation%20/

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223-231.

QSR International Pty Ltd. (2018). NVivo qualitative data analysis software (version 12) [Software]. Retrieved from https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home. Accessed May 17, 2020.

Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: SAGE.

Wendt, P. F., Peterson, D. A., & Douglass, E. B. (1993). Core principles and outcomes of gerontology, geriatrics, and aging studies instruction. Washington, DC: Association for Gerontology in Higher Education and the University of Southern California.

Wiggins, G.P., & McTighe, J. (2005). Understanding by design. (2nd Ed.).  Alexandria, VA: Association for Supervision and Curriculum Development.

Woldeab, D., Yawson, R. M., & Osafo, E. (2020). A systematic meta-analytic review of thinking beyond the comparison of online versus traditional learning. E-Journal of Business Education & Scholarship of Teaching, 14(1), 1-24.

Student-Faculty Co-Production of a Medical Education Design Challenge as a Tool for Teaching Health System Science

Funding

AMA Accelerating Change in Education Innovation Grant program

Disclosures

None

What problem was addressed:

Medical schools prepare students to enter a complex health system with the knowledge to care for patients but provide little training on the health system they join. Health systems science (HSS) is an important topic that is starting to enter medical school curricula. The difficulty lies in how to teach this complex topic, which has been slow to gain traction with key stakeholders.1 We argue that HSS is not as difficult to implement when presented in a familiar context that encourages active participation with the material. We present an educational innovation that taught HSS in an active learning setting and increased buy-in from medical students and faculty.

What was tried:

We organized the Medical Education Design and Innovation Challenge (MEDIC), a competition that taught medical students HSS as they competed to design an educational innovation. We introduced 24 medical students from all years of training to HSS using the Shingo Model™2 as a framework, the model used successfully by the University of Utah Health system. Students were divided into 6 teams and asked to identify an area for improvement and then design a program, course, or initiative utilizing this model. The Shingo Model™ requires users to identify guiding principles, key stakeholders, and important outcomes as precursory steps to any innovative problem-solving design. This encouraged students to understand the education system before proposing a solution to a perceived deficit. The event was divided over two days with a total of 8 hours of participation. Students were introduced to HSS and the Shingo Model™ during an introductory dinner and then placed into teams. The following day, teams had 4 hours to identify a deficit within their school system and design a solution (e.g., a mentorship program for specialty exploration) using the Shingo Model™ as a framework. Teams then pitched their proposals and were evaluated on their creativity, feasibility, and evidence of utilizing systems science in their design. Winners were determined by majority vote from guest faculty judges, coordinators, and participants.

What lessons were learned:

Two major challenges exist in teaching HSS to medical students: relevance to learners and incorporation into already full medical school curricula. Survey data from MEDIC suggest this project-focused approach to teaching HSS addressed both challenges. Responses from participants (63% response rate) revealed that 100% of students felt MEDIC was relevant to them. Of respondents, 93% thought the Shingo Model™ was an appropriate framework for approaching medical education innovation, and 73% were confident in their ability to apply the model after only four hours of team-based work. Eighty percent of students reported developing new skills and changing their perception of medical education design through participating in MEDIC. Additionally, 80% of students agreed or strongly agreed that all students would benefit from exposure to HSS in the core curriculum. This experience may be easily reproduced at other institutions. The positive response of the students and their success in proposing innovative ideas for medical education encouraged us to continue to use this framework as we engage students and faculty in ongoing curricular reform.

References

  1. Gonzalo JD, Hawkins R, Lawson L, Wolpaw D, Chang A. Concerns and Responses for Integrating Health Systems Science Into Medical Education. Acad Med. 2018; 93(6):843–849
  2. Shingo Institute. The Shingo Model. https://shingo.org/model

Personal, Social, Organizational, and Space Components of the Clinical Learning Environment: Variations in their Perceived Influence

Abstract

Purpose. Because of its impact on learning, interest in the learning environment continues unabated. One product of this interest is a framework, emerging from a Macy-sponsored conference, that organizes factors influencing the quality of the learning environment into four components: personal, social, organizational, and space. This paper reports a study that assessed the relative influence of these components.

Methods. This study involved the secondary analysis of a subset of transcribed excerpts obtained from a 2019 study that used appreciative-inquiry style questions in interviews and focus groups with faculty, residents, and students from two departments at the University of Utah School of Medicine. After all excerpts had been coded using a constant comparative method, those assigned the codes of "successes" or "challenges" were considered for the secondary analysis. For each selected excerpt, trained research assistants divided 100 points among the four components according to their perceived relative influence on the ideas expressed in the excerpt. Differences in the average number of points assigned across components were examined by type of excerpt (success versus challenge) and by type of speaker (faculty, resident, student).

Results. Overall, the social component received the highest average number of points, followed closely by personal, and then much less so by organizational and space. In both successes and challenges, the four components followed this same rank order. Nevertheless, the average number of points assigned to the organizational component was significantly greater for challenges than for successes. There were no statistically significant differences in points assigned based on speaker.

Conclusion. Our secondary analysis of excerpts from interview and focus group transcripts confirms the stronger influence of the social and personal components of the learning environment relative to the organizational and space components. Nevertheless, consideration of the organizational component appears warranted when seeking to overcome challenges that impede learning in the clinical environment.

BACKGROUND

The clinical learning environment is receiving substantial and increasing attention in medical schools as stakeholders strive to optimize learning outcomes while overcoming longstanding challenges such as mistreatment and marginalization of learners.1-4

Given this level of interest, Gruppen and colleagues recently held a Macy-sponsored consensus conference on the learning environment from which a conceptual framework was published,5 hereafter referred to as the Macy-Conference Framework. This framework integrates an extensive literature, including a recent review from Schonrock-Adema,6 with the intent to "facilitate health professions educators in understanding, studying, and designing interventions to improve the learning environment."6

According to Gruppen and colleagues, the learning environment is "a complex psycho-social-physical construct that is co-created by individuals, social groups, and organizations in a particular setting."5 The Macy-Conference Framework captures this complexity. It includes "five overlapping and interactive core components that form two dimensions: the psychosocial dimension and material dimension."5 The psychosocial dimension comprises three components: the personal, the social, and the organizational. The material dimension encompasses physical and virtual spaces.5 The personal component includes characteristics of participants in the learning environment that intrinsically shape behavior, such as knowledge, attitudes, perceptions, level of commitment, and a priori goals. The social component focuses on the dynamics between individuals, including interactions and social relations related to all aspects of the learning environment. The organizational component includes the norms, roles, and structures that extrinsically shape behaviors (both individually and as a group). Space includes the physical and virtual characteristics of the environment in which learning and practice occur.

We encountered the Macy-Conference Framework after we had initiated an inquiry in 2019 into the learning environment at our institution, the University of Utah School of Medicine, with the intent of shifting focus from what's wrong (e.g., recurring reports of mistreatment) to what's working (e.g., standout teaching moments). Our goal, consistent with the philosophy of appreciative inquiry,7-9 was to optimize the learning environment by first discovering and documenting existing examples that we could showcase and replicate. While we also explored challenges and concerns, these were not our primary focus. Our inquiry included conducting, transcribing, and coding appreciative-style interviews8,10 and focus groups with faculty, residents, and students. We had reached the point in this inquiry of completing a first round of qualitative, inductive analyses, resulting in a robust set of coded excerpts, of which 189 had been coded as either "successes" or "challenges" in promoting learning in the clinical learning environment.

As we began to consider how best to take advantage of the Macy-Conference Framework5 to analyze our data, we discovered that it was not simply a matter of using the domains or components as codes and assigning them to excerpts, because all of the quotes simultaneously referred, explicitly or implicitly, to the personal, social, organizational, and/or material components of the Macy-Conference Framework with varying degrees of emphasis. This realization inspired us to initiate a secondary analysis of our data with the aim of using the Macy-Conference Framework as an interpretive lens to deepen our understanding of the conditions influencing the learning environment and ways to enhance that environment.

We guided our secondary analysis of these coded excerpts using the following questions:

  1. What is the relative influence of each component in the Macy-Conference Framework in how stakeholders talk about the learning environment?
  2. Are there differences in the relative influence of each component across excerpts coded as successes versus challenges?
  3. Are there differences in the relative influence of each component across excerpts made by faculty, residents, or students?

METHODS

After receiving approval from the Institutional Review Board of the University of Utah, we collected data for the primary study, upon which our secondary analysis was based, through interviews and focus groups. We conducted individual interviews with faculty and residents in the Department of Surgery (n=7) and the Department of Obstetrics and Gynecology (n=6). We also conducted interviews with fourth-year medical students (n=4) and two focus groups with third-year medical students (n=20). Interview and focus group questions were designed using an appreciative inquiry approach.7-9 Our primary interview/focus group questions (see Appendix) specifically asked participants to 1) describe a successful learning moment and what they contributed to that moment, 2) identify their core values and how well these were reflected by our institution, and 3) recall an instance in which their values had been challenged. Additional follow-up questions explored perceptions of mistreatment and barriers to an optimal learning environment. We transcribed the interviews and focus group recordings verbatim. We used a constant comparative method11 to code the transcripts into categories that included, among other things, successful and challenging learning moments. Multiple members of the team participated in coding or reviewing codes and resolved any discrepancies through discussion.

It was at this point that we decided to initiate a secondary analysis of our qualitative data with the aim of using the Macy-Conference Framework5 as an interpretive lens. In response to time constraints, we randomly selected 122 excerpts for inclusion in our secondary analysis, approximately two-thirds of the excerpts coded as successes or challenges. The length of the selected excerpts ranged from 71 to 2,783 words (average = 790).

Our secondary analysis borrowed an approach used in previous research,12,13 in which study participants assign 100 points among a set of elements to indicate the relative influence of each element on the topic of interest. For example, Balmer and colleagues12 asked fourth-year students to divide 100 points among the explicit, implicit, and extra curriculum for each of 10 school-wide learning objectives to show the relative influence of each type of curriculum on the student's acquisition of the knowledge and skills required to achieve each objective. Differences in how the participants assigned points led to important insights about the positive influences of the implicit curriculum on student learning, particularly for learning objectives associated with communication, teamwork, and professionalism.

Similar to Balmer and her colleagues,12,13 we developed, piloted, and refined a method to assign weights to the components of the Macy-Conference Framework to capture raters' perceptions of the relative importance of each component in the content of each selected excerpt. Based on our pilot work, in which raters tended to assign very few points to either of the two components of the material domain (i.e., physical and virtual),5 we combined them into one component, which we called 'space'. Two research assistants (CB and TD) first discussed definitions of each component, practiced using those definitions in assigning weights, and then shared their experience to optimize calibration. The raters then assigned weights to a subset of assigned excerpts, blinded both to whether the excerpt had been coded as a success or challenge and to whether the speaker was a faculty member, resident, or student. In assigning weights, the raters carefully read each assigned excerpt and considered the key content being expressed. They then inferred the relative influence of each of the four components in each excerpt by dividing 100 points among the components. Each researcher assigned weights to two-thirds of the selected excerpts so that one-third overlapped. To optimize congruence in their approach to assigning weights, the researchers reviewed and discussed each other's weight assignments for the one-third of overlapping excerpts before completing the task for all assigned excerpts. Table 1 contains example excerpts with point assignments.

Table 1. Example excerpts and relative weight assignment.

To address research question 1, BR averaged the two raters' weights for the one-third of overlapping excerpts and then computed overall means and standard deviations of the weights for each of the four components across excerpts. For questions 2 and 3, BR computed separate means and standard deviations for excerpts coded as successes versus challenges and for each stakeholder group.

We received help from the institution's Center for Clinical and Translational Science Biostatistics Core14 to analyze our data and determine the significance of observed differences. We first summarized results descriptively: speaker (faculty, resident, student) and type of excerpt (challenge vs. success) were summarized as frequencies and percentages, and the weight of each of the four components (personal, social, organizational, and space) was summarized using the mean and standard deviation (SD), median (25th and 75th percentiles), and range. Given that the weights across the four components summed to 100, we used analysis methods appropriate for compositional data. We used Friedman's test to assess whether weight distributions differed across the four components. We used Dirichlet regression to assess whether weight distributions differed by type of excerpt (success versus challenge) and type of stakeholder (faculty, resident, or student).15 We reported odds ratios (ORs) for weighting each component more than the social component (reference), with 95% confidence intervals (CIs) and p-values. We assessed statistical significance at the p<0.05 level using two-sided tests. We conducted all analyses using R v.3.6.16
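As a rough illustration of this analytic approach (a sketch with simulated data, not the study's actual code or results), the following R snippet applies Friedman's test and a Dirichlet regression using the DR_data and DirichReg functions from the DirichletReg package;16 the variable names and simulated weights are hypothetical.

    # Illustrative sketch with simulated data; not the study's code or results.
    library(DirichletReg)   # provides DR_data() and DirichReg()

    set.seed(1)
    n    <- 40
    type <- factor(rep(c("success", "challenge"), each = n / 2))

    # Hypothetical point allocations (one row per excerpt), rescaled so the
    # four component weights in each row sum to 1
    raw <- cbind(personal       = runif(n, 20, 40),
                 social         = runif(n, 25, 45),
                 organizational = ifelse(type == "challenge",
                                         runif(n, 15, 35), runif(n, 5, 20)),
                 space          = runif(n, 0, 10))
    weights <- data.frame(raw / rowSums(raw), type = type)

    # Friedman's test: do weight distributions differ across the four components?
    friedman.test(as.matrix(weights[, 1:4]))

    # Dirichlet regression with the social component as the reference (base = 2),
    # testing whether the weight composition differs by excerpt type
    weights$Y <- DR_data(weights[, 1:4], base = 2)
    fit <- DirichReg(Y ~ type | 1, data = weights, model = "alternative")
    summary(fit)  # exponentiated mean-model coefficients approximate odds ratios
                  # of weighting a component more than the social (reference) one

In DirichletReg's alternative parametrization, the mean-model coefficients are expressed relative to a reference component, which is what makes the social component a natural baseline for the reported odds ratios.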

RESULTS

Overall, we observed statistically significant differences in the relative weights assigned to the personal, social, organizational, and space components (p<0.001, Friedman's test). The social component received the highest average weights, followed closely by personal, and then much less so by organizational and space. See Table 2 for frequencies and percentages of stakeholders (faculty, resident, student) and types of excerpt (success versus challenge). See Table 3 for the mean, median, and standard deviation of the weights assigned to each component.

Table 2. Number of excerpts coded by stakeholder and excerpt type
Table 3. Summary of weight distribution per component.

The average weight assignments of the four components follow the same rank order for successes as for challenges (see Figures 1 and 2); however, the odds of weighting the organizational component more than the social component were 40% lower for successes than for challenges (OR=0.60; 95% CI 0.40-0.89; p=0.012).

Average weights assigned by stakeholder to the four components again fall in the same order of social, personal, organizational, and space. The weight distributions of the four components did not differ significantly among stakeholders (all p-values greater than 0.05).

Figure 1. Average weight for each component stratified by Challenge vs Success with 95% confidence intervals.
Figure 2. Odds ratios with 95% CIs for comparing each component to the social domain, by success vs. challenge.

DISCUSSION

There is extensive literature that attempts to understand what it takes to optimize the learning environment (where successes occur and challenges are minimized), both generally2,4,5,17-22 and in Surgery23,24 and OB-GYN specifically.25,26 The results of our study add to this literature by using the lens of the Macy-Conference Framework to affirm the greater influence of the psychosocial domain compared with the material domain.

Within the psychosocial domain, our data highlight the relatively greater influence of the personal and social factors on both successes and challenges. This finding is intuitive and is consistent with the literature. Teaching and learning inherently emphasize interactions between individuals. As many authors have suggested, these interactions are shaped, for good or ill, by the idiosyncratic characteristics of the participants (personal component) and by the dynamics of the interpersonal relationship (social component).2,23,27 Stakeholders in our study consistently referenced personal characteristics of faculty and learners. As illustrated in the sample excerpts included in Table 1, personal characteristics associated with successes and/or challenges include listening skills, a willingness to say 'I don't know', a willingness to focus on teaching during surgery, an awareness of the learning climate, choosing to take the time to teach, and feeling safe enough to speak up.

The excerpts we included in our secondary analysis, as illustrated in Table 1, consistently highlight the interpersonal conditions central to promoting effective learning and responding to challenges. Examples include: involving others in teaching interactions, teaching styles, dynamics created with multiple levels of learners, communication patterns, giving and receiving feedback, and including or excluding learners. As a result, these data affirm the leading role of the social component in promoting success or in overcoming challenges in the learning environment, such as setting expectations, slowing down to explain whenever possible, providing timely feedback, promoting open communication, incrementally increasing learner autonomy, or protecting even a few minutes daily for teaching.1,25,28

Our data also highlight the influence of organizational elements on the quality of the learning environment. While less influential than the personal and social components, the organizational elements deserve strong consideration in efforts to understand and shape the learning environment, particularly in terms of responding to challenges.2,18,22,25 As seen in the excerpts in Table 1, organizational elements that promote success include working within the hierarchy of a team and establishing a culture that prioritizes interactions with learners. On the other hand, examples related to the organizational component that appear to create challenges include patient care services that are perceived as devoid of any learning opportunities and the level of stress triggered by broader environmental conditions.2,18,29

Our data suggest that the material domain of the Macy-Conference Framework has less influence in shaping the learning environment than the psychosocial domain. We consider this an interesting finding because a major addition of the Macy-Conference Framework, compared with the model previously proposed by Schonrock-Adema et al.,6 is the material domain. References to the material domain seldom occurred, and thus raters consistently gave it little to no weight. As captured in example 3 in Table 1, exceptions did occur, in which the speaker referenced an association between space (OR, clinical, floor) and learning.

Because the results of this study suggest that organizational factors may have greater influence on challenges than on successes, regardless of stakeholder group, we suggest that leaders may want to approach interventions to improve the learning environment differently depending upon whether their intent is to promote successes or to overcome challenges. In particular, to promote more successes, leaders may want to first look for ways to influence social or personal aspects of the psychosocial domain using interventions such as faculty development30-34 and enhancing longitudinal relationships.35,36 On the other hand, to minimize challenges, leaders may want to first look for ways to modify organizational aspects of the learning environment, such as schedules, incentives, or policies.4,37 That said, because the Macy-Conference Framework assumes that the domains and components continually overlap and interact, interventions must be developed and applied at multiple levels.

LIMITATIONS

This secondary analysis of existing qualitative data took place in a single institution and in two procedurally oriented departments. The material domain was not well represented, possibly because space tends to influence our behavior without our conscious awareness.38 This could also be a product of our focus on Surgery and OB-GYN, in which the operating theater is the dominant clinical learning environment and the space where learning occurs is largely a given, beyond the control of educators. Perhaps we would find greater variation in the space component if we were to expand to other, more medically oriented specialties. While our study was able to affirm the relative importance of the various components of the Macy-Conference Framework, it was not designed to identify what may be missing from the Framework. Our primary study is much better equipped to meet this challenge.

CONCLUSION

Clinical learning environments are composed of complex interactions among people, organizational structures, and physical factors, which work dynamically to promote or impede learning. Such environments are in constant need of shaping and reshaping to best meet the needs of stakeholders, be they students, residents, or faculty. Using a secondary analysis of transcribed excerpts from appreciative-inquiry style interviews and focus groups, this study supports the value of the Macy-Conference Framework as a lens for better understanding the interacting components of the learning environment with an eye to continual improvement. Indeed, the components are not all equally influential, and their relative influence tends to change when one is focused on promoting successes versus minimizing challenges. In the former, the focus is most likely to be on the social and personal components, with a secondary focus on organization and space; in the latter, the focus is more likely to also include the organizational component.

By using the Macy-Conference Framework as a lens for our secondary analysis, we provide evidence related to the validity of the framework and its potential utility in shaping the learning environment. Our data reinforce the presence of a common set of core components of the learning environment and their overall relative importance, particularly in the psychosocial domain, and should inform future efforts to optimize the clinical learning environment.

Acknowledgements

The research reported in this publication was supported in part by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR002538. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funding/Support. None

Disclosures. None

Ethical approval. University of Utah Institutional Review Board

Disclaimers. None.

 Previous presentations.

REFERENCES

1.         Brandford E, Hasty B, Bruce JS, et al. Underlying mechanisms of mistreatment in the surgical learning environment: A thematic analysis of medical student perceptions. American journal of surgery. 2018;215(2):227-232.

2.         Kilty C, Wiese A, Bergin C, et al. A national stakeholder consensus study of challenges and priorities for clinical learning environments in postgraduate medical education. BMC Med Educ. 2017;17(1):226.

3.         Mazer LM, Bereknyei Merrell S, Hasty BN, Stave C, Lau JN. Assessment of programs aimed to decrease or prevent mistreatment of medical trainees. JAMA Network Open. 2018;1(3):e180870.

4.         van der Goot WE, Cristancho SM, de Carvalho Filho MA, Jaarsma ADC, Helmich E. Trainee-environment interactions that stimulate motivation: A rich pictures study. Medical Education. 2019;54(3):242-253.

5.         Gruppen LD, Irby DM, Durning SJ, Maggio LA. Conceptualizing Learning Environments in the Health Professions. Academic Medicine. 2019;94(7):969-974.

6.         Schonrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. Key elements in assessing the educational environment: where is the theory? Adv Health Sci Educ Theory Pract. 2012;17(5):727-742.

7.         Rama JA, Falco C, Balmer DF. Using Appreciative Inquiry to Inform Program Evaluation in Graduate Medical Education. J Grad Med Educ. 2018;10(5):587-590.

8.         Sandars J, Murdoch-Eaton D. Appreciative inquiry in medical education. Med Teach. 2017;39(2):123-127.

9.         Williams A, Haizlip JA. Ten Keys to the Successful Use of Appreciative Inquiry in Academic Healthcare. OD Practitioner. 2013;45(2).

10.       Bushe GR. Appreciative Inquiry is Not (Just) About the Positive. OD Practitioner. 2007;39(4):30-35.

11.       Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. New York, NY: Aldine de Gruyter; 1967.

12.       Balmer DF, Hall E, Fink M, Richards BF. How do medical students navigate the interplay of explicit curricula, implicit curricula, and extracurricula to learn curricular objectives? Academic Medicine. 2013;88(8):1135-1141.

13.       Balmer DF, Quiah S, DiPace J, Paik S, Ward MA, Richards BF. Learning across the explicit, implicit, and extra-curricula: an exploratory study of the relative proportions of residents’ perceived learning in clinical areas at three pediatric residency programs. Academic Medicine. 2015;90(11):1547-1552.

14.       CCTS Population Health Research.  https://medicine.utah.edu/ccts/population-health/.

15.       Mazotti L, Adams J, Peyser B, Chretien K, Duffy B, Hirsh DA. Diffusion of innovation and longitudinal integrated clerkships: Results of the clerkship directors in internal medicine annual survey. Medical Teacher. 2019;41(3):347-353.

16.       Maier MJ. DirichletReg: Dirichlet Regression in R. R package version 0.7-0. 2020.

17.       Fried JM, Vermillion M, Parker NH, Uijtdehaage S. Eradicating medical student mistreatment: a longitudinal study of one institution’s efforts. Academic Medicine. 2012;87(9):1191-1198.

18.       Gan R, Snell L. When the learning environment is suboptimal: exploring medical students’ perceptions of “mistreatment”. Academic Medicine. 2014;89(4):608-617.

19.       House JB, Griffith MC, Kappy MD, Holman E, Santen SA. Tracking Student Mistreatment Data to Improve the Emergency Medicine Clerkship Learning Environment. The Western Journal of Emergency Medicine. 2018;19(1):18-22.

20.       Kulaylat AN, Qin D, Sun SX, et al. Perceptions of mistreatment among trainees vary at different stages of clinical training. BMC Med Educ. 2017;17(1):14.

21.       Olasoji HO. Broadening conceptions of medical student mistreatment during clinical teaching: message from a study of “toxic” phenomenon during bedside teaching. Advances in Medical Education and Practice. 2018;9:483-494.

22.       Oser TK, Haidet P, Lewis PR, Mauger DT, Gingrich DL, Leong SL. Frequency and negative impact of medical student mistreatment based on specialty choice: a longitudinal study. Academic Medicine. 2014;89(5):755-761.

23.       Castillo-Angeles M, Watkins AA, Acosta D, et al. Mistreatment and the learning environment for medical students on general surgery clerkship rotations: What do key stakeholders think? American Journal of Surgery. 2017;213(2):307-312.

24.       Kemp MT, Smith M, Kizy S, Englesbe M, Reddy RM. Reported Mistreatment During the Surgery Clerkship Varies by Student Career Choice. J Surg Educ. 2018;75(4):918-923.

25.       Lau JN, Mazer LM, Liebert CA, Bereknyei Merrell S, Lin DT, Harris I. A Mixed-Methods Analysis of a Novel Mistreatment Program for the Surgery Core Clerkship. Academic Medicine. 2017;92(7):1028-1034.

26.       Baecher-Lind LE, Chang K, Blanco MA. The learning environment in the obstetrics and gynecology clerkship: an exploratory study of students’ perceptions before and after the clerkship. Med Educ Online. 2015;20:27273.

27.       Singh TSS, Singh A. Abusive culture in medical education: Mentors must mend their ways. Journal of Anaesthesiology, Clinical Pharmacology. 2018;34(2):145-147.

28.       Furney SL, Orsini AN, Orsetti KE, Stern DT, Gruppen LD, Irby DM. Teaching the one-minute preceptor. A randomized controlled trial. J Gen Intern Med. 2001;16(9):620-624.

29.       Dyrbye LN, Thomas MR, Shanafelt TD. Medical student distress: causes, consequences, and proposed solutions. Mayo Clinic Proceedings. 2005;80(12):1613-1622.

30.       Birman BF, Desimone L, Porter AC, Garet MS. Designing professional development that works. Educational Leadership. 2000.

31.       Desimone LM, Porter AC, Garet MS, Yoon KS, Birman BF. Effects of Professional Development on Teachers’ Instruction: Results from a Three-year Longitudinal Study. Educational Evaluation and Policy Analysis. 2002;24(2):81-112.

32.       Gast I, Schildkamp K, van der Veen JT. Team-Based Professional Development Interventions in Higher Education: A Systematic Review. Review of Educational Research. 2017;87(4):736-767.

33.       McLean M, Cilliers F, Van Wyk JM. Faculty development: Yesterday, today and tomorrow. Medical Teacher. 2008;30(6):555-584.

34.       Sklar DP. Moving From Faculty Development to Faculty Identity, Growth, and Empowerment. Academic Medicine. 2016;91(12):1585-1587.

35.       Hirsh D, Walters L, Poncelet AN. Better learning, better doctors, better delivery system: Possibilities from a case study of longitudinal integrated clerkships. Medical Teacher. 2012;34(7):548-554.

36.       Hirsh DA, Holmboe ES, ten Cate O. Time to Trust: Longitudinal Integrated Clerkships and Entrustable Professional Activities. Academic Medicine. 2014;89(2):201-204.

37.       Vanstone M, Grierson L. Social power facilitates and constrains motivation in the clinical learning environment. Med Educ. 2020;54(3):181-183.

38.       Harrouk C. Psychology of Space: How Interiors Impact our Behavior? 2020; https://www.archdaily.com/936027/psychology-of-space-how-interiors-impact-our-behavior. Accessed September 17, 2020.

Interview Questions

**This is a semi-structured interview protocol, adapted from a guide originally developed by Drs. Williamson and Suchman.1 Probes will be used as necessary to elicit additional pertinent information.

Introduction: This is going to be what we call an appreciative interview. I am going to ask you questions about times when you experienced educational things working at their best here at [institution]. Many times, we try to ask questions about things that aren’t working well—the problems—so that we can fix them. In this case, we are trying to find out about the things at their best—the successes—so that we can find out what works and why, and find ways to infuse more of it into our practice.
• As we get started, I’d like to know a little bit about you. Just so you know, this information will not be associated with any of your stories or quotes, but will just be used to provide context to our findings.
o What’s your role here at [institution] and how long have you been here?
• People do their best work when they are doing things that they find personally meaningful, and when they feel that their work makes a difference. During your time at [institution], there have no doubt been high points and low points. For now, I’d invite you to think of a teaching and learning moment that meant a lot to you, when things went right, a time that brought out the best in you.
o Please tell the story of that time. (If they are very general, try to probe for more specificity.)
o Without worrying about being modest, please tell me what it was about you—your unique qualities, gifts or capacities; decisions you made; or actions you took—that contributed to this teaching/learning experience?
o What did others contribute or do?
o What aspects of the situation made this a success (for example, the place, the time of day or year, recent events)?
• Now, think of a time at [institution] when you or your values were challenged.
o Please tell me a story about that time. (If participant needs clarification about what a value is, explain that a value is “a person’s principles or standards of behavior; one’s judgment of what is important in life.”)
• We each have different qualities, gifts and skills we bring to the world and to our work. Think about the things you value about yourself, the nature of your work and the university. At work, we’re always dealing with challenges and change.
• How have your strengths and values helped you deal with challenges and change?
o Your work: When you are feeling good about your work, what do you like about the work itself?
o Yourself: Imagine you’re at your retirement party. What do you think your colleagues would say they liked most about you?
o Yourself: Now what do you think your students would say they’ve liked most about you?
o How do your personal values match those of [institution]? (for example, honesty, compassion, teamwork)?
o Where have you seen examples of these values at [institution]?
• Where do you think these reports of mistreatment are coming from?

Improving the Promotions Dossier with the Enhanced CV

Abstract

In contrast to trends calling for the use of an educator portfolio to present evidence of accomplishment in the promotion dossier for faculty with a career focus on education or patient care, we propose use of an Enhanced Curriculum Vitae (CV). Rather than dedicate time to creating a portfolio that duplicates information in the CV and is often ignored or treated as ‘second class’, we believe that, with a few simple additions, faculty can include in the CV nearly all of the content they might have presented in a portfolio. We propose two types of additions. The first is adding categories to the CV to be more inclusive of educational or clinical contributions (e.g., teaching, mentoring, and course leadership) not often included in the research-centric CV. The second is adding terse annotations to selected items listed in the CV to clarify the quantity, quality, and scope of specific accomplishments (e.g., the selection process for honors and awards, highlights of learner evaluations).

We argue that, in lieu of peer-reviewed publication, educators be allowed to include other types of work products with the CV in the promotions dossier, such as a course syllabus, a representative instructional product, or clinical care guidelines.

Introduction

Because academic medicine has become more accepting of educational and clinical accomplishments in support of rank advancement, in this Perspectives article we argue that greater thought needs to be given to how information regarding such accomplishments can best be presented to participants in the promotions process (e.g., individuals asked to write letters of reference, members of promotions committees). The goal, as always, is to ensure that participants in the promotions process pay attention to the evidence and give it careful and fair consideration in light of institutional priorities and values.

We make this argument, specifically as it applies to educational accomplishments, based on many years of experience working with a wide variety of clinician educators seeking rank advancement at institutions supportive of promotion based on educational accomplishments. Quite simply, we have learned that attempts to present a faculty member’s educational accomplishments in an educator portfolio tend to be less successful than presenting the same evidence using an annotated, or Enhanced, Curriculum Vitae (CV). For this to work, modest enhancements to the traditional CV are necessary. We also argue that, as institutions become more accepting of accomplishments in education and clinical care as evidence of readiness for advancement, promotions dossiers need to allow educators to submit products such as syllabi or instructional materials in lieu of standard peer-reviewed publications.

The challenge to present diverse accomplishments fairly

In academic medical communities, faculty participate in clinical, educational, and investigational activities, with most faculty members participating in more than one of these areas and often in all three. In recent years, many institutions have broadened their promotions processes, as espoused by Boyer (1990), to include scholarly accomplishments in all of these areas as legitimate domains for consideration in rank advancement (Simpson et al., 2007). A manifestation of this trend is the inclusion of different career tracks, most often research, education, and clinical care.

The challenge created by this trend is the need for fair representation and consideration of academic accomplishments in promotions dossiers across all of these areas. While Glassick’s (2000) criteria provide a common standard for evaluating scholarship from any area, finding the best way to present the unique types of evidence required for careful consideration of, and awarding of credit for, scholarship in education and clinical care has been challenging (Baldwin, Gusic and Chandran, 2010; Simpson et al., 2007).

Our experience suggests that the traditional CV does not capture this diversity and that a more inclusive format is needed. Generally, the standard promotions dossier consists of the traditional CV, copies of a limited number of the publications listed in the CV, and a short personal statement. The traditional CV has evolved to capture the accomplishments of investigators, measuring success by peer-reviewed grants and publications. Scientific peer review (i.e., NIH study sections, journal editorial boards) in effect allows promotions decision-makers to defer to the judgments of other scientists or experts, so little additional information about the items listed is needed. As a result, the listing of these accomplishments in the traditional CV tends to conform to very specific formats designed to communicate the needed information for each type of accomplishment most efficiently. For example, the listing of grants typically includes the funding agency, the amount awarded, and the individual’s role in the funded project, while the listing of publications is limited to standard bibliographic information (authors, title, journal, date). These formats have become very familiar to individuals involved in the promotions process. Thus, the traditional CV, and by extension the promotions process, tends not to fully capture the accomplishments of educators and clinicians, whose activities are not peer reviewed in the same manner as researchers’ work.

After working with numerous faculty educators who have been nominated for promotion, we have learned that a simple list of products is not sufficient to represent the wide diversity of their accomplishments. More explanation about the what, why, who, and how of each accomplishment, including the type of peer review involved, is usually needed. Furthermore, these accomplishments frequently do not result in the kinds of traditional peer-reviewed publications that are typically attached to a promotions dossier.

The educator portfolio – pros and cons

For well over 20 years, the use of an educator portfolio has been recommended as a companion to the traditional CV in the promotions dossier for faculty with substantive involvement in education (Simpson et al., 2007; Baldwin, Gusic and Chandran, 2010; Niebuhr et al., 2013; Shinkai et al., 2018). An educator portfolio is designed to detail the educational accomplishments of faculty. It often includes a personal statement of philosophy and intent, and may include many of the same educational accomplishments listed in a traditional CV, but in greater detail. It also often includes additional accomplishments that do not fit into the categories of the traditional CV.

While portfolios privilege the attitudes and activities of educators, we have come to question whether educator portfolios offer the optimal way to present the accomplishments of non-research-oriented activities for the purposes of promotion. One reason portfolios are suboptimal is that many individuals involved in promotions deliberations are not familiar with them and may largely ignore them. In addition, generally only “educator track” faculty assemble portfolios; thus the educational contributions of non-educator-track faculty, including clinicians, may remain underrepresented. The length and narrative quality of the educator portfolio may also alienate non-educator faculty, limiting its utility. Finally, having a supplement that only educators submit may paradoxically serve to further marginalize educators from the general community of medical faculty.

The Enhanced CV

At Columbia University Vagelos College of Physicians and Surgeons, these concerns led a group of educators from multiple clinical departments (led by BFR and DLC) to form a workgroup to identify the best way to: 1) organize educational achievements and products, and 2) share them with promotions committees. We considered many models, and the model that was proposed and ultimately adopted is what we came to call the “Enhanced CV.” This is a standard CV that allows for more expansion than is traditional, particularly for educational accomplishments and materials, although the expansion can be applied to other areas as well, such as clinical work. It brings educational materials back into the CV and out of a separate educator portfolio, asks all faculty to highlight their educational work (not just those who identify as “educators”), and gives those on educator tracks the room and flexibility to highlight their unique accomplishments. After similar discussions, the University of Utah School of Medicine has adopted a similar, though less formal and less widely used, Enhanced CV.

As faculty have adopted the Enhanced CV, we have learned first-hand about its effectiveness in capturing the variety and detail of a faculty member’s accomplishments. Including terse, bulleted annotations for selected accomplishments allows promotions decision-makers to judge the magnitude of effort, degree of quality, and impact of accomplishments, including those for which no external, standardized process of peer review was possible. Table 1 contains representative examples from the Enhanced CV of RJG, which was included in a successful promotions dossier. The examples illustrate the type of annotation used as well as the types of content not typically included in the traditional CV.

Table 1: Examples of Annotation from the Enhanced CV of RJG

The idea of annotation is not new per se; annotations have long been used in the traditional CV for grants (e.g., details about the funding agency, grant amount, etc.) because the title of a grant alone fails to communicate sufficient information to appropriately weigh its impact and magnitude. The Enhanced CV can just as easily include annotations for other types of accomplishments, such as quality improvement projects, public health guidelines, and educational innovations, which reviewers can read to fully understand and appreciate their impact. For example, faculty at the University of Utah have found annotations particularly helpful in clarifying the different levels of selectivity in the peer review and acceptance processes for posters, oral presentations, and workshops at professional meetings.

The Enhanced CV offers individuals involved in the promotions process a common, systematically organized format for all faculty that flexibly and fairly presents their diverse scholarly accomplishments. As a result, the Enhanced CV is particularly useful as academic institutions move towards using multiple tracks for rank advancement, each with its own forms of scholarly contribution.

Substitution of unique educational work products

Most promotions dossiers we have seen call for the faculty member to select a handful of publications (e.g., five at Columbia) from those listed in the CV to include in their entirety with the CV. Such publications are representative work products that highlight the faculty member’s accomplishments. We believe that dossiers for educators need to allow more appropriate work products generated by educational activities, such as a syllabus or a chapter of an instructional text, to be substituted for peer-reviewed publications. We also believe that, similar to annotations in the CV, limited annotations within these documents provide important background information that enhances reviewers’ understanding and ability to judge the quality and merit of the work represented (providing its own form of peer review).

Experience with a promotions dossier with the Enhanced CV

In 2010, Columbia University Medical Center launched a three-track structure for academic advancement: 1) investigator, 2) educational leadership and scholarship, and 3) applied health sciences. An important aspect of this launch was the use of the Enhanced CV in the promotions dossier for all three tracks. The launch also broadened the types of work products that could be included in the promotions dossier. An educator portfolio was neither encouraged nor considered necessary.

As a result, the recommended promotions dossier for all faculty, regardless of track, consists of: 1) a 1-2 page personal statement that clarifies and highlights the faculty member’s “story” as presented in the Enhanced CV; 2) the Enhanced CV; and 3) up to five appropriate work products (not limited to publications). These documents are sent to the individuals asked to write letters of reference and subsequently to members of the promotions committees. The University of Utah follows a similar process, without the five work products. The personal statement is meant to capture the “impact” of the faculty member’s work.

The Enhanced CV is designed to help promotions committee members understand the scope of the educational contribution, which is often central to promotions decisions. For example, committee members on other tracks may understand the number of hours that go into writing a grant but may not appreciate the hours that go into conceptualizing, planning, and executing a medical school course. In addition, educational work products such as websites, videos, and syllabi have been of great interest to promotions committee members and have helped bring the educator’s work to life. Given the confidential nature of promotions committee deliberations, it is difficult to cite specific decisions in which the improved dossier was critical to the outcome. Nevertheless, feedback from committee members at both Columbia and Utah who have seen the Enhanced CV suggests that dossiers with the Enhanced CV and relevant work products have encouraged more careful and in-depth consideration of the accomplishments represented, regardless of whether they were associated with research, education, or clinical practice, and that the evidence presented is appropriate to each track and considered equally valid.

Our experience helping educators prepare for the promotions process also suggests that the Enhanced CV helps educators capture and understand the extent of the work they do and have done. Too often, educators include only finished peer-reviewed products in their CVs and omit educational activities such as designing curricula, advising, and mentorship. Using the Enhanced CV not only helps educators include these critical, often incredibly time-consuming activities; it also helps them better understand their roles as educators.

As would be expected, we estimate that Enhanced CVs are 20% to 50% longer than traditional CVs, depending upon the additional content and annotations. While this may add to the time required to review the CV, it is still likely much less time than would be required to review both a CV and a portfolio. Furthermore, because the Enhanced CV retains the list-like format of the traditional CV, reviewers can easily scan and count the various types of listed content to get a holistic view of the ‘story’ represented, and can then re-review the same lists, this time paying attention to the details in the annotations to get a sense of the meaning and impact of specific items.

In light of the concern about adding length to the CV, we have, over time, refined our understanding of the types and amount of annotation in the Enhanced CV that are most helpful to readers. For example, one common mistake has been to use annotations that are too lengthy, potentially overwhelming readers and leading them to lose sight of the story represented in the chronological listing of accomplishments. Another mistake, not unique to the Enhanced CV, has been to try to include everything, rather than to prioritize and annotate the items that best represent the breadth and depth of a person’s accomplishments. As with many things, even with the Enhanced CV, “less can be more.”

Conclusion

While recent trends calling for the use of an educator portfolio in the promotion dossier for faculty with a career focus on education or clinical care have been well-meaning and thoughtfully led, we have argued that educators may be better served by an Enhanced CV. This argument recognizes that the CV is a format familiar to all involved, including promotions committee members and individuals asked to write letters of reference. With a few simple additions, most notably brief annotations, faculty can include in the CV nearly all of the content they might have presented in a portfolio. This model is designed to help promotions committee members understand the work of clinician educators, to help the educators themselves capture the breadth and depth of their work, and even to help define their critical roles in medical education. Our hope is that disseminating this model will help us take the next steps in testing the effectiveness of this alternate method for capturing the work of clinician educators.

Acknowledgments:

The authors wish to recognize the general contribution to the development of the Enhanced CV made by members of promotions committees at Columbia University and the University of Utah and by faculty colleagues brave enough to be early adopters of the Enhanced CV format. We particularly want to recognize the contributions of Lisa Saiman, MD MPH and Susan Rosenthal, PhD who championed and helped shape these ideas from the outset.

Declarations:

Funding/Support: None.
Other disclosures: None.
Ethical approval: Not applicable.
Disclaimer: None.
Previous presentation: Emerging Solution: Managing Evidence of Educational Scholarship with the Enhanced CV. 2016 AAMC Learn Lead Succeed Meeting. Seattle, WA.

Keywords/phrases:

Enhanced CV, promotions dossier, educator portfolio

Take home messages:

  • A traditional CV typically does not include important non peer-reviewed educational work products
  • Using an Enhanced CV can improve the promotions process
  • An educator portfolio is lengthy and less likely to be reviewed fully by the promotions committee
  • An Enhanced CV is meant to include the essential information of a traditional CV and that of a portfolio

References

Boyer, E. L. (1990) Scholarship reconsidered: priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Glassick, C. E. (2000) ‘Boyer’s Expanded Definitions of Scholarship, the Standards for Assessing Scholarship, and the Elusiveness of the Scholarship of Teaching’, Academic Medicine, 75(9), pp. 877–880. doi: 10.1097/00001888-200009000-00007.

Niebuhr, V., Johnson, R., Mendias, E., Rath, L., et al. (2013) ‘Educator Portfolios’, MedEdPORTAL Publications. doi: 10.15766/mep_2374-8265.9355.

Shinkai, K., Chen, C. A., Schwartz, B. S., Loeser, H., et al. (2018) ‘Rethinking the Educator Portfolio’, Academic Medicine, 93(7), pp. 1024–1028. doi: 10.1097/acm.0000000000002005.

Simpson, D., Fincher, R.-M. E., Hafler, J. P., Irby, D. M., et al. (2007) ‘Advancing educators and education by defining the components and evidence associated with educational scholarship’, Medical Education, 41(10), pp. 1002–1009. doi: 10.1111/j.1365-2923.2007.02844.x.