Adult Learning Theory

Abstract

Andragogy, the art and science of helping adults learn, gained popularity in the 1970s, based largely on Malcolm Knowles' (1970, 1984) assumptions regarding adults as learners. This non-neuropsychological learning theory posits that adults see themselves as self-directed learners, can call on their prior experience as a resource for learning, are motivated by the desire to improve in a social role, and are therefore more problem- than subject-focused. Adults are more likely to engage when they are intrinsically motivated to learn. Although these assumptions may not hold across all adults at all times, they provide a valuable framework for educators to use in determining approaches to teaching adults and addressing issues that may arise in self-direction, prior experience, and motivation.

Chapter Learning Objectives

  1. Compare and contrast andragogy, pedagogy and heutagogy
  2. Describe approaches to guiding learners to higher levels of self-determined learning
  3. Explain how prior experience can support or impede adult learning
  4. Apply approaches to enhance motivation and readiness to learn

Dr. Adrian Williams is a clinician who is engaging in a formal clinical teaching role for the first time. After three years of full-time practice as a specialist, they are excited to be assigned three trainees. Two of these trainees have limited exposure to the specialty whereas the third, Chris Erickson, has a parent who practices in this specialty.  In preparing for this new role, Adrian enlists the help of a mentor. The mentor advises Adrian to consider using adult learning theories to guide their teaching decisions and create learning environments that will support the trainees’ professional growth.  The mentor reminds Adrian that in addition to clinical competence, trainees benefit from developing skills for lifelong learning.

  • What are the underlying assumptions of the pedagogical, andragogical, and heutagogical approaches to teaching?
  • What are the roles of the educator and learner in adult learning?

Adrian is reflecting on their first week of clinical teaching with three trainees. Each one has learning strengths and challenges. In considering the learning needs of individual trainees, they realize that although each has similar foundational preparation, their clinical learning needs are quite diverse, as are their approaches to life-long learning. Adrian reviews the assumptions they had about adult learners and applies them to new knowledge regarding each trainee to further differentiate instruction in the coming weeks.

  • How do you as an educator decide which approach is best in each situation?

The first trainee is Tyler Miller. Tyler's previous rotation was in a related specialty. They have shown progress over the past week and express interest in learning more about this specialty, even though it is not one they are planning to pursue. Tyler has demonstrated clinical abilities through thorough documentation of patient care. Tyler is initially resistant to feedback, typically pushing back in the moment, but does incorporate changes into their clinical performance. When faced with an unfamiliar situation, Tyler will find the evidence to improve their knowledge base but rarely interacts with others on the team. Tyler is sometimes uncomfortable communicating with patients, particularly when other team members are in the room. Most concerning is Tyler's approach to presenting patient cases to the team. They appear anxious and hesitant to answer questions. Adrian shares their observations during an end-of-week meeting, and Tyler discloses that they had several negative experiences with the rotation leader during a previous rotation and were unable to find anyone to help them navigate the situation. Tyler states, "What helped me get by was to just keep my head down."

  • How can Adrian use Tyler’s prior clinical knowledge to enhance their learning?
  • How might Tyler’s experiences in previous rotations be impacting their learning?
  • What might Adrian do to establish a learning environment where Tyler can move forward in team communication?

The second trainee is Shane Smith. Shane has strong interpersonal skills and easy rapport with patients and staff alike. Their assessment skills are somewhat lacking, and they are slow to respond to feedback. It has become noticeable that Shane does not take much initiative in pursuing learning beyond clinic hours, preferring to socialize with staff. For example, on Wednesday, Shane was tasked with investigating a particularly complex diagnosis of a patient that the team encountered. Shane presented a cursory review of the patient’s condition and care during the next day’s teaching rounds. Later, Adrian discovers that Shane shared with peers that they didn’t know where to find additional research evidence regarding the case and “besides this isn’t something I will see in my future practice.”  Adrian contemplates their response to this student who does not appear motivated to learn.

  • Should Adrian impose consequences for not doing more outside learning?
  • What might Adrian investigate relative to the statement that “this isn’t something I will see in my future practice?”
  • Which education practices might Adrian employ to support greater motivation for self-directed learning?

The third trainee is Chris Erickson, whose parent practices in this specialty. Chris has significantly greater knowledge in the specialty, beyond what would be expected. They are excelling in the rotation; things that are challenging to other students seem to come naturally to Chris. Chris expresses that this rotation is of particular interest, as it is a specialty they would like to pursue. During this first week, Chris has responded positively to feedback, setting learning goals to improve current performance and asking for help when needed. With encouragement, Chris is beginning to ask patient care questions for which there is no easy answer but is not taking the initiative to address these more complex problems. Adrian senses that Chris may be ready for a greater challenge and could be guided into higher levels of self-directed learning.

  • What observations has Adrian made that indicate Chris may be ready for greater learning independence?
  • What self-directed learning skills should Adrian teach or reinforce to help Chris be more independent in lifelong learning?
  • What type of learning experiences might benefit Chris and why?

It is nearing the end of the rotation, and each learner has made progress in both their clinical and learning skills. Adrian reflects on how the application of adult learning theories supported the learning of these diverse trainees.

Discussion

When educators approach teaching decisions, it is important to consider the context in which learning takes place, the nature of the content to be learned, and the characteristics of the learners (Pratt, 2016). Clinical teaching is challenging, as education occurs within the context of providing high-quality patient care while attending to learner needs. Among the learning goals Dr. Williams desires for their trainees is instilling values and skills pertaining to lifelong learning. One way to gain an understanding of broad approaches to teaching is to consider a continuum of pedagogical frameworks that vary in the goals of instruction, the role of the educator, and the degree of learner self-direction.

Assumptions underlying pedagogical frameworks

Pedagogy is historically defined as the art and science of teaching, though in contemporary writing it is associated with teaching children. Andragogy is defined as the art and science of facilitating adult learning (Knowles, 1970). Heutagogy blends the art and science of teaching adults with complexity theory, emphasizing self-determined learning (Hase & Kenyon, 2007). There are situations in which each approach is appropriate to use with adult learners.

Pedagogy is characterized by a learning environment where the teacher leads the learning experience and students are in a dependent position. Students are primed to learn what they are told they need to learn in order to progress within a formal academic setting. Although this is often considered less appropriate for adult students, there are situations in which an educator may elect to exercise more control. Machynska and Boiko (2020) suggest that when a learner's prior experience is not sufficient (such as when large amounts of new information become available) or when prior knowledge makes it difficult for learners to accommodate new information, a more directive approach may initially be warranted.

The idea that the art and science of teaching adults (andragogy) is different from teaching children gained popularity in the early 1970s (Merriam & Baumgartner, 2020). The most prominent of the theories and frameworks was introduced by Knowles (1970) as a set of assumptions about adult learners that could be used to derive best practices in adult education. Initially, there were four assumptions in the framework: that adults 1) see themselves as self-directed learners, 2) use their life experiences as a resource for learning, 3) become ready to learn based on developmental tasks related to their social roles, and 4) are more problem-focused with a desire for immediate application of learning (Merriam & Baumgartner, 2020; Wang & Hansman, 2017). Knowles (1984) later added two assumptions: that internal motivation is more influential in learning progress and that adults prefer to know why they need to learn something. Educators can use these assumptions to inform their teaching practice as they assess the needs of their learners.

Andragogy is not without its critics. Each assumption can be challenged in terms of universality; not all adults are ready to be self-directed learners.  Prior experience may also create challenges for learning and require a certain amount of unlearning in the process. Adults may choose to learn for the joy of learning itself rather than to solve an immediate problem.  Additionally, andragogy neglects to address the social context of learning, focusing primarily on individual development. The goal of andragogy focuses on developing competence in one’s social role (Merriam & Baumgartner, 2020). However, developing competence that can be reproduced in familiar situations is not enough for practice in highly complex health care systems characterized by high levels of uncertainty. Moving from andragogy to heutagogy offers a more advanced level of preparation.

Heutagogy proposes an approach to teaching that assists learners in moving beyond competence to capability through self-determined learning (Hase & Kenyon, 2007). The goal of heutagogy is to equip learners with the skills to set their own learning goals, reflect on their learning processes and goal attainment, and utilize double-loop learning processes to fix problems and challenge their underlying assumptions about how to approach the problem (Abraham & Komattil, 2017). Responsibility for learning transfers to the learner, and opportunities that arise within the complex context of clinical practice serve as catalysts for learning. Whereas in andragogy the educator continues to be significantly involved in guiding the learning process, the educator in a heutagogical approach serves more as a coach, providing feedback and facilitating learner reflection on both gains and process.

Role of the Educator

The role of the educator, according to adult learning theory, is characterized as facilitator or helper. This begins with the educator viewing their relationship to the learner as one of supporting the process of learning rather than transmitting knowledge. To be successful, the educator and learner form an alliance to negotiate learning goals and processes. It is important that the educator develops a trusting relationship with the learner through empathy, genuineness, and acceptance. The facilitator is tasked with helping the learner carry out their own goal-directed learning process, negotiating learning goals in relation to the curriculum in an academic setting, providing encouragement and learning guidance, connecting the learner to resources, and assisting in evaluation of learning outcomes (Merriam & Baumgartner, 2020).

Figure 1

Educators can refer to the assumptions made about adult learners to select approaches based on trainee strengths and areas for growth. Using adult learning theory assumptions to underpin our decisions, we will further explore how to create learning environments in which every learner can thrive through self-directedness, the role of prior experience, and motivation/readiness to learn.

Prior experience as a resource for learning

The benefit of prior experience as a resource for learning is one of the underlying assumptions of andragogy. Adults acquire knowledge through life experiences; however, not all adults have experiences that connect to current learning tasks, or those experiences may present a barrier through misconceptions or negative associations (Merriam & Baumgartner, 2020). Each individual's knowledge base is uniquely built through the construction of schemas or "patterns of thought… that organize categories of information or actions and (define) the relationships among them" (van Merrienboer, 2016, p. 15). The function of schemas is to combine previously separate elements into a single element, thus benefiting limited working memory when recall and manipulation of information are required. Learning is a result of consciously constructing and elaborating on these schemas when new information or new connections are introduced. Educators can leverage the power of prior learning by asking trainees to retrieve related knowledge prior to introducing a new topic, relate new knowledge to personal experience, and encourage elaboration as a way to expand and strengthen these schemas (van Merrienboer, 2016).

In addition to knowledge in the field, a learner's ability to function as part of the health care team can be impacted by positive and negative interpersonal or social experiences related to learning. Past life traumas outside or within a learning environment can manifest in suboptimal learning behaviors. For example, it is unfortunate that humiliation as a method of motivating learners, such as questioning that is perceived as overly harsh, still exists within health professions education. This can have a lingering detrimental impact on learning mediated by a loss of confidence and professional satisfaction (Nagoshi, Hahn & Littles, 2019), as we see manifested in our learner's reluctance to interact with the team.

Educators can use the principles of trauma-informed care to support those who are struggling with past experiences and to set a positive tone for all learners. These principles include creating spaces where both physical and psychological safety exist, being transparent in teaching, and promoting peer support, collaboration, and empowerment to encourage and motivate all students (Brown et al., 2021). Educators can role model team communications that accept not knowing and mistakes as opportunities for growth, share personal stories relevant to the situation, and develop relationships with each learner (Brown et al., 2021).

Struggling learners often benefit from debriefing of difficult experiences. One guideline for debriefing comes from the peak-end rule related to forming memories of past experiences. The peak-end rule states that memories are framed and based mainly on the peak intensities and conclusion of the experience (Cockburn et al., 2015). In this context, the instructor can utilize reflective thinking with the student to analyze and frame the peaks and conclusion of the clinical experience. Questions such as what was learned or gained can help reframe the experience. The instructor can also guide the student to apply reflection to past negative experiences: to reframe the memory of the experience and prepare for future experiences.

When working with trainees and indeed anyone who is learning, incorporating the learner’s prior experiences has significant benefits through facilitating the acquisition of new knowledge and skills, as well as addressing any gaps in knowledge or negative experiences in prior educational settings. Attending to this aspect of adult learning theory demonstrates respect and is an important component for developing life-long learning skills.

Supporting Intrinsic Motivation

Another characteristic found in andragogy is that adults learn best when they are intrinsically motivated, i.e., driven by a desire to learn based on interest, enjoyment, and inherent satisfaction (Ryan & Deci, 2020). However, this natural tendency can be supported or thwarted by teachers, peers, organizations, and learning environments. Our learners may not demonstrate this intrinsic motivation, which often appears as the trainee not being serious about the work or not engaging in self-directed learning. When intrinsic motivation is lacking, there is often an appeal to various extrinsic motivators. Extrinsic motivation is nuanced, ranging from highly externalized motivators such as rewards and punishments for compliance to increasingly autonomous extrinsic motivation that is self-selected based on values and consistency with self or professional identity (Ryan & Deci, 2020). Although adult learners do respond to extrinsic motivation (grades, acceptance by peers and educators, sense of duty), intrinsic motivation leads to greater learning. Imposing consequences on adult learners may change behavior in the short term, but focusing on ways to stimulate intrinsic motivation will lead to long-term benefits (Wang & Hansman, 2017). One approach to structuring the environment to promote intrinsic motivation is through self-determination theory. According to this theory, motivation rests on three pillars: autonomy, competence, and relatedness (Ryan & Deci, 2020). Attention to each of these areas can help the educator promote motivation for learning.

According to adult learning theory, motivation in adult learners is tied to the characteristics of readiness to learn, problem-solving orientation, and the need to know why one is being asked to learn (Knowles, 1984). Readiness to learn is driven by a perceived need to better function within chosen social roles (problem-solving and immediacy of application), in this context as a health professional. Given limited exposure to the breadth of practice, it may be necessary for the educator to help the learner envision the connections between current learning and future performance expectations. Another tactic is to help the learner tap into their individual professional interests to find growth opportunities within the learning environment (Orsini et al., 2015; Thammasitboon et al., 2016; van der Goot et al., 2020). The concept of readiness to learn and the need to know the why behind what one is being asked to learn can be seen as aspects of autonomy, supporting the learner's "sense of initiative and ownership in one's actions" (Ryan & Deci, 2020, p. 1). This supports Knowles' (1984) assumption that adults perceive themselves as self-directed learners.

In addition to autonomy, motivation is supported through a sense of competence, defined as the experience of effectiveness and mastery (Orsini et al., 2015). Supporting the development of competence includes behaviors such as providing timely feedback, introducing progressively more complex and challenging situations to manage, and encouraging vicarious learning while providing the supports needed to maintain safety (Orsini et al., 2015; van der Goot et al., 2020). Adult learners are motivated by the need to solve problems, and success in managing increasingly complex or ill-defined problems promotes a justified assessment of growing competence. Facilitating competence is directly related to growing autonomy as more responsibility is transferred to the learner.

Underlying this cycle of growing competence and autonomy is a sense of relatedness. This is demonstrated in educator-learner relationships where each is willing to share academic and professional experiences and to give and receive feedback, all within a psychologically safe environment. This is an aspect that is not well described by andragogy but is increasingly recognized as crucial for promoting learner growth. Although being able to be vulnerable, admit mistakes, and welcome feedback are important for learning, the clinical learning environment presents challenges (Dolan, Arnold & Green, 2019). Assessment can be too closely tied to grades or other evaluative documentation rather than focusing on formative assessment for learning. Implicit bias can influence assessments, even when they are criterion-based. Educators often do not have enough time to attend to assessment in ways that build trusting relationships. These challenges can be mitigated by adopting a mastery orientation that rewards growth based on feedback and reflection, with time to achievement being flexible. In addition, including learners in designing assessments not only improves relatedness but supports autonomy and competence as well (Dolan, Arnold, & Green, 2019). Relatedness also encompasses creating communities of practice, including peers, to connect around professional and scholarly activities (Orsini et al., 2015; Thammasitboon et al., 2016). Sharing a common humanity in the pursuit of quality patient care promotes learning for all involved.

Promoting Self-Direction in Learning

As educators, we want to recognize when our trainees are ready to become more independent in pursuing their learning. Knowles defined self-directed learning as "a process in which individuals take the initiative, with or without the help of others, in diagnosing their learning needs, formulating learning goals, identifying human and material resources for learning, choosing and implementing appropriate learning strategies, and evaluating learning outcomes" (1975, p. 18). Self-direction lies on a continuum from dependent on others for all aspects of learning to independent, self-determined learners who develop their capability to perform in increasingly complex environments (Hase & Kenyon, 2007). The self-determined learner not only establishes their goals but also reevaluates their goals and changes them according to their needs and level of progression. Learners advance in their readiness for self-directed learning as they expand their prior knowledge and experience and discover their internal motivation for further learning. This provides a fertile field for promoting greater self-direction in learning.

The process of self-directed learning also lies on a continuum from linear to interactive. Linear models are a more traditional step-by-step process of progression to reaching goals, involving significant pre-planning. On the other end of the spectrum, the interactive model is driven by opportunities in the learning environment and is inherently more spontaneous, arising from a combination of learners who take responsibility for their learning, processes that encourage learners to take control, and an environment that supports learning (Merriam & Baumgartner, 2020). The interactive model provides many benefits in the clinical setting, including greater adaptability than a linear model. Within an interactive model, the learner seeks experiences within their environment, applies past and new knowledge, and capitalizes on the learning environment's spontaneity (Merriam et al., 2007). These experiences often occur in clusters or sets of experiences. These clusters eventually form the whole of the experience, as they combine in context with other learning clusters (schemas).

Along the continuum, it is incumbent upon educators to assist learners as needed to set goals, locate resources, participate in learning activities, and assess progress. Educators should also encourage learners to ask their own questions. Another important skill is the ability to monitor one's own process and progress in learning. Learners can be encouraged to move from single-loop learning, which focuses only on outcomes, to a double-loop process where the learner pauses to evaluate their processes and the assumptions that underlie their learning in addition to analyzing outcomes. In essence, the learner is guided to ask themselves why they do what they do; this additional depth of metacognition moves the learner further along the continuum toward self-determined learning (Jho & Chae, 2014).

The role of the educator includes constructing processes and environments that are conducive to learner self-direction (Merriam & Baumgartner, 2020). This includes identifying or creating authentic learning opportunities for trainees. For example, Thammasitboon et al. (2016) describe the implementation of scholarly activities within clinical practice to promote greater self-determination in learning. The trainees were able to select areas of professional interest and pursue a project with educator facilitation, with positive results for the learners. This activity promoted the learners' sense of autonomy and competence, thus enhancing their motivation while they learned valuable skills in self-directed learning.

Conclusion

Adult learning theory (andragogy and heutagogy) serves as a basis for understanding the needs of our learners. The assumptions of this model include that adults prefer to see themselves as self-directed, want to know the why behind what they are learning, use prior experience for learning, and are motivated intrinsically by a desire to meet their current challenges. Educators can create and sustain learning environments and relationships that support learners in connecting past and current learning with future expectations while modeling and rewarding self-directed and self-determined learning.

References

Abraham, R.R., & Komattil, R. (2017). Heutagogic approach to developing capable learners. Medical Teacher, 39(3), 295-299. doi: 10.1080/0142159X.2017.1270433

Brown, T., Berman, S., McDaniel, K., Radford, C., Mehta, P., Potter, J., & Hirsh, D.A. (2021). Trauma-Informed Medical Education (TIME): Advancing curricular content and educational context. Academic Medicine, 96(5), 661-667.

Cockburn, A., Quinn, P., & Gutwin, C. (2015). Examining the peak-end effects of subjective experience. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 357-366). New York, NY: ACM. https://doi.org/10.1145/2702123.2702139

Dolan, B., Arnold, J. & Green, M.M. (2019). Establishing Trust When Assessing Learners: Barriers and Opportunities. Academic Medicine, 94(12), 1851-1853. doi: 10.1097/ACM.0000000000002982.

Hase, S, & Kenyon, C. (2007). Heutagogy: A Child of Complexity Theory. Complicity: An International Journal of Complexity and Education, 4(1), 111-118.

Jho, M. Y., & Chae, M.-O. (2014). Impact of Self-Directed Learning Ability and Metacognition on Clinical Competence among Nursing Students. The Journal of Korean Academic Society of Nursing Education, 20(4), 513–522. https://doi.org/10.5977/jkasne.2014.20.4.513

Knowles, M. S. (1970). The modern practice of adult education: andragogy versus pedagogy. New York: Association Press.

Knowles, M.S. (1975). Self-directed learning: A guide for learners and teachers. Englewood Cliffs, NJ: Prentice Hall/Cambridge.

Knowles, M.S. (1984). The Adult Learner: A Neglected Species (3rd Ed.). Houston, TX: Gulf Publishing.

Machynska, N., & Boiko, H. (2020). Andragogy – The Science of Adult Education: Theoretical Aspects. Journal of Innovation in Psychology, Education and Didactics, 24(1), 25-34.

Merriam, S.B., & Baumgartner, L.M. (2020). Knowles' andragogy and McClusky's theory of margin. In Learning in Adulthood: A Comprehensive Guide (pp. 117-136). Jossey-Bass.

Merriam, S. B., Caffarella, R. S., & Baumgartner, L. (2007). Learning in adulthood: A comprehensive guide (3rd ed.). Jossey-Bass.

Nagoshi, Y., Hahn, P., & Littles, A. (2019). The secret in the care of the learner. In Z. Zaidi, E. Rosenberg, & R.J. Beyth (Eds.), Contemporary Challenges in Medical Education (pp. 146-162). Gainesville: University of Florida.

Orsini, C., Evans, P., Binnie, V., Ledezma, P., & Fuentes, F. (2015). Encouraging intrinsic motivation in the clinical setting: Teachers' perspectives from the self-determination theory. European Journal of Dental Education, 20, 102-111. doi: 10.1111/eje.12147

Pratt, D.D. (2016). Five Perspectives on Teaching: A Plurality of the Good (2nd ed.). Dave Smulders and Associates.

Ryan, R.M., & Deci, E.L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68-78. doi: 10.1037/0003-066X.55.1.68

Thammasitboon, S., Darby, J.B., Hair, A.B., Rose, K.M., Ward, M.A., Turner, T.L., & Balmer, D.F. (2016). A theory-informed, process-oriented Resident Scholarship Program. Medical Education Online, 21, 31021. doi: 10.3402/meo.v21.31021

van der Goot, W.E., Cristancho, S.M., de Carvalho Filho, M.A., Jaarsma, A.D.C., & Helmich, E. (2020). Trainee-environment interactions that stimulate motivation: A rich pictures study. Medical Education, 54(3), 242-253. doi: 10.1111/medu.14019

van Merrienboer, J. (2016). How people learn. In N. Rushby & D.W. Surry (Eds.), The Wiley Handbook of Learning Technology (pp. 15-34). John Wiley & Sons.

Wang, V.C.X., & Hansman, C.A. (2017). Pedagogy and andragogy in higher education. In V.C.X. Wang (Ed.), Theory and Practice of Adult and Higher Education (pp. 87-111). Information Age Publishing.

Multiple Choice Questions

1. After sharing assessment information with your learner and identifying gaps in performance, they work with you to set learning goals and determine processes by which to improve in that area. You both agree that you as the educator will assess their progress and provide additional feedback. This is an example of applying which approach to teaching?

a. Pedagogy

b. Andragogy

c. Heutagogy

d. Synergogy

The correct answer is b. This is an example of andragogy in that the educator is promoting self-direction in setting learning goals and learning processes while retaining a significant role in assessing a skill that addresses current competencies. In contrast, a pedagogical approach would consist of the educator controlling all aspects of the learning process, whereas in heutagogy the learner would be in control of all aspects of the learning process and the focus would be on applying competencies in a complex environment.

2. Which of the following teaching-learning approaches would best support a self-determined learner in a clinical/experiential setting? Encourage them to …

a. make a detailed self-directed learning plan for your feedback

b. analyze how they reached a particular clinical judgment

c. select a topic to present from a rotation-specific list

d. research the use of a particular therapeutic intervention

The correct answer is b. By encouraging the learner to analyze how they reached a particular clinical judgment, you are promoting double-loop learning and self-regulation, both of which are important for adapting to complex environments. The other approaches, such as providing feedback on a learning plan prior to execution or limiting areas of exploration by selecting a topic from a rotation-specific list or researching a particular therapeutic intervention, are more strongly associated with self-directed learning.

3. Which of the following actions taken by the educator in a clinical/experiential setting is an effective approach to honoring the role of the trainee's prior experience in learning?

a. Incorporating your own stories into a clinically-based lecture

b. Asking questions to uncover knowledge gaps.

c. Providing detailed clinical explanations to build cognitive schema

d. Demonstrating the proper way to accomplish a common psychomotor skill

The correct answer is b. By asking questions, the educator can uncover current knowledge, skills, and misconceptions, as well as identify any gaps. Incorporating your own stories is a powerful method for teaching but centers the educator's prior experience. Providing detailed explanations of clinical cases does not help learners activate their own prior learning. Demonstrating a common psychomotor skill that the learner has already mastered does not assist learners to expand on their current knowledge.

4. When working with a group of trainees, you notice that they do not appear to be motivated to learn what you identify as an important clinical skill. Which of the following interventions could be used to enhance intrinsic motivation for learning this skill?

a. A test at the end of the week

b. A friendly competition with a reward

c. An explanation of why the skill is important

d. A reminder that it will look good on their record

The correct answer is c. Explaining why a particular skill is important to their practice will enhance intrinsic motivation. The use of a test, a friendly competition, or the reminder that it will look good on their record are all examples of extrinsic motivators.

A Soundboard Approach to Facilitate Effective Feedback to Health Professions Trainees

Abstract

It is well known that health professions trainees benefit from feedback that is timely, constructive, and focused on observable behaviors. However, providing such feedback becomes increasingly difficult in situations where the stakes (possible negative outcomes), the trainee's resistance to the feedback, or both increase. In this short communication, we use the metaphor of the sound engineer's soundboard to highlight the benefits of being adaptive to the interplay of these two factors when giving feedback to trainees in these difficult situations. We describe and illustrate the metaphor and then share educators' responses to it.

Background

Health professions trainees require feedback from science educators that is effective: timely, constructive, and focused on observable actions and behaviors.1 We define "trainee" broadly to include both pre-professional and post-graduate students and "science educator" broadly to include teachers, mentors, and preceptors. Best practices in providing feedback exist2,3 and often work well in routine situations. Nevertheless, we have observed that these practices generally do not explicitly consider: (1) the stakes (high or low) or possible consequences of the situation necessitating the feedback to be offered; and (2) the extent to which the trainee is resistant to feedback. We have observed from personal experience that the quality of the feedback can be impaired if the approach to giving feedback is not adequately adjusted to account for these two factors. This dynamic between the quality of outcomes and the attention given to underlying conditions reminded us of the challenges a sound engineer faces when using a soundboard to modulate the interplay of multiple inputs to produce a harmonious output. A soundboard is the electronic console used in live and recorded performances on which there are input channels with sliders that are adjusted to obtain the right sound mix.

Figure 1: SRM Soundboard Axes
Figure 1: SRM Soundboard Axes: The X axis of the SRM Soundboard indicates the stakes associated with the situation for which the feedback is being given. Low stakes are at the far left and high stakes are at the far right. The Y axis indicates the trainee’s resistance to feedback (positioned as the reverse of receptivity in order to align the axes), with low resistance (high receptivity) at the bottom left and high resistance (low receptivity) at the upper left. The placement of the gray bubble indicates the perception of the person completing the SRM Soundboard of the stakes (low or high) of the situation and the degree of mentee resistance. The diameter of the bubble indicates the difficulty of the feedback to be provided, which determines the level of mentor skill required for the feedback encounter.

Building on this conceptual association, we developed the SR (Stakes/Trainee Receptivity) Soundboard, a metaphor (see Figure 1) to illustrate the educator's task of adjusting their approach to giving feedback to accommodate the stakes of the feedback situation and the trainee's receptivity to feedback. In our model, the educator is envisioned as adjusting these sliders, as if on a soundboard, to obtain the right mix for a good outcome in giving feedback. Just as with a soundboard operator, the more highly skilled the educator, the more adept they will be at recognizing which channels need attention. For example, if the educator applies advanced skills, they may be able to reduce the stakes of the problem. If the trainee is resistant to feedback, the stakes of the problem may increase and the situation may require greater skill on the part of the educator. A sound engineer knows that all sliders can move independently – none are stuck in place – so there are a large number of possible variations for the mix to create a harmonious outcome. In the same way, all of the variables influencing the ultimate success of feedback given can be seen as modifiable in a particular situation. Such insights might lead the educator to consider how, given their skill level or the trainee's receptivity, they might break down the issue into its constituent parts, each of which is perceived as being lower stakes than the issue in its entirety. Or perhaps the educator will seek out a more experienced educator to improve their skill in providing complex feedback to an unreceptive trainee, or to assist in providing the feedback. Application of this conceptual model also allows one to envision the educator-trainee relationship as a melody playing out over time, with the sliders moving as needed and taking into account feedback discussions that have come before.

Activity

Our use of the soundboard metaphor involves tuning the axes on Figure 1 to provide the best “mix” of the two components to realize the ideal “sound” or outcome of the feedback.  Mix 1 (low stakes and low resistance) represents a best-case scenario in which well-established feedback models should be successful and in which educator skill in providing feedback may be relatively less important.  Mix 2 (high stakes and low resistance) represents a more challenging feedback scenario, such as delivering difficult news to a highly engaged trainee, which will require more skill on the part of the educator to provide effective individualized academic support or resources for the trainee.  Mix 3 (low stakes and high resistance) may require more skill on the part of the educator to discern the basis for and to address the potential negative impact of low receptivity on the part of the trainee.  The educator’s ability to address the receptivity of the trainee will impact the effectiveness of the feedback given.  Finally, Mix 4 (high stakes and high resistance) represents the most difficult scenario for educators to provide feedback to trainees and may involve lack of resolution or escalation of a previously addressed issue or a new, difficult issue with a trainee who is unreceptive to feedback.  Mix 4 scenarios will likely demand the greatest skill on the part of the educator to provide effective feedback to ensure resolution of the issue and maximal trainee professional development.

Importantly, the SR Soundboard metaphor highlights that the conditions in which feedback is offered can vary as an individual feedback encounter unfolds over time.  Low receptivity/high resistance is a trainee-dependent factor.  For example, among medical students, less experienced trainees tend to value positive feedback more highly than constructive feedback, while more experienced trainees tend to value constructive feedback more highly than positive feedback.6  The extent of resistance may also vary as a consequence of context, even if the actual feedback being provided is similar across two contexts.  Likewise, the overall complexity and, ultimately, the success of the feedback situation can vary independent of the level of trainee receptivity, due to differences in the level of skill the educator brings to the situation.  We believe this model will allow educators to identify these conditions more precisely, especially as they unfold during the feedback session and, therefore, increase the likelihood that the feedback encounter will be successful, particularly when interacting with highly resistant trainees or for high-stakes issues.  

Figure 2: Using the SRM Soundboard to Track the Progress of Feedback
Figure 2: Using the SRM Soundboard to Track the Progress of Feedback: This figure presents hypothetical feedback scenarios illustrating the influence of differing degrees of mentee resistance and mentor skill on the evolution of the feedback scenario and its complexity. In both scenarios, the mentor’s perception of both the stakes and the mentee’s resistance are initially the same; however, as the feedback session evolves, it is apparent that the resistance of the mentee in Scenario 1b is higher, as reflected by the mentee’s response (text box 2) to the initial statement from the mentor (text box 1). Consequently, bubble 2 in scenario 1b is higher than bubble 2 in scenario 1a and its diameter is larger, as the complexity of the feedback to be provided is increased. As the feedback session evolves, the influence of mentor skill is evident in the movement and diameter of the bubbles to different locations for 3a (high degree of mentor skill applied) vs. 3b (low degree of mentor skill applied).

We introduced the SR Soundboard to educators from across the health professions, in three different sessions using professional actors to portray trainees in a simulated feedback scenario in which the feedback complexity varied as a function of trainee receptivity (see Figure 2).  After each simulated conversation, we debriefed by reviewing the maps to identify places where the conversation might have gone differently, and where it went particularly well.

The simulated scenario involved a scholarly writing project (e.g., class assignment or clinical note) that was not well written and bordered on plagiarism (e.g., cutting and pasting); however, the situation is "low stakes" because it is occurring early in the course/semester/rotation. A "low resistance" student (Scenario 1a) may be one who is very receptive to receiving feedback and wants to be a better writer but doesn't know where to start or how to improve. This student is open to resources the mentor can provide. This is an ideal combination of a low-stakes situation, low trainee resistance, and high mentor skill; thus, a successful feedback encounter is likely. Conversely, if the mentor skill is low, the mentor may be frustrated that this trainee lacks basic writing skills or ethics and may feel that it will not be possible, or is not their role, to mentor this trainee on basic skills that should have been acquired by this point in the program. In this case, although the stakes of the feedback situation and the trainee receptivity have not changed, a successful outcome may be less likely. Similarly, one can envision this same scenario of the "low stakes" writing project, yet the trainee resistance to feedback is high (Scenario 1b); the trainee thinks they are a great writer, has always received excellent grades/remarks on their writing, and feels as though their writing is acceptable and not plagiarism. In this case, the mentor may be concerned about the seriousness of the problem but realizes that the feedback situation is more complex, as they must first direct the feedback to increasing the trainee's self-assessment capabilities and understanding of the problem from the standpoint of ethics and writing skills. Only after the trainee has a more realistic assessment of their writing skills and an appreciation of the ethical aspects of their actions can the feedback turn to improving writing skills. Alternatively, if the mentor brings a low level of skill to the scenario, they may be frustrated with the trainee's lack of self-awareness and inability to take responsibility for unacceptable work. In this case of high trainee resistance and low mentor skill, a successful feedback encounter may be less likely.

Results and Discussion

We piloted the SRM Soundboard in three workshops with faculty from across the health professions, using professional actors to portray trainees in the scenarios described above, but with no scripted dialogue or predetermined outcome (see Figure 3). After two of the three sessions, we obtained written feedback from participants in a post-session evaluation indicating that the visualization of the encounter provided by the SR Soundboard is highly useful in understanding the progression and transition points in a feedback encounter. Based on this feedback, we conclude that the SR Soundboard is a useful metaphor to objectively capture the major factors that contribute to the mix of an educator-trainee feedback encounter.

Figure 3: Actual feedback scenarios with different levels of resistance to feedback tracked in real time using the SRM Soundboard
Figure 3: Actual feedback scenarios with different levels of resistance to feedback tracked in real time using the SRM Soundboard: Actual feedback sessions tracked in real-time by an observer in which a single mentor provided feedback regarding the same scenario (i.e., equal stakes) to two different “trainees” portrayed by trained actors. As the feedback session evolves in Scenario 2a, it becomes clear that the mixing of the mentee’s receptivity and the mentor’s skill leads to a decrease in the stakes and in the complexity of the situation, as reflected in the movement of the bubbles to the lower left corner and the progressive decrease in their diameter. Conversely, in Scenario 2b, the mentee’s resistance necessitates an increase in the mentor skill applied. Despite the increase in mentor skill, the mentee resistance does not change and the feedback situation becomes increasingly complex, as reflected by the increasing diameter of the bubbles. Eventually, the mentor resolves the present feedback session by increasing the stakes through presentation of an ultimatum to the student, who then acquiesces.

The SR Soundboard may be used to capture an individual feedback encounter or to capture feedback encounters over time.  This approach allows educators from multiple health science disciplines to track personal and professional development competency domains related to feedback across the dimensions of significance of the issue (i.e., low to high stakes), trainee receptivity, and situational complexity in order to determine the corresponding level of educator skill required to deliver effective feedback.  Feedback from our session participants indicates that the visualization of a feedback session that is offered by this model is highly useful.  Research is needed to determine the feasibility of adopting these tools among academic educators and community educators.

References

  1. Moss HA, Derman PB, Clement RC.  Medical student perspective:  Working toward specific and actionable clinical clerkship feedback.  Med Teach 2012; 34(8): 665-667.
  2. Hewson MG, Little ML.  Giving feedback in medical education: Verification of recommended techniques.  J Gen Intern Med 1998; 13:111–116.
  3. Ramani S, Krackov SK: Twelve tips for giving feedback effectively in the clinical environment. Med Teach 2012; 34:787–791.
  4. Sonthisombat P.  Pharmacy student and preceptor perceptions of preceptor teaching behaviors.  Am J Pharm Educ 2008; 72(5): 110.
  5. Straus SE and Sackett DL. Mentorship in Academic Medicine.  Hoboken, NJ: Wiley Blackwell; 2014.
  6. Murdoch-Eaton D, Sargeant J.  Maturational differences in undergraduate medical students’ perceptions about feedback.  Med Educ 2012; 46(7): 711-721.
  7. Young S, Voss SS, Cantrell M, Shaw R.  Factors associated with students’ perception of preceptor excellence.  Am J Pharm Educ 2014; 78(3): 53.

American Association of Neurological Surgeons Joint Sponsored Activities: A longitudinal comparison of learning objectives and intent-to-change statements by meeting participants (Abstract)

Published in J Contin Educ Health Prof. 2021 Dec 1.
doi: 10.1097/CEH.0000000000000408. Online ahead of print. PMID: 34862334

Abstract

Background: Continuing medical education (CME) activities are required for physician board certification, licensure, and hospital privileges. CME activities are designed to specifically address professional knowledge or practice gaps. We examined participants' "intent-to-change" statements as data to determine whether the CME activity content achieved a stated learning objective.

Methods: We performed a retrospective mixed-method thematic content analysis of written and electronic records from American Association of Neurological Surgeons (AANS) sponsored CME activities. Data were analyzed using a quantitative, deductive content analysis approach. Meeting objectives were examined to determine whether they resulted in specific intent-to-change statements in learners' evaluations of the CME activity, both on a direct basis for one year and longitudinally over 6 consecutive years. Intent-to-change data that did not align with meeting objectives were further analyzed inductively using a qualitative content analysis approach to explore potential unintended learning themes.

Results: We examined a total of 85 CME activities, averaging 12–16 meetings per year over 6 years. This yielded a total of 424 meeting objectives averaging 58–83 meeting objectives each year. The objectives were compared with a total of 1950 intent-to-change statements (146–588 intent-to-change statements in a given year). Thematic patterns of recurrent intent-to-change statements that matched with meeting objectives included topics of resident education, complication avoidance, and clinical best practices and evidence. New innovations and novel surgical techniques were also common themes of both objectives and intent-to-change statements.

Intent-to-change statements were not related to any meeting objective an average of 37.3% of the time. Approximately a quarter of these unmatched statements led to new learning objectives in subsequent CME activities. However, the majority of intent-to-change statements were repeated over a number of years without an obvious change in subsequent meeting learning objectives. An examination of CME learning objectives found that 15% of objectives had no associated intent-to-change statements.

Conclusion: An examination of CME learning objectives and participant intent-to-change statements provides information for examination of both meeting planner and learner attitudes for future CME activity planning.

One-Minute Preceptor: An Efficient and Effective Teaching Tool

This document will become one of many chapters in a textbook on education in the health professions to be published by Oxford University Press. All of the chapters in the textbook will follow a Problem-Based Learning (PBL) format dictated by the editors and used by these authors.

Abstract

As learners progress from early health professions education to the clinical learning environment, there is a need for high-quality instruction from their clinical preceptors to foster the application of knowledge to patient care. The busy clinical environment poses challenges to both learners and educators, as there is limited time to meet both the learner's and the patient's needs. The One-Minute Preceptor is an easily learned clinical teaching tool that features five microskills initiated by the educator: getting a commitment from the learner, probing for supporting evidence, teaching a general rule, reinforcing what was done right, and correcting mistakes. This model has been well studied. It effectively and efficiently imparts high-quality education to the learner without compromising patient care. It is a modality preferred by learners and preceptors alike. Although the original intent was for use in the ambulatory care setting while working with a learner one-on-one, it can be adapted to a variety of settings. There are also several factors that can facilitate the model's success, including educator adaptability and focusing on the principles of effective feedback.

Keywords

One-Minute preceptor, clinical education, teaching model, efficient teaching, effective teaching, feedback

Learning Objectives

  1. Identify strengths, weaknesses, and situations in which to utilize the "One-Minute Preceptor" model
  2. Define and utilize the 5 steps of the One-Minute Preceptor
  3. Adapt the model for various learning environments and groups of learners
  4. Identify barriers to using the “One-Minute Preceptor” model and strategies for resolving them.

Case

Case: 

Maria is a medical student just starting her clerkship year and has been assigned to the pediatric endocrine clinic for a week.  She arrives at the front desk and recognizes the waiting room is already full.  She is greeted and brought back to wait in the team room. 

Questions:

How is learning in the clinical environment different than her previous, pre-clerkship medical school experience?

How can she best meet her learning needs while fitting into the flow of her assigned clinic?

Case progression:

Dr. Wright, Maria's assigned preceptor, walks through the clinic door to see a full waiting room. The clinic receptionist greets him, saying, "Your medical student Maria is here and she is sitting in the team room." Dr. Wright had forgotten this was the first day of the clerkship.

Questions:

How can Dr. Wright best meet the medical needs of his patients, keep up with his schedule and still provide the student with a meaningful educational experience?

Is there a proven format that can help?

Case progression:

Following introductions, Dr. Wright quickly shows Maria around the clinic and describes the schedule and his expectations. When asked about her learning goals, Maria is unsure how to respond.

Questions:

How will Maria learn what she is capable of and where she falls short?

How will Dr. Wright observe Maria’s work enough to learn about her abilities and knowledge gaps?

How will Dr. Wright find time in the busy clinic to provide effective feedback safely?

Case progression:

Dr. Wright enters the first patient's room with Maria and makes introductions. He then leaves the room so Maria can obtain a history and perform a physical exam.

Questions:

How can Dr. Wright learn about Maria’s questioning and exam skills without being there?

How can Dr. Wright ensure that Maria’s questioning and exam follow a hypothesis-driven progression?

Case progression:

Maria presents her findings to Dr. Wright when he finishes up with a patient. She pauses, looking to Dr. Wright for next steps. Dr. Wright quietly notes to himself that he is already late for his next patient but wants to provide clinical teaching to Maria.

Questions:

How is Dr. Wright able to teach and keep up with his clinical schedule?

How does Dr. Wright provide efficient disease-specific teaching to meet Maria’s needs?

Case progression:

Maria feels stressed recognizing that Dr. Wright is busy.  She is also worried that she is not performing well enough.  There is so much new here in the clinic!

Questions:

What are the components of effective feedback?

Can effective feedback be provided in a busy clinic?

Discussion

The clinical environment introduces educational challenges distinct from those in classroom-based health professions education. In the latter, a more structured environment, there are many teaching modalities to facilitate knowledge acquisition: team-, case-, and problem-based learning, simulation, classroom teaching, and self-study. Additionally, during that time, a student's education is the primary focus of the teaching faculty, and their performance is reported directly as a score on a summative assessment. As students become integrated into the clinical environment, their emerging knowledge and skills are stretched by the real-world complexity of clinical applications. For example, students need to balance the disease-based knowledge obtained through their reading with actual patient symptoms to construct a prioritized differential diagnosis and a patient-specific management plan. In the clinical setting, faculty need to create a fruitful and safe educational environment while concurrently administering exceptional patient care. Teaching modalities leaned on heavily in early health professions education are less congruent with the environment of clinical practice. The assessments students receive can be more subjective and based on short interactions.

To be feasible, models of teaching need to adapt to this environment.  To be useful to the instructor, the model must provide insight into both the patient’s illness and the learner’s abilities, be easy to use, and fit within the tight time constraints imposed by increasing patient volumes.  To be beneficial to the learner, the model should allow for autonomy in a psychologically safe environment, provide direct teaching that improves an area of weakness, and impart honest feedback.  Multiple models have been published to overcome these challenges and maximize learning (SNAPPS, concept mapping and the One-Minute Preceptor).1  Each approach has different strengths and weaknesses.  Based on a broad evidence base detailing its efficiency, efficacy, learner and preceptor preference, and its adaptability for multiple health professions and settings, we will delve into the specifics of the One-Minute Preceptor model.1,2

The five-step “microskills” model of clinical teaching, known more commonly as the “One-Minute Preceptor,” was first formally described in the Journal of the American Board of Family Practice in 1992 by Neher et al.3  This clinical teaching method earned its name due to its emphasis on providing a brief teaching moment within the context of a busy clinical setting.  The model was originally created by senior educators at the University of Washington to provide less experienced family practice preceptors an educational framework to improve their teaching.  It was initially presented within the University of Washington Family Practice Network Faculty Development Fellowship curriculum and at other regional and national meetings.  Since the 1990s, use of the One-Minute Preceptor has spread across various disciplines as an effective approach to clinical teaching.

The five microskills are simple teaching behaviors focused on optimizing learning when time is limited.3  The model is best initiated by the clinical preceptor after the learner has seen a patient and presented details about the case. The preceptor then encourages the learner to develop their own conclusions about the patient from the information they have gathered.  Next, the preceptor identifies gaps in the learner’s knowledge and provides specific teaching and feedback to fill those gaps. This approach differs from traditional models in which the preceptor asks a series of clarifying questions, mostly to aid the preceptor in correctly diagnosing the patient.3

The first microskill is to “get a commitment from the learner”.3  This entails asking the learner to commit to a certain aspect of the patient’s case.  For example, after the learner presents the patient, the preceptor may ask, “What do you think is the most likely diagnosis?” or “What laboratory tests would you like to order?”.  This encourages the learner to make a decision and demonstrate their level of knowledge.

The second microskill is to “probe for supporting evidence”.3  After the learner makes a commitment, this step allows the supervising clinician to better understand the learner’s thought process and identify knowledge gaps.  The preceptor may ask, “What aspects of the patient’s history support your diagnosis?” or “How did you select those laboratory tests?”. 

The third microskill is to “teach a general rule” that ideally helps fill a knowledge gap identified in the first two steps.3  This is meant to be a brief teaching pearl about one aspect of the patient’s case.  For example, the preceptor may highlight physical exam findings that support the most likely diagnosis or discuss an additional laboratory test that could help narrow the differential diagnosis. 

The fourth microskill begins the feedback portion of the model and “reinforces what was done right”.3  Feedback should always be specific, timely, and focused on behaviors.4,5 

The fifth microskill “corrects mistakes”.3  This should be done after allowing the learner to assess their own performance first.  Educators are also encouraged to provide context while giving feedback, highlighting the positive impact of the learner’s behaviors and how to correct any errors that took place.   The five microskills are meant to be a brief set of teaching tools to provide relevant teaching points and feedback in a few quick minutes.

There are many process-oriented strengths of the One-Minute Preceptor model that explain its widespread use.  First, this model improves upon more traditional approaches in that it not only focuses on the learner but also has the benefit of facilitating correct diagnosis of the patient.6  The first two steps delve into the learner’s knowledge base, thought process, and potential gaps so that the later steps can provide teaching and feedback that are specific to the learner’s needs in that moment.  It has also been shown that teaching is disease-specific rather than generic when using this model.7  Educators are more likely to provide teaching points that are focused on differential diagnoses, patient evaluation, and disease progression than more general topics, such as approaches to history taking or presentation skills.7  This higher level of teaching can focus on the learner’s decision-making process and clinical reasoning ability, which are essential skills for optimal patient care.3,8  Another process-oriented strength of the model is its efficiency.  In addition to the teaching being high-yield and learner-centered, it is also quick to work through and is viewed by preceptors and residents as more effective and efficient.6,9  The model is also easy for preceptors to learn in just an hour or two.3  Receiving training in the One-Minute Preceptor model also increases preceptors’ self-efficacy as educators and increases the likelihood that they will choose to precept in the future.10  Finally, feedback is often lacking in more traditional teaching encounters, which can leave the learner unsure of their performance and where they should focus their learning.  By integrating feedback into the process, the One-Minute Preceptor model has improved the quality and specificity of feedback, even in busy clinical environments.11

            As with any teaching process, there are limitations and weaknesses of the One-Minute Preceptor model.  The premise of the model relies on good information gathering from the learner and an ability to convey this information to the preceptor.  This may be challenging for more junior learners.  As it is a preceptor-driven model, faculty development and practice are necessary for success.  Also, more junior educators such as residents may feel less comfortable teaching general rules due to lack of confidence or limitations in their own knowledge base.12  Due to its focus on efficiency, the general rule that is taught must be limited and succinct and the preceptor may need to omit other key learning points. 

There have been many studies on the outcomes (i.e., impact and efficacy) of the One-Minute Preceptor model since its inception.  There is evidence to support that this teaching method benefits educators, students, and patients alike.  Clinician educators find the model to be more effective and efficient than traditional models.6  They also indicate higher confidence in their ability to rate learners and tend to rate learner performance more favorably than with more classic methodology.6  The One-Minute Preceptor is also useful in everyday teaching practice, with faculty in the original study indicating that they used the five microskills in 90% of their teaching encounters and all found it at least somewhat helpful.3  Learners also favor this model over more traditional approaches.13  Medical students rate resident teaching skills higher after the resident has received training in the One-Minute Preceptor.12  Learners are also more likely to be included in the decision-making process when this model is used as compared to more traditional models.13  Learners also benefit from increased feedback.  With this teaching model, they receive higher quality feedback in that it is specific and includes constructive comments in addition to positive ones.11  One common barrier to effective clinical teaching is that it takes time away from the patient, but the One-Minute Preceptor aims for efficiency and leaves time for quality medical care.  There is also evidence to show that patients are more likely to be diagnosed correctly when the One-Minute Preceptor is used versus more traditional models.6

The original intent of the One-Minute Preceptor was to assist clinician educators in the ambulatory clinical setting.  This is an ideal environment as learners are often presenting patients to their preceptor one-on-one.  This setting provides an opportunity for preceptors to tailor teaching to the individual learner’s needs.  Furthermore, there are often high patient volumes in the ambulatory setting with limited time per patient, making quick clinical teaching models necessary for workflow.  The model does function best in the context of patient care rather than in the classroom setting, as the basis for starting this approach is a learner’s presentation of an actual patient’s case.  It also may be challenging at the patient’s bedside as the teaching is tailored to the learner’s level of understanding rather than the patient’s. Despite its initial application in the ambulatory setting, the One-Minute Preceptor has been used effectively in other clinical and educational environments.  It has been adapted and implemented to teach multiple learners on the inpatient wards.14  After a learner presents a patient on rounds, the clinician educator can then ask the learner to make a commitment to the diagnosis.  If the learner struggles at this step, the same question can then be posed to a more senior learner on rounds.  General rules and feedback can be delivered quickly during rounds as well.  With multiple learners, it may be effective to alter step three (“teach a general rule”) and highlight several general rules, more basic learning points for junior learners and more complex ones for senior learners.  As the clinical setting is often unpredictable, altering the model to fit the scenario can be beneficial.  One environment which might require some adaptation for the One-Minute Preceptor model to be successful is a high-acuity setting, like the emergency department.  Since presentations may happen at the bedside, learners should be counseled ahead of time on what discussions are appropriate to have in front of patients.15  Learners should also be encouraged to circle back to their preceptor to complete the model if an interruption arises.  Another adaptation might be that learners make a commitment on the patient’s most acute problem rather than completing a full assessment, as there might not be appropriate time during very critical and pressing scenarios.16  To further aid feasibility, preceptors might opt not to use all the steps in every encounter or to alter their exact order.17  In some instances, only a few of the steps may apply.  This allows for widespread use of the model in a variety of situations, even while teaching procedures.

Table 1. One-Minute Preceptor User’s Guide 2, 3, 4, 17, 18

Step 1. Getting a commitment
Goal: The resident should internally process the information they gathered to create an assessment of the situation.3 Learners can be asked to commit to primary or alternative diagnoses, next diagnostic step or potential therapies.18
Approaches to initiate step: This step is usually initiated following the learner presentation. This questioning can evolve through longitudinal experiences with the same learner.
• “What do you think is the most likely diagnosis for this patient?”2
• “What do you think is going on with this patient?”3
• “I like your thinking that this might be pneumonia; what other diagnoses are you considering?”2
• “What laboratory tests do you feel are indicated?”3
• “What would you do for this patient if I weren’t here?” (to decrease pressure of “the ideal” answer)18
Learner deficit identified: Failing to commit could indicate difficulty processing the information, fear of exposing a weakness or dependence on the opinions of others.3 Alternatively, the learner might not have integrated some relevant information they had gathered, which could suggest lack of content knowledge.2,17
Possible remedy for identified learner deficit: Assuming a safe environment, this identified mistake in processing is a teaching opportunity.3 The next step will help elucidate if that teaching point should focus on the learner’s processing, a knowledge deficit, or the need for hypothesis-driven data gathering.
Facilitators for success:
• Create a safe and supportive environment to allow the learner to feel comfortable being vulnerable to make a commitment instead of more safely staying quiet.3
• If necessary for patient care, preceptors can ask a few brief clarifying questions. This should be limited at this stage, as too much questioning highlights the preceptor’s thought process rather than the learner’s.3 These questions are more appropriate later in the process.
• Learners should be gently pushed to make a commitment just beyond their level of comfort.18
Step 2. Probing for supporting evidence
Goal: Help learners reflect on their reasoning to identify process or knowledge gaps.17
Approaches to initiate step: Open-ended questions aimed at having the learner identify information used to arrive at their commitment:
• “Why do you think that is the most likely diagnosis?”2
• “What were the major findings that led to your diagnosis?”3
• “Did you consider any other diagnoses based on the patient’s presentation and exam?”2
• “How did you rule those things out?”17
• “Why did you choose that particular medication?”17
Learner deficit identified: Probing allows clear evaluation of learner’s knowledge and clinical reasoning and identification of gaps and deficits.
Possible remedy for identified learner deficit: Any deficits (either knowledge or reasoning) identified in this step can serve as content for the next step, “teaching a general rule”.17
Facilitators for success:
• Preceptors should avoid passing judgement or talking and teaching immediately.3 By listening and learning which facts support the learner’s commitment, the teaching point can be tailored to the learner. This decreases the likelihood of general teaching that might repeat areas the learner already knows.3
• Maintain a supportive environment.
Step 3. Teaching a general rule
Goal: Preceptor shares expertise with a relevant and succinct learning point based on what the preceptor learned about the learner’s knowledge and deficits.3
Approaches to initiate step: Direct statements work well:
• “There was a recent journal article indicating that children with otitis media do not necessarily require antibiotics, unless they meet certain criteria…”
• “In elderly people with confusion, it is important to ask about recent medication changes.”
• “Following an uncomplicated vaginal delivery, our standard of care is a follow-up contact within 3 weeks.”
Facilitators for success:
• This step can be skipped if the resident has performed well and no gaps are obvious, or if more information is needed for a decision.3 The saved time can be spent gathering additional information with the patient.
• Generalizable and succinct “take-home” teaching points relevant to the patient are preferred to complete lectures or descriptions of preceptor preferences.3,17 Topics can include disease-specific features, patient-specific management decisions, or areas for follow-up.18
• If, during the probing step, you identify larger knowledge gaps, it might be more appropriate to assign more comprehensive reading or plan a slightly longer discussion for a later time.18
Step 4. Reinforcing what the learner did well
Goal: Recognize, validate and encourage certain behaviors. Appropriately build learner confidence.3
Approaches to initiate step: A timely, direct, specific statement that is based on the behavior directly observed by the preceptor is ideal.4, 17 Asking the learner what they felt they did well is an effective place to start.18
• “I was impressed with how you obtained a thorough social history on our patient and noted that smoke exposure at home may be exacerbating her asthma.”
Facilitators for success:
• Aim for specific statements which are more helpful than general praise.3 Brief positive statements can be integrated into the questions from the preceding steps as well.17 (During “probing for evidence”: “Asking about travel history was a great thought, what was your motivation?”)
Step 5. Correcting mistakes
Goal: Tactfully improve learner performance.3
Approaches to initiate step: A timely, direct, specific statement is helpful.4 Asking the learner where they feel they could improve can help the preceptor start the conversation from where the learner feels they are.3,4,18
• “A thorough skin exam is important in every patient. Noting his Janeway lesions may have brought endocarditis to the list of his potential diagnoses.”
Facilitators for success:
• Maintain a collaborative and psychologically safe environment.4 “Focus on the decision, not the decision-maker.”4 Finding the right moment and setting for this part is helpful for success.3,4 The most effective feedback occurs in quiet, relaxed areas soon after the observed performance.3,4 This can be challenging as the clinical environment is unpredictable and often fairly public.
• Asking students ahead of time how and when they want to receive feedback can be very helpful.18
• Very specific feedback for areas of improvement is more actionable and measurable than general criticism.4 Concrete improvement suggestions can move this delicate conversation in a positive direction; general criticism can impair the supportive and trusting environment.
• Faculty development efforts can be helpful for successful implementation.

Multiple Choice Questions:

  1. Which of the following is NOT a step in the One-Minute Preceptor model?
    A. Correct mistakes
    B. Get a commitment
    C. Provide five teaching points
    D. Teach a general rule
    Answer: C
  2. Which of the following are benefits of the One-Minute Preceptor model?
    A. Increases quality of feedback to learner
    B. Improves efficiency and effectiveness of clinical teaching
    C. Provides disease-specific, rather than generic teaching
    D. All of the above
    Answer: D
  3. How can the One-Minute Preceptor model be adapted in the emergency department setting?
    A. Prioritize all five steps of the model over patient care
    B. Get a commitment on the patient’s most urgent clinical issue
    C. Encourage the patient to provide feedback instead of the clinician educator
    D. Skip teaching a general rule since time is limited
    Answer: B
  4. Which of the following is the best way to come up with the general rule to teach?
    A. Teach a knowledge gap identified in step two (“probing for supporting evidence”)
    B. Teach a general rule that the learner already knows to reinforce it
    C. Teach the general rule that you know the most about
    D. Teach a general rule that pertains to the next patient that the learner will see
    Answer: A

References

  1. Pierce C, Corral J, Aagaard EM, Harnke B, Irby DM, Stickrath C. A BEME realist synthesis review of the effectiveness of teaching strategies used in the clinical setting on the development of clinical skills among health professionals: BEME guide no. 61. Med Teach. 2020; 42(6): 604-615.
  2. Gatewood E, DeGagne JC. The one-minute preceptor model: a systematic review.  JAANP. 2019; 31(1): 46-57.
  3. Neher JO, Gordon KC, Meyer B, Stevens N. A five-step “microskills” model of clinical teaching. J Am Board Fam Prac. 1992; 5(4): 419–424.
  4. Ende J. Feedback in clinical medical education. JAMA. 1983; 250: 777-81.
  5. Kelly E, Richards JB. Medical education: giving feedback to doctors in training. BMJ. 2019; 366-370.
  6. Aagaard EM, Teherani A, Irby DM. Effectiveness of the one-minute preceptor model for diagnosing the patient and the learner: proof of concept. Acad Med. 2004; 79(1): 42–49.
  7. Irby DM, Aagaard E, Teherani A. Teaching points identified by preceptors observing one-minute preceptor and traditional preceptor encounters. Acad Med. 2004; 79(1): 50–55.
  8. Richards JB, Hayes MM, Schwartzstein RM.  Teaching clinical reasoning and critical thinking: from cognitive theory to practical application.  Chest. 2020; 158(4): 1617-1628.
  9. Arya V, Gehlawat VK, Verma A, Kaushik JS. Perception of one-minute preceptor (OMP) model as a teaching framework among pediatric postgraduate residents: A feedback survey. Indian Journal of Pediatrics. 2018; 85: 598.
  10. Miura M, Daub K, Hensley P.  The one-minute preceptor model for nurse practitioners: a pilot study of a preceptor training program.  JAANP. 2020; 32: 809-816.
  11. Salerno SM, O’Malley PG, Pangaro LN, Wheeler GA, Moores LK, Jackson JL. Faculty development seminars based on the one-minute preceptor improve feedback in the ambulatory setting. Journal of General Internal Medicine. 2002; 17: 779–787.
  12. Furney SL, Orsini AN, Orsetti KE, Stern DT, Gruppen LD, Irby DM. Teaching the one-minute preceptor: a randomized controlled trial. J Gen Intern Med. 2001; 16: 620-624.
  13. Teherani A, O’Sullivan P, Aagaard EM, Morrison EH, Irby DM.  Student perceptions of the one-minute preceptor and traditional preceptor models. Med Teach. 2007; 29(4): 323–327.
  14. Pascoe JM, Nixon J, Lang VJ. Maximizing teaching on the wards: review and application of the One-Minute Preceptor and SNAPPS models. J Hosp Med. 2015; 10(2): 125–130.
  15. Farrell SE, Hopson LR, Wolff M, Hemphill RR, Santen SA. What’s the evidence: a review of the One-Minute Preceptor Model of clinical teaching and implications for teaching in the emergency department. J Emerg Med. 2016; 51(3): 278–283.
  16. Sokol K. Modifying the one-minute preceptor model for use in the emergency department with a critically ill patient. J Emerg Med. 2017; 52: 368–369.
  17. Lockspeiser TM, Kaul P. Applying the one-minute preceptor model to pediatric and adolescent gynecology education. Journal of Pediatric and Adolescent Gynecology. 2015; 28: 74–77.
  18. Neher JO, Stevens NG. The one-minute preceptor: shaping the teaching conversation. Fam Med. 2003; 35(6): 391-393.

Assessment of Learners

Learning objectives

  1. Compare and contrast feedback, formative assessment, summative assessment, evaluation, and grading.
  2. Identify frameworks for providing learner assessment and tracking growth in the health professions.
  3. Identify key components to providing feasible, fair, and valid assessment. 
  4. Describe the roles and responsibilities of both preceptors and learners in optimizing assessments and evaluations.

Abstract

This chapter explores the concepts of learner assessment and evaluation by presenting a case in which a medical student participates in a year-long clinical experience with a preceptor. Using various data points and direct observation, the student is given both formative and summative assessments throughout the learning experience, providing them with information needed to guide their learning and improve their clinical skills. As the case progresses, questions are posed in order to help identify key concepts in learner assessment and explore the interconnectivity between assessment, evaluation, feedback, and grading. The information presented will help educators identify and develop effective assessment strategies that support learner development and growth.

Key Words: assessment, evaluation, learner, formative, summative, grading

Case

Morgan is a medical student who is beginning a new pediatric clinical experience. This learning opportunity includes weekly outpatient clinics with you as a preceptor. This is Morgan’s first opportunity to learn clinical skills outside of the classroom setting.   

  • What is the role of a learner in the clinical setting as they progress through their training?
  • What are some of the frameworks available for assessing learners’ abilities in the clinical setting?

After introducing yourself and the clinic staff to Morgan, you give him a quick tour of the facilities before sitting down in your office to discuss his current learning goals. He identifies obtaining a history as an area he would like to improve. Specifically, he would like to improve his ability to take a history that is comprehensive but tailored to the chief complaint and the clinical setting. You advise him that you will regularly assess him and provide feedback. You recommend that he keep a patient log so he can track the number of patients and complaints he sees throughout the experience.

  • What is the difference between assessment and feedback?
  • What are the roles of the faculty and the learner in providing the learner with assessment and feedback?

In the first week, Morgan sees a 6y/o girl who presents with a fever. You follow the student into the room, allowing him to enter first.  After introducing Morgan, you ask the family if they are comfortable with Morgan taking the history. The family is excited to contribute to the education of a medical professional and readily agrees. Morgan stands against the wall, looks down at his tablet to pull up his notes, and begins: “The medical assistants told us your daughter has a fever. How long has it been going on?” He then proceeds to ask about the nature of the fever, some associated symptoms (including runny nose, cough, and rash), and alleviating and exacerbating factors. He asks about her past medical history, including surgeries and medicines, and then conducts a full family and social history. You ask the family a few follow-up questions and perform a physical exam, finishing the visit by discussing the most likely diagnoses and developing a plan with the patient and family.

After sending the family on their way, you ask Morgan how he felt the history went. You ask him to reflect on what he did well and what he should continue to work on. After listening to Morgan’s response, you provide feedback on your assessment, including concrete suggestions for improvement. The student thanks you for the feedback and commits to integrating your recommendations into his practice. He also thanks you for the opportunity to observe your approach to counseling families and performing a physical exam that puts the patient at ease.

  • What are the key components of effective formative assessment?
  • How often should formative assessment occur to optimize learning and growth?

You continue to work with Morgan over the following weeks. He sees multiple children of all ages with several different complaints. When able, you accompany him into the room so that you can directly observe his history-taking. However, there are multiple times that he goes in alone and then reports his findings to you. During many encounters, he is unsure how to interpret his exam findings and asks you to double-check his technique and interpretations. At times, you ask follow-up questions related to the chief complaint and he admits that he does not know the answer. When this occurs, he reports honestly that he did not ask the question. He promises to ask when you both return to the room. You note that he frequently asks the question in subsequent encounters with patients.

  • What role does trustworthiness play in the assessment of learners?

Three months into the year, a 5y/o child, Kai, presents with a fever. You accompany Morgan into the room to directly observe his history. You obtain the family’s permission for Morgan to participate in their child’s care. Morgan begins by introducing himself and asking the family if it is okay for him to take some notes while they talk. He begins, “Kai, I’m sorry you aren’t feeling well. Can you tell me what’s been going on?” Kai says she doesn’t feel good and has a fever. Morgan proceeds to obtain a comprehensive but focused history of the fever, including both the patient and parents in the conversation. While taking the history, he uses active listening skills, asks clarifying questions, and summarizes the information for the family to ensure he fully understands their concerns. He asks about commonly associated symptoms and symptoms related to possible diagnoses. He asks the family about treatments they have tried (including over-the-counter and homeopathic remedies), asks about their concerns regarding the fever, and includes recent travel and sick contacts in his social history. Before moving on to the physical exam, the student asks the family if he has missed anything important about the chief complaint or about Kai’s medical history.

  • What does a learner need to do to show “competence” or the ability to effectively perform a professional activity without supervision?
  • How do learner assessment frameworks help track/note improvements in learner performance?

After you conclude the visit and leave the room, you ask Morgan how he feels the encounter went and how he has progressed with his goal of obtaining a history. He is happy with his progress and able to identify areas in which he has improved and things he would still like to work on. You agree that his skills have improved and provide him with formative feedback regarding your assessment of his performance today. You ask him to stay after the clinic so the two of you can review his progress to date.

  • What is the difference between formative and summative assessment?
  • What are the benefits of longitudinal relationships in both formative and summative assessment?

After the clinic, you sit down with Morgan. You ask him to pull out his patient log and the two of you go through the patients he has seen over the 3 months he has been with you so far. He has been collecting a portfolio of interesting cases and experiences. He brings with him the notes he took when getting feedback on his weekly formative assessments. The two of you go through his portfolio and patient log. He reflects on the improvements he has made, identifies areas he can continue to improve, and sets new learning goals. You agree with his findings and provide further guidance on growth you have observed and areas he can continue to work on. You continue this pattern of sitting down with Morgan every 3 months throughout the remainder of the learning experience to review his progress, discuss learning goals, and add to his portfolio.

At the end of the year, you thank Morgan for his participation in the care of your patients. The school has an evaluation form that asks about students’ strengths and areas requiring further growth. You consider all the work you have done with Morgan, his assessments, and his growth throughout the year. You fill out the evaluation form, providing a summative assessment that includes both quantitative (performance ratings) and qualitative (narrative comments) information. Morgan is required to take a final “exam” that includes a multiple-choice test and an observed encounter with a simulated patient, where an actor plays the role of a patient. Morgan receives a final grade for the rotation with comments on his performance.

  • What are the key components of effective summative assessment?
  • What are the methods and key components of learner evaluation?
  • What are the similarities and differences between assessment and evaluation?
  • What role does the learner have in accepting and reviewing their evaluation?

Assessment and learning in health sciences education

The goal of health sciences education is to provide the environment, information, and experiences needed for learners to develop the knowledge, skills, and attitudes required to practice as a professional in their specific field. Ultimately, the responsibility for learning lies with the learner.1 The teacher’s role is to support and challenge learners in their journey, providing information, supervision, and assessment in order to help them grow and improve in their abilities.

Assessment is one of the most important methods teachers use to support and challenge their learners.2,3 Assessment, in essence, is the process of judging a student’s performance relative to a set of expectations.4 Through assessment, the teacher guides learning by helping students identify their unique strengths and weaknesses and providing concrete recommendations to address these areas. These may include knowledge gaps, skill sets requiring further practice, or even misunderstandings in requirements and attitudes that need reframing. This is why learning and assessment are linked together – one can’t really be achieved well without the other. In an ideal learning environment, every teacher considers it their responsibility to assess learners routinely and consistently, challenging them to demonstrate their current abilities and then supporting them in their growth where needed.

Assessment can take many forms, varying based on the circumstances of the environment and learner; types of knowledge, skills, and attitudes being assessed; and the primary purpose for which the assessment information will be used. For example, types of knowledge and skills can vary from remembering basic facts to thinking critically to conducting complex surgical procedures. Assessments, therefore, will differ and may range from multiple-choice written tests to oral examinations to procedural skills simulations or direct observations in the clinical learning environment, respectively. Overall, assessment should be used to tailor individual learners’ education and experiences to support their growth. Each assessment may be formative and relatively informal, geared toward iteratively shaping performance, or may be more formal, geared toward giving information about learning outcomes.

Formative vs. Summative Assessment

Over time, specific terms have emerged to differentiate among the variations in assessment described above. One of the most important distinctions is between formative and summative assessment. Think of these as a continuum.5 On one end is formative assessment. Formative assessments tend to be less formal and focused on providing information to help students ‘form’ their knowledge, skills, and attitudes. They should be performed regularly and may be completed after a single experience or observation. On the other end of the spectrum are summative assessments, which tend to be more formal and focused on “summarizing” a learner’s knowledge and skills after a certain time period. Formative and summative assessments can be systematically sequenced and combined within a school to optimize learning, so that assessments from individual teachers contribute to a larger program of assessments conducted by school leadership to create a holistic understanding of learners’ strengths and weaknesses.6,7 In this chapter, we focus on assessments made by individual teachers.

With formative assessments, teachers use limited data to identify learners’ strengths and areas needing further development and help guide the learner’s education and experiences to support this learning. With summative assessments, teachers use more comprehensive information in order to judge learning outcomes achieved to date and check the learner’s knowledge and skills. These assessments tend to combine information from multiple sources and settings and include information from different time points. To better understand the difference, take the example of a runner competing in a marathon. The athlete is receiving formative assessments and feedback throughout the race, including lap time, current pace, and current position. After the race is completed, the runner gets a summative assessment, including average pace per mile, time to course completion, and overall rank among finishers. Formative assessment may be used by the runner to adjust their strategies and plans throughout the race. Alternatively, summative assessment information can help to guide the runner as they prepare for and begin their next race. Often, a summative assessment is tied to a decision-making process, such as a final grade.

Figure 1: Relationship of formative and summative assessment

Assessment and Evaluation

Another important distinction in education is between assessment and evaluation. Although they are often used interchangeably, there are differences. Assessment refers to the process of collecting evidence of learning, identifying learners’ strengths and areas needing further development and growth. Evaluation refers to the process of comparing evidence of progress to learning objectives or standards (criterion-referenced) or even to other learners’ performances (norm-referenced). In other words, assessment focuses on the learning process while evaluation focuses on the learning outcomes compared to a standard. Keeping the focus on assessment supports growth-mindset learning and the idea that health professionals are life-long learners.8 The shift to competency-based education and assessment emphasizes criterion-referenced evaluation, promoting self-improvement in learning rather than competition with other learners. Alternatively, overemphasis on evaluation can set up an environment that focuses on performance-mindset learning.

Now, you may be wondering how feedback and grading fit into assessment and evaluation. Feedback refers to information provided to the learner about their knowledge, skills, or attitudes at a single point in time after a direct observation or assessment. Grading is a form of evaluation, providing the learner with an overall score or rank that is based on their performance.

Assessment Frameworks

For many years, educators used the term learning objectives to describe desired outcomes they wanted learners to achieve through a learning experience. Objectives usually include action verbs and are stated in the following format: “At the end of this module, learners will be able to…”, followed by a description of a specific behavior. Refer again to the learning objectives at the beginning of this chapter as examples. More recently, however, educators have begun to state desired learning outcomes as competencies. Competencies refer to a combination of knowledge, skills, values and attitudes required to practice in a particular profession.9 These abilities are observable, so they can be assessed. Learners are expected to demonstrate “competence” in all abilities related to their field prior to practicing without supervision. Therefore, the purpose of most learning programs is to prepare learners to achieve competence in all of the identified critical activities for that profession.

Most professions have identified several competencies. For example, the Association of American Medical Colleges has identified 52 competencies for practicing physicians. These have been organized into domains: medical knowledge, patient care, professionalism, interpersonal and communication skills, medical informatics, population health and preventative medicine, and practice-based & systems-based medical care.9 When used together, they describe the “ideal” physician. Although they are comprehensive and provide a strong basis for the development of an assessment strategy, their descriptions can be abstract and therefore difficult to assess concretely in the setting in which a learner practices. As a result, obtaining routine and meaningful assessments of these competencies during medical school and graduate medical education is proving to be a challenge.10 In response to these challenges, various organizations have developed approaches to better defining expectations of learners and assessing their progress throughout their training.

One of these new approaches, Entrustable Professional Activities (EPAs), is growing in popularity across the health sciences. This approach focuses on assessment of tasks or units of practice that represent the day-to-day work of the health professional. Performance of these activities requires the learner to incorporate multiple competencies, often across domains.11-14 For example, underlying all of the EPAs are the competency of trustworthiness and an understanding of a learner’s individual limitations, which leads to appropriate help-seeking behavior when needed.15 EPA frameworks have been created by many of the health science education fields, including nursing, dentistry, and medicine. One of the earliest organizations to adopt EPAs was the Association of American Medical Colleges. In 2014, they identified thirteen core EPAs for graduating medical students entering residency.16 These thirteen EPAs encompass the units of work that all residents perform, regardless of specialty. Examples include “Gather a history & perform a physical exam,” “Document a clinical encounter in a patient record”, and “Collaborate as a member of an interprofessional team.”

The goal of EPA assessments is to collect information about learners’ “competence” in performing required tasks in their respective field. They assess a learner’s level of readiness to complete these activities with decreasing levels of supervision. As learners progress in their abilities, they will be able to perform these activities with less and less supervision from teachers, moving from observing only, to performing with direct supervision, to performing with indirect supervision, to performing without supervision. A major benefit of EPAs is that they provide a holistic approach to assessment. Each EPA requires integration of competencies across domains in order to perform the activity. Since faculty routinely supervise learners performing these professional activities in the clinical learning environment, they find them more intuitive to assess. If multiple direct observations of the activities are performed and the learner demonstrates competence to perform them without need for direct supervision in multiple contexts (e.g., various illness presentations, different levels of acuity, multiple clinical settings, etc.), then a summative assessment can be made that the learner is competent to perform this activity without direct supervision in future encounters.

Figure 2: Example of EPA supervision scale

• Observe only: Able to watch the supervisor perform the activity.
• Direct supervision: Allowed to perform the activity with the supervisor in the room.
• Indirect supervision: Allowed to perform the activity with the supervisor outside of the room; the supervisor will double-check findings.
• Practice without supervision: Allowed to perform the activity alone.

Characteristics of High-quality Assessments

Not only can frameworks improve the quality and effectiveness of learner assessment strategies, but certain principles can also be applied to individual assessments in order to support growth-mindset learning and achieve the assessment’s desired goals. As would be expected, not all assessments are of equal value.17 High-quality assessments tend to follow 6 simple rules:

Rule 1: Direct observation. Observe learners’ actual performance whenever possible. This means that you are present while learners work with patients in the clinical setting, watching them use the knowledge and skills you are assessing. Frequently, educators observe small parts of the activities and rely on learner reporting of findings to make a judgement on how well the learner performed. Some of the reported information can be double-checked by the preceptor by independently speaking with the patient and performing an exam. However, the gold standard is direct observation of an encounter (e.g., how did they ask questions? What was the technique for administering the vaccine?). Making assumptions can lead to inaccurate assessments and missed opportunities for growth.

Rule 2: Consider context. Use multiple observations and data to guide summative assessments, evaluations, and grading. Learners’ performance may vary based on patient population, presentation of the problem, acuity, and clinical context. Getting multiple assessments in various clinical contexts allows you to see patterns in behavior that will better reveal strengths and areas for improvement.

Rule 3: Consider the learner’s current abilities. Sequence learning tasks based on the learner’s level of ability and assess accordingly in order to maximize learning. Aligning assessment difficulty with the knowledge and skills that the learner is most prepared to learn next – building upon what is already known – will help ensure that assessment optimizes learning.

Rule 4: Learner participation. Learners should actively participate in their assessments. Ask learners to self-assess their skills, knowledge, and attitudes. Ask them to identify learning goals for themselves and ensure your assessments encompass these goals.

Rule 5: Feedback. Share results of the assessment with the learner in a timely manner. This is especially important for formative assessment as it should be used to guide learning and work on acquisition of competencies within the current clinical setting.

Rule 6: Behavior-based recommendations. Identify specific strengths and areas for improvement, providing the learner with examples of where these behaviors were observed. Identify areas where learners can improve, focusing on specific, behavior-based recommendations that are attainable. Think to yourself “What does this learner need to do to get to the next level of competence or the next stage of supervision?”

Table 1: Characteristics of high-quality assessments

HIGH-QUALITY ASSESSMENTS
Utilize direct observation
Vary observations to include different skills, settings, complaints, complexity, and acuity
Match the goals of the learning experience
Sequence the level of difficulty of the clinical tasks that are being assessed
Include learners in their set up and implementation
Consider and encompass the learner’s goals
Provide concrete information on how to progress to the “next level”
Provide timely feedback to the learner
Can be strengthened by using a formal assessment framework (e.g., EPAs)

End of module questions

Keith is a nursing student who is learning to give immunizations. After obtaining consent, he and his preceptor, Leticia, enter the room where he administers three intramuscular vaccinations to a 4-year-old child. After observing the encounter, Leticia uses the EPA framework to determine that Keith still needs direct supervision when performing vaccine administration. What is this an example of?

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Sarah is an occupational therapy student who is learning to do a swallow evaluation on an adult who recently suffered a stroke. She performs the examination while her preceptor Phyllis observes. After the encounter, Phyllis pulls Sarah into a private area and asks her to reflect on the experience, identifying areas she did well on and things she can improve on. Phyllis then describes what she observed and gives Sarah clear and concrete recommendations for improving her performance. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Anthony is a dental student who has just completed a rotation in geriatric dentistry. Upon completion of the course, leadership compiled his preceptor evaluations, observed structured clinical encounter assessment form, patient logs, exam score, and patient feedback. They used all the information to provide Anthony with a narrative summary of his strengths and areas for improvement. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Anthony’s performance was compared to a list of set objectives and expectations for the course. Based on his performance, he was provided with a grade of “Honors” in the course. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

What is a key component of feasible, fair, and valid assessment?

  1. Use direct observation
  2. Use multiple encounters to provide formative assessment
  3. Highlight all the learner’s weaknesses
  4. Use single encounters to provide summative assessment.

Bibliography

1.         Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33(3):206-214.

2.         Swan Sein A, Rashid H, Meka J, Amiel J, Pluta W. Twelve tips for embedding assessment. Med Teach. 2020:1-7.

3.         Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387-396.

4.         Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32(8):676-682.

5.         Bennett RE. Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice. 2011;18(1):5-25.

6.         van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205-214.

7.         van Der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve Tips for programmatic assessment. Med Teach. 2015;37(7):641-646.

8.         Dweck C. What Having a “Growth Mindset” Actually Means. Harvard Business Review. 2016.

9.         AAMC. Physician Competency Reference Set. https://www.aamc.org/what-we-do/mission-areas/medical-education/curriculum-inventory/establish-your-ci/physician-competency-reference-set. Accessed May 31, 2021.

10.       Fromme HB, Karani R, Downing SM. Direct observation in medical education: a review of the literature and evidence for validity. Mt Sinai J Med. 2009;76(4):365-371.

11.       Al-Moteri M. Entrustable professional activities in nursing: A concept analysis. Int J Nurs Sci. 2020;7(3):277-284.

12.       Carney PA. A New Era of Assessment of Entrustable Professional Activities Applied to General Pediatrics. JAMA Netw Open. 2020;3(1):e1919583.

13.       Pinilla S, Kyrou A, Maissen N, et al. Entrustment decisions and the clinical team: A case study of early clinical students. Med Educ. 2020.

14.       Tekian A, Ten Cate O, Holmboe E, Roberts T, Norcini J. Entrustment decisions: Implications for curriculum development and assessment. Med Teach. 2020;42(6):698-704.

15.       Wolcott MD, Quinonez RB, Ramaswamy V, Murdoch-Kinch CA. Can we talk about trust? Exploring the relevance of “Entrustable Professional Activities” in dental education. J Dent Educ. 2020;84(9):945-948.

16.       AAMC. Core Entrustable Professional Activities for Entering Residency: Curriculum Developer’s Guide. https://www.aamc.org/media/20211/download. Published 2017. Accessed May 31, 2021.

17.       Boyd P, Bloxham S. Developing Effective Assessment in Higher Education: a practical guide. 2007.

Feasibility and Benefit of Using a Community-Sponsored, Team-Based Management Project in a Pharmacy Leadership Course

Abstract

Objective. Assess the impact of community-sponsored, team-based management projects in a leadership and management course on PharmD students’ teamwork skills and project sponsor satisfaction. 

Design. Third-year pharmacy students were divided into eight to ten groups to complete a project proposed by local pharmacists as a “lab” to practice teamwork skills. Projects were intended to meet a real need of the submitting organization or the pharmacy profession.

Methods. A validated Team Performance Survey (TPS) assessed teamwork effectiveness. Project sponsors completed surveys to evaluate the quality of the students’ work, the likelihood of project implementation, the benefit of participation, and willingness to sponsor future projects.

Findings. One hundred percent of students and sponsors completed the assessments. Average TPS scores across 2017, 2018, and 2019 show that 17 of 18 activities were rated by over 90% of students as being used “every time” or “almost every time,” indicating that students performed well in this team setting. Free-text responses indicated that students found value in participating in management projects. Common themes of project advantages included networking with sponsors, teamwork, building community in the classroom, the autonomy of creating deliverables, and applicable and impactful projects. All sponsors were willing to participate again, and the majority listed interacting with students and increasing their connection to the College as benefits. Ninety-five percent of sponsors said they were “extremely” or “somewhat likely” to implement the student project.

Summary. Community sponsored, team-based management projects in a leadership and management course serve as a model for developing students’ teamwork skills within pharmacy curricula.

Keywords: leadership, teamwork, curriculum, management, project

Introduction and Leadership Framework

The American Society of Health-System Pharmacists (ASHP) 2005 landmark publication highlighted the need for more intentional leadership development in Doctor of Pharmacy (PharmD) programs. The publication stressed that many key pharmacy leadership positions could go unfilled, including a need for over 4,000 new directors in the following decade, unless the pharmacy profession addressed the lack of leadership training.1  White et al. identified that 75% of pharmacy directors do not anticipate remaining in their current positions and only 17% of employers have the ability to fill vacant leadership positions within two months. Employers face difficulty hiring for leadership positions due to 1) a lack of practitioners with leadership experience, 2) a lack of interest among current practitioners, and 3) the belief that leadership positions are stringent and stressful. ASHP’s 2012 leadership assessment identified that “a higher percentage of employers (from 3% in 2004 to 17% in 2011) could fill vacant leadership positions within two months, and 37% of employers reported that filling a leadership position was more difficult than it was three years ago (from 57% in 2004).”2 While the need for pharmacist leadership training has been increasingly recognized and addressed over the past decade, preparing individuals to tackle complex leadership issues remains a challenge.

Student pharmacists may enter the workforce with insufficient leadership skills to effectively serve in a formal leadership position, to function effectively within clinical teams, and to advance the pharmacy profession.3 Elective courses and optional extra-curricular activities increase students’ exposure to leadership, but these limited opportunities may lead to an overall lack of leadership training within the profession.3 During a complete revision of the PharmD curriculum, The University of Utah College of Pharmacy identified a need for intentional leadership development through curricular mapping. To address this need, The University of Utah College of Pharmacy developed a longitudinal leadership curriculum based on the framework of Relational Leadership (RL) created by Primary Care Progress (PCP), a non-profit, grassroots leadership development organization dedicated to advocating for improved health care. RL, currently utilized in a multi-site interprofessional, cross-generational leadership program called the Relational Leadership Institute and with interprofessional PCP student teams across the country, consists of four domains: manage self, foster teamwork, coach and develop, and accelerate change.4 This new leadership curriculum sought to identify innovative and authentic ways to give students experience related to these four domains of leadership.

Several institutions have successfully developed educational projects that focus on leading change5, medication reviews to prevent adverse events such as falls6, or disease state education.7 However, these projects tend to be facilitated by faculty or students rather than current practitioners. The Leadership and Management course at the University of Utah College of Pharmacy (UUCOP) piloted an experiential component of community practitioner-sponsored, team-based management projects to provide context and a “learning lab” for enhanced self-awareness and effective teams. Team-based management projects move away from simulation, business planning, or mock exercises that could have minimal applicability to students.8-10 Given that engaging with pharmacists allows students to see how leadership can shape real-world practice and has been shown to be effective,11 the team-based management projects connected students with current practitioners to collaborate on management or practice-based projects.

Based on a review of the literature, the community-sponsored, team-based management projects at the University of Utah College of Pharmacy present a novel, “win-win” educational experience in leadership development for both pharmacy students and pharmacist-sponsors in a leadership and management course. This paper describes a professional curriculum course focused on developing teamwork skills through tackling real-life management problems, with the goal of equipping students with the skills necessary to become successful team members. This paper is the first of its kind to assess the feasibility and benefit of community-sponsored, team-based management projects in a leadership and management course, drawing on the experiences of PharmD students and project sponsors over three years.

Educational Context and Methods

The UUCOP is a public college of pharmacy within a large academic medical center with an enrollment of approximately 60 students per class. PHARM 7340 Leadership and Management for Pharmacists is a required, two-credit didactic-experiential course taught in the fall semester of the third professional year within the four-year didactic curriculum. A significant portion of the UUCOP longitudinal leadership curriculum resides in the Leadership and Management for Pharmacists course. Students who enter the course have a basic understanding of RL from lectures and activities in other courses with introductory learning specifically related to self-awareness, but limited learning related to effective teams.  The course meets once weekly for two hours over 14 weeks and includes didactic lectures with active learning strategies, reflection, breakout sessions, application exercises, activities in a separate recitation course, and a small-group project. Students complete two didactic modules to build a basic understanding of key leadership concepts, then participate in community practitioner-sponsored, team-based management projects.

Experiential: Practitioner-Sponsored Projects

Involving community sponsors from various settings exposes students to a variety of leadership styles and practices.12 Before each semester began, local pharmacists serving in leadership or clinical roles in retail, ambulatory care, hospital, and managed care settings were contacted and invited to submit management or practice-related projects that could be completed in approximately nine weeks. Sponsors submitted a project form (Appendix 1) to guide the creation of the project, ensure the project would meet course objectives, and provide deliverables for the students to complete by the end of the semester. The project form was reviewed and approved by the faculty course director, with feedback given and adjustments made as needed to meet project and course objectives. This process also allowed the course director to vet potential sponsors. Projects were designed with the intent that the sponsor could easily implement the project following course conclusion. Sponsors were given extensive latitude to submit projects across a wide array of topic areas. Elements of the projects included, but were not limited to, practice management, practice/service development, patient safety, continuous quality improvement, operations, literature evaluation, and production of publishable work. Four to six students were assigned to each project based on their project preferences collected through a ranking survey. Project sponsors and students participated in a vision session during class to establish a relationship as a team, discuss the goals of the project, and create a plan for producing the desired results. Students completed their projects longitudinally over the course of the semester. Through this activity, students had the opportunity to learn teamwork by engaging in brainstorming sessions, collecting and analyzing data, and preparing deliverables and presentations. Students were encouraged to use the RL concepts of manage self, foster teamwork, coach and develop, and accelerate change throughout the duration of the project. At the conclusion of the semester, teams presented their projects and deliverables via a formal presentation to project sponsors and the class. Presentations were assessed by the course director, project sponsors, and teaching assistant(s) via rubrics grading organization, content, visuals, speaking and presentation skills, conclusion, participation, and responses to live questions.

Two key areas were identified for the assessment of the team-based management projects: team effectiveness in completing the project and sponsor experience. Team effectiveness assesses how students function in teams as a result of didactic instruction and how well students utilize effective leadership and teaming strategies to accomplish a specific outcome. In other words, this assessment documented how well students were actually able to accomplish work in a team, which we believe reflects students’ understanding and application of the desired leadership and teaming skills.

Assessing for sponsor experience ensured the team produced a quality product that met the expectations of the sponsor. Additionally, areas for improvement and future opportunities for collaboration were identified. It was also important to assess the likelihood of projects being implemented to ensure the real-world applicability of the project and the potential for students to continue their efforts with sponsors after course completion.

To assess these key areas, students completed an anonymous, validated Team Performance Survey (TPS) and sponsors completed a project sponsor experience survey via Qualtrics12 upon completion of the semester. Student and sponsor surveys were collected for the Fall 2017, 2018, and 2019 semesters.

Team Effectiveness

The student survey included a previously validated instrument, the Team Performance Survey (TPS).13 The TPS aims to assess how effectively students work together in a team and, potentially, whether they were able to implement course concepts that led to improved behavior throughout the duration of the project. In the TPS, students indicated, on a five-point scale, how often their team members engaged in each of eighteen activities that characterize an effective team (as shaped by effective leadership). Table 1 lists all TPS elements. At the end of the TPS, students provided feedback on their project experiences to improve the student experience in subsequent years.

Sponsor Experience

The project sponsor survey was developed by the course director and collected the sponsors’ assessment of the quality of their students’ work, the likelihood that they would implement the results of the projects, likelihood of sponsoring a future project, and sponsorship benefit on a five-point scale. Free text responses allowed the sponsors to describe features necessary for the creation of successful projects and make suggestions for the future. Feedback was requested to improve sponsor experience in subsequent years in hopes of ensuring a sufficient number of sponsors willing to participate.

For both the team effectiveness and sponsor experience surveys collected in 2017, 2018, and 2019, the scaled data were summarized by the percentage of responses for each choice, and the text responses were examined and inductively coded to identify common themes.
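
As a minimal illustration of this kind of summary (not the authors' analysis code; the scale labels and responses below are hypothetical), the percentage of responses per choice for a single Likert-scale item can be computed as follows:

```python
# Minimal sketch: summarize one Likert-scale survey item as the percentage of
# responses for each scale point. Labels and example data are hypothetical.
from collections import Counter

LIKERT_LABELS = {1: "never", 2: "rarely", 3: "sometimes",
                 4: "almost every time", 5: "every time"}

def summarize_item(responses):
    """Return {label: percentage of responses} for one survey item."""
    counts = Counter(responses)
    total = len(responses)
    return {label: round(100 * counts.get(point, 0) / total, 1)
            for point, label in LIKERT_LABELS.items()}

# Hypothetical responses from one cohort to one survey item
example_responses = [5, 5, 4, 5, 4, 3, 5, 4, 5, 5]
print(summarize_item(example_responses))
# {'never': 0.0, 'rarely': 0.0, 'sometimes': 10.0,
#  'almost every time': 30.0, 'every time': 60.0}
```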

Findings and Discussion

Team Effectiveness

Since implementation in Pharmacy Leadership and Management in 2017, 153 students have participated in team-based management projects. One hundred percent of students completed the TPS. Results indicate that student teams were able to work together to create a final project and develop mutual respect with one another (Table 1). Average scores on the TPS (2017, 2018, and 2019) show that all indicators of team effectiveness had a high percentage of responses in the “almost every time” or “every time” categories. The activity with the lowest average ranking, “often members helped a fellow team member to be understood by paraphrasing what he or she was saying,” was still ranked above 85%. Overall, the TPS responses indicate students within the Leadership and Management course can form and use relevant leadership and teaming skills to operate within highly effective teams. Free-text responses regarding the best parts of the community-sponsored, team-based projects identified that students found value in networking with sponsors and other pharmacists through their projects; working in and providing informal leadership on teams; learning about a new area of pharmacy and/or management skill; building community in the classroom; the autonomy of creating deliverables; interesting and relevant project topics; and working on practical, applicable, and impactful projects (Table 2). Points for improvement included starting projects earlier in the semester, having more time in class to work on projects as a team, having more explicit guidance on project deliverables, and encouraging more frequent contact with project sponsors during the semester (Table 3). From 2017 to 2019, the areas for improvement stayed relatively consistent, but the time allotted to work on projects in class and the timing of introducing projects in the semester improved.

We interpret the results of the TPS to conclude that students were able to use the leadership and teaming skills introduced in the course to work together to accomplish the final project and function as effective teams, with all categories of the survey rated above 85%.

Sponsor Experience

Across 2017, 2018, and 2019, there were a total of 28 projects with 16 pharmacists serving as sponsors. Four pharmacists sponsored two projects in the same year, five pharmacists served as project sponsors in two of the three years, and two pharmacists served as project sponsors in each of the three years. Sponsor surveys were completed for all 28 projects. One hundred percent of project sponsors reported that students were able to create deliverables that met expectations with no or minor revisions needed. Sixty-four percent of the project sponsors reported that they would be extremely likely to implement the student projects into their practice (30% of project sponsors reported “likely” and 6% reported “neutral” or “somewhat unlikely”). Benefits of participation to the sponsors included the opportunity to interact with students, engage with the College of Pharmacy, and network with future colleagues while accomplishing something meaningful that would benefit the project sponsor’s work organization (Figure 1). All project sponsors stated they would be willing to sponsor future projects. Free-text responses identified that providing students with feedback, setting clear expectations, and communication were essential elements for creating a successful project (Table 4). Communication about expectations and deadlines between students and mentors was a challenge that many project sponsors faced (Table 5). Points for improvement included setting expectations early with students, creating deadlines, and scheduling frequent check-ins.

One key factor considered in the design of the course and the management project was the project sponsor’s experience and whether the projects would actually be implemented. Although one project sponsor indicated that “it would be unlikely for them to implement the project into their practice,” the majority indicated that project implementation was likely. Additionally, all of the sponsors answered that they would be willing to help with a future project, signaling an overall high level of satisfaction with the experience. Five pharmacists sponsored projects in two of the three years and two pharmacists sponsored projects in all three years, indicating that community partners built a strong relationship with UUCOP and found value in working with the student teams. The relationships built with UUCOP and students may motivate project sponsors to continue to develop high-quality, innovative, and authentic projects.

Implications

The team-based management project continues to be a core element of the Leadership and Management for Pharmacists course. Since their inception, the projects have evolved to explore advanced pharmacy practices and produce more innovative and impactful deliverables. Diversifying the projects to meet the interests of students has been a strong focus of improvement. Given the growing demands on student time, more in-class time has been provided for project work, which in turn allows greater use of leadership and teaming skills. Frequent contact between sponsors and students has been emphasized.

The assessment of team-based leadership projects in the Leadership and Management course identified several “wins” that did not occur in the UUCOP curriculum previously and have not been identified by previous literature. The course creates new and authentic connections between education and practice by engaging students on relevant projects that benefit all involved. Through the projects, students are able to connect with mentors and potential employers while gaining experience using their leadership and teaming skills and becoming more comfortable in their understanding of leadership roles needed to move pharmacy forward. Project sponsors gained closer connections with students who may become their employees and the ability to implement projects that benefit their organizations. By utilizing their community connections and focusing on addressing sponsor needs, other institutions could adopt this model of using team-based projects to provide real-world opportunities for students to learn firsthand how important leadership and teaming skills can be.

A potential barrier to adopting this model of community-sponsored, team-based projects is the inability to find sponsors to offer and facilitate projects that can be completed within a semester, are relevant to advancing current pharmacy practice, have real-life application, and are intended to be implemented at the respective practice sites. Beyond implementation in a leadership or management course, a similar model could be applied to interprofessional education or therapeutics courses where the projects are clinical in nature. This process could also be utilized for medication safety, quality improvement, pharmacy and therapeutics committees, or other administrative functions occurring in health systems. In all cases, institutions would be free to adapt the parameters of these value-added learning experiences to local conditions, resources, and interests.

The community-sponsored, team-based management projects provided students the opportunity to develop their individual and team leadership skills while creating a beneficial project for the community sponsors and participating organizations. Evaluations from both students and sponsors suggest that community-sponsored, team-based management projects will serve as an effective tool in preparing students to lead change upon entry into the profession and positively impact pharmacy organizations.

Disclosures

Conflicts of Interest: The authors have no pertinent conflicts of interest with respect to the research, authorship, or publication of this article.

Financial Disclosure: There are no financial conflicts of interest to disclose.

References

  1. White SJ. Will there be a pharmacy leadership crisis? An ASHP Foundation Scholar-in-residence report. Am J Health Syst Pharm. 2005;62(8):845-855. doi:10.1093/ajhp/62.8.845
  2. White SJ, Enright SM. Is there still a pharmacy leadership crisis? A seven-year follow-up assessment. American Journal of Health-System Pharmacy. 2013;70(5):443-447. doi:10.2146/ajhp120258
  3. Feller TT, Doucette WR, Witry MJ. Assessing Opportunities for Student Pharmacist Leadership Development at Schools of Pharmacy in the United States. Am J Pharm Educ. 2016;80(5):79. doi:10.5688/ajpe80579.
  4. Cooper J. The Relational Leadership™ Model – Primary Care Progress. https://www.primarycareprogress.org/relational-leadership/. Accessed 10 July 2020.
  5. Sorensen TD, Traynor AP, Janke KK. A pharmacy course on leadership and leading change. Am J Pharm Educ. 2009;73(2):23. doi:10.5688/aj730223
  6. Withey MB, Breault A. A Home Healthcare and School of Pharmacy Partnership to Reduce Falls. Home Healthc Nurse. 2013;31(6):295-302. doi:10.1097/NHH.0b013e318294787c.
  7. Shiyanbola OO, Lammers C, Randall B, Richards A. Evaluation of a student-led interprofessional innovative health promotion model for an underserved population with diabetes: A pilot project. J Interprof Care. 2012;26(5):376-382. doi:10.3109/13561820.2012.685117.
  8. Cavanaugh TM, Buring S, Cluxton R. A Pharmacoeconomics and Formulary Management Collaborative Project to Teach Decision Analysis Principles. Am J Pharm Educ. 2012;76(6):115. doi:10.5688/ajpe766115.
  9. Shahiwala A. Entrepreneurship skills development through project-based activity in Bachelor of Pharmacy program. Curr Pharm Teach Learn. 2017;9(4):698-706. doi:10.1016/J.CPTL.2017.03.017.
  10. Rollins BL, Gunturi R, Sullivan D. A Pharmacy Business Management Simulation Exercise as a Practical Application of Business Management Material and Principles. Am J Pharm Educ. 2014;78(3):62. doi:10.5688/ajpe78362.
  11. Ibrahim MIM, Wertheimer AI, Myers MJ, McGhan WF, Knowlton CH. Leadership Styles and Effectiveness: Pharmacists in Associations vs. Pharmacists in Community Settings. Journal of Pharmaceutical Marketing & Management. 1997;12(1):23-32. doi:10.3109/J058v12n01_02.
  12. Thompson BM, Levine RE, Kennedy F, et al. Evaluating the Quality of Learning-Team Processes in Medical Education: Development and Validation of a New Measure. Acad Med. 2009;84(Supplement):S124-S127. doi:10.1097/ACM.0b013e3181b38b7a.
  13. Qualtrics XM [computer software]. Provo, UT: Qualtrics; 2005.
  14. Reed BN, Klutts AM, Mattingly TJ 2nd. A Systematic Review of Leadership Definitions, Competencies, and Assessment Methods in Pharmacy Education. Am J Pharm Educ. 2019;83(9):7520. doi:10.5688/ajpe7520.
  15. Sullivan GM. A primer on the validity of assessment instruments [published correction appears in J Grad Med Educ. 2011 Sep;3(3):446]. J Grad Med Educ. 2011;3(2):119-120. doi:10.4300/JGME-D-11-00075.1

Appendix 1: Sponsor Project Forms

On-site pediatric and neonatal point-of-care ultrasound (POCUS) course led by multi-disciplinary local experts may promote sustainable clinical POCUS integration.

Disclosure

BC was a consultant for GE and received a research grant from Chiesi USA. BC did not receive any financial support specifically for this project. The other authors disclosed no conflicts of interest. The research and REDCap database reported in this publication were supported (in part) by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR002538. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Abstract

Objective: To investigate the impact of an on-site pediatric and neonatal point-of-care ultrasound (POCUS) course on the long-term implementation of POCUS.

Methods: We hosted two pediatric and neonatal critical care POCUS courses in 2018 and 2019 using the Society of Critical Care Medicine curriculum (Critical Care Ultrasound: Pediatric and Neonatal), with local experts and infrastructure. We administered evaluation surveys based on a 5-point Likert scale before and after the course to assess the participants’ reactions, learning, and clinical behaviors. The final analysis incorporated Kirkpatrick’s evaluation model and descriptive statistics to compare confidence rankings and scanning behavior.

Results: A total of 32 on-site clinicians from neonatal and pediatric critical care units attended the courses, with a survey response rate >72%. Respondents’ median satisfaction score was 4.0 (IQR 4.0-5.0). The median confidence ranking in their POCUS skills increased from 1.0 (IQR 1.0-2.0) pre-course to 3.0 (IQR 2.8-4.0) at 12 months after the course (p<0.0001). The proportion of respondents who reported performing >4 scans in the prior month increased (12.5% vs. 30.4%, p=0.17). We found a decrease in institutional barriers, especially concerns over interdisciplinary conflicts.

Conclusions: An on-site pediatric and neonatal POCUS course utilizing local infrastructure and a reputable POCUS curriculum effectively promoted POCUS implementation and addressed institutional barriers. Instead of having learners seek off-site or online training, structuring an on-site course with multi-disciplinary local faculty may be a feasible approach for children’s hospitals that lack a robust POCUS program.


Keywords

Point-of-care ultrasound, Kirkpatrick’s principle, adult learning, pediatric critical care, neonatal critical care

Introduction

Neonatal and pediatric critical care point-of-care ultrasound (POCUS) training is in high demand. Recent national U.S. surveys showed that 83-90% of respondents thought that POCUS training should be a part of critical care fellowship education in Pediatrics1-3. However, only 67-90% of Pediatric Intensive Care and 38% of Neonatal Medicine fellowship programs provide POCUS training1-3. Pediatric emergency medicine is the only pediatric subspecialty with established, professionally endorsed POCUS guidelines4. None of the other pediatric subspecialties has a structured, curriculum-based approach to POCUS training1-3.

However, evidence-based clinical implementation of POCUS is sparse. Furthermore, structured training programs for pediatric practicing clinicians (including post-graduate physicians, nurse practitioners, and physician assistants) are rare2. There is scant research that evaluates the best method to train practicing clinicians without prior POCUS experience and limited follow-up data on the impact of training courses on clinical implementation5.

The lack of a mature pediatric or neonatal critical care ultrasound program and of skilled POCUS faculty remains a significant barrier to POCUS training at many institutions6. As a result, practicing clinicians are often encouraged to attend online or off-site courses at their discretion. After completing an off-site POCUS course, many clinicians report that integrating POCUS into their daily practice is challenging7, 8. Integration of POCUS into clinical practice varies widely across Pediatric Intensive Care Units (PICU), and only one-third of Neonatal Intensive Care Unit (NICU) clinicians use POCUS1, 2.

To address the high demand for pediatric and neonatal critical care POCUS training by fellows and practicing clinicians in our institution, we implemented a nationally recognized and reputable POCUS course and curriculum. We hypothesized that an on-site POCUS training course that utilizes existing institutional infrastructure would enhance POCUS practice adoption by lessening implementation barriers.

Methods

Course structure

We hosted two annual 2-day pediatric and neonatal critical care POCUS courses (in June 2018 and September 2019) for fellows and practicing clinicians in a free-standing, university-affiliated children’s hospital and performed a 12-month prospective observational cohort study following course completion. The course curriculum was adapted from the 2-day “Critical Care Ultrasound: Pediatric and Neonatal” course developed by the Society of Critical Care Medicine (SCCM; Mount Prospect, Illinois, USA). The course consisted of 12 hours of didactic lectures and 8 hours of hands-on training. The hands-on training was performed on pediatric volunteers, phantoms, and simulators (SonoSim Ultrasound Training Solution, Santa Monica, CA). An adequate number of faculty was recruited to ensure a 1:4 faculty-to-student ratio. One faculty member each year was a POCUS expert from the SCCM faculty. Local POCUS faculty experts from the PICU, NICU, Pediatric Emergency Medicine (PEM), and Radiology departments taught the course. This multidisciplinary approach was intended to enhance skill generalizability across specialties. The local experts either had prior extensive POCUS training or fellowship, or were credentialed in echocardiography or sonography. Our institution supported the POCUS course financially and administratively. It was offered to fellows, attending physicians, nurse practitioners, physician assistants, nurses, and respiratory therapists from the PICU and NICU.

We intentionally designed the local course with POCUS faculty who could serve as champions within their individual units and departments to provide on-going support to participants after course completion. We also utilized the same ultrasound machines during the course that participants would continue to use in their own clinical practice. 

Survey Development and Distribution

To evaluate our course effectiveness, we designed and distributed a pre-course and post-course survey. The post-course survey was given immediately after the course (post) in paper form, and then 3- (3mo), 6- (6mo), and 12-month (12mo) follow-up surveys were given electronically (Supplementary Material 1). The surveys included multiple-choice, fill-in-the-blank, and Likert-based questions similar to other published POCUS training surveys9-11. The surveys collected information on the following: participant background, clinical practice setting, and POCUS leadership or infrastructure in their respective practices. Additionally, we asked several questions regarding frequency of scanning, confidence in interpretation, and barriers to integrating POCUS. The post-course survey addressed participants’ perceptions of the course and satisfaction scores for the various instructors, as well as providing an opportunity for participants to give feedback and recommendations for future courses. We captured similar longitudinal data on these questions in the 3-, 6-, and 12-month follow-up surveys. Our data analysis focused on comparing results between the pre-course and 12-month follow-up surveys.

Three POCUS experts (MSt, OK, BC) created questions for the surveys. Three other investigators (EH, SG, MSk) reviewed the questions and ranked them for clarity and completeness. After three iterations, the panel met again and reviewed each question for intention and brevity.

After the 2018 course, participant feedback prompted additional survey refinement for the 2019 course participants (Supplementary Material 2). Questions were either shortened or rearranged in numerical order to improve response rate and clarity. The investigator team reviewed the revised survey to ensure question fidelity and integrity. The concepts between the two survey versions were the same, even though the wording varied. For example, to assess the participant’s confidence in overall integrated POCUS skills, the 2018 survey asked “I am confident in my ability to acquire images and interpret them with POCUS putting it all together” (1=“strongly disagree” to 5=“strongly agree”), whereas the 2019 survey asked “Ability to acquire and interpret images to clinically integrate into a diagnosis?” (1=“not confident at all” to 5=“very confident”). Data variation between the two survey versions was tracked to ensure internal validity.

Survey Distribution

The surveys were administered in person prior to the course (pre) and immediately at the end of the course (post). Follow-up surveys were sent via email to participants 3, 6, and 12 months following course completion. The participants had one month to complete each survey, with up to 4 email reminders.

Outcomes Measures

Our primary outcome was to evaluate the on-site course’s effectiveness by comparing the pre-course and the 12-month follow-up survey results, based on the four-level Kirkpatrick’s evaluation framework12.

Level 1: assess the participants’ “reaction” based on their satisfaction with the course content, faculty teaching, and overall experience.

Level 2: assess the “learning” based on self-reported confidence of POCUS knowledge and skills.

Level 3: assess the education effect on the change of “behavior” based on the self-reported number of scans performed in clinical practice.

Level 4: assess the “results” based on the perceived institutional barrier resolution.

Data Analysis

Study data were collected and managed using REDCap electronic data capture tools hosted at the University of Utah13. Descriptive statistics, Student’s t-test, the Mann-Whitney U test, and the Wilcoxon signed-rank test were used as appropriate. Data analysis was performed using GraphPad Prism version 9.0.2 for Mac (GraphPad Software, San Diego, California, USA).
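
For illustration only: the analysis above was performed in GraphPad Prism, but an equivalent paired, non-parametric comparison of pre-course and 12-month confidence scores could be run in Python with SciPy's Wilcoxon signed-rank test. The scores below are hypothetical, not study data.

```python
# Illustrative sketch of a paired, non-parametric comparison of Likert-scale
# confidence scores (pre-course vs. 12-month follow-up). Hypothetical data.
import numpy as np
from scipy.stats import wilcoxon

pre_course = np.array([1, 1, 2, 1, 2, 1, 1, 2, 1, 2])  # 5-point scale
month_12   = np.array([3, 3, 4, 2, 3, 3, 4, 3, 3, 4])

stat, p_value = wilcoxon(pre_course, month_12)
print(f"median pre = {np.median(pre_course):.1f}, "
      f"median 12mo = {np.median(month_12):.1f}, p = {p_value:.4f}")
```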

Institutional Review Board

After reviewing the study application, the University of Utah Institutional Review Board (IRB) exempted this study from full review and consent (IRB_00112848).

Results

Response rate

The two-year combined survey response rates decreased over time after the course was taken. The response rates were 100% (pre), 100% (post), 94% (3mo), 94% (6mo), and 72% (12mo), respectively.

Participant demographics

A total of 42 participants attended the two courses. Our analysis focused on the 32 on-site attendees from the NICU (50%) and PICU (50%). We excluded 10 participants because they worked at satellite community hospitals, which lacked the same POCUS champions and infrastructure. Table 1 describes the clinical roles and years of practice for the 32 included participants. The majority were physicians (84%) who had completed a pediatric residency. Thirty-one percent of participants had more than 10 years of clinical experience, and 88% reported having prior POCUS experience and training through national conferences, online courses, medical school, or residency programs.

Chan Table 1 - Demographics of Participants

Kirkpatrick’s level 1 “reaction”

The course received favorable median satisfaction scores of 4 (IQR 4-5) on a 5-point Likert scale (1=disagree, 5=agree) for course content, objectives, and clinical relevance. The median course content satisfaction rating from the 2018 participants (n=12) was 4 (IQR 4-5) for didactic lectures, hands-on modules, and instructors. The median course content satisfaction score from the 2019 participants (n=20) was 5 (IQR 4-5), which was higher than the previous year (p=0.047). Participants from both years felt that the course met their learning objectives (median score 4, IQR 4-5) and was relevant to their field of practice (median score 4, IQR 4-5).

Kirkpatrick’s level 2 “learning”

The respondents reported increased confidence in POCUS image acquisition and interpretation over time (Figure 1). For the question regarding their overall integrated POCUS skill, respondents’ confidence increased from a median score of 1 (IQR 1-2) pre-course to 3 (IQR 3-4) at 12 months (p<0.0001) on a 5-point Likert scale for both years combined. Looking at the two years separately, the 2018 participants’ median confidence score increased from 0.5 (IQR 0-2) to 2.5 (IQR 2-3), p=0.0017; the 2019 participants’ median score increased from 2 (IQR 1-2) to 4 (IQR 1-4), p<0.0001. The scoring trend was parallel between the two years, even though the 2019 survey was modified (Supplementary Material 3).

Chan Figure 1 - Median confidence scores in overall POCUS skills

In the pre-course survey, 73% of respondents felt that their lack of confidence in obtaining and interpreting images was a top POCUS implementation barrier. At the 12-month follow-up survey, only 41% of respondents considered their personal confidence in POCUS skills a barrier.

Kirkpatrick’s level 3 “behavior”

After attending the on-site course, respondents reported an increase in the number of scans performed (Figure 2). The proportion of respondents who reported that they had performed more than 4 scans in the past month increased from 12.5% pre-course to 30.4% at 12-month follow-up (p = 0.17).

Chan Figure 2 - Proportion of respondents reported to have performed >4 scans in the past month

Of the 28 participants who had prior POCUS experience and training, 21% (n=8)  reported in the pre-course survey that they had not performed any scans in the prior 6 months. At the 12-month follow-up survey, only 1 of 22 (4.5%) respondents reported not performing any scan in the prior 6 months.

Kirkpatrick’s level 4 “results”

The survey asked participants about barriers to POCUS implementation into clinical practice. Aside from their personal POCUS skills, the top 3 institutional barriers identified in 2018 were:

  • lack of experienced POCUS faculty (33%),
  • lack of quality assurance program to verify image acquisition and interpretation (25%),
  • concerns of interdisciplinary conflicts (25%).

None of the 2019 course respondents reported concerns of interdisciplinary conflicts in their 12-month follow-up survey. A proportion of them still perceived the lack of a formal method to confirm image interpretation (40%), quality assurance program to review saved images (33%), and experienced POCUS faculty for hands-on training (20%) as top institutional barriers. Some participants (33%) also felt there was not enough time during their clinical day to perform POCUS.

Discussion

We demonstrate that an on-site pediatric and neonatal POCUS course was effective based on Kirkpatrick’s four principles of reaction, learning, behavior, and results (Figure 3). To our knowledge, we are the first to describe how importing an off-site reputable course to an on-site pediatric and neonatal POCUS model could change POCUS clinical practice behavior. Participants ranked the course favorably and reported increased confidence in their POCUS skills. Although not statistically significant, participants seemed to incorporate POCUS more frequently into their clinical practice after the course, and this practice pattern was sustained. Most importantly, perceived institutional barriers to POCUS were reduced. This on-site pediatric and neonatal POCUS model utilizing nationally recognized ultrasound content while incorporating local expertise and strengthening infrastructure is an efficient way to expand POCUS clinical practice.

Chan Figure 3 - Study findings based on Kirkpatrick's Evaluation Model

Due to limited POCUS expertise, pediatric and neonatal critical care clinicians have relied on online or off-site training courses. Even with many available online or off-site courses, adequate translation of POCUS knowledge into practice remains difficult. Firstly, gaining proficiency in POCUS requires complex training in image acquisition, interpretation, and clinical integration. Competency is best achieved with hands-on training, frequent practice, and integration into clinical practice. Although online or off-site training courses can enhance POCUS knowledge and promote confidence among attendees, meeting Kirkpatrick’s levels 1 and 2, they often do not result in the behavior change essential to meet Kirkpatrick’s level 3. Online and off-site courses are unable to provide adequate post-course hands-on training and timely feedback. This is evident in a study by Patrawalla et al., who showed that a 3-day regional POCUS course was an effective educational model14. Still, it did not report detailed data on subsequent clinical practice use14. Secondly, institutional infrastructure is essential for clinical integration. A national survey reported the top 5 institutional barriers to POCUS clinical integration, including lack of equipment/funds, lack of personnel to train physicians, lack of time to learn, liability concerns, and cardiology or radiology resistance2. Successful clinical integration of POCUS requires both attaining expert knowledge and skill and overcoming local barriers. Historically, online or off-site courses have been unable to navigate the local practice environment, which requires more than distant expertise.

Integration of newly acquired skills and knowledge into clinical practice is challenging. Adhering to the principles of adult learning may help to enact positive behavior change15. As evidenced by our pre-course survey, some of our course participants did not utilize their previous POCUS skills in their clinical practice despite prior POCUS training and experience. After attending our on-site course, participants reported a trend of increasing POCUS usage. Collins described education techniques for lifelong learning that we utilized in this course15. Firstly, the adult learners valued the relevance and practicality of this course15. Our course used the same ultrasound machines that the participants would use in their units, thereby reinforcing the skills learned. Secondly, our participants attended with their own colleagues, which fostered the informal and personal environment in which adults learn best15. Another consideration is that adults learn best by doing15. We found our respondents had an increased scanning frequency. The survey only assessed whether more than four POCUS scans were performed per month, but the small increase was heading in the right direction toward lifelong behavior change. We suspect that more scanning now will translate to more scanning in the future. The scanning behavior will be further fostered by the on-site faculty, who provide practice reinforcement and ongoing feedback after course completion.

Off-site or online courses historically have been unable to address the institutional barriers to integrating POCUS into daily practice. Prior to our on-site course, we identified barriers in the 2018 pre-course survey similar to those in other critical care programs1, 2, 16. Barriers included interdisciplinary conflicts, lack of local POCUS faculty, and lack of quality assurance programs. None of these barriers is solved by attending off-site courses. A few local POCUS experts first organized the course to fill the POCUS educational gap. As a result of the course, the institution recognized the need to strengthen the local infrastructure. Subsequently, a multi-disciplinary POCUS consortium was formed, including leaders from the PICU, NICU, PEM, cardiology, radiology, and hospital administration. Additionally, PICU, NICU, and PEM champions became the ultrasound medical directors for their divisions, providing on-going education, leadership, and quality assurance. Our on-site pediatric and neonatal POCUS course model has fostered inter-departmental collaboration, thereby promoting transparency in POCUS practice and communication and eliminating concerns over multi-disciplinary conflict. By the time the 12-month follow-up survey was sent to the 2019 course participants, the POCUS consortium had been established for 28 months, and the respondents reported no interdisciplinary conflict concerns. We suggest this internal on-site infrastructure is essential to effect change at Kirkpatrick level 3 (behavior) and level 4 (results). Strengthening the POCUS infrastructure can help maintain an individual’s POCUS proficiency, expand education programs, develop quality assurance processes, develop re-credentialing standards, and sustain POCUS integration.

Our model of importing a structured POCUS training curriculum is feasible and generalizable to hospitals with similar on-site champions. This pediatric and neonatal on-site POCUS model could be adopted by any institution. The prepared curriculum is nationally recognized and saves faculty time in creating suitable education material. Pre-existing online education modules are important and helpful but are limited by the lack of hands-on training on live human subjects. An on-site course can recruit local volunteers as scanning subjects. Supporting all clinicians within an institution to attend off-site courses is costly; this on-site pediatric and neonatal POCUS model is relatively more cost-effective. Clinicians can minimize travel time, reduce work schedule disruption, and maintain work-life balance, encouraging more participation. We recognize that many skilled clinician sonographers have developed excellent educational materials, and although this paper used a specific POCUS course, many reputable courses could be used.

The limitations of our study include the small sample size and selection bias. As attendance was voluntary, motivated clinicians were more likely to incorporate POCUS into clinical practice and respond to the survey. The participants’ self-reported scanning patterns may introduce recall bias. As we did not have formal knowledge and technical skills assessments pre-course and during the follow-up periods, participants may have overestimated or underestimated their knowledge17. We felt that behavioral changes in adult learners were more important than knowledge assessments in adopting new skills. Post-course skill assessment is ongoing via the quality assurance process and expert faculty feedback.

Conclusion

In conclusion, our on-site pediatric and neonatal POCUS course transferred knowledge, positively changed clinicians’ behavior, and broke down perceived barriers to POCUS integration at our institution. This model is exportable to other hospitals and clinical environments. Pediatric and neonatal critical care POCUS programs should consider their distinctive education challenges and specific institutional barriers when designing their own educational programs. Further studies are needed to evaluate the long-term impact of this training model on patient outcomes.

Acknowledgement

We thank the following faculty for POCUS teaching and instruction during the two courses: R. Mart, R. Day, S. Ryan, E. Contreras, and J. Kim. We thank T. Harbor, research assistant from the Division of Pediatric Emergency Medicine, for helping with IRB maintenance and with the creation, maintenance, and distribution of the REDCap survey. Additionally, we thank Drs. M. Johnson and R. Wilson for editing the manuscript. We thank the Society of Critical Care Medicine for allowing the use of their education materials, Critical Care Ultrasound: Pediatric and Neonatal.

References

1.         Conlon TW, Kantor DB, Su ER, et al. Diagnostic Bedside Ultrasound Program Development in Pediatric Critical Care Medicine: Results of a National Survey. Pediatr Crit Care Med. Nov 2018;19(11):e561-e568. doi:10.1097/PCC.0000000000001692

2.         Nguyen J, Amirnovin R, Ramanathan R, Noori S. The state of point-of-care ultrasonography use and training in neonatal-perinatal medicine and pediatric critical care medicine fellowship programs. J Perinatol. Nov 2016;36(11):972-976. doi:10.1038/jp.2016.126

3.         Mosier JM, Malo J, Stolz LA, et al. Critical care ultrasound training: a survey of US fellowship directors. J Crit Care. Aug 2014;29(4):645-9. doi:10.1016/j.jcrc.2014.03.006

4.         Marin JR, Abo AM, Arroyo AC, et al. Pediatric emergency medicine point-of-care ultrasound: summary of the evidence. Crit Ultrasound J. Dec 2016;8(1):16. doi:10.1186/s13089-016-0049-5

5.         Matyal R, Mitchell JD, Mahmood F, et al. Faculty-Focused Perioperative Ultrasound Training Program: A Single-Center Experience. J Cardiothorac Vasc Anesth. Apr 2019;33(4):1037-1043. doi:10.1053/j.jvca.2018.12.003

6.         Ahn JS, French AJ, Thiessen ME, Kendall JL. Training peer instructors for a combined ultrasound/physical exam curriculum. Teach Learn Med. 2014;26(3):292-5. doi:10.1080/10401334.2014.910464

7.         Olgers TJ, Azizi N, Bouma HR, Ter Maaten JC. Life after a point-of-care ultrasound course: setting up the right conditions! Ultrasound J. Sep 7 2020;12(1):43. doi:10.1186/s13089-020-00190-7

8.         Rajamani A, Miu M, Huang S, et al. Impact of Critical Care Point-of-Care Ultrasound Short-Courses on Trainee Competence. Crit Care Med. Sep 2019;47(9):e782-e784. doi:10.1097/CCM.0000000000003867

9.         Webb EM, Cotton JB, Kane K, Straus CM, Topp KS, Naeger DM. Teaching point of care ultrasound skills in medical school: keeping radiology in the driver’s seat. Acad Radiol. Jul 2014;21(7):893-901. doi:10.1016/j.acra.2014.03.001

10.       Stolz LA, Amini R, Situ-LaCasse E, et al. Multimodular Ultrasound Orientation: Residents’ Confidence and Skill in Performing Point-of-care Ultrasound. Cureus. Nov 15 2018;10(11):e3597. doi:10.7759/cureus.3597

11.       Jones TL, Baxter MA, Khanduja V. A quick guide to survey research. Ann R Coll Surg Engl. Jan 2013;95(1):5-7. doi:10.1308/003588413X13511609956372

12.       Kirkpatrick DL. Effective supervisory training and development, Part 2: In-house approaches and techniques. Personnel. Jan 1985;62(1):52-6.

13.       Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: Building an international community of software platform partners. J Biomed Inform. Jul 2019;95:103208. doi:10.1016/j.jbi.2019.103208

14.       Patrawalla P, Narasimhan M, Eisen L, Shiloh AL, Koenig S, Mayo P. A Regional, Cost-Effective, Collaborative Model for Critical Care Fellows’ Ultrasonography Education. J Intensive Care Med. Dec 2020;35(12):1447-1452. doi:10.1177/0885066619828951

15.       Collins J. Education techniques for lifelong learning: principles of adult learning. Radiographics. Sep-Oct 2004;24(5):1483-9. doi:10.1148/rg.245045020

16.       Ben Fadel N, Pulgar L, Khurshid F. Point of care ultrasound (POCUS) in Canadian neonatal intensive care units (NICUs): where are we? J Ultrasound. Jun 2019;22(2):201-206. doi:10.1007/s40477-019-00383-4

17.       Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. Dec 1999;77(6):1121-34. doi:10.1037//0022-3514.77.6.1121

Two sides of the same coin: Elements that can make or break clinical learning encounters

Published in Global Surgical Education – Journal of the Association for Surgical Education: https://link.springer.com/article/10.1007/s44186-022-00006-3

Abstract

Phenomenon: This project explored how faculty, residents, and students at an academic medical center have experienced meaningful learning moments, what contributed to such moments within the clinical learning environment, and how these moments map onto a previously developed conceptual model of the learning environment. Approach: During AY 2018-19, the authors interviewed faculty (n=8) and residents (n=5) from the Surgery and OBGYN departments at the University of Utah School of Medicine. The authors also conducted interviews (n=4) and focus groups (n=2) with 20 third- and fourth-year students. Authors used an appreciative inquiry approach to conduct interviews and focus groups, which were audio-recorded and transcribed verbatim. Transcriptions were coded using manifest content analysis. Findings: Authors found that three factors determined whether learning encounters were successful or challenging: learner-centeredness, shared understanding, and learner attributes. Situations that were characterized by learner-centeredness and shared understanding led to successful learning, while encounters characterized by a lack of learner-centeredness and shared understanding led to challenges in the clinical learning environment. Likewise, some learner attributes facilitated successful learning moments while other attributes created challenges. These three factors map well onto three of the four elements of the previously developed conceptual model. Insights: The clinical learning environment is characterized by both successful and challenging moments. Paying attention to the factors which promote successful learning may be key to fostering a positive learning environment.

Analyzing the cost of medical education as a component to understanding education value

Problem

What is the cost of medical education? In 2016, the average yearly tuition for students was $36,755 for public US medical schools and $60,474 for private US medical schools,1 and the average indebtedness for all medical graduates was $189,165.3 But tuition is only part of the picture. The total annual financial cost of medical student education is currently estimated to be between $90,000 and $118,000 per student, or between $360,000 and $472,000 per graduate.4 Are these costs justified?

To answer this question, we turned to recent developments in healthcare delivery known as ‘value-driven outcomes’. In his seminal paper, “What Is Value in Health Care?”, Michael Porter addresses the relationship between cost and quality of outcomes by defining value in health care as desired patient outcomes divided by the cost to achieve those outcomes.5 This framework is now well established as a way to consider the relationship between cost and quality. In 2012, the University of Utah Health Care (UUHC) developed the value-driven outcomes (VDO) model and tested its application in numerous settings. The key strategy of VDO was to develop a tool that “allows clinicians and managers to analyze actual system costs and outcomes at the level of individual encounters and by department, physician, diagnosis, and procedure.”6 If, for example, the data show that different surgeons incur different costs in performing a standard procedure, then meaningful steps can be taken to understand the source of the variability and reduce costs.
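
Restating Porter's definition from the sentence above as a ratio (no new quantities are introduced):

\[
\text{Value} = \frac{\text{patient outcomes achieved}}{\text{cost incurred to achieve those outcomes}}
\]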

What if the thinking behind the UUHC VDO tool, which aimed to better understand costs in relation to quality of clinical care, could be adapted to better understand the cost of medical education in relation to the quality of that education, and consequently promote a process of better aligning costs with quality?

Approach

To explore this question, we decided to undertake the challenge of translating the clinically focused VDO principles to medical education. In Phase One of the work, the focus was to understand the cost of medical education at our own institution. In Phase Two, the focus was to understand the outcomes (i.e., quality) desired by stakeholders. In Phase Three, the work will be to integrate the cost and quality components to propose relevant measures of value for medical education. This report describes Phase One, relating to costs, and builds on previous reports of medical education cost in the literature.

The cost analysis targeted the medical student education program for the academic year 2015-2016 (Table 1). The major categories of cost were divided into two domains: Facility Costs and Professional Costs. These two domains were consistent with those of the VDO model. Within each of the two domains, major categories and detailed subcategories of cost (Table 1) were identified.

The project was reviewed by the University of Utah Institutional Review Board (IRB), deemed not to meet the definition of human subjects research and was therefore exempt from IRB oversight. This project was funded through support from an Accelerating Change in Medical Education Grant from the American Medical Association.

Setting

The University of Utah School of Medicine (UUSOM) is the only academic medical center (AMC) in Utah and is a state-funded AMC with four major affiliated teaching hospitals. During the 2015-2016 academic year, 371 unique faculty interfaced with students in large classroom, small group, and lab-based instruction over the 4-year program. Approximately 700 faculty were involved in clinical supervision of students in the clerkship-based years of the program. There were 415 students enrolled in the UUSOM during 2015-2016.

The integrated pre-clerkship curriculum included seven foundational science courses and longitudinal courses on clinical reasoning/skills and medical humanities (Figure 1). A large portion of the pre-clerkship curriculum was delivered in a $40 million education building constructed in 2005, which included an 18-room clinical skills center. The program utilized the University’s College of Nursing state-of-the-art, high-fidelity simulation center for selected aspects of the curriculum.

The third year of the program consisted of 7 required core clerkships (internal medicine, pediatrics, obstetrics/gynecology, surgery, neurology, psychiatry, family medicine; 4-8 weeks each). Every clerkship included an objective structured clinical exam (OSCE). A required, summative, end-of-year-three, 8-station OSCE was modeled after the USMLE Step 2 CS examination. Fourth-year students were required to complete two 4-week courses (critical care, core sub-internship) and 24 elective credits (minimum: 12 clinical). In 2015-2016, students were also required to complete a scholarly project and community service and to engage in five half- to full-day simulation-based interprofessional education courses with students from four health professions colleges.

Data Collection

Facility Costs: Facility costs fell into 6 broad categories: Staff, Building/Facilities/Services, Information Technology, Simulation, Materials, and Other (Table 1). There were 64 different major elements of facility cost identified, requiring contact with 18 individuals to complete data collection. All cost elements were determined. A single staff member in the UUSOM Dean’s Office undertook the compilation of facility cost data.

Professional costs: Professional costs were all faculty-related costs, categorized as: Administrative, Classroom teaching, Clinical teaching, and Mentoring/Advising (Table 1).

  • Classroom teaching. All classroom-based teaching time at UUSOM is cataloged in a central database housed in the UUSOM Dean’s Office of Finance. Teaching hours of all faculty who teach in the classroom setting, regardless of the number of students present, are captured and validated for accuracy at the end of every academic year at the department level. Costs associated with those hours were derived from median salary and benefits data for MD and PhD faculty who taught in the program in 2015-2016. The median salary plus benefits for MD and PhD faculty who taught in the curriculum was $316,483 and $138,886, respectively (Table 2). Total classroom teaching costs assumed variable degrees of preparation time based on the type of learning session (3 hours per 1 hour of large classroom instruction, 0.5 hours per hour of small group instruction, and 1 hour per hour of laboratory). In 2015-2016, 67% of instruction was delivered by MD faculty and 33% by PhD faculty. (A rough illustrative calculation using these assumptions appears after this list.)
  • To derive clinical teaching costs, assumptions about clinical teaching time were made based upon the medical education literature. The time faculty spent teaching individual students in the outpatient environment was estimated at 0.5-0.8 hours per half day of clinic.7, 8 The time faculty spent teaching individual students in the inpatient environment was estimated at 1.1 hours per full day of inpatient time.9 To calculate clinical teaching costs, the mean outpatient teaching time (0.65 hours per clinic half day) was used to derive costs for ambulatory experiences in our curriculum. Overall, clinical teaching time was calculated using the number of students in the clerkship years of the curriculum, assuming the number of outpatient days and inpatient days for every student was nearly constant according to the standard lengths of clerkships and required fourth-year courses (Table 2). Finally, professional costs for clinical teaching of individual students in electives were derived from minimum fourth-year elective requirements for graduation (24 weeks, minimum of 12 clinical weeks).
  • Administrative costs were calculated based on percent effort directed at the medical student program multiplied by faculty annual salary and benefits. Course director costs were based on expected time spent performing course planning and administration and varied based upon the length of the course.
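
The sketch below is a rough illustrative calculation, not the authors' costing model. It applies the stated preparation-time multipliers and the reported median salary-plus-benefits figures to hypothetical contact hours, and it converts annual salary to an hourly rate by assuming a 2,080-hour work year, an assumption not stated in the report.

```python
# Rough illustration of the classroom teaching cost logic described above.
# Salary figures and prep multipliers come from the text; the 2,080-hour work
# year and the example contact hours are assumptions for illustration only.
PREP_HOURS_PER_CONTACT_HOUR = {"large_classroom": 3.0,
                               "small_group": 0.5,
                               "laboratory": 1.0}
MEDIAN_SALARY_PLUS_BENEFITS = {"MD": 316_483, "PhD": 138_886}
ASSUMED_WORK_HOURS_PER_YEAR = 2_080  # assumption, not from the report

def classroom_teaching_cost(contact_hours, faculty_type):
    """Estimate classroom teaching cost for one faculty member.

    contact_hours: dict mapping session type -> hours of instruction delivered.
    """
    hourly_rate = (MEDIAN_SALARY_PLUS_BENEFITS[faculty_type]
                   / ASSUMED_WORK_HOURS_PER_YEAR)
    total_hours = sum(hours * (1 + PREP_HOURS_PER_CONTACT_HOUR[kind])
                      for kind, hours in contact_hours.items())
    return total_hours * hourly_rate

# Hypothetical workload: 40 large-classroom, 20 small-group, 10 lab hours
workload = {"large_classroom": 40, "small_group": 20, "laboratory": 10}
print(f"MD:  ${classroom_teaching_cost(workload, 'MD'):,.0f}")
print(f"PhD: ${classroom_teaching_cost(workload, 'PhD'):,.0f}")
```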

Outcomes

Overall Education Costs

In 2015-2016, the overall cost of the 4-year medical student program was $32.7 million, which amounted to ~$79,000 per student per year, much more than the annual tuition and fees of $36,094.
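
As a simple arithmetic check, the per-student figure follows from the reported total and the enrollment of 415 students noted above:

\[
\frac{\$32{,}700{,}000 \text{ per year}}{415 \text{ students}} \approx \$78{,}800 \text{ per student per year}
\]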

Facility and professional costs were nearly equal in magnitude ($16.3M vs. $16.4M, respectively). The three largest cost drivers in the analysis were clinical teaching ($10.0M), building costs ($6.6M), and staff ($4.6M).

The balance of costs for the pre-clinical curriculum (years 1-2) differed significantly from that of the clinical curriculum (years 3-4): professional costs related to faculty teaching were 8-fold lower in the pre-clinical curriculum than in the clinical curriculum ($1.24M vs. $9.88M, respectively). Conversely, professional costs related to faculty administration time were 3-fold greater in the pre-clinical years than in the clinical years ($2,660,079 vs. $882,164, respectively).

Value-Driven Outcomes Initiative

Conceptually, and most importantly, the study afforded us the opportunity to move beyond an estimation of cost to a consideration of how to optimize value (maximizing outcomes for the cost incurred), particularly related to professional costs. In 2018, we replaced our distributed model of education delivery, wherein over 500 faculty participated in education (many for only a lecture or two) with little direct association between such involvement and the distribution of funds to their departments, with a Core Educator Model, wherein approximately half that number of faculty each contribute a more substantive amount to education and receive direct financial support for those contributions. The aim of the Core Educator Model is to improve learning outcomes for students by consolidating the delivery of the program to a core group of expert educators who are both compensated and held accountable for their efforts.

Next Steps

The cost analysis at the UUSOM has prompted a redesign of the funds flow supporting medical student education and has shifted the focus toward the value of education investments. The years ahead will provide opportunities to investigate the impact of the Core Educator Model on learning outcomes, the ability to deliver a high-quality medical education program, and the professionalization of faculty as educators.

At the 2017 AAMC Annual Meeting, Dr. Marsha Rappley, Chair of the AAMC Board of Directors, directly emphasized that the cost of what we do in education is undermining our ability to improve the health of the nation.10 Understanding costs has not traditionally been considered to be in the purview of educators. This needs to change. As medical educators strive to deliver high-value education, a concern for and an active engagement with the costs of medical education must be a part of the equation.

References

  1. Tuition and Student Fees Report, 2012-2013 through 2018-2019, Association of American Medical Colleges; www.aamc.org/data/tuitionandstudentfees/
  2. Rohlfing J, Navarro R, Maniya OZ, Hughes BD, Rogalsky DK. Medical student debt and major life choices other than specialty. Med Educ Online. 2014;19(1). doi:10.3402/meo.v19.25603
  3. 2017 Education Debt Manager for Graduating Medical School Students, Association of American Medical Colleges, members.aamc.org/eweb/upload/Education%20Debt%20Manager%20for%20Graduating%20Medical%20School%20Students–2017.pdf
  4. Cooke M, Irby DM, O’Brien BC. Educating physicians: a call for reform of medical school and residency: John Wiley & Sons 2010.
  5. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477–81.
  6. Lee VS, Kawamoto K, Hess R, et al. Implementation of a Value-Driven Outcomes Program to Identify High Variability in Clinical Costs and Outcomes and Association With Reduced Cost and Improved Quality. JAMA.2016;316(10):1061–1072. doi:10.1001/jama.2016.12226
  7. Ricer RE, Van Horne A, Filak AT. Costs of preceptors’ time spent teaching during a third-year family medicine outpatient rotation. Acad Med. 1997;72(6):547-551.
  8. Abramovitch A, Newman W, Padaliya B, Gill C, Charles PD. The cost of medical education in an ambulatory neurology clinic. J Natl Med Assoc. 2005;97(9):1288-90.
  9. Weinberg E, O’Sullivan P, Boll AG, Nelson TR. The Cost of Third-Year Clerkships at Large Nonuniversity Teaching Hospitals. JAMA. 1994;272(9):669–673. doi:10.1001/jama.1994.03520090033015
  10. Rappley M. Leadership Plenary Address. Learn Serve Lead, AAMC Annual Meeting, Boston, MA. November 5, 2017.

Table 1

Table 1: Total Cost of Undergraduate Medical Education

Figure 1


Table 2

Table 2: Classroom and Clinical Teaching Costs

The Influence of Revising an Online Gerontology Program on the Student Experience

Posted 2021/04/08

Acknowledgements

We acknowledge the support of the University of Utah Teaching and Learning Technologies, the University of Utah College of Nursing, and the University of Utah Consortium for Families and Health Research.

Funding

Program revisions were funded through a University of Utah Teaching and Learning Technologies Online Program Development Grant.

Declaration of Interest

We have no conflicts of interest to declare.

Abstract

The recent adoption of gerontology competencies for undergraduate and graduate education reflects national standards developed to enhance and unify the field of gerontology. The Gerontology Interdisciplinary Program at the University of Utah revised all of its gerontology course offerings to align with the Association for Gerontology in Higher Education’s (AGHE) Gerontology Competencies for Undergraduate and Graduate Education (2014), while also making improvements in distance instructional design. In this study, we examined student course evaluation scores and written comments in six Master of Science in Gerontology core courses (at both the 5000 and 6000 levels) before and after alignment with the AGHE competencies and online design changes. Data included evaluations from the two semesters prior to and the two semesters following course revisions and were assessed using paired t-tests and thematic analysis. No statistically significant differences were found between pre- and post-revision evaluations. Qualitative comments post-revision did show an increased focus on interactive and engaging technology. These findings will be used for course and program quality improvement initiatives, including enhanced approaches to documenting and assessing competency-based education.

Keywords

Competency-based education, course evaluation, course revision, distance education

Background

Competency-based education (CBE) is growing in popularity and demand (Burnette, 2016; McClarty & Gaertner, 2015). Gerontology curriculum development has moved toward CBE, with national standards developed to enhance and unify the field of gerontology (Association for Gerontology in Higher Education [AGHE], 2014; Damron-Rodriguez et al., 2019). AGHE approved the Gerontology Competencies for Undergraduate and Graduate Education (AGHE, 2014), designed to serve as a curricular guide for undergraduate (i.e., majors, minors, certificates) and master’s degree level programs. Benefits of using competencies for curricular revisions include shifting the focus to measurable outcomes (Burnette, 2016; Damron-Rodriguez et al., 2019; Wendt, Peterson, & Douglass, 1993), increasing program accountability for learning outcomes (Burnette, 2016; Damron-Rodriguez et al., 2019; McClarty & Gaertner, 2015), preparing students to graduate with necessary skills (McClarty & Gaertner, 2015), and training the gerontological workforce by bridging the gaps between aging services and gerontology education (Applebaum & Leek, 2008; Damron-Rodriguez et al., 2019).

As CBE has grown, online teaching and learning have also become more accessible and in demand (Means, Toyama, Murphy, Bakia, & Jones, 2010; Woldeab, Yawson, & Osafo, 2020). For programs looking to enhance curriculum and program accessibility, considering both CBE and distance course design is vital. Quality design for courses incorporating CBE emphasizes opportunities for student application and practice, active learning strategies, and timely instructor response and feedback (Krause, Dias, & Schedler, 2015). In a previous paper (Dassel, Eaton, & Felsted, 2019) we described our approach to program-wide revisions intended to align with the AGHE competencies and to meet current recommendations in cyber-pedagogy. The University of Utah Gerontology Interdisciplinary Program (GIP) was positioned to make revisions that enhanced both CBE and online instructional design using a course/credit model, which embeds competencies within a traditional approach to higher education that awards credit hours toward a degree (Council of Regional Accrediting Commissions [C-RAC], 2015). The University’s Teaching and Learning Technologies (TLT) office released a funding opportunity for programs wanting to move completely online. The GIP applied for these funds with two purposes: 1) to transition the Master of Science program into a completely online format, and 2) to improve the quality and consistency of existing gerontology courses through a full curriculum review with the experts at TLT. The goal was to make the fully online transition in a manner that allowed for dynamic online learning and to incorporate CBE within the program. In 2015, the GIP began the work of revising all program courses to meet best practices of online learning and mapping program curricula to the national competencies in gerontology education (AGHE, 2014).

Course revisions were completed in 2017. We then applied for and received official UOnline Program status and accreditation as a fully online program through the Northwest Commission on Colleges and Universities (2020), which allows us to be recognized as an official UOnline Program at the University of Utah. The University is also a member of the National Council for State Authorization Reciprocity Agreements (NC-SARA), which reduces the number of other states’ regulations that must be continually monitored and makes the authorization process more efficient. Through NC-SARA, the GIP is able to offer and expand certain educational opportunities to students both in and outside the state of Utah (National Council for State Authorization Reciprocity Agreements, 2020). In 2017, we were also awarded Program of Merit (POM) status from AGHE at the master’s degree level. The process of curricula review, competency mapping, and online revision and planning facilitated our application, review, and award of the POM.

Course revision and development followed a model that incorporated best practices in teaching pedagogy and online learning. These included Fink’s (2003) approach to designing college courses, using the DREAM exercise, situational factors exercise, course alignment grid, and taxonomy of significant learning. A backward design approach (Wiggins & McTighe, 2005) helped faculty begin with competencies and learning objectives and then identify assessments to measure those objectives. Bloom’s (1984) taxonomy was used to design assessments that accurately evaluate the learning experiences, and active learning principles (Bonwell & Eison, 1991; Prince, 2004) guided choices to facilitate dynamic online learning. Instructional designers met individually with instructors to work through, enhance, and redesign courses to facilitate this work.

Upon completion, the program continued to assess student learning using individual course assessments, grades, progress toward graduation, annual and exit student interviews, and alumni surveys. However, we wondered about the student experience of, and reaction to, the changes before and after revision of the entire curriculum. Because this process spanned four years and multiple courses, we became interested in whether existing data might provide a better understanding of the student experience pre- compared to post-revision.

The purpose of this paper is to compare student course evaluations from six core courses of the Master of Science in Gerontology program before and after alignment with the AGHE competencies and online design changes. The objective of this study is to analyze pre- and post-revision qualitative and quantitative student evaluations in order to assess indicators of program quality and improvement. We hypothesized that course evaluations would improve from pre- to post-revision. This hypothesis was tested through two aims:

Aim 1: Assess the changes, pre to post course revision, in numerical course ratings provided by students.

Aim 2: Assess the changes, pre to post course revision, in student open-ended feedback submitted with course evaluations.

Methods

Course Selection

For the purpose of the current study, we compared de-identified, anonymous student course evaluations in six of our Master of Science core courses before and after the course revision and alignment. The six core courses required in our Master of Science program are: 1) GERON 5001/6001: Introduction to Aging, 2) GERON 5370/6370: Health and Optimal Aging, 3) GERON 5002/6002: Services Agencies and Programs for Older Adults, 4) GERON 5500/6500: Social and Public Policy in Aging, 5) GERON 5604/6604: Physiology and Psychology of Aging, and 6) GERON 5003/6003: Research Methods in Aging (Note: 5000- and 6000-level courses are considered graduate level by the University of Utah). Two additional core courses, GERON 5990/6990: Gerontology Practicum and GERON 6970/6975: Gerontology Thesis/Project, were omitted from this study because they were newly created in an online format, are mentor-based (one instructor to one or two students), and do not receive evaluations due to the small course size.

These six core courses underwent significant redesign across three consecutive semesters. Each instructor worked one-on-one with an instructional designer provided through the UOnline grant mechanism. Instructional designers, associated with the University of Utah’s TLT office, aided course instructors in updating their courses with the latest technological media to provide online content in innovative and effective ways.

Course Evaluations

Faculty were guided in assessing and revising courses through the use of the AGHE competencies (2014) and Fink’s (2003) and Bloom’s (1984) taxonomies. The AGHE competencies were first mapped across all gerontology courses, identifying redundancy, overlap, and missing content; a detailed description of this process is provided in Dassel et al. (2019). Faculty noted recommended revisions based on the competencies specific to objectives and modified content, and these were incorporated as faculty worked with instructional designers on their assigned courses. Next, instructors used the framework of the taxonomies to redesign the student learning experience for an active online format. Fink’s taxonomy is a non-hierarchical model that defines six major domains that need to be present for a complete learning experience: foundational knowledge, application, integration, human dimensions, caring, and learning to learn (Fink, 2003). Bloom’s taxonomy, revised in 2001 by a group of cognitive psychologists after Bloom’s death, is a hierarchical model that defines and distinguishes six categories of learning (Bloom, 1984; Anderson & Krathwohl, 2001). Bloom’s six categories, each intended to be mastered before moving to the next, are remember, understand, apply, analyze, evaluate, and create. These designations allow the accompanying assessments to be designed so that they accurately evaluate the learning experience by level.

A request to analyze student course evaluations was submitted to and reviewed by the Institutional Review Board (IRB) at the University of Utah. The IRB determined that oversight was not required because this work does not meet the definition of Human Subjects Research. All student evaluations are completed anonymously. Evaluations are used as a quality improvement tool to assess course outcomes and faculty instruction. In order to obtain a representative sample of student evaluations, we assessed evaluations from the two consecutive semesters immediately prior to the course revision and the two consecutive semesters immediately following the course revision.

Course evaluations were emailed to students during the last month of the semester, and students were asked to complete the anonymous evaluations voluntarily. The data, consisting of numerical scaled responses and open-ended comments, were summarized and provided to course instructors at the end of the semester once grades had been submitted. From the full list of course evaluation questions, we selected the 10 quantitative questions that we felt were most relevant to the course revision: 1) Overall course evaluation, 2) The course objectives were clearly stated, 3) The course objectives were met, 4) The course content was well organized, 5) The course materials were helpful in meeting course objectives, 6) Assignments and exams reflected what was covered in the course, 7) I learned a great deal in this course, 8) As a result of this course, my interest in the subject increased, 9) Course requirements and grading criteria were clear, and 10) I gained an excellent understanding of concepts in this field. Response options were on a Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Open-ended questions asked students to comment on: 1) course effectiveness, 2) the online components of the course, and 3) comments intended for the instructor.

Data Analysis

Data analysis occurred in two phases. Phase one focused on quantitative data from the course evaluations. Pre- and post-revision data were aggregated for each course. Because students do not take a course multiple times, analyzing pre- to post-revision data by individual student is impossible; rather than the individual student, we used the course as the unit of analysis. The means for each of the course evaluation questions (e.g., overall course rating, course objectives) were calculated as a proxy for evaluating the effectiveness of curriculum revision and course mapping. We used univariate statistics to describe frequencies and mean responses for each evaluation question. Paired-samples t-tests were conducted on the course means to examine score changes from pre- to post-course revision. Each course was compared separately, and then data were pooled for all courses to assess program change over time. For the qualitative portion of this study, we compiled and organized all of the open-ended student responses from the course evaluations by course and semester. Data were uploaded into NVivo (QSR International, 2018) and assessed in a two-phase process. First, each comment was read and coded into one of four a priori codes: 1) pre-commendations, 2) pre-recommendations, 3) post-commendations, and 4) post-recommendations. The second phase of coding used thematic analysis to identify the main themes presented by students (Saldaña, 2009). This allowed us to assess potential changes in student comments from pre- to post-revision.
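To make the quantitative comparison concrete, the following is a minimal sketch of the paired-samples t-test described above, with the course as the unit of analysis. The mean ratings shown are hypothetical placeholders rather than the study's data (which are reported in Table 1), and the use of SciPy is our own illustrative choice, not a description of the software used in the study.

```python
# Minimal sketch of the pre/post paired comparison, using the course as the
# unit of analysis. The mean ratings below are hypothetical placeholders; the
# study's actual values are reported in Table 1.
import numpy as np
from scipy import stats

# Mean score on one evaluation item, aggregated per course,
# before and after revision (illustrative values only).
pre_means = np.array([5.1, 5.4, 4.9, 5.2, 5.0])
post_means = np.array([5.3, 5.5, 5.0, 5.4, 5.1])

# Paired-samples t-test: each course is compared with itself pre vs. post.
t_stat, p_value = stats.ttest_rel(post_means, pre_means)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```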

Results

Data are anonymous and demographics were not gathered as part of the student evaluations; however, we do have a general picture of student demographics within the GIP. During a recent fall semester, 189 unique students were enrolled in gerontology courses. Students represented 6 master’s degree programs and 3 doctoral programs, with 9 students undeclared and 4 nonmatriculated. The average age of students was 29; 137 were female (72.5%), 50 male (26.5%), and 2 unknown (1.05%). The majority of students were white (67.72%), with others identifying as Hispanic/Latino (13.76%), Asian (7.40%), unknown ethnicity (4.23%), multi-racial (3.70%), international (1.59%), Black/African American (1.05%), and Native Hawaiian or other Pacific Islander (0.53%).

A summary of the t-test results is found in Table 1. Some data were unavailable due to too few responses. One course, GERON 5500/6500, did not have sufficient data for analysis (fewer than 2 observations per class) because it was a newly developed course and lacked sufficient pre-revision data; this course was retained in the overall pre-to-post comparison. Paired t-tests comparing overall course ratings pre- and post-revision revealed a trend toward improvement in the GERON 5001/6001: Introduction to Aging course (t=4.09; p=.05). Examination of aggregate data from all courses for individual course evaluation questions showed trends toward improvement in two areas: 1) “The course objectives were met” (t=1.47; p=.09), and 2) “I learned a great deal in this course” (t=1.36; p=.09). There were no statistically significant differences on overall or individual course evaluation questions from pre- to post-course revision.

Table 1. Assessment of Course Evaluation Questions Pre- to Post-Revision

Open-Ended Student Comments

Qualitative analysis summarized both the overall number of commendations and recommendations and the content of comments to assess change from pre- to post-revision. A total of 298 codes were documented pre-revision (see Table 2). Of these, 71% were commendations, focusing on positive feedback about course content, online teaching, and instructor efficacy. Comments focusing on recommendations for change comprised 29% of the total pre-revision codes; these recommendations centered on issues with course content, technology, and instruction and included both negative reviews and constructive ideas for change. Post-revision comments were coded 257 times, of which 73% were commendations and 27% were recommendations (Table 2). The percentages are very similar pre to post, indicating that the overall balance of positive and negative comments changed little from pre- to post-revision.

Table 2. Overall Pre to Post Coding of Course Evaluation Qualitative Comments

The second phase of qualitative analysis assessed the content of the comments to understand the topics students focused on pre- and post-revision. Student comments were evaluated for each course; pre-revision comments were analyzed first, followed by the post-revision comments. After identifying themes within the pre-revision comments, a summary of the main ideas was written. The post-revision comments for the same course were then read and coded, and a summary was written of the main themes in the post-revision codes. Representative quotes were included in each summary to present examples of themes. The pre- and post-revision summaries were then compared for each course, and any major thematic changes were noted in a final course comparison summary. Once this process was complete for each course, all course comparison summaries were re-read and coded for similarities and differences across the group of courses. Table 3 includes a summary of each course, including representative quotes.

Table 3. Analysis of Student Comments by Class

Summary of Qualitative Comments Pre to Post Revision

The following summarizes overall findings from qualitative analysis of student open-ended course evaluation comments. Student comments increased in two main areas post-revision when compared to pre-revision: 1) connection to the instructor, and 2) organized content.

Connection to the Instructor. Students expressed not wanting all of the extra technological features integrated into courses, such as screen- and video-recorded PowerPoint lectures, interactive quizzes, and movie creation apps; the variety of apps (e.g., Flipgrid, Lucid chart, Pathbrite) led to confusion and overwhelmed students. However, students emphasized the importance of technology in helping them maintain a connection with the instructor. For example, one student stated, “I especially liked the introduction videos before each module because it felt like the instructor was in constant communication with the class.” The adoption of video was particularly useful in helping students feel this connection.

Organized Content. Comments emphasized the importance of balancing assignments, content, and the amount of work. Students noted that spreading assignments throughout the semester helped them disperse their stress; this was mentioned most often when a course had multiple assignments due in the last week of the semester. One student commented, “Assign one of the larger projects to be due at mid-term, to space out the stress.” Students value learning, and in an online environment this requires incorporating moments of accountability to help students interact with the content. Students wanted these opportunities for accountability, and when a course lacked them, they acknowledged their own lack of course interaction: “I have mixed feelings about the assignments. On the one hand, I feel that the small amount of assignments was nice, but also allowed for me to be less involved in the course than perhaps I should have.”

Discussion

In this mixed-method, multi-year study examining student evaluations before and after course revisions, the quantitative analysis did not show statistically significant differences in mean course evaluation scores. This may be attributed to the small sample size, the use of aggregate rather than individual data points, missing data, and little variation in scores, with most courses receiving high mean ratings. The qualitative analysis of student evaluations, however, yielded useful information. We found that students value technology that augments their connection to the instructor and the organization of the course. Some students do not want all of the extra features that come with a wide variety of technology (e.g., external sites to create blogs, mini podcasts, video creation). Students noticed video introductions, video lectures, and video summaries, often stating that these made them feel connected to the instructor. This aligns with the quality indicators for CBE online courses, which identify technology and navigation as one of seven recommended areas for measurement (Krause et al., 2015). Students want to learn, and learning online necessitates incorporating one or more forms of accountability, which students themselves want. In addition, students desire forms of accountability throughout the semester rather than only at semester’s end. The balance of assignments, content, and amount of work matters to students, and instructional design is vital in quality online courses. Accountability should be an area on which faculty and instructional designers collaborate to enhance quality in online CBE; two related quality indicators are 1) assessment and evaluation, and 2) competence and learning activities (Krause et al., 2015). We also observed an increase in student comments on a given topic each time a major adjustment occurred, whether pre- or post-revision. This could be an outcome of the “growing pains” of trying something new. Much as in pilot research, faculty piloting new teaching strategies often need student feedback to refine changes in a manner that actually works for students. Checking in with students demonstrates the quality indicator of learner support and allows faculty to assess and evaluate their course as part of quality assurance (Krause et al., 2015).

The information obtained from this study is relevant to course and program quality improvement. Strengths include the mixed-method format and multi-year analysis. Limitations include the inability to compare pre- and post-revision data from the same students, as students cannot be required to take a course twice. In some cases there were not sufficient data for analysis, as t-tests require at least two observations per class (e.g., GERON 5500/6500); this was attributed to new course development and to changes in the student evaluation questions that occurred across the University of Utah, which meant that questions differed from pre- to post-revision for some courses. In addition, conducting a technology revision simultaneously with competency revisions makes it difficult to tease out changes due to course format versus curriculum. Instructors need to remind students which competencies are being covered and how students will be expected to interact with this content during the course, since clear learning outcomes and student comprehension of the proficiencies they are working on enhance CBE (Burnette, 2016).

Mapping the entire GIP curriculum to the AGHE competency guidelines (Dassel et al., 2019) prepared us to apply for and receive Program of Merit designation through AGHE. This Program of Merit status has provided the foundation for future application for accreditation through the Accreditation for Gerontology Education Council (AGEC), which requires that the programs under review align with the AGHE Gerontology Competencies for Undergraduate and Graduate Education (2014). Students from all health science disciplines participate in undergraduate and graduate level certificates available through our program. Improving program quality and demonstrating the efficacy of such changes should strengthen the ability of students to work with older adults in community and health care settings.

Programs should build on CBE by developing measures to assess student achievement of competencies; this process can be used to improve the quality of the student learning experience (Damron-Rodriguez et al., 2019; McClarty & Gaertner, 2015). Our program is developing a tool that will allow faculty to assess program learning outcomes and AGHE competencies within each class. Data will be gathered every 3 years and will facilitate progress at both the course and program levels. Tools such as this can be shared in an effort to develop tool-kits that help other gerontology programs build quality models of competency-based education (Damron-Rodriguez et al., 2019). It is our goal to enhance the ability of graduates to demonstrate the competencies and skills they have gained through high-quality gerontology education as they work with employers and older adults. We will enhance our approach to CBE by assessing the paths alumni take and their use of competencies to communicate their knowledge, skills, and contributions within the workforce. Advancing CBE in gerontology needs to happen through organizational leadership (Damron-Rodriguez et al., 2019). Our program benefits from being housed within a College of Nursing that follows a CBE model and process for accreditation; we can learn from this process of documentation, tracking, assessment, and quality improvement to enhance the rigor of our approach to CBE in gerontology programs. Finally, we plan to share our CBE strategies, assessment tools, and models with gerontology programs in the Utah State Gerontology Collaborative.

The results of this study have implications beyond the Gerontology Interdisciplinary Program to the larger Health Sciences campus where our program and college are housed. Many interprofessional health science students enroll in our courses. Thus, improving program quality and demonstrating efficacy ultimately strengthens students’ ability to work effectively with older adults in a variety of settings.

References

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Applebaum, R. & Leek, J. (2008). Bridging the academic/practice gap in gerontology and geriatrics: Mapping a route to mutual success. Annual Review of Gerontology and Geriatrics, 28, 131-148. doi: 10.1891/0198-8794.28.131

Association for Gerontology in Higher Education [AGHE] (2014). Gerontology competencies for undergraduate and graduate education. Washington, DC: Association for Gerontology in Higher Education. Retrieved from: https://www.geron.org/images/gsa/AGHE/gerontology_competencies.pdf

Bloom, B. S. (1984). Taxonomy of educational objectives: The classification of educational goals. New York: Longman.

Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. ASH-ERIC Higher Education Report. Washington, DC: School of Education and Human Development, George Washington University.

Burnette, D. M. (2016). The renewal of competency-based education: A review of the literature. The Journal of Continuing Higher Education, 64, 84-93. doi: 10.1080/07377363.2016.1177704

Council of Regional Accrediting Commissions [C-RAC]. (2015, June 2). Framework for competency-based education [Press release]. Retrieved from https://download.hlcommission.org/C-RAC_CBE_Statement_6_2_2015.pdf

Damron-Rodriguez, J., Frank, J. C., Maiden, R. J., Abushakrah, J., Jukema, J. S., Pianosi, B., & Sterns, H. L. (2019). Gerontology competencies: Construction, consensus and contribution. Gerontology & Geriatrics Education, 40(4), 409-431. doi: 10.1080/02701960.2019.1647835

Dassel, K., Eaton, J., & Felsted, K. (2019). Navigating the future of gerontology education: Curriculum mapping to the AGHE competencies. Gerontology & Geriatrics Education, 40(1), 132-138.

Fink, L.D. (2003) Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey‐Bass.

Krause, J., Dias, L. P., & Schedler, C. (2015). Competency-based education: A framework for measuring quality courses. Online Journal of Distance Learning Administration, 18(1). Retrieved from https://www.westga.edu/~distance/ojdla/spring181/krause_dias_schedler181.html

McClarty, K. L. & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. AEI Series on Competency-Based Higher Education. Washington, DC: Center on Higher Education Reform & American Enterprise Institute for Public Policy Research.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Dept. of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service website. Retrieved from https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

National Council for State Authorization Reciprocity Agreements [NC-SARA]. (2020). About NC-SARA. Retrieved from https://nc-sara.org/about-nc-sara

Northwest Commission on Colleges and Universities. (2020). Accreditation. Retrieved from https://www.nwccu.org/accreditation%20/

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223-231.

QSR International Pty Ltd. (2018). NVivo qualitative data analysis software (version 12) [Software]. Retrieved from https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home. Accessed May 17, 2020.

Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: SAGE.

Wendt, P. F., Peterson, D. A., & Douglass, E. B. (1993). Core principles and outcomes of gerontology, geriatrics, and aging studies instruction. Washington, DC: Association for Gerontology in Higher Education and the University of Southern California.

Wiggins, G.P., & McTighe, J. (2005). Understanding by design. (2nd Ed.).  Alexandria, VA: Association for Supervision and Curriculum Development.

Woldeab, D., Yawson, R. M., & Osafo, E. (2020). A systematic meta-analytic review of thinking beyond the comparison of online versus traditional learning. E-Journal of Business Education & Scholarship of Teaching, 14(1), 1-24.