One-Minute Preceptor: An Efficient and Effective Teaching Tool

This document will become one of many chapters in a textbook on education in the health professions to be published by Oxford University Press. All of the chapters in the textbook will follow a Problem-based Learning (PBL) format dictated by the editors and used by these authors.

Abstract

As learners progress from early health professions education to the clinical learning environment, they need high-quality instruction from their clinical preceptors to foster the application of knowledge to patient care.  The busy clinical environment poses challenges to both learners and educators, as time to meet both the learner’s and the patient’s needs is limited.  The One-Minute Preceptor is an easily learned clinical teaching tool that features five microskills initiated by the educator: getting a commitment from the learner, probing for supporting evidence, teaching a general rule, reinforcing what was done right, and correcting mistakes.  This model has been well studied.  It effectively and efficiently imparts high-quality education to the learner without compromising patient care, and it is preferred by learners and preceptors alike.  Although originally intended for use in the ambulatory care setting while working with a learner one-on-one, it can be adapted to a variety of settings.  Several factors can facilitate the model’s success, including educator adaptability and a focus on the principles of effective feedback.

Keywords

One-Minute preceptor, clinical education, teaching model, efficient teaching, effective teaching, feedback

Learning Objectives

  1. Identify strengths, weaknesses, and situations in which to utilize the “One-Minute Preceptor” model.
  2. Define and utilize the five steps of the One-Minute Preceptor.
  3. Adapt the model for various learning environments and groups of learners.
  4. Identify barriers to using the “One-Minute Preceptor” model and strategies for resolving them.

Case

Case: 

Maria is a medical student just starting her clerkship year and has been assigned to the pediatric endocrine clinic for a week.  She arrives at the front desk and sees that the waiting room is already full.  She is greeted and brought back to wait in the team room. 

Questions:

How is learning in the clinical environment different from her previous, pre-clerkship medical school experience?

How can she best meet her learning needs while fitting into the flow of her assigned clinic?

Case progression:

Dr. Wright, Maria’s assigned preceptor, walks through the clinic door to see a full waiting room.  The clinic receptionist greets him, saying, “Your medical student Maria is here and she is sitting in the team room.”  Dr. Wright had forgotten this was the first day of the clerkship.

Questions:

How can Dr. Wright best meet the medical needs of his patients, keep up with his schedule and still provide the student with a meaningful educational experience?

Is there a proven format that can help?

Case progression:

Following introductions, Dr. Wright quickly shows Maria around the clinic and describes the schedule and his expectations.  When asked about her learning goals, Maria is unsure how to respond. 

Questions:

How will Maria learn what she is capable of and where she falls short?

How will Dr. Wright observe Maria’s work enough to learn about her abilities and knowledge gaps?

How will Dr. Wright find time in the busy clinic to provide effective feedback safely?

Case progression:

Dr. Wright enters the first patient’s room with Maria and makes introductions.  He then leaves Maria to obtain a history and physical exam. 

Questions:

How can Dr. Wright learn about Maria’s questioning and exam skills without being there?

How can Dr. Wright ensure that Maria’s questioning and exam follow a hypothesis-driven progression?

Case progression:

Maria presents her findings to Dr. Wright when he finishes up with a patient. She pauses, looking to Dr. Wright for next steps.  Dr. Wright quietly notes to himself that he is already late for his next patient, but he wants to provide clinical teaching to Maria. 

Questions:

How is Dr. Wright able to teach and keep up with his clinical schedule?

How does Dr. Wright provide efficient disease-specific teaching to meet Maria’s needs?

Case progression:

Maria feels stressed, recognizing that Dr. Wright is busy.  She is also worried that she is not performing well enough.  There is so much that is new to her in the clinic!

Questions:

What are the components of effective feedback?

Can effective feedback be provided in a busy clinic?

Discussion

The clinical environment introduces educational challenges distinct from those in classroom-based health professions education. In the latter, more structured environment, there are many teaching modalities to facilitate knowledge acquisition: team-, case-, and problem-based learning, simulation, classroom teaching, and self-study.  Additionally, during that time, a student’s education is the primary focus of the teaching faculty, and their performance is reported directly as a score on a summative assessment.  As students become integrated into the clinical environment, their emerging knowledge and skills are stretched by the real-world complexity of clinical applications.  For example, students need to balance the disease-based knowledge obtained through their reading with actual patient symptoms to construct a prioritized differential diagnosis and a patient-specific management plan.  In the clinical setting, faculty need to create a fruitful and safe educational environment while concurrently providing exceptional patient care. Teaching modalities leaned on heavily in early health professions education are less congruent with the environment of clinical practice, and the assessments students receive can be more subjective and based on short interactions.

To be feasible, models of teaching need to adapt to this environment.  To be useful to the instructor, a model must provide insight into both the patient’s illness and the learner’s abilities, be easy to use, and fit within the tight time constraints imposed by increasing patient volumes.  To be beneficial to the learner, the model should allow for autonomy in a psychologically safe environment, provide direct teaching that improves an area of weakness, and impart honest feedback.  Multiple models have been published to overcome these challenges and maximize learning (SNAPPS, concept mapping, and the One-Minute Preceptor).1  Each approach has different strengths and weaknesses.  Based on a broad evidence base detailing its efficiency, efficacy, learner and preceptor preference, and adaptability for multiple health professions and settings, we will delve into the specifics of the One-Minute Preceptor model.1,2

The five-step “microskills” model of clinical teaching, known more commonly as the “One-Minute Preceptor,” was first formally described in the Journal of the American Board of Family Practice in 1992 by Neher et al.3  This clinical teaching method earned its name from its emphasis on providing a brief teaching moment within the context of a busy clinical setting.  The model was originally created by senior educators at the University of Washington to provide less experienced family practice preceptors with an educational framework to improve their teaching.  It was originally presented within the University of Washington Family Practice Network Faculty Development Fellowship curriculum and at other regional and national meetings.  Since the 1990s, use of the One-Minute Preceptor has spread across various disciplines as an effective approach to clinical teaching.

The five microskills are simple teaching behaviors focused on optimizing learning when time is limited.3  The model is best initiated by the clinical preceptor after the learner has seen a patient and presented details about the case. The preceptor encourages the learner to develop their own conclusions about the patient from the information they have gathered, then identifies gaps in the learner’s knowledge and provides specific teaching and feedback to fill those gaps. This approach is different from traditional models in which the preceptor asks a series of clarifying questions, mostly to aid the preceptor in correctly diagnosing the patient.3

The first microskill is to “get a commitment from the learner”.3  This entails asking the learner to commit to a certain aspect of the patient’s case.  For example, after the learner presents the patient, the preceptor may ask, “What do you think is the most likely diagnosis?” or “What laboratory tests would you like to order?”.  This encourages the learner to make a decision and demonstrate their level of knowledge. 

The second microskill is to “probe for supporting evidence”.3  After the learner makes a commitment, this step allows the supervising clinician to better understand the learner’s thought process and identify knowledge gaps.  The preceptor may ask, “What aspects of the patient’s history support your diagnosis?” or “How did you select those laboratory tests?”. 

The third microskill is to “teach a general rule” that ideally helps fill a knowledge gap identified in the first two steps.3  This is meant to be a brief teaching pearl about one aspect of the patient’s case.  For example, the preceptor may highlight physical exam findings that support the most likely diagnosis or discuss an additional laboratory test that could help narrow the differential diagnosis. 

The fourth microskill begins the feedback portion of the model and “reinforces what was done right”.3  Feedback should always be specific, timely, and focused on behaviors.4,5 

The fifth microskill “corrects mistakes”.3  This should be done after allowing the learner to assess their own performance first.  Educators are also encouraged to provide context while giving feedback, highlighting the positive impact of the learner’s behaviors and how to correct any errors that took place.   The five microskills are meant to be a brief set of teaching tools to provide relevant teaching points and feedback in a few quick minutes.

There are many process-oriented strengths of the One-Minute Preceptor model that explain its widespread use.  First, this model improves upon more traditional approaches in that it not only focuses on the learner but also facilitates correct diagnosis of the patient.6  The first two steps delve into the learner’s knowledge base, thought process, and potential gaps so that the later steps can provide teaching and feedback that are specific to the learner’s needs in that moment.  It has also been shown that teaching is more disease-specific rather than generic when using this model.7  Educators are more likely to provide teaching points that are focused on differential diagnoses, patient evaluation, and disease progression than on more general topics, such as approaches to history taking or presentation skills.7  This higher level of teaching can focus on the learner’s decision-making process and clinical reasoning ability, which are essential skills for optimal patient care.3,8  Another process-oriented strength of the model is its efficiency.  In addition to the teaching being high-yield and learner-centered, it is quick to work through and is viewed by preceptors and residents as more effective and efficient.6,9  The model is also easy for preceptors to learn in just an hour or two.3  Receiving training in the One-Minute Preceptor model also increases preceptors’ self-efficacy as educators and increases the likelihood that they will choose to precept in the future.10  Finally, feedback is often lacking in more traditional teaching encounters, which can leave the learner unsure of their performance and where they should focus their learning.  By integrating feedback into the model, the One-Minute Preceptor has improved the quality and specificity of feedback, even in busy clinical environments.11

As with any teaching process, there are limitations and weaknesses of the One-Minute Preceptor model.  The premise of the model relies on good information gathering by the learner and an ability to convey this information to the preceptor, which may be challenging for more junior learners.  As it is a preceptor-driven model, faculty development and practice are necessary for success.  Also, more junior educators such as residents may feel less comfortable teaching general rules due to lack of confidence or limitations in their own knowledge base.12  Due to its focus on efficiency, the general rule that is taught must be limited and succinct, and the preceptor may need to omit other key learning points. 

There have been many studies on the outcomes (i.e., impact and efficacy) of the One-Minute Preceptor model since its inception.  There is evidence that this teaching method benefits educators, students, and patients alike.  Clinician educators find the model to be more effective and efficient than traditional models.6  They also indicate higher confidence in their ability to rate learners and tend to rate learner performance more favorably than with more classic methodology.6  The One-Minute Preceptor is also useful in everyday teaching practice, with faculty in the original study indicating that they used the five microskills in 90% of their teaching encounters, and all found it at least somewhat helpful.3  Learners also favor this model over more traditional approaches.13  Medical students rate resident teaching skills higher after the resident has received training in the One-Minute Preceptor.12  Learners are also more likely to be included in the decision-making process when this model is used as compared to more traditional models.13  Learners also benefit from increased feedback.  With this teaching model, they receive higher quality feedback in that it is specific and includes constructive comments in addition to positive ones.11  One common barrier to effective clinical teaching is that it takes time away from the patient, but the One-Minute Preceptor aims for efficiency and leaves time for quality medical care.  There is also evidence that patients are more likely to be diagnosed correctly when the One-Minute Preceptor is used rather than more traditional models.6

The original intent of the One-Minute Preceptor was to assist clinician educators in the ambulatory clinical setting.  This is an ideal environment, as learners are often presenting patients to their preceptor one-on-one, which provides an opportunity for preceptors to tailor teaching to the individual learner’s needs.  Furthermore, there are often high patient volumes in the ambulatory setting with limited time per patient, making quick clinical teaching models necessary for workflow.  The model does function best in the context of patient care rather than in the classroom setting, as the basis for starting this approach is a learner’s presentation of an actual patient’s case.  It also may be challenging at the patient’s bedside, as the teaching is tailored to the learner’s level of understanding rather than the patient’s.

Despite its initial application in the ambulatory setting, the One-Minute Preceptor has been used effectively in other clinical and educational environments.  It has been adapted and implemented to teach multiple learners on the inpatient wards.14  After a learner presents a patient on rounds, the clinician educator can ask the learner to make a commitment to the diagnosis.  If the learner struggles at this step, the same question can be posed to a more senior learner on rounds.  General rules and feedback can be delivered quickly during rounds as well.  With multiple learners, it may be effective to alter step three (“teach a general rule”) and highlight several general rules: more basic learning points for junior learners and more complex ones for senior learners.  As the clinical setting is often unpredictable, altering the model to fit the scenario can be beneficial.  One environment that might require some adaptation for the One-Minute Preceptor model to be successful is a high-acuity setting, like the emergency department.  Since presentations may happen at the bedside, learners should be counseled ahead of time on what discussions are appropriate to have in front of patients.15  Learners should also be encouraged to circle back to their preceptor to complete the model if an interruption arises.  Another adaptation might be that learners make a commitment on the patient’s most acute problem rather than completing a full assessment, as there might not be appropriate time during very critical and pressing scenarios.16  To further aid feasibility, preceptors might opt not to use all the steps in every encounter or to alter their exact order.17   In some instances, only a few of the steps may apply.  This allows for widespread use of the model in a variety of situations, even while teaching procedures.

Table 1. One-Minute Preceptor User’s Guide 2, 3, 4, 17, 18

Step 1. Getting a commitment
Goal: The learner should internally process the information they gathered to create an assessment of the situation.3 Learners can be asked to commit to primary or alternative diagnoses, the next diagnostic step, or potential therapies.18
Approaches to initiate step: This step is usually initiated following the learner presentation. This questioning can evolve through longitudinal experiences with the same learner.
• “What do you think is the most likely diagnosis for this patient?”2
• “What do you think is going on with this patient?”3
• “I like your thinking that this might be pneumonia; what other diagnoses are you considering?”2
• “What laboratory tests do you feel are indicated?”3
• “What would you do for this patient if I weren’t here?” (to decrease the pressure of giving “the ideal” answer)18
Learner deficit identified: Failing to commit could indicate difficulty processing the information, fear of exposing a weakness or dependence on the opinions of others.3 Alternatively, the learner might not have integrated some relevant information they had gathered, which could suggest lack of content knowledge.2,17
Possible remedy for identified learner deficit: Assuming a safe environment, this identified mistake in processing is a teaching opportunity.3 The next step will help elucidate if that teaching point should focus on the learner’s processing, a knowledge deficit, or the need for hypothesis-driven data gathering.
Facilitators for success:
• Create a safe and supportive environment to allow the learner to feel comfortable being vulnerable to make a commitment instead of more safely staying quiet.3
• If necessary for patient care, preceptors can ask a few brief clarifying questions. This should be limited at this stage, as too much questioning highlights the preceptor’s thought process rather than the learner’s.3 These questions are more appropriate later in the process.
• Learners should be gently pushed to make a commitment just beyond their level of comfort.18
Step 2. Probing for supporting evidence
Goal: Help learners reflect on their reasoning to identify process or knowledge gaps.17
Approaches to initiate step: Open-ended questions aimed at having the learner identify information used to arrive at their commitment:
• “Why do you think that is the most likely diagnosis?”2
• “What were the major findings that led to your diagnosis?”3
• “Did you consider any other diagnoses based on the patient’s presentation and exam?”2
• “How did you rule those things out?”17
• “Why did you choose that particular medication?”17
Learner deficit identified: Probing allows clear evaluation of the learner’s knowledge and clinical reasoning, and identification of gaps and deficits.
Possible remedy for identified learner deficit: Any deficits (either knowledge or reasoning) identified in this step can serve as content for the next step, “teaching a general rule”.17
Facilitators for success:
• Preceptors should avoid passing judgement or talking and teaching immediately.3 By listening and learning which facts support the learner’s commitment, the teaching point can be tailored to the learner. This decreases the likelihood of general teaching that might repeat areas the learner already knows.3
• Maintain a supportive environment.
Step 3. Teaching a general rule
Goal: Preceptor shares expertise with a relevant and succinct learning point based on what the preceptor learned about the learner’s knowledge and deficits.3
Approaches to initiate step: Direct statements work well:
• “There was a recent journal article indicating that children with otitis media do not necessarily require antibiotics, unless they meet certain criteria…”
• “In elderly people with confusion, it is important to ask about recent medication changes.”
• “Following an uncomplicated vaginal delivery, our standard of care is a follow-up contact within 3 weeks.”
Facilitators for success:
• This step can be skipped if the learner has performed well and no gaps are obvious, or if more information is needed for a decision.3 The saved time can be spent gathering additional information with the patient.
• Generalizable and succinct “take-home” teaching points relevant to the patient are preferred to complete lectures or descriptions of preceptor preferences.3,17 Topics can include disease-specific features, patient-specific management decisions, or areas for follow-up.18
• If, during the probing step, you identify larger knowledge gaps, it might be more appropriate to assign more comprehensive reading or to plan a slightly longer discussion for a later time.18
Step 4. Reinforcing what the learner did well
Goal: Recognize, validate and encourage certain behaviors. Appropriately build learner confidence.3
Approaches to initiate step: A timely, direct, specific statement that is based on the behavior directly observed by the preceptor is ideal.4, 17 Asking the learner what they felt they did well is an effective place to start.18
• “I was impressed with how you obtained a thorough social history on our patient and noted that smoke exposure at home may be exacerbating her asthma.”
Facilitators for success:
• Aim for specific statements, which are more helpful than general praise.3 Brief positive statements can be integrated into the questions from the preceding steps as well.17 (During “probing for evidence”: “Asking about travel history was a great thought; what was your motivation?”)
Step 5. Correcting mistakes
Goal: Tactfully improve learner performance.3
Approaches to initiate step: A timely, direct, specific statement is helpful.4 Asking the learner where they feel they could improve can help the preceptor start the conversation from where the learner feels they are.3,4,18
• “A thorough skin exam is important in every patient. Noting his Janeway lesions may have brought endocarditis to the list of his potential diagnoses.”
Facilitators for success:
• Maintain a collaborative and psychologically safe environment.4 “Focus on the decision, not the decision-maker.”4 Finding the right moment and setting for this part is helpful for success.3,4 The most effective feedback occurs in quiet, relaxed areas soon after the observed performance.3,4 This can be challenging as the clinical environment is unpredictable and often fairly public.
• Asking students ahead of time how and when they want to receive feedback can be very helpful.18
• Very specific feedback for areas of improvement is more actionable and measurable than general criticism.4 Concrete improvement suggestions can move this delicate conversation in a positive direction; general criticism can impair the supportive and trusting environment.
• Faculty development efforts can be helpful for successful implementation.

Multiple Choice Questions:

  1. Which of the following is NOT a step in the One-Minute Preceptor model?
    A. Correct mistakes
    B. Get a commitment
    C. Provide five teaching points
    D. Teach a general rule
    Answer: C
  2. Which of the following are benefits of the One-Minute Preceptor model?
    A. Increases quality of feedback to learner
    B. Improves efficiency and effectiveness of clinical teaching
    C. Provides disease-specific, rather than generic teaching
    D. All of the above
    Answer: D
  3. How can the One-Minute Preceptor model be adapted in the emergency department setting?
    A. Prioritize all five steps of the model over patient care
    B. Get a commitment on the patient’s most urgent clinical issue
    C. Encourage the patient to provide feedback instead of the clinician educator
    D. Skip teaching a general rule since time is limited
    Answer: B
  4. Which of the following is the best way to come up with the general rule to teach?
    A. Teach a knowledge gap identified in step two (“probing for supporting evidence”)
    B. Teach a general rule that the learner already knows to reinforce it
    C. Teach the general rule that you know the most about
    D. Teach a general rule that pertains to the next patient that the learner will see
    Answer: A

References

  1. Pierce C, Corral J, Aagaard EM, Harnke B, Irby DM, Stickrath C. A BEME realist synthesis review of the effectiveness of teaching strategies used in the clinical setting on the development of clinical skills among health professionals: BEME guide no. 61. Med Teach. 2020; 42(6): 604-615.
  2. Gatewood E, DeGagne JC. The one-minute preceptor model: a systematic review.  JAANP. 2019; 31(1): 46-57.
  3. Neher JO, Gordon KC, Meyer B, Stevens N. A five-step “microskills” model of clinical teaching. J Am Board Fam Pract. 1992; 5(4): 419–424.
  4. Ende J. Feedback in clinical medical education. JAMA. 1983; 250: 777-81.
  5. Kelly E, Richards JB. Medical education: giving feedback to doctors in training. BMJ. 2019; 366-370.
  6. Aagaard EM, Teherani A, Irby DM. Effectiveness of the one-minute preceptor model for diagnosing the patient and the learner: proof of concept. Acad Med. 2004; 79(1): 42–49.
  7. Irby DM, Aagaard E, Teherani A. Teaching points identified by preceptors observing one-minute preceptor and traditional preceptor encounters. Acad Med. 2004; 79(1): 50–55.
  8. Richards JB, Hayes MM, Schwartzstein RM.  Teaching clinical reasoning and critical thinking: from cognitive theory to practical application.  Chest. 2020; 158(4): 1617-1628.
  9. Arya V, Gehlawat VK, Verma A, Kaushik JS. Perception of one-minute preceptor (OMP) model as a teaching framework among pediatric postgraduate residents: A feedback survey. Indian J Pediatr. 2018; 85: 598.
  10. Miura M, Daub K, Hensley P.  The one-minute preceptor model for nurse practitioners: a pilot study of a preceptor training program.  JAANP. 2020; 32: 809-816.
  11. Salerno SM, O’Malley PG, Pangaro LN, Wheeler GA, Moores LK, Jackson JL. Faculty development seminars based on the one-minute preceptor improve feedback in the ambulatory setting. J Gen Intern Med. 2002; 17: 779–787.
  12. Furney SL, Orsini AN, Orsetti KE, Stern DT, Gruppen LD, Irby DM. Teaching the one-minute preceptor: a randomized control trial. J Gen Intern Med. 2001; 16: 620-624.
  13. Teherani A, O’Sullivan P, Aagaard EM, Morrison EH, Irby DM.  Student perceptions of the one-minute preceptor and traditional preceptor models. Med Teach. 2007; 29(4): 323–327.
  14. Pascoe JM, Nixon J, Lang VJ. Maximizing teaching on the wards: review and application of the One-Minute Preceptor and SNAPPS models. J Hosp Med. 2015; 10(2): 125–130.
  15. Farrell SE, Hopson LR, Wolff M, Hemphill RR, Santen SA. What’s the evidence: a review of the One-Minute Preceptor Model of clinical teaching and implications for teaching in the emergency department. J Emerg Med. 2016; 51(3): 278–283.
  16. Sokol, K. Modifying the one-minute preceptor model for use in the emergency department with a critically ill patient. J Emerg Med. 2017; 52: 368–369.
  17. Lockspeiser TM, Kaul P. Applying the one-minute preceptor model to pediatric and adolescent gynecology education. J Pediatr Adolesc Gynecol. 2015; 28: 74–77.
  18. Neher JO, Stevens NG. The one-minute preceptor: shaping the teaching conversation. Fam Med. 2003; 35(6): 391-393.

Assessment of Learners

This document will become one of many chapters in a textbook on education in the health professions to be published by Oxford University Press. All of the chapters in the textbook will follow a Problem-based Learning (PBL) format dictated by the editors and used by these authors.

Learning objectives

  1. Compare and contrast feedback, formative assessment, summative assessment, evaluation, and grading.
  2. Identify frameworks for providing learner assessment and tracking growth in the health professions.
  3. Identify key components of feasible, fair, and valid assessment.
  4. Describe the roles and responsibilities of both preceptors and learners in optimizing assessments and evaluations.

Abstract

This chapter explores the concepts of learner assessment and evaluation by presenting a case in which a medical student participates in a year-long clinical experience with a preceptor. Using various data points and direct observation, the student is given both formative and summative assessments throughout the learning experience, providing them with information needed to guide their learning and improve their clinical skills. As the case progresses, questions are posed in order to help identify key concepts in learner assessment and explore the interconnectivity between assessment, evaluation, feedback, and grading. The information presented will help educators identify and develop effective assessment strategies that support learner development and growth.

Keywords: assessment, evaluation, learner, formative, summative, grading

Case

Morgan is a medical student who is beginning a new pediatric clinical experience. This learning opportunity includes weekly outpatient clinics with you as a preceptor. This is Morgan’s first opportunity to learn clinical skills outside of the classroom setting.   

  • What is the role of a learner in the clinical setting as they progress through their training?
  • What are some of the frameworks available for assessing learners’ abilities in the clinical setting?

After introducing yourself and the clinic staff to Morgan, you give him a quick tour of the facilities before sitting down in your office to discuss his current learning goals. He identifies obtaining a history as an area he would like to improve. Specifically, he would like to improve his ability to take a history that is comprehensive but tailored to the chief complaint and the clinical setting. You advise him that you will regularly assess him and provide feedback. You recommend that he keep a patient log so he can track the number of patients and complaints he sees throughout the experience.

  • What is the difference between assessment and feedback?
  • What are the roles of the faculty and the learner in providing the learner with assessment and feedback?

In the first week, Morgan sees a 6y/o girl who presents with a fever. You follow the student into the room, allowing him to enter first.  After introducing Morgan, you ask the family if they are comfortable with Morgan taking the history. The family is excited to contribute to the education of a medical professional and readily agrees. Morgan stands against the wall, looks down at his tablet to pull up his notes, and begins: “The medical assistants told us your daughter has a fever. How long has it been going on?” He then proceeds to ask about the nature of the fever, some associated symptoms (including runny nose, cough, and rash), and alleviating and exacerbating factors. He asks about her past medical history, including surgeries and medicines, and then conducts a full family and social history. You ask the family a few follow-up questions and perform a physical exam, finishing the visit by discussing the most likely diagnoses and developing a plan with the patient and family.

After sending the family on their way, you ask Morgan how he felt the history went. You ask him to reflect on what he did well and what he should continue to work on. After listening to Morgan’s response, you provide feedback on your assessment, including concrete suggestions for improvement. The student thanks you for the feedback and commits to integrating your recommendations into his practice. He also thanks you for the opportunity to observe your approach to counseling families and obtaining a physical exam that puts the patient at ease.    

  • What are the key components of effective formative assessment?
  • How often should formative assessment occur to optimize learning and growth?

You continue to work with Morgan over the following weeks. He sees multiple children of all ages with several different complaints. When able, you accompany him into the room so that you can directly observe his history-taking. However, there are multiple times that he goes in alone and then reports his findings to you. During many encounters, he is unsure how to interpret his exam findings and asks you to double-check his technique and interpretations. At times, you ask follow-up questions related to the chief complaint and he admits that he does not know the answer. When this occurs, he reports honestly that he did not ask the question. He promises to ask when you both return to the room. You note that he frequently asks the question in subsequent encounters with patients.

  • What role does trustworthiness play in the assessment of learners?

Three months into the year, a 5y/o child, Kai, presents with a fever. You accompany Morgan into the room to directly observe his history. You obtain the family’s permission for Morgan to participate in their child’s care. Morgan begins by introducing himself and asking the family if they are okay with him taking some notes while they talk. He begins, “Kai, I’m sorry you aren’t feeling well. Can you tell me what’s been going on?” Kai says she doesn’t feel good and has a fever. Morgan proceeds to obtain a comprehensive but focused history of the fever, including both the patient and parents in the conversation. While taking the history, he uses active listening skills, asks clarifying questions, and summarizes the information for the family to ensure he fully understands their concerns. He asks about commonly associated symptoms and symptoms related to possible diagnoses. He asks the family about treatments they have tried (including over-the-counter and homeopathic remedies), asks about their concerns regarding the fever, and includes recent travel and sick contacts in his social history. Before moving on to the physical exam, the student asks the family if he has missed anything important about the chief complaint or about Kai’s medical history.

  • What does a learner need to do to show “competence” or the ability to effectively perform a professional activity without supervision?
  • How do learner assessment frameworks help track/note improvements in learner performance?

After you conclude the visit and leave the room, you ask Morgan how he feels the encounter went and how he has progressed with his goal of obtaining a history. He is happy with his progress and able to identify areas in which he has improved and things he would still like to work on. You agree that his skills have improved and provide him with formative feedback regarding your assessment of his performance today. You ask him to stay after the clinic so the two of you can review his progress to date.

  • What is the difference between formative and summative assessment?
  • What are the benefits of longitudinal relationships in both formative and summative assessment?

After the clinic, you sit down with Morgan. You ask him to pull out his patient log, and the two of you go through the patients he has seen over the 3 months he has been with you so far. He has been collecting a portfolio of interesting cases and experiences. He brings with him the notes he took when getting feedback on his weekly formative assessments. The two of you go through his portfolio and patient log. He reflects on the improvements he has made, identifies areas in which he can continue to improve, and sets new learning goals. You agree with his findings and provide further guidance on the growth you have observed and areas he can continue to work on. You continue this pattern of sitting down with Morgan every 3 months throughout the remainder of the learning experience to review his progress, discuss learning goals, and add to his portfolio.

At the end of the year, you thank Morgan for his participation in the care of your patients. The school has an evaluation form that asks about students’ strengths and areas requiring further growth. You consider all the work you have done with Morgan, his assessments, and his growth throughout the year. You fill out the evaluation form, providing a summative assessment that includes both quantitative (performance ratings) and qualitative (narrative comments) information. Morgan is required to take a final “exam” that includes a multiple-choice test and an observed encounter with a simulated patient, in which an actor plays the role of a patient. Morgan receives a final grade for the rotation with comments on his performance. 

  • What are the key components of effective summative assessment?
  • What are the methods and key components of learner evaluation?
  • What are the similarities and differences between assessment and evaluation?
  • What role does the learner have in accepting and reviewing their evaluation?

Assessment and learning in health sciences education

The goal of health sciences education is to provide the environment, information, and experiences needed for learners to develop the knowledge, skills, and attitudes required to practice as a professional in their specific field. Ultimately, the responsibility for learning lies with the learner.1 The teacher’s role is to support and challenge learners in their journey, providing information, supervision, and assessment in order to help them grow and improve in their abilities.

Assessment is one of the most important methods teachers use to support and challenge their learners.2,3 Assessment, in essence, is the process of judging a student’s performance relative to a set of expectations.4 Through assessment, the teacher guides learning by helping students identify their unique strengths and weaknesses and providing concrete recommendations to address these areas. These may include knowledge gaps, skill sets requiring further practice, or even misunderstandings in requirements and attitudes that need reframing. This is why learning and assessment are linked together – one can’t really be achieved well without the other. In an ideal learning environment, every teacher considers it their responsibility to assess learners routinely and consistently, challenging them to demonstrate their current abilities and then supporting them in their growth where needed.

Assessment can take many forms, varying based on the circumstances of the environment and learner; the types of knowledge, skills, and attitudes being assessed; and the primary purpose for which the assessment information will be used. For example, types of knowledge and skills can vary from remembering basic facts to thinking critically to conducting complex surgical procedures. Assessments, therefore, will differ and may range from multiple-choice written tests to oral examinations to procedural skills simulations or direct observations in the clinical learning environment, respectively. Overall, assessment should be used to tailor individual learners’ education and experiences to support their growth. Each assessment may be formative and relatively informal, geared toward iteratively shaping performance, or may be more formal, geared toward giving information about learning outcomes.

Formative vs. Summative Assessment

Over time, specific terms have emerged to differentiate among the variations in assessment described above. One of the most important distinctions is between formative and summative assessment. Think of these as a continuum.5 On one end is formative assessment. Formative assessments tend to be less formal and focused on providing information to help students ‘form’ their knowledge, skills, and attitudes. They should be performed regularly and may be completed after a single experience or observation. On the other end of the spectrum are summative assessments, which tend to be more formal and focused on “summarizing” a learner’s knowledge and skills after a certain time period. Formative and summative assessments can be systematically sequenced and combined within a school to optimize learning, so that assessments from individual teachers contribute to a larger program of assessments conducted by school leadership to create a holistic understanding of learners’ strengths and weaknesses.6,7 In this chapter, we focus on assessments made by individual teachers.

With formative assessments, teachers use limited data to identify learners’ strengths and areas needing further development and help guide the learner’s education and experiences to support this learning. With summative assessments, teachers use more comprehensive information in order to judge the learning outcomes achieved to date and check the learner’s knowledge and skills. These assessments tend to combine information from multiple sources and settings and include information from different time points. To better understand the difference, take the example of a runner competing in a marathon. The athlete receives formative assessments and feedback throughout the race, including lap time, current pace, and current position. After the race is completed, the runner gets a summative assessment, including average pace per mile, time to course completion, and overall rank among finishers. Formative assessment may be used by the runner to adjust their strategies and plans throughout the race. In contrast, summative assessment information can help guide the runner as they prepare for and begin their next race. Often, a summative assessment is tied to a decision-making process, such as a final grade.

Figure 1: Relationship of formative and summative assessment


Assessment and Evaluation

Another important distinction in education is between assessment and evaluation. Although the terms are often used interchangeably, there are differences. Assessment refers to the process of collecting evidence of learning, identifying learners’ strengths and areas needing further development and growth. Evaluation refers to the process of comparing evidence of progress to learning objectives or standards (criterion-referenced) or even to other learners’ performances (norm-referenced). In other words, assessment focuses on the learning process while evaluation focuses on the learning outcomes compared to a standard. Keeping the focus on assessment supports growth-mindset learning and the idea that health professionals are life-long learners.8 The shift to competency-based education and assessment emphasizes criterion-referenced evaluation, promoting self-improvement in learning rather than competition with other learners. Conversely, overemphasis on evaluation can set up an environment that focuses on performance-mindset learning.

Now, you may be wondering how feedback and grading fit into assessment and evaluation. Feedback refers to information provided to the learner about their knowledge, skills, or attitudes at a single point in time after a direct observation or assessment. Grading is a form of evaluation, providing the learner with an overall score or rank that is based on their performance.

Assessment Frameworks

For many years, educators used the term learning objectives to describe desired outcomes they wanted learners to achieve through a learning experience. Objectives usually include action verbs and are stated in the following format: “At the end of this module, learners will be able to…”, followed by a description of a specific behavior. Refer again to the learning objectives at the beginning of this chapter as examples. More recently, however, educators have begun to state desired learning outcomes as competencies. Competencies refer to a combination of knowledge, skills, values, and attitudes required to practice in a particular profession.9 These abilities are observable, so they can be assessed. Learners are expected to demonstrate “competence” in all abilities related to their field prior to practicing without supervision. Therefore, the purpose of most learning programs is to prepare learners to achieve a level of competence in all of the identified critical activities for that profession.

Most professions have identified several competencies. For example, the Association of American Medical Colleges has identified 52 competencies for practicing physicians. These have been organized into domains: medical knowledge, patient care, professionalism, interpersonal and communication skills, medical informatics, population health and preventative medicine, and practice-based & systems-based medical care.9 When used together, they describe the “ideal” physician. Although they are comprehensive and provide a strong basis for the development of an assessment strategy, their descriptions can be abstract and therefore difficult to assess concretely in the setting in which a learner practices. As a result, obtaining routine and meaningful assessments of these competencies during medical school and graduate medical education has proven to be a challenge.10 In response to these challenges, various organizations have developed approaches to better define expectations of learners and assess their progress throughout their training.

One of these new approaches, Entrustable Professional Activities (EPAs), is growing in popularity across the health sciences. This approach focuses on assessment of tasks or units of practice that represent the day-to-day work of the health professional. Performance of these activities requires the learner to incorporate multiple competencies, often across domains.11-14 For example, underlying all of the EPAs are the competencies of trustworthiness and of understanding one’s individual limitations, which lead to appropriate help-seeking behavior when needed.15 EPA frameworks have been created in many of the health science education fields, including nursing, dentistry, and medicine. One of the earliest organizations to adopt EPAs was the Association of American Medical Colleges. In 2014, it identified thirteen core EPAs for graduating medical students entering residency.16 These thirteen EPAs encompass the units of work that all residents perform, regardless of specialty. Examples include “Gather a history & perform a physical exam,” “Document a clinical encounter in a patient record,” and “Collaborate as a member of an interprofessional team.”

The goal of EPA assessments is to collect information about learners’ “competence” in performing required tasks in their respective field. They assess a learner’s readiness to complete these activities with decreasing levels of supervision. As learners progress in their abilities, they are able to perform these activities with less and less supervision from teachers, moving from observing only, to performing with direct supervision, to performing with indirect supervision, to performing without supervision. A major benefit of EPAs is that they provide a holistic approach to assessment. Each EPA requires integration of competencies across domains in order to perform the activity. Since faculty routinely supervise learners performing these professional activities in the clinical learning environment, they find them more intuitive to assess. If multiple direct observations of the activities are performed and the learner demonstrates competence to perform them without need for direct supervision in multiple contexts (e.g., various illness presentations, different levels of acuity, multiple clinical settings), then a summative assessment can be made that the learner is competent to perform this activity without direct supervision in future encounters.

Figure 2: Example of EPA supervision scale

Observe only: Able to watch the supervisor perform the activity.
Direct supervision: Allowed to perform the activity with the supervisor in the room.
Indirect supervision: Allowed to perform the activity with the supervisor outside of the room; the supervisor will double-check findings.
Practice without supervision: Allowed to perform the activity alone.

Characteristics of High-quality Assessments

Not only can frameworks improve the quality and effectiveness of learner assessment strategies, certain principles can be applied to individual assessments in order to support growth-mindset learning and achieve the assessment’s desired goals. As would be expected, not all assessments are of equal value.17 High-quality assessments tend to follow six simple rules:

Rule 1: Direct observation. Observe learners’ actual performance whenever possible. This means that you are present while learners work with patients in the clinical setting, watching them use the knowledge and skills you are assessing. Frequently, educators observe small parts of the activities and rely on learner reporting of findings to make a judgment about how well the learner performed. Some of the reported information can be double-checked by the preceptor by independently speaking with the patient and performing an exam. However, the gold standard is direct observation of an encounter (e.g., how did they ask questions, what was their technique for administering the vaccine?). Making assumptions can lead to inaccurate assessments and missed opportunities for growth.

Rule 2: Consider context. Use multiple observations and data points to guide summative assessments, evaluations, and grading. A learner’s performance may vary based on patient population, presentation of the problem, acuity, and clinical context. Getting multiple assessments in various clinical contexts allows you to see patterns in behavior that will better reveal strengths and areas for improvement.

Rule 3: Consider the learner’s current abilities. Sequence learning tasks based on the learner’s level of ability and assess accordingly in order to maximize learning. Aligning assessment difficulty with the knowledge and skills that the learner is most prepared to learn next, building upon what is already known, will help ensure that assessment optimizes learning.

Rule 4: Learner participation. Learners should actively participate in their assessments. Ask learners to self-assess their skills, knowledge, and attitudes. Ask them to identify learning goals for themselves and ensure your assessments encompass these goals.

Rule 5: Feedback. Share results of the assessment with the learner in a timely manner. This is especially important for formative assessment as it should be used to guide learning and work on acquisition of competencies within the current clinical setting.

Rule 6: Behavior-based recommendations. Identify specific strengths and areas for improvement, providing the learner with examples of where these behaviors were observed. Identify areas where learners can improve, focusing on specific, behavior-based recommendations that are attainable. Think to yourself “What does this learner need to do to get to the next level of competence or the next stage of supervision?”

Table 1: Characteristics of high-quality assessments

HIGH-QUALITY ASSESSMENTS
Utilize direct observation
Vary observations to include different skills, settings, complaints, complexity, and acuity
Match the goals of the learning experience
Sequence the level of difficulty of the clinical tasks that are being assessed
Include learners in their set up and implementation
Consider and encompass the learner’s goals
Provide concrete information on how to progress to the “next level”
Provide timely feedback to the learner
Can be strengthened by using a formal assessment framework (e.g., EPAs)

End of module questions

Keith is a nursing student who is learning to give immunizations. After obtaining consent, he and his preceptor, Leticia, enter the room where he administers three intramuscular vaccinations to a 4-year-old child. After observing the encounter, Leticia uses the EPA framework to determine that Keith still needs direct supervision when performing vaccine administration. What is this an example of?

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Sarah is an occupational therapy student who is learning to do a swallow evaluation on an adult who recently suffered a stroke. She performs the examination while her preceptor Phyllis observes. After the encounter, Phyllis pulls Sarah into a private area and asks her to reflect on the experience, identifying areas she did well on and things she can improve on. Phyllis then describes what she observed and gives Sarah clear and concrete recommendations for improving her performance. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Anthony is a dental student who has just completed a rotation in geriatric dentistry. Upon completion of the course, leadership compiled his preceptor evaluations, observed structured clinical encounter assessment form, patient logs, exam score, and patient feedback. They used all the information to provide Anthony with a narrative summary of his strengths and areas for improvement. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Anthony’s performance was compared to a list of set objectives and expectations for the course. Based on his performance, he was provided with a grade of “Honors” in the course. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

What is a key component of feasible, fair, and valid assessment?

  1. Use direct observation
  2. Use multiple encounters to provide formative assessment
  3. Highlight all the learner’s weaknesses
  4. Use single encounters to provide summative assessment.

Bibliography

1.         Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33(3):206-214.

2.         Swan Sein A, Rashid H, Meka J, Amiel J, Pluta W. Twelve tips for embedding assessment. Med Teach. 2020:1-7.

3.         Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387-396.

4.         Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32(8):676-682.

5.         Bennett RE. Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice. 2011;18(1):5-25.

6.         van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205-214.

7.         van Der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve Tips for programmatic assessment. Med Teach. 2015;37(7):641-646.

8.         Dweck C. What Having a “Growth Mindset” Actually Means. Harvard Business Review. 2016.

9.         AAMC. Physician Competency Reference Set. https://www.aamc.org/what-we-do/mission-areas/medical-education/curriculum-inventory/establish-your-ci/physician-competency-reference-set. Accessed May 31, 2021.

10.       Fromme HB, Karani R, Downing SM. Direct observation in medical education: a review of the literature and evidence for validity. Mt Sinai J Med. 2009;76(4):365-371.

11.       Al-Moteri M. Entrustable professional activities in nursing: A concept analysis. Int J Nurs Sci. 2020;7(3):277-284.

12.       Carney PA. A New Era of Assessment of Entrustable Professional Activities Applied to General Pediatrics. JAMA Netw Open. 2020;3(1):e1919583.

13.       Pinilla S, Kyrou A, Maissen N, et al. Entrustment decisions and the clinical team: A case study of early clinical students. Med Educ. 2020.

14.       Tekian A, Ten Cate O, Holmboe E, Roberts T, Norcini J. Entrustment decisions: Implications for curriculum development and assessment. Med Teach. 2020;42(6):698-704.

15.       Wolcott MD, Quinonez RB, Ramaswamy V, Murdoch-Kinch CA. Can we talk about trust? Exploring the relevance of “Entrustable Professional Activities” in dental education. J Dent Educ. 2020;84(9):945-948.

16.       AAMC. Core Entrustable Professional Activities for Entering Residency: Curriculum Developer’s Guide. https://www.aamc.org/media/20211/download. Published 2017. Accessed May 31, 2021, 2021.

17.       Boyd P, Bloxham S. Developing Effective Assessment in Higher Education: a practical guide. 2007.

Feasibility and Benefit of Using a Community-Sponsored, Team-Based Management Project in a Pharmacy Leadership Course

Abstract

Objective. Assess the impact of community-sponsored, team-based management projects in a leadership and management course on PharmD students’ teamwork skills and project sponsor satisfaction. 

Design. Third-year pharmacy students were divided into eight to ten groups, each completing a project proposed by local pharmacists as a “lab” for practicing teamwork skills. Projects were intended to meet a real need of the submitting organization or the pharmacy profession.

Methods. A validated Team Performance Survey (TPS) assessed teamwork effectiveness. Project sponsors completed surveys to evaluate the quality of the students’ work, the likelihood of project implementation, the benefit of participation, and willingness to sponsor future projects.

Findings. One hundred percent of students and sponsors completed the assessments. Average TPS scores across 2017, 2018, and 2019 show that 17 of 18 activities were rated as occurring “every time” or “almost every time” by more than 90% of students, indicating that students performed well in this team setting. Free-text responses indicated that students found value in participating in management projects. Common themes among project advantages included networking with sponsors, teamwork, building community in the classroom, the autonomy of creating deliverables, and applicable and impactful projects. All sponsors were willing to participate again, and the majority listed interacting with students and increasing their connection to the College as benefits. Ninety-five percent of sponsors said they were “extremely” or “somewhat likely” to implement the student project.

Summary. Community-sponsored, team-based management projects in a leadership and management course serve as a model for developing students’ teamwork skills within pharmacy curricula.

Keywords: leadership, teamwork, curriculum, management, project

Introduction and Leadership Framework

The American Society of Health-System Pharmacists (ASHP) 2005 landmark publication highlighted the need for more intentional leadership development in Doctor of Pharmacy (PharmD) programs. The publication stressed that many key pharmacy leadership positions could go unfilled, including a need for over 4,000 new directors in the following decade, unless the pharmacy profession addressed the lack of leadership training.1 White et al. identified that 75% of pharmacy directors do not anticipate remaining in their current positions and that only 17% of employers are able to fill vacant leadership positions within two months. Employers face difficulty hiring for leadership positions due to 1) a lack of practitioners with leadership experience, 2) a lack of interest among current practitioners, and 3) the belief that leadership positions are demanding and stressful. ASHP’s 2012 leadership assessment identified that “a higher percentage of employers (from 3% in 2004 to 17% in 2011) could fill vacant leadership positions within two months, and 37% of employers reported that filling a leadership position was more difficult than it was three years ago (from 57% in 2004).”2 While the need for pharmacist leadership training has been recognized and addressed over the past decade, preparing individuals to tackle complex leadership issues remains a challenge.

Student pharmacists may enter the workforce with insufficient leadership skills to serve effectively in a formal leadership position, to function effectively within clinical teams, and to advance the pharmacy profession.3 Elective courses and optional extra-curricular activities increase students’ exposure to leadership, but these limited opportunities may lead to an overall lack of leadership training within the profession.3 During a complete revision of the PharmD curriculum, the University of Utah College of Pharmacy identified a need for intentional leadership development through curricular mapping. To address this need, the College developed a longitudinal leadership curriculum based on the framework of Relational Leadership (RL) created by Primary Care Progress (PCP), a non-profit, grassroots leadership development organization dedicated to advocating for improved health care. RL, currently utilized in a multi-site interprofessional, cross-generational leadership program called the Relational Leadership Institute and with interprofessional PCP student teams across the country, consists of four domains: manage self, foster teamwork, coach and develop, and accelerate change.4 This new leadership curriculum sought to identify innovative and authentic ways to give students experience related to these four domains of leadership.

Several institutions have successfully developed educational projects that focus on leading change,5 medication reviews to prevent adverse events such as falls,6 or disease state education.7 However, these projects tend to be facilitated by faculty or students rather than current practitioners. The Leadership and Management course at the University of Utah College of Pharmacy (UUCOP) piloted an experiential component of community practitioner-sponsored, team-based management projects to provide context and a “learning lab” for enhanced self-awareness and effective teams. Team-based management projects move away from simulation, business planning, or mock exercises that could have minimal applicability to students.8-10 Given that engaging with pharmacists allows students to see how leadership can shape real-world practice and has been shown to be effective,11 the team-based management projects connected students with current practitioners to collaborate on management or practice-based projects.

Based on a review of the literature, the community-sponsored, team-based management projects at the University of Utah College of Pharmacy present a novel, “win-win” educational experience in leadership development for both pharmacy students and pharmacist-sponsors in a leadership and management course. This paper describes a professional curriculum course focused on developing teamwork skills through tackling real-life management problems, with the goal of equipping students with the necessary skills to become successful team members. It is the first paper of its kind to assess the feasibility and benefit of community-sponsored, team-based management projects in a leadership and management course, drawing on the PharmD student and project sponsor experience over three years.

Educational Context and Methods

The UUCOP is a public college of pharmacy within a large academic medical center with an enrollment of approximately 60 students per class. PHARM 7340 Leadership and Management for Pharmacists is a required, two-credit didactic-experiential course taught in the fall semester of the third professional year within the four-year didactic curriculum. A significant portion of the UUCOP longitudinal leadership curriculum resides in the Leadership and Management for Pharmacists course. Students who enter the course have a basic understanding of RL from lectures and activities in other courses with introductory learning specifically related to self-awareness, but limited learning related to effective teams.  The course meets once weekly for two hours over 14 weeks and includes didactic lectures with active learning strategies, reflection, breakout sessions, application exercises, activities in a separate recitation course, and a small-group project. Students complete two didactic modules to build a basic understanding of key leadership concepts, then participate in community practitioner-sponsored, team-based management projects.

Experiential: Practitioner-Sponsored Projects

Involving community sponsors from various settings exposes students to a variety of leadership styles and practices.12 Before each semester began, local pharmacists serving in leadership or clinical roles in retail, ambulatory care, hospital, and managed care settings were contacted and invited to submit management or practice-related projects that could be completed in approximately nine weeks. Sponsors submitted a project form (Appendix 1) to guide the creation of the project, ensure the project would meet course objectives, and provide deliverables for the students to complete by the end of the semester. The project form was reviewed and approved by the faculty course director, with feedback given and adjustments made as needed to meet project and course objectives. This process also allowed the course director to vet potential sponsors. Projects were designed with the intent that the sponsor could easily implement the project following course conclusion. Sponsors were given extensive latitude to submit projects across a wide array of topic areas. Elements of the projects included, but were not limited to, practice management, practice/service development, patient safety, continuous quality improvement, operations, literature evaluation, and production of publishable work. Four to six students were assigned to each project based on their project preference through a ranking survey. Project sponsors and students participated in a vision session during class to establish a relationship as a team, discuss the goals of the project, and create a plan for producing the desired results. Students completed their projects longitudinally over the course of the semester. Through this activity, students had the opportunity to learn teamwork by engaging in brainstorming sessions, collecting and analyzing data, and preparing deliverables and presentations. Students were encouraged to use the RL concepts of manage self, foster teamwork, coach and develop, and accelerate change throughout the duration of the project. At the conclusion of the semester, teams presented their project and deliverables via a formal presentation to project sponsors and the class. Presentations were assessed by the course director, project sponsors, and teaching assistant(s) via rubrics, grading on organization, content, visuals, speaking and presentation skills, conclusion, participation, and responses to live questions.

Two key areas were identified for the assessment of the team-based management projects: team effectiveness in completing the project and sponsor experience. Team effectiveness assesses how students function in teams as a result of didactic instruction and how well they utilize effective leadership and teaming strategies to accomplish a specific outcome. In other words, this assessment documented how well students were actually able to accomplish work in a team, which we believe reflects students’ understanding and application of the desired leadership and teaming skills.

Assessing for sponsor experience ensured the team produced a quality product that met the expectations of the sponsor. Additionally, areas for improvement and future opportunities for collaboration were identified. It was also important to assess the likelihood of projects being implemented to ensure the real-world applicability of the project and the potential for students to continue their efforts with sponsors after course completion.

To assess these key areas, students completed an anonymous, validated Team Performance Survey (TPS) and sponsors completed a project sponsor experience survey via Qualtrics12 upon completion of the semester. Student and sponsor surveys were collected for the Fall 2017, 2018, and 2019 semesters.

Team Effectiveness

The student survey included a previously validated instrument, the Team Performance Survey (TPS).13 The TPS aims to assess how effectively students work together in a team and, potentially, if they were able to implement course concepts that led to improved behavior throughout the duration of the project. In the TPS, students indicated how often on a five-point scale their team members engaged in each of eighteen activities that characterize an effective team (as shaped by effective leadership). Table 1 lists all TPS elements. At the end of the TPS survey, students provided feedback on project experiences to improve student experience in subsequent years.

Sponsor Experience

The project sponsor survey was developed by the course director and collected the sponsors’ assessment of the quality of their students’ work, the likelihood that they would implement the results of the projects, likelihood of sponsoring a future project, and sponsorship benefit on a five-point scale. Free text responses allowed the sponsors to describe features necessary for the creation of successful projects and make suggestions for the future. Feedback was requested to improve sponsor experience in subsequent years in hopes of ensuring a sufficient number of sponsors willing to participate.

For both the team effectiveness and sponsor experience surveys collected in 2017, 2018, and 2019, the scaled data were summarized by the percentage of responses for each choice, and the text responses were examined and inductively coded to identify common themes.
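
To make the summary of the scaled data concrete, the following is a minimal sketch in Python (illustrative only; the file name, column labels, and response wording are hypothetical placeholders, not the course's actual export) of tabulating the percentage of responses for each choice on a five-point scale.

    import pandas as pd

    # Hypothetical response labels for a five-point-scale survey item.
    LIKERT_LEVELS = ["Never", "Rarely", "Sometimes", "Almost every time", "Every time"]

    def summarize_scaled_items(df: pd.DataFrame) -> pd.DataFrame:
        """Return the percentage of responses for each choice, per survey item."""
        summary = {}
        for item in df.columns:
            counts = df[item].value_counts()
            pct = counts.reindex(LIKERT_LEVELS, fill_value=0) / counts.sum() * 100
            summary[item] = pct.round(1)
        # Rows = survey items, columns = response choices.
        return pd.DataFrame(summary).T

    if __name__ == "__main__":
        # Hypothetical export: one row per respondent, one column per TPS item.
        responses = pd.read_csv("tps_responses_2017_2019.csv")
        print(summarize_scaled_items(responses))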

Findings and Discussion

Team Effectiveness

Since implementation in Pharmacy Leadership and Management in 2017, 153 students have participated in team-based management projects. One hundred percent of students completed the TPS. Results indicate that student teams were able to work together to create a final project and develop mutual respect with one another (Table 1). Average TPS scores (2017, 2018, and 2019) show that all indicators of team effectiveness had a high percentage of responses in the “almost every time” or “every time” categories. The activity with the lowest average rating, “often members helped a fellow team member to be understood by paraphrasing what he or she was saying,” was still rated above 85%. Overall, the TPS responses indicate that students within the Leadership and Management course can form and use relevant leadership and teaming skills to operate within highly effective teams. Free-text responses regarding the best parts of the community-sponsored, team-based projects identified that students found value in networking with sponsors and other pharmacists through their projects; working in and providing informal leadership on teams; learning about a new area of pharmacy and/or management skill; building community in the classroom; the autonomy of creating deliverables; interesting and relevant project topics; and working on practical, applicable, and impactful projects (Table 2). Points for improvement included starting projects earlier in the semester, having more time in class to work on projects as a team, having more explicit guidance on project deliverables, and encouraging more frequent contact with project sponsors during the semester (Table 3). From 2017 to 2019, the areas for improvement stayed relatively consistent, but the time allotted to work on projects in class and the timing of introducing projects in the semester improved.

We interpret the results of the TPS to conclude that students were able to use leadership and teaming skills introduced in the course to work together to accomplish the final project and function as an efficient team, with all categories of this survey being rated over 85%.

Sponsor experience

Across 2017, 2018, and 2019, there were a total of 28 projects with 16 pharmacists serving as sponsors. Four pharmacists sponsored two projects in the same year, five pharmacists served as project sponsors in two of the three years, and two pharmacists served as project sponsors in each of the three years. All 28 project sponsors completed the sponsor survey. One hundred percent of project sponsors reported that students were able to create deliverables that met expectations with no or minor revisions needed. Sixty-four percent of the project sponsors reported that they would be extremely likely to implement the student projects into their practice (30% of project sponsors reported “likely” and 6% reported “neutral” or “somewhat unlikely”). Benefits of participation to the sponsors included the opportunity to interact with students, engage with the College of Pharmacy, and network with future colleagues while accomplishing something meaningful that would benefit the project sponsor’s organization (Figure 1). All project sponsors stated they would be willing to sponsor future projects. Free-text responses identified that providing students with feedback, setting clear expectations, and communication were essential elements for creating a successful project (Table 4). Communication about expectations and deadlines between students and mentors was a challenge that many project sponsors faced (Table 5). Points for improvement included setting expectations early with students, creating deadlines, and frequent check-ins.

One key factor considered in the design of the course and the management project was the project sponsor’s experience and whether the projects would actually be implemented. Although one project sponsor indicated that “it would be unlikely for them to implement the project into their practice,” the majority indicated that project implementation was likely. Additionally, all of the sponsors answered that they would be willing to help with a future project, signaling an overall high level of satisfaction with the experience. Five pharmacists sponsored projects in two consecutive years and two pharmacists sponsored projects in three consecutive years, indicating that community partners built a strong relationship with UUCOP and found value in working with the student teams. The relationships built with UUCOP and students may motivate project sponsors to continue to develop high-quality, innovative, and authentic projects.

Implications

The team-based management project continues to be a core element of the Leadership and Management for Pharmacists course. Since their inception, the projects have evolved to explore advanced pharmaceutical practices and produce more innovative and impactful deliverables. Diversifying projects to meet the interests of students has been a strong focus of improvement. Given the growing demands on student time, more in-class time has been provided for project work, allowing greater use of leadership and teaming skills. Frequent contact between sponsors and students has been emphasized.

The assessment of team-based leadership projects in the Leadership and Management course identified several “wins” that did not occur in the UUCOP curriculum previously and have not been identified by previous literature. The course creates new and authentic connections between education and practice by engaging students on relevant projects that benefit all involved. Through the projects, students are able to connect with mentors and potential employers while gaining experience using their leadership and teaming skills and becoming more comfortable in their understanding of leadership roles needed to move pharmacy forward. Project sponsors gained closer connections with students who may become their employees and the ability to implement projects that benefit their organizations. By utilizing their community connections and focusing on addressing sponsor needs, other institutions could adopt this model of using team-based projects to provide real-world opportunities for students to learn firsthand how important leadership and teaming skills can be.

A potential barrier to adopting this model of community-sponsored, team-based projects is the inability to find sponsors to offer and facilitate projects that can be completed within a semester, are relevant to advancing current pharmacy practice, have a real-life application, and are intended to be implemented at the respective practice sites. Beyond implementation in a leadership or management course, a similar model could be applied to interprofessional education or therapeutics courses where the projects are clinical in nature. This process could also be utilized for medication safety, quality improvement, pharmacy and therapeutics committees, or other administrative functions occurring in health systems. In all cases, institutions would be free to adapt the parameters of these value-added learning experiences to local conditions, resources, and interests.

The community-sponsored, team-based management projects provided students the opportunity to develop their individual and team leadership skills while creating a beneficial project for the community sponsors and participating organizations. Evaluations from both students and sponsors suggest that community-sponsored, team-based management projects will serve as an effective tool in preparing students to lead change upon entry into the profession and positively impact pharmacy organizations.

Disclosures

Conflicts of Interest: The authors have no pertinent or competing conflicts of interest with respect to the research, authorship, and/or publication of this article.

Financial Disclosure: There are no financial conflicts of interest to disclose.

References

  1. White SJ. Will there be a pharmacy leadership crisis? An ASHP Foundation Scholar-in-Residence report. Am J Health Syst Pharm. 2005;62(8):845-855. doi:10.1093/ajhp/62.8.845
  2. White SJ, Enright SM. Is there still a pharmacy leadership crisis? A seven-year follow-up assessment. Am J Health Syst Pharm. 2013;70(5):443-447. doi:10.2146/ajhp120258
  3. Feller TT, Doucette WR, Witry MJ. Assessing Opportunities for Student Pharmacist Leadership Development at Schools of Pharmacy in the United States. Am J Pharm Educ. 2016;80(5):79. doi:10.5688/ajpe80579
  4. Cooper J. The Relational Leadership Model – Primary Care Progress. https://www.primarycareprogress.org/relational-leadership/. Accessed July 10, 2020.
  5. Sorensen TD, Traynor AP, Janke KK. A pharmacy course on leadership and leading change. Am J Pharm Educ. 2009;73(2):23. doi:10.5688/aj730223
  6. Withey MB, Breault A. A Home Healthcare and School of Pharmacy Partnership to Reduce Falls. Home Healthc Nurse. 2013;31(6):295-302. doi:10.1097/NHH.0b013e318294787c
  7. Shiyanbola OO, Lammers C, Randall B, Richards A. Evaluation of a student-led interprofessional innovative health promotion model for an underserved population with diabetes: A pilot project. J Interprof Care. 2012;26(5):376-382. doi:10.3109/13561820.2012.685117
  8. Cavanaugh TM, Buring S, Cluxton R. A Pharmacoeconomics and Formulary Management Collaborative Project to Teach Decision Analysis Principles. Am J Pharm Educ. 2012;76(6):115. doi:10.5688/ajpe766115
  9. Shahiwala A. Entrepreneurship skills development through project-based activity in Bachelor of Pharmacy program. Curr Pharm Teach Learn. 2017;9(4):698-706. doi:10.1016/j.cptl.2017.03.017
  10. Rollins BL, Gunturi R, Sullivan D. A Pharmacy Business Management Simulation Exercise as a Practical Application of Business Management Material and Principles. Am J Pharm Educ. 2014;78(3):62. doi:10.5688/ajpe78362
  11. Ibrahim MIM, Wertheimer AI, Myers MJ, McGhan WF, Knowlton CH. Leadership Styles and Effectiveness: Pharmacists in Associations vs. Pharmacists in Community Settings. J Pharm Mark Manage. 1997;12(1):23-32. doi:10.3109/J058v12n01_02
  12. Qualtrics XM [computer software]. Provo, UT: Qualtrics; 2005.
  13. Thompson BM, Levine RE, Kennedy F, et al. Evaluating the Quality of Learning-Team Processes in Medical Education: Development and Validation of a New Measure. Acad Med. 2009;84(Suppl):S124-S127. doi:10.1097/ACM.0b013e3181b38b7a
  14. Reed BN, Klutts AM, Mattingly TJ 2nd. A Systematic Review of Leadership Definitions, Competencies, and Assessment Methods in Pharmacy Education. Am J Pharm Educ. 2019;83(9):7520. doi:10.5688/ajpe7520
  15. Sullivan GM. A primer on the validity of assessment instruments [published correction appears in J Grad Med Educ. 2011;3(3):446]. J Grad Med Educ. 2011;3(2):119-120. doi:10.4300/JGME-D-11-00075.1

Appendix 1: Sponsor Project Forms

American Association of Neurological Surgeons Joint Sponsored Activities: A longitudinal comparison of learning objectives and intent-to-change statements by meeting participants

Abstract

Background: Continuing medical education (CME) activities are required for physician board certification, licensure, and hospital privileges. CME activities are designed to specifically address professional knowledge or practice gaps. We examined participants’ “intent-to-change” statements to determine whether the content of each CME activity achieved its stated learning objectives.

Methods: We performed a retrospective mixed-method thematic content analysis of written and electronic records from American Association of Neurological Surgeons (AANS)-sponsored CME activities. Data were analyzed using a quantitative, deductive content analysis approach. Meeting objectives were examined to determine whether they resulted in specific intent-to-change statements in learners’ evaluations of the CME activity, both directly for a single year and longitudinally over 6 consecutive years. Intent-to-change data that did not align with meeting objectives were further analyzed inductively using a qualitative content analysis approach to explore potential unintended learning themes.

Results: We examined a total of 85 CME activities, ranging from 12 to 16 meetings per year over 6 years. This yielded a total of 424 meeting objectives, with 58–83 meeting objectives each year. The objectives were compared with a total of 1950 intent-to-change statements (146–588 intent-to-change statements in a given year). Thematic patterns of recurrent intent-to-change statements that matched meeting objectives included resident education, complication avoidance, and clinical best practices and evidence. New innovations and novel surgical techniques were also common themes of both objectives and intent-to-change statements.

Intent-to-change statements were not related to any meeting objective an average of 37.3% of the time. Approximately a quarter of these unmatched statements led to new learning objectives in subsequent CME activities. However, the majority of unmatched intent-to-change statements were repeated over a number of years without an obvious change in subsequent meeting learning objectives. An examination of CME learning objectives found that 15% of objectives had no intent-to-change statements associated with them.

Conclusion: Examining CME learning objectives alongside participant intent-to-change statements provides information on both meeting planner and learner attitudes that can inform future CME activity planning.

INTRODUCTION

The American Association of Neurological Surgeons (AANS), through its Joint Providership Council, provides continuing medical education (CME) accreditation to approximately 20 CME activities each year. This is accomplished under the guidance of the Accreditation Council for Continuing Medical Education (ACCME). The ACCME stipulates that education activities should be designed to specifically address professional knowledge or practice gaps identified before the CME activity by organizers of that activity [1]. The prevailing concern is to focus CME activities on improving practice rather than just disseminating information [2]. Accomplishing this requires a shift in how CME activities are evaluated, including going beyond measuring learner satisfaction and change in medical knowledge to the level of physician performance and patient outcomes [3].

The planning of CME activities to meet the needs of learners participating in those activities can be a difficult task [1, 4]. The AANS staff, through the Joint Providership Application, review the CME activity for practice knowledge gaps, data sources, and the needs of meeting attendees. Practice gap data sources used on this application include previous evaluation results, program committee consensus, expert opinion, surveys of the target audience, journal articles and medical literature review, and outcomes data. This is ultimately distilled into CME learning objectives. Intent-to-change statements are described in the literature as statements of motivation to change [5, 6], commitment to change [7-14], and readiness to change [15]. These terms appear to be used interchangeably; for this study we use the term intent-to-change, defined as the clinical practice changes that learners engaged in an educational activity are asked to list based on what they feel they gained from the activity [3, 16]. The AANS Joint Sponsorship Chair and Committee use the intent-to-change data from evaluations of each CME activity as a measure of the educational value of that activity. Furthermore, these intent-to-change data are given to meeting organizers to use as a framework for future planning and as feedback on previous meeting outcomes. This study is an attempt to understand the relationship between intent-to-change data from given CME activities and the learning objectives set for those meetings. We used mixed-method content analysis of meeting objective data to explore how intent-to-change data are used by CME organizers to plan educational activities and to discover how meeting objectives are formulated and how they evolve over time.

METHODS

Data Source and Setting

Each year, the AANS Joint Sponsorship Council sponsors 15–20 CME activities. These CME activities were multiple-day regional and subspecialty organizational meetings. Specifics about meeting organization were available only in general terms. CME activity evaluation data for each of these meetings were examined for the years 2011–2016 (Supplemental Table 1). The majority of these CME activities were sequential, occurring on a yearly basis, so trends in change could be evaluated over time. The available data were in a format that could be examined without significant editing, allowing for the most robust examination of the research questions. This study (Protocol #2019-1152) was determined to be exempt from IRB review because no human subject data were utilized.

The data for this study come from the AANS Joint Sponsorship collection kept by the AANS and available to interested meeting planners who are members of the AANS. These data are in the form of free text: a list of intent-to-change statements gathered from each meeting and organized into Excel spreadsheets for a given year. The meeting objective data are taken directly from each meeting application and/or promotional material for that meeting. CME objectives were printed in the preconference brochures and syllabus. Both of these sources of data were taken from all of the meetings sponsored in a given year and were examined using content analysis, in line with the conceptual framework of meeting objective themes that could be mapped to intent-to-change data.

Research Design

Our study design is a retrospective mixed-method content analysis of written and electronic records from specific CME activity application records and corresponding CME activity evaluations. The data were anonymous, and the number of meeting participants and the number of participants who submitted intent-to-change statements were unknown. It is possible that some participants submitted multiple intent-to-change statements while others did not submit any statements. Meeting learning objectives were those formulated for the entire CME activity and not for individual sessions of the meeting. The data were first examined by quantitative content analysis to determine whether meeting objectives resulted in specific intent-to-change statements in learners’ evaluations of the CME activity. These data were examined both directly for a single year and longitudinally over many years for the same CME activity. Yearly objective data were compared to include the effect of intent-to-change data on future meeting planning. Intent-to-change data that failed to align with meeting objectives were noted and examined further using qualitative content analysis to identify patterns in the intent-to-change data that did not align with meeting objectives. The overall scheme of the research design and plan is found in Figure 1.

Data Analysis and Statistics

Data were examined by two independent coders (BD and RLJ) using quantitative content analysis [17] to determine whether meeting objectives resulted in specific intent-to-change statements in learners’ evaluations of the CME activity. First, all meeting objectives were examined independently by each reviewer, and a set of themes based on each meeting objective was formulated. Next, the coders met and, through an iterative process, decided on a common set of meeting objective themes for each year of the study. Meeting intent-to-change data were examined and coded by each observer for words or phrases that related to the derived themes, and the data were recorded using the computer program Dedoose (www.dedoose.com, Los Angeles, CA) or by manual grouping in Excel spreadsheets.

The frequency with which each rater matched a given learning objective theme to a given intent-to-change statement was calculated. Inter-rater reliability was measured by weighted Cohen’s kappa. Kappa values were interpreted as follows: ≤ 0, no agreement; 0.01–0.20, none to slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; and 0.81–1.00, almost perfect agreement [18]. The two raters’ frequency data were then averaged in the majority of cases. In the rare case that a large discrepancy was found for a given intent-to-change statement, the raters together examined the actual coding made by each observer and reached a consensus. The data were then analyzed for the number and percentage of intent-to-change statements that mapped to a specific objective theme for each year. Descriptive statistics were used to characterize frequency counts. Furthermore, the frequency with which a given objective theme was mentioned in intent-to-change statements was recorded. Comparisons between groups of aggregated data (usually comparing year-to-year data trends or matched and unmatched data) were made by the chi-squared test and Fisher’s exact test. Continuous variables were compared using the unpaired Student t-test with a two-tailed p value. Continuous variables are reported as the mean ± standard deviation unless otherwise specified. The alpha for significance was set to 0.05. All statistical analyses were performed using IBM SPSS software version 26 (IBM Corp., Armonk, NY).
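
For readers who wish to reproduce these statistics outside of SPSS, the following is a minimal sketch in Python (illustrative only; the ratings, contingency table, and group values are made-up placeholders, not study data) of a weighted Cohen’s kappa, the chi-squared and Fisher’s exact tests on aggregated counts, and an unpaired two-tailed t-test.

    from scipy import stats
    from sklearn.metrics import cohen_kappa_score

    # Weighted Cohen's kappa: each rater assigns every intent-to-change
    # statement to one of the numbered objective themes (placeholder codes).
    rater_a = [0, 1, 2, 2, 3, 0, 1, 4, 4, 2]
    rater_b = [0, 1, 2, 3, 3, 0, 1, 4, 3, 2]
    kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

    # Chi-squared and Fisher's exact tests on a 2x2 table of aggregated counts,
    # e.g., matched vs. unmatched statements in two different years.
    table = [[240, 110], [300, 180]]
    chi2, chi2_p, dof, expected = stats.chi2_contingency(table)
    odds_ratio, fisher_p = stats.fisher_exact(table)

    # Unpaired two-tailed Student t-test on a continuous variable,
    # e.g., objectives per meeting in two groups of meetings.
    group_1 = [4, 5, 6, 3, 7, 5]
    group_2 = [6, 8, 5, 7, 6, 8]
    t_stat, t_p = stats.ttest_ind(group_1, group_2)

    print(f"weighted kappa = {kappa:.3f}")
    print(f"chi-squared p = {chi2_p:.3f}, Fisher exact p = {fisher_p:.3f}")
    print(f"t = {t_stat:.2f}, p = {t_p:.3f} (alpha = 0.05)")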

RESULTS

Overall Study Descriptive Outcomes

Overall CME activity descriptive data are found in Table 1. During the years 2011–2016, a total of 85 CME activities (yearly range = 12–16, yearly mean = 14, SD 1.3) were sponsored by the AANS Joint Sponsorship Committee (Supplemental Table 1). Meeting objective data were taken from CME activity application forms. The total number of meeting objectives per year ranged from 58 to 83, with a mean of 71 (SD 8.6) objectives per year. Learning objectives for a given CME activity ranged from 3 to 8 per meeting. Overlapping meeting objectives from separate CME activities were consolidated into overall objective themes for a given year.

The intent-to-change statements were taken directly from the meeting evaluation data. The total number of intent-to-change statements submitted by participants ranged from 146 to 588 per year during this time period, with a mean of 325 (SD 146) in any given year. Intent-to-change statements outnumbered meeting objectives by a mean ratio of 4.4 (range 2.5 to 7.1, SD 2.5) in any given year.

A measure of inter-rater reliability of the frequency of intent-to-change statements matched to a given learning objective theme between the two observers is presented in Supplemental Table 2. Weighted Cohen’s kappa ranged from 0.9160 to 0.9735 when measured for a specific year and was 0.9777 (standard error 0.0056, 95% CI 0.9665–0.9889) when measured overall.

Quantitative Content Analysis of Intent-to-change Statements

CME activity objectives were coded as themes and mapped to participants’ intent-to-change statements from the years 2011–2016. For any given year, there were a number of overlapping objective themes among the different AANS-sponsored meetings, making examination of aggregate data the most effective way to assess overall alignment of meeting planner objectives and participant statements of intention to change. For example, for the year 2016 there were 45 unique objective themes identified from 16 AANS-sponsored CME activities. This included 25 major themes with 20 associated subthemes. There were 588 intent-to-change statements from participants available for evaluation during this year. Of these, 175 (29.8%) participant statements did not correspond with any of the 45 themes and subthemes. These will be examined in more detail in a follow-up publication. Table 2 condenses the data by objective theme over time, including only objectives that matched intent-to-change themes for more than one year during the study. The most frequent category is “no matched objective,” meaning that a meeting attendee’s intent-to-change statement did not map to any of the stated meeting objectives. These statements are the subject of future work, and only a brief summary of that work is included here.

Table 3 summarizes the meeting objective themes with the five highest numbers of associated intent-to-change statements. The most commonly repeated themes (in bold) include resident education, best practices and clinical evidence, socioeconomics, innovation and emerging technology, and complication avoidance. Not surprisingly, a common thread through all meeting objective themes was the dissemination of new therapeutic options, contained in themes such as recent innovations, novel surgical approaches and techniques, recent progress, and advancements. Another common thread included methods of determining whether current treatments are appropriate and adequate, expressed in themes such as guidelines and databases, practice change and controversy, surgical treatment and outcomes, and current treatment options.

Qualitative Content Analysis of Intent-to-change Statements Not Related to Meeting Objectives

Table 4 contains a summary of intent-to-change statements that did not map to any meeting objective for the years 2011–2016. In total, 728 of 1950 intent-to-change statements did not correspond with any CME meeting objective, a mean of 37.3% per year (range 29.8–48.6%, SD 6.8%). This represented the largest category of intent-to-change statements for all years studied. We next focused our examination on these unmatched intent-to-change statements to determine the nature of the data, to explore why they did not map to explicit objectives, and to consider whether implicit learning might account for these statements. Furthermore, we wanted to know whether these unmatched intent-to-change statements drove subsequent meeting planning in the form of new meeting objectives. To accomplish this, we examined all unmatched intent-to-change statements using qualitative content analysis to discover the themes that emerged from these statements on a yearly basis.

Table 5 summarizes these data over the course of the study period. One notable recurrent theme is that of referrals. This theme came up in every year that we studied, but no meeting objectives were ever created by meeting planners to address this perceived need. The other intent-to-change statements not shown in this table were not sustained over multiple years, suggesting some fulfillment at least on an intermittent basis. In fact, 35 of 45 of the intent-to-change statement themes associated with no stated meeting objective occurred in only a single year, suggesting resolution in subsequent years.

We next examined how these statements may have led to meeting objective changes by mapping intent-to-change statements from previous years to subsequent meeting objectives. Some of the unmatched intent-to-change statements appear to have led to a new meeting objective in the following year that had not been seen in the year of the original intent-to-change statement (Table 6). For instance, in 2011 the unmatched intent-to-change statements of comprehensive and multidisciplinary care and minimally invasive surgery were found as meeting objectives in 2012. The same is true for stem cell and cellular transplantation in 2012/2013. In 2013, unmatched themes of tumor tissue biomarkers, minimally invasive surgery, neurocritical care, and outcomes and guidelines are possibly related to the same meeting objectives found in 2014. In the years 2014/2015, the same pattern was found for concussion management, Chiari malformation management, and surgery for intraparenchymal hematoma. No such pattern was found in 2015/2016. Although these apparent relationships exist, Table 7 shows that this is not the most common outcome for unmatched intent-to-change statements, as on average only 22.6% (range 0–41.7%, SD 15.4%) of unmatched intent-to-change statements led to new meeting objectives.
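
As an illustration of this year-to-year mapping, the following is a minimal sketch in Python (the theme sets are simplified, hypothetical stand-ins loosely based on the examples above, not the full coded data) that checks which unmatched intent-to-change themes from one year reappear as meeting objective themes in the following year and what fraction of unmatched themes do so.

    # Hypothetical, simplified theme sets; the real analysis used the full coded data.
    unmatched_themes_by_year = {
        2011: {"comprehensive and multidisciplinary care", "minimally invasive surgery", "referrals"},
        2012: {"stem cell and cellular transplantation", "referrals"},
    }
    objective_themes_by_year = {
        2012: {"comprehensive and multidisciplinary care", "minimally invasive surgery", "resident education"},
        2013: {"stem cell and cellular transplantation", "complication avoidance"},
    }

    for year, unmatched in unmatched_themes_by_year.items():
        next_year_objectives = objective_themes_by_year.get(year + 1, set())
        became_objectives = unmatched & next_year_objectives
        pct = 100 * len(became_objectives) / len(unmatched) if unmatched else 0
        print(f"{year} -> {year + 1}: {sorted(became_objectives)} ({pct:.0f}% of unmatched themes)")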

DISCUSSION

Importance and Use of Data from This Study

The ACCME defines joint providership as “the provision of a CME activity by one accredited and one nonaccredited organization” (https://www.accme.org). In this study, the AANS is the accredited organization, and the multiple meetings we have examined were cosponsored by nonaccredited organizations. The accredited provider is responsible for the conduct of the nonaccredited organization’s CME activity. Thus, the AANS has a special interest in the quality of the CME activities sponsored under its cooperation. According to the AANS CME Mission Statement, the AANS “aims to achieve excellence in continuing medical education (CME) through educational activities built on evidence-based medicine and adult learning principles. The AANS CME program provides activities to meet the participants’ identified education needs and to support their life-long learning towards a goal of improving neurosurgeon’s competency skills with a measurable result” (https://www.aans.org/en/Education/CME-Accreditation).

The reaccreditation process required by the ACCME includes verification by the AANS that the CME activities it sponsors meet all ACCME requirements. The AANS staff and the AANS Joint Sponsorship Chair and Committee use the intent-to-change data from each CME activity as a measure of the educational value of that activity. Furthermore, these intent-to-change data are given to meeting organizers to use as a framework for future planning and as feedback on previous meeting outcomes. It is important to understand the relationship between intent-to-change data from given CME activities and the learning objectives set for those meetings. Furthermore, one can use these data to explore how intent-to-change data are used by CME organizers to plan educational activities and to discover how meeting objectives are formulated and how they evolve over time. In addition, by using more traditional inductive qualitative techniques, we were able to show that many intent-to-change statements made by CME attendees did not map to any predetermined meeting objectives. As discussed below, there are a number of ways to interpret these data, but at least one possibility is that they represent unseen, unplanned learning that takes place at various AANS-sponsored CME activities. A close examination of these data has the potential to reveal themes of “hidden curriculum” or unmet needs that might represent “practice gaps” that meeting planners did not know about or did not think would be of interest to meeting attendees. Examination of this type of data over multiple years, as well as examination of whether practice gaps change over time, can be used to improve CME activities to better meet the needs of the participants of these events.

CME for Improving Neurosurgical Practice

Continuing medical education (CME) is “defined as any activity that serves to maintain, develop, or increase the knowledge, skills, and professional performance and relationships that a physician uses to provide services for patients, the public, or the profession” [2]. CME appears to be effective in contributing to the “acquisition and retention of medical professionals’ knowledge, attitudes, skills, behaviors, and clinical outcomes” [19]. CME activities are required for physician board certification, state licensure, maintenance of certification, and hospital privileges. The most common form of CME activity in neurosurgery is the live meeting event. These types of CME opportunities are generally found to be effective in changing physician performance [4, 20].

It is important to understand the relationship between intent-to-change data from given CME activities and the learning objectives set for those meetings for several reasons. Learning objectives that align with participants’ intent for practice change should be included in future CME activities, while those that align with few or no intent-to-change statements might be discarded as something CME attendees are not interested in pursuing. Alternatively, intent-to-change statements that have no relationship to any of the meeting objectives are especially interesting and deserve some exploration. There are several explanations of why this might occur. It is possible that there is an actual discrepancy between the stated meeting objectives and the actual content covered or the learning experience evoked. It could be that, because only a limited number of objectives can be stated for a given CME activity, these are not comprehensive of what the meeting planners actually hope to teach during the activity. The level of specificity of the meeting objectives might also account for intent-to-change discrepancies that are broader in nature, or the opposite might occur, with broad objectives not seeming congruent with specific intent-to-change statements. Finally, and probably most interesting, is the case in which the learning is simply outside of the explicit (stated objectives) curriculum of the planned activity. It is well recognized that implicit learning often takes place, termed the “informal curriculum” or “hidden curriculum” [21]. Intent-to-change statements that do not seem to align with stated learning objectives might represent the unseen, unplanned learning that takes place at various AANS-sponsored CME activities. A closer examination of these data has the potential to reveal unmet needs for future meetings and could reveal “practice gaps” that meeting planners did not know about or did not think would be of interest to meeting attendees. Further, examination of this type of data over multiple years, as well as examination of whether practice gaps change over time, can be used to improve CME activities to better meet the needs of the participants of these events. Finding methods to drive CME activities to correspond with learner needs is an important unmet need for the Joint Providership Council of the AANS in particular and for CME meeting planners across all disciplines in general.

Intent-to-change Statements as a Measure of Learning

This study is based on the concept of mapping CME activity evaluation data in the form of intent-to-change statements directly to CME activity objectives [22, 23]. A previous survey of physicians in the United States demonstrated that they feel confident in identifying their own learning needs [24]. Intent or motivation to change has been thoroughly studied at the level of the individual learner [5, 6, 25]. There is strong evidence to suggest that an individual CME participant’s motivation to change leads to knowledge acquisition [5, 25], a process mediated by promoting self-efficacy, the belief that an individual has in his or her own capacity to achieve a given goal. Williams et al. [5, 6] based these observations on a social cognitive understanding of change behavior, where the CME activity leads to the motivation and confidence to put the new knowledge into the participant’s medical practice.

Intention-to-change data can be used to assess alignment of intended changes in physician behavior with program objectives, confirm and strengthen intended practice change, and explore unanticipated learning outcomes [7]. One confounding problem with this type of analysis is that it has been demonstrated that it is possible to have “no significant difference in intention between a health care professional who later reported a behavior change and those who reported no change” [26]. Others have demonstrated just the opposite: learners who indicated an intent to change immediately after a given lecture were more likely to actually use that information in a change of their practice [9], although this does not always take place with a single CME activity [27].

Overton et al. used qualitative methods to “find that there can be a range of meanings underlying intention-to-change statement” and, in fact, for some participants “commitment is too strong a word to describe their intention” [28]. Although many CME participants make changes to their practices, this study “highlights that merely asking learners to specify the changes that they intend to make does not necessarily imply that learners feel a sense of commitment towards the intended changes.” When there is a gap between knowledge acquisition and behavioral change, it can be attributed to a number of factors, but two factors are known to drive behavioral change: a sense of urgency and a level of certainty that the behavior change is important [29]. Others have found that physician behavior after CME activities can be expected to change if the practice alteration is congruent with the physician’s values and sense of what is important [8].

Intent-to-change statements are a means for promoting reflection on current practice and encouraging participants to identify and commit to specific planned practice changes [10, 25, 30, 31]. They can serve as a marker or proxy for actual practice change since physicians who make intent-to-change statements are more likely to follow through with making changes than those who do not  [7, 9, 11, 12, 25, 30, 32-36].

Quantitative Content Analysis of Intent-to-Change Statements

Our data over many years suggest that the majority of intent-to-change statements can be directly tied to stated meeting objectives. From a broad overview, we identified resident education, best treatment practices and treatment options, socioeconomic issues, databases and registries, complication avoidance, patient outcomes, innovation, and new surgical techniques and approaches as common threads from year to year. This is hardly surprising, given that the majority audience at these events is a relatively like-minded and focused practice group of neurosurgical learners. There are variations from year to year, but many of the differences are in terminology and not necessarily in the intent of the meeting planners or the actual meaning of the individual participants’ statements.

As described previously, intent-to-change statements are acknowledged as a valuable tool for educational program evaluation. Some have opined that these statements are based on Locke’s goal-setting theory, which holds that behavior is affected by individual motivation and draws on the principle that adults learn what is relevant to their needs [12, 37]. The majority of the intent-to-change statements examined in this study support this notion. These statements all paint a picture of learners committed to practice change and improvement. The heavy reliance of the field of neurosurgery on new technology is reflected in these statements and has been found in the work of others examining CME meeting outcomes from meetings involving rapidly evolving technologies [38]. Our study data are not sufficient to demonstrate actual practice change by participants of the CME activities we have examined here. Others have criticized work similar to ours as incomplete without verification of actual clinical practice change and impact on physician behavior. Our data, and most similar study data, involve information gathered from participants only at the immediate end of a CME activity with no follow-up at a later date [3, 39]. Some have questioned the validity of this approach without later follow-up to confirm that the commitment to change has been carried out [25, 40]. These authors have argued that the self-reported nature of the statements is the major limitation of this method [3]. Others have demonstrated that, in fact, self-reported intent-to-change statements can be a valid measure of changes in clinical practice behavior [32]. Spending time to complete a post-meeting questionnaire and write intent-to-change statements may, in and of itself, reflect a seriousness about the intent to change that may, in turn, predict action [8].

Intent-to-change Statements with No Correlated Meeting Objectives

One of the most interesting aspects of this study is the 30–48% of yearly intent-to-change statements that did not map to any meeting objectives. This was the largest single theme of intent-to-change statements for each of the years studied; however, admittedly it was simply a compilation of unmatched statements. These statements will be examined in detail in a subsequent study. There are a number of reasons why intent-to-change statements may not align with meeting objectives. These include a difference between meeting planners and participants in the perceived importance of a given subject. It is also possible that none of the speakers chose to present information on these objective topics, or that some of the intent-to-change statements reflected actual teaching that took place during the meeting but was not stated as a meeting objective. It is also possible that our interpretation of the meeting objectives and coding into themes did not reflect the implicit intent of the meeting planners, while the meeting participants’ intent-to-change statements were written with that intent in mind. That said, it does appear that unmatched intent-to-change statements may represent unintended consequences of the CME activity.

Limitations of This Study

There are certainly limitations in our study. One is that this work relied on two coders to perform the content analysis of meeting objective themes and the actual thematic coding of the intent-to-change statements. We attempted to eliminate unintentional bias and errors by having both observers independently start this process. The initial examination of content analysis data involved independent generation of the deductive codes. We later met to review and edit codes through a collaborative, iterative process until final objective themes/codes were generated. Because the second observer was an undergraduate research assistant, it is possible that the senior author may have introduced more unintended bias on themes as a result of greater familiarity with the process of AANS-associated CME and with neurosurgical topics. When examining intent-to-change data that did not map to specific meeting objectives, we took a more inductive coding approach, looking for unknown themes of learning that took place in the CME activities examined in this study. This approach is subject to a similar source of unintended bias. It is possible that even more unintentional bias could be eliminated with participation of additional independent coders.

Another obvious limitation of this study is the indirect nature of the data. The data were collected prospectively but are limited by the retrospective analysis. Furthermore, the data were collected for meeting evaluation and not necessarily for direct comparison to meeting objectives as we have done in this study. Since this was not the intended use of the intent-to-change data, there are limitations in their “fit” to the meeting learning objectives. The opposite is true as well: the meeting objectives were not necessarily designed by the meeting planners for later comparison to learner intent-to-change statements. For the sake of simplicity, the data were aggregated from multiple CME activities for analysis. Attempts at a more granular examination of the relationship between meeting objectives and learner perceptions of take-home messages from a given CME activity proved difficult because of the reduced number of both objectives and intent-to-change statements. Even more problematic were attempts at examining data from a particular meeting classified by neurosurgical subspecialty. The data were available at this level of detail from the Joint Sponsorship Council but did not prove adequate for meaningful examination.

Since meeting evaluations and specific intent-to-change data are anonymous, it is not possible to know the number of participants for any given CME activity examined in this study. Furthermore, it is possible that an individual participant submitted multiple intent-to-change statements while other learners did not participate in the evaluation process at all. A participant with a particular agenda or perception of the CME activity might skew the evaluation data in a certain direction. This can add bias to the interpretation of the intent-to-change statements and may not reflect the true overall outcome of a given CME activity.

While we felt that qualitative and quantitative content analysis methods were the best approach for these data, it is entirely possible that some intent-to-change statements or learning objectives were sufficiently vague that our methods were not sensitive enough to categorize the true meaning of the participant and subsequently failed to capture the relationship between a given objective and intent-to-change statement. It must be acknowledged that there are differences in implicit and explicit meaning for many learning objectives, and for intent-to-change statements as well. This can complicate the process of aligning the data in a study like the one presented here. In a similar manner, when a meeting objective was overly broad or narrowly specific in theme, we may not have properly associated a given intent-to-change statement with that objective even though the participant's intent was a fulfillment of that objective. This is particularly possible when intent-to-change statements or learning objectives are more oriented to declarative knowledge than to procedural knowledge. We recognize that it is not possible for meeting planners to state every desired learning goal in their stated meeting objectives. It is likely that some of these unwritten objectives might be found in the intent-to-change statements that we categorized as unmatched or unintended learning and, in fact, represent topics very much included in the meeting planners' hoped-for learning outcomes. Finally, this work is an indirect measure of outcomes of CME activities and does not measure whether the intent-to-change statements, either matched or unmatched to meeting learning objectives, indeed led to physician practice change.

CONCLUSIONS

It appears that intent-to-change data can be used to examine the relationship between a CME activity and whether it achieved a stated learning objective. The longitudinal examination of objectives and intent-to-change data over time is useful for understanding the efficacy of CME in closing identified knowledge gaps and for determining unmet needs for future CME planning. Intent-to-change statements could be mapped to meeting objectives in a majority of the CME activities studied. Recurring themes of intent-to-change statements that matched meeting objectives for neurosurgical CME activities focused on resident education, reduction of patient complications, evidence-based practice change, and innovation in surgical procedures and technical advances. A little over a third of intent-to-change statements were not related to any meeting objective. Approximately a quarter of these unmatched statements led to new learning objectives in subsequent CME activities. However, the majority of intent-to-change statements were repeated over a number of years without resolution. A small number of CME learning objectives had no associated intent-to-change statements. When these objectives went unmatched for multiple years, we found that their themes tended to be more general/declarative in topic, whereas objectives on specific/procedural topics were more likely to be unmatched for only a single year. Some CME learning objectives were repeated over subsequent years without change; this, however, was not found to correlate with unmatched status to intent-to-change statements. An examination of CME learning objectives and participant intent-to-change statements is a rich source of information for examining both meeting planner and learner attitudes and motivation for the acquisition of medical knowledge.

Acknowledgments

We would like to thank Kristin Kraus for her editorial assistance throughout the preparation and completion of this text. We would also like to thank Samantha Luebbering and Lorelei Garcia from the American Association of Neurological Surgeons for their help collecting and compiling these data.

Disclosures

Dr. Jensen served on the AANS Joint Sponsorship Committee as a member and/or chair during the years in which data were collected for this study.


On-site pediatric and neonatal point-of-care ultrasound (POCUS) course led by multi-disciplinary local experts may promote sustainable clinical POCUS integration.

Disclosure

BC was a consultant for GE and received a research grant from Chiesi USA. BC did not receive any financial support specifically for this project. The other authors disclosed no conflicts of interest. The research and REDCap database reported in this publication were supported (in part) by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR002538. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Abstract

Objective: To investigate the impact of an on-site pediatric and neonatal point-of-care ultrasound (POCUS) course on the long-term implementation of POCUS.

Methods: We hosted two pediatric and neonatal critical care POCUS courses in 2018 and 2019 using the Society of Critical Care Medicine curriculum (Critical Care Ultrasound: Pediatric and Neonatal), with local experts and infrastructure. We administered evaluation surveys based on a 5-point Likert scale before and after the course to assess the participants’ reactions, learning, and clinical behaviors. The final analysis incorporated Kirkpatrick’s evaluation model and descriptive statistics to compare confidence rankings and scanning behavior.

Results: A total of 32 on-site clinicians from neonatal and pediatric critical care units attended the courses, with a survey response rate > 72%. Respondents' median satisfaction score was 4.0 (IQR 4.0-5.0). The median confidence ranking in their POCUS skills increased from 1.0 (IQR 1.0-2.0) pre-course to 3.0 (IQR 2.8-4.0) at 12 months after the course (p<0.0001). The proportion of respondents who reported performing > 4 scans in the prior month trended upward (12.5% vs. 30.4%, p=0.17). We also found a decrease in perceived institutional barriers, especially concerns over interdisciplinary conflicts.

Conclusions: An on-site pediatric and neonatal POCUS course utilizing local infrastructure and a reputable POCUS course effectively promoted POCUS implementation and addressed institutional barriers. Instead of having learners seek off-site or online training, structuring an on-site course with multi-disciplinary local faculty may be a feasible approach for children's hospitals that lack a robust POCUS program.


Keywords

Point-of-care ultrasound, Kirkpatrick’s principle, adult learning, pediatric critical care, neonatal critical care

Introduction

Neonatal and pediatric critical care point-of-care ultrasound (POCUS) training is in high demand. Recent national U.S. surveys showed that 83-90% of respondents thought POCUS training should be part of critical care fellowship education in pediatrics1-3. However, only 67-90% of pediatric intensive care and 38% of neonatal medicine fellowship programs provide POCUS training1-3. Pediatric emergency medicine is the only pediatric subspecialty with established, professionally endorsed POCUS guidelines4. None of the other pediatric subspecialties have a structured, curriculum-based approach to POCUS training1-3.

However, evidence-based clinical implementation of POCUS is sparse. Furthermore, structured training programs for pediatric practicing clinicians (including post-graduate physicians, nurse practitioners, and physician assistants) are rare2. There is scant research that evaluates the best method to train practicing clinicians without prior POCUS experience and limited follow-up data on the impact of training courses on clinical implementation5.

The lack of a mature pediatric or neonatal critical care ultrasound program, with limited skilled POCUS faculty, remains a significant barrier to POCUS training at many institutions6. As a result, practicing clinicians are often encouraged to attend online or off-site courses at their discretion. After completing an off-site POCUS course, many clinicians report that integrating POCUS into their daily practice is challenging7, 8. Integration of POCUS into clinical practice varies widely across Pediatric Intensive Care Units (PICU), and only one-third of Neonatal Intensive Care Unit (NICU) clinicians use POCUS1, 2.

To address the high demand for pediatric and neonatal critical care POCUS training by fellows and practicing clinicians in our institution, we implemented a nationally recognized and reputable POCUS course and curriculum. We hypothesized that an on-site POCUS training course that utilizes existing institutional infrastructure would enhance POCUS practice adoption by lessening implementation barriers.

Methods

Course structure

We hosted two annual 2-day pediatric and neonatal critical care POCUS courses (June 2018 and September 2019) for fellows and practicing clinicians at a free-standing, university-affiliated children's hospital and performed a 12-month prospective observational cohort study following course completion. The course curriculum was adapted from the 2-day "Critical Care Ultrasound: Pediatric and Neonatal" course developed by the Society of Critical Care Medicine (SCCM) (Mount Prospect, Illinois, USA). The course consisted of 12 hours of didactic lectures and 8 hours of hands-on training. The hands-on training was performed on pediatric volunteers, phantoms, and simulators (SonoSim Ultrasound Training Solution, Santa Monica, CA). An adequate number of faculty was recruited to ensure a 1:4 faculty-to-student ratio. One faculty member each year was a POCUS expert from the SCCM faculty. Local POCUS faculty experts from the PICU, NICU, Pediatric Emergency Department (PEM), and Radiology Department taught the course. This multidisciplinary approach enhanced skill generalizability across specialties. The local experts either had prior extensive POCUS training or fellowship, or were credentialed in echocardiography or sonography. Our institution supported the POCUS course financially and administratively. It was offered to fellows, attending physicians, nurse practitioners, physician assistants, nurses, and respiratory therapists from the PICU and NICU.

We intentionally designed the local course with POCUS faculty who could serve as champions within their individual units and departments to provide on-going support to participants after course completion. We also utilized the same ultrasound machines during the course that participants would continue to use in their own clinical practice. 

Survey Development and Distribution

To evaluate our course effectiveness, we designed and distributed pre-course and post-course surveys. The post-course survey was given immediately after the course (post) in paper form, and 3- (3mo), 6- (6mo), and 12-month (12mo) follow-up surveys were then given electronically (Supplementary Material 1). The surveys included multiple-choice, fill-in-the-blank, and Likert-based questions similar to other published POCUS training surveys9-11. The surveys collected information on participant background, clinical practice setting, and POCUS leadership or infrastructure in the respondent's practice. Additionally, we asked several questions regarding frequency of scanning, confidence in interpretation, and barriers to integrating POCUS. The post-course survey addressed participants' perceptions of the course and satisfaction with the various instructors, and offered an opportunity for participants to provide feedback and recommendations for future courses. We captured similar longitudinal data on these questions in the 3-, 6-, and 12-month follow-up surveys. Our data analysis here focused on comparing results between the pre-course and 12-month follow-up surveys.

Three POCUS experts (MSt, OK, BC) created questions for the surveys. Three other investigators (EH, SG, MSk) reviewed the questions and ranked them for clarity and completeness. After three iterations, the panel met again and reviewed each question for intention and brevity.

After the 2018 course, participant feedback prompted additional survey refinement for the 2019 course participants (Supplementary Material 2). Questions were either shortened or rearranged in numerical order to improve response rate and clarity. The investigator team reviewed the revised survey to ensure question fidelity and integrity. The concepts in the two survey versions were the same, even though the wording varied. For example, to assess the participant's confidence in overall integrated POCUS skills, the 2018 survey asked "I am confident in my ability to acquire images and interpret them with POCUS putting it all together" (1="strongly disagree" to 5="strongly agree"), whereas the 2019 survey asked "Ability to acquire and interpret images to clinically integrate into a diagnosis?" (1="not confidence at all" to 5="very confidence"). Data variation between the two survey versions was tracked to ensure internal validity.

Survey Distribution

The surveys were administered in person prior to the course (pre) and immediately at the end of the course (post). Follow-up surveys were sent via email to participants 3, 6, and 12 months following course completion. Participants had one month to complete each survey, with up to 4 email reminders.

Outcomes Measures

Our primary outcome was to evaluate the on-site course's effectiveness by comparing the pre-course and 12-month follow-up survey results, based on Kirkpatrick's four-level evaluation framework12.

Level 1: assess the participants' "reaction" based on their satisfaction with the course content, faculty teaching, and overall experience.

Level 2: assess "learning" based on self-reported confidence in POCUS knowledge and skills.

Level 3: assess the educational effect on "behavior" based on the self-reported number of scans performed in clinical practice.

Level 4: assess "results" based on perceived resolution of institutional barriers.

Data Analysis

Study data were collected and managed using REDCap electronic data capture tools hosted at the University of Utah13. Descriptive statistics, Student's t-test, the Mann-Whitney test, and the Wilcoxon test were used as appropriate. Data analysis was performed using GraphPad Prism version 9.0.2 for Mac (GraphPad Software, San Diego, California, USA).
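As a minimal sketch of this kind of comparison, and not the authors' analysis code, the following Python example computes medians with interquartile ranges and a two-sided Mann-Whitney U test on hypothetical 5-point Likert confidence scores, assuming unpaired pre-course and 12-month responses because the surveys were anonymous.

# Minimal sketch (not the authors' analysis code): comparing pre-course and
# 12-month Likert confidence scores with descriptive statistics and a
# Mann-Whitney U test, assuming unpaired (anonymous) responses.
# The score lists below are illustrative placeholders, not study data.
import numpy as np
from scipy import stats

pre_course = np.array([1, 1, 2, 1, 2, 1, 1, 2, 3, 1])      # hypothetical 5-point Likert scores
follow_up_12mo = np.array([3, 4, 3, 2, 4, 3, 3, 2, 4, 3])  # hypothetical 12-month scores

def describe(scores):
    """Return the median and interquartile range of a set of Likert scores."""
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return med, (q1, q3)

for label, scores in [("pre", pre_course), ("12mo", follow_up_12mo)]:
    med, iqr = describe(scores)
    print(f"{label}: median {med:.1f}, IQR {iqr[0]:.1f}-{iqr[1]:.1f}")

# Two-sided Mann-Whitney U test for a shift between the two groups
u_stat, p_value = stats.mannwhitneyu(pre_course, follow_up_12mo, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

If responses could be linked across time points, a paired Wilcoxon signed-rank test (scipy.stats.wilcoxon) would be the analogous choice.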

Institutional Review Board

After reviewing the study application, the University of Utah Institutional Review Board (IRB) exempted this study from full review and consent (IRB #_00112848).

Results

Response rate

The two-year combined survey response rates decreased over time after the course. The response rates were 100% (pre), 100% (post), 94% (3mo), 94% (6mo), and 72% (12mo), respectively.

Participant demographics

A total of 42 participants attended the two courses. Our analysis focused on the 32 on-site attendees from the NICU (50%) and PICU (50%). We excluded 10 participants because they worked at satellite community hospitals that lacked the same POCUS champions and infrastructure. Table 1 describes the clinical roles and years of practice of the 32 included participants. The majority were physicians (84%) who had completed a pediatric residency. Thirty-one percent of participants had more than 10 years of clinical experience, and 88% reported prior POCUS experience and training through national conferences, online courses, medical school, or residency programs.

Chan Table 1 - Demographics of Participants

Kirkpatrick’s level 1 “reaction”

The course received a good median satisfaction score of 4 (IQR 4-5) on a 5-point Likert scale (1=disagree, 5=agree) for course content, objectives, and clinical relevance. The median course content satisfaction rating from the 2018 participants (n=12) was 4 (IQR 4-5) for didactic lectures, hands-on modules, and instructors. The median course content satisfaction score from the 2019 participants (n=20) was 5 (IQR 4-5), higher than the previous year (p<0.047). Participants from both years felt that the course met their learning objectives (median score 4, IQR 4-5) and was relevant to their field of practice (median score 4, IQR 4-5).

Kirkpatrick’s level 2 “learning”

The respondents reported increased confidence in POCUS image acquisition and interpretation over time (Figure 1). For the question regarding overall integrated POCUS skill, respondents' confidence increased from a median score of 1 (IQR 1-2) (pre) to 3 (IQR 3-4) (12mo), p<0.0001, on a 5-point Likert scale when combining both years. Looking at the two years separately, the 2018 participants' median confidence score increased from 0.5 (IQR 0-2) to 2.5 (IQR 2-3), p<0.0017; the 2019 participants' median score increased from 2 (IQR 1-2) to 4 (IQR 1-4), p<0.0001. The scoring trend was parallel between the two years, even though the 2019 survey was modified (Supplementary Material 3).

Chan Figure 1 - Median confidence scores in overall POCUS skills

In the pre-course survey, 73% of respondents felt that their lack of confidence in obtaining and interpreting images was the top POCUS implementation barrier. At the 12-month follow-up survey, only 41% of respondents considered their personal confidence in POCUS skills a barrier.

Kirkpatrick’s level 3 “behavior”

After attending the on-site course, respondents reported an increase in the number of scans performed (Figure 2). The proportion of respondents who reported that they had performed more than 4 scans in the past month increased from 12.5% pre-course to 30.4% at 12-month follow-up (p = 0.17).

Chan Figure 2 - Proportion of respondents reported to have performed >4 scans in the past month

Of the 28 participants who had prior POCUS experience and training, 21% (n=8)  reported in the pre-course survey that they had not performed any scans in the prior 6 months. At the 12-month follow-up survey, only 1 of 22 (4.5%) respondents reported not performing any scan in the prior 6 months.
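As an illustration of how such a change in proportions might be tested (the specific test the authors used is not stated here), the sketch below applies Fisher's exact test to counts chosen only to approximate the reported percentages; they are assumptions, not the study's raw data.

# Minimal sketch (not the authors' code): comparing the proportion of
# respondents performing >4 scans in the prior month pre-course vs. at 12 months
# with a Fisher's exact test. The counts below are illustrative assumptions
# chosen to roughly match the reported percentages, not the study's raw data.
from scipy import stats

pre_yes, pre_no = 4, 28    # hypothetical: 4/32 ≈ 12.5% pre-course
f12_yes, f12_no = 7, 16    # hypothetical: 7/23 ≈ 30.4% at 12 months

odds_ratio, p_value = stats.fisher_exact([[pre_yes, pre_no], [f12_yes, f12_no]])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")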

Kirkpatrick’s level 4 “results”

The survey asked participants about barriers to POCUS implementation into clinical practice. Aside from their personal POCUS skills, the top 3 institutional barriers identified in 2018 were:

  • lack of experienced POCUS faculty (33%),
  • lack of quality assurance program to verify image acquisition and interpretation (25%),
  • concerns of interdisciplinary conflicts (25%).

None of the 2019 course respondents reported concerns about interdisciplinary conflicts in their 12-month follow-up survey. A proportion still perceived the lack of a formal method to confirm image interpretation (40%), of a quality assurance program to review saved images (33%), and of experienced POCUS faculty for hands-on training (20%) as top institutional barriers. Some participants (33%) also felt there was not enough time during their clinical day to perform POCUS.

Discussion

We demonstrate that an on-site pediatric and neonatal POCUS course was effective based on Kirkpatrick's four principles of reaction, learning, behavior, and results (Figure 3). To our knowledge, we are the first to describe how importing a reputable off-site course into an on-site pediatric and neonatal POCUS model can change POCUS clinical practice behavior. Participants ranked the course favorably and reported increased confidence in their POCUS skills. Although not statistically significant, participants appeared to incorporate POCUS more frequently into their clinical practice after the course, and this practice pattern was sustained. Most importantly, perceived institutional barriers to POCUS were reduced. This on-site pediatric and neonatal POCUS model, which utilizes nationally recognized ultrasound content while incorporating local expertise and strengthening infrastructure, is an efficient way to expand POCUS clinical practice.

Chan Figure 3 - Study findings based on Kirkpatrick's Evaluation Model

Due to limited POCUS expertise, pediatric and neonatal critical care clinicians have relied on online or off-site training courses. Even with many such courses available, adequate translation of POCUS knowledge into practice remains difficult. Firstly, gaining proficiency in POCUS requires complex training in image acquisition, interpretation, and clinical integration. Competency is best achieved with hands-on training, frequent practice, and integration into clinical practice. Although online or off-site training courses can enhance POCUS knowledge and promote attendees' confidence, meeting Kirkpatrick's levels 1 and 2, they often do not produce the behavior change essential to meet the level 3 requirement, because they cannot provide adequate post-course hands-on training and timely feedback. For example, Patrawalla et al. showed that a 3-day regional POCUS course was an effective educational model but did not report detailed data on subsequent clinical practice use14. Secondly, institutional infrastructure is essential for clinical integration. A national survey reported the top 5 institutional barriers to POCUS clinical integration: lack of equipment/funds, lack of personnel to train physicians, lack of time to learn, liability concerns, and cardiology or radiology resistance2. Successful clinical integration of POCUS requires both attaining expert knowledge and skill and overcoming local barriers. Navigating the local practice environment requires more than distant expertise, something historical online or off-site courses are unable to provide.

Integration of newly acquired skills and knowledge into clinical practice is challenging. Adhering to the principles of adult learning may help to enact positive behavior change15. As evidenced by our pre-course survey, some of our course participants did not utilize their previous POCUS skills in their clinical practice despite prior POCUS training and experience. After attending our on-site course, participants reported a trend of increasing POCUS usage. Collins described education techniques for lifelong learning that we utilized in this course15. Firstly, adult learners value the relevancy and practicality of a course15; our course used the same ultrasound machines that the participants would use in their units, thereby reinforcing the skills learned. Secondly, our participants attended with their own colleagues, which fostered the informal and personal environment in which adults learn best15. Another consideration is that adults learn best by doing15, and we found that our respondents increased their scanning frequency. The survey only assessed whether more than four POCUS scans were performed per month, but the small increase was a step in the right direction toward lifelong behavior change. We suspect that more scanning now will translate to more scanning in the future. This scanning behavior will be further fostered by the on-site faculty, who provide practice reinforcement and ongoing feedback after course completion.

Off-site or online courses historically have been unable to address the institutional barriers to integrating POCUS into daily practice. Prior to our on-site course, the 2018 pre-course survey identified barriers similar to those reported in other critical care programs1, 2, 16, including interdisciplinary conflicts, lack of local POCUS faculty, and lack of quality assurance programs. None of these barriers is solved by attending off-site courses. A few local POCUS experts first organized the course to fill the POCUS educational gap. As a result of the course, the institution recognized the need to strengthen the local infrastructure. Subsequently, a multi-disciplinary POCUS consortium was formed, including leaders from the PICU, NICU, PEM, cardiology, radiology, and hospital administration. Additionally, PICU, NICU, and PEM champions became the ultrasound medical directors for their divisions, providing ongoing education, leadership, and quality assurance. Our on-site pediatric and neonatal POCUS course model has fostered inter-departmental collaboration, thereby promoting transparency in POCUS practice and communication and eliminating concerns over multi-disciplinary conflict. By the time the 12-month follow-up survey was sent to the 2019 course participants, the POCUS consortium had been established for 28 months, and the respondents reported no concerns about interdisciplinary conflicts. We suggest this internal on-site infrastructure is essential to effect change at Kirkpatrick levels 3 (behavior) and 4 (results). Strengthening the POCUS infrastructure can help maintain individual POCUS proficiency, expand education programs, develop quality assurance processes, develop re-credentialing standards, and sustain POCUS integration.

Our model of importing a structured POCUS training curriculum is feasible and generalizable to hospitals with similar on-site champions. This pediatric and neonatal on-site POCUS model can be made available to any institution. The prepared curriculum is nationally recognized and saves faculty the time of creating suitable educational material. Pre-existing online education modules are important and helpful but are limited by the lack of hands-on training on live human subjects; an on-site course can recruit local volunteers as scanning subjects. Supporting all clinicians within an institution to attend off-site courses is costly, and this on-site pediatric and neonatal POCUS model is relatively more cost-effective. Clinicians can minimize travel time, reduce work schedule disruption, and maintain work-life balance, encouraging more participation. We recognize that many skilled clinician sonographers have developed excellent educational materials; although this paper used a specific POCUS course, many reputable courses could be used.

The limitations of our study include the small sample size and selection bias. As attendance was voluntary, motivated clinicians were more likely to incorporate POCUS into clinical practice and to respond to the survey. The participants' self-reported scanning patterns may introduce recall bias. Because we did not administer formal knowledge and technical skills assessments pre-course and during the follow-up periods, participants may have overestimated or underestimated their knowledge17. We felt that behavioral changes in adult learners were more important than knowledge assessments in adopting new skills. Post-course skill assessment is ongoing via the quality assurance process and expert faculty feedback.

Conclusion

In conclusion, our on-site pediatric and neonatal POCUS course transferred knowledge, positively changed clinicians' behavior, and broke down perceived barriers to POCUS integration at our institution. This model is exportable to other hospitals and clinical environments. Pediatric and neonatal critical care POCUS programs should consider their distinctive educational challenges and specific institutional barriers when designing their own educational programs. Further studies are needed to evaluate the long-term impact of this training model on patient outcomes.

Acknowledgement

We thank the following faculty for POCUS teaching and instruction during the two courses: R. Mart, R. Day, S. Ryan, E. Contreras, and J. Kim. We thank T. Harbor, research assistant from the Division of Pediatric Emergency Medicine, for helping with IRB maintenance and with the creation, maintenance, and distribution of the REDCap survey. We also thank Drs. M. Johnson and R. Wilson for editing the manuscript. We thank the Society of Critical Care Medicine for allowing the use of their educational materials, Critical Care Ultrasound: Pediatric and Neonatal.

References

1.         Conlon TW, Kantor DB, Su ER, et al. Diagnostic Bedside Ultrasound Program Development in Pediatric Critical Care Medicine: Results of a National Survey. Pediatr Crit Care Med. Nov 2018;19(11):e561-e568. doi:10.1097/PCC.0000000000001692

2.         Nguyen J, Amirnovin R, Ramanathan R, Noori S. The state of point-of-care ultrasonography use and training in neonatal-perinatal medicine and pediatric critical care medicine fellowship programs. J Perinatol. Nov 2016;36(11):972-976. doi:10.1038/jp.2016.126

3.         Mosier JM, Malo J, Stolz LA, et al. Critical care ultrasound training: a survey of US fellowship directors. J Crit Care. Aug 2014;29(4):645-9. doi:10.1016/j.jcrc.2014.03.006

4.         Marin JR, Abo AM, Arroyo AC, et al. Pediatric emergency medicine point-of-care ultrasound: summary of the evidence. Crit Ultrasound J. Dec 2016;8(1):16. doi:10.1186/s13089-016-0049-5

5.         Matyal R, Mitchell JD, Mahmood F, et al. Faculty-Focused Perioperative Ultrasound Training Program: A Single-Center Experience. J Cardiothorac Vasc Anesth. Apr 2019;33(4):1037-1043. doi:10.1053/j.jvca.2018.12.003

6.         Ahn JS, French AJ, Thiessen ME, Kendall JL. Training peer instructors for a combined ultrasound/physical exam curriculum. Teach Learn Med. 2014;26(3):292-5. doi:10.1080/10401334.2014.910464

7.         Olgers TJ, Azizi N, Bouma HR, Ter Maaten JC. Life after a point-of-care ultrasound course: setting up the right conditions! Ultrasound J. Sep 7 2020;12(1):43. doi:10.1186/s13089-020-00190-7

8.         Rajamani A, Miu M, Huang S, et al. Impact of Critical Care Point-of-Care Ultrasound Short-Courses on Trainee Competence. Crit Care Med. Sep 2019;47(9):e782-e784. doi:10.1097/CCM.0000000000003867

9.         Webb EM, Cotton JB, Kane K, Straus CM, Topp KS, Naeger DM. Teaching point of care ultrasound skills in medical school: keeping radiology in the driver’s seat. Acad Radiol. Jul 2014;21(7):893-901. doi:10.1016/j.acra.2014.03.001

10.       Stolz LA, Amini R, Situ-LaCasse E, et al. Multimodular Ultrasound Orientation: Residents’ Confidence and Skill in Performing Point-of-care Ultrasound. Cureus. Nov 15 2018;10(11):e3597. doi:10.7759/cureus.3597

11.       Jones TL, Baxter MA, Khanduja V. A quick guide to survey research. Ann R Coll Surg Engl. Jan 2013;95(1):5-7. doi:10.1308/003588413X13511609956372

12.       Kirkpatrick DL. Effective supervisory training and development, Part 2: In-house approaches and techniques. Personnel. Jan 1985;62(1):52-6.

13.       Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: Building an international community of software platform partners. J Biomed Inform. Jul 2019;95:103208. doi:10.1016/j.jbi.2019.103208

14.       Patrawalla P, Narasimhan M, Eisen L, Shiloh AL, Koenig S, Mayo P. A Regional, Cost-Effective, Collaborative Model for Critical Care Fellows’ Ultrasonography Education. J Intensive Care Med. Dec 2020;35(12):1447-1452. doi:10.1177/0885066619828951

15.       Collins J. Education techniques for lifelong learning: principles of adult learning. Radiographics. Sep-Oct 2004;24(5):1483-9. doi:10.1148/rg.245045020

16.       Ben Fadel N, Pulgar L, Khurshid F. Point of care ultrasound (POCUS) in Canadian neonatal intensive care units (NICUs): where are we? J Ultrasound. Jun 2019;22(2):201-206. doi:10.1007/s40477-019-00383-4

17.       Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. J Pers Soc Psychol. Dec 1999;77(6):1121-34. doi:10.1037//0022-3514.77.6.1121

Two sides of the same coin: Elements that can make or break clinical learning encounters

Abstract

Phenomenon: This project explored how faculty, residents, and students at an academic medical center have experienced meaningful learning moments, what contributed to such moments within the clinical learning environment, and how these moments map onto a previously developed conceptual model of the learning environment. Approach: During AY 2018-19, the authors interviewed faculty (n=8) and residents (n=5) from the Surgery and OBGYN departments at the University of Utah School of Medicine. The authors also conducted interviews (n=4) and focus groups (n=2) with 20 third- and fourth-year students. The authors used an appreciative inquiry approach to conduct interviews and focus groups, which were audio-recorded and transcribed verbatim. Transcriptions were coded using manifest content analysis. Findings: The authors found that three factors determined whether learning encounters were successful or challenging: learner-centeredness, shared understanding, and learner attributes. Situations characterized by learner-centeredness and shared understanding led to successful learning, while encounters lacking these qualities led to challenges in the clinical learning environment. Likewise, some learner attributes facilitated successful learning moments while other attributes created challenges. These three factors map well onto three of the four elements of the previously developed conceptual model. Insights: The clinical learning environment is characterized by both successful and challenging moments. Paying attention to the factors that promote successful learning may be key to fostering a positive learning environment.

Introduction

The learning environment encompasses people, social interactions, organizational elements, and material conditions.1 The learning environment is central to the quality of learners' experiences. Ample evidence suggests that an exceptional learning environment is difficult to create and maintain,1-3 in part because the factors thought to be important in creating an ideal learning environment are not always easy to address or implement.2 Indeed, creating a high-quality learning environment is a "wicked problem"4 that is not easily resolved.

Gruppen and colleagues1 explain in detail the various elements that comprise the learning environment. Their conceptual framework includes two dimensions, the psychosocial and the material, which, when combined, encompass five elements: the personal, social, organizational, physical, and virtual spaces of learning environments. The personal includes factors like prior knowledge, professional identity, and motivation, while the social includes relationships and the interactions involved in teaching, learning, and patient care. The organizational refers to the culture, practices, and policies of the organization. Finally, learning spaces can include the physical, such as patient exam rooms and classrooms, and the virtual, including learning management systems and other digital platforms. Gruppen et al.1 aptly point out that these dimensions and elements intersect and work together within an environment to foster both negative and positive learning encounters. Negative learning environments foster encounters that are often marked by learner mistreatment, a complex, multifactorial issue which has received much attention in the literature.5-13 According to the Association of American Medical Colleges (AAMC), "Mistreatment either intentional or unintentional occurs when behavior shows disrespect for the dignity of others and unreasonably interferes with the learning process."14 While the AAMC goes on to identify examples of mistreatment, the conditions that lead to mistreatment are often very context-specific and not always well understood, especially across all of the interacting elements of the learning environment as captured in the Gruppen et al.1 conceptual framework. As a result, individual institutions must conduct their own investigations to gain insight into the conditions that prevent or lead to mistreatment behaviors.

Positive learning environments foster encounters which support the well-being of learners. Studies have noted associations between positive learning environments and factors including student demographics and student professional attributes,15 the presence of learning communities,16 and academic performance including United States Medical Licensing Exam (USMLE) Step 1 scores.17 Research has also shown that placing the right amount of trust in students,18 providing clarity around expectations, roles, and communication, and providing feedback can support learning.19

To gain additional insight into what constitutes a positive learning experience in the clinical learning environment of two procedurally based departments at our institution, we initiated a qualitative investigation designed to elicit the perceptions of faculty, residents, and students. We targeted these departments, which had recurring reports of student mistreatment, to understand the factors influencing the learning experience. We used the following questions to guide our study: (1) "What meaningful moments have individuals experienced in the clinical environment?" and (2) "What factors contributed to these moments?"

Methods

Participants and recruitment

This project was granted exemption by our institution's Institutional Review Board. We utilized a multi-case study design20 to understand the factors that made learning meaningful and purposefully selected21 faculty, residents, and students to participate. It was important to us to capture perspectives from individuals at multiple levels. Authors LB, BKS, and TW, who hold education leadership positions in their departments, provided names of residents and faculty who were highly involved in educational efforts, and CJC chose a subset of those residents and faculty to participate in the interviews. Residents and faculty were chosen to represent various gender identities, ranks, career tracks, and years of experience. The intention in not interviewing all of the suggested individuals was to help participants maintain a degree of anonymity. The Director of Student Affairs provided names of fourth-year students who held current or former leadership positions and who were well acquainted with their classmates' experiences. Focus groups were conducted with third-year students who were near the end of their clerkship year and assigned to their surgery clerkship at the time. In total, we spoke with 37 participants: residents (n=3) and faculty (n=4) in the Department of Surgery, residents (n=2) and faculty (n=4) in the Department of Obstetrics and Gynecology, fourth-year medical students (n=4), and 20 third-year medical students. Interviews and focus groups were conducted during AY 2018-19.

Data collection and analysis

Motivated by evidence in the literature about the power of focusing on what is going well in order to generate ideas,22 we chose to employ an appreciative inquiry approach.23 Our intention was not to ignore mistreatment but rather to focus on existing positive conditions on which to build in order to enhance learning outcomes.22 We felt that if we focused singularly on incidences of mistreatment, it would be difficult to move past the problems. Thus, interview and focus group questions (see Appendix24) specifically asked participants to describe a successful learning moment and what they contributed to that moment, to name their values and how well these were reflected by our institution, and to recall an instance in which their values had been challenged. The questions were piloted with two student affairs administrators who were well acquainted with students' learning experiences. All interviews were conducted by CJC and/or BFR, who are both PhD-trained educational researchers with qualitative experience. The focus groups were conducted by CJC and CB, who accompanied CJC to some of the interviews to gain qualitative interviewing experience.

We transcribed the interviews and focus groups verbatim using Descript version 3.6.1 (San Francisco, CA). We used Dedoose version 7.0.23 (Los Angeles, CA: SocioCultural Research Consultants, LLC, www.dedoose.com) to conduct manifest content analysis of the data.25 CJC coded the interviews and focus groups. A third of the way through the first round of coding, the coding team (CJC, CB, TC, and BFR) met to discuss, collapse, and reach consensus on the codes in order to establish a codebook. All transcripts were coded by CJC based on the agreed-upon codebook, with the understanding that new codes could be added as they became relevant to the data; CB also coded four transcripts to ensure multiple interpretations of the data were explored. Discrepancies were resolved through discussion with the entire coding team. This coding process resulted in code categories that illustrated successes and challenges related to patient care and to teaching and learning. To further understand what factors contributed specifically to successful and challenging teaching and learning moments, CJC and TC engaged in a second phase of coding, this time looking for factors that contributed to learning successes and challenges, until they reached thematic saturation.26 Again, we resolved discrepancies through discussion with the coding team. Finally, the codes and categories were further refined as they were organized into themes by CJC and CB.

Ensuring trustworthiness

We recognize that our researcher positionalities matter because of the perspectives we bring to our work. Having multiple coders on the team was essential to exploring multiple interpretations of the findings. Regular peer debriefing27 meetings between those collecting and analyzing the data (CJC, BFR, CB, and TC) and those who helped design the study (BKS, LB, TW, and SML) ensured we could make sense of the findings within the context of the departments where the study was conducted. Finally, memo writing28 helped us keep an audit trail throughout the data collection and analysis process.

Results

We organize our findings into three themes (Figure 1) that help explain why successes and challenges in the learning environment occur: 1) degree of learner-centeredness, 2) extent of shared understanding, and 3) learner attributes. In the following paragraphs, we describe each theme, its positive characteristics (how it supports the learning environment), and its negative characteristics (how it detracts from the learning environment), and provide exemplary quotes. We use the terms student or resident when talking specifically about either population and the term learner when referring to both. We use the terms faculty and resident when talking specifically about either group and teacher when talking generally about both. We abbreviate excerpts from surgery and OBGYN interviews with an "S" or "O", respectively, followed by a number. We abbreviate excerpts from student interviews and focus groups with "St" or "FG", respectively, followed by a number.

Figure 1. Factors and attributes that support or distract from a successful learning environment


Learner-centeredness

The theme of learner-centeredness captures how the contributions of teachers and learners work together to create an environment where learners are an important focal point and where the environment explicitly supports learners' needs. We organize the contributions to learner-centeredness into three categories: 1) teaching is a priority or not, 2) learning is scaffolded or not, and 3) teaching is differentiated or not.

  1. Positive: Teaching is a priority. Learners described how gratifying it was when teachers made time for teaching: “when people actually have like two minutes to just like look you in the eye and explain something.” (FG2) Teachers also explained that teaching opportunities were almost always present, and just needed to be utilized: “in my opinion if it’s not a life-threatening situation, there’s always a moment to teach” (S2)
    Negative: Teaching is not a priority. Participants discussed how the urgency of providing patient care could supersede teaching: “We can’t dedicate our undivided attention to… educating students. We have to … take care of the patients.” (S3) Participants also mentioned how “…some [teachers] are not as great about educating students” (O6) or how others do not have the capacity to engage in teaching: “at the beginning of the week, he was…super happy in the operating room and …[say] ‘ hey, like, what do you think this is anatomy wise?’ … towards the end of the week … he like wouldn’t talk” (St2).
  2. Positive: Learning is scaffolded (i.e., chunked, progressively more complex, aligned to context). Teachers discussed how they deliberately expanded opportunities in patient care to scaffold learning: “there was a big incision to suture …I started off and let her kind of watch me do it, …then I watched her do it, and gave her on-the-fly feedback.” (S7). Learners shared that aligning feedback with the specific task they were learning to perform enhanced learning: “she’d always give feedback like constantly on how I can improve techniques.” (FG2).
    Negative: Teaching is not scaffolded. Learners and teachers explained that the unpredictability of patient care needs could interfere with carefully scaffolded teaching: “So it is more successful if [skills] can be leveled up in a coherent way, but unfortunately that’s not always how things present themselves …we also are running traumas.” (S3) Learners also shared that it was discouraging and confusing when teachers were dismissive and hostile instead of giving feedback: “When you give your assessment and plan and they don’t even acknowledge it, they say there’s [something wrong] and then they don’t tell you why you were wrong.” (FG1)
  3. Positive: Teaching is differentiated (i.e., tailored to idiosyncratic learner needs). Learners appreciated when the teaching was specific to the context and their individual needs. One mentioned, “They took the time to actually teach me everything in emergency medicine that is related to OBGYN.” (St4) Teachers explained what a difference it made to gear teaching to students’ interests and needs: “…if you try to make an effort the first few days …it goes a long way in terms of buying a little bit of engagement.” (S5)
    Negative: Teaching is undifferentiated. Learners expressed frustration when teaching was not tailored to their level or needs: “…your expectation that you have for a resident is not the same you’re going to have with a… student.” (St4). Teachers expressed frustration at not being able to adequately tailor teaching to individual student needs: “We work with medical students a lot…they’re a very diverse group and they have diverse interests and diverse strengths and weaknesses.” (O4)

Shared understanding

The theme of shared understanding captures the impact of teachers creating an environment where they, their learners, and the rest of the clinical team are on the same page. These conditions include 1) explicit expectations or not and 2) communication, teamwork, and camaraderie or not. When these conditions existed, quality learning tended to occur.

  1. Positive: Explicit Expectations. Learners mentioned that knowing what to do to succeed was essential for successful learning encounters: “… it can make a big difference [when] the resident will take 3 or 5 minutes and [say] this is how I want you to structure a note.” (St3) Likewise, teachers explained that managing expectations helps learners know what they can and cannot do: “if you at a beginning of the case say to the students, this is what you should … get out of this case…I think that manages that tension.” (S4) 
    Negative: Lack of consensus on definitions or expectations. Participants discussed how “I think the prior way of teaching accountability and expectations even 10 years ago is different for the learners… so we’re not teaching to them how they learn best” (O3). A lack of consensus also arose when individuals had different definitions for mistreatment: “these reports of mistreatment come from…a different set of expectations on the learner versus the teacher side of things…” (S2).
  2. Positive: Communication, teamwork, and camaraderie. Teachers noted that open communication went a long way in setting learners up for success. One explained, “letting them know, …these are the reasons why you got this feedback and this is how to take it” (O6). Along similar lines, another teacher said, “So what I’ve … done … if I’m sensing things that are getting tense is I’ll pause, … and say look …this is what’s happening.” (S6)
    Negative: Lack of communication and teamwork. Participants spoke about how a breakdown of communication resulted in learning challenges. Both teachers and learners shared that it was challenging to give and receive feedback. A learner reflected on the difficulties of getting feedback: “I try to ask for feedback a lot…I don’t always get it.” (FG1) One teacher explained: “I think sometimes … we don’t want to hurt people’s feelings and so we don’t give them the best feedback” (O4).

Learner attributes

The theme of learner attributes captures participants’ perceptions of how the attributes learners bring contribute to the success of a learning encounter. The attributes identified in our study that contributed to successful encounters exemplify interpersonal skills and were 1) resilience and persistence, 2) engagement, and 3) situational awareness.

  1. Positive: Resilience and persistence. Learners discussed the importance of accepting feedback: “…have a relatively thick skin…keeping that big picture in mind has kind of helped me.” (S7)
    Negative: Lack of resilience/sense of entitlement. Learners who are unable to receive feedback pose challenges to their teachers: “… the response was … not a like, ‘oh, let me learn how to do better,’…but ‘I can’t be anything less than excellent.’” (O2) Similarly, learners who feel like they deserve to be the best can make learning challenging: “What they all want to be is the best team member, not the best member on the whole team…They’re in it for themselves.” (O5)
  2. Positive: Engagement. Learners felt they received more teaching when they showed they were invested. “…my attending pulled me out of doing other things with the team to come with her to go to this end of life conversation and … I feel like she only would have done that …because I had been engaged” (St1). Teachers also shared that teaching engaged students made for success: “…the ones who really … asked questions or tell me what they have been struggling with … end up being more successful.” (S1)
    Negative: Lack of engagement. Learners who do not display engagement in the learning encounter can discourage teachers from teaching. One attending shared, “…it takes energy for me to take good care of the patient, do the surgery, teach the resident, and teach [a student] at the same time. And if you [the student] don’t put that energy in beforehand, I’m not going to put [in] that energy … it’s disrespectful to the patient.” (S4)
  3. Positive: Situational awareness. Learners and teachers discussed how learners’ situational awareness was important in eliciting teaching: “the best students know how to integrate really well into a team and they understand …this team has a goal of providing the best possible care to the patient” (O2). A learner shared, “…just being able to be aware of the situation and kind of read like, oh now’s a good time to for me to ask a question or I’ll have to wait.” (FG1).
    Negative: Lack of situational awareness. Participants discussed how learners who did not have situational awareness received less teaching: “there are always students who … struggle with …being appropriate or finding the right time to ask questions … Leads them to not get taught as much … and leads to frustration on both sides” (O1).

Discussion

We used appreciative inquiry to explore the learning environment in two of our departments which received recurring reports of mistreatment. We conducted focus groups and interviews with faculty, residents, and students to understand what contributed to positive learning encounters in order to generate ideas about how to reproduce these positive situations. Our participants shared meaningful teaching and learning moments in the clinical learning environment. We identified three factors that contributed to these moments: learner-centeredness, shared understanding, and learner attributes. Learner and teacher behaviors represented by these themes contributed, individually and in combination, to the quality of the learning environment which led to learning outcomes. To the degree that these results align with Gruppen’s1 recently published model of the learning environment, they help to expand and clarify the literature and support a variety of interventions designed to enhance the learning environment and promote learning.

Alignment with the Gruppen model

The three themes identified in our study fit well into the psychosocial domain of the Gruppen model.1 Some themes align with multiple elements.29 Learner-centeredness falls within the personal, social, and organizational elements of the psychosocial domain. For example, how responsibilities are shared with learners to foster learner-centeredness is largely guided by teachers’ actions (personal), clerkship practices (social), and policies (organizational). Shared understanding between students and teachers is highly dependent on the relationships and interactions (social) that take place in the learning environment. Finally, the attributes that learners bring to the learning environment align with the personal element of the model. Interestingly, none of our themes explicitly map onto the material domain, which may be because our interview questions focused participants on the influences of people, behaviors, and culture rather than on spaces.

Alignment with Literature

Each of our themes reinforces findings from previous studies. Our participants described how prioritizing teaching contributed to a sense of learner-centeredness and subsequent positive learning outcomes. Likewise, Tien and colleagues30 suggest that providing students with opportunities to learn through ownership of patient care is an important part of professional identity formation. It follows, then, that being asked to assume ownership also conveys to learners that learning is differentiated and tailored to them. We also found that learners and teachers perceived that scaffolded learning with psychological safety contributed to a sense of learner-centeredness. Similarly, Tsuei and colleagues31 found that students could focus on learning more fully when they did not have to worry about whether they were being judged or how they were performing. Our participants reported that a lack of learner-centeredness was often in conflict with the urgency of providing patient care. Given that exclusion from patient care encounters is associated with mistreatment,7 it is not surprising that feeling excluded leads to challenges in the learning environment. While not new, the finding that faculty and student practices promoting learner-centeredness help create a positive learning environment, which in turn enhances learning, nonetheless reinforces the importance of learner-centeredness in shaping learner experiences.

For the theme of shared understanding, our participants identified that having explicit expectations and open communication between learners and teachers created situations that fostered positive learning, while the absence of these conditions led to challenges. These findings underscore the implications of research showing that differences exist among interns, residents, and attendings regarding what should happen on rounds32 and that expectations differ when it comes to time spent on education, the skills medical students should have, and the roles students should play.33 This challenges the utility of the traditional approach taken by most academic health centers, in which students are assigned to teams consisting of interns, senior residents, sometimes fellows, and attendings. Adopting a student-focused clerkship structure, such as the one described by Matheny Antommaria et al.,34 more widely could help overcome barriers to achieving student-centeredness, ownership, and autonomy in patient care for learners.

            In terms of the learner attributes theme, we found that individual learner characteristics strongly influenced the learning environment and associated outcomes. For example, learners who could navigate new relationships and situations had more positive learning outcomes than did students who lacked situational awareness. Students who engage in substantial personal impression management during their clinical years, and who are able to convey a positive impression of themselves, receive more learning opportunities and positive evaluations.35 Similarly, Nguyen and colleagues36 identified ways that students can influence the learning environment to promote positive learning outcomes.

Implications

These findings have implications for the roles that both learners and teachers play in fostering positive learning encounters. Importantly, our study demonstrates that the relationship between learners and teachers is extremely dependent on the context in which they interact, their individual orientations to the situation, and their engagement with one another.

            Our findings also have implications for how professional learning sessions can be designed to prepare both learners and teachers to engage with each other to promote a positive environment and associated learning outcomes. The clinical environment has different demands and expectations from the preclinical setting, and students often struggle in making this transition.37 It has been suggested that making these expectations more explicit and teaching them in formal ways could be helpful in preparing students to be successful physicians.38 Changes to the structure of the clinical learning environment, such as creating longitudinal clinical experiences, could also be beneficial.39,40 Similarly, teachers in the clinical environment might need to be taught explicitly about how to support learners during this transition period.39 Likewise, teachers may need instruction on how to incorporate students into learning moments and how to foster psychological safety.41

Limitations

Our appreciative inquiry approach, by design, did not focus directly on the factors influencing mistreatment; as such, we may have missed factors that contribute more directly to mistreatment and negative learning encounters. In addition, we interviewed faculty and residents in only two procedurally based clinical departments at one institution. The factors that positively and negatively impact the learning environment in more medically based disciplines (e.g., internal medicine, pediatrics) may be somewhat different and are an opportunity for future exploration. Therefore, we acknowledge that surgery and OBGYN have specific cultures and that the findings we report may not fully reflect circumstances in other clinical settings. Furthermore, student perspectives were obtained from individuals who were predominantly speaking to their experiences in the clinical clerkship learning environment. Inquiry into factors influencing the learning environment in the pre-clerkship or predominantly classroom-based setting was not a focus of this study.

            Despite these limitations, our study reinforces that the components proposed in the Gruppen1 model are, in fact, significant contributors to the learning environment. Our study also shows how interconnected these components are and that, to understand how to create positive learning environments, one must look at multiple factors, including the degree to which the environment is learner-centered, how expectations are shared, and the attributes that learners and teachers bring.

Conclusion

Learner mistreatment in medical education is a major, multifactorial problem that is difficult to eliminate. Our study has highlighted several factors that influence whether learning encounters are viewed as successful or challenging. Our findings detail factors that promote learning; with careful and consistent attention to increasing learner-centeredness, developing shared understanding, and cultivating positive learner and teacher attributes, we believe we can make progress toward achieving an exceptional learning environment.

Acknowledgements: We are grateful to Tisha Mentnech for her assistance in helping us conduct our literature review. We are also grateful to the participants who shared their stories with us.

References

1.         Gruppen LD, Irby DM, Durning SJ, Maggio LA. Conceptualizing Learning Environments in the Health Professions. Acad Med. 2019;94(7):969-974.

2.         Kilty C, Wiese A, Bergin C, et al. A national stakeholder consensus study of challenges and priorities for clinical learning environments in postgraduate medical education. BMC Med Educ. 2017;17(1):226.

3.         Schönrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. Key elements in assessing the educational environment: where is the theory? Adv Health Sci Educ Theory Pract. 2012;17(5):727-742.

4.         Rittel H, Webber MM. Dilemmas in a general theory of planning. Policy Sci. 1973;4(2):155-169.

5.         Castillo-Angeles M, Watkins AA, Acosta D, et al. Mistreatment and the learning environment for medical students on general surgery clerkship rotations: What do key stakeholders think? Am J Surg. 2017;213(2):307-312.

6.         Gan R, Snell L. When the learning environment is suboptimal: exploring medical students’ perceptions of “mistreatment”. Acad Med. 2014;89(4):608-617.

7.         Baecher-Lind LE, Chang K, Blanco MA. The learning environment in the obstetrics and gynecology clerkship: an exploratory study of students’ perceptions before and after the clerkship. Med Educ Online. 2015;20(1):27273.

8.         Kulaylat AN, Qin D, Sun SX, et al. Perceptions of mistreatment among trainees vary at different stages of clinical training. BMC Med Educ. 2017;17(1):14.

9.         Olasoji HO. Broadening conceptions of medical student mistreatment during clinical teaching: message from a study of “toxic” phenomenon during bedside teaching. Adv Med Educ Pract. 2018;9:483-494.

10.       Chung MP, Thang CK, Vermillion M, Fried JM, Uijtdehaage S. Exploring medical students’ barriers to reporting mistreatment during clerkships: a qualitative study. Med Educ Online. 2018;23(1):1478170.

11.       House JB, Griffith MC, Kappy MD, Holman E, Santen SA. Tracking Student Mistreatment Data to Improve the Emergency Medicine Clerkship Learning Environment. West J Emerg Med. 2018;19(1):18-22.

12.       Lau JN, Mazer LM, Liebert CA, Bereknyei Merrell S, Lin DT, Harris I. A Mixed-Methods Analysis of a Novel Mistreatment Program for the Surgery Core Clerkship. Acad Med. 2017;92(7):1028-1034.

13.       Fried JM, Vermillion M, Parker NH, Uijtdehaage S. Eradicating medical student mistreatment: a longitudinal study of one institution’s efforts. Acad Med. 2012;87(9):1191-1198.

14.       Mavis B, Sousa A, Lipscomb W, Rappley MD. Learning About Medical Student Mistreatment From Responses to the Medical School Graduation Questionnaire. Acad Med. 2014;89(5):705-711.

15.       Skochelak SE, Stansfield RB, Dunham L, et al. Medical Student Perceptions of the Learning Environment at the End of the First Year: A 28-Medical School Collaborative. Acad Med. 2016;91(9):1257-1262.

16.       Smith SD, Dunham L, Dekhtyar M, et al. Medical Student Perceptions of the Learning Environment: Learning Communities Are Associated With a More Positive Learning Environment in a Multi-Institutional Medical School Study. Acad Med. 2016;91(9):1263-1269.

17.       Wayne SJ, Fortner SA, Kitzes JA, Timm C, Kalishman S. Cause or effect? The relationship between student perception of the medical school learning environment and academic performance on USMLE Step 1. Med Teach. 2013;35(5):376-380.

18.       Karp NC, Hauer KE, Sheu L. Trusted to Learn: a Qualitative Study of Clerkship Students’ Perspectives on Trust in the Clinical Learning Environment. J Gen Intern Med. 2019;34(5):662-668.

19.       Cherry-Bukowiec JR, Machado-Aranda D, To K, Englesbe M, Ryszawa S, Napolitano LM. Improvement in acute care surgery medical student education and clerkships: use of feedback and loop closure. J Surg Res. 2015;199(1):15-22.

20.       Merriam SB. Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass; 2009.

21.       Maxwell JA. Qualitative research design: An interactive approach. 2nd ed. Thousand Oaks, CA: Sage; 2005.

22.       Bushe G. Appreciative inquiry is not about the positive. OD practitioner. 2007;39(4):33-38.

23.       Cooperrider D, Whitney D. Appreciative Inquiry: A Positive Revolution in Change. San Francisco, CA: Berrett-Koehler Publishers; 2005.

24.       Williamson P, Suchman A. Appendix A. In: Candlin C, Sarangi S, eds. Handbook of Communication in Organizations and Professions. Berlin, Germany: De Gruyter Mouton; 2011.

25.       Kleinheksel AJ, Rockich-Winston N, Tawfik H, Wyatt TR. Demystifying Content Analysis. Am J Pharm Educ. 2020;84(1):7113.

26.       Saunders B, Sim J, Kingstone T, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893-1907.

27.       Marshall C, Rossman GB. Designing qualitative research. Sage publications; 2014.

28.       Charmaz K. Constructing grounded theory. 2nd ed. Thousand Oaks, CA: Sage Publications; 2014.

29.       Babcock C, Buchmann L, Chow C, et al. Personal, Social, Organizational, and Space Components of the Clinical Learning Environment: Variations in their Perceived Influence. Submitted as AAMC RIME Research Paper. 2020.

30.       Tien L, Wyatt TR, Tews M, Kleinheksel AJ. Simulation as a Tool to Promote Professional Identity Formation and Patient Ownership in Medical Students. Simul Gaming. 2019;50(6):711-724.

31.       Tsuei SH-T, Lee D, Ho C, Regehr G, Nimmon L. Exploring the Construct of Psychological Safety in Medical Education. Acad Med. 2019;94(11S):S28-S35.

32.       Balmer DF, Master CL, Richards BF, Serwint JR, Giardino AP. An ethnographic study of attending rounds in general paediatrics: understanding the ritual. Med Educ. 2010;44(11):1105-1116.

33.       De SK, Henke PK, Ailawadi G, Dimick JB, Colletti LM. Attending, house officer, and medical student perceptions about teaching in the third-year medical school general surgery clerkship. J Am Coll Surg. 2004;199(6):932-942.

34.       Matheny Antommaria AH, Firth SD, Maloney CG. Evaluation of an innovative pediatric clerkship structure using multiple outcome variables including career choice. J Hosp Med. 2007;2(6):401-408.

35.       Han H, Roberts NK, Korte R. Learning in the Real Place: Medical Students’ Learning and Socialization in Clerkships at One Medical School. Acad Med. 2015;90(2):231-239.

36.       Nguyen S, Johnston T, Chow C, et al. Student attitudes and actions that encourage teaching on surgery clerkships. Oral plenary presentation at: Association of Surgical Education Conference; Virtual Highlights; September 2020.

37.       O’Brien B, Cooke M, Irby DM. Perceptions and Attributions of Third-Year Student Struggles in Clerkships: Do Students and Clerkship Directors Agree? Acad Med. 2007;82(10):970-978.

38.       Poncelet A, O’Brien B. Preparing Medical Students for Clerkships: A Descriptive Analysis of Transition Courses. Acad Med. 2008;83(5):444-451.

39.       O’Brien BC, Poncelet AN. Transition to Clerkship Courses: Preparing Students to Enter the Workplace. Acad Med. 2010;85(12):1862-1869.

40.       Hauer KE, O’Brien B, Poncelet AN. Longitudinal, Integrated Clerkship Education: Better for Learners and Patients. Acad Med. 2009;84(7):821.

41.       Hemmer PA, Pangaro L. Using Formal Evaluation Sessions for Case-based Faculty Development during Clinical Clerkships. Acad Med. 2000;75(12):1216-1221.

Appendix

This is a semi-structured interview protocol, adapted from a guide originally developed by Drs. Williamson and Suchman.1 Probes will be used as necessary to elicit additional pertinent information.

Interview questions

  • Introduction: This is going to be what we call an appreciative interview. I am going to ask you questions about times when you experienced educational things working at their best here at [institution]. Many times, we try to ask questions about things that aren’t working well—the problems—so that we can fix them. In this case, we are trying to find out about the things at their best—the successes—so that we can find out what works and why, and find ways to infuse more of it into our practice.
  • As we get started, I’d like to know a little bit about you. Just so you know, this information will not be associated with any of your stories or quotes, but will just be used to provide context to our findings.
    • What’s your role here at [institution] and how long have you been here?
  • People do their best work when they are doing things that they find personally meaningful, and when they feel that their work makes a difference. During your time at [institution], there have no doubt been high points and low points. For now, I’d invite you to think of a teaching and learning moment that meant a lot to you, when things went right, a time that brought out the best in you.
    • Please tell the story of that time.  (If they are very general, try to probe for more specificity.)
    • Without worrying about being modest, please tell me what it was about you—your unique qualities, gifts or capacities; decisions you made; or actions you took—that contributed to this teaching/learning experience?
    • What did others contribute or do?
    • What aspects of the situation made this a success (for example, the place, the time of day or year, recent events)?
  • Now, think of a time at [institution] when you or your values were challenged.
    • Please tell me a story about that time. (If participant needs clarification about what a value is, explain that a value is “a person’s principles or standards of behavior; one’s judgment of what is important in life.”)
  • We each have different qualities, gifts and skills we bring to the world and to our work. Think about the things you value about yourself, the nature of your work and the university. At work, we’re always dealing with challenges and change.
  • How have your strengths and values helped you deal with challenges and change?
    • Your work: When you are feeling good about your work, what do you like about the work itself?
    • Yourself: Imagine you’re at your retirement party. What do you think your colleagues would say they liked most about you?
    • Yourself: Now what do you think your students would say they’ve liked most about you?
    • How do your personal values match those of [institution]? (for example, honesty, compassion, teamwork)?
    • Where have you seen examples of these values at [institution]?
  • Where do you think these reports of mistreatment are coming from?

Focus group questions

  • What was it about you—your unique qualities, gifts or capacities; decisions you made; or actions you took—that contributed to these peak learning experiences? What did others contribute or do?
    • What aspects of the situation made this a success (for example, the place, the time of day or year, recent events)?
    • What are the commonalities among all of your stories?
    • What are two things you can do, as students, to promote more of these experiences? What are two things that curricular leaders can do to promote more of these experiences?
  • This project arose out of a desire to understand why student mistreatment in clerkships occurs. So next, I would like you to think about how these moments of success differ from moments of challenge.
    • Assuming that moments of success were to happen all the time, how likely is it that mistreatment would occur?
    • What are two things that need to happen to prevent mistreatment?

References

  1. Williamson P, Suchman A. Appendix A. In: Candlin C, Sarangi S, eds. Handbook of Communication in Organizations and Professions. Berlin, Germany: De Gruyter Mouton; 2011.

Comparison of a Self-Care Therapeutics Course Taught in the P1 versus the P2 Year

Abstract

Objective: The objective of this study was to compare student learning outcomes, behaviors, and attitudes in a non-prescription drug and self-care therapeutics course taught in the second professional (P2) year versus the first professional (P1) year at one pharmacy school.

Methods: Mean performance of students by class year on case consultations and exam scores were compared. Focus groups with student volunteers and course teaching assistants (TAs) and one-on-one interviews with a subset of instructors were conducted by an outside educational evaluation specialist to capture perceptions of student learning behaviors and attitudes.

Results: There was no difference in performance on graded case consultations (mean difference 0.16; 95% CI, -0.77 to 1.09; p=0.74), mid-term examinations (mean difference 0.53; 95% CI, -1.59 to 2.65; p=0.62), or final examinations (mean difference 0.73; 95% CI, -1.83 to 3.30; p=0.57) between P1 and P2 students. P1 students reported being more consistent in completing pre-class readings and feeling less distracted by other courses than did P2 students. Students, TAs, and instructors consistently spoke about advantages of the course in the P1 year (e.g., less stress, greater eagerness to learn and apply skills at work) and disadvantages in the P2 year (e.g., distraction from the concurrent P2 integrated pharmacotherapeutics course, tension between real-world experience and the constraints of the grading rubric).

Conclusion: Despite taking the course one year earlier than P2 students, P1 students performed equally well. All stakeholders agreed that, with respect to students’ learning behaviors and attitudes, the advantages of teaching a self-care course in the P1 year outweigh the disadvantages.

Keywords: self-care, pharmacy education, non-prescription medications, flipped classroom, curriculum

Introduction

Patient self-care education is fundamental to pharmacy practice and included in the core pedagogy in pharmacy education.1-2 In 2006, the Nonprescription Medicines Academy issued recommendations to include a minimum of 60 contact hours of self-care instruction within the pharmacy curriculum and stated that a majority of the instruction should occur within a standalone course.3 Despite these recommendations, there is substantial variability in self-care education delivered by colleges of pharmacy across the United States.4 Some colleges require a standalone self-care course and others integrate content into other courses, labs, or experiential education.4 A number of colleges that do not require a standalone course instead offer electives in this topic area.4 Data about timing of self-care courses within the pharmacy curriculum are limited. Recommendations from the Nonprescription Medicines Academy highlight arguments for teaching this content earlier versus later in the curriculum. They note that placing the course in the first professional (P1) year allows for earlier application of pharmacy content and enhances professional development whereas placing it in the third professional (P3) year allows for more in-depth discussion of the pharmacotherapeutics of non-prescription drug therapy.3 To our knowledge, no studies have been published that explicitly address issues related to optimal timing of self-care courses in the curriculum.

The University of Utah College of Pharmacy has included a standalone three credit hour non-prescription drug and self-care therapeutics course since 1989. The course currently utilizes a modified flipped classroom model involving numerous co-instructors and guest lecturers.5 Within this model, students complete a number of required readings related to a self-care topic prior to attending two class sessions per week (1.5 hours per class session). During each class session, the instructor utilizes a series of case vignettes. For each case vignette, the instructor randomly selects a student to perform a self-care consultation role play in which the student is expected to follow a structured framework such as QuEST/SCHOLAR-MAC to assess the patient (played by the instructor) and provide a self-care recommendation.6 Following completion of the simulated consultation, there is a class discussion about the case and key learning points. This discussion facilitates learning and formative assessment. Students are required to complete at least three self-care consultations over the course of the semester. Additional summative assessment tools utilized in this course include two individual written assignments, two multiple-choice exams, and an oral final exam.

This course was taught during the P3 year from 1989 until 2016 and during the second professional (P2) year from 2016 until 2019. Based on feedback from students, faculty, and staff, a need to more evenly distribute curriculum workload during the P1 and P2 years was identified. As a result, the decision was made to transition the Community Practice (non-prescription drug and self-care therapeutics) course from Spring semester of the P2 year to Spring semester of the P1 year. The Pharmacokinetics and Pharmacodynamics course was moved from the P1 year to the P2 year to allow for this transition. As a result, the course was simultaneously taught to 53 P1 and 58 P2 students (one P2 student withdrew midway through the semester) during the 2020 Spring semester, using the same instructors, pre-class readings, cases, and assessments for both professional years. Teaching assistants (TAs) were different for each course, but drawn from the same pool of P3 student volunteers. P1 students also received an additional 15 minutes each week practicing a self-care consultation case as part of their Recitation course. Other than these noted differences, the courses were the same for the P1 and P2 students (Table 1).

Three weeks after the first mid-term exam (about 40% through the course duration), the COVID-19 global pandemic required in-class learning to be replaced by asynchronous on-line lectures. While the course content covered in each online session remained the same as the content scheduled for the in-class session, the case consultations were moved to separately scheduled times for randomly selected students and were conducted telephonically. Given the unprecedented circumstances, assessment and grading required modification, which included reducing the number of required case consultations to two, making the final multiple-choice exam open-book/open-note, and cancelling the oral final exam.

In this study, we sought to answer the following questions:

  1. Will moving the Self-Care course to the P1 year negatively impact student performance on graded self-care consultations or scores on multiple-choice mid-term or final exams?

    This question stems from our assumption that P2 students taking this course have completed an additional year of the pharmacy curriculum and participated in one semester of the integrated pharmacotherapeutics course series; as a result, P2 students may have acquired a stronger base of therapeutics knowledge and exposure to more clinical problem-solving than P1 students. This increased knowledge may positively impact their ability to acquire self-care therapeutics knowledge and confidently engage with a patient to provide advanced non-prescription consultations.

  2. Will moving the course to the P1 year impact students’ learning behaviors, such as course preparation and engagement in class?

    This question stems from our assumption that P2 students have more experience in a professional program and therefore may have developed better study habits which could result in better preparation for class. On the other hand, P2 students may have less time to dedicate to this self-care course given the competing demands of other rigorous courses taken concurrently, particularly the integrated therapeutics course.

  3. Will moving the course to the P1 year impact stakeholder (student, instructor, and TA) attitudes about the relative advantages and disadvantages of the course’s timing for students learning to become competent pharmacists?

    This question stems from our assumption that attitudes are different from performance and learning behaviors and may be differentially influenced by the placement of the course. A priori anecdotal feedback suggested that students would prefer the course in the P1 year because the workload is lighter and because many students begin working in outpatient settings in their first year and would value having formal training to work with patients in these settings.

Material and Methods

A mixed-methods approach was utilized to answer the three study questions. To answer question one, the mean and standard deviation of case consultation and exam scores were compared for P1 and P2 students using two-tailed t-tests. To answer questions two and three, an educational evaluation team from the University of Utah School of Medicine conducted virtual interviews with course instructors and led focus groups with students and course TAs. Responses to end-of-course evaluations were also compared to answer question three.
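
For readers who want to see the quantitative comparison made concrete, the following is a minimal sketch (not the authors' analysis code) of a two-tailed independent-samples t-test comparing P1 and P2 scores; the score values below are hypothetical placeholders.

```python
# Minimal sketch of the two-tailed t-test comparison described above.
# The score lists are illustrative only, not study data.
from scipy import stats

p1_scores = [88.0, 92.5, 79.0, 85.5]   # hypothetical P1 exam scores (%)
p2_scores = [86.5, 90.0, 81.5, 84.0]   # hypothetical P2 exam scores (%)

t_stat, p_value = stats.ttest_ind(p1_scores, p2_scores)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```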

Case Consultation and Multiple-Choice Exam Scores

A total of 53 P1 students and 58 P2 students completed the mid-term exam, final exam, and at least two case consultations. The mid-term exam consisted of 54 multiple-choice questions, and students were not permitted to utilize resources such as course handouts or reference materials. The final exam was open-book/open-note and consisted of 61 multiple-choice questions. Both exams were administered via ExamSoft. Each self-care consultation was graded using the same rubric by the coursemaster and at least two other graders (faculty members or TAs). The grade was calculated by averaging the grader assessments. Since the number of required self-care consultations was reduced to two, some students completed two and others completed three. The total self-care consultation grade was calculated by averaging the scores for the total number of consultations completed (2 or 3).
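
The grade calculation described above amounts to two layers of averaging; a minimal sketch follows, with hypothetical rubric scores and grader counts.

```python
# Minimal sketch of the two-step averaging described above; all numbers
# are illustrative, not actual student or rubric data.
def consultation_grade(grader_scores):
    """Average the rubric scores assigned by all graders for one consultation."""
    return sum(grader_scores) / len(grader_scores)

def total_consultation_grade(consultations):
    """Average across however many consultations (2 or 3) a student completed."""
    per_consultation = [consultation_grade(scores) for scores in consultations]
    return sum(per_consultation) / len(per_consultation)

# Example: a student who completed two consultations, each scored by three graders
print(total_consultation_grade([[18, 19, 17], [20, 18, 19]]))
```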

Student and TA Focus Groups

Separate in-person focus groups were held with P1 and P2 students a few days after the midterm exam. All students enrolled in the course were invited via communication through the electronic learning management system (LMS). Participation was optional and lunch was served as an incentive. Five P1 students and one P2 student participated, and the same structured protocol of questions guided both groups (Table 2). After the final exam, separate virtual focus groups were held via Zoom with P1 and P2 students who had not participated in the midterm focus group. Recruitment information was again sent through the electronic LMS and students were incentivized with a $10 gift certificate for participation. Six P1 students and five P2 students participated in the final focus group, and a structured protocol of questions modified from the midterm focus group was utilized (Table 2). One face-to-face focus group was held with TAs from both the P1 and P2 courses together, over lunch, shortly after the midterm student focus groups. Two TAs from the P1 course and three TAs from the P2 course participated, and a structured protocol of questions was used to solicit the TAs’ perspectives on students’ learning behaviors and attitudes across the two class years (Table 2). The educational evaluators took detailed notes during each of the focus groups, which they analyzed and shared with other members of the study team. Their analysis consisted of deductive review of focus group notes, looking specifically for answers to study questions two and three.7

Instructor Interviews

In the weeks following the mid-term exam, four 20-30-minute interviews were conducted with course instructors, including two faculty members and two adjunct faculty members. A structured protocol of questions (Table 2) specifically designed to answer study questions two and three was utilized to conduct the interviews. Members of the educational evaluation team took detailed notes during the interviews and analyzed the data using a deductive review to look specifically for answers to study questions two and three.

Course Evaluations

The same anonymous end-of-course evaluation was assigned to P1 and P2 students, and they were incentivized to complete it with 5 extra credit points towards their final course grade. Evaluations were completed by 90.6% (48/53) of P1 students and 79.3% (46/58) of P2 students. In addition to 15 closed questions using a 6-point Likert scale (e.g., strongly disagree=1, disagree=2, mildly disagree=3, mildly agree=4, agree=5, strongly agree=6), the evaluation also included two short-answer questions. Averages on the closed questions and the type and frequency of comments on open-ended questions were compared between classes.

Results

Question 1:

There was no difference in performance on graded case consultations or mid-term and final examinations between P1 and P2 students (Table 3).

Question 2:

Preparation. Notes from interviews and focus groups indicated that P1 students spent more time preparing for class than P2 students. One course instructor noted that preparation among first-year students was higher because they are more ambitious, whereas P2 students have other difficult classes to prepare for. P2 students also indicated that they often divided up pre-class readings and shared notes with one another to reduce the amount of time needed to prepare for class.

Class Engagement. P1 students were also noted to be more engaged during class than P2 students. One instructor noted that P1 students were more attentive and talkative during class whereas P2 students appeared to be more focused on obtaining points needed to achieve a desired grade than on actually learning and processing the material.

Question 3:

Analysis of focus group and interview notes revealed that students, instructors, and TAs consistently recommended the course be taught in the P1 year. Each stakeholder group identified numerous advantages to offering the course earlier in the curriculum.

More opportunity to apply content. It was noted that including the course in the P1 year provides students with more opportunity to apply course material at their internships and experiential education sites. P1 students expressed satisfaction with using information and skills they learned in this course at their job sites and in conversations with family members.

P1 students prioritized course. P1 students prioritized this course above other concurrent classes and therefore dedicated a significant amount of time to studying for it. One P1 student stated the course was the most important class in the first year of pharmacy school, so they dedicated much more time to preparing for it. A TA also noted the course is the hardest thing P1 students will do and that they will need to take time to study in order to achieve a high grade.

Disadvantages of course in year 2. Stakeholders also identified disadvantages to offering the course during the P2 year. They noted that many other competing demands, specifically the 8-credit hour integrated pharmacotherapeutics course, were prioritized higher than the self-care course. One P2 student stated that it is very good that the class was moved to the first year because having the course during the P2 year is incredibly difficult, and unfortunately, students did not get the most out of it because they were forced to dedicate most of their time to therapeutics.

Nature of Grading Rubric. P2 students felt the self-care consultation grading rubric was too rigid and not representative of what they had observed on their clinical experiences. Given this inconsistency, many P2 students shifted their focus to meeting the rubric criteria rather than learning the material.   

End-of-course evaluations were overwhelmingly positive for both groups. Responses to closed questions were similar; however, the mean was higher for P1 students (Table 4).  Responses to open-ended questions revealed a few consistent themes. Students from both groups felt that the flipped classroom model was beneficial for learning. P1 and P2 students similarly thought that the in-class case consultations were stressful but believed that the chance of being randomly called upon in front of their peers motivated them to adequately prepare for class.

Discussion

Our study revealed that moving the non-prescription drug and self-care therapeutics course from the P2 year to the P1 year did not appear to negatively impact student performance. Although P2 students had more clinical experience and had already completed a full year of the professional curriculum, they did not perform better on graded self-care consultations or score higher on mid-term or final exams than P1 students. This is likely due to differences in learning behaviors that were observed between the two groups. P1 students appeared to spend more time preparing and were more engaged during class time. In addition, P1 students ranked the non-prescription drug and self-care therapeutics course as their highest academic priority, likely because it is the P1 course most directly relevant to clinical application. With more competing demands, P2 students likely had lower motivation and less time to dedicate towards studying for this course. The combination of decreased preparation and less engagement during class may have impacted P2 student performance. Had P2 students been equally as prepared and engaged as P1 students, they may have performed better on self-care consultations and exams given their additional exposure to the pharmacy curriculum and clinical experiences. Instructors, students, and TAs all preferred that the non-prescription drug and self-care therapeutics course be taught in the P1 year rather than the P2 year. These stakeholders based that preference on evidence of benefit for P1 students and evidence of harm for P2 students taking the course. Based on these results, the decision was made to continue to offer this course during the P1 year of the pharmacy curriculum at our institution.

Most literature related to this topic focuses on course content and instructional design methods.8-11 An article published by the Nonprescription Medicines Academy Steering Committee highlights the importance of fostering effective communication and patient assessment skills in the self-care curriculum.4 Interactive methods have been shown to improve student confidence in providing self-care recommendations, and a few studies have described the use of case simulations for development of these skills.3,8,10 Based on course feedback, P1 and P2 students similarly found the case consultations to be beneficial for learning and applying course content. It seems that a key benefit of offering this course in the P1 year is promoting development of communication and patient assessment skills earlier in the curriculum.

This study has several limitations. The number of instructors selected for interviews was small; therefore, findings from these interviews may be biased by the small sample size. In addition, the student focus groups may be impacted by self-selection bias, as the opinions of those who volunteered may not be truly representative of the class as a whole. We held two separate focus groups with students at the midpoint and end of the course and included different students in each in an attempt to ensure diversity of opinions; however, there was an uneven distribution of P1 and P2 students in the midpoint focus group. Our study also has several strengths. The course was administered to P1 and P2 students simultaneously with identical instructional designs, instructors, and evaluation methods. Despite the transition to online learning due to COVID-19, all course changes were identical for both groups of students.

Our analysis of the literature on this topic highlights gaps in knowledge related to non-prescription drug and self-care therapeutics within pharmacy education. There is limited information to guide the optimal timing of self-care courses within the pharmacy curriculum, and it is unclear what impact the timing of delivering this content may have on long-term knowledge retention, application of self-care concepts on experiential rotations, and performance on self-care-related PCOA and NAPLEX questions. Further studies of these two cohorts of students could be conducted by interviewing preceptors to assess performance on experiential rotations and by comparing PCOA and NAPLEX exam scores.

Conclusion

This study showed that P1 students performed as well as P2 students in a non-prescription drug and self-care therapeutics course, despite completing the course one year earlier in the curriculum. Numerous advantages to teaching this content earlier in the curriculum were identified. Our study also highlights the value of simulated case consultations in P1 skill development.

Conflicts of Interest: The authors have no conflicts of interest to report.

Acknowledgements

We thank the University of Utah College of Pharmacy Dean’s Office for providing financial support for the student incentives.

References

1.         Accreditation Council for Pharmacy Education. Accreditation Standards and Key Elements for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf. Accessed June 4, 2020.

2.         National Association of Boards of Pharmacy. NAPLEX Competency Statements and Sample Questions. https://nabp.pharmacy/wp-content/uploads/2020/04/NAPLEX-Competency-Statement-Sample-Questions.pdf. Accessed July 4, 2020.

3.         Zierler-Brown SL, VanAmburgh JA, Casper KA, et al. Status and recommendations for self-care instruction in US colleges and schools of pharmacy, 2006. Am J Pharm Educ. 2006;70(6):139. doi:10.5688/aj7006139.

4.         Nonprescription Medicines Academy Steering Committee, Ambizas EM, Bastianelli KM, et al. Evolution of self-care education. Am J Pharm Educ. 2014;78(2):28. doi:10.5688/ajpe78228.

5.         Hew KF, Lo CK. Flipped classroom improves student learning in health professions education: a meta-analysis. BMC Med Educ. 2018;18(1):38. doi:10.1186/s12909-018-1144-z.

6.         Krinsky DL, Ferreri SP, Hemstreet B, Hume AL, Newton GD, Rollins CJ, Tietze KJ, eds. Handbook of Nonprescription Drugs: An Interactive Approach to Self-Care. 19th ed. Washington, DC: American Pharmacists Association; 2017.

7.         Bhavsar VM, Bird E, Anderson HM. Pharmacy student focus groups for formative evaluation of the learning environment. Am J Pharm Educ. 2007;71(2):22. doi:10.5688/aj710222.

8.         Frame TR, Gryka R, Kiersma ME, Todt AL, Cailor SM, Chen AM. Student Perceptions of and Confidence in Self-Care Course Concepts Using Team-based Learning. Am J Pharm Educ. 2016;80(3):46. doi:10.5688/ajpe80346.

9.         Franks AS. Using course survey feedback to encourage learning and concept application in a self-care and nonprescription medications course. Am J Pharm Educ. 2009;73(8):153. doi:10.5688/aj7308153.

10.       Hamilton WR, Padron VA, Turner PD, et al. An instructional model for a nonprescription therapeutics course. Am J Pharm Educ. 2009;73(7):131.

11.       Krypel LL. Constructing a self-care curriculum. Am J Pharm Educ. 2006;70(6):140. doi:10.5688/aj7006140.

Analyzing the cost of medical education as a component to understanding education value

Problem

What is the cost of medical education? In 2016, the average yearly tuition was $36,755 for public US medical schools and $60,474 for private US medical schools,1 and the average indebtedness for all medical graduates was $189,165.3 But tuition is only part of the picture. The total annual financial cost of medical student education is currently estimated to be between $90,000 and $118,000 per student, or between $360,000 and $472,000 per graduate.4 Are these costs justified?

To answer this question, we turned to recent developments in healthcare delivery known as ‘value-driven outcomes’. In his seminal paper, “What Is Value in Health Care?”, Michael Porter addresses the relationship between cost and quality of outcomes by defining value in health care as desired patient outcomes divided by the cost to achieve those outcomes.5 This framework is now well established as a way to consider the relationship between cost and quality. In 2012, University of Utah Health Care (UUHC) developed the value-driven outcomes (VDO) model and tested its application in numerous settings. The key strategy of VDO was to develop a tool that “allows clinicians and managers to analyze actual system costs and outcomes at the level of individual encounters and by department, physician, diagnosis, and procedure.”6 If, for example, the data show that different surgeons incur different costs in performing a standard procedure, then meaningful steps can be taken to understand the source of the variability and reduce costs.
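
Restating Porter’s definition as a simple ratio (our notation, not a formula reproduced from the cited paper):

```latex
\text{Value} = \frac{\text{Patient outcomes achieved}}{\text{Cost of achieving those outcomes}}
```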

What if the thinking behind the UUHC VDO tool, which aimed to better understand costs in relation to quality of clinical care, could be adapted to better understand the cost of medical education in relation to the quality of that education, and consequently promote a process of better aligning costs with quality?

Approach

To explore this question, we undertook the challenge of translating the clinically focused VDO principles to medical education. In Phase One of the work, the focus was to understand the cost of medical education at our own institution. In Phase Two, the focus was to understand the outcomes desired by stakeholders (i.e., quality). In Phase Three, the work will be to integrate the cost and quality components to propose relevant measures of value for medical education. This report describes Phase One, relating to costs, and builds on previous reports of medical education cost in the literature.

The cost analysis targeted the medical student education program for the academic year 2015-2016 (Table 1). The major categories of cost were divided into two domains: Facility Costs and Professional Costs. These two domains were consistent with those of the VDO model. Within each of the two domains, major categories and detailed subcategories of cost (Table 1) were identified.

The project was reviewed by the University of Utah Institutional Review Board (IRB), deemed not to meet the definition of human subjects research and was therefore exempt from IRB oversight. This project was funded through support from an Accelerating Change in Medical Education Grant from the American Medical Association.

Setting

The University of Utah School of Medicine (UUSOM) is the only academic medical center (AMC) in Utah; it is state funded and has four major affiliated teaching hospitals. During the 2015-2016 academic year, 371 unique faculty taught students in large-classroom, small-group, and lab-based instruction across the 4-year program. Approximately 700 faculty were involved in clinical supervision of students in the clerkship-based years of the program. There were 415 students enrolled in the UUSOM during 2015-2016.

The integrated pre-clerkship curriculum included seven foundational science courses and longitudinal courses on clinical reasoning/skills and medical humanities (Figure 1). A large portion of the pre-clerkship curriculum was delivered in a $40 million education building constructed in 2005, which included an 18-room clinical skills center. The program utilized the University’s College of Nursing state-of-the-art, high-fidelity simulation center for selected aspects of the curriculum.

The third year of the program consisted of seven required core clerkships (internal medicine, pediatrics, obstetrics/gynecology, surgery, neurology, psychiatry, family medicine; 4-8 weeks each). Every clerkship included an objective structured clinical exam (OSCE). A required, summative, end-of-year-three, 8-station OSCE was modeled after the USMLE Step 2 CS examination. Fourth-year students were required to complete two 4-week courses (critical care, core sub-internship) and 24 elective credits (minimum: 12 clinical). In 2015-2016, students were also required to complete a scholarly project and community service and to engage in five half- to full-day simulation-based interprofessional education courses with students from four health professions colleges.

Data Collection

Facility Costs: Facility costs fell into six broad categories: Staff, Building/Facilities/Services, Information Technology, Simulation, Materials, and Other (Table 1). Sixty-four major elements of facility costs were identified, requiring contact with 18 individuals to complete data collection. All cost elements were determined. A single staff member in the UUSOM Dean’s Office compiled the facility cost data.

Professional Costs: Professional costs were all faculty-related costs, categorized as Administrative, Classroom teaching, Clinical teaching, and Mentoring/Advising (Table 1).

  • Classroom teaching. All classroom-based teaching time at UUSOM is cataloged in a central database housed in the UUSOM Dean’s Office of Finance. Teaching hours of all faculty who teach in the classroom setting, regardless of the number of students present, are captured and validated for accuracy at the end of every academic year at the department level. Costs associated with those hours were derived from median salary and benefits data for MD and PhD faculty who taught in the program in 2015-2016. The median salary plus benefits for MD and PhD faculty who taught in the curriculum was $316,483 and $138,886, respectively (Table 2). Total classroom teaching costs assumed variable amounts of preparation time based on the type of learning session (3 hours per 1 hour of large classroom instruction, 0.5 hours per hour of small group instruction, and 1 hour per hour of laboratory). In 2015-2016, 67% of instruction was delivered by MD faculty and 33% by PhD faculty.
  • Clinical teaching. To derive clinical teaching costs, assumptions about clinical teaching time were made based upon the medical education literature; a simplified arithmetic sketch follows this list. The range of time faculty spent teaching individual students in the outpatient environment was estimated at 0.5-0.8 hours per half day of clinic.7,8 The time faculty spent teaching individual students in the inpatient environment was estimated at 1.1 hours per full day of inpatient time.9 To calculate clinical teaching costs, the mean outpatient teaching time (0.65 hours per clinic half day) was used to derive costs for ambulatory experiences in our curriculum. Overall, clinical teaching time was calculated using the number of students in the clerkship years of the curriculum, assuming the number of outpatient and inpatient days for every student was nearly constant according to the standard lengths of clerkships and required fourth-year courses (Table 2). Finally, professional costs for clinical teaching of individual students in electives were derived based on the minimum fourth-year elective requirements for graduation (24 weeks, minimum of 12 clinical weeks).
  • Administrative costs were calculated based on percent effort directed at the medical student program multiplied by faculty annual salary and benefits. Course director costs were based on expected time spent performing course planning and administration and varied based upon the length of the course.
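
To make the assumptions above concrete, the following sketch shows the kind of arithmetic implied by the preparation-time multipliers and the per-student clinical teaching estimates. The hourly faculty cost and the numbers of contact hours, clinic half-days, and inpatient days are hypothetical placeholders, not values drawn from Table 2.

```python
# Minimal sketch of the cost arithmetic described in the list above; all
# inputs below are illustrative assumptions, not figures from Table 2.

# Preparation-time multipliers per hour of contact, by session type
PREP_HOURS = {"large_classroom": 3.0, "small_group": 0.5, "laboratory": 1.0}

# Literature-based estimates of clinical teaching time per student
OUTPATIENT_HRS_PER_HALF_DAY = 0.65   # midpoint of the 0.5-0.8 hr range
INPATIENT_HRS_PER_DAY = 1.1

def classroom_cost(contact_hours, session_type, faculty_cost_per_hour):
    """Contact time plus assumed preparation time, costed at an hourly rate."""
    total_hours = contact_hours * (1 + PREP_HOURS[session_type])
    return total_hours * faculty_cost_per_hour

def clinical_cost(n_students, outpatient_half_days, inpatient_days,
                  faculty_cost_per_hour):
    """Per-student teaching hours in clinic and on the wards, costed per hour."""
    hours_per_student = (outpatient_half_days * OUTPATIENT_HRS_PER_HALF_DAY
                         + inpatient_days * INPATIENT_HRS_PER_DAY)
    return n_students * hours_per_student * faculty_cost_per_hour

# Example with made-up inputs: 40 large-classroom hours at $150/hr, and a
# 100-student cohort with 80 clinic half-days and 120 inpatient days each.
print(f"Classroom: ${classroom_cost(40, 'large_classroom', 150):,.0f}")
print(f"Clinical:  ${clinical_cost(100, 80, 120, 150):,.0f}")
```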

Outcomes

Overall Education Costs

In 2015-2016, the overall cost of the 4-year medical student program was $32.7 million, which amounted to approximately $79,000 per student per year, far more than the annual tuition and fees of $36,094.
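
As a quick arithmetic check (a sketch, not the authors’ accounting), dividing the total program cost by the 415 enrolled students reported in the Setting section reproduces the per-student figure:

```python
# Rough per-student cost check using figures reported in the text
total_cost = 32_700_000        # overall 2015-2016 program cost (USD)
enrolled_students = 415        # UUSOM enrollment during 2015-2016
print(f"${total_cost / enrolled_students:,.0f} per student per year")  # ~ $78,795
```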

Facility and professional costs were nearly equal in magnitude ($16.3M vs. $16.4M, respectively). The three largest cost drivers in the analysis were clinical teaching ($10.0M), building costs ($6.6M), and staff ($4.6M).

The balance of costs for the pre-clinical curriculum (years 1-2) differed markedly from that of the clinical curriculum (years 3-4): professional costs related to faculty teaching were 8-fold lower in the pre-clinical curriculum than in the clinical curriculum ($1.24M vs. $9.88M, respectively). Conversely, professional costs related to faculty administration time were 3-fold greater in the pre-clinical years than in the clinical years ($2,660,079 vs. $882,164, respectively).

Value-Driven Outcomes Initiative

Conceptually, and most importantly, the study afforded us the opportunity to move beyond an estimation of cost to a consideration of how to optimize value (maximizing outcomes for the cost incurred), particularly related to professional costs. In 2018, we replaced our distributed model of education delivery, wherein over 500 faculty participated in education (many for only a lecture or two) with little direct association between such involvement and the distribution of funds to their departments, with a Core Educator Model, wherein approximately half that number of faculty each contribute a more substantive amount to education and receive direct financial support for those contributions. The aim of the Core Educator Model is to improve learning outcomes for students by consolidating the delivery of the program to a core group of expert educators who are both compensated and held accountable for their efforts.

Next Steps

The cost analysis at the UUSOM has prompted the redesign of funds flow supporting medical student education and has shifted the focus toward more heavily considering the value of education investments. The years ahead will provide opportunity to investigate the impact of the Core Educator Model on learning outcomes, the ability to deliver a high-quality medical education program, and the professionalization of faculty as educators.

At the 2017 AAMC Annual Meeting, Dr. Marsha Rappley, Chair of the AAMC Board of Directors, directly emphasized that the cost of what we do in education is undermining our ability to improve the health of the nation.10 Understanding costs has not traditionally been considered to be in the purview of educators. This needs to change. As medical educators strive to deliver high-value education, a concern for and an active engagement with the costs of medical education must be a part of the equation.

References

  1. Tuition and Student Fees Report, 2012-2013 through 2018-2019, Association of American Medical Colleges; www.aamc.org/data/tuitionandstudentfees/
  2. Rohlfing J, Navarro R, Maniya OZ, Hughes BD, Rogalsky DK. Medical student debt and major life choices other than specialty. Med Educ Online. 2014;19(1). doi:10.3402/meo.v19.25603
  3. 2017 Education Debt Manager for Graduating Medical School Students, Association of American Medical Colleges, members.aamc.org/eweb/upload/Education%20Debt%20Manager%20for%20Graduating%20Medical%20School%20Students–2017.pdf
  4. Cooke M, Irby DM, O’Brien BC. Educating physicians: a call for reform of medical school and residency: John Wiley & Sons 2010.
  5. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477–81.
  6. Lee VS, Kawamoto K, Hess R, et al. Implementation of a Value-Driven Outcomes Program to Identify High Variability in Clinical Costs and Outcomes and Association With Reduced Cost and Improved Quality. JAMA.2016;316(10):1061–1072. doi:10.1001/jama.2016.12226
  7. Ricer RE, Van Horne A, Filak AT. Costs of preceptors’ time spent teaching during a third-year family medicine outpatient rotation. Acad Med. 1997;72(6):547-551.
  8. Abramovitch A, Newman W, Padaliya B, Gill C, Charles PD. The cost of medical education in an ambulatory neurology clinic. J Natl Med Assoc. 2005;97(9):1288-90.
  9. Weinberg E, O’Sullivan P, Boll AG, Nelson TR. The Cost of Third-Year Clerkships at Large Nonuniversity Teaching Hospitals. JAMA. 1994;272(9):669–673. doi:10.1001/jama.1994.03520090033015
  10. Rappley, MD. Leadership Plenary Address.  Learn Serve Lead, AAMC Annual Meeting, Boston, Mass.  Nov 5, 2017.

Table 1

Table 1: Total Cost of Undergraduate Medical Education

Figure 1


Table 2

Table 2: Classroom and Clinical Teaching Costs

The Influence of Revising an Online Gerontology Program on the Student Experience

Posted 2021/04/08

Acknowledgements

We acknowledge the support of the University of Utah Teaching and Learning Technologies, the University of Utah College of Nursing, and the University of Utah Consortium for Families and Health Research.

Funding

Program revisions were funded through a University of Utah Teaching and Learning Technologies Online Program Development Grant.

Declaration of Interest

We have no conflicts of interest to declare.

Abstract

The recent adoption of gerontology competencies for undergraduate and graduate education emphasizes a need for national standards developed to enhance and unify the field of gerontology. The Gerontology Interdisciplinary Program at the University of Utah revised all of its gerontology course offerings to align with the Association for Gerontology in Higher Education's (AGHE) Gerontology Competencies for Undergraduate and Graduate Education (2014), while also making improvements in distance instructional design. In this study, we examined student course evaluation scores and written comments in six Master of Science in Gerontology core courses (at both the 5000 and 6000 levels) prior to and following alignment with the AGHE competencies and online design changes. Data included evaluations from the two semesters prior to and the two semesters following course revisions and were assessed using paired t-tests and thematic analysis. No statistically significant differences were found between pre- and post-revision evaluations. Qualitative comments did, however, show an increased focus on interactive and engaging technology after the revisions. These findings will be used for course and program quality improvement initiatives, including enhanced approaches to documenting and assessing competency-based education.

Keywords

Competency-based education, course evaluation, course revision, distance education

Background

Competency-based education (CBE) is growing in popularity and demand (Burnette, 2016; McClarty & Gaertner, 2015). Gerontology curriculum development has moved towards CBE, with national standards developed to enhance and unify the field of gerontology (Association for Gerontology in Higher Education [AGHE], 2014; Damron-Rodriguez et al., 2019). AGHE approved the Gerontology Competencies for Undergraduate and Graduate Education (AGHE, 2014), designed to serve as a curricular guide for undergraduate (i.e., majors, minors, certificates) and master's degree level programs. Benefits of using competencies for curricular revisions include shifting the focus to measurable outcomes (Burnette, 2016; Damron-Rodriguez et al., 2019; Wendt, Peterson, & Douglass, 1993), increasing program accountability for learning outcomes (Burnette, 2016; Damron-Rodriguez et al., 2019; McClarty & Gaertner, 2015), preparing students to graduate with necessary skills (McClarty & Gaertner, 2015), and training the gerontological workforce by bridging the gaps between aging services and gerontology education (Applebaum & Leek, 2008; Damron-Rodriguez et al., 2019).

As CBE has grown, online teaching and learning have also become more accessible and in demand (Means, Toyama, Murphy, Bakia, & Jones, 2010; Woldeab, Yawson, & Osafo, 2020). For programs looking to enhance curriculum and program accessibility, considering both CBE and distance course design is vital. Quality course design for courses incorporating CBE emphasizes opportunities for student application and practice, active learning strategies, and timely instructor response and feedback (Krause, Dias, & Schedler, 2015). In a previous paper (Dassel, Eaton, & Felsted, 2019) we described an approach to program-wide revisions that aligned with the AGHE competencies and met current recommendations in cyber-pedagogy. The University of Utah Gerontology Interdisciplinary Program (GIP) was in a position to make revisions that enhanced both CBE and online instructional design, using a course/credit model that embeds competencies within a traditional approach to higher education offering credit hours toward a degree (Council of Regional Accrediting Commissions [C-RAC], 2015). The University's Teaching and Learning Technologies (TLT) office released a funding opportunity for programs wanting to move completely online. The GIP applied to use these funds with two purposes: 1) transition the Master of Science program into a completely online format, and 2) improve the quality and consistency of existing gerontology courses through a full curriculum review with the experts at TLT. The goal was to make the fully online transition in a manner that allowed for dynamic online learning and to incorporate CBE within the program. In 2015, the GIP began the work to revise all program courses to meet best practices of online learning and to map program curricula to the Gerontology Competencies for Undergraduate and Graduate Education (AGHE, 2014).

Course revisions were completed in 2017. We then applied for and received official UOnline Program status and accreditation as a fully online program through the Northwest Commission on Colleges and Universities (2020), allowing the GIP to be recognized as an official UOnline Program at the University of Utah. The University is a member of the National Council for State Authorization Reciprocity Agreements (NC-SARA), which reduces the number of other states' regulations to monitor continually, resulting in a more efficient authorization process. Through NC-SARA, the GIP is able to offer and expand certain educational opportunities to students in and out of the state of Utah (National Council for State Authorization Reciprocity Agreements, 2020). In 2017, we were also awarded Program of Merit (POM) status from AGHE at the master's degree level. The process of curricular review, competency mapping, and online revision planning facilitated our application, review, and award of the POM.

Course revision and development followed a model that incorporated best practices in teaching pedagogy and online learning. These incorporated Fink's (2003) approach to designing college courses, using the DREAM exercise, situational factors exercise, course alignment grid, and taxonomy of significant learning. A backward design approach (Wiggins & McTighe, 2005) helped faculty begin with competencies and learning objectives and then identify assessments to measure those objectives. Bloom's (1984) taxonomy was used to design assessments that accurately evaluate the learning experiences, and active learning principles (Bonwell & Eison, 1991; Prince, 2004) guided choices to facilitate dynamic online learning. Instructional designers met individually with instructors to work through, enhance, and redesign courses to facilitate this work.

Upon completion, the program continued to assess student learning using individual course assessments, grades, progress towards graduation, annual and exit student interviews, and alumni surveys. However, we wondered how students experienced and reacted to the changes before and after revision of the entire curriculum. As this process spanned four years and multiple courses, we became interested in whether existing data might facilitate a better understanding of the student experience pre compared to post program revision.

The purpose of this paper is to compare student course evaluations from six core courses of the Master of Science in Gerontology program before and after alignment with AGHE competencies and online design changes. The objective of this study is to analyze pre and post qualitative and quantitative student evaluations in order to assess indicators of program quality and improvement. We hypothesize that course evaluations will improve from pre to post revision. Testing of this hypothesis occurred through two aims:

Aim 1: Assess the changes, pre to post course revision, in numerical course ratings provided by students.

Aim 2: Assess the changes, pre to post course revision, in student open-ended feedback submitted with course evaluations.

Methods

Course Selection

For the purpose of the current study, we compared de-identified, anonymous student course evaluations in six of our Master of Science core courses before and after the course revision and alignment. The six core courses required in our Master of Science program are: 1) GERON 5001/6001: Introduction to Aging, 2) GERON 5370/6370: Health and Optimal Aging, 3) GERON 5002/6002: Services Agencies and Programs for Older Adults, 4) GERON 5500/6500: Social and Public Policy in Aging, 5) GERON 5604/6604: Physiology and Psychology of Aging, and 6) GERON 5003/6003: Research Methods in Aging (Note: 5000- and 6000-level courses are considered graduate level by the University of Utah). Two additional core courses, GERON 5990/6990: Gerontology Practicum and GERON 6970/6975: Gerontology Thesis/Project, were omitted from this study because they were newly created in an online format, are mentor-based (one instructor to one or two students), and do not receive evaluations due to the small course size.

These six core courses underwent significant redesign across three consecutive semesters. Each instructor worked one-on-one with an instructional designer provided through the UOnline grant mechanism. Instructional designers, associated with the University of Utah's TLT, aided course instructors in updating their courses with the latest technological media to provide online content in innovative and effective ways.

Course Evaluations

Faculty were guided in assessing and revising courses through the use of the AGHE competencies (2014) and Fink's (2003) and Bloom's (1984) taxonomies. AGHE competencies were first mapped across all gerontology courses, identifying redundancy, overlap, and missing content; this process is described in detail in Dassel et al. (2019). Faculty noted recommended revisions based on competencies specific to objectives and modified content. These were incorporated as faculty worked with instructional designers on their assigned course. Next, instructors used the framework of taxonomies to redesign the student learning experience for an active online format. Fink's taxonomy is a non-hierarchical model that defines six major domains that need to be present for a complete learning experience: foundational knowledge, application, integration, human dimensions, caring, and learning to learn (Fink, 2003). Bloom's taxonomy, revised posthumously by a group of cognitive psychologists in 2001, is a hierarchical model that defines and distinguishes six categories of learning (Bloom, 1984; Anderson & Krathwohl, 2001). Bloom's six categories, each intended to be mastered before moving to the next, are remember, understand, apply, analyze, evaluate, and create. These designations allow for the design of accompanying assessments that accurately evaluate the learning experience by level.

A request for permission to analyze student course evaluations was submitted to and reviewed by the Institutional Review Board (IRB) at the University of Utah. The IRB determined that oversight was not required because this work does not meet the definition of Human Subjects Research. All student evaluations are completed anonymously. Evaluations are used as a quality improvement tool to assess course outcomes and faculty instruction. In order to obtain a representative sample of student evaluations, we assessed evaluations from the two consecutive semesters immediately prior to the course revision and the two consecutive semesters immediately following the course revision.

Course evaluations were emailed to students during the last month of the semester. Students were asked to voluntarily complete the anonymous course evaluations. The data, consisting of numerical scaled response options and open-ended comment sections, were summarized and provided to course instructors at the end of the semester once grades had been submitted. From the full list of course evaluation questions, we selected 10 quantitative questions that we felt were most relevant to course revision: 1) Overall course evaluation, 2) The course objectives were clearly stated, 3) The course objectives were met, 4) The course content was well organized, 5) The course materials were helpful in meeting course objectives, 6) Assignments and exams reflected what was covered in the course, 7) I learned a great deal in this course, 8) As a result of this course, my interest in the subject increased, 9) Course requirements and grading criteria were clear, and 10) I gained an excellent understanding of concepts in this field. Response options were based on a Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Open-ended questions asked students to comment on: 1) course effectiveness, 2) the online components of the course, and 3) comments intended for the instructor.

Data Analysis

Data analysis occurred in two phases. Phase one focused on quantitative data from the course evaluations. Pre- and post-revision data were aggregated for each course. Because students do not take a course multiple times, analyzing pre-to-post data by individual student is impossible. Rather than focus on the individual student as the unit of analysis, we assessed pre and post evaluations using the course as the unit of analysis. The means of each sample were calculated for each of the course evaluation questions (e.g., overall course rating, course objectives) as a proxy for evaluating the effectiveness of curriculum revision and course mapping. We used univariate statistics to describe frequencies and mean responses for each evaluation question. Paired-samples t-tests were conducted on the course means to examine score changes from pre to post course revision. Each course was compared separately, and then data were pooled for all courses to assess program change over time.

For the qualitative portion of this study, we compiled and organized all of the open-ended student responses from the course evaluations by course and semester. Data were uploaded into NVivo (QSR International, 2018) and assessed in a two-phase process. First, each comment was read and coded into four a priori codes: 1) pre-commendations, 2) pre-recommendations, 3) post-commendations, and 4) post-recommendations. The second phase of coding used thematic analysis to assess the main themes presented by students (Saldaña, 2009). This allowed us to assess potential change in student thoughts pre- to post-revision.
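As a concrete illustration of the quantitative phase described above, the short sketch below runs a paired-samples t-test on course-level means (the course, not the student, is the unit of analysis). The values are invented for illustration and are not the study's data.

```python
# Minimal sketch of the paired t-test on course-level means described above.
# The ratings below are illustrative placeholders, not study data.
from scipy import stats

# Mean "overall course rating" for each core course, aggregated across the
# two semesters before and the two semesters after the revision.
pre_means = [4.2, 4.5, 4.1, 4.6, 4.3]
post_means = [4.4, 4.6, 4.3, 4.7, 4.4]

# Paired-samples t-test with the course as the unit of analysis
t_stat, p_value = stats.ttest_rel(post_means, pre_means)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```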

Results

Data are anonymous, and demographics were not gathered as part of student evaluations. However, we do have a general idea of student demographics within the GIP. During a recent fall semester, we had 189 unique students enrolled in gerontology courses. Students represented 6 master's degree programs and 3 doctoral programs, with 9 students undeclared and 4 nonmatriculated. The average age of students was 29; 137 students were female (72.5%), 50 male (26.5%), and 2 unknown (1.05%). The majority of students were white (67.72%), with others identifying as Hispanic/Latino (13.76%), Asian (7.40%), unknown ethnicity (4.23%), multi-racial (3.70%), international (1.59%), Black/African American (1.05%), and Native Hawaiian or other Pacific Islander (0.53%).

A summary of the t-test results is found in Table 1. Some data were unavailable due to too few responses. One course, GERON 5500/6500, did not have sufficient data for analysis (fewer than 2 observations per class) because it was a newly developed course and lacked sufficient pre-revision data; this course was retained in the overall pre-to-post comparison. Paired t-tests comparing overall course ratings pre and post course revision revealed a trend toward improvement in the GERON 5001/6001: Introduction to Aging course (t=4.09; p=.05). Examination of aggregate data from all of the courses on individual course evaluation questions showed trends toward improvement in two areas: 1) "The course objectives were met" (t=1.47; p=.09) and 2) "I learned a great deal in this course" (t=1.36; p=.09). There were no statistically significant differences on overall or individual course evaluation questions pre to post course revision.

Table 1. Assessment of Course Evaluation Questions Pre- to Post-Revision

Open-Ended Student Comments

Qualitative analysis summarizes both the overall numbers of commendations and recommendations and the content of comments to assess change pre to post revision. A total of 298 codes were documented pre-revision (see Table 2). Of these, 71% were commendations, focusing on positive feedback about course content, online teaching, and instructor efficacy. Comments focusing on recommendations for change comprised 29% of the total pre-revision codes. These recommendations centered on issues with course content, technology, and instruction. Comments in the recommendations category included both negative reviews and constructive ideas for change. Post-revision comments were coded 257 times; 73% of these were commendations and 27% were recommendations (Table 2). Percentages are very similar pre to post, demonstrating that the overall balance of positive and negative comments changed little following revision.

Table 2. Overall Pre to Post Coding of Course Evaluation Qualitative Comments

The second phase of qualitative analysis assessed the content of the comments to understand the topics focused on pre to post revision. Student comments were evaluated for each course; pre-revision comments were analyzed first, followed by the post-revision comments. After identifying themes within the pre-revision comments, a summary was written of the main ideas. Post-revision comments were then read and coded for the same course, and a summary was written of the main themes for the post-revision codes. Representative quotes were included in each summary to present examples of themes. The pre- and post-revision summaries were then compared for each course, and any major thematic changes were noted in a final course comparison summary. Once this process was complete for each course, all course comparison summaries were re-read and coded for similarities and differences across the group of courses. Table 3 includes a summary of each course, including representative quotes.

Table 3. Analysis of Student Comments by Class

Summary of Qualitative Comments Pre to Post Revision

The following summarizes overall findings from qualitative analysis of student open-ended course evaluation comments. Student comments increased in two main areas post-revision when compared to pre-revision: 1) connection to the instructor, and 2) organized content.

Connection to the Instructor. Students expressed not wanting all the extra technological features integrated into courses, such as screen- and video-recorded PowerPoint lectures, interactive quizzes, and movie creation apps. A variety of apps (e.g., Flipgrid, Lucidchart, Pathbrite) led to confusion and overwhelmed students. However, students emphasized the importance of technology in helping them maintain a connection with the instructor. For example, one student stated, "I especially liked the introduction videos before each module because it felt like the instructor was in constant communication with the class." The adoption of video was particularly useful in helping students feel this connection.

Organized Content. Comments emphasized the importance of balancing assignments, content, and the amount of work. Students noted that spreading assignments out across the semester helped them disperse their stress; this was most often mentioned when a course had multiple assignments due the last week of the semester. One student commented, "Assign one of the larger projects to be due at mid-term, to space out the stress." Students value learning, and in an online environment this requires incorporating moments of accountability to help students interact with the content. Students emphasized wanting these opportunities for accountability, and when a course lacked them, they acknowledged their lack of course interaction: "I have mixed feelings about the assignments. On the one hand, I feel that the small amount of assignments was nice, but also allowed for me to be less involved in the course than perhaps I should have."

Discussion

In this mixed-methods, multi-year study examining student evaluations from pre to post course revisions, quantitative analysis did not produce statistically significant differences in mean course evaluation scores. This may be attributed to the small sample size, the use of aggregate data rather than individual data points, missing data, and little variation in scores, with most courses receiving high mean scores.

The qualitative analysis of student evaluations yielded useful information. We found that students value technology that augments their connection to the instructor and course organization. Some students do not want all the extra features that come with a wide variety of technology (e.g., external sites to create blogs, mini podcasts, video creation). Students noticed video introductions, video lectures, and video summaries, often stating that these made them feel connected to the instructor. This aligns with the quality indicators in CBE online courses that emphasize the importance of technology and navigation as one of seven recommended areas for measurement (Krause et al., 2015). Students want to learn, and learning online necessitates the incorporation of one or more forms of accountability, which students themselves want. In addition, students desire forms of accountability throughout the semester, rather than just at semester's end. The balance of assignments, content, and amount of work matters to students. Instructional design is vital in quality online courses, and accountability should be an area in which faculty and instructional designers collaborate to enhance quality in online CBE. Two quality indicators related to accountability are 1) assessment and evaluation, and 2) competence and learning activities (Krause et al., 2015).

We also observed an increase in student comments specific to a certain topic each time a major adjustment occurred, whether pre or post revision. This could be an outcome of the "growing pains" related to trying something new. Similar to piloting research, faculty pilot-testing teaching strategies often need student feedback to refine changes in a manner that actually works for students. Checking in with students demonstrates the quality indicator of learner support and allows faculty to assess and evaluate their course as part of quality assurance (Krause et al., 2015).

The information obtained from this study is relevant to course and program quality improvement. Strengths include the mixed-methods format and multi-year analysis. Limitations include the inability to obtain pre- and post-revision data from the same students, as students cannot be required to take a course twice. In some cases, there were not sufficient data for analysis, as t-tests require at least two observations per class (e.g., GERON 5500/6500). Insufficient data were attributed to new course development and to changes in the student evaluation questions that occurred across the University of Utah, which meant that questions differed pre to post revision for some courses. In addition, conducting a technology revision simultaneously with competency revisions makes it difficult to tease out changes due to course format versus curriculum. Instructors need to remind students which competencies are being covered and how they will be expected to interact with this content during the course. Clear learning outcomes and student comprehension of the proficiencies they are working on enhance CBE (Burnette, 2016).

Mapping the entire GIP curriculum to the AGHE competency guidelines (Dassel et al., 2019) prepared us to apply for and receive Program of Merit designation through AGHE. This Program of Merit status has provided the foundation for future application for accreditation through the Accreditation for Gerontology Education Council (AGEC), which requires that the programs under review align with the AGHE Gerontology Competencies for Undergraduate and Graduate Education (2014). Students from all health science disciplines participate in undergraduate and graduate level certificates available through our program. Improving program quality and demonstrating the efficacy of such changes should strengthen the ability of students to work with older adults in community and health care settings.

Programs should build on CBE by developing measures to assess student achievement of competencies. This process can be used to improve the quality of the student learning experience (Damron-Rodriguez et al., 2019; McClarty & Gaertner, 2015). Our program is developing a tool that will allow faculty to assess program learning outcomes and AGHE competencies within each class. Data will be gathered every 3 years and will facilitate progress at both the course and program levels. Tools such as this can be shared in an effort to develop toolkits for other gerontology programs building quality models of competency-based education (Damron-Rodriguez et al., 2019). It is our goal to enhance the ability of graduates to demonstrate the competencies and skills they have gained through high-quality gerontology education as they work with employers and older adults. We will enhance our approach to CBE by assessing the paths alumni take and their use of competencies to communicate their knowledge, skills, and contributions within the workforce. Advancing CBE in gerontology needs to happen through organizational leadership (Damron-Rodriguez et al., 2019). Our program benefits from being housed within a College of Nursing that follows a CBE model and process for accreditation. We can learn from this process of documentation, tracking, assessment, and quality improvement to enhance the rigor of our approach to CBE in gerontology programs. Finally, we plan to share our CBE strategies, assessment tools, and models with gerontology programs in the Utah State Gerontology Collaborative.

The results of this study have implications beyond the Gerontology Interdisciplinary Program to the larger Health Sciences campus where our program and college are housed. Many interprofessional health science students enroll in our courses. Thus, improving program quality and demonstrating efficacy ultimately strengthens students’ ability to work effectively with older adults in a variety of settings.

References

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Applebaum, R. & Leek, J. (2008). Bridging the academic/practice gap in gerontology and geriatrics: Mapping a route to mutual success. Annual Review of Gerontology and Geriatrics, 28, 131-148. doi: 10.1891/0198-8794.28.131

Association for Gerontology in Higher Education [AGHE] (2014). Gerontology competencies for undergraduate and graduate education. Washington, DC: Association for Gerontology in Higher Education. Retrieved from: https://www.geron.org/images/gsa/AGHE/gerontology_competencies.pdf

Bloom, B. S. (1984). Taxonomy of educational objectives: The classification of educational goals. New York: Longman.

Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. ASHE-ERIC Higher Education Report. Washington, DC: School of Education and Human Development, George Washington University.

Burnette, D. M. (2016). The renewal of competency-based education: A review of the literature. The Journal of Continuing Higher Education, 64, 84-93. doi: 10.1080/07377363.2016.1177704

Council of Regional Accrediting Commissions [C-RAC]. (2015, June 2). Framework for competency-based education [Press release]. Retrieved from https://download.hlcommission.org/C-RAC_CBE_Statement_6_2_2015.pdf

Damron-Rodriguez, J., Frank, J. C., Maiden, R. J., Abushakrah, J., Jukema, J. S., Pianosi, B., & Sterns, H. L. (2019). Gerontology competencies: Construction, consensus and contribution. Gerontology & Geriatrics Education, 40(4), 409-431. doi: 10.1080/02701960.2019.1647835

Dassel, K., Eaton, J., & Felsted, K. (2019). Navigating the future of gerontology education: Curriculum mapping to the AGHE competencies. Gerontology & Geriatrics Education, 40(1), 132-138.

Fink, L.D. (2003) Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey‐Bass.

Krause, J., Dias, L. P., & Schedler, C. (2015). Competency-based education: A framework for measuring quality courses. Online Journal of Distance Learning Administration, 18(1). Retrieved from https://www.westga.edu/~distance/ojdla/spring181/krause_dias_schedler181.html

McClarty, K. L. & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. AEI Series on Competency-Based Higher Education. Washington, DC: Center on Higher Education Reform & American Enterprise Institute for Public Policy Research.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Dept. of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service website. Retrieved from https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

National Council for State Authorization Reciprocity Agreements [NC-SARA]. (2020). About NC-SARA. Retrieved from https://nc-sara.org/about-nc-sara

Northwest Commission on Colleges and Universities. (2020). Accreditation. Retrieved from https://www.nwccu.org/accreditation%20/

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223-231.

QSR International Pty Ltd. (2018). NVivo qualitative data analysis software (version 12) [Software]. Retrieved from https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home. Accessed May 17, 2020.

Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: SAGE.

Wendt, P. F., Peterson, D. A., & Douglass, E. B. (1993). Core principles and outcomes of gerontology, geriatrics, and aging studies instruction. Washington, DC: Association for Gerontology in Higher Education and the University of Southern California.

Wiggins, G.P., & McTighe, J. (2005). Understanding by design. (2nd Ed.).  Alexandria, VA: Association for Supervision and Curriculum Development.

Woldeab, D., Yawson, R.M, & Osafo, E. (2020). A systematic meta-analytic review of thinking beyond the comparison of online versus traditional learning. E-Journal of Business Education & Scholarship of Teaching, 14(1), 1-24.

Student-Faculty Co-Production of a Medical Education Design Challenge as a Tool for Teaching Health System Science

Posted 2021/04/08

Funding

AMA Accelerating Change in Education Innovation Grant program

Disclosures

None

What problem was addressed:

Medical schools prepare students to enter a complex health system with the knowledge to care for patients but provide little training on the health system they will join. Health systems science (HSS) is an important topic that is beginning to enter medical school curricula. The difficulty lies in how to teach this complex topic, which has been slow to gain traction with key stakeholders.1 We argue that HSS is not a difficult concept to implement if it is presented in a familiar context that encourages active engagement with the material. We present our educational innovation, which taught HSS in an active learning setting and increased buy-in from medical students and faculty.

What was tried:

We organized the Medical Education Design and Innovation Challenge (MEDIC), a competition that taught medical students HSS as they competed to design an educational innovation. We introduced 24 medical students from all years of training to HSS using the Shingo ModelTM 2 as a framework, the model used successfully by the University of Utah Health system. Students were divided into 6 teams and asked to identify an area for improvement and then design a program, course, or initiative using this model. The Shingo ModelTM requires users to identify guiding principles, key stakeholders, and important outcomes as precursory steps to any innovative problem-solving design. This encouraged students to understand the education system before proposing a solution to a perceived deficit. The event was divided over two days, with a total of 8 hours of participation. Students were introduced to HSS and the Shingo ModelTM during an introductory dinner and then placed into teams. The following day, teams had 4 hours to identify a deficit within their school's system and design a solution (e.g., a mentorship program for specialty exploration) using the Shingo ModelTM as a framework. Teams then pitched their proposals and were evaluated on creativity, feasibility, and evidence of using systems science in their design. Winners were determined by majority vote of guest faculty judges, coordinators, and participants.

What lessons were learned:

Two major challenges exist in teaching HSS to medical students: relevance to learners and incorporation into already full medical school curricula. Survey data from MEDIC suggest this project-focused approach to teaching HSS addressed both challenges. Among respondents (63% response rate), 100% of students felt that MEDIC was relevant to them, 93% thought the Shingo ModelTM was an appropriate framework for approaching medical education innovation, and 73% were confident in their ability to apply the model after only four hours of team-based work. Eighty percent of students reported that they developed new skills and changed their perception of medical education design by participating in MEDIC, and 80% agreed or strongly agreed that all students would benefit from exposure to HSS in the core curriculum. This experience could be easily reproduced at other institutions. The positive response of the students and their success in proposing innovative ideas for medical education encouraged us to continue using this framework as we engage students and faculty in ongoing curricular reform.

References

  1. Gonzalo JD, Hawkins R, Lawson L, Wolpaw D, Chang A. Concerns and Responses for Integrating Health Systems Science Into Medical Education. Acad Med. 2018; 93(6):843–849
  2. The Shingo Model. Shingo Institute. https://shingo.org/model