Assessment of Learners

This document will become one of many chapters in a textbook on education in the health professions, to be published by Oxford University Press. All of the chapters in the textbook follow a problem-based learning (PBL) format dictated by the editors and used by these authors.

Learning objectives

  1. Compare and contrast feedback, formative assessment, summative assessment, evaluation, and grading.
  2. Identify frameworks for providing learner assessment and tracking growth in the health professions.
  3. Identify key components of feasible, fair, and valid assessment.
  4. Describe the roles and responsibilities of both preceptors and learners in optimizing assessments and evaluations.

Abstract

This chapter explores the concepts of learner assessment and evaluation by presenting a case in which a medical student participates in a year-long clinical experience with a preceptor. Using various data points and direct observation, the student is given both formative and summative assessments throughout the learning experience, providing them with information needed to guide their learning and improve their clinical skills. As the case progresses, questions are posed in order to help identify key concepts in learner assessment and explore the interconnectivity between assessment, evaluation, feedback, and grading. The information presented will help educators identify and develop effective assessment strategies that support learner development and growth.

Key Words: assessment, evaluation, learner, formative, summative, grading

Case

Morgan is a medical student who is beginning a new pediatric clinical experience. This learning opportunity includes weekly outpatient clinics with you as a preceptor. This is Morgan’s first opportunity to learn clinical skills outside of the classroom setting.   

  • What is the role of a learner in the clinical setting as they progress through their training?
  • What are some of the frameworks available for assessing learners’ abilities in the clinical setting?

After introducing yourself and the clinic staff to Morgan, you give him a quick tour of the facilities before sitting down in your office to discuss his current learning goals. He identifies obtaining a history as an area he would like to improve. Specifically, he would like to improve his ability to take a history that is comprehensive but tailored to the chief complaint and the clinical setting. You advise him that you will regularly assess him and provide feedback. You recommend that he keep a patient log so he can track the number of patients and complaints he sees throughout the experience.

  • What is the difference between assessment and feedback?
  • What are the roles of the faculty and the learner in providing the learner with assessment and feedback?

In the first week, Morgan sees a 6-year-old girl who presents with a fever. You follow the student into the room, allowing him to enter first. After introducing Morgan, you ask the family if they are comfortable with Morgan taking the history. The family is excited to contribute to the education of a medical professional and readily agrees. Morgan stands against the wall, looks down at his tablet to pull up his notes, and begins: “The medical assistants told us your daughter has a fever. How long has it been going on?” He then proceeds to ask about the nature of the fever, some associated symptoms (including runny nose, cough, and rash), and alleviating and exacerbating factors. He asks about her past medical history, including surgeries and medicines, and then conducts a full family and social history. You ask the family a few follow-up questions and perform a physical exam, finishing the visit by discussing the most likely diagnoses and developing a plan with the patient and family.

After sending the family on their way, you ask Morgan how he felt the history went. You ask him to reflect on what he did well and what he should continue to work on. After listening to Morgan’s response, you provide feedback on your assessment, including concrete suggestions for improvement. The student thanks you for the feedback and commits to integrating your recommendations into his practice. He also thanks you for the opportunity to observe your approach to counseling families and obtaining a physical exam that puts the patient at ease.    

  • What are the key components of effective formative assessment?
  • How often should formative assessment occur to optimize learning and growth?

You continue to work with Morgan over the following weeks. He sees multiple children of all ages with several different complaints. When able, you accompany him into the room so that you can directly observe his history-taking. However, there are multiple times that he goes in alone and then reports his findings to you. During many encounters, he is unsure how to interpret his exam findings and asks you to double-check his technique and interpretations. At times, you ask follow-up questions related to the chief complaint and he admits that he does not know the answer. When this occurs, he reports honestly that he did not ask the question. He promises to ask when you both return to the room. You note that he frequently asks the question in subsequent encounters with patients.

  • What role does trustworthiness play in the assessment of learners?

Three months into the year, a 5-year-old child, Kai, presents with a fever. You accompany Morgan into the room to directly observe his history. You obtain the family’s permission for Morgan to participate in their child’s care. Morgan begins by introducing himself and asking the family if they are okay with him taking some notes while they talk. He begins, “Kai, I’m sorry you aren’t feeling well. Can you tell me what’s been going on?” Kai says she doesn’t feel good and has a fever. Morgan proceeds to obtain a comprehensive but focused history of the fever, including both the patient and parents in the conversation. While taking the history, he uses active listening skills, asks clarifying questions, and summarizes the information for the family to ensure he fully understands their concerns. He asks about commonly associated symptoms and symptoms related to possible diagnoses. He asks the family about treatments they have tried (including over-the-counter and homeopathic remedies), asks about their concerns regarding the fever, and includes recent travel and sick contacts in his social history. Before moving on to the physical exam, the student asks the family if he has missed anything important about the chief complaint or about Kai’s medical history.

  • What does a learner need to do to show “competence” or the ability to effectively perform a professional activity without supervision?
  • How do learner assessment frameworks help track improvements in learner performance?

After you conclude the visit and leave the room, you ask Morgan how he feels the encounter went and how he has progressed with his goal of obtaining a history. He is happy with his progress and able to identify areas in which he has improved and things he would still like to work on. You agree that his skills have improved and provide him with formative feedback regarding your assessment of his performance today. You ask him to stay after the clinic so the two of you can review his progress to date.

  • What is the difference between formative and summative assessment?
  • What are the benefits of longitudinal relationships in both formative and summative assessment?

After the clinic, you sit down with Morgan. You ask him to pull out his patient log, and the two of you go through the patients he has seen over the 3 months he has been with you so far. He has been collecting a portfolio of interesting cases and experiences. He brings with him the notes he took when getting feedback on his weekly formative assessments. The two of you go through his portfolio and patient log. He reflects on the improvements he has made, identifies areas in which he can continue to improve, and sets new learning goals. You agree with his findings and provide further guidance on the growth you have observed and areas he can continue to work on. You continue this pattern of sitting down with Morgan every 3 months throughout the remainder of the learning experience to review his progress, discuss learning goals, and add to his portfolio.

At the end of the year, you thank Morgan for his participation in the care of your patients. The school has an evaluation form that asks about students’ strengths and areas requiring further growth. You consider all the work you have done with Morgan, his assessments, and his growth throughout the year. You fill out the evaluation form, providing a summative assessment that includes both quantitative (performance ratings) and qualitative (narrative comments) information. Morgan is also required to take a final “exam” that includes a multiple-choice test and an observed encounter with a simulated patient, in which an actor plays the role of a patient. Morgan receives a final grade for the rotation with comments on his performance.

  • What are the key components of effective summative assessment?
  • What are the methods and key components of learner evaluation?
  • What are the similarities and differences between assessment and evaluation?
  • What role does the learner have in accepting and reviewing their evaluation?

Assessment and learning in health sciences education

The goal of health sciences education is to provide the environment, information, and experiences needed for learners to develop the knowledge, skills, and attitudes required to practice as a professional in their specific field. Ultimately, the responsibility for learning lies with the learner.1 The teacher’s role is to support and challenge learners in their journey, providing information, supervision, and assessment in order to help them grow and improve in their abilities.

Assessment is one of the most important methods teachers use to support and challenge their learners.2,3 Assessment, in essence, is the process of judging a student’s performance relative to a set of expectations.4 Through assessment, the teacher guides learning by helping students identify their unique strengths and weaknesses and providing concrete recommendations to address these areas. These may include knowledge gaps, skill sets requiring further practice, or even misunderstandings in requirements and attitudes that need reframing. This is why learning and assessment are linked together: one cannot really be achieved well without the other. In an ideal learning environment, every teacher considers it their responsibility to assess learners routinely and consistently, challenging them to demonstrate their current abilities and then supporting them in their growth where needed.

Assessment can take many forms, varying based on the circumstances of the environment and learner; the types of knowledge, skills, and attitudes being assessed; and the primary purpose for which the assessment information will be used. For example, types of knowledge and skills can vary from remembering basic facts to thinking critically to conducting complex surgical procedures. Assessments, therefore, will differ and may range from multiple-choice written tests to oral examinations to procedural skills simulations or direct observations in the clinical learning environment, respectively. Overall, assessment should be used to tailor individual learners’ education and experiences to support their growth. Each assessment may be formative and relatively informal, geared toward iteratively shaping performance, or more formal, geared toward providing information about learning outcomes.

Formative vs. Summative Assessment

Over time, specific terms have emerged to differentiate among the variations in assessment described above. One of the most important distinctions is between formative and summative assessment. Think of these as a continuum.5 On one end is formative assessment. Formative assessments tend to be less formal and focused on providing information to help students ‘form’ their knowledge, skills, and attitudes. They should be performed regularly and may be completed after a single experience or observation. On the other end of the spectrum are summative assessments, which tend to be more formal and focused on “summarizing” a learner’s knowledge and skills after a certain time period. Formative and summative assessments can be systematically sequenced and combined within a school to optimize learning, so that assessments from individual teachers contribute to a larger program of assessments conducted by school leadership to create a holistic understanding of learners’ strengths and weaknesses.6,7 In this chapter, we focus on assessments made by individual teachers.

With formative assessments, teachers use limited data to identify learners’ strengths and areas needing further development and help guide the learner’s education and experiences to support this learning. With summative assessments, teachers use more comprehensive information in order to judge learning outcomes achieved to date and check the learner’s knowledge and skills. These assessments tend to combine information from multiple sources and settings and include information from different time points. To better understand the difference, take the example of a runner competing in a marathon. The athlete is receiving formative assessments and feedback throughout the race, including lap time, current pace, and current position. After the race is completed, the runner gets a summative assessment, including average pace per mile, time to course completion, and overall rank among finishers. Formative assessment may be used by the runner to adjust their strategies and plans throughout the race. Alternatively, summative assessment information can help to guide the runner as they prepare for and begin their next race. Often, a summative assessment is tied to a decision-making process, such as a final grade.

Figure 1: Relationship of formative and summative assessment


Assessment and Evaluation

Another important distinction in education is between assessment and evaluation. Although the terms are often used interchangeably, there are differences. Assessment refers to the process of collecting evidence of learning, identifying learners’ strengths and areas needing further development and growth. Evaluation refers to the process of comparing evidence of progress to learning objectives or standards (criterion-referenced) or to other learners’ performances (norm-referenced). In other words, assessment focuses on the learning process, while evaluation focuses on learning outcomes compared to a standard. Keeping the focus on assessment supports growth-mindset learning and the idea that health professionals are life-long learners.8 The shift to competency-based education and assessment emphasizes criterion-referenced evaluation, promoting self-improvement in learning rather than competition with other learners. Overemphasis on evaluation, by contrast, can create an environment that fosters performance-mindset learning.

Now, you may be wondering how feedback and grading fit into assessment and evaluation. Feedback refers to information provided to the learner about their knowledge, skills, or attitudes at a single point in time, after a direct observation or assessment. Grading is a form of evaluation, providing the learner with an overall score or rank based on their performance.

Assessment Frameworks

For many years, educators used the term learning objectives to describe desired outcomes they wanted learners to achieve through a learning experience. Objectives usually include action verbs and are stated in the following format: “At the end of this module, learners will be able to…”, followed by a description of a specific behavior. Refer again to the learning objectives at the beginning of this chapter as examples. More recently, however, educators have begun to state desired learning outcomes as competencies. Competencies refer to a combination of knowledge, skills, values, and attitudes required to practice in a particular profession.9 These abilities are observable, so they can be assessed. Learners are expected to demonstrate “competence” in all abilities related to their field prior to practicing without supervision. Therefore, the purpose of most learning programs is to prepare learners to achieve a level of competence in all of the identified critical activities for that profession.

Most professions have identified several competencies. For example, the Association of American Medical Colleges has identified 52 competencies for practicing physicians. These have been organized into domains: medical knowledge, patient care, professionalism, interpersonal and communication skills, medical informatics, population health and preventative medicine, and practice-based & systems-based medical care.9 When used together, they describe the “ideal” physician. Although they are comprehensive and provide a strong basis for the development of an assessment strategy, their descriptions can be abstract and therefore difficult to assess concretely in the setting in which a learner practices. As a result, obtaining routine and meaningful assessments of these competencies during medical school and graduate medical education has proven to be a challenge.10 In response to these challenges, various organizations have developed approaches to better define expectations of learners and assess their progress throughout their training.

One of these new approaches, Entrustable Professional Activities (EPAs), is growing in popularity across the health sciences. This approach focuses on assessment of tasks or units of practice that represent the day-to-day work of the health professional. Performance of these activities requires the learner to incorporate multiple competencies, often across domains.11-14 For example, underlying all the EPAs are the competencies of trustworthiness and awareness of one’s individual limitations, which lead to appropriate help-seeking behavior when needed.15 EPA frameworks have been created in many of the health science education fields, including nursing, dentistry, and medicine. One of the earliest organizations to adopt EPAs was the Association of American Medical Colleges. In 2014, they identified thirteen core EPAs for graduating medical students entering residency.16 These thirteen EPAs encompass the units of work that all residents perform, regardless of specialty. Examples include “Gather a history & perform a physical exam,” “Document a clinical encounter in a patient record,” and “Collaborate as a member of an interprofessional team.”

The goal of EPA assessments is to collect information about learners’ “competence” in performing required tasks in their respective field. They assess a learner’s readiness to complete these activities with decreasing levels of supervision. As learners progress in their abilities, they are able to perform these activities with less and less supervision from teachers, moving from observing only, to performing with direct supervision, to performing with indirect supervision, to performing without supervision. A major benefit of EPAs is that they provide a holistic approach to assessment. Each EPA requires integration of competencies across domains in order to perform the activity. Because faculty routinely supervise learners performing these professional activities in the clinical learning environment, they find EPAs more intuitive to assess. If multiple direct observations of the activities are performed and the learner demonstrates competence to perform them without direct supervision in multiple contexts (e.g., various illness presentations, different levels of acuity, multiple clinical settings), then a summative assessment can be made that the learner is competent to perform the activity without direct supervision in future encounters.

Figure 2: Example of EPA supervision scale

Observe only: able to watch the supervisor perform the activity
Direct supervision: allowed to perform the activity with the supervisor in the room
Indirect supervision: allowed to perform the activity with the supervisor outside of the room; the supervisor will double-check findings
Practice without supervision: allowed to perform the activity alone

Characteristics of High-quality Assessments

Not only can frameworks improve the quality and effectiveness of learner assessment strategies, but certain principles can also be applied to individual assessments in order to support growth-mindset learning and achieve the assessment’s desired goals. As would be expected, not all assessments are of equal value.17 High-quality assessments tend to follow six simple rules:

Rule 1: Direct observation. Observe learners’ actual performance whenever possible. This means that you are present while learners work with patients in the clinical setting, watching them use the knowledge and skills you are assessing. Frequently, educators observe small parts of the activities and rely on learner reporting of findings to make judgments on how well the learner performed. Some of the reported information can be double-checked by the preceptor by independently speaking with the patient and performing an exam. However, the gold standard is direct observation of an encounter (e.g., how the learner asked questions, what their technique was for administering a vaccine). Making assumptions can lead to inaccurate assessments and missed opportunities for growth.

Rule 2: Consider context. Use multiple observations and data points to guide summative assessments, evaluations, and grading. A learner’s performance may vary based on patient population, presentation of the problem, acuity, and clinical context. Getting multiple assessments in various clinical contexts allows you to see patterns in behavior that will better reveal strengths and areas for improvement.

Rule 3: Consider the learner’s current abilities. Sequence learning tasks based on the learner’s level of ability and assess accordingly in order to maximize learning. Aligning assessment difficulty with the knowledge and skills that the learner is most prepared to learn next, building upon what is already known, will help ensure that assessment optimizes learning.

Rule 4: Learner participation. Learners should actively participate in their assessments. Ask learners to self-assess their skills, knowledge, and attitudes. Ask them to identify learning goals for themselves and ensure your assessments encompass these goals.

Rule 5: Feedback. Share results of the assessment with the learner in a timely manner. This is especially important for formative assessment as it should be used to guide learning and work on acquisition of competencies within the current clinical setting.

Rule 6: Behavior-based recommendations. Identify specific strengths and areas for improvement, providing the learner with examples of where these behaviors were observed. Identify areas where learners can improve, focusing on specific, behavior-based recommendations that are attainable. Think to yourself “What does this learner need to do to get to the next level of competence or the next stage of supervision?”

Table 1: Characteristics of high-quality assessments

HIGH-QUALITY ASSESSMENTS

  • Utilize direct observation
  • Vary observations to include different skills, settings, complaints, complexity, and acuity
  • Match the goals of the learning experience
  • Sequence the level of difficulty of the clinical tasks being assessed
  • Include learners in their set-up and implementation
  • Consider and encompass the learner’s goals
  • Provide concrete information on how to progress to the “next level”
  • Provide timely feedback to the learner
  • Can be strengthened by using a formal assessment framework (e.g., EPAs)

End of module questions

Keith is a nursing student who is learning to give immunizations. After obtaining consent, he and his preceptor, Leticia, enter the room where he administers three intramuscular vaccinations to a 4-year-old child. After observing the encounter, Leticia uses the EPA framework to determine that Keith still needs direct supervision when performing vaccine administration. What is this an example of?

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Sarah is an occupational therapy student who is learning to do a swallow evaluation on an adult who recently suffered a stroke. She performs the examination while her preceptor Phyllis observes. After the encounter, Phyllis pulls Sarah into a private area and asks her to reflect on the experience, identifying areas she did well on and things she can improve on. Phyllis then describes what she observed and gives Sarah clear and concrete recommendations for improving her performance. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Anthony is a dental student who has just completed a rotation in geriatric dentistry. Upon completion of the course, leadership compiled his preceptor evaluations, observed structured clinical encounter assessment form, patient logs, exam score, and patient feedback. They used all the information to provide Anthony with a narrative summary of his strengths and areas for improvement. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

Anthony’s performance was compared to a list of set objectives and expectations for the course. Based on his performance, he was provided with a grade of “Honors” in the course. This is an example of:

  1. Feedback
  2. Formative assessment
  3. Summative assessment
  4. Evaluation

What is a key component of feasible, fair, and valid assessment?

  1. Use direct observation
  2. Use multiple encounters to provide formative assessment
  3. Highlight all the learner’s weaknesses
  4. Use single encounters to provide summative assessment

Bibliography

1.         Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33(3):206-214.

2.         Swan Sein A, Rashid H, Meka J, Amiel J, Pluta W. Twelve tips for embedding assessment. Med Teach. 2020:1-7.

3.         Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387-396.

4.         Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32(8):676-682.

5.         Bennett RE. Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice. 2011;18(1):5-25.

6.         van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205-214.

7.         van Der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve Tips for programmatic assessment. Med Teach. 2015;37(7):641-646.

8.         Dweck C. What Having a “Growth Mindset” Actually Means. Harvard Business Review. 2016.

9.         AAMC. Physician Competency Reference Set. https://www.aamc.org/what-we-do/mission-areas/medical-education/curriculum-inventory/establish-your-ci/physician-competency-reference-set. Accessed May 31, 2021.

10.       Fromme HB, Karani R, Downing SM. Direct observation in medical education: a review of the literature and evidence for validity. Mt Sinai J Med. 2009;76(4):365-371.

11.       Al-Moteri M. Entrustable professional activities in nursing: A concept analysis. Int J Nurs Sci. 2020;7(3):277-284.

12.       Carney PA. A New Era of Assessment of Entrustable Professional Activities Applied to General Pediatrics. JAMA Netw Open. 2020;3(1):e1919583.

13.       Pinilla S, Kyrou A, Maissen N, et al. Entrustment decisions and the clinical team: A case study of early clinical students. Med Educ. 2020.

14.       Tekian A, Ten Cate O, Holmboe E, Roberts T, Norcini J. Entrustment decisions: Implications for curriculum development and assessment. Med Teach. 2020;42(6):698-704.

15.       Wolcott MD, Quinonez RB, Ramaswamy V, Murdoch-Kinch CA. Can we talk about trust? Exploring the relevance of “Entrustable Professional Activities” in dental education. J Dent Educ. 2020;84(9):945-948.

16.       AAMC. Core Entrustable Professional Activities for Entering Residency: Curriculum Developer’s Guide. https://www.aamc.org/media/20211/download. Published 2017. Accessed May 31, 2021.

17.       Boyd P, Bloxham S. Developing Effective Assessment in Higher Education: A Practical Guide. 2007.




Assessment of Learners by Meghan O’Connor, MD & Boyd F. Richards, PhD

Boyd F. Richards, PhD

Boyd F. Richards is a Professor (Lecturer), Department of Pediatrics, University of Utah, Salt Lake City, UT. https://orcid.org/0000-0002-1864-7238