Description
Program identification
Task 1: Analyzing Educational Needs
Choose a specific subject area or field of study. Describe the process of analyzing educational needs for this subject area. Include factors such as changing societal demands, learner demographics, and emerging trends that can influence program identification.
Task 2: Establishing Program Goals
Based on the subject area you chose in Task 1, create two program goals that address specific learning outcomes or educational needs. Ensure that these goals are clear, concise, and reflect the overarching purpose of the educational program.
Task 3: Considering Influencing Factors
Identify and discuss three key factors that could influence the identification of an effective program. These factors could include considerations related to student diversity, resource availability, community involvement, technology integration, and alignment with educational standards.
Needs assessment
Task 1: Identifying Stakeholders
List and briefly describe the key stakeholders involved in a needs assessment for your curriculum design. Explain the roles and perspectives of each stakeholder, such as students, teachers, parents, administrators, and community members.
Task 2: Data Collection Methods
Research and describe three different methods commonly used to collect data during a needs assessment. Choose methods that can provide insights into various aspects of the learning environment. For each method, explain its benefits, potential drawbacks, and when it might be most appropriate to use.
Goals and objectives
Task 1: Defining Clear Learning Goals
Choose a subject or topic of your choice for your proposed course. Write two clear and concise learning goals for a unit of instruction related to your chosen subject or topic. Ensure that these goals reflect the overarching aims of the instruction.
Task 2: Crafting Measurable Learning Objectives
For each of the learning goals created in Task 1, develop three measurable learning objectives using Bloom’s Taxonomy as a framework. The three objectives for each goal should target different levels of cognitive complexity (e.g., knowledge, comprehension, application, analysis, synthesis, evaluation). Make sure your objectives are specific, measurable, achievable, relevant, and time-bound (SMART).
Teaching strategies
Task 1: Exploring Teaching Strategies
Research and compile a list of at least three diverse teaching strategies that are commonly used in educational settings. For each strategy, provide a brief explanation of its key characteristics and how it can be effectively applied in the classroom.
Task 2: Aligning Strategies with Objectives
For each of the learning objectives created in the Goals and Objectives section, select a teaching strategy from the list you compiled in Task 1 that aligns well with the objective. Justify your choices by explaining how each chosen strategy supports the attainment of the respective objective. Highlight how the strategy accommodates diverse learning styles and promotes student engagement.
Implementation
Task: Imagine you’re tasked with adding a new module on “Evidence-Based Medicine” to the current curriculum. Create a micro-implementation plan that outlines:
Schedule: Briefly describe how the module will fit into the existing curriculum schedule.
Resources: List key teaching materials, facilities, and technology needed.
Faculty Prep: Explain how you’ll ensure faculty are prepared to teach the module.
Student Engagement: Provide a strategy for engaging students in the module.
Submission: Create a concise, bullet-point plan (around 500 words) highlighting the essential aspects of your micro-implementation strategy.
Evaluation and feedback
Task: Develop a proposal for evaluating the effectiveness of the current curriculum. Include:
Methods: Describe the data collection methods you’ll use (surveys, interviews, etc.).
Data Focus: Identify key questions or areas you’ll focus on during evaluation.
Feedback: Briefly explain how you’ll present findings and recommendations to the curriculum team.
Submission: Write a succinct proposal (around 700 words) outlining the evaluation plan and its main components.
Unformatted Attachment Preview
CHAPTER SEVEN
Step 6
Evaluation and Feedback
. . . assessing the achievement of objectives and stimulating
continuous improvement
Brenessa M. Lindeman, MD, MEHP, and
Pamela A. Lipsett, MD, MHPE
Definitions
Importance
Task I: Identify Users
Task II: Identify Uses
  Generic Uses
  Specific Uses
Task III: Identify Resources
Task IV: Identify Evaluation Questions
Task V: Choose Evaluation Designs
Task VI: Choose Measurement Methods and Construct Instruments
  Choice of Measurement Methods
  Construction of Measurement Instruments
  Reliability, Validity, and Bias
  Conclusions
Task VII: Address Ethical Concerns
  Propriety Standards
  Confidentiality, Access, and Consent
  Resource Allocation
  Potential Impact/Consequences
Task VIII: Collect Data
  Response Rates and Efficiency
  Impact of Data Collection on Instrument Design
  Assignment of Responsibility
Task IX: Analyze Data
  Relation to Evaluation Questions
  Relation to Measurement Instruments: Data Type and Entry
  Choice of Statistical Methods
Task X: Report Results
Conclusion
Acknowledgments
Questions
General References
Specific References
DEFINITIONS
Evaluation, for the purposes of this book, is defined as the identification, clarification, and application of criteria to determine the merit or worth of what is being evaluated (1). While often used interchangeably, assessment is sometimes used to connote
characterizations and measurements, while evaluation is used to connote appraisal or
judgment. In education, assessment is often of an individual student, while evaluation
is of a program; for the most part, we follow this convention in this chapter. Feedback
is defined as the provision of information on an individual’s or curriculum’s performance
to learners, faculty, and other stakeholders in the curriculum.
IMPORTANCE
Step 6, Evaluation and Feedback, closes the loop in the curriculum development
cycle. The evaluation process helps those who have a stake in the curriculum make
a decision or judgment about the curriculum. The evaluation step helps curriculum
developers ask and answer the critical question: Were the goals and objectives of the
curriculum met? Evaluation provides information that can be used to guide individuals
and the curriculum in cycles of ongoing improvement. Evaluation results can also be
used to maintain and garner support for a curriculum, to assess student achievement,
to satisfy external requirements, to document the accomplishments of curriculum developers, and to serve as a basis for presentations and publications.
It is helpful to be methodical in designing the evaluation for a curriculum, to ensure
that important questions are answered and relevant needs met. This chapter outlines a
10-task approach that begins with consideration of the potential users and uses of an
evaluation, moves to the identification of evaluation questions and methods, proceeds
to the collection of data, and ends with data analysis and reporting of results.
TASK I: IDENTIFY USERS
The first step in planning the evaluation for a curriculum is to identify the likely users
of the evaluation. Participants in the curriculum have an interest in the assessment of
their own performance and the performance of the curriculum. Evaluation can provide
feedback and motivation for continued improvement for learners, faculty, and curriculum developers.
Other stakeholders who have administrative responsibility for, allocate resources
to, or are otherwise affected by the curriculum will also be interested in evaluation results. These might include individuals in the dean’s office, hospital administrators, the
department chair, the program director for the residency program or medical student
education, the division director, other faculty who have contributed political support or
who might be in competition for limited resources, and individuals, granting agencies,
or other organizations that have contributed funds or other resources to the curriculum.
Individuals who need to make decisions about whether or not to participate in the curriculum, such as future learners or faculty, may also be interested in evaluation results.
To the extent that a curriculum innovatively addresses an important need or tests
new educational strategies, evaluation results may also be of interest to educators
from other institutions and serve as a basis for publications/presentations. As society is
often the intended beneficiary of a medical care curriculum, society members are also
stakeholders in this process.
Finally, evaluation results can document the achievements of curriculum developers. Promotion committees and department chairs assign a high degree of importance
to clinician-educators’ accomplishments in curriculum development (2, 3), and these
accomplishments can be included in the educational portfolios that are increasingly
being used to support applications for promotion (4–6).
TASK II: IDENTIFY USES
Generic Uses
In designing an evaluation strategy for a curriculum, the curriculum developer should
be aware of the generic uses of an evaluation. These generic uses can be classified
along two axes, as shown in Table 7.1. The first axis refers to whether the evaluation is
used to appraise the performance of individuals, the performance of the entire program,
or both. The assessment of an individual usually involves determining whether he or
she has achieved the cognitive, affective, psychomotor, or competency objectives
of a curriculum (see Chapter 4). Program evaluation usually determines the aggregate
achievements of all individuals, clinical or other outcomes, the actual processes of a
curriculum, or the perceptions of learners and faculty. The second axis in Table 7.1 refers to whether an evaluation is used for formative purposes (to improve performance),
for summative purposes (to judge performance and make decisions about its future
or adoption), or for both purposes (7). From the discussion and examples below, the
reader may surmise that some evaluations can be used for both summative and formative purposes.
One emerging educational framework that can be informative for both formative
and summative assessment is the use of entrustable professional activities, or EPAs.
EPAs are units of professional practice and have been defined as tasks or responsibilities that trainees are entrusted to perform without supervision, once they have attained
sufficient competence (8). EPAs are related to competencies (such as the competency
framework used in GME training from the Accreditation Council for Graduate Medical
Education [ACGME]), in that performance of an EPA requires integration of competencies, often across multiple domains of competence (9). While the EPA framework was
initially formulated for the transition from residency to independent practice, this concept has recently been extended to develop EPAs for the transition from medical school
to residency (10), and some medical schools have developed EPAs for their students.
(See also Chapter 4.)
Table 7.1. Evaluation Types: Levels and Uses

Formative / Individual: Evaluation of an individual learner or faculty member that is used to help the individual improve performance: identification of areas for improvement; specific suggestions for improvement.

Formative / Program: Evaluation of a program that is used to improve program performance: identification of areas for improvement; specific suggestions for improvement.

Summative / Individual: Evaluation of an individual learner or faculty member that is used for judgments or decisions about the individual: verification of achievement for individual; motivation of individual to maintain or improve performance; certification of performance for others; grades; promotion.

Summative / Program: Evaluation of a program that is used for judgments or decisions about the program or program developers: judgments regarding success, efficacy; decisions regarding allocation of resources; motivation/recruitment of learners and faculty; influencing attitudes regarding value of curriculum; satisfying external requirements; prestige, power, influence, promotion; dissemination: presentations, publications.
Specific Uses
Having identified the likely users of the evaluation and understood the generic uses
of curriculum evaluation, the curriculum developer should consider the specific needs
of different users (stakeholders) and the specific ways in which they will put the evaluation to use (7). Specific uses for evaluation results might include the following:
■ Feedback on and improvement of individual performance: Both learners and faculty
can use the results of timely feedback (formative individual assessment) to direct
improvements in their own performances. This type of assessment identifies areas
for improvement and provides specific suggestions for improvement (feedback). It,
therefore, also serves as an educational method (see Chapter 5).
EXAMPLE: Formative Individual Assessment. During a women’s health clerkship, students are assessed
on their ability to perform the Core EPA for Entering Residency, “Provide an oral presentation of a clinical
encounter” (10), after interviewing a standardized patient with a breast mass, and are given specific
verbal feedback about the presentation to improve their performance.
■ Judgments regarding individual performance: The accomplishments of individual
learners may need to be documented (summative individual assessment) to assign
grades, to demonstrate mastery in a particular area or achievement of certain curricular objectives, or to satisfy the demands of external bodies, such as specialty
boards. In these instances, it is important to clarify criteria for the achievement of
objectives or competency before the evaluation. Assessment of individual faculty
can be used to make decisions about their continuation as curriculum faculty, as
material for their promotion portfolios, and as data for teaching awards. Used in this
manner, assessments become evaluations.
EXAMPLE: Summative Individual Assessment. At the conclusion of the women’s health clerkship, a
multistation Objective Structured Clinical Examination (OSCE) is conducted in which each student gives
an oral presentation after an interview with a standardized patient with a breast mass. Students are assessed using a checklist form developed from the oral presentation EPA milestones (10), from which a
passing score in each station determines mastery of giving an oral presentation of a clinical encounter.
■ Feedback on and improvement of program performance: Curriculum coordinators
can use evaluation results (formative program evaluation) to identify parts of the
curriculum that are effective and parts that are in need of improvement. Evaluation
results may also provide suggestions about how parts of the curriculum could be
improved.
Such formative program evaluation usually takes the form of surveys (see Chapter 3) of learners to obtain feedback about and suggestions for improving a curriculum. Quantitative information, such as ratings of various aspects of the curriculum, can help identify areas that need revision. Qualitative information, such
as responses to open-ended questions about program strengths, program weaknesses, and suggestions for change, provides feedback in areas that may not have
been anticipated and ideas for improvement. Information can also be obtained from
faculty or other observers, such as nurses and patients. Aggregates of formative
and summative individual assessments can be used for formative program evaluation as well, to identify specific areas of the curriculum in need of revision.
EXAMPLE: Formative Program Evaluation. At the midpoint of a surgery clinical clerkship, students met
with the clerkship director for a discussion of experiences to date. Several students wanted additional
elective clinic experiences. The clerkship director reviewed this information with surgery faculty and
team leaders, and a two-week selective in ambulatory surgical clinics was implemented in the following
term (11).
EXAMPLE: Formative Program Evaluation. After each didactic lecture of the radiology residency curriculum, residents were asked to complete a “Minute Paper” in which they briefly noted either the most
important thing they had learned during the lecture or the muddiest point in the lecture, as well as an
important question that remained unanswered (12). This technique allowed the instructor to know what
knowledge students were gaining from the lecture (or not) and provided information about where to
make future refinements.
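Purely as a hypothetical illustration (not part of the chapter), the quantitative side of this kind of formative program evaluation might be summarized with a short script such as the sketch below: mean Likert-style ratings are computed for each curriculum component, and components falling below an assumed cutoff are flagged for revision. The component names, scores, and the 3.5 cutoff are all invented for the example.

```python
# Hypothetical sketch: summarizing formative program-evaluation ratings.
# Learner ratings (1 = poor ... 5 = excellent) for each curriculum component
# are averaged, and components below an assumed cutoff are flagged for revision.
# All component names, scores, and the 3.5 cutoff are invented for illustration.
from statistics import mean

ratings = {
    "online modules": [4, 5, 4, 3, 5],
    "small group discussions": [3, 2, 3, 4, 2],
    "simulated patients": [5, 4, 5, 4, 4],
    "case presentations": [2, 3, 3, 2, 3],
}

CUTOFF = 3.5  # assumed threshold below which a component is marked for review

for component, scores in ratings.items():
    avg = mean(scores)
    status = "review" if avg < CUTOFF else "ok"
    print(f"{component:25s} mean={avg:.2f}  [{status}]")
```

Open-ended comments would still be reviewed qualitatively; a numeric summary like this only points to where revision effort is most likely needed.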
■ Judgments regarding program success: Summative program evaluation provides
information on the degree to which a curriculum has met its various objectives and
expectations, under what specific conditions, and at what cost. It can also document the curriculum’s success in engaging, motivating, and pleasing its learners
and faculty. In addition to quantitative data, summative program evaluation may
include qualitative information about unintended barriers, unanticipated factors
encountered in the program implementation, or unintended consequences of the
curriculum. It may identify aspects of the hidden curriculum (13, 14). The results of
summative program evaluations are often reported to others to obtain or maintain
curricular time, funding, and other resources.
EXAMPLE: Summative Program Evaluation. At the conclusion of a psychiatry clinical clerkship, 90% of
students received a passing grade in the performance of a standardized patient history and mental
status examination: assessing 10 cognitive and 6 skill objectives in the areas of history, physical and
mental status examination, diagnosis, management, and counseling.
EXAMPLE: Summative Program Evaluation Leading to Further Investigation and Change. One curricular
objective of a trauma and acute care surgery rotation stated that surgery residents would correctly prescribe twice-daily prophylaxis for venous thromboembolism (VTE) in eligible trauma patients. The use of
twice-daily VTE prophylaxis over the academic year was examined and compared with the use of other
VTE prophylaxis. Examination of the reasons why twice-daily VTE prophylaxis had not been used revealed a misalignment of the electronic order-entry system with the clinical guideline. Review of this
information with department administrators led to changes in the electronic order-entry system (15).
EXAMPLE: Summative Program Evaluation Leading to Curricular Expansion. Summative evaluation of
all 13 Core EPAs for Entering Residency (10) among fourth-year students at one medical school revealed
gaps in students’ abilities to identify system failures and contribute to a culture of safety. As a result, the
curriculum for intersessions between clinical clerkships was expanded to include discussions of the
importance of error prevention to individual patients and to systems, a mock Root Cause Analysis exercise, and resources for reporting of real or potential errors within the institution.
■ Justification for the allocation of resources: Those with administrative authority can use evaluation results (summative program evaluation) to guide and justify decisions about the allocation of resources for a curriculum. They may be more likely to allocate limited resources to a curriculum if the evaluation provides evidence of success or if revisions are planned to a curriculum that presently demonstrates evidence of deficiency in an accreditation standard. In the above Example, assessment of newly defined program outcomes identified deficiencies in student preparation, leading to expanded allocation of resources for the curriculum.
■ Motivation and recruitment: Feedback on individual and program success and the identification of areas for future improvement can be motivational to faculty (formative and summative individual assessment and program evaluation). Evidence of programs’ responsiveness to formative program evaluation can be attractive to future learners, just as evidence of programs’ success through summative evaluation can also help in the recruitment of both learners and faculty.
■ Attitude change: Evidence that significant change has occurred in learners (summative program evaluation) with the use of an unfamiliar method, such as participation in quality improvement projects, or in a previously unknown content area, such as systems-based practice, can significantly alter attitudes about the importance of such methods and content.
EXAMPLE: Summative Program Evaluation Leading to Attitude Change. A group quality improvement
curriculum and project were added to the annual requirements for pediatrics residents. The pre-curriculum needs assessment revealed that 38% of residents agreed that physicians play an important role in
quality improvement efforts. However, after participation in the curriculum and project, 96% of residents
agreed with the same statement.
■ Satisfaction of external and internal requirements: Summative individual and program evaluation results can be used to satisfy the requirements of regulatory bodies, such as the Liaison Committee on Medical Education or the Residency Review
and Graduate Medical Education Committees. These evaluations, therefore, may
be necessary for program accreditation and will be welcomed by those who have
administrative responsibility for an overall program.
■ Demonstration of popularity: Evidence that learners and faculty truly enjoyed and valued their experience (summative program evaluation) and evidence of other stakeholder support (patients, benefactors) may be important to educational and other administrative leaders, who want to meet the needs of existing trainees, faculty, and other stakeholders and to recruit new ones. A high degree of learner, faculty, and stakeholder support provides strong political support for a curriculum.
■ Prestige, power, promotion, and influence: A successful program (summative program evaluation) reflects positively on its institution, department chair, division chief, overall program director, curriculum coordinator, and faculty, thereby conveying a certain degree of prestige, power, and influence. Summative program and individual assessment data can be used as evidence of accomplishment in one’s promotion portfolio.
■ Presentations, publications, and adoption of curricular components by others: To the degree that an evaluation (summative program evaluation) provides evidence of the success (or failure) of an innovative or insufficiently studied educational program or method, it will be of interest to educators at other institutions and to publishers (see Chapter 9).
TASK III: IDENTIFY RESOURCES
The most carefully planned evaluation will fail if the resources are not available to
accomplish it (16). Limits in resources may require a prioritization of evaluation questions and changes in evaluation methods. For this reason, curriculum developers should
consider resource needs early in the planning of the evaluation process, including time,
personnel, equipment, facilities, and funds. Appropriate time should be allocated for
the collection, analysis, and reporting of evaluation results. Personnel needs often include staff to help in the collection and collation of data and distribution of reports,
as well as people with statistical or computer expertise to help verify and analyze the
data. Equipment and facilities might include the appropriate computer hardware and
software. Funding from internal or external sources is required for resources that are
not otherwise available, in which case a budget and budget justification may have to
be developed.
Formal funding may often be challenging to obtain, but informal networking can
reveal potential assistance locally, such as computer programmers or biostatisticians
interested in measurements pertinent to the curriculum, or quality improvement personnel in a hospital interested in measuring patient outcomes. Survey instruments can
be adopted from other residency programs or clerkships within an institution or can
be shared among institutions. Medical schools and residency programs often have
summative assessments in place for students and residents, in the form of subject,
specialty board, and in-service training examinations. Specific information on learner
performance in the knowledge areas addressed by these tests can be readily accessed
through the department chair, with little cost to the curriculum.
EXAMPLE: Use of an Existing Resource for Curricular Evaluation. An objective of the acute neurological
event curriculum for emergency medicine residents is the appropriate administration of thrombolytic
therapy within 60 minutes of the patient’s hospital arrival with symptoms of acute ischemic stroke. The
evaluation plan included the need for a follow-up audit of this practice, but resources were not available
for an independent audit. The information was then added to the comprehensive electronic medical
record maintained by the emergency department staff, which provided both measures of individual
residents’ performance and overall program success in the timely administration of thrombolytics.
An additional source of peer-reviewed assessment tools is the Directory and Repository of Educational Assessment Measures (DREAM), part of the Association of
American Medical Colleges (AAMC) MedEdPORTAL (17).
EXAMPLE: Use of a Publicly Accessible Resource for Curricular Evaluation. One objective of the neurology clerkship curriculum is students’ demonstration of understanding when to apply specific aspects of
the neurological exam. The Hypothesis-Driven Physical Exam (HDPE) instrument available in DREAM,
from MedEdPORTAL (17), was added to the clerkship OSCE to assess students’ skill in the neurological
exam, as well as their diagnostic reasoning around the exam.
TASK IV: IDENTIFY EVALUATION QUESTIONS
Evaluation questions direct the evaluation. They are to curriculum evaluation as
research questions are to research projects. Most evaluation questions (18, 19) should
relate to the specific measurable learner, process, or clinical outcome objectives of a
curriculum (see Chapter 4). As described in Chapter 4, specific measurable objectives
should state who will do how much (how well) of what by when. The “who” may refer
to learners or instructors, or to the program itself, if one is evaluating program activities.
“How much (how well) of what by when” provides a standard of acceptability that is
measurable. Often, in the process of writing evaluation questions and thinking through
what designs and methods might be able to answer a question, it becomes clear that
a curricular objective needs further clarification.
EXAMPLE: Clarifying an Objective for the Purpose of Evaluation. The initial draft of one curricular objective stated: “By the end of the curriculum, all residents will be proficient in obtaining informed consent.”
In formulating the evaluation question and thinking through the evaluation methodology, it became clear
to the curriculum developers that “proficient” needed to be defined operationally. Also, they determined
that an increase of 25% or more of learners that demonstrated proficiency in obtaining informed consent, for a total of at least 90%, would define success for the curriculum. After appropriate revisions in
the objective, the curricular evaluation questions became: “By the end of the curriculum, what percent
of residents have achieved a passing score on the proficiency checklist for informed consent, as assessed using standardized patients?” and “Has there been a statistically and quantitatively (>25%)
significant increase in the number of proficient residents, as defined above, from the beginning to the
end of the curriculum?”
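Purely as a hypothetical illustration (not from the text), the success criteria in this example could be checked with a short calculation once pre- and post-curriculum assessment counts are available, assuming that the “>25%” criterion refers to a gain of at least 25 percentage points in the proportion of proficient residents. The counts and function below are invented for the sketch, and a two-proportion z-test stands in for whatever statistical method the evaluators would actually choose.

```python
# Hypothetical sketch: checking the informed-consent curriculum's success criteria.
# Assumes ">25%" means a gain of at least 25 percentage points in the proportion of
# residents passing the proficiency checklist, a post-curriculum pass rate of at
# least 90%, and a statistically significant pre/post difference (two-proportion z-test).
from math import sqrt
from statistics import NormalDist

def evaluate_informed_consent(pre_pass, pre_n, post_pass, post_n,
                              min_gain=0.25, min_post_rate=0.90, alpha=0.05):
    p_pre, p_post = pre_pass / pre_n, post_pass / post_n
    pooled = (pre_pass + post_pass) / (pre_n + post_n)   # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / pre_n + 1 / post_n))
    z = (p_post - p_pre) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
    return {
        "pre_rate": round(p_pre, 3),
        "post_rate": round(p_post, 3),
        "gain_of_25_points_met": (p_post - p_pre) >= min_gain,
        "post_rate_at_least_90pct": p_post >= min_post_rate,
        "statistically_significant": p_value < alpha,
        "p_value": round(p_value, 4),
    }

# Invented counts: 26 of 40 residents proficient before the curriculum, 37 of 40 after.
print(evaluate_informed_consent(26, 40, 37, 40))
```

With these invented counts, the proportion of proficient residents rises from 0.65 to 0.925, so all three criteria (the 25-point gain, the 90% threshold, and statistical significance) would be met; real assessment data and the evaluators’ chosen statistical method would replace this sketch.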
The curriculum developer should also make sure that the evaluation question is
congruent with the related curricular objective.
EXAMPLE: Congruence between an Objective and the Evaluation Question. An objective of a resident
teaching skills workshop is that participants will demonstrate the five microskills of clinical teaching in a
role-play exercise (a skill objective). The evaluation question “What percentage of residents express
confidence in their ability to provide effective teaching?” is not congruent with the objective because the
evaluation question addressed an affective objective, not a skill objective (see Chapter 4). A congruent
evaluation question would be: “What percentage of residents demonstrated application of at least four
of five teaching microskills during workshop role-playing exercises?” If curriculum developers wanted to
include an affective objective, then an expanded curriculum that addressed residents’ sense of the importance of and their responsibility for teaching, as well as barriers to that practice, would be necessary.
Often, resources will limit the number of objectives for which accomplishment can
be assessed. In this situation it is necessary to prioritize and select key evaluation questions, based on the needs of the users and the feasibility of the related evaluation
methodology. Sometimes, several objectives can be grouped efficiently into a single
evaluation question.
EXAMPLE: Prioritizing Which Objective to Evaluate. A curriculum on endotracheal intubation for anesthesia residents has cognitive, attitudinal, skill, and behavioral objectives. The curriculum developers
decided that what mattered most was post-curricular behavior or performance and that effective performance required achievement of the appropriate cognitive, attitudinal, and skill objectives. Setup,
placement, maintenance, and evaluation of an endotracheal intubation are all critical for success in securing a patient’s airway. Their evaluation question and evaluation methodology, therefore, assessed
post-curricular behaviors, rather than knowledge, attitudes, or technical skill mastery. It was assumed
that if the performance objectives were met, there would be sufficient accomplishment of the knowledge, attitude, and skill objectives. If performance objectives were not met, the curriculum developers
would need to reconsider specific assessment of cognitive, attitudinal, and/or skill objectives.
Not all evaluation questions need to relate to explicit, written learner objectives.
Some curricular objectives are implicitly understood, but not written down, to prevent a
curriculum document from becoming unwieldy. Most curriculum developers, for example, will want to include evaluation questions that relate to the effectiveness of specific
curricular components or faculty, even when the related objectives are implicit rather
than explicit.
EXAMPLE: Evaluation Question Directed toward Curricular Processes. What was the perceived effectiveness of the curriculum’s online modules, small group discussions, simulated patients, clinical experiences, and required case presentations?
Sometimes there are unexpected strengths and weaknesses in a curriculum. Sometimes the curriculum on paper may differ from the curriculum as delivered. Therefore,
it is almost always helpful to include some evaluation questions that do not relate to
specific curricular objectives and that are open-ended in nature.
EXAMPLES: Use of Open-Ended Questions Related to Curricular Processes. What do learners perceive
as the major strengths and weaknesses of the curriculum? What did learners identify as the most important take-away and least understood point from each session (Minute Paper / Muddiest Point technique
[12])? How could the curriculum be improved?
TASK V: CHOOSE EVALUATION DESIGNS
Once the evaluation questions have been identified and prioritized, the curriculum
developer should consider which evaluation designs (19–25) are most appropriate to
answer the evaluation questions and most feasible in terms of resources.
An evaluation is said to possess internal validity (22) if it accurately assesses the
impact of a specific intervention on specific subjects in a specific setting. An internally
valid evaluation that is generalizable to other populations and other settings is said to
possess external validity (22). Usually, a curriculum’s targeted learners and setting are
predetermined for the curriculum developer. To the extent that their uniqueness can be
minimized and their representativeness maximized, the external validity (or generalizability) of the evaluation will be strengthened.
The choice of evaluation design directly affects the internal validity and indirectly affects the external validity of an evaluation (an evaluation cannot have external validity if
it does not have internal validity). In choosing an evaluation design, one must be aware
of each design’s strengths and limitations with respect to factors that could threaten the
internal validity of the evaluation. These factors include subject characteristics (selection bias), loss of subjects (mortality, attrition), location, instrumentation, testing, history,
maturation, attitude of subjects, statistical regression, and implementation (19, 21–25).
The term subject characteristics refers to the differences between individuals or groups.
If present systematically, they may lead to selection bias. Selection bias occurs when
subjects in an intervention or comparison group possess characteristics that affect the
results of the evaluation by affecting the measurements of interest or the response of
subjects to the intervention. For example, studying only volunteers who are excited to
learn about a particular subject may yield different results than studying all students in a
cohort. If subjects are lost from or fail to complete an evaluation process, this can be a
mortality threat. This is common because many evaluations are designed to occur over
time. When subjects who drop out are different from those who complete the evaluation, the evaluation will no longer be representative of all subjects. Location refers to
the fact that the particular place where data are collected or where an intervention has
occurred may affect results. For example, an intervention in one intensive care unit that
is modern and well-resourced with a large amount of technology may provide different
effects from the same intervention in another intensive care unit with fewer resources.
Instrumentation refers to the effects that changes in raters or measurement methods,
or lack of precision in the measurement instrument, might have on obtained measurements. For example, administering a survey about curriculum satisfaction with a three-point Likert scale may yield very different results than the same survey given with a
seven- or nine-point Likert scale. Testing refers to the effects of an initial test on subjects’ performance on subsequent tests. History refers to events or other interventions
that aff