Reach and Proof Fall Short for Clinician Learning
Earlier coverage of learning design and its implications for CME providers.
CBME definitional gaps show how any educational framework risks implementation failure when labels precede shared definitions, measurable behaviors, and change-management steps.
Educator-clinicians surfaced a narrow but useful warning: competency-based education becomes hard to implement when people use the same label for different things. Evidence comes from a BEME review discussion with Canadian residency and emergency medicine examples, so treat this as an emerging signal rather than broad consensus.
In a PAPERs Podcast discussion of a BEME scoping review, clinician-educators described a familiar pattern: enthusiasm for competency-based medical education followed by the hard work of delivery, assessment, program evaluation, and refinement. One participant put the definitional problem plainly: “I'm seeing mandates from governments all over the world to quote, do CBME or use EPAs.” The follow-on concern was sharper: “And when I actually talk to people on the ground or some of those government officials, we all don't even agree on what those topics are.”
That matters because CBME is not a single teaching technique. Educators in the same discussion described it as a bundle of curriculum design, assessment, learner responsibility, faculty behavior, organizational structures, and continuous improvement. The video version of the conversation emphasized the move from theory to the practical question of what people actually do on the ground.
For CME providers, the warning extends beyond CBME. Any framework becomes a weak operating model if the label arrives before the shared definition. Terms such as “competency-based,” “outcomes-based,” “workflow-integrated,” “AI-enabled,” and “practice-changing” all sound useful until teams build different products around the same phrase.
We saw a related pattern in last week’s brief on weak evaluation and CME trust: evaluation problems often look like measurement problems but can start earlier as definition problems. If teams have not agreed what a framework means, outcomes plans become activity audits with more sophisticated language.
The concrete question for CME teams is simple: before launch, could a faculty member, outcomes lead, instructional designer, and learner describe the same framework in operational terms and name the behavior, decision, or workflow it is supposed to change?
The same discipline applies to this week’s AI conversations. In one faculty-development discussion, educators described using AI for summaries, question generation, and email drafting while stressing review and validation in the workflow (Faculty Feed). In a surgical education conversation, clinicians framed AI as an augmenting tool that requires clinician judgment, attention to bias, and scrutiny of whether a model was trained on data relevant to the setting (Behind the Knife).
Educators describe how, in the absence of shared definitions, participants "talk past each other," and how adoption has followed a Gartner hype cycle into resource-constrained implementation realities.
Highlights the shift from theoretical enthusiasm to practical assessment, evaluation, and continuous improvement demands, stressing CBME as a bundle of interventions rather than a single label.
Surgeons detail ChatGPT/Claude use for cholecystectomy decision support and visualization, emphasizing mandatory human oversight to correct errors and address regulatory hurdles.
Faculty describe AI for practice-question generation, transcript summarization, and compassionate email drafting while calling for validation steps and clinician involvement in tool design.