CME’s Next Bottleneck May Be the Person Running the Room
Earlier coverage of learning design and its implications for CME providers.
Assessment credibility is under sharper scrutiny: generic ratings look weak, while specific feedback and peer-rich formats appear more defensible for complex learning goals.
This week’s clearest signal is that common evaluation tools may be too blunt for the learning goals CME often claims to serve. Across educator-facing and institution-adjacent sources, the concern was consistent: reasoning, growth, and trust-rich exchange are harder to capture than completion or generic ratings. That makes this a meaningful pressure on learning design, even if it does not yet reflect broad clinician-market consensus.
Across this week’s source set, the critique was more specific than the usual complaint that post-tests are too thin. The sharper concern was that generic ratings, blunt evaluation forms, and hierarchy-shaped feedback often fail to capture how someone reasons, what changed, or where growth is actually happening. Several discussions pointed instead to structured reflection, more specific prompting, and peer observation as better ways to surface useful feedback (Sarcoma Insight Podcast, MedEd Thread, Annals On Call Podcast, Faculty Factory).
For CME providers, that matters because outcomes claims are only as credible as the instrument behind them. If an activity is meant to improve judgment, confidence in applying evidence, or reflective practice, a generic satisfaction score will not support much of that claim. This extends the series’ earlier point that the person shaping the learning environment can determine what honest feedback becomes possible: the issue is not just whether you collect feedback, but whether the setting and prompt let people say anything specific and usable.
The practical question for CME teams is straightforward: where are you still using broad end-of-activity ratings when the learning goal really calls for reflection on changed reasoning or intended action?
The second signal was narrower, but useful. The case for in-person or peer-rich learning this week was not that live formats are generally better. It was that some tasks still depend on relational trust, social cue reading, persuasion, and nuanced discussion in ways that thinner digital formats can flatten (At The Beam, Simulcast, Faculty Factory). One example came from oncology, but the implication for providers is broader: format decisions should follow task fit.
That has direct planning consequences. CME teams do not need to defend every live meeting on principle, and they should not assume convenience alone settles the format question either. If the educational objective is interpretation, alignment, influence, or handling disagreement, peer architecture may be part of the intervention rather than a delivery preference.
Before defaulting to webinar, async, or live event, ask which objectives truly require people to read one another, challenge one another, and build enough trust to change how they think.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.