CME’s Next Bottleneck May Be the Person Running the Room
Earlier coverage of outcomes planning and its implications for CME providers.
CME leaders are reworking outcomes plans around lower learner burden and more decision-useful evidence, while complex-skill programs point toward mentored follow-through after the event.
The clearest signal this week is that some CME leaders are redesigning measurement around burden, not just validity. The evidence comes from CPD and conference-planning voices rather than from broader, independent clinician conversation, but it is concrete enough to matter for provider operations now.
Education leaders were unusually specific this week about what they are cutting. In a CPD discussion, speakers described using simpler checks that fit the claim being made, including pass-fail knowledge items, session review, and outcome methods that can be observed over time rather than relying on heavy post-activity feedback [source]. In a separate conference-planning discussion, CME leaders pointed directly to duplicated questions, weak response quality, and the value of app behavior data and more selective evaluations [source].
That matters because the operational problem is no longer only whether a survey is valid. It is whether the measurement plan creates enough friction that learners disengage and teams still end up with low-value data. For providers, that shifts outcomes planning toward method matching: what is the lightest defensible measure for this objective, and what can be retired?
This connects with an earlier brief on why communication training stops working when it stays episodic: education built for follow-through usually needs measurement that is closer to practice too.
The concrete question for CME teams: where are you still collecting multiple layers of feedback that do not change product, accreditation, or commercial decisions?
A narrower but useful design pattern also surfaced this week: for implementation-heavy skills, the educational product may need to extend well beyond the launch event. The strongest example came from a CPD discussion of Project ECHO-style training, where an initial workshop was followed by months of case-based mentoring, rehearsal, feedback, and participant case presentations [source].
This is still an emerging and narrow signal. The examples are concentrated in psychotherapy and telementoring, so it should not be generalized to every CME topic. But the provider implication is broader: if the goal is actual skill adoption, one well-produced event may be structurally mismatched to the task.
For product and instructional teams, that shifts the planning question from "How much content fits in the session?" to "What reinforcement, faculty time, and case flow are required after the session for learners to use the skill?"