Clinician Learning Brief

CME’s Measurement Problem Is Becoming a Burden Problem

Topics: Outcomes planning, Learning design
Coverage: Dec 2–8, 2024

Abstract

CME leaders are reworking outcomes plans around lower learner burden and more decision-useful evidence, while complex-skill programs point toward mentored follow-through after the event.

Key Takeaways

  • CME leaders are shifting from criticizing weak evaluation forms to trimming redundant measurement and using lighter methods tied to actual decisions.
  • Lower-burden evidence plans now include pass-fail checks, selective questions, session review, and behavior-proxy data rather than default survey stacks.
  • For hard-to-embed skills, a single event increasingly looks insufficient; the narrower emerging model pairs a workshop with ongoing mentoring, case discussion, rehearsal, and feedback.

The clearest signal this week is that some CME leaders are redesigning measurement around burden, not just validity. The evidence comes from CPD and conference-planning voices rather than broad independent clinician conversation, but it is concrete enough to matter for provider operations now.

Less measurement may produce better evidence

Education leaders were unusually specific this week about what they are cutting. In a CPD discussion, speakers described using simpler checks that fit the claim being made: pass-fail knowledge items, session review, and outcome measures observable over time, rather than heavy post-activity feedback [source]. In a separate conference-planning discussion, CME leaders pointed directly to duplicated questions, weak response quality, and the value of app behavior data and more selective evaluations [source].

That matters because the operational problem is no longer only whether a survey is valid. It is whether the measurement plan creates enough friction that learners disengage and teams still end up with low-value data. For providers, that shifts outcomes planning toward method matching: what is the lightest defensible measure for this objective, and what can be retired?

This connects with an earlier brief on why communication training stops working when it stays episodic: education built for follow-through usually needs measurement that is closer to practice too.

The concrete question for CME teams: where are you still collecting multiple layers of feedback that do not change product, accreditation, or commercial decisions?
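
One way to make that question operational is to treat the measurement plan itself as data, where every measure must name the decision it informs. The short Python sketch below illustrates the idea; the objectives, measures, and decisions are invented for illustration and are not drawn from the discussions cited above.

    # Minimal sketch of "method matching": each measure names the lightest
    # defensible method for its objective and the decision it informs.
    # All entries below are hypothetical examples, not sourced content.
    MEASUREMENT_PLAN = [
        {"objective": "recall dosing guidance",
         "measure": "pass-fail knowledge check",
         "informs": "accreditation reporting"},
        {"objective": "session quality",
         "measure": "single overall-rating item",
         "informs": "faculty selection"},
        {"objective": "learner satisfaction",
         "measure": "post-activity survey stack",
         "informs": None},  # informs no decision: candidate for retirement
    ]

    def retirement_candidates(plan):
        """Return measures that do not change a product, accreditation,
        or commercial decision."""
        return [row["measure"] for row in plan if not row["informs"]]

    for measure in retirement_candidates(MEASUREMENT_PLAN):
        print("Consider retiring:", measure)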

Complex skills may need follow-through, not another session

A narrower but useful design pattern also surfaced this week: for implementation-heavy skills, the educational product may need to extend well beyond the launch event. The strongest example came from a CPD discussion of Project ECHO-style training, where an initial workshop was followed by months of case-based mentoring, rehearsal, feedback, and participant case presentations [source].

This is still an emerging and narrow signal. The examples are concentrated in psychotherapy and telementoring, so the pattern should not be generalized to every CME topic. But the provider implication is broader: if the goal is actual skill adoption, one well-produced event may be structurally mismatched to the task.

For product and instructional teams, that shifts the planning question from "How much content fits in the session?" to "What reinforcement, faculty time, and case flow are required after the session for learners to use the skill?"

What CME Providers Should Do Now

  • Audit current outcomes stacks for duplicated questions, low-yield surveys, and measures that do not inform a real decision (a starting-point sketch for the duplicate-question check follows this list).
  • Rebuild measurement plans by matching each intended outcome to the lightest defensible method, including targeted checks, behavior proxies, or reviewed performance evidence where appropriate.
  • For complex-skill initiatives, price and scope the full learning architecture up front, including mentoring, case discussion, and feedback capacity after launch.
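
For the audit item above, one lightweight starting point is flagging near-duplicate question text across evaluation forms. The Python sketch below uses only the standard library; the sample questions and the 0.8 similarity cutoff are assumptions for illustration, and a real audit would also check whether each item informs a decision.

    # Hedged sketch of a duplicate-question check using stdlib string
    # similarity. Question texts and the 0.8 threshold are invented
    # illustrations, not sourced recommendations.
    from difflib import SequenceMatcher
    from itertools import combinations

    questions = [
        "How likely are you to change your practice based on this activity?",
        "How likely are you to change your practice as a result of this activity?",
        "Rate the relevance of this activity to your practice.",
    ]

    def near_duplicates(items, threshold=0.8):
        """Return question pairs whose text similarity meets the threshold."""
        pairs = []
        for a, b in combinations(items, 2):
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
        return pairs

    for a, b, score in near_duplicates(questions):
        print(f"{score}: possible duplicate\n  {a}\n  {b}")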

Watchlist

  • AI learning tools remain worth watching, but the more durable use case still looks narrow: constrained feedback and coaching tasks rather than broad chatbot replacement [source] [source] [source].
  • Digital workflow skills, especially EHR use, may become a clearer continuing-education expectation, but this week’s evidence is single-source and still closer to a curricular gap than a public CME trend [source].

Turn learner questions into outcomes data

ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.

Request a demo