Reach and Proof Fall Short for Clinician Learning
Supporter trust now depends on outcome calculations grounded in claims data and scoped to the target audience, not on optimistic multipliers.
Patient-impact numbers lose credibility when they rely on broad multipliers or unvalidated self-report instead of claims data and target-audience-only calculations. The week’s sources are limited to one industry session and one provider-owned podcast, yet the operating requirement is clear: outcome math and needs assessments must survive external scrutiny without hidden assumptions.
In a CMEpalooza session on outcomes extrapolation, the credibility problem was concrete: a rare-disease activity could produce a patient-impact estimate larger than the plausible U.S. patient population. The issue was not whether education mattered. It was whether the calculation could be defended when a supporter, internal review team, or grant reviewer asked how the number was produced.
The session favored narrower math: calculate against the target audience, use claims data where available, disclose the formula, and report realistic ranges rather than a single inflated figure. It also flagged the limits of self-report. A learner’s intention to change practice does not cleanly translate into a patient-impact number, and broad multiplication across all learners can make an otherwise useful activity look less credible.
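As a concrete sketch of that narrower math (every figure below is a hypothetical placeholder, not a number from the session), a disclosed formula might look like this:

```python
# Minimal sketch of the "narrower math": target-audience-only denominator,
# claims-based patient volume, and a disclosed range instead of a single
# figure. All values are hypothetical placeholders.

target_learners = 180          # learners in the defined target audience,
                               # not total registrants or impressions
patients_per_learner = 40      # annual relevant patient volume per clinician,
                               # ideally sourced from claims data
change_low, change_high = 0.05, 0.15   # practice-change rate as a range,
                                       # not one optimistic multiplier

low = target_learners * patients_per_learner * change_low
high = target_learners * patients_per_learner * change_high

# Sanity check against the rare-disease failure mode described above:
# the estimate should never exceed the plausible U.S. patient population.
plausible_us_population = 5_000        # hypothetical prevalence ceiling
assert high <= plausible_us_population, "estimate exceeds plausible population"

print(f"Estimated patients impacted: {low:,.0f} to {high:,.0f} per year")
```

The discipline matters more than the code: a reviewer who can see the denominator, the data source, and the formula can check every step, and the range makes clear where the evidence stops.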
This builds on an earlier brief on defining measurable outcomes before choosing formats. The downstream discipline is just as important: if the outcome was not defined in a way that can be measured, the final impact story will lean on assumptions. Would the patient-impact number still make sense if the reviewer saw the denominator, the data source, and the formula?
A provider-owned Write Medicine episode on gaps and learning objectives made a related point from the planning side: needs assessments become thin when they describe a clinical gap without specifying who experiences it, when it occurs in the workflow, and where care is being delivered. The episode called out audience role, timing from presentation through follow-up, community versus academic setting, rural-urban context, and workforce constraints such as aging specialist workforces.
That matters because the same clinical gap can require different education depending on where it sits. A diagnostic gap at initial presentation is not the same operational problem as a follow-up or monitoring gap. A community setting with workforce shortages is not the same learning environment as an academic center with subspecialty depth. Oncology and urology examples shaped the discussion, but the framework is portable.
The provider-owned source should not be read as broad clinician consensus. It is still a useful standard for internal quality control: can a grant reviewer tell which clinicians are affected, what part of practice is breaking down, and what the education is expected to change?
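To make that internal check concrete, here is a minimal sketch of a gap specification that forces those questions to be answered before drafting. The field names and example values are illustrative assumptions, not a published schema from the episode:

```python
from dataclasses import dataclass, fields

@dataclass
class GapSpec:
    """Illustrative needs-assessment fields drawn from the episode's framework."""
    clinical_gap: str        # what part of practice is breaking down
    audience_role: str       # who experiences the gap
    workflow_timing: str     # when it occurs: presentation, diagnosis, follow-up
    care_setting: str        # community vs. academic; rural vs. urban
    workforce_context: str   # constraints such as an aging specialist workforce
    expected_change: str     # what the education is expected to change

def unanswered(spec: GapSpec) -> list[str]:
    """Return the fields a grant reviewer could not answer from this spec."""
    return [f.name for f in fields(spec) if not getattr(spec, f.name).strip()]

# Usage: an empty field means the needs assessment is still too thin.
spec = GapSpec(
    clinical_gap="delayed diagnosis of metastatic disease",
    audience_role="community oncologists",
    workflow_timing="initial presentation",
    care_setting="rural community practice",
    workforce_context="",   # not yet specified; flagged below
    expected_change="earlier referral for molecular testing",
)
print(unanswered(spec))     # ['workforce_context']
```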
This week does not argue for smaller ambitions. It argues for claims that can be carried into a funding conversation without caveats appearing only after questions are asked. The strongest CME story may be the one that says less, shows the math, and makes clear where the evidence stops.
Sources
CMEpalooza session on outcomes extrapolation: details how broad multipliers and unvalidated self-report produce numbers that commercial supporters reject; recommends target-audience-only math and claims data where available.
Write Medicine episode on gaps and learning objectives: argues that CME writers must specify roles, workflow timing, community/academic/rural-urban setting, and workforce issues to produce accurate gap analysis and defensible program design.
Earlier brief: coverage of outcomes planning and its implications for CME providers.