CME Planning Still Stops at Objectives, Not Actions
A funder panel pressed CME teams to connect needs, design, and outcomes tightly enough to show how education changes practice.
The week’s clearest public signal was that funders are judging CME less by whether clinicians learn the evidence and more by whether the activity is built to change what they do. The evidence base is narrow: a single provider-owned CMEpalooza session with grant reviewers and funder-side education leaders. Still, the implications are portable across therapeutic areas.
In the CMEpalooza discussion, panelists drew a clear line between helping clinicians understand evidence and helping them apply it in practice. They described knowledge as necessary, especially when evidence is early or a product is new, but insufficient when the desired outcome is a change in decision-making, skill, or workflow.
For CME providers, the important point is not that every activity must promise behavior change. It is that the activity’s ambition has to match the evidence lifecycle and the stated gap. If the problem is awareness of new data, a knowledge-focused format may be appropriate. If the problem is application, hesitation, self-efficacy, or a system barrier, then a content-heavy activity with a post-test will look underbuilt.
That puts more pressure on the needs assessment. The panel emphasized outcomes data, learner insights, and practice consequences alongside literature scans and guideline updates. In fast-moving areas such as oncology and hematology, currency also matters: an old needs assessment can weaken an otherwise strong proposal if the clinical landscape has moved.
We saw a related pattern in an earlier brief on CME evaluation moving from knowledge checks to self-efficacy and real-world impact. This week’s sharper provider implication is that needs assessment and design now have to show the pathway from “clinicians do not know” to “clinicians can act here, despite these barriers.” The question for CME teams: does the proposal name the exact practice behavior it is trying to change, or does it stop at explaining the evidence?
The second theme was about proposal quality. Panelists described seeing familiar language—case-based, interactive, personalized, expert-led—used as shorthand for rigor. Their concern was not with those formats themselves. It was with proposals that list formats without explaining why each element is the right response to the gap.
That matters because grant review is becoming less tolerant of format-first proposals. A one-hour grand rounds session may be perfectly useful for awareness, but it is hard to defend as a vehicle for sustained performance change unless the proposal explains how practice, feedback, follow-up, and measurement will work. Likewise, a costly “robust” program needs more than branding; it needs a visible logic chain.
The panel also pushed beyond pre-post testing. Pre-post knowledge or competence data may still be part of the package, but reviewers described wanting insight: why learners responded as they did, what barriers remained, whether there is 60- or 90-day follow-up, and how the findings help explain ongoing educational need. In other words, outcomes are not only proof after the fact. They are part of the argument for why the design deserves support.
The concrete implication is simple: every major proposal should make the reviewer’s job easier. Connect the gap, objective, format, learner action, measurement plan, and follow-up insight in one coherent line. If the same needs assessment and learning objectives can be reused across six differently titled activities, the design is probably not specific enough.
The useful shift is not that funders want flashier education. They appear to want less theater around formats and more discipline around causality: what gap exists, why clinicians are stuck, what the activity asks them to practice, and how the provider will know whether anything changed. In a quiet week, that is still a clear operating message for CME teams: the strongest proposals will read less like activity menus and more like accountable learning plans.
Panelists repeatedly stressed that knowledge acquisition alone is insufficient; education must close skill gaps through deliberate practice, feedback, and design that targets what clinicians actually do. They noted that self-efficacy and workflow support are stronger predictors of behavior change than knowledge or generic confidence, and that needs assessments must evolve beyond literature scans to incorporate outcomes data, learner insights, and system-level barriers.