Research Years Need More Than Research Mentors
Earlier coverage of outcomes planning and its implications for CME providers.
A narrow provider-led week points to a concrete redesign: build evaluation around self-efficacy, practice change, and team-based care.
CME-provider conversations this week connected AI-enabled personalization, self-efficacy measurement, real-world outcomes, and interprofessional design into one outcomes problem. The evidence is narrow—provider-owned podcast content rather than independent clinician corroboration—but the implication is concrete: evaluation cannot sit at the end of an activity as a knowledge check and still prove strategic value.
This week’s public conversation was not clinicians asking for another topic. It was CME-provider voices arguing that education now has to show whether learners can apply what they learned, work across roles, and connect education to measurable care priorities.
Across Write Medicine’s year-end CME episodes, the through-line was clear: pre/post tests may still be useful, but they are too small to carry the outcomes burden alone. The discussion linked self-efficacy questions, practice improvement plans, reflective prompts, EHR or real-world data, Moore’s outcomes levels, and data visualization. That is a different build than adding a confidence question after the fact.
For providers, the design change is to put application inside the activity. A patient case should not only test recall; it can ask the learner to choose a next step, document a barrier, rate confidence in performing the behavior, and identify what would change in practice. A post-activity report should not only say knowledge improved; it should show how the activity was meant to move toward performance or patient-care measures.
The IPCE thread adds another layer. Provider voices pointed to ACCME data showing expansion of interprofessional activities and argued for designs that give physicians, specialists, educators, and other team members a shared case narrative plus role-specific objectives. That matters because team-based care cannot be evaluated as if every learner has the same decision rights, workflow, or patient touchpoint.
We saw a related pattern in an earlier brief on patient impact numbers that supporters will actually believe: outcomes credibility depends on showing the chain from learning to behavior to care impact. This week’s version is more operational. CME teams should ask: where, inside the activity, does the learner practice the behavior we later claim to measure?
The quiet-week caveat matters: this was a provider-led signal, not a broad clinician chorus. But it lands at a moment when trust in required professional learning is vulnerable. One clinician-side watch item this week questioned ABIM revenue growth and asked what value physicians are getting in return (source). A single post is not a trend, but it reinforces the same operating reality: when education is compulsory, expensive, or tied to credentials, CME providers have less room for vague value claims. The defensible answer is not more polished content. It is clearer evidence that learning changes what clinicians are prepared to do next.
Frames CME survival as requiring the unlearning of episodic models in favor of EHR-integrated, collaborative, outcomes-aligned approaches with explicit linkage to enterprise KPIs.
Details the shift from knowledge tests to self-efficacy metrics and real-world data (anti-VEGF, GLP-1), plus ACCME IPCE growth requiring layered team designs and patient narratives.
Documents sharp ABIM revenue growth ($106M projected) and questions the allocation of funds relative to quality impact on licensure and privileges.
"In 2022 @ABIMcert made 72 million dollars! In 2024 they will make about 106 million dollars! Imagine what they will make in 2030. What are they doing with all this extra money?"