From Milestones to Quintuple Aim: CME Must Now Prove Patient Outcomes
Clinician signals this week point to a tighter test for CME: reduce burden, use AI carefully, and diagnose barriers before building education.
Clinicians and CME voices this week converged on a simple pressure point: education has to fit the work, not ask clinicians to step outside it. The AI and micro-CME signal draws on both provider and clinician-facing sources; the needs-assessment signal is mainly provider-owned, so it should be treated as a planning cue rather than broad clinician consensus.
The sharper AI signal this week was not that clinicians want more technology in CME. It was that technology has to remove friction.
In a cardiology discussion about MOC and AI, the most concrete learning preference was for on-demand micro-CME tied to real clinical questions rather than broad, high-stakes testing. The same conversation treated AI with restraint: useful in some areas, but still limited by hallucinations, black-box models, bias in training data, and the need for real evaluation before trust scales (Medscape).
That extends the January 7 brief on AI content workflows. The conversation has moved from “Can AI help create content?” to “Can AI help clinicians learn with less wasted time, less irrelevant assessment, and more confidence in the output?”
The generational thread is narrower but useful. A surgeon-led X thread asked learners for input: “We are exploring Gen Z and Y’s learning preferences in medical education/ professional development for our @asco LDP project” (X). The point for CME teams is not to assume all younger clinicians want the same thing. It is to measure format preference directly and separate actual learner behavior from platform fashion.
For providers, the next AI pilot should have a burden metric attached: time saved, relevance improved, fewer irrelevant questions, cleaner credit capture, or better routing to the right learning unit. If a tool cannot show one of those, it is probably not ready to shape the learner experience.
The second signal came mainly from a provider-owned CPD podcast, so it should be weighted accordingly. Still, it names a real operational problem: too many needs assessments still read like evidence inventories when the actual barrier may be workflow, incentives, equity, patient preference, or team coordination.
The Write Medicine discussion argued that CME planning has to distinguish educational gaps from gaps education cannot solve alone. If the driver is an incentive problem, a process failure, or a preference-sensitive decision, the activity design should change—or the intervention may need to include non-educational partners (Write Medicine).
This supplies the operational layer beneath the January 21 brief on quintuple-aim outcomes. Population health, equity, cost, and patient experience are not outcome labels to add at the end of a proposal. They require a different front end: who is involved in the assessment, what barriers are named, and whether the education team has permission to say “education alone will not close this gap.”
The concrete implication is a planning workflow change. Needs-assessment templates should force a root-cause pass before topic approval: What part of the gap is knowledge? What part is team behavior? What part is system design? What part is patient preference or access? The hard question for CME teams is whether their current grant and editorial processes allow that answer to change the intervention.
CME teams will face a harder standard for every new requirement they place on clinicians: show why the burden is worth it. That same burden-versus-outcomes test surfaced in a radiation oncology discussion about quality payment programs and accreditation. The concern was not anti-quality; it was whether accreditation-linked payment adds effort without a clear patient-outcome connection, and whether conflicts of interest are being handled cleanly (The Accelerators Podcast; X). The through-line for CME providers is straightforward. Whether the tool is AI, micro-CME, needs assessment, or accreditation-adjacent quality work, clinicians are asking for the same thing: less performative process, more visible value.
Sources
Medscape: Cardiology-specific exploration of AI in MOC and clinician-facing technology. Outlines AI uses in content development, credit tracking, and risk prediction while stressing bias awareness, iterative Bayesian approaches, and a preference for micro-CME over traditional testing formats; highlights personalization potential alongside caution on over-hype.
Earlier coverage of learning design and its implications for CME providers.
Earlier coverage of outcomes planning and its implications for CME providers.
X: Clinician thread emphasizing Gen Z/Y preference for flexible, tech-enabled micro-learning and reduced time burden compared with conventional formats:
"Dear @medicalstudent @ResidentsMed @MedTweetorials @Medtwitteer. We are exploring Gen Z and Y’s learning preferences in medical education/ professional development for our @asco LDP project. We would greatly value your input! @SyedAAhmad5 @motazqadan"
The Accelerators Podcast: Radiation-oncology discussion of Donabedian model limitations and historical QRRO data showing outcome improvement without pay-for-performance penalties tied to accreditation.
X: Independent clinician thread raising COI and burden concerns when societies control both accreditation standards and revenue streams:
"QPP/MIPS did nothing to improve patient care quality--but it increased the amount of time we spent in front of our computers and spun off an entire industry of consultants"