Clinician Learning Brief

CME’s Next Product May Be a Path, Not an Event

Topics: Learning design, Outcomes planning, AI oversight
Coverage: 2026-01-26 to 2026-02-01

Abstract

CME and CPD voices framed education less as an event product and more as purpose-matched design for behavior change, with AI adding a concrete new training need: handling patient-brought AI information in the encounter.

Key Takeaways

  • CME and CPD voices spent the week describing education less as an event-plus-credit product and more as a sequenced system built to change behavior under real access constraints.
  • That reframing puts pressure on providers to choose modality after defining the job to be done, not before.
  • A concrete downstream implication is emerging in AI: clinicians need training for verifying and discussing patient-brought AI information during visits, not just orientation to the tools themselves.

The clearest development this week was a change in how CME voices described the job: less about packaging updates into hours and formats, more about helping clinicians do something differently in practice. Because the evidence comes mainly from CME, CPD, and educator sources rather than broad independent clinician conversation, this reads best as an industry reframing, not settled market consensus.

From event planning to behavior design

Across this week’s CME and educator sources, the argument was not just that lectures are insufficient. It was that credit hours and default formats are becoming a weak way to define the product itself. Leaders pointed toward competency, performance, and behavior gaps rather than information transfer alone, while also arguing that modality should follow the learning task and the learner’s access constraints, not institutional habit (European CME Forum; The Alliance Podcast; ASH News TV; Faculty Factory).

For providers, that is a product-definition issue. If the aim is movement from competence toward performance, the first design question is no longer "live or online?" but which parts of the behavior change require discussion, rehearsal, reinforcement, or simply easier access at moments when clinicians can actually engage. This extends our earlier brief on the session no longer being the whole product: the new wrinkle is that format choice is being treated as a consequence of purpose, not the starting point.

This is broadly relevant but still narrow in sourcing, since the case is being made mainly from inside CME and education leadership. Even so, the operating question is concrete: are portfolios still organized around event inventory and credit packaging, or around the specific practice behaviors each learning path is meant to change?

AI is moving into the visit

This week’s AI discussion was less about whether clinicians should use AI tools and more about what happens when patients arrive with AI-generated health information in hand. Sources described patients bringing chatbot outputs into care discussions, clinicians having to sort useful synthesis from misleading claims, and the encounter itself becoming the place where verification and explanation have to happen (Patient Empowerment Network; Urology Times Podcasts; European CME Forum; AI and Healthcare).

That matters for CME because generic AI literacy will not be enough if the practical problem is encounter management. Clinicians may need habits for checking claims, clearer thresholds for what must be independently verified, and language for responding without either endorsing the output or dismissing the patient's effort. This week's examples lean toward oncology, patient education, and urology, but the provider implication is portable: AI is creating a communication-and-judgment problem, not just a technology one.

The evidence is still mixed, and not all of it reflects direct CME demand. But the design implication is concrete. Providers should test whether current AI curricula include encounter-based cases that teach clinicians how to verify patient-brought AI information and explain uncertainty in plain language.

What CME Providers Should Do Now

  • Audit one major therapeutic area portfolio and rewrite its offerings by intended practice behavior, then check whether the current formats still make sense.
  • Require teams to specify the verification step and the communication step separately in any new AI-related activity design.
  • Review outcomes plans before launch and ask one simple question: what workplace barrier or encounter behavior is this activity actually trying to change?

Watchlist

  • Watch whether vetted content repositories become a repeatable answer to lost real-world exposure in training. The current public evidence is too specialty-specific and society-led to elevate beyond monitoring, but the hematology example is worth tracking for cross-specialty echoes (ASH News TV).

Turn learner questions into outcomes data

ChatCME surfaces the questions clinicians actually ask, so you can build activities that close real knowledge gaps.

Request a demo