Purpose-Matched Design May Be the Missing Layer in Practice Change
Earlier coverage of learning design and its implications for CME providers.
CME and CPD voices framed education less as an event product and more as purpose-matched design for behavior change, with AI adding a concrete encounter-training need.
The clearest development this week was a change in how CME voices described the job: less about packaging updates into hours and formats, more about helping clinicians do something differently in practice. Because the evidence comes mainly from CME, CPD, and educator sources rather than broad independent clinician conversation, this reads best as an industry reframing, not settled market consensus.
Across this week’s CME and educator sources, the argument was not just that lectures are insufficient. It was that credit hours and default formats are becoming a weak way to define the product itself. Leaders pointed toward competency, performance, and behavior gaps rather than information transfer alone, while also arguing that modality should follow the learning task and the learner’s access constraints, not institutional habit (European CME Forum; The Alliance Podcast; ASH News TV; Faculty Factory).
For providers, that is a product-definition issue. If the aim is movement from competence toward performance, the first design question is no longer "live or online?" It is which parts of the behavior change require discussion, rehearsal, reinforcement, or simply easier access when clinicians can actually engage. This extends our earlier brief on the session no longer being the whole product: the new wrinkle is that format choice is being treated as a consequence of purpose, not the starting point.
This is broadly relevant but still narrow in sourcing, since the case is being made mainly from inside CME and education leadership. Even so, the operating question is concrete: are portfolios still organized around event inventory and credit packaging, or around the specific practice behaviors each learning path is meant to change?
This week’s AI discussion was less about whether clinicians should use AI tools and more about what happens when patients arrive with AI-generated health information in hand. Sources described patients bringing chatbot outputs into care discussions, clinicians having to sort useful synthesis from misleading claims, and the encounter itself becoming the place where verification and explanation have to happen (Patient Empowerment Network; Urology Times Podcasts; European CME Forum; AI and Healthcare).
That matters for CME because generic AI literacy will not be enough if the practical problem is encounter management. Clinicians may need habits for checking claims, clearer thresholds for what must be independently verified, and language for responding without either endorsing the output or dismissing the patient's effort. This week's examples lean toward oncology, patient education, and urology, but the provider implication is portable: AI is creating a communication-and-judgment problem, not just a technology one.
The evidence is still mixed, and not all of it reflects direct CME demand. But the design implication is concrete. Providers should test whether current AI curricula include encounter-based cases that teach clinicians how to verify patient-brought AI information and explain uncertainty in plain language.