Clinicians Are Judging CME by Whether It Teaches the Work
Clinicians are filtering educational value through real-world fit: AI has to reduce burden in the moment, and communication training has to account for hierarchy.
Clinicians this week were blunt about what earns a place in their day: help has to fit the moment without adding burden or exposure. Across discussions of AI and of communication training, the same point surfaced: design fails when it asks for more effort, more risk, or more interface friction than clinicians will tolerate.
Across clinician-facing discussions in oncology and radiology-adjacent settings, AI was framed less as a sweeping change than as useful help for specific tasks: easing documentation, reducing administrative drag, supporting safer reads, or surfacing authoritative answers quickly. The strongest filter was simple: if a tool adds clicks, interrupts care, or forces another interface, clinicians lose patience. That came through most clearly in podcast discussions; an emergency medicine X clip raised a similar friction point but does not carry the same weight as independent clinician evidence (Treating Together, AJR Podcasts, X video).
For CME providers, that means AI education should stop where the use case stops feeling plausible in a real care setting. As our earlier brief on AI use training noted, this series has already tracked the move away from AI literacy alone. This week adds a narrower test: does the example actually save steps while preserving confidence in the answer? The evidence is specialty-led rather than broad clinician consensus, but the design implication is still useful. Build around one clear point of friction, show the speed-versus-reliability tradeoff, and ask whether the featured tool would still feel worth using if it required extra app-switching or duplicate checks.
The second signal was about who will speak honestly in front of whom. A medical education research discussion pointed to evaluation pressure as a reason learners stay silent, especially when speaking up could affect assessments. A surgery discussion added a related point from peer-based qualitative work: residents may disclose more to near-peers than to faculty because the power dynamic changes what feels safe to say (Medical Education Podcasts, Behind The Knife). One source sits specifically in racism and reconciliation work, and that context matters; the broader takeaway is about training design under hierarchy, not a generic professionalism claim.
For CME teams, the implication is straightforward. Communication, bias response, professionalism, and escalation training can underperform when the activity assumes learners will reveal uncertainty or challenge behavior in front of people who evaluate them. This sharpens the mechanism behind our earlier brief on why communication training stops working when it stays episodic: hierarchy and evaluator risk change who participates and how candidly. If candor is part of the learning goal, peer-first discussion, facilitated small groups, evaluator-separated reflection, and explicit escalation language may matter as much as the case itself. The practical question is whether your format asks learners to take interpersonal risks that the room has not been designed to support.