What Clinicians Need From AI Near Decisions
Earlier coverage of accreditation operations and their implications for CME providers.
Some CME innovation may be getting blocked less by accreditation rules than by teams’ own assumptions about them.
Some of the biggest limits on CME right now may be assumptions about what teams are allowed to build and what clinicians can safely offload. This week points to two issues for providers to test: self-imposed accreditation constraints and AI learning designs that make help easy without protecting expertise. The evidence is credible but narrow, so these are operational signals to audit, not broad market conclusions.
Accreditation itself may not be the main brake on innovation. In a single but highly relevant authority interview, an ACCME leader discussed recurring myths around AI use, planning committee composition, physical separation, and disclosure requirements, arguing that providers often restrict themselves beyond what the standards actually require (The Alliance Podcast).
For CME providers, that matters because delays may be coming from local interpretation, not formal prohibition. If a team assumes it cannot test AI-assisted workflows, alter committee structures, or simplify disclosure processes in low-risk contexts, product and operational changes can stall before they are even scoped. This is a credible operational signal to test, not settled market consensus, because it rests on one authority-source perspective rather than broad independent clinician conversation.
The implication is straightforward: review which "we can’t do that" statements in your organization are tied to written standards and which are inherited habit. The bottleneck may be policy interpretation inside the organization, not accreditation itself.
The AI question is no longer just whether clinicians can use these tools safely, but whether regular use weakens the reasoning and interpretive skills they still need to own. Across medical education and specialty discussions, sources described AI as useful support while also stressing verification, supervision, and the risk of de-skilling if learning design does not compensate (Medical Education in 2025: AI’s Double-Edged Sword, Artificial Intelligence in Urology: What’s Here, What’s Next?, Out of the Box: LLMs in Radiology, The Radiology Review Podcast).
The examples are partly specialty-led, and the source mix does not fully establish independent clinician consensus. Even so, the provider implication is clear: a generic session on AI capabilities or governance is less responsive to this concern than learning design that requires clinicians to verify outputs, decide when to override them, and practice the parts of judgment they cannot safely outsource. This extends our earlier brief, What Clinicians Need From AI Near Decisions, from oversight into skill preservation.
For CME teams, the question is whether your AI activities teach convenience or competence. If a clinician finishes the program more willing to use the tool but with weaker habits of checking, escalation, or independent reasoning, the design may be solving the wrong problem.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.
Request a demo