What Clinicians Need From AI Near Decisions
Earlier coverage of accreditation operations and their implications for CME providers.
A narrow but useful signal: some physicians are arguing that credit should reflect real learning between exams, while acceptable AI use remains concentrated in low-risk, checkable tasks.
Some physicians are drawing a sharper contrast between how they actually stay current and how competence is still formally recognized. The evidence is narrow and partly specialty-linked, but it raises a concrete provider question: can CME products document longitudinal, practice-linked learning better than an exam-centered model does?
In one hematology-oncology leadership conversation, distributed in both video and podcast form, the argument was not just that exams are unpopular. It was that ongoing competence may be better reflected by what physicians already do: attend conferences, follow up on literature, answer clinic-driven questions, write, and work on projects with peers.
That matters for CME providers because it shifts the issue from format convenience to recognition of real learning activity. The question is whether providers can package education as a documented mix of activity, reflection, and follow-up rather than as isolated events.
This is still an emerging signal. The source base is effectively one conversation in two formats, and it sits inside a specialty society MOC debate, so it should not be treated as broad physician consensus. But the implication is concrete enough: if credit systems move even modestly toward portfolio logic, which of your current products could already serve as evidence of longitudinal learning?
The AI theme here is narrower than in recent editions. In a clinician discussion about systematic reviews on X and a longer YouTube discussion, the acceptable role for AI was bounded technical help with a clear human check, not core evidence judgment where mistakes could distort conclusions. Separately, a CPD-focused podcast highlighted faculty disclosure management as a useful test case because it is tedious, universal, and compliance-sensitive, while still stressing minimal data exposure, human review, and variable outputs.
For CME providers, that argues against broad AI messaging and toward narrow, auditable use cases. As our recent brief on what AI really optimizes suggested from a different angle, support weakens as soon as the task shifts from inspectable assistance to interpretive evidence work.
The caveat matters here too. The clinician-trust evidence is stronger than the operations example, and the disclosure workflow point is still a single-source conference-preview discussion. Even so, the operator question is straightforward: where are you using AI only when staff can verify the output step by step, and where are you still asking it to do work your own educators would hesitate to trust?