Accreditation Policy Shifts Mean CME Providers No Longer Have to Choose Between Compliance and Meaningful Learning
Earlier coverage of accreditation operations and their implications for CME providers.
ACCME’s permissive stance gives providers room to test formats while AI education shifts toward local validation and measurable outcomes.
ACCME’s leadership signaled this week that providers can experiment with formats more freely because learner demand remains strong when offerings match real needs. The accreditation signal is authoritative but single-source; the stronger provider takeaway is not that every pilot is justified, but that cautious over-compliance may now be a bigger constraint than the standards themselves. The adjacent AI conversation points to the other half of the same operating model: pilots need validation habits built into the learning design from the start.
In ACCME’s March 3 episode of Coffee with Graham, Graham McMahon described a permissive framework that does not rely on a review-and-approve model for every educational activity, but instead uses a trust-and-verify approach to allow creativity and evolution. In the same discussion, he tied that stance to record levels of continuing education consumption and argued that learners respond when providers meet their needs (Buzzsprout podcast).
For CME teams, the important point is operational. If accreditation is not asking providers to freeze formats until every detail is pre-cleared, then the bottleneck moves inside the provider organization: governance, risk tolerance, production cycles, and willingness to learn from small failures.
That does not mean standards become loose. It means providers can separate compliance discipline from format conservatism. A live-social extension, a workflow-embedded tool, or a shorter recurring format can be tested if the purpose, independence, learner need, and evidence trail are clear. The concrete question for teams: which current program is being protected by habit rather than by an actual accreditation requirement?
The AI education signal was sharper than a generic call for clinician AI literacy. Nigam Shah argued that healthcare AI cannot be judged only by model performance at launch; it needs local validation, continuous monitoring, and a defined benefit that can be checked after deployment (YouTube video). That extends a thread we saw in an earlier brief on AI oversight becoming a workflow requirement, but the emphasis here is more concrete: clinicians need to know what to test, when to escalate, and when a tool is no longer delivering the promised benefit.
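To make that monitoring habit concrete, here is a minimal sketch of the kind of local benefit check Shah's framing implies. It is not taken from any of the cited talks; the names, metric, and thresholds are illustrative assumptions. The idea: re-measure a deployed tool against the benefit that justified its deployment, and escalate when the gap exceeds a tolerance set in advance.

```python
# Illustrative local validation / drift check for a deployed clinical AI tool.
# All names, metrics, and thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class BenefitCheck:
    metric_name: str       # e.g., sensitivity measured on recent local cases
    promised_value: float  # the benefit claimed when the tool was deployed
    tolerance: float       # allowed shortfall before governance is escalated


def evaluate(check: BenefitCheck, observed_value: float) -> str:
    """Turn a re-measured metric into an action: keep, review, or escalate."""
    shortfall = check.promised_value - observed_value
    if shortfall <= 0:
        return "ok: tool is meeting its promised benefit"
    if shortfall <= check.tolerance:
        return "review: benefit is slipping; schedule a local re-validation"
    return "escalate: promised benefit no longer holds; pause use and notify governance"


# Example: a triage model deployed on a promised 0.90 sensitivity,
# re-measured quarterly on local data.
check = BenefitCheck(metric_name="sensitivity", promised_value=0.90, tolerance=0.05)
print(evaluate(check, observed_value=0.82))  # prints the escalate message
```

The point of the sketch is its shape, not its numbers: the promise, the tolerance, and the escalation path are all decided before deployment, which is exactly the behavior the education has to rehearse.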
A separate Medscape discussion with Ami Bhatt reinforced the same point from a clinical AI adoption lens. AI may help with information navigation, imaging triage, translation, and care-team support, but Bhatt’s caution was about scale: systems need ways to measure whether AI is doing what it was meant to do, whether outcomes are better, and whether algorithm changes or cybersecurity risks are exposing patients or clinicians to harm (YouTube video).
For CME providers, this changes the shape of AI education. A module that defines generative AI, lists risks, and ends with a comfort question is no longer enough. The learning task should look more like practice: define the use case, name the local context, choose the benefit metric, identify drift or bias signals, and decide what the clinician should do when the tool behaves unexpectedly. The concrete design question: can learners rehearse the governance behavior, or are they only hearing about it?
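One way to build that rehearsal into an activity, sketched here under assumptions (this is not a published CME template, and every field name is hypothetical), is a structured worksheet the learner completes for one AI tool in their own workflow:

```python
# Hypothetical worksheet a learner fills in for a single AI tool they use.
# The fields mirror the design questions in the paragraph above.

from dataclasses import dataclass, field


@dataclass
class AIUseCaseWorksheet:
    use_case: str                # what the tool is supposed to do
    local_context: str           # population, setting, and workflow step
    benefit_metric: str          # how benefit will be checked after deployment
    drift_signals: list[str] = field(default_factory=list)  # bias/drift warning signs
    escalation_action: str = ""  # what the clinician does when the tool misbehaves


# Example entry for an imagined documentation tool.
worksheet = AIUseCaseWorksheet(
    use_case="draft discharge instructions",
    local_context="community hospital; many non-English-speaking patients",
    benefit_metric="clinician edit rate on drafts stays below 20%",
    drift_signals=["rising edit rate", "mistranslations flagged by interpreters"],
    escalation_action="stop using drafts; report to the AI governance committee",
)
print(worksheet.escalation_action)
```

A blank metric or escalation field is itself the teaching moment: it shows the learner exactly where the governance behavior has not yet been defined.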
The useful shift is not that CME providers received permission to be reckless. It is that the permission structure and the measurement burden are now moving together. Accreditation leadership is telling providers they can evolve; AI experts are reminding the field that evolution without local proof is not enough.
Sources
Graham McMahon states that an open framework allows providers to innovate rapidly, while record learner demand validates that needs-responsive offerings succeed; mission focus must be maintained amid external shocks (Buzzsprout podcast).
Nigam Shah stresses local validation, ongoing drift monitoring, and firm benefit assessment rather than blanket accuracy claims, and warns against the Turing trap of merely automating existing tasks (YouTube video).
Ami Bhatt reinforces workflow integration requirements and the need for defined benefit metrics when deploying LLMs for translation and trial matching (YouTube video).