Clinicians Draw Lines Between Credit and Useful Learning
ABIM’s MOC point removal and NBPAS recognition are prompting clinicians to separate credit from useful learning, with longitudinal formats and AI guardrails offering clearer alternatives.
ABIM’s removal of the two-year MOC point requirement, along with a major dialysis provider’s acceptance of NBPAS certification following CMS recognition, gave clinicians a concrete reason to separate credit from useful learning. The signal is strongest in internal medicine, hem-onc, and nephrology, but the rest of the week’s coverage pointed to the same provider question from different angles: format design and AI trust.
A practicing hem-onc physician framed ABIM’s removal of the two-year MOC point requirement and DaVita’s acceptance of NBPAS certification after CMS recognition as a small but meaningful correction in physician certification. The evidence here is narrow—a single independent X thread—but the policy moves are national, and the conversation is strongest in specialties where MOC burden has long been a flashpoint.
For CME providers, the implication is not that MOC credit stops mattering. It is that credit alone is a less secure answer to the learner’s question: why should I spend scarce time here? When alternative certification pathways gain employer or regulatory recognition, clinicians have more room to distinguish between activities that satisfy a requirement and activities that help them practice better.
That changes the portfolio question. Providers should be able to explain which activities produce evidence of learning, competence, or practice change even when the learner is not primarily chasing a point total. The useful internal test is simple: which of your activities would still be worth defending if MOC points were removed from the value proposition? The physician thread is public here.
A JCEHP companion podcast on applying the Project ECHO model to CBT for psychosis described a very different answer to the same value question. The model combines an initial workshop, repeated case-based consultation, peer learning, behavioral rehearsal, and long follow-up. In the example discussed, participants continued in biweekly ECHO clinics over a 12-month period, with participation records, brief satisfaction and confidence checks, micro-skill assessment, fidelity review, and patient health measures.
This is provider- and journal-affiliated educational content, not broad independent clinician chatter. Still, it is useful because it shows how a CME format can be built to generate evidence over time rather than bolt outcomes onto the end of a one-off event.
The broader lesson is not “run an ECHO.” It is that longitudinal learning can solve two operator problems at once: it reduces travel and access friction, and it creates repeated moments where competence and performance can be observed. For CME teams, the question is whether the hardest practice-change goals in the portfolio are being served by formats long enough to see whether clinicians can actually do the work. The ECHO discussion is available here.
An oncology podcast discussion on AI made the trust problem concrete. A clinician described asking a generalist model to summarize trial information and getting some answers right but one materially wrong. The response from the AI expert was blunt: when users ask open-ended questions and assume the model already contains current, correct knowledge, hallucination is predictable. Performance improves when the model is given trusted documents—guidelines, recent trials, or other curated materials—and asked to reason from those sources.
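The contrast can be made concrete in a few lines. The sketch below is illustrative only, assuming a hypothetical `ask_model` helper in place of any specific vendor SDK; the trial name and prompt wording are invented for the example.

```python
# Sketch: ungrounded vs. document-grounded querying of an LLM.
# `ask_model` is a hypothetical placeholder, not a specific vendor API.

def ask_model(prompt: str) -> str:
    # Stub; wire this to your provider's chat/completions endpoint.
    return f"[model response to {len(prompt)}-character prompt]"

def ungrounded_query(trial_name: str) -> str:
    # Open-ended question that assumes the model already contains
    # current, correct trial knowledge -- the predictable-hallucination
    # failure mode described above.
    return ask_model(f"Summarize the key efficacy results of {trial_name}.")

def grounded_query(trial_name: str, trusted_text: str) -> str:
    # The model is handed curated source text (a guideline excerpt or
    # trial report) and instructed to reason only from that text.
    return ask_model(
        "Using ONLY the source text below, summarize the key efficacy "
        f"results of {trial_name}. If the source does not contain the "
        "answer, say so explicitly.\n\n"
        f"SOURCE:\n{trusted_text}"
    )
```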
This is a single oncology-framed source, but the learning-tool implication is broader. CME teams experimenting with chatbots, faculty-support tools, content summarization, or case-based assistants should not treat prompt quality as the main safety layer. The safer architecture starts with curated ingestion, visible source boundaries, documented human review, and disclosure of what the tool was used for.
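What that architecture can look like in practice is sketched below; the class names, fields, and publication check are illustrative assumptions, not a reference implementation from the podcast.

```python
# Sketch: curated ingestion, visible source boundaries, and a
# documented human-review gate. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class CuratedSource:
    source_id: str  # e.g. a guideline or trial identifier (hypothetical)
    title: str
    text: str

@dataclass
class ToolAnswer:
    text: str
    cited_source_ids: list[str] = field(default_factory=list)
    reviewed_by: str | None = None  # the documented human-review step

def safe_for_learners(answer: ToolAnswer,
                      library: dict[str, CuratedSource]) -> bool:
    # Learner-facing only if every citation resolves to a curated
    # source and a named reviewer has signed off.
    cites_ok = bool(answer.cited_source_ids) and all(
        sid in library for sid in answer.cited_source_ids
    )
    return cites_ok and answer.reviewed_by is not None
```

The design point is that provenance and review are enforced in the pipeline itself, not left to prompt wording.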
We saw a related pattern in an earlier brief on AI oversight as a workflow requirement: the hard part is not access to AI, but deciding who verifies it, against what source, and before which learner-facing moment. This week’s oncology discussion adds a sharper rule: if the tool cannot show where its answer came from, it should not be treated as clinical learning infrastructure. The discussion is available here.
This week’s through-line is that clinicians are being offered more ways to separate meaningful learning from administrative compliance. Certification reform weakens the assumption that points are the main product. Longitudinal formats show how CME can document progress beyond attendance. AI tools force a trust question before they can become part of the learning workflow. For senior CME teams, the question is portfolio-level: are you still organizing around credit delivery, or around proof that the learning was worth a clinician’s time?
Independent practicing clinician thread documents the ABIM policy change and CMS recognition of NBPAS, linking both directly to reduced administrative burden.
"A step, albeit small one still, in the right direction. People's voice is heard. Social media empowers transparency, and eradicates monopoly in society. 🎉🎊🥳🙏One of many to be Thankful for ... in this 2024 Thanksgiving season. Help spread the news. And support the cause."
Podcast describes ECHO mechanics and measurable gains in competence, fidelity, and patient outcomes when paired with structured frameworks.
Clinician-educator dialogue shows the performance gap between ungrounded and document-grounded LLM queries and the necessity of human verification.