Generative AI Moves Into Clinical Teaching Workflows, Raising Immediate Burden and Trust Questions
Earlier coverage of learning design and its implications for CME providers.
Clinician frustration with duplicative modules and recertification tests is creating a concrete opening for CME providers to offer streamlined, outcomes-tracked alternatives.
Clinicians were blunt this week: duplicative online modules and recertification-style exams are being framed as lost clinical time, not meaningful learning. The loudest examples came from oncology and internal medicine voices, along with one supporting CME-provider podcast, but the burden argument was broader than a single specialty.
A clinician thread described the “sheer number” of duplicative mandatory online modules across clinical care, research, and employment as unsustainable, with replies tying the problem directly to patient care time. One reply put the tradeoff plainly: “if I had more time (not taken by this) that could improve patient care.” (source)
That matters because the objection is no longer just that required training is annoying. Clinicians are questioning whether repeated modules and trivia-style assessments are proving anything that matters. A separate post made the AI connection explicit, asking what recertification exams prove when AI can ace multiple-choice questions with reference access. (source)
A provider-owned CME discussion made a similar point about MOC-style testing: “Not actually testing what you're doing.” (source) Because that source is provider-owned, it should not be treated as independent clinician consensus. But it does sharpen the operational point: CME can only credibly become the lower-burden alternative if it helps employers, boards, and health systems retire duplicative requirements rather than relabel them.
This extends an earlier brief on ABIM certification complaints, but the emphasis has moved from a specific certification dispute to a broader replacement question. Which required activities can CME providers help retire, and what outcome evidence would make that acceptable to the institution requiring them?
The AI thread this week was practical, not utopian. Clinicians and education-adjacent voices pointed to AI for rote tasks: summarizing information, drafting content, helping with prompt-driven work, and potentially reducing the time spent on low-value modules. But they also emphasized human review for accuracy, referencing, creativity, and final judgment.
That distinction matters for CME teams. In a medical writing and education discussion, AI was framed as something professionals need to learn to work with, including prompt engineering, while still requiring human double-checking for accuracy and references. (source) The most useful CME response is not an abstract AI overview. It is short, task-specific training that teaches clinicians when AI can reduce first-pass work and when a human must verify the output.
For providers, this is also a product design issue. If CME is positioned as a lower-burden alternative to rote requirements, AI can help personalize pathways, summarize prior performance, and support documentation. But final sign-off, outcome interpretation, and clinical relevance still need human ownership. The concrete question is where AI can remove repeat work without weakening the evidence that learning changed practice.
A useful contrast came from a single podcast on interprofessional continuing education for quality improvement and patient safety. The speakers argued for CE built around team processes, patient safety events, and the roles of nurses, pharmacists, technicians, patients, and caregivers—not just biomedical updates delivered to one profession. (source) That is only one source, so it belongs as a watch item rather than a main claim. Still, it clarifies the week’s larger lesson. Clinicians are not rejecting education. They are rejecting repeated requirements that do not show why they matter. The CME opportunity is to make required learning feel less like a time sink by tying it to real decisions, real teams, and evidence that something improved.
Physicians describe mandatory modules as draining spare time with no proven outcome improvement, and note AI can already pass MOC-style exams.
"The sheer number and rapid increase in the number of (often duplicative) mandatory online training modules for clinical care, research, and institutional employment is completely unsustainable @AmerMedicalAssn"
MOC criticized for testing trivia rather than real practice; calls for CME-based alternatives.
Reinforces duplicative training across clinical, research, and employment domains as unsustainable.
"With AI acing (surpassing humans) multiple choice questions what is the point of recert exams with uptodate access only?@ABIMFoundation @VincentRK"Open source
Details prompt engineering and automation of note writing, module completion, and literature synthesis, with human review required for accuracy.
Argues IPE and high-reliability principles can break generational and disciplinary silos via team-based grand rounds.