AI Assistance Is Quietly Eroding Core Clinical Skills
Simulation educators are calling for structured frameworks to govern AI use in CME rather than scattered tool demos, and a new dual eye-tracking study shows that extended coaching outperforms expert modeling for dynamic procedural skill gains.
A simulation journal club discussion this week framed AI use in healthcare simulation as scattered rather than systematized, with examples spanning scenario writing, avatars, data analysis, feedback, and dissemination. The proposed structure separates five simulation domains — education, assessment, faculty development, translational simulation, and research — from cross-cutting applications such as design, delivery, data collection, evaluation, and dissemination (Simulcast Journal Club).
For CME providers, the important part is not the taxonomy itself. It is the governance layer beneath it: ethics, bias, consent, AI literacy, cybersecurity, and accountability. A related urology conference conversation made the same problem concrete outside simulation: AI can make clinical information more accessible, but it also raises risks around hallucination, misrepresentation, and unauthorized or biased reuse of guideline material (GU Cast; YouTube).
The caveat is clear: these are mostly educator and conference voices, not a broad sample of independent practicing clinicians. But the implication is still portable. Simulation-based CME that uses AI for cases, avatars, feedback, summaries, or assessment should not treat those uses as isolated production choices. Each use should have a named purpose, a faculty-development requirement, and a risk check.
We saw a related pattern in an earlier brief on clinicians naming the real barriers to AI adoption in practice: trust problems are often operational before they are philosophical. For CME teams, the question is simple: can faculty explain where AI enters the learning experience, what it is allowed to do, and how learners or instructors should challenge it?
The second signal came from a randomized dual eye-tracking study in sonography education. Learners assigned to extended coaching outperformed those assigned to extended modeling by 12% in dynamic image interpretation, with better time efficiency in tasks that required translating perception into action (Medical Education Podcasts).
This is not a license to generalize across all procedural CME. The study was small, sonography-specific, and based on a brief training period. The authors also noted that coaching benefits were less clear for static tasks. Still, it gives CME teams a useful design challenge: many hands-on workshops still protect too much time for expert demonstration because it feels efficient and faculty-centered.
For dynamic procedural skills, efficiency may come from earlier supervised learner action, not longer modeling. That means agenda design should identify which minutes are for expert thinking aloud, which minutes are for learner attempts, and which minutes are for targeted correction. A strong workshop is not the one with the most polished demonstration; it is the one where faculty can see enough learner performance to intervene.
The concrete question for CME teams: in the next procedural activity, what proportion of faculty time is spent performing expertise versus watching learners perform and coaching the next attempt?
The common thread is control. AI tools and expert demonstrations both look useful when they are visible, polished, and easy to schedule. But this week’s education-led signals point CME providers toward the less glamorous work: defining faculty rules, protecting learner autonomy, and measuring whether simulation time produces better performance rather than better presentations.