AI Assistance Is Quietly Eroding Core Clinical Skills
Earlier coverage of AI oversight and its implications for CME providers.
Shadow GenAI use by clinicians and patients has created an urgent triadic decision-making gap that standard AI literacy modules do not cover.
Clinicians and patients are independently using commercial GenAI for diagnosis, documentation, and information without training or disclosure. The resulting triadic conversation now sits inside routine encounters, requiring CME to rehearse verification habits and human-override checkpoints rather than tool familiarity alone. The two supporting signals on participation and procedural retention come from single education-focused audio papers and are treated here as design hypotheses.
The sharpest signal came from a BMJ discussion of “shadow AI” in the consultation room. In UK GP survey data, use of commercial GenAI tools for clinical tasks reached 25%, including documentation, differential diagnosis, referrals, and treatment options; 95% of respondents said they had no training. Patients are also arriving with AI-shaped explanations that may be more fluent than standard web search results.
That changes the learning problem. CME teams cannot assume AI use is sanctioned, visible, or confined to back-office documentation. The encounter may now include what one BMJ speaker called a third element: “And now with a third element, informed at all levels and all processes by AI.” In parallel, a JAMA+ AI conversation described rapid institutional uptake alongside gaps in accuracy, bias, and monitoring.
For providers, the implication is rehearsal: how to ask whether AI was used, how to respond when a patient brings an AI-generated differential, how to disclose clinician use without damaging trust, and when to override a plausible answer. We saw a related pattern in an earlier brief on AI disclosure and patient trust; this week’s difference is that the AI may be present before anyone names it.
The concrete question: do current AI activities train clinicians for the conversation in which AI is already in the room but not yet disclosed?
A Medical Education audio paper argued that low uptake of mentorship, wellness, faculty development, and CPD may reflect habitual non-participation rather than lack of awareness. The clearest example was mentorship: despite known benefits, only a minority of trainees report having a mentor, and the authors frame the gap through System-1 habits, cognitive load, and choice architecture.
This is a single academic source, so it should not be treated as proof that every participation problem has the same cause. But the provider implication is useful: if clinicians are overloaded, the default action is often no action. A well-designed program that sits outside the clinician’s normal path may still lose to habit.
For CME teams, that means registration and continuation are part of instructional design. Default enrollment, opt-out pathways, peer norms, timely prompts, visible social proof, and reduced friction should be tested against actual participation and completion—not assumed to be “marketing.”
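One way to treat enrollment design as part of the program is to measure it the same way. The sketch below, using entirely hypothetical cohort counts and an illustrative two_proportion_z helper, compares participation and completion between an opt-in arm and a default-enrollment (opt-out) arm; none of the numbers come from the cited sources.

```python
# A minimal sketch, assuming hypothetical cohort counts: comparing participation
# and completion between an opt-in arm and a default-enrollment (opt-out) arm,
# so enrollment design is judged against behavior rather than intent.
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z statistic for a difference in completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical cohorts: clinicians invited, those who started, those who completed.
opt_in  = {"invited": 400, "started": 92,  "completed": 61}
opt_out = {"invited": 400, "started": 233, "completed": 148}

for label, cohort in (("opt-in", opt_in), ("default enrollment", opt_out)):
    participation = cohort["started"] / cohort["invited"]
    completion = cohort["completed"] / cohort["invited"]
    print(f"{label:>18}: participation {participation:.1%}, completion {completion:.1%}")

z = two_proportion_z(opt_out["completed"], opt_out["invited"],
                     opt_in["completed"], opt_in["invited"])
print(f"completion-rate z statistic (opt-out vs opt-in): {z:.2f}")
```

The point of the comparison is not the statistic itself but the habit: every enrollment-design change gets evaluated against observed starts and completions, not against stated interest.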
The week’s procedural-learning signal came from a Medical Education audio paper on POCUS competency retention. Short ultrasound courses under four hours were associated with a median competency decline of 11.8%, compared with 2.6% for longer courses. Acquisition skills showed the steepest decay, and integrated approaches using hands-on practice, simulation, case review, discussion, and clinical context showed stronger retention.
There are important limits: the source is a single audio paper, the underlying studies were heterogeneous, many used lower-level evaluation designs, follow-up windows were often short, and medical students were a substantial share of study populations. Still, the provider lesson is broader than ultrasound. Skills that require psychomotor performance and judgment do not survive on exposure alone.
For CME teams, the hard question is commercial and operational as much as educational. If a procedural course is sold as a short event, what does the provider know about competence six weeks later? A stronger model would include spaced refreshers, supervised application, delayed recall checks, and outcomes reporting that measures retention rather than immediate post-course confidence.
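As a rough illustration of what retention-focused outcomes reporting might look like, the sketch below uses made-up immediate and six-week scores to compute each learner's relative decline and flag anyone under an assumed competency threshold for a spaced refresher; the threshold, scores, and identifiers are illustrative only.

```python
# A minimal sketch, assuming made-up scores: reporting retention at a delayed
# recall check rather than immediate post-course performance, and flagging
# learners below a hypothetical competency threshold for a spaced refresher.
THRESHOLD = 0.80  # assumed pass mark on a 0-1 competency scale

learners = [
    # (learner_id, immediate_post_score, six_week_score) -- illustrative values
    ("A01", 0.91, 0.78),
    ("A02", 0.88, 0.86),
    ("A03", 0.95, 0.70),
]

declines = []
for learner_id, post, delayed in learners:
    decline = (post - delayed) / post        # relative decay since course end
    declines.append(decline)
    refresher = "yes" if delayed < THRESHOLD else "no"
    print(f"{learner_id}: post {post:.2f}, six-week {delayed:.2f}, "
          f"decline {decline:.1%}, refresher needed: {refresher}")

median_decline = sorted(declines)[len(declines) // 2]
print(f"median relative decline: {median_decline:.1%}")
```

Reporting built this way answers the commercial question directly: it shows what learners can still do weeks after the course, not how confident they felt on the way out.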
The common thread is that clinician behavior is not waiting for formal education design to catch up. AI is being used before training and disclosure norms are reliable. Valuable programs are being bypassed when participation depends on extra effort. Procedural competence decays when the course ends before practice begins.
Documents commercial GenAI use for clinical tasks by 25% of surveyed UK GPs, with 95% reporting no training, and patients arriving with AI-generated information.
Highlights clinician and patient resistance to full AI outsourcing and demand for trained humans in the loop.
Frames habitual non-participation as System-1 behavior creating a measurable value-action gap that knowledge interventions alone cannot close.
Quantifies median skill decay after short courses and demonstrates retention gains from longitudinal multimodal integration.