AI Assistance Is Quietly Eroding Core Clinical Skills
Earlier coverage of AI oversight and its implications for CME providers.
Clinicians now need explicit disclosure language and workflow checkpoints when using AI. Cost-effectiveness measurement in CME remains early-stage baseline work.
Clinician conversations this week pointed to a harder requirement for AI education: clinicians need to be able to explain, check, and justify AI use in the encounter. The AI examples came from radiology, oncology, and general medical discussion, but the implication for providers is broader: CME has to make trust-preserving behaviors visible enough to teach and measure.
The sharpest signal came from a JAMA AI discussion of a survey of about 1,300 U.S. adults: when identical physician ads disclosed AI use, perceived competence, trustworthiness, empathy, and willingness to book an appointment dropped. The discussion did not argue for hiding AI use. It argued that clinicians need better language for explaining how AI supports care while making clear that the clinician remains responsible (JAMA+ AI Conversations).
That moves AI education out of the “tool demo” lane. A clinician who can operate an AI-enabled documentation or imaging tool may still be unprepared for the patient-facing moment: Why was AI used? What did it contribute? What did the clinician verify? What happens if the AI finding conflicts with the clinician’s judgment?
Workflow fit is the other half of the same problem. In a radiology leadership discussion, one clinician described impression generators as useful but not necessarily time-saving: “So far, we've found tools that our radiologists say, they may not make me much faster, if at all.” The more important benefit was lower cognitive load, paired with a warning that tools can create extra review work if they sit outside the normal workflow (AJR Radiology Trailblazers). A separate clinical AI conversation made a similar point from the other direction: AI can help with complex uncertainty, but in routine cases it can add alert fatigue, automation bias, and bloat (Healthcare Unfiltered).
We saw a related pattern in an earlier brief on AI adoption barriers: the problem is rarely awareness alone. This week’s update is more specific. CME teams should build AI cases around disclosure language, verification checkpoints, and “do not use AI here” decisions, then assess whether learners can actually perform those behaviors.
The second signal was narrower and came from provider-owned educational content, so it should be treated as conceptual framing rather than a definitive benchmark. Still, the finding is useful for CME operators: a 10-year landscape review found that cost concepts are barely visible in accredited CME outcomes literature. Out of roughly 1,000 screened records, 32 mentioned cost in some way, and 13 reported an actual cost variable such as cost per participant, cost per unit of educational improvement, or modeled cost savings (JCEHP Emerging Best Practices in CPD).
The useful distinction is between the cost of the educational activity itself and the effect of that education on downstream healthcare costs. The latter is where the literature appears especially thin. That matters because health systems and accreditors may ask CME teams to speak in the language of value, but most providers do not yet have a stable internal method for doing so.
The implication is not to bolt a health economics model onto every activity. It is to start with a baseline: what did this activity cost to plan, deliver, and measure; what changed; and what denominator makes the comparison honest? For high-volume or system-partnered programs, even a simple cost-per-learner and cost-per-primary-outcome-change field would make future value conversations less anecdotal.
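As a concrete sketch of that baseline, the two suggested fields reduce to simple ratios. The figures below are hypothetical placeholders for illustration, not data from the review:

```python
# Illustrative sketch (hypothetical figures): a minimal baseline record for one
# CME activity, using the two fields suggested above.

def cost_baseline(total_cost, learners, outcome_changes):
    """Compute cost-per-learner and cost-per-primary-outcome-change.

    total_cost: what the activity cost to plan, deliver, and measure.
    learners: number of participants (denominator for cost per learner).
    outcome_changes: number of learners showing the primary outcome change.
    """
    if learners <= 0 or outcome_changes <= 0:
        raise ValueError("denominators must be positive")
    return {
        "cost_per_learner": total_cost / learners,
        "cost_per_outcome_change": total_cost / outcome_changes,
    }

# Hypothetical activity: $24,000 total cost, 300 learners,
# 120 of whom demonstrated the primary outcome change.
baseline = cost_baseline(24_000, 300, 120)
print(baseline)  # cost_per_learner: 80.0, cost_per_outcome_change: 200.0
```

The point is not the arithmetic but the record-keeping: capturing these two numbers per activity is enough to make later value conversations comparative rather than anecdotal.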
If an AI activity ends with tool proficiency, it is probably stopping too early. If an outcomes report ends with confidence and intent, it may not be ready for value conversations. The week’s two signals point to the same operating discipline: teach the behavior clinicians must perform in public, and measure enough of the cost to know what the education actually required.
Documents patient-trust drop on AI disclosure and clinician view that AI struggles with competing comorbidities.
Shows radiologists and oncologists gaining modest efficiency from impression generators yet needing super-user training.
Earlier coverage of communication skills and its implications for CME providers.
Highlights the workflow-fit requirement and communication strategies for maintaining patient trust.
Captures real-time clinician discussion of oversight needs and transparency limits.
"I’m torn on the AI generated trial graphics. I guess good to convey a simple message (which almost is never the case), but they often miss study nuances & sometimes the interpretation is just plain wrong."
Landscape review of 10 years of literature showing near-total absence of cost data in accredited CME records.