AI Literacy Needs Failure Drills, Not Feature Tours
Earlier coverage of AI oversight and its implications for CME providers.
Daily AI use is now paired with explicit fact-checking steps before clinical decisions.
Oncologists this week were not talking about AI as a future experiment; they were naming tools they already use and the checks they still perform before trusting output. The clearest signal came from an oncology-led X discussion on OpenEvidence, Claude, DoxGPT, ChatGPT, NotebookLM, and related tools, paired with educator commentary on why health-system AI and consumer-facing AI require different levels of supervision.
The useful detail in this week’s conversation was not that clinicians are interested in AI. It was that they described actual use: evidence lookup, clinic prep, letters of medical necessity, long-email summaries, note support, and teaching materials. Several clinicians also drew lines around trust: tools can speed synthesis, but outputs still need fact-checking, pushback, and expert review before they touch a patient decision.
That matters because much AI education still risks stopping too early. A demo can show what OpenEvidence, Claude, DoxGPT, or another tool produces. It does not show whether the learner knows when a generated answer is incomplete, outdated, overconfident, poorly sourced, or unsafe for the clinical context. We saw a related pattern in an earlier brief on clinicians supervising multi-agent AI: the hard part is not access to the tool; it is the judgment at the handoff between machine output and clinical responsibility.
The second layer is governance literacy. In Medscape’s AI discussion, educators distinguished internally vetted health-system tools from consumer-facing AI, while also noting that traditional oversight structures do not map neatly onto adaptive clinical decision-support systems: “Those models don't really work here.” For CME teams, that points to a different educational job: not declaring a tool safe or unsafe in the abstract, but teaching clinicians to ask what environment the tool operates in, what evidence it is grounded in, who has vetted it, and what must still be verified by the clinician.
The caveat is straightforward: the liveliest examples this week were oncology-led. But the learning problem is broader. Any specialty using AI for synthesis, documentation, triage, imaging support, or decision support needs clinicians who can test the output rather than simply consume it. The concrete question for CME teams: does the activity assess whether a learner can review, correct, or reject AI output before acting on it?
The opportunity is not to add another AI primer. It is to codify the review behavior clinicians are already describing informally: what to check, what to disregard, when to seek another source, and when the clinician must override the machine. If an AI activity does not test that step, it may be teaching familiarity without teaching responsibility.
Practicing oncologists describe daily use of specific synthesis and note-taking tools while insisting on mandatory human fact-checking and expert review.
"Totally agree. I use OE, DoxGPT (HIPAA compliant), & occasionally ChatGPT. NotebookLM is helpful too. Always fact check & be ready to push back when needed. Highly recommend @RKouzyMD’s resource: practical guides & curated tools for getting started w/ AI"
Educators highlight that health-system AI carries lower risk than consumer-facing tools because of internal guardrails, and call for CME on critical appraisal.