Clinicians and Patients Are Already Using GenAI Without Training
Earlier coverage of AI oversight and its implications for CME providers.
Clinicians are shifting from using AI tools to supervising them; CME design must move from tool orientation to measurable handoff and verification drills.
Clinicians are describing AI less as a tool to try and more as a set of agents they must direct, check, and overrule. The signal is narrow and oncology-led, drawn from physician social posts and society sources, but the provider implication is broader: CME design has to prove that learners can manage the handoff between content, tool, and decision.
The clearest AI signal this week was not another call for general literacy. It was the move from “use the model” to “supervise the model.” In an oncology AI thread, Stephen V Liu, MD summarized the role change this way: “physicians move from users to supervisors.”
A separate clinician-facing prompt example showed what that supervision can look like at the task level: asking a model to explain CTCAE toxicity grades, include when treatment should be held, dose reduced, or discontinued, and then taking the same prompt to another LLM for comparison (source). That is not autonomous decision-making. It is role assignment plus verification.
For CME providers, the design gap is straightforward. Many AI modules still stop at tool orientation: what the model can do, what the risks are, and how to write a better prompt. The stronger learning objective is behavioral: can the clinician define the model’s role, test the answer against a second source or standard, recognize ambiguity, and decide when escalation is needed?
That extends an earlier brief on testing what clinicians do when AI output is wrong, but this week’s conversation makes the handoff more concrete. The question for CME teams is whether an AI activity can be scored on the learner’s verification steps, not only on whether the learner knows the tool exists.
The second signal came mainly from a society-sponsored medical education source, so it should not be treated as broad clinician consensus. Still, the point is useful for CME operations: multimedia is not a virtue by itself.
In an ASH medical educators segment, speakers framed format choice around goal and learner need, not novelty. Visual learners may need something other than a podcast; complex pathways may be better served by an infographic than by audio alone (source).
That matters because many CME catalogs still treat format as a production decision made after the content plan is set. The better sequence is reversed: first define what the learner must recognize, compare, rehearse, or recall; then choose the medium that makes that task easier. A webinar, infographic, video, podcast, or interactive summary should each have a reason to exist.
The implication is not “make everything multimodal.” It is to stop using one format as the default answer. For each activity, CME teams should be able to explain why the chosen medium fits the cognitive work the learner is being asked to do.
Tool familiarity and format variety are weak proxies for learning quality. This week’s signal points to a stricter test: can the activity show that clinicians can supervise a tool, verify a recommendation, and receive information in a medium that fits the task? If not, the activity may be teaching awareness when the market conversation has already moved toward supervised use.
Demonstrates concrete prompt-engineering techniques and role-assignment language clinicians already use when treating AI as a support layer.
Thread captures multiple clinicians discussing hallucination risks and mandatory verification steps in multi-agent workflows.
"Moving from usability to sustainability with articulation intelligence. Dr. @PrelajArsela with a brilliant breakdown of how AI moves from large language models to agents - physicians move from users to supervisors. And integration with drug development and validation critical. #ITCD2026"
Conference session articulates the rule: match format to pathway complexity and learner preference, with concrete examples of when podcasts fail visual learners.