Radiation oncology and radiology clinicians are describing exactly why AI tools stall in practice: legacy IT systems, privacy fears, liability uncertainty, and unclear oversight responsibilities. Educator conversations on simulation point to the same gap: training must rehearse real interfaces and learner emotion rather than assume immersion alone drives change. For CME providers, the implication is cross-specialty, and the shift is from AI literacy to oversight and integration rehearsal.
A radiation oncology–anchored AI and Healthcare discussion this week was not mainly about whether AI can help. It was about why useful tools stall: legacy IT, privacy anxiety, vendor-versus-hospital data environments, model drift, liability, and clinician trust. Adoption was framed as a governance and workflow problem as much as a model-performance problem: clinicians need to know where a tool works, where it fails, who supervises it, and what happens when the output changes care.
That extends an earlier brief on clinicians building their own AI tools. The difference now is that the learning need is less “what is AI?” and more “how do I safely use, monitor, and escalate around a specific tool in my workflow?” Oncology and radiology examples were prominent, but the barriers named here are not specialty-specific.
For CME teams, this argues against generic AI primers as the default. Programs should rehearse validation questions, bias checks, human-in-the-loop handoffs, dashboard interpretation, and workflow load: does the tool remove work, move work, or add work? The sharper design question is what a learner should do when a model underperforms for a subgroup or slows the team down.
Simulation educators raised a neighboring problem: making training look more like practice is not the same as proving that it changes practice. A Simulcast journal club discussion described a low-cost simulated EMR interface that learners found more realistic, while also noting that its learning impact was hard to quantify. That caveat matters for CME providers investing in immersive formats: realism is only useful if teams define what transfer should look like before the scenario runs.
A second educator-led simulation signal focused on faculty behavior. In a Medical Education podcast conversation, facilitators were described as varying in how they respond when learners show negative emotion. Some let stress run as part of the learning experience; others adapt the scenario to preserve psychological safety and performance.
These are simulation- and emergency-medicine-adjacent signals, not broad clinician consensus. Still, the implication is useful for any procedural or high-stakes CME: interface realism and emotional calibration should be designed together. CME teams should specify when faculty should let difficulty continue, when to pause, and what evidence would show that the simulation improved behavior rather than only immersion.
The week’s narrow signal is not that CME needs more technology content. It is that education built around tools must teach the operating conditions: data environment, model limits, handoffs, workload, emotional calibration, and measurement. Providers that can turn those conditions into rehearsal—not lecture—will be closer to the work clinicians are trying to do.
Signals behind this brief:
Practicing radiation oncology resident describes stalled AI adoption due to legacy IT systems, privacy concerns, and unclear liability.
Educator discussion on bias, drift, and the need for transparent performance monitoring in clinical AI tools.
Educator voices note that simulated EMR interfaces raise realism scores, yet outcome metrics linking immersion to behavior change are still missing.
Facilitators are described as inconsistent in adapting scenarios when negative learner emotion appears, risking psychological safety.