Educators Want AI Workshops That Fix Their Actual Workflow Friction
Earlier coverage of AI oversight and its implications for CME providers.
Clinicians want an explicit role in defining AI tools and feedback systems rather than accepting top-down rollouts.
Clinicians are asking for a role in setting the rules for the systems that shape their work: AI tools at the bedside and feedback systems in training. The AI examples this week were oncology- and radiology-heavy, but the underlying ask was broader: do not hand clinicians tools or assessments without letting them define what good looks like.
In a Cleveland Clinic Cancer Advances discussion, the clinician framing was not simply that AI is coming. It was that AI will alter clinical relationships, documentation, and data work — “It's going to change how we interact with our patients,” as one speaker put it in a conversation about AI in oncology and clinical workflow (source).
The sharper point was about participation. Clinicians described the EMR as a warning: a system largely defined outside the clinical encounter that later had to be made workable by the people using it. Their argument for AI was to start earlier, with clinicians helping define how tools fit into inpatient care, outpatient care, ICU work, medical education, safety, imaging, pathology, and documentation.
That aligns with the regulatory-adjacent material in the FDA REdI session, where AI use was discussed in terms of context of use, model influence, decision consequence, transparency, and ongoing performance monitoring (source). This is not the same as broad clinician consensus, but it gives CME teams a useful structure: clinicians do not only need to know what AI can do; they need to practice deciding when model output is credible enough to affect a clinical decision, when it needs review, and what failure would look like.
We saw a related pattern in an earlier brief on AI failure-mode training. This week’s version moves from recognizing AI limits to asking who gets to write the operating rules. CME teams should ask whether their AI education gives clinicians rehearsal space for oversight decisions, or only explains the technology after someone else has chosen it.
The medical education signal was smaller but pointed in the same direction. A clinician thread questioned whether residency programs are assessing their teaching process or merely waiting months and judging competence after the fact (source). The critique was not about adding another evaluation form. It was about whether the system captures whether teaching produced usable knowledge for patient care.
A Faculty Feed episode made the operational issue more concrete: numeric scores in health professions education can skew generous, while narrative feedback can identify specific skills to strengthen or remediate (source). The discussion also separated formative feedback, which learners can still act on, from summative judgment, which often arrives too late to change performance.
For CME providers, the implication is not “make more faculty-development content.” It is to design faculty education around observable behaviors: how to turn a vague comment into a specific narrative, how to distinguish description from assessment, how to calibrate feedback across raters, and how to prevent evaluation systems from rewarding politeness over usefulness.
The link to the AI conversation is straightforward. In both cases, clinicians are pushing back against systems that affect their work but were built with little input from their daily reality. CME teams should treat feedback training as a systems skill, not a soft add-on.
The week’s common thread is that implementation cannot start after the system is already built. Whether the system is an ambient scribe, a risk model, or a residency evaluation process, clinicians are asking to help define the rules before they are expected to trust the output.
Clinicians stress human-in-the-loop oversight and risk-based frameworks for AI in documentation and diagnostics.
Emphasis on user-friendly ambient AI scribes that reduce burden without adding friction across multiple specialties.
Discussion of how gossip undermines culture and the need for leadership to model solution-oriented feedback.
"This is phenomenal & very pertinent to #MedEd. Many residency programs fail to assess their educating process: “Did we spend time imparting actionable knowledge residents can use in patient care? Or did we just say: ‘Here you go’ & after 6-9 mo judge if they’re competent?” @ACGME"
Residency assessment often fails to distinguish teaching that imparts actionable knowledge from mere competency box-ticking.