Clinicians Are Naming the Real Barriers to AI Adoption in Practice
Earlier coverage of AI oversight and its implications for CME providers.
Clinician signals show demand for CME that rehearses AI prompting, judgment, and intraoperative teaching workflows rather than broad awareness.
Clinicians want training they can use inside the encounter, the operating room, and the feedback conversation. Examples from oncology, primary care, and AUA25 urology education show that CME limited to awareness misses the operational work clinicians actually need to do.
The AI conversation this week was less about whether clinicians should use AI and more about how they should work with it without surrendering judgment.
One clinician-educator discussion framed ambient scribes as useful because they can get technology out of the way: less note-taking, more eye contact, more attention to the patient story. The same conversation also raised the harder issue for education teams: clinicians need to know how to prompt models with the right context, judge whether outputs fit a local population, and recognize when bias or workflow barriers make the tool less useful than it appears (AI and Healthcare).
That caution showed up in oncology. A randomized trial summary shared by Fumiko Ladd Chino described AI-triggered emails alerting oncologists to genomically matched trials; the emails did not improve trial enrollment. Her takeaway was blunt: “It's important to publish negative trials.” (Fumiko Ladd Chino). Arturo Loaiza-Bonilla’s related thread pointed toward “collaborative intelligence” and Moravec’s Paradox rather than full automation (Arturo Loaiza-Bonilla).
For CME providers, this is a curriculum-design problem. Generic AI literacy can explain model types and risks, but it does not teach a clinician what to do when an ambient note is mostly right, a trial-matching alert is operationally weak, or a model’s answer looks plausible but may not transfer across settings. This extends an earlier brief on clinicians building their own AI tools while CME still teaches literacy: the demand is now for repeatable human-AI routines, not just comfort with the technology.
The implication: AI education should include short, specialty-specific rehearsals where learners write prompts, inspect outputs, identify missing context, and decide what remains a human responsibility.
At AUA25, the education signal was different but related: clinicians were treating teaching itself as something that needs structure, tools, and documentation.
One urologist highlighted the Briefing, Intraoperative Teaching, and Debriefing (BID) model and wrote, “Briefing, Intraop Teaching and Debriefing for all surgeries should be the norm during training!” (Chandru Sundaram). Another AUA25 thread praised an “Educating the Educator” course, pointed to the BID model for operative cases, and emphasized formative and summative feedback, letters of recommendation, and documenting teaching, education, and mentorship as academic accomplishments (Seth Cohen).
This is not just a urology note. Urology is the source context, but the provider implication is broader for procedural specialties: faculty development works better when it maps to the actual teaching sequence. Before the case, define the learner’s goal. During the case, use focused teaching scripts. After the case, debrief with reflection, reinforcement, correction, and documentation.
For CME teams, the opportunity is not another lecture on mentorship. It is a portable educator-skills package: BID scripts, feedback rubrics, sample debrief language, and academic-portfolio templates that departments can reuse. The question is whether your educator programs help faculty teach in the moment where teaching happens, or only describe good teaching after the fact.
If your catalog still separates “AI literacy” from clinical workflow, or “faculty development” from the operative teaching moment, this week’s signal argues for a tighter design standard. Clinicians are asking less for explanation and more for guided practice: show me the tool, the prompt, the feedback exchange, the debrief, and the judgment call I need to make when the real case is moving.
A YouTube discussion among oncology and primary-care educators provides contextual framing of ambient scribes and trial-matching results.
Verified practicing oncologist post emphasizes collaborative intelligence and warns against over-reliance on pattern-matching tools.
"Emails to academic medical oncologists with info about genomically matched therapeutic #clinicaltrials for pts with tumor progression based on AI interpretation of imaging reports did NOT ⬆️ trial enrollment. It's important to publish negative trials. AI cannot solve EVERYthing."
Second verified clinician thread reinforces demand for training on nuanced judgment and data-bias awareness in AI-assisted care.
"Thrilled! Our @NEJM_AI perspective “Harnessing Moravec’s Paradox in Health Care” - is now live. Proud to publish in @NEJM’s AI journal and grateful for brilliant co-author & friend Scott Penberthy (@scottpenberthy) of @GoogleCloud. Onward to truly collaborative intelligence! 🔗 #NEJMAI #AIinHealthcare #CollaborativeIntelligence #MoravecsParadox #Oncology"
Clinician thread details the AUA25 Educating the Educator course and advocates making BID the norm for every surgical case.
"Educating urologists by leaders in education #AUA25. Briefing, Intraop Teaching and Debriefing for all surgeries should be the norm during training! @CoYoUroMD @khkraft @TashaPosidPhD @GMBadalato @lindsayahampson"
Second thread reinforces the need for formative and summative feedback frameworks and for treating LOR writing and teaching documentation as academic accomplishments.
"2/3 Crystallized descriptions of optimizing how to #teach, with understandings of #formative & #summative #feedback by @khkraft & @TashaPosidPhD. Consider the “BID Model” as an excellent way to educate around/during operative cases. #AUA25"
Debate paper discussion advocates matching self-led, facilitator-led, or hybrid debriefing to learner level, modality, and safety needs.