High-Stakes Training Needs Tighter Learning Moves
Observable faculty behaviors—admitting uncertainty, inviting dissent, and giving candid feedback—now define effective psychological-safety training for CME.
Faculty who model vulnerability—admitting uncertainty, setting clear expectations, and inviting quieter voices—create the psychological safety trainees need to speak up. This week's faculty-development podcasts and oncology AI examples show that CME must shift from compliance modules to measurable behavior change across specialties.
Educators discussing mistreatment in training described a familiar problem: nationally, reported mistreatment rates remain in the high 30s to 40 percent, and many episodes still arise from hierarchy, time pressure, poor communication, or faculty who do not recognize the effect of a passing comment. In one MedEd Thread episode, the actionable point was not simply "tell learners to report." It was that attendings, deans, clerkship directors, and institutional structures all have to make reporting and speaking up credible.
A second faculty-development discussion made the behaviors more concrete. Faculty described setting expectations before questioning learners, giving peer-to-peer discussion time before public answers, admitting when the attending does not know something, and using candid feedback without turning it into shame. In that Faculty Feed conversation, psychological safety was framed less as a climate slogan than as a sequence of daily moves by people with power.
That matters for CME because many mistreatment and professionalism modules still reward completion more than changed behavior. A better faculty activity would ask learners to practice the first 90 seconds of a teaching encounter: explain why questions are coming, say that wrong answers are part of learning, invite a quieter learner in without calling them out, and model how to say “I don’t know—let’s look it up.” We saw a related pattern in an earlier brief on ability-based progression: if programs want to evaluate reasoning rather than performance theater, learners need a room safe enough to show unfinished reasoning.
The concrete question for CME teams: can your faculty-development activity point to an observable behavior a trainee would notice the next day?
The AI signal was thinner and oncology-led, with vendor demonstrations plus an independent clinician thread. Still, the learning need was clear: clinicians are not only asking how to use an AI tool; they are asking how to judge what it returns and explain its limits to a patient.
In a Medscape demonstration, a thoracic oncologist used AI to review molecular testing, adjuvant treatment options, chemotherapy choices, and a patient-facing side-effect sheet for early-stage lung cancer, while noting that searches included references and that the tool surfaced where data did not yet support a use case (Medscape video). That is a useful workflow example, but it is still provider-owned content and should not be treated as broad clinician consensus.
The sharper educational implication came from an oncology thread about AI’s epistemic limits: models can summarize and retrieve, but they cannot guarantee that trial populations, historical data, or inferred recommendations fit the person in front of the clinician. The thread emphasized curation, transparent assumptions, and a bedside explanation of what the model adds and where it could be wrong (X thread).
For CME providers, that means AI education should not stop at prompting or feature walkthroughs. A stronger activity would require the learner to make an initial judgment, compare it with an AI output, identify the source-quality problem or patient-fit problem, and then practice saying the uncertainty plainly. The concrete question: does the activity teach clinicians when to override the tool and how to tell the patient why?
The common thread this week is not psychological safety versus AI. It is whether CME is still over-investing in information transfer when the harder learning problem is behavior under pressure. A faculty member who knows the mistreatment policy can still shut down a learner. A clinician who can operate an AI tool can still overstate what it knows. The next test for CME is whether activities help clinicians rehearse the moment when hierarchy, uncertainty, or a polished answer makes the wrong behavior feel easy.
Sources this week:

Educators detail leadership modeling of vulnerability and growth-mindset language as essential for psychological safety.

Trainees and faculty link protected peer-discussion time and structures such as Expect Respect committees to reduced mistreatment.

Oncologist frames AI explicitly as a librarian assistant requiring source curation and limitation disclosure.

Clinicians demonstrate AI use for early-stage lung cancer decisions and patient-education sheets while stressing unknowns.

Independent clinician thread reinforces bedside humility and trade-off communication when using AI outputs.
"do. Finally, return to the bedside voice. Patients do not need Borges; they need clarity. “Here is what the best‑available evidence suggests for someone like you. Here is what the model adds, and here is where it could be wrong. Here are the trade‑offs we can choose together.” In the Library of Clinical Babel, compassion is our index, humility our cataloging rule. The miracle is not that AI might find the perfect page; it’s that we can still read it with patients—slowly, aloud, in their language—so that choices remain human. As targeted therapies and immunotherapy move earlier, as real‑world evidence thickens, as LMIC datasets finally take their rightful shelf space, our library grows both richer and riskier. The task is not to silence the stacks, but to practice better librarianship: selective, transparent, equity‑minded. In Borges’s world, meaning was an ethical labor. In ours, it still is."
Show captured excerptCollapse excerpt