Educator Milestones Turn Vague Faculty Development Into Measurable Progression
Earlier coverage of learning design and its implications for CME providers.
Medical training rarely builds longitudinal critical-appraisal skills, leaving clinicians under-equipped to evaluate the studies they apply daily; CME can supply that repeated practice on real evidence.
Clinicians report that medical training never taught them the critical-appraisal skills they now need every day to interpret trials, assess surrogate endpoints, and counsel patients on marginal benefits. The examples surfaced in oncology, but the gap appears portable across specialties.
The strongest signal came from a live Sensible Medicine discussion in which clinicians moved from specific trial-design tradeoffs to the broader problem of how physicians learn to judge evidence. In one oncology example, Vinay Prasad framed dosing and timing uncertainty with the line, "It's like that Paracelsus quote, it's the dose that maketh the poison." The point was not the drug itself. It was that clinicians are asked to act on studies that may answer a narrower, messier, or more commercially convenient question than the one at the bedside.
The same discussion, posted in longer form on YouTube, connected that bedside problem to training. Prasad described early medical school as heavy on rote memorization and light on epidemiology, statistics, and appraisal. He also described residency critical appraisal as an “afterthought” carried by a small number of motivated faculty. That is a useful warning for CME teams: many clinicians are not arriving at continuing education with a stable appraisal foundation that only needs updating.
A separate surgical education conversation made the same design lesson from another angle: if a skill is not reinforced, it fades. In the Behind the Knife episode on osteopathic education in surgery, the discussion emphasized structured curricula, skills labs, OSCE-style testing, and the risk of letting valued capabilities slide when they are not deliberately practiced. CME providers should hear that as a broader instructional design cue. Critical appraisal cannot be solved by a single journal-club module appended to a disease-state update.
This also extends the problem we saw in an earlier brief on LLM tools reaching clinics before clinicians had evaluation frameworks. AI may make verification more urgent, but this week’s conversation points to the older substrate: clinicians need durable habits for asking what a study proves, what it does not prove, whose interests shaped the evidence, and how much confidence belongs in a patient conversation.
The implication is concrete. CME teams should design appraisal as a longitudinal capability: repeated paper-reading practice, current clinical examples, surrogate-endpoint interpretation, COI literacy, and exercises that force clinicians to explain uncertainty in plain language.
Audit your needs assessments and outcomes tools for an uncomfortable assumption: do they test familiarity with the latest data, or the ability to judge the data? If the assessment never asks learners to critique a study, detect hype, or translate uncertainty for a patient, it may be measuring content exposure while missing the skill clinicians said they still need.
Vinay Prasad highlights the emphasis on rote memorization and the lack of epidemiology and statistics training in US medical schools.
"Sensible medicine Podcast/ Q&A Wash U St Louis - LIVE Prasad & Picarello" https://t.co/lBKClnmJL9
Educators note that residency and fellowship treat critical appraisal as an afterthought taught by a few motivated faculty.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.
Community physicians are described as more skeptical of the literature because they are directly accountable for outcomes, yet they still struggle to identify biased or low-value research.