Motivation Up, Knowledge Flat: What the Latest E-Learning Trial Means for CME Design
Earlier coverage of learning design and its implications for CME providers.
Urology-led M&M redesign replaces punitive case review with committee curation, trained moderators, and tracked QI actions; similar measurable-practice gains appear in oral-board simulators.
Urology programs are replacing punitive M&M conferences with committee-curated cases, coached moderators, and documented follow-up actions. The redesign turns a required high-attendance meeting into measurable peer learning, and a parallel signal appears in oral-board simulators that now deliver scalable deliberate practice.
For earlier context, see Five Design Rules Are Replacing Time-Based CME With Ability-Based Progression.
In the AUANews Inside Tract episode, clinicians described a UCSF urology M&M redesign that moved away from chief-resident case selection, long chronological recaps, and public embarrassment. The replacement uses open reporting, committee-selected high-value cases, coached moderators, nonjudgmental norms, assigned stakeholders, and tracked follow-up. (source)
For CME providers the core opportunity is converting mandated attendance into a reproducible system that protects psychological safety while making next steps visible. This directly extends the earlier point that formats must define observable actions before they can deliver safer practice.
The model originated in urology with replication at three institutions; independent corroboration remains limited. Still, the levers—moderator training, case-selection rubrics, reporting templates, and post-session QI tracking—are portable to any procedural field.
Behind the Knife’s oral-board simulator addresses the classic access barrier: live mock orals are valuable yet limited by mentor availability, scheduling, and inconsistent feedback. The platform supplies real-time conversational practice, coaching modes, proctor recording, performance analysis, and progress tracking. (source)
Platform-reported metrics (more than 10,000 exams, a 99% recommendation rate, 85% four- or five-star ratings) are useful but must be read as self-reported rather than independently verified evidence. For providers the practical questions are ones of format economics: when a live course should be paired with simulator practice, what faculty-oversight standards apply, and how remediation can use repeated performance curves rather than single-session attendance.
High-stakes clinical education is being redesigned around repeatable practice, safer discussion, and visible follow-through. Legacy formats that function only as events miss the chance to operate as systems. M&M should not end when the room empties; mock-oral preparation should not depend solely on mentor availability. The next versions of these formats will be judged by whether the design captured what changed afterward.