Clinician Learning Brief

Why Better CME Starts With the Faculty Brief

Topics: Learning design, AI oversight
Coverage: 2025-09-01 to 2025-09-07

Abstract

This week’s clearest signal is about design: stronger CME may depend less on new platforms than on briefing faculty to design for learner action.

Key Takeaways

  • An emerging educator-led signal suggests CME quality often breaks down before delivery, when faculty plan around slides instead of learner action.
  • A secondary AI signal is narrower but useful: educational value may come less from content generation than from bounded coaching, feedback, and supervised practice.
  • This week’s evidence is mostly educator- and society-adjacent rather than broad clinician polling, so the implications are best treated as design guidance, not settled market consensus.

Better CME may start with the faculty brief, not the platform. This week’s evidence is educator-facing rather than a read on broad clinician demand, but it points toward backward design, in-activity reflection, and tighter alignment between objectives, participation, and what learners should be able to do afterward.

Faculty development is becoming a quality lever

Across this week’s educator conversations, the critique was not just that some talks rely too heavily on slides. It was that session design still too often begins with the presenter’s deck, habits, or performance style rather than with what learners need to understand, decide, or do. A faculty-development discussion on beginning with the end in mind made that contrast explicit. A simulation pedagogy conversation on reflection during and after participation reinforced the point: learning design should account for processing inside the activity, not only in the wrap-up.

For CME providers, that shifts attention earlier. If speaker onboarding still starts with title, timing, and slide count, many downstream fixes will stay cosmetic. Better interaction design and stronger outcomes language help, but they do not solve the core problem if faculty are never asked to define the intended learner action first. This builds on our earlier brief on why the lecture format alone no longer carries the learning load, but the signal here starts one step earlier: in the design brief itself.

The practical question for CME teams is simple: does your faculty brief ask speakers what should change for the learner, or mainly what they plan to cover?

AI is being positioned as coaching, not substitution

The week’s AI discussion was narrower than the faculty-design theme, but it was distinct from the recent run of governance and disclosure coverage. Across a medical-education discussion on lower- and higher-stakes AI use, a training conversation about feedback and tutor-style support, and a society-adjacent platform discussion focused on guided development and retrieval, the throughline was not AI-generated content. It was where AI can give feedback, scaffold practice, or personalize support without replacing the reasoning learners still need to show on their own.

That matters for CME design because the risk is not only inaccuracy. It is premature outsourcing of competence. These examples do not show broad adoption, and some come from specialty or society contexts, so the safest read is a design-principle signal: if you add AI to education, define where it can support practice and where independent performance still has to remain visible.

The practical question for CME teams is whether each AI-enabled element makes learner reasoning easier to observe, or easier to bypass.

What CME Providers Should Do Now

  • Rewrite faculty brief templates so the first prompts ask what learners should decide, do, or change after the session—not what slides will be shown.
  • Add one required in-activity reflection or decision moment to selected live and online formats, then review whether it improves alignment between objectives and learner behavior.
  • Set explicit rules for AI-enabled learning experiences: where AI may coach or give feedback, and where learners must still perform reasoning without assistance.

Turn learner questions into outcomes data

ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.

Request a demo