CME Still Designs for Teaching Instead of Retention
Earlier coverage of learning design and its implications for CME providers.
CME teams know outcomes frameworks but rarely name the exact clinician actions education is built to change.
CME planning frequently stops at high-level objectives without naming the specific clinician actions the education is designed to change. The signal here is narrow—a single CME-provider webinar—but the implication for planning templates is concrete.
In a Good CME Practice Group webinar on designing CME for learner action, the discussion moved quickly from outcomes frameworks to the layer many planning templates still underspecify: the two or three concrete tasks, decisions, or steps a learner must take to meet each objective.
That distinction matters because objectives often sound measurable while still leaving the actual behavior vague. “Assess patient candidacy,” for example, can imply several different actions: reviewing severity markers, evaluating prior treatment response, screening for contraindications, ordering tests, communicating with team members, or deciding whether to escalate therapy. The atopic dermatitis example was specialty-specific, but the design problem is portable: if the activity cannot name the action, the assessment team has to infer what to test later.
This connects to an earlier brief on feedback that teaches learners how to improve themselves: feedback, verification, and outcomes measurement all depend on a prior decision about what behavior is worth observing. Backward design is the upstream step that makes those downstream claims less improvised.
The hard part is granularity. The webinar framed action design as dependent on clinical workflow, role, geography, audience mix, format, and length. A 15-minute mobile activity cannot carry the same behavior map as a 60-minute case-based program. A global program cannot quietly assume one Western specialist-center workflow. For CME teams, the question is simple: before content development starts, can the planning document say what the learner should do, under what conditions, and with what evidence of success?
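The planning-document test above can be made concrete as a data structure. The sketch below is purely illustrative (the class names, fields, and the two-to-three-action heuristic are assumptions drawn from the webinar summary, not an established CME schema): each objective carries a small set of learner actions, each specifying the task, the conditions under which it occurs, and what would count as evidence of success.

```python
from dataclasses import dataclass, field


@dataclass
class LearnerAction:
    """One concrete task, decision, or step the learner must take."""
    task: str        # the observable behavior
    conditions: str  # workflow context in which the action occurs
    evidence: str    # what would count as evidence of success


@dataclass
class Objective:
    statement: str
    actions: list[LearnerAction] = field(default_factory=list)

    def is_specified(self) -> bool:
        # Planning heuristic from the webinar: two or three concrete
        # actions per objective; fewer suggests the behavior is still vague.
        return 2 <= len(self.actions) <= 3


# Hypothetical decomposition of the vague objective from the article.
candidacy = Objective(
    statement="Assess patient candidacy for systemic therapy",
    actions=[
        LearnerAction(
            task="Review disease severity markers",
            conditions="At initial consult, before any escalation decision",
            evidence="Severity score documented in the chart",
        ),
        LearnerAction(
            task="Screen for contraindications",
            conditions="Before ordering systemic therapy",
            evidence="Contraindication checklist completed",
        ),
    ],
)
assert candidacy.is_specified()
```

Writing objectives this way forces the gap into view before content development starts: an objective whose `actions` list is empty is exactly the underspecified case the webinar describes, and the `evidence` field is what the assessment team would otherwise have to infer later.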
A separate medical-affairs conversation this week pointed in the same direction from a different angle: HCPs want faster ways to reach specific answers, and conversational interfaces are being used to reduce search time and support follow-up questions. The MAPS podcast example comes from pharma medical communications, not CME, so it should not be treated as a direct provider mandate. The primary source here is CME-provider webinar content; treat it as best-practice guidance rather than evidence of broad clinician demand.
The webinar provides step-by-step guidance: start with outcomes frameworks, work backwards to objectives, then specify two to three concrete, measurable actions while incorporating workflow and audience factors.
Reports 77% HCP use of conversational AI search, a 200% engagement lift, and the need for accuracy/compliance monitoring at scale.