Communication Has Entered the Skills Lab
The clearest signal this week is narrow but useful: some CME voices are pressing past completion-era metrics and asking a sharper question about what counts as evidence that education worked. The evidence is still early and insider-weighted, but it points CME teams toward implementation, follow-up, and feedback loops rather than treating immediate post-activity measures as the whole story.
This week’s outcomes conversation was more specific than the usual call for better measurement. In an Alliance conference preview, speakers explicitly questioned whether the standard knowledge/competence/confidence trio is still enough and pointed instead to QI-linked measurement and feedback loops. A separate education research discussion on ultrasound training retention reinforced the same limitation from another angle: a single post-training success says little if skills decay and no follow-up is built in.
This is not broad market consensus. The sources are educator-heavy, and one is conference-owned rather than evidence of independent clinician demand at scale. Still, for CME operators, the pressure is practical: completion-era endpoints can no longer stand in for the full value story by themselves. That extends a provider-facing thread we noted in an earlier brief on why lecture-style delivery had to demonstrate value beyond attendance and delivery counts.
The operator question is straightforward: where are you still measuring immediate reaction to education when buyers may want evidence of implementation afterward? A useful next step is to redesign a small number of activities around one visible post-activity behavior, workflow checkpoint, or QI touchpoint rather than trying to rebuild every outcomes plan at once.
A second, specialty-led signal came from oncology and pediatric serious-illness communication: clinicians were not just discussing empathy in general, but the specific work of explaining what is unknown. In one serious-illness communication discussion, speakers emphasized naming uncertainty honestly, understanding what patients and families think is happening, and preserving trust without false certainty. A pediatric oncology communication talk made the educational implication even clearer: before telling families what they know, clinicians need language and structure for finding out what families already understand.
This is not a broad communication reset across specialties. The evidence is concentrated in oncology and pediatric serious-illness care, and corroboration is limited. But the design implication travels beyond those examples: if uncertainty is a recurring part of care, communication education cannot stop at general advice to be compassionate. It has to teach the phrases, sequencing, and checks for understanding that clinicians can actually use.
For CME teams, that means building scenarios where prognosis, treatment effect, or next steps remain unsettled. If a communication activity only tests recall of frameworks and never asks learners to speak through ambiguity, it is probably missing the skill this week’s sources were actually pointing to.
Earlier coverage of outcomes planning and its implications for CME providers.
Earlier coverage of communication skills and their implications for CME providers.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.
Request a demo