CME Still Designs for Teaching Instead of Retention
Earlier coverage of outcomes planning and its implications for CME providers.
A respiratory-therapy scale project shows CME teams how to treat scholarly practice as measurable behavior rather than assuming publication or participation counts suffice.
Respiratory-therapy educators are developing validated scales to measure scholarly practice—curiosity, critical appraisal, and reflection—as trackable CPD outcomes instead of relying on activity counts or publication tallies. The evidence comes from a single journal-affiliated podcast episode, so the signal is emerging; the methods, however, are directly relevant to CME teams trying to move beyond participation counts.
In a JCEHP Emerging Best Practices in CPD episode, the discussion starts with respiratory therapy but quickly widens into a general CPD measurement problem. Scholarly practice is not equated with publishing papers. It is framed as curiosity, critical thinking, deliberate engagement with knowledge, reflection, evidence use, and the behaviors that help clinicians keep improving.
That matters because many CME outcomes models still make the easiest things most visible: registrations, completions, satisfaction, confidence, maybe intent to change. Those are useful, but they do not tell a provider whether a learner is becoming more reflective, more capable of appraising evidence, or more likely to use knowledge in daily practice.
The useful part of the episode is methodological. The authors describe a structured scale-development process: define the construct, review the literature, interview practitioners, generate items in the language of the field, test with experts and users, pilot, and use exploratory factor analysis to reduce redundancy. The goal was not a long survey. It was a usable instrument that could capture dimensions of scholarly practice without asking the same question 50 ways.
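The redundancy-reduction step can be made concrete with a toy sketch. This is not the authors' instrument or their factor-analysis procedure: it uses a simpler pairwise-correlation screen in place of full exploratory factor analysis, and the item data, sample size, and 0.85 cutoff are all assumptions for illustration. The idea is the same, though: if two pilot items draw near-identical response patterns, one of them is probably redundant.

```python
import numpy as np

def flag_redundant_items(responses, threshold=0.85):
    """Flag pairs of scale items whose pilot responses correlate so
    highly that they likely measure the same thing.

    responses: (n_respondents, n_items) array of Likert ratings.
    Returns a list of (item_i, item_j, r) tuples above the threshold.
    """
    corr = np.corrcoef(responses, rowvar=False)  # inter-item correlations
    n_items = corr.shape[1]
    redundant = []
    for i in range(n_items):
        for j in range(i + 1, n_items):
            if abs(corr[i, j]) >= threshold:
                redundant.append((i, j, round(float(corr[i, j]), 2)))
    return redundant

# Toy pilot data: items 0 and 1 are near-duplicates by construction;
# item 2 is unrelated. 40 hypothetical respondents, 1-5 Likert ratings.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(40, 1)).astype(float)
item0 = base + rng.normal(0, 0.2, size=(40, 1))
item1 = base + rng.normal(0, 0.2, size=(40, 1))
item2 = rng.integers(1, 6, size=(40, 1)).astype(float)
pilot = np.hstack([item0, item1, item2])

pairs = flag_redundant_items(pilot)  # flags the (0, 1) near-duplicate pair
```

A real development effort would follow the factor-analytic route the authors describe, but even this crude screen shows why a pilot stage matters: it is the only way to discover which questions are secretly the same question.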
For CME providers, the lesson is not to copy a respiratory-therapy tool wholesale. It is to stop treating reflection as a free-text box added after the real activity. A short, tested reflection scale could help longitudinal programs track whether learners are strengthening appraisal habits, seeking evidence, identifying support needs, and connecting learning to practice decisions. We saw a related concern in a March brief on clinicians asking for better appraisal training; this week’s signal adds the measurement layer.
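Tracking a short scale over time requires very little machinery. As a minimal sketch, assuming a hypothetical six-item instrument with two subscales (the item groupings, scores, and subscale names below are invented for illustration, not taken from the episode):

```python
import numpy as np

def subscale_means(responses, subscales):
    """Mean score per named subscale for one administration.

    responses: (n_learners, n_items) array of Likert ratings.
    subscales: dict mapping subscale name -> list of item indices.
    """
    return {name: float(np.mean(responses[:, idx]))
            for name, idx in subscales.items()}

# Hypothetical 6-item reflection scale: two subscales, two learners,
# administered at baseline and again at six months.
subscales = {"appraisal": [0, 1, 2], "reflection": [3, 4, 5]}
baseline = np.array([[3, 2, 3, 4, 3, 3],
                     [2, 3, 2, 3, 4, 3]], dtype=float)
followup = np.array([[4, 3, 4, 4, 4, 3],
                     [3, 4, 3, 4, 4, 4]], dtype=float)

# Change in mean subscale score from baseline to follow-up.
change = {name: round(subscale_means(followup, subscales)[name]
                      - subscale_means(baseline, subscales)[name], 2)
          for name in subscales}
```

The point is not the arithmetic; it is that once items are grouped into tested subscales, "are learners strengthening appraisal habits?" becomes a number a program can report, instead of a free-text box nobody reads.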
The caveat is important: this tool still needs independent validation before broad adoption, and its development sample and professional context matter. But CME teams do not need to wait for a universal instrument to improve their own measurement discipline. The immediate question is whether a program’s stated outcomes include behaviors like reflection and evidence use—and whether the evaluation plan has any credible way to observe them over time.
The strongest CME outcome is not always a new clinical answer. Sometimes it is a better professional habit: pausing, questioning, checking evidence, asking for support, and applying what is learned with more discipline. If CME providers want credit for building those habits, they need instruments that make them visible before, during, and after the activity.
Presents scholarly practice as a core competency beyond publishing and outlines DeVellis-style scale development steps that CME designers can directly reuse for reflection frameworks and benchmarking.