Hidden Funding and Weak Evaluation Threaten CME Trust
CME signals emphasize defining measurable learner outcomes before selecting formats, while online faculty development succeeds only when access and engagement constraints are designed in from the start.
CME planning is judged less by the activity produced and more by whether the intended outcome was defined before the format. The evidence base is not broad clinician consensus; it is a CPD-heavy week, with moderate independent-clinician corroboration and a few single-source examples. Still, the operator implication is clear: evaluation cannot be the last document built before launch.
The clearest thread came from CPD educators arguing that providers should decide what they intend to measure before building the education. In a Write Medicine discussion on outcomes design, the planning sequence is the point: identify the learner outcome, choose the evidence needed to assess it, then build the learning experience around that evidence.
That extends a thread we saw in an earlier brief on evaluation quality and CME trust. The difference this week is that the conversation moved further upstream. Instead of asking how to repair weak evaluation after the fact, the emphasis is on preventing weak evaluation by making outcomes part of the design brief.
The ANCC OBCE discussion makes the same issue more structural. In the OBCE model conversation, the credit unit is no longer just time spent; the value is evidence of what a learner can know, show, or do. That opens the door to learner roadmaps, portfolios, credit for prior learning, and competency-based progression.
The caveat matters: the OBCE signal is a single accreditor-aligned source, and the broader outcomes discussion is led largely by CME and CPD professionals rather than a wide sample of practicing clinicians. But CME teams do not need a full portfolio overhaul to act on the signal. The immediate question is simpler: for each new activity, can the team name the intended outcome level before it names the format?
The second signal was narrower but useful: an online faculty-development course for anesthesia educators in East Africa reported gains in knowledge, attitudes, and practice when the course was deliberately designed around constraints. The JCEHP Emerging Best Practices in CPD episode described a 13-week Zoom and Telegram program with pre-, mid-, and post-course evaluation, asynchronous materials, peer teaching across cohorts, and attention to bandwidth limits.
This is a single-source, emerging signal from anesthesia educators in East Africa, so it should not be treated as a universal template. Its value is more specific: it shows what “online” has to include when access is not assumed. The course design had to account for unreliable internet, time-zone friction, limited ability to require video-on participation, and gaps in hands-on practice.
For CME providers, the broader implication travels beyond that setting. If a faculty-development or train-the-trainer program depends on live participation, visible engagement, or procedural practice, then access planning is instructional design, not logistics. The concrete test: does the online version include a pre-assessment, a low-bandwidth async path, a plan for engagement signals beyond camera use, and a supplemental method for skills practice?
The planning sequence is the story this week. If the first decisions are topic, faculty, format, and date, the outcomes strategy will keep arriving too late. If the first decisions are desired change, evidence needed, learner constraints, and assessment method, the activity has a better chance of producing data that can guide improvement, support accreditation, and matter to the learner. The pressure is not to make every activity a full competency portfolio. It is to stop treating outcomes and access as afterthoughts. CME teams should ask where their process still rewards finishing the event more than defining what the event is supposed to change.
Stresses deciding what to measure before design and using qualitative plus quantitative data aligned to Moore's levels.
Reinforces pre/post outcome measurement and intentional evaluation frameworks in faculty development contexts.
Details the ANCC OBCE model enabling learner co-created roadmaps, prior-learning credit, and portable portfolios decoupled from seat time.