Precision Education Finally Gets Real Data Sources and Learner Control
A scoping-review discussion exposed a narrow but important gap: CPD assessments often measure learning, but the real-world consequences of those assessments are rarely tested.
Pre-post knowledge tests remain the familiar default in CPD assessment, even as competence evaluation is being pushed toward multiple sources and real-world performance. This week’s evidence is a single research-summary podcast, so the signal is emerging rather than settled, but the gap it describes is one CME providers should not ignore.
In a JCEHP Emerging Best Practices in CPD discussion of a scoping review, the authors described a CPD assessment literature base that looks very familiar to providers: live, online, classroom, asynchronous, and simulation-based education, much of it assessed with written pre/post tests or audience-response questions. Of the 130 reviewed articles, 119 used assessment to measure impact, and knowledge testing dominated: 49% assessed knowledge alone, 27% blended knowledge and performance, and only 7% measured performance exclusively (JCEHP Emerging Best Practices in CPD).
The sharper finding was not that CME still leans on pre/post testing. It was that validation rarely reaches the question that matters most once a learner leaves the activity: what are the consequences of the assessment? The discussion described five validity evidence domains (content, response process, internal structure, relations to other variables, and consequences), with content validity receiving the most attention and consequences receiving none in the reviewed literature. That means CPD assessments may be treated as low-stakes measurement tools while still influencing confidence, remediation choices, certification maintenance, professional identity, and ultimately patient care.
This connects with an earlier brief on patient impact numbers that supporters will actually believe: credibility in outcomes work does not start at the final impact claim. It starts with whether the assessment instrument is fit for the decision being made from it.
The caveat matters. This is a single CME-society-adjacent research summary, and independent clinician corroboration is still needed. The examples span nursing, oncology, and emergency care; the principle is portable, but implementation should be tested in each profession and specialty context.
For CME teams, the question is simple: if an assessment result will be used to claim impact, guide a learner, satisfy a stakeholder, or support a competence story, have you validated only the content—or also the downstream effects of acting on that result?
A separate radiology discussion this week made the same point from a different direction. In a sponsored RSNA podcast on narrow AI tools, the speaker emphasized validating an algorithm on local patient data before deployment and monitoring performance over time as scanners, software, and workflows change (Radiology Podcast | RSNA).
That was not an education-design conversation, but it is a useful mirror. Clinical workflow tools are being judged by local fit, ongoing performance, and downstream consequences. CPD assessments should be held to a similar standard. The test is not whether a question looks reasonable on review. The test is whether the result can safely support the decision a provider, learner, accreditor, or supporter wants to make from it.
JCEHP podcast episode summarizes scoping review showing pre-post testing dominates CPD assessment while consequence validation and longitudinal studies are absent.
Radiology-society podcast details narrow AI models automating triage, measurement, and incidental-finding follow-up, with explicit requirements for human oversight and bias monitoring.
Earlier coverage of outcomes planning and its implications for CME providers.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.
Request a demo