Clinician Learning Brief

The Learning Physicians Want Credit For Happens Between the Exams

Topics: Accreditation operations, Learning design, AI oversight
Coverage: 2024-11-11 to 2024-11-17

Abstract

A narrow but useful signal: some physicians are arguing that credit should reflect real learning between exams, while acceptable AI use remains concentrated in low-risk, checkable tasks.

Key Takeaways

  • An emerging maintenance of certification (MOC) conversation is shifting from exam criticism to a clearer claim: credit should reflect the conferences, literature follow-up, writing, and collaborative work physicians actually do between formal tests.
  • For CME providers, that points toward portfolio-style product architecture and documentation that can capture distributed learning without turning it into administrative drag.
  • On AI, the usable public signal remains narrow: clinicians and CE teams appear most comfortable starting with low-risk tasks whose outputs can be checked line by line, not judgment-heavy evidence work.

Some physicians are drawing a sharper contrast between how they actually stay current and how competence is still formally recognized. The evidence is narrow and partly specialty-linked, but it raises a concrete provider question: can CME products document longitudinal, practice-linked learning better than an exam-centered model does?

Credit models are being judged against everyday physician learning

In one hematology-oncology leadership conversation, distributed in both video and podcast form, the argument was not just that exams are unpopular. It was that ongoing competence may be better reflected by what physicians already do: attend conferences, follow up on literature, answer clinic-driven questions, write, and work on projects with peers.

That matters for CME providers because it shifts the issue from format convenience to recognition of real learning activity. The question is whether providers can package education as a documented mix of activity, reflection, and follow-up rather than as isolated events.

This is still an emerging signal. The source base is effectively one conversation in two formats, and it sits inside a specialty society MOC debate, so it should not be treated as broad physician consensus. But the implication is concrete enough: if credit systems move even modestly toward portfolio logic, which of your current products could already serve as evidence of longitudinal learning?

AI support is finding acceptance in the smallest, most checkable tasks

The AI theme here is narrower than in recent editions. In a clinician thread on X about systematic reviews and a longer YouTube discussion, the acceptable role for AI was bounded technical help with a clear human check, not core evidence judgment where mistakes could distort conclusions. Separately, a CPD-focused podcast highlighted faculty disclosure management as a useful test case because it is tedious, universal, and compliance-sensitive, while still stressing minimal data exposure, human review, and variable outputs.

For CME providers, that argues against broad AI messaging and toward narrow, auditable use cases. As our recent brief on what AI really optimizes suggested from a different angle, support weakens as soon as the task shifts from inspectable assistance to interpretive evidence work.

The caveat matters here too. The clinician-trust evidence is stronger than the operations example, and the disclosure workflow point is still a single-source conference-preview discussion. Even so, the operator question is straightforward: where are you using AI only when staff can verify the output step by step, and where are you still asking it to do work your own educators would hesitate to trust?

What CME Providers Should Do Now

  • Audit your current product line for activities that could be documented as part of a longitudinal learning portfolio, not just claimed as isolated credit events.
  • Review forms, transcripts, and learner documentation to see whether they can capture practice-linked follow-up, reflection, or collaborative work without adding heavy reporting burden.
  • Limit AI pilots to low-risk operational tasks with explicit human review and clear failure visibility, and keep evidence interpretation outside that boundary unless your review process is much stronger than standard editing.

Watchlist

  • Outcome expectations remain worth watching. A simulation discussion argued that visible activity is not enough when programs claim performance or system value; goals, stakeholder alignment, and proof still matter.
  • Competency language may be hardening into design expectations. A health professions education podcast laid out competency-based education as explicit competencies, sequenced progression, tailored experiences, and programmatic assessment, but this remains more conceptual than market-proven for CME.

Turn learner questions into outcomes data

ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.

Request a demo