Clinician-Educators Want Coaching Programs, Not More Lectures
Earlier coverage of accreditation operations and its implications for CME providers.
Cardiology clinicians link MOC time demands and irrelevance to rising interest in micro-CME, journal credits, and alternative boards such as NBPAS; hybrid CPD gains a practical engagement checklist.
Cardiologists this week explicitly connected MOC frustrations—time demands, questions unrelated to subspecialty practice, and privileging risk—to preferences for automatic micro-CME, journal reading with credit, and alternative boards such as NBPAS. The signal is concentrated in cardiology with one CME-provider discussion and independent X corroboration, yet the implication for providers is portable: low-friction activities gain relevance when they sit beside credentialing pressure.
In a Medscape Cardiology discussion, cardiologists tied dissatisfaction with MOC to three concrete problems: time demands, questions that do not match subspecialized practice, and the possibility that a multiple-choice process could affect privileges. The alternatives they described were not exotic: brief practice-triggered lookups, journal reading with credit, conferences, and certification routes such as NBPAS.
An independent X thread from an oncology clinician echoed frustration with the evidentiary and governance basis of MOC, which matters because the issue is not only cardiology’s internal politics. Cardiology led this week’s examples, but the same pattern can travel to specialties where clinicians see certification as disconnected from their actual scope of practice.
For CME providers, the lesson is not simply to make shorter activities. It is to treat certification friction as part of the learner’s operating environment. We saw a related pattern in an earlier brief on micro-CME and root-cause needs assessment: the barrier may not be content awareness at all, but the way education competes with work, documentation, and institutional requirements. The question for CME teams is whether their micro-learning catalog is mapped to the credentialing pressures clinicians actually feel, or only to content taxonomies.
The second signal came from a JCEHP Emerging Best Practices in CPD podcast on TEC-VARIETY: Tone, Encouragement, Curiosity, Variety, Autonomy, Relevance, Interactivity, Engagement, Tension, and Yielding Products. This is not broad real-time clinician chatter; it is a single educator-led journal discussion, though the underlying forum article is described as drawing on more than 70 research publications.
The useful point for CME teams is that online and hybrid attrition is not just a learner motivation problem. The discussion repeatedly framed poor content structure, weak feedback, usability issues, and platform friction as ways educators can block otherwise self-directed learners. That is a sharper way to look at engagement than asking whether a webinar needs more polls.
The operational implication is modest: pick one or two elements and test them. A welcome orientation, better feedback modality, authentic task, usability pass, or learner-analytics review may tell the team more than a wholesale redesign. The question to ask before the next hybrid launch: which part of the learner experience is making a motivated clinician work too hard to stay engaged?
If certification pathways fragment, CME providers may be asked to support multiple definitions of credible lifelong learning. That could make credit portability, activity metadata, and evidence of meaningful participation more important than the activity format itself. The AI conversation points to a parallel caution. A JAMA+ AI discussion focused on automation bias, clinical AI oversight, and training standards, while a European Urology discussion described specialty interest in generative AI alongside ethical concerns. The near-term CME task is to remove needless friction without replacing it with unexamined automation. Easier learning still needs visible standards.
Cardiologists and educators detail ABIM MOC time burden, irrelevant subspecialty questions, ethical concerns over high-stakes exams, and explicit preference for 'CME on the fly' micro-learning and conferences.
Independent cardiologists express frustration with MOC and voice support for alternative boards, while cautioning against replicating existing problems; NBPAS is referenced as a functional option.
"I am very disappointed in @JAMANetwork for publishing this without soliciting opportunity a counter response, especially when written by officers of @ABIM & citing biased & weak evidence! Was this even peer reviewed? #MOC JAMA Network"
Authors detail how poor instructional design thwarts self-directed health-professional learners and show how each TEC-VARIETY element can be operationalized: welcome orientations, varied feedback modalities (including AI), quizzes, discussion boards, and learner-analytics iteration.
Journal discussion on automation bias, the explainability paradox, and the need for aviation-style, multi-agency oversight and training standards.
A ChatGPT survey in a urology and ethics/bioethics context highlights deployment outpacing safeguards and the need for user-centered studies.