CME providers gain an auditable five-component checklist for CBE redesign and a guarded GenAI workflow that trims disclosure workload while preserving accreditation compliance.
The strongest provider signal this week was a design audit question: can a current CME activity show how learners move from exposure to demonstrated ability? The evidence is narrow and education-led—podcasts and YouTube, not independent clinician conversations—but it gives CME teams a useful way to test whether “competency-based” is more than a label.
Educators discussing competency-based education described it as a design system, not a single tool or credentialing vocabulary. In the PAPERs Podcast episode, the core test was whether a program has five connected components: defined outcome competencies, sequenced progression toward competence, tailored learning experiences, competency-focused instruction, and programmatic assessment.
That matters because many CME offerings can name an outcome without showing how a learner progresses toward it. The companion video version emphasized that time is a resource, not the organizing principle. For CME providers, that distinction changes the build: a webinar series, enduring module, or longitudinal curriculum should be judged by whether it deliberately sequences practice, feedback, and reassessment—not just whether it covers the right topics.
This extends an earlier brief on defining the destination before choosing the route. The new contribution is auditability: if a flagship program cannot point to its blueprint, its progression logic, and its low-stakes assessment data, it is probably still time-based education with competency language attached. The immediate question for CME teams is simple: where, exactly, does the learner get diagnosed, coached, and moved forward?
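To make that audit concrete, here is a minimal sketch in Python of how a team might log the five components as an inspectable record. Every name below (CBEAudit, gaps, and all the field names) is hypothetical, and it assumes evidence is tracked as simple lists of artifacts; nothing in it comes from the podcast itself.

```python
from dataclasses import dataclass, field

# Minimal sketch of the five-component audit as an inspectable record.
# Field names mirror the five components named in the episode; the schema
# itself is an illustrative assumption, not a published standard.
@dataclass
class CBEAudit:
    outcome_competencies: list = field(default_factory=list)        # defined end-state abilities
    progression_blueprint: list = field(default_factory=list)       # sequenced milestones toward competence
    tailored_experiences: list = field(default_factory=list)        # learning matched to diagnosed needs
    competency_focused_instruction: list = field(default_factory=list)  # teaching aimed at the outcomes
    programmatic_assessment: list = field(default_factory=list)     # low-stakes data driving progression decisions

    def gaps(self):
        """Components with no evidence on file: each is a claim the program cannot yet prove."""
        return [name for name, evidence in vars(self).items() if not evidence]


# A program that names outcomes but cannot show progression or assessment
# evidence fails the audit, whatever its branding says.
flagship = CBEAudit(outcome_competencies=["interpret ambulatory BP monitoring"])
print(flagship.gaps())
# ['progression_blueprint', 'tailored_experiences',
#  'competency_focused_instruction', 'programmatic_assessment']
```

The point of the design is that the audit fails loudly: any component without evidence attached is surfaced by name, which is the same trust-versus-evidence distinction the brief closes on.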
The week’s second signal came from CME operations rather than instructional design. An Alliance Podcast preview described a five-step GenAI workflow for faculty disclosure management, including prompts, live-event testing, minimal data inputs, and human oversight. The reported value was not replacing compliance judgment; it was trimming repeated review work in a high-volume process.
This is an emerging, single-source signal from CME professionals, so it should be treated as a pilot pattern, not a field-wide benchmark. Still, the operational logic is clear. Disclosure review often requires staff to repeat similar checks across many faculty relationships. If GenAI can reduce a portion of those repetitive steps while keeping final decisions with humans, the savings can be meaningful without asking teams to buy a new platform.
The constraint is accreditation defensibility. A useful pilot should leave behind a record: what data were entered, what the prompt was allowed to do, what it was not allowed to do, what humans reviewed, and how exceptions were handled. The question is not “Can AI do disclosure?” It is “Can we prove that automation helped staff work faster without weakening our compliance process?”
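As an illustration of what such a record could look like, here is a minimal sketch, again with hypothetical names: DisclosureReviewEntry and all of its fields are assumptions, not the Alliance workflow's actual schema. It captures the five things the paragraph above says a defensible pilot should leave behind.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit-trail entry for a guarded GenAI disclosure check.
# The Alliance preview describes the workflow only at the level of steps;
# every field name below is an illustrative assumption.
@dataclass
class DisclosureReviewEntry:
    faculty_id: str
    inputs_provided: list            # the minimal data actually sent to the model
    prompt_allowed_to: list          # what the prompt was permitted to do
    prompt_forbidden_from: list      # what it was explicitly barred from doing
    model_draft: str                 # the model's suggestion, never a final decision
    human_reviewer: str              # the named person who made the call
    final_decision: str
    exceptions: str = ""             # how any deviation from the workflow was handled
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_defensible(self) -> bool:
        """Auditable only if a named human signed off on a recorded decision."""
        return bool(self.human_reviewer and self.final_decision)
```

Whether the log lives in a dataclass, a spreadsheet, or a quality-management system, the test is the same: an auditor should be able to reconstruct each decision without interviewing the staff who made it.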
The useful move is not to declare a wholesale shift to competency-based CME or AI-enabled compliance. It is to make design and operations inspectable. CME teams should be able to show how a learner progresses toward ability and how an automated workflow stays inside agreed boundaries. If either story depends on trust rather than evidence, the program is not yet auditable enough.
Sources
Outlines the five core CBE components with historical context and mastery-learning sequencing logic.
Reinforces programmatic assessment and blueprinting as essential for diagnosing learner gaps and making progression decisions.
Earlier coverage of learning design and its implications for CME providers.
Demonstrates 30% time reduction across 100+ real faculty cases using minimal-data prompts and live-event testing with regulatory safeguards.
Describes testimonial and hermeneutical injustice mechanisms in educator-learner power dynamics.