Clinicians Are Building Their Own AI Tools While CME Still Teaches Literacy
Earlier coverage of learning design and its implications for CME providers.
Competency-based pathways succeed only when AI personalization is paired with coaching, explicit oversight rules, and peer-distilled recaps.
Competency-based learning delivers value only when the surrounding products teach clinicians to interpret evidence and set goals, and pair that with real coaching, rather than merely tracking required activities. Evidence is strongest in cardiology and internal medicine training, yet the provider implication is broader: AI can personalize pathways, but it will not repair weak advising, vague review policies, or formats that force busy clinicians to perform synthesis themselves.
Cardiology education leaders were direct about the limits of time-based training. In a Medscape cardiology discussion, they argued for flexible pathways, competency assessment, simulation, practice-specific maintenance, and lifelong portfolios rather than assuming every learner needs the same number of years or the same board-prep path (Medscape). A separate discussion of accelerated 3-year MD programs described AI-supported “precision education”: matching ward exposures to next-day resources, using assessment data to surface strengths, and helping students see whether a specialty path fits their profile (Academic Medicine Blog).
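To make that matching step concrete, here is a minimal sketch of what a provider-side implementation could look like; the resource catalog, topic tags, and function names are illustrative assumptions, not details from either source.

```python
# Hypothetical sketch of "precision education" matching: today's ward exposures
# and recent assessment data drive tomorrow's recommended resources.
# The catalog, topic tags, and all names are illustrative assumptions.

RESOURCE_CATALOG = {
    "heart_failure": ["GDMT sequencing module", "HFrEF case set"],
    "atrial_fibrillation": ["Anticoagulation decision aid", "Rhythm-control review"],
    "aortic_stenosis": ["Structural heart referral pathway", "Patient-selection primer"],
}

def recommend_next_day(ward_exposures, assessment_scores, threshold=0.7):
    """Return resources tied to today's cases, surfacing weak assessment areas first."""
    recommendations = []
    for topic in ward_exposures:
        weak = assessment_scores.get(topic, 1.0) < threshold
        recommendations.append({
            "topic": topic,
            "resources": RESOURCE_CATALOG.get(topic, []),
            "priority": "high" if weak else "routine",
        })
    # Weak areas sort first so the coaching conversation starts there.
    return sorted(recommendations, key=lambda r: r["priority"] != "high")

print(recommend_next_day(
    ward_exposures=["heart_failure", "aortic_stenosis"],
    assessment_scores={"heart_failure": 0.55, "aortic_stenosis": 0.85},
))
```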
The counterweight came from internal medicine residency advising. A Medical Education podcast on resident-advisor co-regulated learning found that advisor dyads often centered on EPA counts, advancement requirements, and system navigation rather than on helping residents build self-regulated learning habits (Medical Education Podcasts). That matters because a portfolio without coaching can become a better-looking compliance file.
For CME providers, the near-term question is not whether to add an AI recommender. It is whether the product defines the coaching behavior around the recommendation: how faculty help learners interpret mixed evidence, choose goals, revise plans, and prove competence over time. We saw a related pattern in an earlier brief on research appraisal: if learners are expected to act on evidence, appraisal has to be built into the learning model, not treated as optional enrichment.
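One way to make that expectation testable in a product spec is to treat the coaching plan as a required part of every recommendation. The sketch below assumes hypothetical field names; neither source prescribes a data model.

```python
from dataclasses import dataclass, field

# Hypothetical data model: an AI-generated pathway recommendation is incomplete
# until the human coaching plan around it is filled in.

@dataclass
class CoachingPlan:
    evidence_discussion: str   # how faculty will walk through mixed or conflicting evidence
    learner_goal: str          # the goal the learner chose, not the one the system assigned
    revision_checkpoint: str   # when the plan gets revisited and revised
    competence_evidence: list = field(default_factory=list)  # artifacts that prove competence over time

@dataclass
class PathwayRecommendation:
    activity: str
    rationale: str
    coaching: CoachingPlan     # required field: no recommendation ships without it
```

Making the coaching plan a required field is the design point: the recommender can fill in the activity, but only an advisor conversation can fill in the rest.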
The AI discussion this week was less about novelty and more about accountability. A Medical Education podcast based on an editorial by journal leadership framed AI as a tool closer to search, spelling support, or manuscript-processing software than an autonomous author. The useful distinction: disclose AI when it is central to the rigor or replicability of the work, but do not turn every grammar assist into disclosure noise (Medical Education Podcasts).
For CME teams, that maps directly onto faculty, editorial, and peer-review operations. If AI helps draft a needs assessment, generate cases, summarize literature, or propose assessment items, the provider still needs a named human accountable for accuracy, source checking, clinical judgment, and bias review. If reviewers use AI, confidentiality becomes the first operational constraint: protected manuscripts, unpublished materials, and learner data cannot be pasted into general tools without clear permission and safeguards.
This was primarily a medical education publishing signal, not broad clinician consensus. Still, it gives accredited providers a useful policy shape: define permitted uses, name the accountable human, specify when disclosure is required, and add review checkpoints before AI-assisted work reaches learners.
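Rendered as a provider-side policy record, that shape might look like the sketch below; the field names and permitted-use categories are assumptions for illustration, not language from the editorial.

```python
# Hypothetical AI-use policy for an accredited provider, mirroring the four
# elements above: permitted uses, accountable human, disclosure rules, review gates.

AI_USE_POLICY = {
    "permitted_uses": [
        "draft needs assessments",
        "generate teaching cases",
        "summarize literature",
        "propose assessment items",
    ],
    "prohibited_without_approval": [
        "pasting protected manuscripts, unpublished materials, or learner data into general-purpose tools",
    ],
    "accountable_human": "named faculty or editor responsible for accuracy, sources, clinical judgment, and bias review",
    "disclosure_required_when": "AI is central to the rigor or replicability of the work",
    "review_checkpoints": [
        "source check before faculty review",
        "clinical accuracy and bias review before release to learners",
    ],
}
```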
Hospitalists recapping SHM Converge25 did not describe value as access to full-session recordings. They focused on concise, transferable points: ethics framing for capacity and surrogates, inpatient sequencing of GDMT in heart failure, wearable data in perioperative assessment, Parkinson’s management in hospice, and junior-faculty MedTED talks that turned focused interests into short teaching moments (The Curbsiders).
This is a single-source signal with possible provider sponsorship context, so it should not be read as a universal conference preference. But it is a concrete format lesson. Busy inpatient clinicians appeared to value peer translation: what changed my thinking, what I can use tomorrow, and what deserves expert backup.
For CME providers, the implication is to design conference-adjacent learning as editorial synthesis, not content storage. A stronger recap product may be a short peer-led series with cases, “when to call a specialist” thresholds, and one application prompt per pearl—not a searchable warehouse of hour-long sessions.
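As a sketch, that editorial-synthesis format could be captured in a simple content model; the field names below are hypothetical, not drawn from The Curbsiders episode.

```python
from dataclasses import dataclass

# Hypothetical content model for a peer-led conference recap series:
# every pearl ships with a short case, an escalation threshold, and one application prompt.

@dataclass
class RecapPearl:
    takeaway: str              # what changed the presenter's thinking
    illustrative_case: str     # brief case showing the pearl in practice
    call_specialist_when: str  # the "when to call a specialist" threshold
    application_prompt: str    # one question the learner answers about their own patients

# Placeholder instance; real content would come from the peer presenter, not the platform.
example = RecapPearl(
    takeaway="<one-sentence pearl from the session>",
    illustrative_case="<two- to three-sentence case>",
    call_specialist_when="<explicit threshold for expert backup>",
    application_prompt="<single question tying the pearl to the learner's practice>",
)
```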
The risk is not that CME teams will ignore AI or competency-based education. The risk is that they will adopt the surface layer: dashboards, nudges, milestones, recaps, and review tools—while leaving the human work undefined. This week’s clearest lesson is that evidence needs interpretation. Learners need coaching. Authors need accountability. Reviewers need boundaries. Conference audiences need synthesis. If a CME product asks clinicians to trust a pathway, portfolio, or recap, the provider should be able to point to the human judgment behind it.
Educator/organization voices articulate concrete calls for competency-based shortening, direct-to-structural-heart pathways, AI personalization, and lifelong portfolios over traditional boards.
Practicing clinician perspective reinforces the need for early mentorship and AI-driven nudges in personalized training.
Open sourceEarlier coverage of learning design and its implications for CME providers.
Earlier coverage of learning design and its implications for CME providers.
Earlier coverage of learning design and its implications for CME providers.
Shows most advisor-resident dyads remain stuck on EPA quotas and system navigation rather than self-regulated learning, highlighting a faculty development gap.
Editorial voice from journal leadership frames AI as a tool like spellcheck, stressing author accountability, selective disclosure, and reviewer confidentiality obligations.
Clinicians at SHM Converge25 explicitly praised concise pearls, junior-faculty MedTED talks, and real-world case application across multiple clinical topics.