Clinicians Are Writing Their Own AI Literature Tools
AI case selection and short-form evidence summaries point to the same provider challenge: curation only works when learners can see the guardrails.
A radiology education example this week showed AI selecting supplemental cases to cover pathologies that residents’ fixed rotations may miss. The broader lesson for CME providers is not simply “use AI”: curated learning, whether algorithmic or editorial, needs visible human oversight if it is going to shape clinical judgment.
In an AJR Training and Education Series podcast, a radiology department chair described a precision education model that uses teaching files and LLM analysis of resident reports to determine which pathologies residents have encountered, then assigns supplemental cases based on gaps and section-defined priorities. The goal is to reduce the randomness of rotation-based exposure and make competency measurement less dependent on what happens to appear on a worklist.
That is a meaningful shift for CME providers because it moves AI from content generation into learning operations: case selection, performance review, error analysis, and possibly longitudinal recommendations for fellows and attendings. We saw a related issue in an earlier brief on LLM tools reaching clinical practice before evaluation frameworks were in place; this week’s signal is more concrete because the guardrails are attached to a specific educational workflow.
The caveat is important: this is radiology-led evidence, and the strongest example comes from one detailed program discussion, amplified by a resident post pointing peers to the episode. But the provider implication extends to other procedural, diagnostic, and image-heavy fields: if AI is used to personalize exposure, CME teams need to define who decides the curriculum map, how errors are interpreted, when the learner sees the algorithm, and where faculty teaching remains non-negotiable.
For CME leaders, the question is not whether AI can recommend the next case or resource. It is whether the program can explain the recommendation, check for bias, and preserve the expert readout where judgment is actually taught.
The second signal came from a Wiley/MAPS podcast discussion of Wiley’s 2024 HCP information-seeking study. The discussion framed clinicians’ top problem as finding relevant, current information in a publication environment described as producing more than 2 million articles per year, with source credibility close behind. It also reported that 78% of clinicians surveyed still rely on full-text articles, even as demand grows for videos, infographics, factual summaries, podcasts, and other on-demand formats.
This evidence is mainly provider-owned and CME-adjacent, not independent multi-clinician conversation from this week. Still, it is useful because it names a tension CME teams face every day: clinicians want less friction, not less rigor. A short summary that cannot be traced back to the evidence may reduce time burden but lose the reason clinicians came to an education provider in the first place.
For CME providers, the operational move is to treat short-form assets as entry points into an evidence pathway, not as replacements for it. A video abstract, infographic, or two-minute faculty commentary should make the next step obvious: source, rationale, limitation, and application. If the learner cannot tell who curated the content and why it is credible, the format has solved for speed while creating a new problem.
The common thread this week is curation. AI can curate case exposure; editorial teams can curate evidence into short formats. Both are valuable only if the learner can see the handrail: who selected this, what it is based on, what it leaves out, and where expert judgment enters. Before scaling personalization or shortening more content, CME teams should ask whether their programs make those handrails visible enough to earn the learner’s next five minutes.
Department chair describes AI-driven supplemental cases and milestone tracking that replace random rotation exposure.
Resident perspective reinforces the need for second-reader workflows and continued attending teaching to counter algorithm bias.
"🎙️ How can we harness AI for precision education? Our latest @AJR_Radiology Training & Education Series Podcast ft Dr. Michael Recht, chair @nyulangone Apple Spotify Article"
Study data show clinicians prefer full-text journals but need short-form, easy-to-find formats that maintain KOL credibility and clear learning objectives.