AI Use Is Outpacing Clinicians’ Verification Habits
Educator discussion sharpened the picture of the AI-in-CPD gap: practicing clinicians need verification habits and ethics support, not just tool orientation.
AI education is moving faster than clinicians’ verification habits. Educator voices highlight that ChatGPT and AI-enabled tools are already in daily use for writing, searching, and assessment, yet hallucinated or biased outputs can pass unchecked into clinical reasoning. The evidence base remains educator-led, with limited corroboration from practicing clinicians, but the implication for CME is direct: verification and ethics must be treated as core competencies rather than optional add-ons.
In a PAPERs Podcast discussion of a BEME scoping review, medical educators described everyday AI use that is already ahead of training norms: ChatGPT and AI-enabled search tools are being used for writing, searching, assessment support, and workflow shortcuts. The concrete worry was not abstract resistance to AI. It was that hallucinated or biased outputs can move through learning and clinical reasoning workflows without being checked.
That changes the provider problem. In an earlier brief on AI research bypassing practicing clinicians, the issue was largely a literature imbalance: AI medical education work was heavily weighted toward UGME and GME. This week’s discussion adds operational detail. The reviewed corpus included 278 papers, but only 3% focused on CPD; only 14 addressed ethics topics such as algorithmic bias, transparency, informed consent, and privacy. For CME providers, that means the shortage is not simply “more AI content for clinicians.” It is more education that teaches clinicians how to verify, disclose, question, and safely apply AI outputs in practice.
The same review discussion, also available in the video version, pointed to SAMR (Substitution, Augmentation, Modification, Redefinition) as a useful way to classify whether AI is merely substituting for an old task, augmenting it, modifying it, or redefining it. That matters for instructional design: a session that demonstrates prompt-writing sits at a different level than an activity that requires clinicians to compare AI-generated recommendations against source evidence, identify bias or missing context, and decide what can be used in patient care.
FACETS offers a related discipline for reporting and comparing AI education work. For CME teams, the useful move is to apply those frameworks internally before building another AI module: Is the activity teaching verification? Is it surfacing consent and privacy decisions? Is it assessing whether the clinician can recognize when AI output is plausible but unsupported? If not, the activity may improve familiarity without improving judgment.
The week’s signal is not that CME should chase every new AI tool. It is that tool familiarity without verification practice may leave practicing clinicians overconfident at exactly the wrong point in the workflow. CME teams should ask whether their AI content is teaching clinicians to use AI or teaching them to judge it when it sounds convincing.
PAPERs Podcast hosts discuss the BEME 278-paper scoping review, highlighting 'dabbler' behavior, learner hallucination risks, the 3% CPD focus, and the 14 ethics papers.
Same review unpacked with SAMR/FACETS frameworks and concrete examples of AI use in assessment (MCQ generation, narrative feedback, virtual OSCEs).
Earlier coverage of AI oversight and its implications for CME providers.
ChatCME surfaces the questions clinicians actually ask, so you can build activities that close real knowledge gaps.
Request a demo