Clinician debate over AI-written manuscripts is pushing CME teams to tighten disclosure, faculty guidance, and critical appraisal of AI-assisted literature.
Clinicians are drawing a sharper line between AI as a writing aid and AI as an author in medical papers. The signal is narrow—mostly clinician X threads, with thoracic surgery and oncology-adjacent examples, and no formal society guidance in this week’s evidence—but the provider implication is immediate: CME teams rely on literature whose production process is becoming less visible.
The clearest thread asked whether recent manuscript co-authors actually met ICMJE authorship criteria, then extended the question to generative AI: how do those criteria apply to ChatGPT? The clinician concern was not that AI can never touch a manuscript. It was that authorship implies responsibility, approval, and accountability—things an AI system cannot provide.
A second discussion made the boundary more concrete. One physician asked why readers should care if ChatGPT helped write a manuscript when the data, methods, analysis, and conclusions remain sound; in the same thread, he noted the ambiguity by saying, “I’m not sure journals require disclosing the use of a professional editor.” That debate landed closer to disclosure or acknowledgment than co-authorship, especially when AI contributes to text rather than study conception, data, or analysis.
A related exchange added the risk CME teams should not flatten: concern was strongest around review articles or low-human-intervention manuscripts, not targeted use for syntax, flow, or summarizing human-produced data. The distinction matters because a learner evaluating a paper needs to know whether AI may have shaped interpretation, synthesis, or claims—not merely whether a paragraph was polished. That thread also tied AI writing to broader integrity worries such as citation distortions and fabricated data.
For CME providers, this extends the AI-trust issue beyond learner tool use. We saw a related pattern in an earlier brief on clinicians building their own AI literature tools; this week’s conversation narrows the problem to publication-process rules. The concrete question for CME teams: when faculty cite, summarize, or teach from AI-assisted literature, what disclosure and appraisal steps are required before that content reaches learners?
Do not treat AI disclosure as a side note for innovation sessions only. If a CME activity depends on published evidence, the production of that evidence is now part of the trust environment around the activity. The near-term work is modest but important: make AI involvement visible, keep accountability with human authors and faculty, and teach learners how to ask what role AI may have played before accepting a manuscript’s synthesis at face value.
Clinicians explicitly reject AI co-authorship under ICMJE rules because AI cannot accept accountability or make substantial intellectual contributions.
"Think about the last few manuscripts for which you were a co-author…did all authors meet the ICMJE criteria? And for a sub-tweet, how do these criteria apply to generative AI (ie ChatGPT)?"
Multiple posts flag hallucination risks and metric-driven distortions when AI drafts manuscripts without human curation.
"This is an incomprehensible take. We have citations rings, metric-driven distortions, epidrmic levels of data fabrication, lobbies pushing for validation of pharmaceuticals on which we do fuckall about But we should instead worry about articles written by an LLM."
Consensus that disclosure or acknowledgment is preferred over authorship claims for AI-generated text.
"Why should I care if a manuscript is written by ChatGPT? As long as the data are original, the methodology and analysis are legitimate, and the conclusions are supported by the data… Please tell me why I should care? Isn’t using ChatGPT just the modern version of using a…"