AI Chatbots Outscored Oncologists on Empathy, and CME Should Teach the Oversight
Chatbots scored higher on empathy and readability than oncologists on real patient questions, creating demand for CME that teaches verification and hybrid oversight skills.
AI chatbots scored higher on empathy and readability than board-certified oncologists when answering real patient questions posted on social media. The strongest public data this week came from oncology, yet the implication for CME providers is portable: clinicians need repeated practice treating AI output as a fluent first draft that still requires human verification and escalation.
In a Medscape discussion of a recent oncology analysis, Maurie Markman described a study comparing responses from verified oncologists and an AI chatbot to 200 public cancer questions from social media. The reported finding was striking: the best AI platform scored higher than physicians on quality, empathy, and readability.
That does not make AI clinically autonomous. The Medscape source itself framed the result as a snapshot and stressed the need for monitoring, especially around hallucinations. A separate oncology thread made a related point about diagnosis. Lakshmi Krishnan wrote, “When ChatGPT outperformed docs in diagnosis, it made me reflect beyond the man vs machine debate.” Her thread points away from replacement talk and toward a different question: what does good clinician oversight look like when the machine’s first answer is often fluent and sometimes impressive?
For CME providers, that changes the shape of AI education. A module that explains what large language models are will age quickly. The durable need is rehearsal: ask learners to prompt an AI system, identify overconfident language, check source-dependent claims, rewrite patient-facing explanations, and decide when the answer must be escalated to a human clinician or a specialty team.
We saw a related pattern in an earlier brief on point-of-care chatbots turning clinical questions into accredited learning. This week’s signal is sharper because it moves the issue from access to judgment. If AI can produce a more empathetic draft than a busy clinician, the educational task is not to defend the clinician’s pride. It is to teach the clinician how to edit, verify, and own the final answer.
AI education will become part of everyday clinical workflow training, not a separate innovation track. The risk is that providers turn it into another compliance module. A separate X thread about ABIM certification paperwork offers a useful reminder: clinicians quickly recognize education that feels like administrative obligation. AI oversight training will land better if it looks like the work itself—draft, challenge, revise, escalate—rather than a lecture about tools.
Medscape video shows ChatGPT scoring higher on empathy and readability than verified oncologists on social-media cancer questions.
Practicing oncologist thread confirms AI can outperform on rote diagnostic items yet still requires human escalation for nuance and context.
"My latest for @statnews. When ChatGPT outperformed docs in diagnosis, it made me reflect beyond the man vs machine debate. As a physician and historian, I explore how this moment invites us to reimagine diagnosis itself..."
Clinicians describe ABIM communications as 'pay or perish' and recertification as wasteful bureaucracy.
"Board certification used to be a honor, not a mandatory requirement, but it has evolved that way American Board of Internal Medicine sends us this "friendly" letter: pay or perish #medtwitter"