Board Exams Still Test What Oncologists No Longer Use
Earlier coverage of AI oversight and its implications for CME providers.
Clinician conversation shows AI skepticism centers on depth questions and accountability; peer Q&A platforms offer a workflow model for accountable answers.
Oncology clinicians state that AI hallucinates on specific clinical questions and must be used in conjunction with a provider. CME that teaches prompting without failure-mode drills and documented overrides risks teaching access without judgment. The signal is strongest in oncology yet portable across specialties using AI for clinical information.
The clinician concern this week was blunt: AI can help with broad medical information, but it becomes less trustworthy as questions get more specific. Justin Dubin, MD, put it this way: “HOWEVER, the more specific and in depth the question you ask, the more likely the data is inaccurate and/or a hallucination.”
For CME providers, that moves AI education beyond tool tours. A module that shows learners how to prompt an AI system is incomplete if it does not also show them how to catch a false answer, decide whether to override it, and document why.
A related MAPS podcast discussion framed AI use in medical affairs around privacy, bias, hallucination, regulatory accountability, and the higher risk of moving outputs from internal use to external scientific communication (source). That is not the same as independent clinician conversation, but it reinforces the operational point: AI education needs workflows, not slogans.
We saw a related pattern in an earlier brief on LLM hallucination and verification drills. This week’s version is more concrete. CME teams should audit AI activities for observable behaviors: classify the failure, identify the missing evidence, route to clinician review, and record the rationale for accepting or rejecting the output.
The second signal came from physician-only Q&A behavior. In an oncology thread, Balazs Halmos described theMednet’s search feature as a way to type full clinical questions “just like you’d ask a colleague” and receive expert answers with context. His summary was simple: “theMednet is where physicians learn from each other.”
This is an emerging signal, not a broad platform mandate. But it points to a real format problem for CME: many clinicians do not experience a learning need as a curriculum title. They experience it as a messy question between visits, tumor boards, inbox work, and literature checks.
For providers, the implication is not to bolt a chatbot onto every activity. It is to examine whether case-based education answers questions in the shape clinicians actually ask them: full context, expert reasoning, clear limits, and a next step. If a program’s search experience returns only long content lists, it may be solving the provider’s indexing problem rather than the clinician’s question problem. Would replacing one linear module with a rapid peer-answer format increase relevance and participation?
The useful question is no longer whether AI tools or peer platforms belong near education. Clinicians are already testing both. The harder question is whether CME can make the answer accountable. If an activity cannot show where an answer came from, how it was checked, what its limits are, and what a clinician should do when confidence is low, it is teaching access without judgment. That is the trust risk to fix first.
Clinicians state that AI is useful for broad search but hallucinates on specific clinical questions; it must be positioned as a tool used "in conjunction" with providers.
"Further confirmation of what many of us have been saying from the beginning. AI absolutely has and will continue to have even more value in education, especially medical information. HOWEVER, the more specific and in depth the question you ask, the more likely the data is inaccurate and/or a hallucination. Key here is understanding AIs limit as a tool and using it IN CONJUNCTION with your medical provider to guide any of your health decisions and care. #ai #medicine"
A podcast segment reinforces that AI education must move beyond demos to governance, limits, and human oversight workflows.
Clinicians promote upgraded search on physician-only platforms that accept full clinical questions and return expert answers with deep context.
"Ever hoped you could fish for easier answers to your daily onc questions? Now you can throw a better net! theMednet is where physicians learn from each other. It is a physician community for shared learning which i enjoyed participating in and learning from over the years. @themednet Now Search lets you type full questions—just like you’d ask a colleague—and get expert answers with deep context. Try it →"