AI Is Breaking the Assessments CME Still Uses
Earlier coverage of learning design and its implications for CME providers.
Board exams reward recall of new agents while oncology practice requires critical appraisal, surrogate-endpoint critique, and shared decisions.
The strongest signal came from oncology clinicians and trainees questioning whether board-oriented study resources prepare physicians for the decisions they actually face; the examples are oncology-specific, but the provider implication travels to any field where evidence changes faster than exams or curricula. One radiation oncology trainee asked peers what helped them learn and pass exams, prompting a discussion of exam-prep tools and study strategies (source). A separate oncology thread tied the issue more directly to practice: meeting questions are moving toward appraisal of practice-changing benefit, surrogate endpoints, and financial toxicity, while board preparation still leans toward remembering new agents and indications.
That matters because exams shape learning behavior. If the assessment asks for the newest drug fact, clinicians will optimize for recall. If practice asks them to weigh quality of life, comorbidity, financial toxicity, and supportive-care choices, CME that mirrors the exam too closely may reinforce the wrong skill.
Bishal Gyawali captured the practice-side change in one sentence: “The questions being asked at annual meetings are changing from cheerleading-type questions to critical inquiry type questions” (source). We saw a related pattern in an earlier brief on clinicians needing to read the literature they now must apply: the durable gap is not access to information, but rehearsal in evaluating it.
For CME teams, the question is simple: does the post-test reward the same behavior the activity says it wants to change?
The AI signal was narrower and mostly from society-hosted oncology education, so it should be read as an emerging signal rather than broad clinician consensus. Still, the substance was concrete. In a pediatric oncology webinar, participants described frequent use of large language models for work tasks, while raising concerns about privacy, hallucinated references, and whether AI outputs can be used safely in patient-specific reasoning (source).
The useful lesson for CME providers is not “teach AI.” It is to teach where AI sits in the clinical workflow. The webinar discussion repeatedly returned to reference checking, not entering patient-identifying data into open tools, and using AI outputs as material for tumor-board discussion rather than as treatment recommendations.
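That workflow point can be made concrete. Below is a minimal, hypothetical sketch of a pre-submission guard that blocks a draft prompt containing obvious identifiers; the pattern list and the flag_identifiers helper are illustrative assumptions, not a production de-identification method.

```python
import re

# Illustrative patterns for obvious identifiers. A real deployment would rely
# on a vetted de-identification service, not ad-hoc regexes.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE),
    "date_of_birth": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_identifiers(prompt: str) -> list[str]:
    """Return the names of identifier patterns found in a draft prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

draft = "72yo, stage III colon cancer, MRN: 00482913, s/p FOLFOX. Next steps?"
hits = flag_identifiers(draft)
if hits:
    print("Blocked: remove " + ", ".join(hits) + " before using an open tool.")
else:
    print("No obvious identifiers found; still review manually before sending.")
```

The design point, not the regexes, is what matters: the check runs before anything leaves the clinician's screen, which is exactly the habit the webinar participants described.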
That changes the learning design. A generic module on prompt writing is too thin. A stronger activity asks clinicians to inspect the source trail, find missing patient-context variables, identify when the model is over-answering, and decide what belongs in a tumor-board packet. The assessment should include false or unsupported citations, ambiguous patient details, and a required “I don’t know / needs more data” response option.
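As a sketch of what such an item could look like as a data structure, assuming field names, answer options, and a seeded trial citation invented purely for illustration (this is not a standard CME item schema):

```python
from dataclasses import dataclass, field

# Hypothetical item model for an AI-appraisal assessment.
@dataclass
class Citation:
    text: str
    verifiable: bool  # planted-false citations are seeded with verifiable=False

@dataclass
class AppraisalItem:
    ai_output: str                # the model response learners must appraise
    citations: list[Citation]
    missing_context: list[str]    # patient variables the output ignored
    options: list[str] = field(default_factory=lambda: [
        "Accept as written",
        "Accept after edits",
        "Reject: unsupported citation",
        "I don't know / needs more data",  # the required uncertainty option
    ])
    correct_option: str = "Reject: unsupported citation"

# Example item seeded with a deliberately fabricated trial citation.
item = AppraisalItem(
    ai_output="Adjuvant agent X is standard of care per the PHOENIX-2 trial.",
    citations=[Citation("PHOENIX-2 trial (2023)", verifiable=False)],
    missing_context=["performance status", "comorbidities", "patient goals"],
)
```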
For CME teams, the concrete move is to make AI verification observable. If learners can complete the activity without checking the evidence behind the output, the activity has not taught the workflow.
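One hypothetical way to make that observable is to treat each evidence check as a logged step and gate completion on all of them; the step names below are assumptions for the sketch, not platform fields.

```python
def verification_observed(logged: dict[str, bool]) -> bool:
    """Gate activity completion on recorded evidence checks.

    The step names are hypothetical; the point is that each check leaves a
    record the activity can score.
    """
    required = (
        "opened_cited_source",
        "flagged_unverifiable_citation",
        "listed_missing_patient_context",
    )
    return all(logged.get(step, False) for step in required)

# A learner who never flags the missing patient context has not completed
# the verification workflow, however well they answered the recall items.
print(verification_observed({
    "opened_cited_source": True,
    "flagged_unverifiable_citation": True,
    "listed_missing_patient_context": False,
}))  # -> False
```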
A format note is worth keeping in view, but with a caveat: the evidence this week came from institutional educational content, not independent clinician conversation. Harvard Medical School’s external education overview emphasized blended programs that combine live teaching with online materials for busy professionals (source). That format only matters if the learning task is right. Longitudinal delivery can help clinicians practice judgment over time, but it can also stretch a recall course across more months. This week’s sharper point is that CME teams should look first at what the activity asks learners to do. If the task is still memorize, click, and move on, the format will not fix the mismatch.
Trainees describe board questions testing latest-agent knowledge while real decisions require weighing QOL and watchful waiting:
"I asked Medsky (Bluesky Medtwitter): When you were studying for tests/board exams, what kinds of resources did you find most useful? What do you think was the most important thing that helped you learn/pass? #radonc #medtwitter #meded Migrate over:"
Mid-career oncologists request resources combining guideline mastery with critical-inquiry skills:
"Listening to audience questions at this year's ASCO meeting has made me hopeful. Change is happening. The questions being asked at annual meetings are changing from cheerleading-type questions to critical inquiry type questions."
Oncologists detail daily LLM use cases and stress human validation plus reference checking.
Pediatric hematology-oncology teams highlight enterprise tools that integrate into tumor boards without replacing judgment.
Harvard-branded program highlighted for combining live teaching with flexible online cohort materials.