The Safe Bet for AI in Medical Education Is Adaptation
Earlier coverage of communication skills and its implications for CME providers.
In epilepsy- and oncology-led examples, specialty education treats patient counseling as part of the clinical skill being taught, while AI governance continues to settle into routine CME operations.
Some specialty education is now teaching the move from interpretation to patient counseling as part of the clinical act itself. This week’s examples came mainly from epilepsy and oncology, so the pattern should be read narrowly: in counseling-heavy decision areas, the hardest instructional step may be explaining options, uncertainty, and tradeoffs to patients rather than stopping at the evidence readout.
Across this week’s sources, communication was presented as part of disease-specific clinical performance rather than a generic bedside skill. In an epilepsy education video, faculty explicitly tied shared decision-making and difficult patient conversations to care decisions (source). In oncology-oriented discussions, the same pattern appeared as the work of turning testing, biomarkers, and treatment options into patient-facing explanations and counseling (source, source).
The evidence here is still narrow and not strong enough to support claims of broad clinician demand. But it does suggest a sharper design target in counseling-heavy specialties: cases may need to test the move from interpretation to explanation, not just whether the learner reaches the right clinical answer. That builds on an earlier brief about communication becoming an assessable clinical skill and locates the current shift more precisely within specialty teaching.
For CME teams, the key question is where communication objectives remain generic even though the real learning task is framing risk, sequencing options, or handling a difficult decision conversation inside the case.
The AI update this week was operational, not strategic. A CME-focused source described AI use for summarization, outlining, literature distillation, and draft support, while stressing verification, privacy, bias, and human review (source). A conference video added broader context on safeguards and accuracy controls, though it was not specific to CME production workflows (source).
Because these sources are organization-adjacent, this is best read as a continuity item rather than a new market turn. The practical update is that AI use appears routine enough in production work that governance gaps now show up in ordinary drafting and source-handling steps.
The operator question is straightforward: if AI is already helping with synthesis and first-pass drafting, where exactly do source verification, privacy review, and human sign-off occur?