Specialty Education Starts Teaching the Counseling Step
Earlier coverage of communication skills and their implications for CME providers.
This week’s strongest signal framed communication as a teachable care-delivery skill, while AI examples kept emphasizing defined tasks, grounded sources, and human review.
Communication was framed this week less as a soft skill and more as part of clinical performance. The evidence is still narrow and most visible in oncology-adjacent settings, but it is concrete enough to matter for CME design now.
Across this week’s sources, the shift was not that communication matters. It was that people described it as a set of observable moves: explaining under clinic-time pressure, translating jargon, and checking whether patients actually understood what they heard. One conversation treated brevity and clarity as part of competent care, not an optional bedside extra (Healthcare Unfiltered). Another named teach-back, lay-language explanation, and culturally responsive communication as behaviors that can be taught and assessed (IASLC 2025 WCLC Press Briefing).
The team implication mattered too. Communication was not framed as the physician’s burden alone; nurses and navigators were part of the picture, especially in survivorship and next-step conversations (ONS Voice). These are still oncology-heavy, thinly attributed sources, so this is better read as an emerging signal than as broad consensus. But it fits the series’ longer communication thread, including our earlier brief on specialty education teaching the counseling step, and this week’s version is more operational and more team-based.
For CME providers, the decision is practical: if the real need is explanation, understanding checks, and handoff language under time pressure, communication should be designed and assessed like clinical capability, not parked as elective professionalism content. The useful test is whether your activity asks learners to demonstrate those moves or merely rate their own confidence.
This week’s AI thread did not open a new debate about AI in general. The clearest distinction was between broad models and narrower systems tied to defined tasks, curated source material, and human review. One implementation discussion described literature search, summarization, and label-based response drafting, while stressing controlled inputs and a human in the loop (MAPS Elevate). Another contrasted general-purpose LLMs with a specialty multi-agent system that referenced WHO, ICC, NCCN, and trial resources for defined case-support tasks (VJHemOnc). A third discussion reinforced the limit case, pointing to hallucinations and outdated information when general models are applied to complex oncology questions (Cleveland Clinic Cancer Advances).
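To make that shape concrete, here is a minimal sketch in Python of the pattern those sources describe: a defined task, answers drawn only from a curated source base, and a human review gate before anything ships. Every name in it (CURATED_SOURCES, draft_answer, human_review) and both source snippets are illustrative assumptions, not the systems named above.

```python
from dataclasses import dataclass

# Illustrative sketch only: a defined task, a curated source base,
# and a human review gate. Names and content are assumptions, not
# any vendor's system or API.

CURATED_SOURCES = {
    "who-classification": "WHO classification excerpt on myeloid neoplasms ...",
    "nccn-guideline": "NCCN guideline excerpt on first-line therapy ...",
}

@dataclass
class Draft:
    task: str               # the defined task, stated up front
    answer: str             # machine-drafted text
    citations: list[str]    # which curated sources grounded the draft
    approved: bool = False  # flipped only by a human reviewer

def retrieve(question: str) -> list[str]:
    """Naive keyword match restricted to the curated source base."""
    words = [w for w in question.lower().split() if len(w) > 3]
    return [key for key, text in CURATED_SOURCES.items()
            if any(w in text.lower() for w in words)]

def draft_answer(question: str) -> Draft:
    """Draft only from retrieved curated text; refuse when nothing matches."""
    hits = retrieve(question)
    if not hits:
        return Draft("case-support", "No grounded answer available.", [])
    return Draft("case-support", " | ".join(CURATED_SOURCES[h] for h in hits), hits)

def human_review(draft: Draft, reviewer_approves: bool) -> Draft:
    """Nothing ships without an explicit expert sign-off."""
    draft.approved = reviewer_approves
    return draft

if __name__ == "__main__":
    d = draft_answer("first-line therapy options")
    d = human_review(d, reviewer_approves=True)  # expert stays in the loop
    print(d.citations, d.approved)
```

The design point is the refusal path: when nothing in the curated base matches, the system declines rather than improvising, and no draft is released without the reviewer flag set.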
Most of this support comes from organization-led or implementation-led sources rather than broad frontline clinician conversation, so it should be treated as an emerging expectation, not settled consensus. Even so, the operator implication is clear: if AI appears in a learning product, faculty session, or support workflow, providers should be able to state the task, name the source base, and show where expert review remains in the process.
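One lightweight way to meet that bar is a disclosure record kept alongside each activity. This is a hypothetical sketch, assuming a simple three-field structure; the class name, field names, and example values are invented for illustration, not an accreditation requirement or existing standard.

```python
from dataclasses import dataclass

# Hypothetical disclosure record for AI used in a learning product or
# support workflow. Fields mirror the three questions above: what task,
# what sources, and where expert review remains.

@dataclass(frozen=True)
class AIUseDisclosure:
    task: str         # the defined task the system performs
    source_base: str  # the curated sources it may draw on
    review_step: str  # where expert human review remains

example = AIUseDisclosure(
    task="summarize recent trial abstracts for faculty pre-reads",
    source_base="curated, indexed set of journal articles",
    review_step="faculty editor approves every summary before release",
)
```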