What Teamwork Training Misses When Clinicians Don’t Say the Hard Part Directly
This week’s clinician-learning signal is more specific than a generic call for empathy: communication is being treated as a skill to practice and assess, and content quality is being judged by how clearly it teaches.
Communication education is being framed less as something clinicians absorb by example and more as something they should practice, review, and assess. The evidence this week is cross-specialty but still source-limited, so the right read is a recurring pattern with clear design implications, not a claim of universal adoption.
Across this week’s sources, communication was discussed in concrete, trainable terms: difficult conversations can be improved with coaching, shared decision-making can be practiced with feedback, and language-appropriate communication can be taught and assessed. That framing appears in a JAMA discussion, an oncology conversation describing video review and feedback for hard conversations (Kidney Cancer Unfiltered), and a pulmonology discussion that emphasized teaching patients how to use action plans rather than simply handing them over (Keeping Current CME).
For CME providers, the implication is straightforward: lecture-only communication content is harder to justify when the skill itself is being framed as observable performance. If the objective is better counseling, shared decisions, or clearer patient instructions, the activity likely needs rehearsal, feedback, or some form of guided review.
This is distinct from generic empathy programming. It points toward communication as a clinical skill that benefits from scenarios, faculty observation, and outcome measures tied to specific tasks. CME teams should ask where their current communication portfolio still teaches principles without giving learners a chance to demonstrate them.
A second, narrower pattern this week is that educational quality is being judged through visible teachability. In an RSNA review, strong education exhibits were described as timely, image-rich, and easy to absorb rather than dense research summaries. A conference-style oncology program on YouTube also foregrounded interactivity and downloadable materials (Medscape), while a provider-owned CME example highlighted practice aids alongside the teaching itself (PeerView).
This is not broad proof of all-specialty consensus, and part of the support here comes from provider-owned educational content. Still, it is a useful editorial cue. Clinicians and educators are not treating “good science update” and “good teaching” as the same thing. Visual clarity, digestibility, and usable support tools are part of how educational quality is being recognized.
That lands less as a new packaging thesis than as a faculty and content-standard issue. As an earlier brief on why dense education fails at first glance argued, content that is accurate but hard to absorb can still miss the mark. The operator question is whether review workflows explicitly test for teachability: can a busy clinician grasp the point quickly, use it in practice, and return to a visual or aid later without re-consuming the full session?
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.