CME Keeps Asking Content to Do Work Only Design Can Do
Clinician and provider conversations pointed to the same lesson: dense CME needs deliberate learning architecture, not better packaging of passive formats.
The clearest signal this week was not a new content topic; it was a stronger push to make CME structure do more of the learning work. The strongest independent examples came from oncology educators at ASCO25, but the provider implication is broader: dense education, mixed teams, and multi-site scale all fail when format is treated as an afterthought.
At ASCO25, clinicians circulated a teaching framework built around CLEVR: contrasting cases, listening and participation, elaboration, visualization, and repetition. One thread framed the goal plainly: align the full education chain—who, why, where, what, and how—while reducing cognitive overload in high-density oncology education (source). A second clinician post highlighted the same CLEVR teaching model and its contrast with passive learning (source).
For CME providers, the useful part is not the acronym itself. It is that the framework turns “make it interactive” into a checklist that can be built into agenda design, faculty prep, case sequencing, slides, polling, and outcomes plans. That matters in fields where data volume can overwhelm even motivated learners.
The oncology examples dominate this week’s evidence, but the instructional-design principles are portable. This also connects to an earlier brief on ability-based progression: if CME wants to move beyond attendance and recall, it needs formats that let learners compare, explain, rehearse, and apply—not simply listen.
The near-term question for CME teams: which one high-density lecture or panel can be rebuilt as a CLEVR-aligned case sequence, then compared against the prior version on retention or intended practice change?
A provider-sourced CPD conversation this week made a sharper distinction between co-locating professions and designing for collaboration. The critique was familiar but specific: many activities market to several professions, put different speakers on a panel, and stop there. What is missing is structured small-group discussion, role clarity, facilitation, and a patient story that gives the team a shared problem to solve (source).
Because this signal comes from provider-owned educational content rather than independent clinician conversation, it should not be treated as broad clinician consensus. Still, the operational point is strong. IPCE (interprofessional continuing education) does not happen because nurses, physicians, pharmacists, therapists, or other professionals are in the same room. It happens when the activity forces learners to surface how each profession sees the case, where their roles overlap, and when one role should step back so another can lead.
That creates a different production burden. Writers need profession-specific review. Faculty need facilitation instructions. Cases need prompts that ask learners to compare perspectives, not collapse them into one generic answer. Outcomes teams need measures that capture team behavior, not just knowledge gain.
The question to ask before labeling an activity interprofessional: where, exactly, will learners learn with, from, and about each other?
The global CME signal was narrower and provider-sourced, but it was concrete. In an Alliance Podcast discussion, the argument was that U.S. or Western European education cannot simply be exported, translated, and distributed. Local partners need to be involved from the needs assessment stage; local-language literature, practice environment, patient journey, learner access, and platform behavior all affect whether education will work (source).
For providers pursuing multi-country programs, the warning is easy to underestimate. A global proposal can look efficient on paper while hiding several different learning markets inside one grant. The same clinical gap may have different root causes across countries. The same digital format may be trusted in one setting and ignored in another. English-language content with subtitles may reach international conference regulars while missing community-based clinicians.
The implication is not to make global CME slower for its own sake. It is to move local validation earlier, before the content architecture is locked. Needs assessment should document not only the clinical gap but also where learners already go for education, which partners have audience trust, what language will support participation, and what outcomes can realistically be measured.
The operating question: would a local partner recognize the activity as built for their learners, or merely adapted after the fact?
The week’s through-line was not novelty; it was discipline. Clinicians and educators were pointing to the same failure mode from different angles: CME often asks content to carry work that only design can do. The highest-leverage move this quarter is to choose one activity where the format is currently passive, name the learning behavior the provider wants to see, and redesign the session so learners must practice that behavior before the evaluation form appears.
Sources

Practicing clinician thread details CLEVR components and reports improved learner participation when small-group discussion replaces panels.
"At #ASCO25 @RManochakian @MayoClinicFL @MayoClinic @MayoCancerCare emphasized the importance of aligning educational content with the full chain of education – Who, Why, Where, What & How – while integrating core learning science principles (CLEVR): ✔️ Contrasting Cases ✔️ Listening & Participation ✔️ Elaboration ✔️ Visualization ✔️ Repetition 💡 Goal? Deliver high-impact knowledge with less cognitive overload"
Additional clinician posts link CLEVR to reduced overload in dense oncology sessions and call for faculty development on the education chain.
"#ASCO25 education # CLEVR @OncBrothers Beautifull presentation @RManochakian 👏"
Organization voice articulates the gap between co-located panels and true collaboration and names the minimal structural elements required.
Organization voice stresses that education fails without a robust, geography-specific needs assessment that includes local literature, and defines the required collaborative process.
Faculty discuss the ASCO Guidelines Assistant and the need for responsible-use training.