CME Audiences Self-Select Depth When Given Modular Pathways
Earlier coverage of learning design and its implications for CME providers.
Pre-clinical learners equate active learning with recall tools unless safety, tone, and participation norms are made explicit; workplace CME shows the same risk when theory remains decorative.
Pre-clinical learners treat active learning as flashcards and recall drills unless educators explicitly design in safety, tone, and participation norms. Two narrow educator signals this week make the same point for CME: programs often ask for engagement or transfer without confirming that the conditions for either exist. The evidence remains limited to academic podcasts without practicing-clinician corroboration, yet the design fix is low-cost once the misalignment is named.
In one mixed-methods study discussed on Medical Education Podcasts, pre-clinical medical students in the UK and Malaysia overwhelmingly recognized the importance of active learning, but their working definition was much narrower than many educators would assume. The study reported that 94.7% saw active learning as important, yet students’ independent strategies clustered around active recall, flashcards, practice questions, and other high-yield exam tools. The episode also noted that 266 students completed the survey and 25 participated in focus groups (source).
For CME providers, the point is not that flashcards are bad. It is that “active learning” is not a shared term. If a faculty member expects discussion, elaboration, visualization, or applied case reasoning, while the learner hears “efficient recall,” the activity can look interactive while still producing shallow engagement.
The same source also tied participation to practical and emotional conditions: time, exhaustion, teacher tone, group dynamics, confidence in the topic, and fear of embarrassment. That matters for clinicians as much as students. A busy learner may choose the fastest strategy that feels safe and useful unless the activity makes deeper participation both efficient and low-risk. We saw a related measurement issue in an earlier brief on moving beyond knowledge checks toward self-efficacy and real-world impact: if the goal is transfer, the design has to create conditions beyond correct recall.
The implication is simple: begin active formats by naming what kind of participation is expected, why it matters, and what safety rules govern the room. Otherwise, CME teams may mistake attendance, polling, or Q&A for real engagement.
A second Medical Education Podcasts episode made a parallel argument about workplace learning: theory is often used too lightly. The researchers discussed Communities of Practice, Landscapes of Practice, and Figured Worlds as ways to understand how residents enter rotations, negotiate roles, learn from interprofessional actors, and move across settings (source).
The residency examples came from clinical rotations in Bogotá, Colombia, so this is not broad clinician-market evidence. But the provider implication travels well: workplace CME fails when “context” is treated as a paragraph in the needs assessment rather than a design variable. In the episode’s rotation examples, residents’ learning depended on whether they entered as apprentices or sojourners, how their goals aligned with the clinical community, and whether nurses, respiratory therapists, and other team members helped or hindered integration.
That is directly relevant to interprofessional, multi-site, and workflow-based CME. If the activity assumes every learner has the same role clarity, authority, team access, and implementation latitude, transfer problems will look like learner motivation problems. They may actually be role-negotiation problems.
CME teams should pick the theory before the format. For a workplace program, map the intended behavior against Communities of Practice constructs: Which communities must recognize the learner’s new behavior? Who controls access to the workflow? Where do power, role ambiguity, or local norms block use? If those questions are absent from the design brief, “apply this in practice” is doing too much work.
The week’s strongest message is not “make CME more interactive.” It is to stop treating engagement and transfer as automatic once the right format is chosen. A provider-owned CE podcast this week made a related internal point: CE units are under pressure to explain their value with data, outcomes, trust, and resource arguments (source). That signal is different from the two academic education sources above, and it should not be overstated as clinician consensus. But it reinforces the same operating question for CME leaders: can your team explain not only what the activity covers, but what hidden assumption the design is correcting? If the answer is no, the activity may be relying on learners to supply the missing design work themselves.
Direct analysis of student definitions and barriers shows recall dominance plus safety/tone prerequisites.
Researchers demonstrate how deeper application of sociocultural theories clarifies role negotiation and improves cross-setting transfer.
CE leaders describe data-driven arguments (PARs, outcome surveys, revenue) needed to justify unit resources and counter “anyone can do this” perceptions.