Longitudinal Assessments Quietly Reshape What Clinicians Expect From Certification-Linked CME
Earlier coverage of learning design and its implications for CME providers.
A single-source critique of medical training accountability and apprenticeship raises the question of whether CME can teach professionalism through discussion alone or only through supervised practice.
A single educator discussion this week highlighted accountability shortfalls in medical training, including normalized “fake sick” calls and insufficient supervised responsibility. The narrow source base makes these findings directional only, yet the critique directly questions whether professionalism and clinical judgment can be taught through discussion alone.
The discussion described a training culture in which calling in “fake sick” was treated as routine among some trainees, while private practice showed the opposite pattern of overwork. The core issue was not attendance but the carry-over of weak accountability into poor handoffs, resentment, and readiness gaps.
For CME providers, the practical reading is that professionalism education weakens when it is reduced to affirming shared values. It strengthens when learners confront concrete tradeoffs under pressure: when to show up, how to hand off work, how to protect colleagues, and how to balance self-care with patient care. The source discussion is uncorroborated elsewhere this week, but it names a clear design task for continuing education.
Implication: activities on professionalism should require learners to navigate specific consequences rather than affirm norms already accepted.
The same discussion argued that removal of low-value “scut” work has not been replaced by meaningful responsibility, iterative feedback, or bedside apprenticeship. Trainees may therefore receive less supervised practice making decisions, relating to patients, and learning from errors.
This matters for CME because many current formats still assume difficult clinical behaviors can be conveyed through lectures, cases, or discussion. Empathy, judgment, and decision-making require doing, feedback, and repetition. We saw a related tension in an earlier brief on AI empathy and factual guardrails: visible performance on one dimension does not guarantee clinical judgment.
For CME teams, the question is whether faculty development is core infrastructure. If the goal is better bedside reasoning, programs need facilitators who observe, interrupt, and coach—not only experts who explain the right answer after the fact.
This week does not demonstrate broad clinician demand. It surfaces a harder question for CME leaders: if upstream training leaves gaps in accountability and supervised judgment, can continuing education address them with formats built for information transfer? The answer depends less on topic selection and more on whether CME can create settings where clinicians practice responsibility, receive feedback, and observe the consequences of their decisions.
The underlying discussion framed it starkly: trainees calling in “fake sick” when healthy reflects cultural rot; removing scut work without adding real accountability produces poor decision-making and unpreparedness for practice; and private practice shows the opposite problem of overwork.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.
Request a demo