Clinicians Are Treating MOC as an Antitrust Problem, Not a Points Problem
Earlier coverage of learning design and its implications for CME providers.
Equity in assessment is not a single design goal. CME teams need to decide which version they mean before they choose methods or metrics.
Assessment equity surfaced this week as a choice CME teams have to make before design begins. A fairness-oriented activity might improve standardization, reduce rater bias, and strengthen validity evidence; a justice-oriented approach might instead ask whose standards are being used, who holds decision power, and whether the assessment system itself should be rebuilt. This is a narrow signal from a single academic audio source, not broad clinician corroboration, but the design implication is concrete.
The useful point in this week’s education conversation is that “equity” in assessment can point to three different orientations: fairness, inclusion, or justice. In the Medical Education audio paper, fairness is framed around reducing bias and construct-irrelevant variance; inclusion around accessibility, flexibility, participation, and co-design; justice around power, history, culture, and rebuilding systems with marginalized users in central roles.
Those are not interchangeable labels. Fairness may push a CME provider toward standardized items, rater training, psychometric review, and comparisons across learner groups. Inclusion may push toward accessible formats, learner participation in design, and shared interpretation of assessment data. Justice may push toward questioning who defined the competency, whose knowledge counts, and whether the assessment structure reinforces the disparity it claims to measure.
That matters because CME assessment plans often collapse these choices into generic language: equitable, unbiased, inclusive, accessible. The audio paper’s clearest warning is that each orientation uses different methods and outcome measures. As the paper puts it, “Process equity is the equity of how assessment is done, its design, context, and use.” If the team has not named the orientation, it can end up mixing methods that look aligned on paper but answer different questions.
For CME providers, the implication is less about adopting a new equity vocabulary and more about governance. Before revising a pre/post test, outcomes rubric, case simulation, faculty scoring guide, or learner analytics dashboard, teams should ask: are we trying to make the measure more impartial, make participation more accessible, or challenge the assumptions behind what we measure? We saw a related assessment problem in an earlier brief on ability-based progression: once learning moves from time spent to ability demonstrated, the assessment frame carries more weight. This week’s signal adds that equity claims need the same upfront precision.
The assessment conversation is also being pulled by AI, but this week’s stronger lesson sits underneath the tool question. In a separate educator discussion, a pediatric cardiology educator described AI use as changing what learners can retrieve, summarize, and present, which forces educators to assess critique, defense, and judgment rather than answer production alone (Faculty Feed).
That makes the equity orientation choice more important, not less. If CME teams redesign assessment for AI-era learning without first deciding what kind of equity they mean, they may improve measurement mechanics while leaving the underlying equity claim unexamined. The immediate audit is simple: find one active assessment project and ask whether its equity language, methods, and outcomes are all pointing in the same direction.
Academic discussion defines fairness (psychometric bias reduction), inclusion (accessibility and co-design), and justice (dismantling power structures) as distinct paradigms requiring different outcome measures.
Open source: Pediatric cardiology educator describes learners using AI for differentials and summaries before independent thought, requiring assessments focused on critique, defense of decisions, and nuance navigation.