Competency-Based Training Is Moving From Checkboxes to AI-Coached Portfolios
Training incentives still reward abstract counts and rote recall, while AI-augmented practice demands rigor, judgment, and adaptability.
Research training is rewarding abstract volume over rigorous scholarship, while recall-heavy assessment is poorly matched to AI-augmented practice. The strongest signal this week came from a Hem/Onc-led discussion of abstract-count pressure, with a parallel clinician-educator conversation on why current metrics must shift toward oversight and adaptability.
A Hem/Onc-led clinician discussion described a training culture where abstracts can become a fast currency for applications and promotion rather than a step in serious inquiry. Samer Al Hadidi’s thread put the critique plainly: “Trainees and junior faculty compete with abstract counts instead of meaningful research.” Replies in the same discussion pressed for separating conference abstracts from peer-reviewed journal articles in ERAS-style reporting and floated the abstract-to-publication ratio as a better measure.
The specialty mix matters: these examples are oncology-led. But the underlying incentive problem is portable across GME and academic medicine. If a system rewards visible scholarly output before it rewards rigor, trainees learn how to produce artifacts before they learn how to finish meaningful work.
For CME providers, this is not just a faculty-development topic. It affects conference strategy, mentorship design, and critical-appraisal education. We saw a related concern in an earlier brief on clinicians calling out research pollution; this week’s discussion moves that concern upstream, into how early-career clinicians are taught to define scholarly success. CME teams should ask whether their conference-adjacent learning and faculty-development programs make completion, methods, mentorship, and appraisal more visible than abstract volume.
A separate clinician-educator conversation argued that AI changes what medical education should measure. The core point was not that clinicians need to become coders. It was that AI can retrieve information, recognize patterns, process multiple data types, and surface options at a scale that makes pure memorization a weaker marker of readiness.
In the YouTube discussion, AI was framed around pattern recognition, predictability, multimodal inputs, hallucination risk, and the continuing need for human oversight. The companion podcast episode made the education implication explicit: future practice may diverge sharply from what current trainees are being assessed on, especially when assessment privileges retrieval and recall over adaptability.
This signal comes from podcast and YouTube sources, so it should be treated as clinician-led commentary rather than broad independent consensus. The oncology examples are illustrative, not a boundary on where the problem applies. For CME providers, the implication is clear enough: AI education should not stop at tool awareness. Assessment should require learners to compare AI outputs, identify use-case limits, name what needs verification, and explain when a human should override or narrow the model’s suggestion.
The common thread is measurement. Abstract volume and memorized recall are easy to count, but they can teach the wrong habits if they become the center of the learning system. This week’s clinician conversation asks CME teams to look harder at what their formats and assessments reward: finished scholarship over visible output, and adaptable judgment over answer retrieval.
Practicing clinicians and educators describe pressure to prioritize abstract quantity, resulting in inflated CVs and reduced conference value, with an explicit call to separate abstracts from publications in ERAS/NRMP reporting.
"Check our viewpoint published @JAMA_current #MedEd Abstract Factory—Research Culture Harming Medical Education The "abstract factory" is destroying medical education. Trainees and junior faculty compete with abstract counts instead of meaningful research. Result: inflated CVs, diluted conferences. We shouldn't celebrate this—you don't need publications to be a great doctor. ➡️ @utswcancer @rajshekharucms @HiraSMian @ManniMD1 @HemOncFellows @ASCOTECAG"
Details how AI handles pattern recognition and multimodal data better than humans, necessitating new metrics for problem-solving and creativity.
Argues that future practice will be at odds with current recall-heavy training and that adaptability plus human oversight must be taught explicitly.