Peer Networks Beat Expert Lectures at Reducing Diagnostic Errors, Keynote Recap Suggests
Social-science evidence suggests structured peer networks reduce diagnostic errors more effectively than central-expert lectures; CME programs should audit interaction time and add bias checkpoints for any AI content.
Physician groups that shared information made fewer diagnostic and treatment errors than physicians working alone, according to a social-network keynote summarized this week. The evidence is narrow—one CME educator podcast recap of conference keynotes—but the implication for providers is concrete: learning design has to create repeated peer testing, not just deliver expert interpretation.
The strongest signal came from Damon Centola’s keynote on social networks, summarized by Write Medicine. The recap described a study in which physician groups working through medical scenarios made significantly fewer errors when they shared information and worked together than when physicians worked alone.
The point for CME providers is not simply “add discussion.” It is that behavior change may depend on multiple reinforcing interactions across a network. That challenges the default conference model in which a high-profile faculty member sits at the center and change is expected to diffuse outward.
This sharpens a thread we saw in an earlier brief on clinicians routing around long CME and repeat training: clinicians value formats that let them process judgment with peers, not just consume polished expertise. This week’s signal adds a mechanism—complex contagion—and a measurable outcome: fewer errors in structured physician groups.
For CME teams, the question is whether flagship programs are built to create those reinforcing interactions. A lecture followed by a short Q&A is not the same as a cohort, cross-departmental case exchange, or peer-feedback loop that continues after the event.
The same Write Medicine episode summarized Immani Shephard’s keynote on AI in healthcare education. The concern was not only that AI can be wrong, but that it can amplify biased data and then reinforce clinician bias through human-AI feedback loops.
The example in the recap was the flawed race-based GFR assumption, linked to delayed interventions, prolonged dialysis, and reduced transplant opportunities. The broader design implication is cross-specialty: AI literacy cannot stop at prompt-writing, tool familiarity, or productivity use cases.
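The recap does not walk through the equation itself, but the mechanism is easy to make concrete. Below is an illustrative sketch, not taken from the episode, of the 2009 CKD-EPI creatinine equation, whose race coefficient was removed in the 2021 refit; the patient values are hypothetical.

```python
# Illustrative sketch: the 2009 CKD-EPI creatinine equation, which included
# a race coefficient (removed in the 2021 refit). Constants are the published
# values; the patient example below is hypothetical.
def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race-based coefficient at issue
    return egfr

# Hypothetical 60-year-old woman, serum creatinine 1.1 mg/dL:
print(ckd_epi_2009(1.1, 60, female=True, black=True))   # ~63: CKD stage 2
print(ckd_epi_2009(1.1, 60, female=True, black=False))  # ~55: CKD stage 3a
```

The same labs land on opposite sides of the eGFR-60 threshold that gates CKD staging and referral, which is the concrete route from a modeling assumption to delayed intervention and reduced transplant opportunity.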
CME activities that include AI outputs should make skepticism visible. Learners need planned moments to ask: What data may be missing? Whose outcomes might worsen? Are social determinants of health being considered? When should AI augment judgment, and when should the clinician pause?
For providers, this changes faculty preparation and outcomes planning. If an activity uses AI, the evaluation should not only ask whether learners understood the tool. It should ask whether they can identify bias risk and apply the tool equitably in clinical decision-making.
The week’s quieter signal is that format is carrying more of the educational burden. Peer networks and bias checks point to the same issue: clinicians need structured ways to slow down, compare interpretations, and test judgment before practice changes. The question for providers is no longer whether interaction belongs in the agenda. It is whether the agenda gives interaction enough structure, repetition, and measurement to change what clinicians do after the session ends.
Key signal: Damon Centola's keynote presented study data showing that group information-sharing reduces diagnostic and treatment errors versus solo work, which translates into concrete design implications for small-group and cross-organizational learning communities.