Live, Social Formats and Clinician-Built AI Tools Reset the Bar for CME
Clinicians are building custom AI tools for workflow tasks and praising live social formats, signaling that CME must advance from literacy modules to practical customization skills and experiential design.
Clinician conversation this week pointed to a gap between how many CME programs are packaged and how some learners are already choosing to learn and work. The public signal was narrow but clear: live and social formats earned attention, while clinician-built AI tools showed that learners may need practical workflow training more than broad AI awareness.
The strongest format signal was not a new platform. It was clinicians praising learning that felt live, social, and worth showing up for.
One post about a CME course in Puerto Rico highlighted the combination of topic selection, faculty, setting, and attendance, with the concrete phrase: “Standing room only at their annual course in beautiful @PuertoRicoPUR.” Another clinician post pointed to a virtual ASCO Training Program Leaders Town Hall on training innovations, including discussion of the in-training exam and AI-enhanced board review, as a real-time forum rather than a static content drop (source).
For CME providers, the lesson is not that every program needs a destination venue. It is that the surrounding experience—peer presence, discussion, timing, and social visibility—can be part of the educational value. Purely asynchronous delivery may still be essential for access, but it should not be the default answer for every flagship topic.
The question for CME teams: which activities are important enough to merit a live, hybrid, or socially amplified layer, and what accessibility trade-offs need to be documented before making that choice?
The AI signal was newer but strategically more important. Clinicians were not just asking whether AI is useful; they were building tools around specific work problems.
A radiation oncology resident described TRAC, a trial reasoning and analysis companion, as “A web app I developed for augmenting trial understanding and variable visualization with GPT-4o under the hood.” The thread included workflow details such as data fetching, prompt engineering, and JSON schema output. Separately, a note-writing copilot called Osler was presented as local-first in key areas and designed to help polish documentation while keeping clinical reasoning with the clinician (source).
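For readers who want the mechanics, the workflow the TRAC thread describes (fetch trial text, engineer a prompt, constrain the output to a JSON schema) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not TRAC's actual code: the schema fields and prompt wording are invented for the example, and it uses the OpenAI Python SDK's JSON mode.

```python
# Minimal sketch of the "extract trial variables as JSON" pattern
# described in the thread. NOT TRAC's actual implementation: the
# schema fields, prompt wording, and model choice are assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA_HINT = (
    "Return JSON with exactly these keys: population (string), "
    "intervention (string), comparator (string), "
    "primary_endpoint (string), sample_size (integer)."
)

def extract_trial_variables(trial_text: str) -> dict:
    """Ask the model to pull structured variables out of trial text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        # JSON mode guarantees syntactically valid JSON; the schema
        # itself is enforced only by the prompt in this sketch.
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": "You extract clinical trial variables. " + SCHEMA_HINT,
            },
            {"role": "user", "content": trial_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

The teachable point is that the prompt and schema are where the clinical assumptions live; they can be read, questioned, and changed by the clinician using the tool.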
That changes the CME task. A basic module on what AI is will not meet the learner who is already testing prompts, extracting trial variables, or deciding where clinical judgment should remain explicit. We saw a related pattern in an earlier brief on LLM tools reaching clinics before clinicians had evaluation frameworks; this week’s examples push the issue from evaluation into customization and workflow use.
The implication is concrete: AI education should include short simulations where clinicians compare outputs, inspect assumptions, decide what data can be used, and define what the tool is not allowed to do.
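As a purely hypothetical sketch of what such a simulation could look like: give learners a small harness in which the allowed data fields and the reserved tasks are declared up front, then ask them to run tool behavior against it and defend each rule. The field names and rules below are illustrative, not drawn from any real activity.

```python
# Hypothetical exercise harness: learners declare what data a tool may
# use and which tasks stay with the clinician, then check tool behavior
# against both. Field names and rules are illustrative only.
ALLOWED_FIELDS = {"age", "stage", "histology"}    # data the tool may see
RESERVED_TASKS = {"dosing", "final diagnosis"}    # judgment stays human

def review_output(fields_used: set[str], task: str) -> list[str]:
    """Return the rule violations a learner should catch and discuss."""
    violations = []
    leaked = fields_used - ALLOWED_FIELDS
    if leaked:
        violations.append(f"uses disallowed data: {sorted(leaked)}")
    if task in RESERVED_TASKS:
        violations.append(f"performs a reserved task: {task!r}")
    return violations

# Example: a configuration that both leaks an identifier and crosses
# into a task the group agreed to reserve for the clinician.
print(review_output({"age", "mrn"}, "dosing"))
# ["uses disallowed data: ['mrn']", "performs a reserved task: 'dosing'"]
```

The value of the exercise is less the code than the argument it forces: every entry in the allow-list and deny-list has to be justified out loud.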
The week’s useful signal is not that live meetings are back or that AI will remake CME. It is that clinicians are rewarding education that helps them do something: discuss in real time, learn with peers, interpret trials faster, or write clearer notes without surrendering judgment. For CME leaders, the sharper test is whether a program teaches participation and use, or only delivers information. That distinction is becoming harder to ignore.
X posts highlight standing-room-only CME courses in Puerto Rico with strong faculty and appealing settings as highly valued learner experiences.
"know how to do CME courses right! Standing room only at their annual course in beautiful @PuertoRicoPUR. Combining great topics, great speakers, and great setting makes for a fantastic meeting! @MayoMitch @KSternAZ @m_e_nielsen and more!"
Posts reference virtual ASCO town halls on training innovations and weekly #MedEd Twitter chats as effective interactive formats.
"Looking forward to the virtual @ASCO Training Program Leaders Town Hall, 4pm ET on February 24, to speak about how we at @NYULISOM_HemOnc leverage the ASCO in-training exam. Agenda and registration link below:"
Posts detail the TRAC tool for variable extraction/visualization and the emphasis on keeping clinical judgment with the human user.
"Happy to share one of my side projects: TRAC - Trial Reasoning and Analysis Companion. A web app I developed for augmenting trial understanding and variable visualization with GPT-4o under the hood. #AITools"
Osler copilot described as a local-first tool that preserves clinician reasoning while polishing notes.
"Physician friends - I'm excited to release Olser, a copilot for writing medical notes! I built it for my personal use, around two principles - 1) the medical note-writing process should facilitate clinical reasoning 2) notes should be clear, concise, & readable Osler is… https://t.co/eaacPkscOm https://t.co/GV5qArFFxN"