Clinicians Now Demand AI Training That Names Its Own Failure Modes
Earlier coverage of workflow-based education and its implications for CME providers.
Oncology clinicians are adopting 2-15 minute modules, multi-stream audio, and real-time AI summaries to manage FOMO and fit learning into crowded schedules.
Clinicians at major oncology meetings are turning to 2-15 minute modules, multi-stream headset audio, and real-time AI summaries to manage FOMO and fit learning into crowded clinical days. The examples are oncology-led, but the provider implication is broader: professional learning must reduce friction without assuming unlimited attention.
At WCLC25, one clinician praised a setup that let attendees sit before multiple screens and choose audio through headsets: “You can sit in front of these screens and listen to simultaneous presentations w your own headset.” The point was not novelty for its own sake; it was a direct response to the impossible choice created by concurrent sessions at large meetings (source).
A separate surgical-oncology education discussion described the same pressure from the provider side. The Society of Surgical Oncology’s platform update emphasized mobile access, fewer clicks, search by content type and credit status, and micro-learning activities “between two to about 15 minutes in length” for use during clinical downtime (source). This is provider-owned content, but it is corroborated by clinician posts from a live conference environment rather than standing alone as a platform announcement.
The lesson for CME teams is that conference access is no longer just a registration, room, and archive problem. Learners are trying to create their own pathways through dense meetings: listen across streams, search by time available, return later for credit, and use AI tools or summaries to decide what deserves deeper attention. Another WCLC25 clinician post on an early-career session highlighted AI tools for presentation work and the need for the right prompts, which is a reminder that clinicians are already blending meeting content with digital helpers (source).
For providers, the question is whether the event is still designed around full-session attendance as the default. If it is, the archive may be preserving content but losing the learner’s actual workflow.
The week’s AI documentation signal was narrower: one expert interview on discharge summaries, not broad clinician conversation. Still, it is useful because it ties AI adoption to a concrete workflow pain point. The clinician described the burden plainly: “But it also takes an hour and a half of my time when I could be seeing other patients to just write this thing.” (source)
The interview’s main point was not that LLM-generated summaries are ready to replace clinicians. It was that draft quality can be strong enough to pilot, while review remains essential. The discussion covered coherence, conciseness, hallucinations, omissions, potential harm, physician editing before signature, and scalable validation approaches such as using multiple models to check an output before the final human review.
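The interview did not walk through an implementation, but the shape of that last idea (several models checking a draft before the human sign-off) is easy to sketch. The following Python sketch is illustrative only: `query_model` is a hypothetical stand-in for a real LLM API, and the model names, check questions, and quorum threshold are assumptions, not details from the interview.

```python
# Illustrative sketch of "multiple models check an output before the
# final human review." query_model() is a placeholder for a real LLM
# API call; model names, questions, and quorum are assumptions.

from dataclasses import dataclass

CHECKS = {
    "hallucination": "Does the draft state anything not supported by the chart?",
    "omission": "Does the draft omit any clinically significant event or medication change?",
    "harm": "Could any statement in the draft plausibly lead to patient harm if acted on?",
}

REVIEWER_MODELS = ["model-a", "model-b", "model-c"]  # hypothetical names


def query_model(model: str, prompt: str) -> str:
    # Placeholder: substitute a real LLM API call here. Returning "no"
    # keeps the sketch runnable without any provider wired in.
    return "no"


@dataclass
class JuryResult:
    flags: dict[str, int]      # check name -> number of models that flagged it
    needs_escalation: bool


def jury_review(chart_facts: str, draft_summary: str, quorum: int = 2) -> JuryResult:
    """Ask several models to check one draft; escalate if a quorum flags any check."""
    flags = {name: 0 for name in CHECKS}
    for model in REVIEWER_MODELS:
        for name, question in CHECKS.items():
            prompt = (
                f"Chart facts:\n{chart_facts}\n\n"
                f"Draft discharge summary:\n{draft_summary}\n\n"
                f"{question} Answer yes or no."
            )
            if query_model(model, prompt).strip().lower().startswith("yes"):
                flags[name] += 1
    return JuryResult(flags=flags, needs_escalation=any(v >= quorum for v in flags.values()))


result = jury_review(chart_facts="...", draft_summary="...")
print(result.flags, result.needs_escalation)
```

Note the design constraint the interview insisted on: the jury never signs anything. It only sorts drafts into routine versus flagged ahead of the mandatory physician review.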
That echoes an earlier brief on clinicians building their own AI tools, but the documentation example is more operational. CME teams should not treat AI documentation education as a generic “how to use the tool” session. The learning object should be the handoff: what the model drafts, what the clinician must verify, what error types matter, and when a draft should be escalated or rejected.
The concrete question for CME teams: can an activity make the review behavior observable, or does it stop at showing that the draft looks fluent?
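One hypothetical way to get there is to treat the review itself as structured data rather than a free-form read-through. The sketch below is an invented illustration, not a description of any existing CME platform; the error types come from the interview (hallucinations, omissions, potential harm), while the class and field names are assumptions.

```python
# Hypothetical record of what a learner actually checked before
# disposing of an AI draft. Error types mirror those named in the
# interview; everything else is illustrative.

from dataclasses import dataclass, field
from enum import Enum


class ErrorType(Enum):
    HALLUCINATION = "statement unsupported by the chart"
    OMISSION = "clinically significant detail missing"
    POTENTIAL_HARM = "statement that could mislead downstream care"


class Disposition(Enum):
    SIGN_AS_IS = "sign"
    EDIT_THEN_SIGN = "edit"
    ESCALATE = "escalate"
    REJECT = "reject"


@dataclass
class DraftReview:
    draft_id: str
    checks_performed: set = field(default_factory=set)  # ErrorType members
    errors_found: set = field(default_factory=set)      # ErrorType members
    disposition: Disposition | None = None

    def is_complete(self) -> bool:
        # A review counts only if every error type was explicitly checked
        # and the learner committed to a disposition.
        return self.checks_performed == set(ErrorType) and self.disposition is not None
```

An activity built on a record like this can score whether a learner explicitly checked for omissions, not just whether the signed summary read fluently.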
The common thread this week is time burden. Clinicians are not rejecting education or documentation support; they are rejecting formats that make them do the organizing work themselves. CME teams should audit where they still assume a linear learner: one room, one session, one full recording, one tool demo. The stronger model is bounded, searchable, reviewable, and honest about where clinician judgment still has to enter.
Educators detail mobile-first platforms and micro-learning modules (2-15 min) as a direct response to clinician scheduling constraints.
Clinician describes using headsets for simultaneous multi-stream listening to manage FOMO at large meetings.
"#WCLC25 @IASLC has cracked the code on how to listen to 9 presentations at once during a medical conference. You can sit in front of these screens and listen to simultaneous presentations w your own headset. Brilliant! No more #FOMO #lcsm"
Clinician highlights agentic-AI tools that summarize abstracts in real time during conferences.
"Great early career session @IASLC #WCLC25 on effective delivery presentations 👏know your audience 👏tell your story 👏use AI tools but need right prompts Thx AI links @PrelajArsela Will try these out"
Clinicians report LLM drafts equivalent or superior in coherence/conciseness but stress hallucination risks and mandatory physician review plus LLM-as-jury validation.