Point-of-Care Chatbots That Turn Clinical Questions Into Accredited Learning
Earlier coverage of AI oversight and its implications for CME providers.
Clinician AI conversations favored ambient scribes and admin help, while accreditor content pointed to ready templates for more active CPD.
Clinician AI discussion this week favored narrow workflow tools over broad generative AI promises. The signal came from ASCO24, so it is oncology-led, but the preferred use cases—documentation relief, administrative support, and caution around direct health advice—are portable across specialties.
At ASCO24, a clinician-shared Gartner Prism discussion placed ambient digital scribes and healthcare administrative assistants in the high-value, high-feasibility tier, while EHR search and summarization and autogenerated patient education materials sat lower in the framework (source). That is a useful distinction for CME teams: clinicians are not asking for another broad tour of generative AI. They are separating tools that remove work from tools that still require more trust, integration, and evidence.
A second practicing oncologist reinforced the boundary from another angle, cautioning that LLMs have potential for health education and access but are “not ready for prime time” for direct health advice (source). Taken together, the implication is not that CME should avoid patient-facing AI education. It is that the first educational layer should be much more specific: where the tool fits, what it should not do, how clinicians verify outputs, and what implementation tradeoffs matter.
This extends an earlier brief on defining the destination before choosing the route: AI education should start with the job clinicians are trying to get done, not the technology category. CME teams should ask whether each AI activity names a concrete workflow decision—scribe adoption, administrative task routing, output review, patient-education governance—or whether it is still teaching “AI” as a generic topic.
The week’s second signal came from provider/accreditor-owned CPD content, so it should not be read as broad clinician conversation. But it matters operationally because it points to ready infrastructure. In a JCEHP Emerging Best Practices in CPD episode, Graham McMahon and David Wiljer discussed the ACCME CE Educators Toolkit as a practical set of supports for small-group learning, case-based learning, reflective learning, PDSA cycles, RE-AIM, Moore’s framework, self-assessment, templates, checklists, and worksheets (source).
The provider implication is straightforward: “active learning” no longer needs to remain a design value that teams endorse but defer. The toolkit discussion frames the barrier as confidence and habit as much as time. For organizations still relying on lectures plus post-tests, the question is not whether they can redesign everything. It is whether one activity this quarter can be rebuilt around a case, a small-group exchange, a reflection prompt, and an evaluation plan that goes beyond attendance or satisfaction.
That matters for AI education too. If clinicians are asking for narrow workflow help, the format should let them rehearse the workflow: compare a scribe note to the encounter, decide what must be verified, identify where administrative automation could fail, or map who is accountable for patient-facing content. CME teams should ask which existing program could be converted from “expert explains tool” to “learner practices the decision.”
The planning question will move from “Do we need AI education?” to “Which workflow deserves education first, and what format lets clinicians practice it?” The stronger CME response is not a larger AI curriculum. It is a tighter match between the tool clinicians are ready to consider and the learning design that helps them use it safely, efficiently, and measurably.
Dr. Joseph McCollom summarizes Dr. Flora's Gartner Prism, which ranks ambient digital scribes and healthcare administrative assistants highest in value and feasibility.
"Dr Flora presents the 'Gartner Prism' measuring potentials for #AI use cases evaluating based on #value #feasiblity as well as what clinicians want out of generative #AI #ASCO24"
Danielle Bitterman cautions LLMs are not ready for direct health advice, reinforcing focus on non-clinical tools.
Earlier coverage of workflow-based education and its implications for CME providers.
"Noticed the new AI Overviews in Google? Check out this @nytimes article on whether you trust it for health advice (I am quoted!) Lots of potential for gen AI/#LLMs for health education and access, but not ready for prime time."
McMahon and Wiljer describe toolkit components (small-group, case-based, reflective learning, rapid reviews, focus groups, QI/equity lenses) and practical worksheets for distributed educators.