A 7-Step Needs-Assessment Workflow That Turns Insight Into Tight Agendas
Earlier coverage of AI oversight and its implications for CME providers.
Clinician discussion this week points to an AI education problem CME cannot solve with another one-off tool demo.
Clinician discussion this week put a sharper point on a practical AI education problem: trainees may be moving faster than the faculty who are expected to supervise them. The signal is strongest in oncology and medical education settings, but the provider implication is broader: AI learning pathways need to be tiered by role, responsibility, and workflow exposure.
One radiation oncologist reflecting on AIMed25 framed the asymmetry plainly: “Many students & residents now are more fluent in AI tools than their faculties” (source). That is a different problem from low AI awareness. It means some learners may be experimenting in real clinical or pre-clinical workflows while their supervisors are still trying to understand the tools, risks, and boundaries.
A second thread made the faculty-side problem concrete: “We do not know what these tools actually do, how they make decisions, or how we can check their safety and limits” (source). The requested fix was not another overview of generative AI. It was structured training, protected time, and closer work between clinicians, data scientists, and educators.
For CME providers, the lesson is to stop treating AI education as a single audience. Trainees need foundations, interpretation, responsibility, and leadership competencies. Faculty need enough fluency to coach, challenge, supervise, and escalate safely. A conference reflection from the same week also emphasized trust, regulatory compliance, model reliability, and clinician-data scientist learning as part of the AI conversation (source).
That connects directly to a pattern we saw in an earlier brief on simulation faculty needing shared AI governance rules, not more tool demos. The next step is curriculum architecture: separate tracks for faculty upskilling and trainee development, joined by shared language around ethics, hallucination risk, safety checks, and when to seek help. The operating question for CME teams is simple: are you teaching a tool, or are you teaching a supervised clinical behavior?
The risk is not that every clinician will adopt AI at the same pace. They will not. The risk is that CME programs keep offering flat, one-size-fits-all AI education while clinical teams develop uneven fluency inside the same learning environment. If this pattern holds, the providers with an advantage will be the ones that can map AI education to role, stage, and responsibility. Faculty development and trainee education should not be separate silos, but they should not be identical courses either.
A practicing clinician's thread explicitly describes trainee AI fluency exceeding faculty fluency and lists required curriculum elements, including safety/limits checking and ethics.
"🎓 How should we teach medicine in the age of AI? Day 2 of #AIMed25 focused on The Future of #AI in #MedicalEducation. Many students & residents now are more fluent in AI tools than their faculties. We need to help educators to guide trainees not just to use AI, but to use it safely, responsibly, and ethically. How admissions and selection processes may need to evolve. 🔑 It’s time to rethink medical curricula from the ground up, bringing #AIEducation into the earliest stages of training. A major highlight came from the International AI in Medical Education Working Group at the @UofT, led by Dr. Muhammad Mamdani, who presented a thoughtful framework for integrating AI into medical training. The day included a “Shark Tank”-style showcase of new ideas in #AI. #Innovation in #healthcare is growing. 💡"
A second, independent thread calls for curriculum redesign from the earliest training stages through faculty development and collaboration with data scientists.
"How can we close the growing knowledge gap between physicians and the AI tools that shaping our future clinical practice? Many of us can see how fast #AI is changing clinical care, but most of us were never taught the basics. We do not know what these tools actually do, how they make decisions, or how we can check their safety and limits. Recent surveys from the US and Europe keep showing the same thing: low confidence, no formal training, and real concerns about accuracy, ethics, and regulation. 🛟 We need structured training, protected time to learn, and real opportunities to work closely with data scientists and educators. The knowledge gap is right in front of us. We need to close it. #AIEducation"
A third thread highlights demand for responsibility and leadership competencies in addition to foundational literacy.
"Day 1 of #AIMed25: a powerful keynote that set the tone for a full day of learning and connection. Great sessions on building trust in #AI, regulatory compliance, federated learning, evidence-based implementation, model reliability, and a lively #ChatGPT vs. #DeepSeek face-off that showcased AI’s ability to answer clinical questions. It was exciting to see #physicians becoming engaged in every aspect of #AI in #healthcare. Physician trust is increasing. Physicians, data scientists, and innovators learning from each other. 🤝 For me the highlight was an outstanding subspecialty-focused discussion in #oncology with many new ideas and reflections. We learned about the work the @AmericanCancer is doing. The future of #oncology could become multimodal and multi-omics, with functional and integrative medicine included in every aspect of care to make #cancer treatment truly whole-patient care."