Clinician and writer discussions highlighted immediate needs around AI governance in content creation, structured faculty support, and credible recertification assessments.
Generative AI is now being tested inside CME and medical writing workflows for literature synthesis, gap analysis, bias checks, translation, titles, and ideation. The strongest signals came from CPD and medical writing sources, with a separate physician thread pointing to AI literacy in medical education. The broader implication is portable across specialties: CME teams need rules for how content work is changing, not just opinions about whether AI is good or bad.
Medical writers and CME professionals were not talking about AI in the abstract. In a CME-focused Write Medicine discussion, AI was framed as a tool for testing counterarguments, checking for missing perspectives, identifying bias, generating titles, and supporting parts of literature work—while repeatedly emphasizing expert judgment and human supervision (Write Medicine). A separate physician post on AI in medical education pushed the same issue from the teaching side: physicians will need to understand how to use and teach with these tools, not treat them as a novelty (Justin Dubin, MD).
For CME providers, the immediate work is procedural. If a writer uses AI to draft title options, test a gap statement, translate patient-facing language, or look for bias, what must be documented? When does AI use belong in disclosures or acknowledgments? Who validates outputs against the literature and against the activity’s educational purpose?
The implication is simple: AI literacy is no longer only a faculty-development topic. It is part of editorial operations, accreditation risk management, and learner trust. CME teams should decide where AI is allowed, where it is prohibited, what must be logged, and who signs off before AI-assisted work reaches faculty or learners.
Faculty-development sources this week emphasized a practical tension: clinicians and faculty are expected to perform at a high level while still needing protected space to learn, reflect, and grow. A Cleveland Clinic talk put the distinction plainly: “So in order to grow, we cannot only live in the performance zone.” (Cleveland Clinic) The same discussion pointed to structured mentorship, sponsorship, peer support, and alternative mentorship models rather than relying on informal access to well-connected mentors.
A Faculty Factory conversation added the day-to-day operating layer: saying no, using calendar blocks, protecting writing time, and aligning tasks with priorities rather than treating productivity as task volume (Faculty Factory). This section is supported by a mix of institutional and educator-led sources, with some independent clinician texture; it is best read as an emerging faculty-development pattern, not a sweeping clinician consensus.
For CME providers, the design question is whether leadership and faculty-development offerings are built for clinicians’ real time constraints. Mentorship curricula need facilitation guides, boundaries, sponsor roles, and peer-learning structures. Productivity offerings need to help faculty decide what not to do, not simply teach them how to do more.
The sharpest clinician frustration this week centered on ABIM MOC and Longitudinal Knowledge Assessment questions. Hematology/oncology and internal medicine physicians described questions as outdated, inaccurate, irrelevant, or excessively time-consuming—especially when maintaining multiple certifications. One physician wrote, “My answer to a question was marked wrong because they didn’t update with positive randomized trial reported in 2023.” (Amer Zeidan, MBBS, MHS) Another thread described the burden of maintaining internal medicine, hematology, and oncology certifications while also completing CME and MOC requirements (David Hedrick).
This is not a general referendum on every assessment format. It is a specialty-heavy, physician-channel signal about credibility. But the provider implication is broader: when clinicians experience required questions as stale or disconnected from patient care, remediation content has to work harder to earn trust.
CME teams that support MOC should treat question quality as part of learner experience, not a back-office detail. Currency checks, rapid updates after major evidence changes, transparent rationales, and targeted remediation matter because assessment is not just measurement; it is a visible claim about what the profession considers worth knowing.
The common thread is not a need for more content. It is the need for tighter operating rules around how professional learning is created, protected, assessed, and defended. A nursing-focused accreditation discussion this week made a related point: accredited education has to communicate outcomes, resource stewardship, and organizational value in terms leaders understand (Let’s Chat Accredibility). That source is single-source and nursing-centric, but the management lesson travels: CME teams should be able to explain not only what they produced, but why the process deserves trust.
Detailed clinician discussion of prompt engineering for literature synthesis and bias detection in CME writing, with emphasis on required human ethical review.
Exploration of AI disclosure practices and the limits of pattern detection versus expert judgment in accredited content production.
Independent clinician thread on teaching AI literacy to faculty and medical students while stressing transparency and prompt validation.
"Honored to be part of this fantastic AUANews Special AI Edition. I don’t think there is any question that AI is here to stay and as we progress in our utilization and knowledge of AI, the conversation not only becomes how do we use it, but also how as physicians should we TEACH…"
Cleveland Clinic institutional perspective on structured mentorship linked to academic promotion success and retention.
Discussion of peer learning circles and daily reading habits as sustainable faculty growth practices.
Clinician thread on calendar blocking, saying no, and sponsorship as tools for protecting capacity and advancing others.
"30 minutes a day can change your life (from personal experience)! At the start of the year 2000, while I was studying at medical school, I was complaining to my friend and dear brother Dr. Nasser Al-Hajri, from beloved Kuwait, about the amount of information I needed to absorb. He advised me, saying: It's simple. Every night, half an hour before sleep, read one page…"
Physicians detail outdated trial data and lack of clinical relevance in current MOC questions.
"Addendum:"
Complaints about excessive time burden when maintaining multiple certifications.
"Anything else @ABIMcert should know? @AaronGoodman33"
Growing sentiment that MOC functions as box-checking rather than meaningful learning.
"I am extremely frustrated as many @ABIMcert questions are not only useless but some provide factually wrong/outdated information. My answer to a question was marked wrong because they didn’t update with positive randomized trial reported in 2023. They are teaching wrong answers!!"
NPD leaders stress framing accredited education via the nursing process, measurable outcomes, and executive-summary communication to secure leadership buy-in.