Clinician-Educator Milestones Give CME Teams a Framework Before They Have to Invent One
Earlier coverage of learning design and its implications for CME providers.
Clinician-educator milestones give CME teams a ready behavioral framework to replace vague faculty-development goals with observable progression steps.
Clinician-educators now have a clearer way to judge their own teaching growth than a generic post-workshop reflection form. The signal comes from a single institutional faculty-development podcast, so it should be treated as emerging, but the framework it describes is concrete enough for CME teams to test now.
The clinician-educator milestones discussed in the University of Louisville’s Faculty Feed episode are built around 5 competencies and 20 sub-competencies, with levels that move from novice to expert. The important detail for CME providers is not the number of categories. It is that the framework describes observable behaviors, skills, and attitudes rather than asking faculty to rate whether a session was useful.
That matters because faculty development often stalls at aspiration: improve feedback, teach more effectively, support learners, lead educational programs. The milestone structure gives educators a way to ask a more operational question: where am I now, what would the next level look like, and what evidence would show movement? The podcast frames the tool as useful for self-assessment, peer assessment, and mapping gaps in health professions education curricula, including areas such as educational theory and practice, well-being, diversity, equity, inclusion, and administration (Faculty Feed).
The caveat is important: this is an educator-leadership source, not a broad independent clinician conversation. The milestones are also described as explicitly not for accreditation, not a grade, and not a job-retention tool. That boundary is part of their usefulness. CME teams should resist turning them into another compliance checklist and instead use them to make faculty growth more specific.
A useful starting point is to embed one sub-competency into an existing faculty-development workshop. For example, a feedback workshop can begin with a short self-assessment against the relevant milestone language, then end with a written plan for moving one level forward. That connects to a pattern we covered in an earlier brief on feedback that teaches learners how to improve themselves: feedback education works better when learners and teachers can see the next step, not just receive a judgment.
The provider implication is straightforward: use the milestones as a shared language for progression. If a workshop, conference session, or self-directed module cannot name the educator behavior it is trying to move, the milestone framework can expose that weakness before the activity is built.
The change this week is not that faculty-development expectations suddenly became mandatory. It is that CME teams have a portable behavioral framework they do not have to invent from scratch. Pick one sub-competency, make the expected behavior visible, and ask faculty to plan one step of growth. If that works, the same structure can support broader curriculum audits and more credible outcomes measurement without pretending every educator needs to reach expert level in every domain.
Source: Faculty Feed (University of Louisville). Describes the 5-competency, 20-sub-competency structure, level-1-to-5 behavioral descriptors, intended uses for reflection and action planning, and the explicit non-accreditation purpose.