A SACME discussion showed CME leaders turning overall program evaluation into a standing operating rhythm, not a reaccreditation scramble.
CME leaders used a SACME discussion this week to describe overall program evaluation (OPE) as a recurring operating rhythm rather than a last-minute reaccreditation task. The signal is narrow and provider-side, but the operational details are useful: monthly OPE committees, Qualtrics repositories, LMS data, Asana and Smartsheet dashboards, annual reports, retreats, focus groups, and continuing friction with staffing and free-text analysis.
In the SACME National Coffee Chat on overall program evaluation, academic CME leaders described a move away from episodic self-study preparation toward standing OPE processes. One program described a monthly committee that includes compliance, RSS, quality improvement, strategy, outcomes, and leadership roles. Another described rolling activity-level data into a dashboard, pairing annual evaluation with department-chair meetings and focus groups.
The most useful part of the conversation was not the tool list. It was the operating model underneath it. Evaluation data were being used to set training agendas, inform annual retreats, identify outlier activities, prepare annual reports, and decide what the office should change next. That connects directly to an earlier brief on patient-outcomes pressure in accredited education: if CME is expected to show stronger outcomes, the office needs a repeatable way to collect, interpret, and act on evidence before reaccreditation season arrives.
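To make that operating model concrete, here is a minimal sketch of rolling activity-level evaluation rows up into a dashboard summary and flagging outlier activities. It assumes a hypothetical CSV export from an LMS; the column names and the two-standard-deviation threshold are illustrative assumptions, not anything the chat prescribed.

```python
import csv
import statistics
from collections import defaultdict

def summarize_activities(path: str) -> dict[str, dict]:
    # Hypothetical LMS export: one row per learner evaluation, with an
    # activity_id and a 1-5 overall rating. Column names are assumptions.
    ratings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["activity_id"]].append(float(row["overall_rating"]))

    # Roll learner-level rows up to one dashboard row per activity.
    summary = {
        activity: {"n": len(vals), "mean": statistics.mean(vals)}
        for activity, vals in ratings.items()
    }

    # Flag outliers: activities whose mean rating sits more than 2 SD
    # below the program-wide mean of activity means (threshold assumed).
    means = [s["mean"] for s in summary.values()]
    if len(means) > 1:
        program_mean = statistics.mean(means)
        program_sd = statistics.stdev(means)
        for s in summary.values():
            s["outlier"] = s["mean"] < program_mean - 2 * program_sd
    return summary
```

A monthly committee could review only the flagged rows rather than rereading every activity file, which is the difference between a dashboard and a data dump.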
The friction is just as important. In SACME’s live polling, 41% of respondents said they did not have a staff person assigned to OPE work. Only 19% reported using a dashboard, while 72% pointed to data collected from an LMS. Later polling showed a split between annual OPE and reaccreditation-driven OPE, and an almost even split on whether OPE data are used to drive beneficial change or to secure resources. Those figures should not be treated as nationally representative, but they make the problem concrete: many offices are trying to professionalize evaluation without a dedicated evaluation function.
For CME providers, the implication is to treat OPE as workflow design. A useful OPE process defines the committee, the meeting rhythm, the minimum data set, the repository, the dashboard owner, and the decision points. It also says what not to collect. The question for CME teams is simple: if a data field cannot change a decision, support a report, or improve an activity, why is it in the evaluation?
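One way to make that test operational is to write the minimum data set down as a literal mapping from each field to the decision, report, or improvement it supports; a field with no entry fails the test. A minimal sketch, with field names and decision descriptions that are illustrative assumptions rather than anything prescribed in the discussion:

```python
# Hypothetical minimum data set: every collected field must name the
# decision, report, or improvement it supports. Entries are illustrative.
MINIMUM_DATA_SET = {
    "activity_id":     "joins evaluation rows to the activity file",
    "overall_rating":  "flags outlier activities for committee review",
    "practice_change": "feeds the annual report's outcomes section",
    "format":          "informs next year's training agenda",
}

def audit_fields(collected: set[str]) -> set[str]:
    # Fields being collected that map to no decision are candidates to drop.
    return collected - MINIMUM_DATA_SET.keys()

# Example: audit_fields({"activity_id", "overall_rating", "venue_rating"})
# returns {"venue_rating"}, a concrete drop list for the committee.
```

Running the audit against what an evaluation form actually collects turns the principle into a drop list the committee can act on at its next meeting.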
The useful shift is from proving that evaluation happened to showing how evaluation changes the office. Monthly committees, dashboards, and annual retreats are not glamorous, but they are the machinery behind credible outcomes reporting. The CME teams that get this right will not simply have cleaner reaccreditation files; they will know sooner what is working, what is noise, and where their limited staff time should go.
Source: SACME coffee-chat recording in which CME directors from Boston University, Stanford, SIU, and OHSU detail monthly OPE committee composition, Qualtrics/LMS data aggregation, Asana/Smartsheet task dashboards, the 41% zero-dedicated-staffing poll result, and challenges analyzing free-text feedback.
Earlier coverage of outcomes planning and its implications for CME providers.
Earlier coverage of accreditation operations and its implications for CME providers.