CME Offices Are Replacing Reaccreditation Fire Drills with Monthly Evaluation Committees
Earlier coverage of accreditation operations and their implications for CME providers.
Clinician discussion centered on undisclosed pharma funding and evaluation systems that prove compliance better than they improve learning.
Clinicians challenged the credibility of educational channels when pharma funding is routed through journals, societies, podcasts, or third-party media without clear disclosure. The week’s examples were oncology-led, but the provider implication is broader: trust now depends on making commercial support and evaluation limits visible before learners are asked to trust the content.
The sharpest discussion this week was not about whether industry money exists in medicine. It was about whether clinicians can tell where that money enters the learning ecosystem.
One oncology clinician argued that many journals and societies receive pharma money, then pointed to a less visible route in podcasts: “But many oncology podcasts are selling to third party sites which launder pharma money to pay the hosts.” The concern, raised in a public X thread, was not just sponsorship. It was undisclosed distance between the funder, the platform, and the person delivering the message.
A separate clinician thread widened the frame beyond individual online personalities: “But COI does not only exist in blogs, podcasts, and it isn't limited to "influencers".” The same post named academic journals, conference presentations, and hospital or industry media as places where transparency can be uneven (source).
For CME providers, this extends an earlier brief on why disclosure-only COI processes can fall short. The issue is no longer only whether the faculty disclosure slide exists. It is whether learners can understand the funding path before they commit attention to the activity.
The concrete question: can a clinician tell, within seconds, whether the program is independent, commercially supported, society-partnered, or built on third-party media with its own funding stream?
The second credibility pressure came from inside education operations. A Medical Education audio paper on accreditation contexts described evaluation leads who reported limited shared understanding of evaluation, undervalued evaluator expertise, and routine conflict between proving compliance and improving programs (source).
The source is a single academic podcast based on work across academic health science institutions in Australia, Canada, Hong Kong, and the US, so CME teams should not treat it as a direct map of every US CME setting. Still, the pattern is familiar enough to matter: accreditation can pull resources toward outcome verification, positive documentation, and selective reporting, while deeper implementation questions get postponed.
That matters because trust is not only built through independence policies. It is also built through evidence that the provider is willing to learn from what did not work. If evaluation is designed mainly to satisfy an accreditor, it may produce clean files and weak insight.
The concrete question: where does your evaluation process create protected space to identify failure modes, implementation barriers, or learner friction that would be inconvenient in an accreditation narrative but essential for improvement?
This week’s common thread is credibility under constraint. Commercial funding can be legitimate, and accreditation reporting is necessary. But both become trust problems when the operating system hides too much: who paid, who mediated the relationship, what was measured, and what the provider chose not to show.
CME teams do not need louder integrity claims. They need cleaner interfaces between funding, content, and evaluation. If a learner has to investigate the money trail, or if an evaluator has to avoid the uncomfortable finding, the trust problem has already started.
Documents widespread pharma payments to journals and societies plus third-party laundering specifically in oncology podcasts.
"I would never but most medical journals receive money from Pharma. NEJM, JAMA and most professional societies too."
Corroborates undisclosed funding patterns and calls for transparency across educational channels.
"This is ever more important. But COI does not only exist in blogs, podcasts, and it isn't limited to "influencers". U find it in academic journals, conference presentations, and hospital/industry media content too. Call it out 👏 and boost those honest humans!"
Earlier coverage of outcomes planning and its implications for CME providers.
Evaluation leads report absence of coherence on aims, exclusion from decision tables, and routine conflicts between proving compliance and surfacing real dysfunction.
Demonstrates 60% faster thrombectomy times and 300% functional independence gains alongside clinician needs for new interpretation and shared-decision skills.
Highlights risk aversion, unstructured assessment challenges, and patient demand for AI tools.
"Very interesting analysis of concordance between @UW #GynOnc tumor board and @ChatGPTapp treatment recommendations by @RiosDoriaMD Great work! #SGOmtg #powerofsharedpurpose"
Clinicians describe relinquishing boards rather than complying with repeated rule changes and costs.
"Not good enough. The people want NO LKA/MOC. CME is enough. @AaronGoodman33"