Hidden Funding and Weak Evaluation Threaten CME Trust
Clinician discussion this week tied $2.46B in payments and undisclosed endorsements to variable digital-resource quality and reinforcement effects, showing why visible independence and quality cues now matter for learners.
A $2.46B physician-payment figure and examples of undisclosed drug and device endorsements put education trust back in the learner’s line of sight this week. The sharpest examples came from oncology and related social-media discussion, but the provider implication is broader: independence, evidence quality, and reinforcement all need to be visible inside the learning experience, not assumed in the background.
Clinician commentary this week connected large-scale industry payments with everyday learning channels. One practicing oncologist pointed to the $2.46B in 2022 physician payments and framed off-label prescribing by paid physicians as a corruption risk, with oncology called out as especially exposed (source). A separate cardiology commentary summarized a JAMA research letter on physicians endorsing drugs and devices on social media, including cases where sponsored testimonials lacked compensation disclosure (source).
For CME providers, the issue is not whether accredited education has disclosure policies. It is whether learners experience those policies as meaningful protection. A disclosure slide buried at the beginning of an activity is a weak trust signal when clinicians are also seeing industry-linked posts, product photos, and favorable commentary in the same digital environment where they learn.
This connects to a broader provenance problem we noted in an earlier brief on learner control and data sources: learners are making fast judgments about which source feels least compromised. CME teams should ask whether their independence is legible at the moment of selection, not only documented for accreditation files.
The second signal came from discussion of how residents choose self-directed digital resources. In a live education podcast reviewing focus-group work with internal medicine residents, digital textbooks, podcasts, and X/Twitter were common resources; selection was driven by triggers such as patient relevance, peer conversation, upcoming rotations, capacity, familiarity, and social fit (source).
The concerning part for education teams was what did not appear prominently: quality assessment. Educators in the discussion worried that learners were treating digital resources as roughly equivalent, then choosing based on ease of access, fit with life, and peer visibility. That does not make learners careless. It reflects the real conditions under which clinical learning happens: short windows, high cognitive load, and strong pressure to find something usable now.
The provider implication is format-level, not moralizing. If an accredited podcast, video, or micro-module expects a learner to notice rigor, it has to surface rigor quickly: evidence date, author independence, review process, guideline status, and what is uncertain. The question is whether a learner can recognize why this source is better within the first screen or first minute.
The reinforcement signal is earlier-stage and comes from a provider-linked outcomes discussion, so it should be treated with that caveat. In a JCEHP Emerging Best Practices in CPD episode, educators discussed a large Medscape dataset suggesting that when activities reinforced what learners already knew, self-efficacy gains could be larger than when activities introduced new material, especially among learners with higher baseline knowledge (source).
"But when that happened, there was a larger proportion of the learners who had an increase in their self-efficacy as measured by a pre post self-efficacy rating."
That matters because many outcomes dashboards still privilege visible knowledge movement. If a learner starts high and stays high, the activity can look flat. But in practice, that learner may have used the activity to reconfirm a treatment sequence, refresh a rarely used pathway, or rebuild confidence before the next relevant patient.
CME teams should consider segmenting outcomes by baseline knowledge and measuring whether reinforcement changes self-efficacy, intention, or later behavior.
The common thread this week is not that clinicians need more content. It is that they are judging sources under pressure, in channels where industry influence, convenience, and peer visibility blur together. For CME teams, trust will come less from saying “accredited” and more from making the work behind accreditation visible: who shaped the content, how conflicts were handled, why the evidence was chosen, and whether the activity helped the learner act with confidence when the patient appears.