Weekly Pulse

Lung Screening Adherence Gets Reframed as a Team Workflow Problem (Not a Clinician Reminder Problem)

Topics: Operations, Outcomes, Instructional Design

Abstract

This week’s most transferable CME-system signal came from lung cancer screening education that treated adherence as a program-design problem (navigation, tracking, and incentives) rather than a clinician-knowledge problem.

Coverage: 2026-03-23 to 2026-03-29

The strongest operator-relevant signal this week wasn’t an accreditation update; it was a practical reframing: sustained screening adherence depends on infrastructure (navigation, tracking, and incentives), not just clinician intent. In a PeerView Oncology program, Nichole T. Tanner, MD, MSCR, explicitly pushed beyond “individual provider” behavior toward system-level interventions such as a dedicated navigator and a non-spreadsheet tracking system, with incentives on the table for adherence support (PeerView activity page).

The 60-Second Take

  • Adherence was treated as an operational outcome, not a knowledge outcome: the conversation moved from reminders to navigation + tracking + follow-up design (PeerView activity page).
  • “Beyond an Excel spreadsheet” is the real tell: measurement and continuity require durable registries/work queues, not episodic documentation (PeerView activity page).
  • Navigator staffing was framed as a foundational control point: buy-in for a dedicated navigator was positioned as a starting move for new programs (PeerView activity page).
  • Incentives were mentioned as a legitimate lever: the segment suggested considering incentives to improve adherence, not just more education (PeerView activity page).
  • The workflow model choice matters: the program contrasted primary-care-held screening workflows with alternative models, making “who owns follow-up” a design decision (PeerView activity page).

Lead Story

On a PeerView Oncology program, Nichole T. Tanner, MD, MSCR, argued that improving lung cancer screening adherence has to “go beyond just the individual provider,” pointing instead to system interventions like navigators, tracking systems, and even incentives (PeerView activity page).

What changed

Instead of presenting “adherence” as a downstream behavioral outcome of clinician education, the segment treated adherence as something you build: through roles (navigation), tools (tracking beyond spreadsheets), and program governance (buy-in and incentives) (PeerView activity page). That framing is directly portable to CME teams trying to show performance change: it implies the educational intervention may be necessary but insufficient unless the enabling workflow exists (PeerView activity page).

Receipts

  • Tanner said adherence work “has to go beyond just the individual provider,” explicitly calling for “system level interventions” (PeerView activity page).
  • She described buy-in “for a dedicated navigator” and “a tracking system that is beyond an Excel spreadsheet” as an “important start” (PeerView activity page).
  • She added that provider education should include understanding the importance of adherence and “even considering applying incentives” (PeerView activity page).
  • In a second segment, the program framed “workflow solutions” as the mechanism to create a “stage shift,” then noted that screening has “traditionally sat” with primary care and described a model where primary care owns shared decision-making, ordering, and follow-up (PeerView activity page).

What it means for CME providers

  • Your outcomes plan can’t just measure learner knowledge/competence; it should measure whether the care team implemented (or already has) a tracking and follow-up mechanism that makes the desired behavior possible.
  • If your education is aimed at “adherence,” your intervention likely needs a parallel operations artifact: registry fields, a callback cadence, a work-queue owner, and escalation rules (see the sketch below).
  • “Navigator” is a measurable implementation variable—an education series can explicitly target role clarity, handoffs, scripts, and exception handling, not just screening guidelines.
  • Incentives are sensitive but real: if you’re educating around program adherence, you should at least surface what incentive structures exist (or could exist) and how to evaluate them ethically and practically.
  • This is a reminder to design CME around the actual unit of change: often it’s a clinic workflow, not an individual clinician.
Workflow at a glance: define the target behavior (annual screening adherence) → map current ownership (PCP vs. program team) → check for a durable tracking system (no: implement a registry/work-queue beyond spreadsheets; yes: audit data quality and follow-up rules) → assign the navigator role and handoff points → build education around the workflow (who does what, when, and the exceptions) → decide the incentive approach and guardrails → measure outcomes (completion, time-to-follow-up, missed annual screens).
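
To ground the “operations artifact” idea, here is a minimal, hypothetical sketch of a registry record with work-queue status logic. Everything in it (field names, patient IDs, the 365-day cadence, the 30-day escalation window) is an illustrative assumption, not a detail from the PeerView program.

```python
# Minimal, hypothetical registry/work-queue sketch. Field names and
# thresholds are illustrative assumptions, not from the PeerView program.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ScreeningRecord:
    patient_id: str
    last_screen: date                # most recent completed screen
    navigator: str                   # work-queue owner who closes the loop
    cadence_days: int = 365          # annual screening interval
    escalation_after_days: int = 30  # overdue window before escalation

    @property
    def next_due(self) -> date:
        return self.last_screen + timedelta(days=self.cadence_days)

    def status(self, today: date) -> str:
        """Classify the record for a daily work queue."""
        if today < self.next_due:
            return "current"
        overdue_days = (today - self.next_due).days
        return "escalate" if overdue_days > self.escalation_after_days else "callback"

# The durable artifact “beyond an Excel spreadsheet”: a queryable registry.
registry = [
    ScreeningRecord("pt-001", date(2025, 2, 1), navigator="RN-A"),
    ScreeningRecord("pt-002", date(2026, 3, 1), navigator="RN-B"),
]
for rec in registry:
    print(rec.patient_id, rec.navigator, rec.status(date(2026, 3, 27)))
```

The point is not the tooling; it is that states like “callback” and “escalate” become queryable facts an outcomes plan can count, instead of rows in an episodic spreadsheet.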

What to do next Monday

  • Audit one “adherence” activity you run: list the non-educational dependencies (tracking, staffing, handoffs) and mark which ones you assumed instead of confirmed.
  • Add a workflow ownership slide (or planning-committee question) to your next design meeting: “Who owns follow-up at day 30/180/365, and where is it tracked?”
  • For one program, draft a one-page “implementation companion” for learners: roles, call cadence, escalation paths, and what gets documented where.
  • Update your evaluation plan to include at least one operations/process measure (e.g., presence of a registry/work-queue, navigator assignment, percentage with a scheduled annual follow-up); a sample calculation follows this list.
  • Decide whether your outcomes story is “clinician behavior change” or “program reliability”—then align your educational format and measurement to that story.
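
As a worked example for the evaluation bullet above, here is a hypothetical calculation of two process measures; the record shape and field names are assumptions for illustration.

```python
# Hypothetical sketch: two operations/process measures for an outcomes plan.
# The record shape (navigator, next_screen) is an illustrative assumption.
from datetime import date

records = [
    {"patient_id": "pt-001", "navigator": "RN-A", "next_screen": date(2027, 2, 1)},
    {"patient_id": "pt-002", "navigator": None,   "next_screen": None},
    {"patient_id": "pt-003", "navigator": "RN-B", "next_screen": date(2027, 3, 15)},
]

def pct(n: int, d: int) -> float:
    return round(100 * n / d, 1) if d else 0.0

scheduled = sum(1 for r in records if r["next_screen"] is not None)
navigated = sum(1 for r in records if r["navigator"] is not None)

print(f"Scheduled annual follow-up: {pct(scheduled, len(records))}%")  # 66.7%
print(f"Navigator assigned:         {pct(navigated, len(records))}%")  # 66.7%
```

Even with three fake records, the shape of the metric is clear: a denominator you can defend and a numerator a registry can answer.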

Steal this template (copy/paste into your planning notes):

  • Target reliability metric we care about:
  • Current workflow owner(s) and handoffs:
  • Tracking system/tool in use (and where it breaks):
  • Role that closes the loop (navigator/care coordinator/etc.):
  • Incentive lever (if any) and constraints:
  • Outcomes we can collect in 30/90/180 days:

Other signals (Quick hits)

  • A second segment explicitly positioned “workflow solutions” (not novel tactics) as the path to “stage shift,” reinforcing that CME aimed at screening must be paired with care-process redesign (PeerView activity page). Provider takeaway: if your gap statement is “late-stage diagnosis,” your intervention can’t stop at guideline refreshers.

Competitive mentions (only if repeated)

  • PeerView: appeared as the host/distributor of the featured segments and framing around workflow-based adherence (PeerView activity page).

Sentiment

Mixed

  • The program emphasized system interventions and operational levers like navigation and tracking as prerequisites for adherence success (PeerView activity page).
  • It also acknowledged these models aren’t “new or exciting,” implying the barrier is implementation and ownership, not awareness (PeerView activity page).

What We're Watching Next Week

  • More examples where CME activities explicitly separate “education” from “enablement,” and specify the workflow artifact that makes the practice change stick.
  • Whether incentive structures show up more often in CME-adjacent conversations (and how providers document guardrails for independence and appropriateness).
  • Additional public operator talk on tracking/registry infrastructure as an outcomes backbone (especially beyond one disease state).
  • Follow-on threads that connect these workflow designs to measurable outcomes plans (30/90/180-day signals) rather than end-of-year surveys.
  • Continued evolution of the AI-in-ops conversation from earlier issues (e.g., AI review workflow rebuilds), grounded in one concrete, auditable use case rather than broad tooling debates.

Turn learner questions into outcomes data

ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.

Request a demo