Case Study

From question to evidence in 30 seconds

How ChatCME helped a CME provider capture 2,100+ learner questions from a cardiovascular activity, surface knowledge gaps by objective, and demonstrate evidence engagement—all from a single deployment.

De-identified case study. Activity details have been generalized to protect provider and learner privacy.

  • 2,100+ learner questions captured
  • 780+ unique learners engaged
  • 87% of questions aligned to objectives
  • <30s median time to a cited answer

Activity snapshot

The activity

  • Format: Enduring online activity with video lectures and slide decks
  • Therapeutic area: Cardiovascular medicine
  • Audience: Cardiologists, internists, and advanced practice providers
  • Duration: 12-week deployment period

The challenge

The CME provider wanted to understand what learners were actually asking about the content—beyond completion rates and post-test scores. They needed:

  • Insight into knowledge gaps by learning objective
  • Evidence of learner engagement with sources
  • De-identified reporting for commercial supporter briefings

What clinicians valued

Instant answers

Median response time under 30 seconds, with citations to the exact slide or video timestamp. Learners could verify claims immediately.

Verifiable sources

Every answer included citations. Learners opened references 1.8× per session on average—evidence of active verification.

Follow-up depth

Average of 2.7 questions per session. Learners asked follow-ups to deepen understanding—not just single queries.

What clinicians asked

Questions clustered into themes and mapped to learning objectives

Top knowledge-gap themes

  • Treatment sequencing: When to escalate therapy and which combinations to consider
  • Diagnostic workup: Test selection and interpretation in ambiguous presentations
  • Monitoring protocols: Follow-up intervals and escalation triggers
  • Contraindications: Patient factors affecting treatment eligibility

Objective coverage

87% of learner questions aligned to one or more stated learning objectives. Distribution of aligned questions by objective:

  • Objective 1 (Diagnosis): 34%
  • Objective 2 (Treatment selection): 41%
  • Objective 3 (Monitoring): 25%

The remaining 13% were off-topic or ambiguous and were flagged for editorial review.

What partners can do next

For CME providers

  • Use theme data to refine learning objectives for the next iteration
  • Identify content gaps where questions exceeded available evidence
  • Export CSV data for accreditation reporting and QA cycles (a sample script follows this list)
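
For teams that script their own QA cycles, here is a minimal sketch of consuming such an export in Python. It assumes a hypothetical CSV with "objective" and "question_count" columns and a hypothetical filename; ChatCME's actual export schema may differ.

    # Summarize a hypothetical theme export for accreditation reporting.
    import csv
    from collections import defaultdict

    def questions_per_objective(path: str) -> dict[str, int]:
        """Sum de-identified question counts by learning objective."""
        totals: dict[str, int] = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["objective"]] += int(row["question_count"])
        return dict(totals)

    print(questions_per_objective("theme_export.csv"))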

For commercial supporters

  • Review aggregated, de-identified theme reports from the provider
  • Use knowledge-gap insights to inform future grant strategy
  • No learner-level data—independence and privacy preserved

Methods

  • Data collection: All learner interactions captured during the 12-week deployment. Questions and responses logged with timestamps, citation opens tracked.
  • Objective alignment: Each learner message classified against stated learning objectives using ChatCME’s alignment model. Editorial review applied to ambiguous cases.
  • Theme clustering: Natural-language clustering of questions into topic groups, mapped to objectives and labeled by the editorial team.
  • Privacy: All data de-identified at point of capture. No learner-level reporting. Small-N suppression applied to subgroups (the clustering and suppression steps are sketched after this list).
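
The clustering and suppression steps can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions, not ChatCME's production pipeline: TF-IDF features, k-means with four clusters (matching the four gap themes above), and a hypothetical small-N cutoff of 10.

    # Illustrative sketch: theme clustering plus small-N suppression.
    from collections import Counter
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    SUPPRESSION_THRESHOLD = 10  # hypothetical small-N cutoff

    def cluster_questions(questions: list[str], n_themes: int = 4) -> list[int]:
        """Group de-identified question text into candidate theme clusters."""
        features = TfidfVectorizer(stop_words="english").fit_transform(questions)
        labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(features)
        return labels.tolist()

    def reportable_theme_counts(labels: list[int]) -> dict[int, int]:
        """Count questions per theme, dropping any theme below the cutoff."""
        counts = Counter(labels)
        return {theme: n for theme, n in counts.items() if n >= SUPPRESSION_THRESHOLD}

Editorial review then names each surviving cluster (for example, "Treatment sequencing") before it appears in any partner-facing report.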

Get the full case study

Enter your email to receive the complete PDF with additional details and methodology notes.

We’ll email you the PDF. No spam—just the case study.

See how ChatCME can work for your programs

We’ll walk your team through the assistant, analytics, and Admin Console with examples from your therapeutic area.

Request a demo