From 3caa7a56ac07c9013fc39389ec49692ef77a65fd Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Wed, 11 Feb 2026 10:36:49 -0500 Subject: [PATCH 01/13] docs: add user research workflow brainstorm and implementation plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Brainstorm and deepened plan for adding a user research workflow to the compound-engineering plugin — a /workflows:research command with three skills (research-plan, transcript-insights, persona-builder) and one agent (user-research-analyst). Plan was enhanced by 7 parallel research agents covering architecture, patterns, simplicity, best practices, and skill/agent conventions. Co-Authored-By: Claude Opus 4.6 --- ...02-10-user-research-workflow-brainstorm.md | 342 ++++++++++ ...-02-11-feat-user-research-workflow-plan.md | 643 ++++++++++++++++++ 2 files changed, 985 insertions(+) create mode 100644 docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md create mode 100644 docs/plans/2026-02-11-feat-user-research-workflow-plan.md diff --git a/docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md b/docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md new file mode 100644 index 00000000..8f1c84c1 --- /dev/null +++ b/docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md @@ -0,0 +1,342 @@ +# User Research Workflow for Compound Engineering + +**Date:** 2026-02-10 +**Status:** Brainstorm complete + +## What We're Building + +A user research workflow that closes the gap between research and implementation. Today, the compound engineering plugin has zero research capabilities — insights from user interviews sit in Google Docs and never reach the developer. This workflow makes research a first-class input to AI-assisted development. 
+ +**Core flow:** Plan research -> Conduct interviews -> Store transcripts -> Process into insights -> Build personas -> Feed into feature planning + +**Artifacts live in:** `docs/research/` following existing YAML frontmatter patterns. + +**Methodology grounded in:** Teresa Torres' *Continuous Discovery Habits* (story-based interviewing, interview snapshots, Opportunity Solution Trees) and Rob Fitzpatrick's *The Mom Test* (past behavior over future speculation). See `references/discovery-playbook.md` bundled with the skills. + +**Shipped in two PRs:** +- **PR 1:** The research workflow command, three skills, agent, and directory structure +- **PR 2:** Integration with `/workflows:brainstorm` and `/workflows:plan` to auto-surface research + +## Why This Approach + +**Approach chosen: Research Workflow (full workflow command + modular skills + agent)** + +This mirrors the existing workflow pattern (`brainstorm -> plan -> work -> review -> compound`) and adds a parallel research track. Each piece is independently useful, but the workflow command orchestrates the sequence. The agent integration means research automatically compounds into every feature decision. + +**Rejected alternatives:** +- Standalone skills (no orchestration, less "compound" feeling) +- Single monolithic skill (doesn't follow plugin's pattern of specialized, focused tools) + +## New Components (5 total) + +### 1. 
Workflow Command: `/workflows:research` + +Orchestrates the full research loop as a single command with phases (matching how `/workflows:brainstorm` and `/workflows:plan` work — one command file, multiple phases, skills provide process knowledge): + +- **Phase 1: Plan** — Create a research plan (loads `research-plan` skill) +- **Phase 2: Process** — Process a transcript into structured insights (loads `transcript-insights` skill) +- **Phase 3: Personas** — Build/update persona documents from accumulated insights (loads `persona-builder` skill) + +The command accepts an optional argument to jump to a specific phase (e.g., `/workflows:research process`). Without an argument, it asks which phase to run. Each phase is independent — users can run them in any order as their research progresses. + +### 2. Skill: `research-plan` + +Creates a structured research plan document in `docs/research/plans/`. Grounded in Continuous Discovery Habits — plans are **outcome-focused** (tied to a metric, not a feature) and generate **story-based discussion guides** following the Mom Test. + +**Outputs:** +```yaml +--- +title: Dashboard Usability Study +date: 2026-02-10 +status: planned +outcome: "Reduce time-to-insight for dashboard users by 30%" +hypotheses: + - Users check dashboards first thing in the morning for problems + - Users need exportable reports for stakeholders +participant_criteria: + - Marketing managers at B2B SaaS companies + - Active dashboard users (3+ times/week) +sample_size: 5 +screener_questions: + - "How often do you use a data dashboard in your work?" + - "When was the last time you shared data with a colleague or stakeholder?" +--- + +## Research Objectives +1. Understand daily dashboard usage patterns +2. Identify export/reporting pain points + +## Three Most Important Things to Learn +1. What triggers a dashboard visit and what do users look for first? +2. How do users currently share data with stakeholders? +3. 
What workarounds exist for unmet dashboard needs? + +## Discussion Guide + +### Warm-up (2-3 min) +- Tell me about your role and how data fits into your day-to-day + +### Story Collection (15-20 min) +*Story-based prompts — ask about specific past behavior, not opinions:* +- "Tell me about the last time you opened your dashboard. What was happening? What were you looking for?" +- "Tell me about a recent time you needed to share data with someone. Walk me through what happened." + +**Follow-up probes:** +- "What happened next?" +- "How did you feel at that point?" +- "What did you end up doing?" + +*Redirect generalizations:* If participant says "I usually..." → "Can you think of a specific time that happened? Walk me through it." + +### Wrap-up (2-3 min) +- "Is there anything else I should have asked?" +- "Who else should I talk to about this?" + +## Post-Interview Checklist +- [ ] Complete interview snapshot within 15 minutes +- [ ] Run `/workflows:research process` on transcript +- [ ] Note any follow-up items or new hypotheses +``` + +**Key features:** +- **Outcome-focused** — plans start with a measurable outcome, not a feature idea +- Generates **story-based discussion guides** (Teresa Torres) — "Tell me about the last time..." not "Would you use...?" +- Embeds **Mom Test principles** — past behavior, no pitching, redirect generalizations +- Includes **screener questions** for participant recruitment +- Includes **"Three Most Important Things to Learn"** pre-interview focus (Mom Test) +- Includes **post-interview checklist** to close the loop with transcript processing +- Links back to features/brainstorms that motivated the research + +### 3. Skill: `transcript-insights` + +Takes raw interview transcripts (from `docs/research/transcripts/`) and produces two outputs: a structured **interview snapshot** (Teresa Torres) and **atomic research nuggets** for cross-interview analysis. 
+ +**Process:** The skill follows the highlight → tag → synthesize flow from the discovery playbook. It reads the raw transcript, identifies key moments, tags them, and produces structured output. + +**Input:** Raw transcript `.md` file path (from `docs/research/transcripts/`) or pasted text + +**Output:** `docs/research/interviews/YYYY-MM-DD-participant-NNN.md` +```yaml +--- +participant_id: user-001 +participant_role: Marketing Manager, B2B SaaS +date: 2026-02-10 +research_plan: dashboard-usability-study +focus: Dashboard usage patterns +duration_minutes: 30 +tags: [dashboard, export, morning-workflow, b2b-saas] +--- + +## Interview Snapshot + +**Participant:** Marketing Manager at mid-size B2B SaaS (3 years in role) +**Memorable Quote:** "First thing every morning, I check for red flags." + +### Experience Map +1. Arrives at work, opens dashboard before email +2. Scrolls past positive metrics looking for problems +3. Finds an anomaly, tries to export for stakeholder +4. Export is buried — takes 3 attempts to find button +5. Gives up, screenshots instead + +### Opportunities (needs, pain points, desires) +- Needs to surface problems quickly without scanning everything +- Needs to export data in static formats (PDF) for stakeholders +- Wants dashboard to proactively alert on anomalies + +### Follow-up Items +- How do other roles (engineers, executives) use the same dashboard? +- What does "red flag" mean specifically — thresholds? Trends? + +## Atomic Insights + +### Insight: Morning dashboard ritual +**Quote:** "First thing every morning, I check for red flags." +**Implication:** Dashboard needs to surface problems quickly, not show everything. +**Tags:** [information-hierarchy, morning-workflow, pain-point] + +### Insight: Export friction +**Quote:** "My boss wants a PDF, not a link." +**Implication:** Export to static formats is a core need, not a nice-to-have. 
+**Tags:** [reporting, export, workaround] + +### Insight: Screenshot workaround +**Observation:** Participant gave up on export after 3 attempts and used screenshots instead. +**Implication:** Workaround signals unmet need — export flow is broken, not just inconvenient. +**Tags:** [workaround, export, abandonment] + +## Behavioral Observations +- Opened dashboard before email +- Scrolled past charts to find the "alerts" section +- Attempted export 3 times before finding the button +- Fell back to screenshots when export failed + +## Hypotheses Supported/Challenged +- [SUPPORTED] Users check dashboards first thing in the morning +- [NEW] Users prioritize problems over positive metrics +- [NEW] Export is broken enough that users have workarounds +``` + +**Key features:** +- Produces **interview snapshots** (Teresa Torres) — one-page summaries with experience maps, not just raw notes +- Extracts **atomic research nuggets** — smallest reusable units of insight with tags for cross-interview search +- Uses a **tag taxonomy** (behavioral, emotional, need/pain point, descriptive) consistent across all interviews +- **Experience maps** — timeline of the participant's story showing key moments +- Identifies **opportunities** (needs, pain points, desires) — the language of the Opportunity Solution Tree +- Captures **workarounds** explicitly — strongest signal of unmet needs +- Links back to research plan and tracks hypothesis validation + +### 4. Skill: `persona-builder` + +Synthesizes insights across multiple interviews into living persona documents. + +**Output:** `docs/research/personas/persona-name.md` +```yaml +--- +name: The Data-Driven Manager +role: Marketing Manager +company_type: B2B SaaS +last_updated: 2026-02-10 +interview_count: 3 +confidence: medium +--- + +## Goals +1. Prove marketing ROI to leadership +2. Identify underperforming campaigns before they waste budget + +## Frustrations +1. Too much data, hard to find what matters +2. 
Exporting for reports is tedious — "My boss wants a PDF, not a link" +3. Dashboard doesn't surface problems proactively + +## Behaviors +- Checks dashboard first thing every morning (3/3 participants) +- Scrolls past positive metrics to find problems (2/3 participants) +- Exports data weekly for stakeholder reports (3/3 participants) + +## Quotes +- "First thing every morning, I check for red flags." +- "I need to see problems, not everything." +- "My boss wants a PDF, not a link." + +## Opportunities (for Opportunity Solution Tree) +| Opportunity | Evidence Strength | Source Interviews | +|-------------|------------------|-------------------| +| Users need to surface problems without scanning everything | Strong (3/3) | user-001, user-003, user-005 | +| Users need to export data in static formats for stakeholders | Strong (3/3) | user-001, user-003, user-005 | +| Users want proactive alerts instead of manual checking | Medium (2/3) | user-001, user-005 | + +## Evidence +- Based on interviews: user-001, user-003, user-005 +- Research plan: dashboard-usability-study +``` + +**Key features:** +- Synthesizes across multiple interviews (not just one) +- Tracks confidence level based on participant count +- Includes an **Opportunities table** using OST language (opportunities, not solutions — feeds directly into Opportunity Solution Trees) +- Links back to source interviews for traceability +- Updates incrementally as new interviews are processed + +### 5. Agent: `user-research-analyst` + +A research agent (parallel to `learnings-researcher`) that surfaces relevant personas and insights during brainstorming and planning. 
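The agent's grep-first search strategy can be sketched in a few lines. Everything here is illustrative — the keyword, file names, and file contents are hypothetical fixtures so the sketch runs standalone; the real agent formats matches into a structured summary rather than printing paths:

```shell
# Illustrative grep-first search over research artifacts for a feature keyword.
# Fixture files (hypothetical) so the sketch runs standalone:
mkdir -p docs/research/personas docs/research/interviews
echo 'Needs PDF export for stakeholders' > docs/research/personas/data-driven-manager.md
echo 'tags: [dashboard, morning-workflow]' > docs/research/interviews/2026-02-10-user-002.md

# -r recurse, -i case-insensitive, -l list matching files only
keyword="export"
grep -ril "$keyword" docs/research/personas docs/research/interviews
# → docs/research/personas/data-driven-manager.md
```

The agent would then read each matching file's frontmatter (confidence, interview count) to rank what it surfaces.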
+ +**Invoked by:** `/workflows:brainstorm` (Phase 1) and `/workflows:plan` (Step 1) + +**What it does:** +- Searches `docs/research/personas/` for personas relevant to the feature being planned +- Searches `docs/research/interviews/` for insights matching the feature area +- Returns a summary: relevant personas, key quotes, confidence levels, feature implications + +**Integration points:** +- `/workflows:brainstorm` Phase 1.1 — surfaces personas alongside repo research +- `/workflows:plan` Step 1 — runs in parallel with `learnings-researcher` and `repo-research-analyst` + +## Directory Structure + +``` +docs/research/ +├── plans/ # Research plans (discussion guides, hypotheses, outcomes) +│ └── dashboard-usability-study.md +├── transcripts/ # Raw interview transcripts as markdown +│ ├── 2026-02-10-user-001-transcript.md +│ ├── 2026-02-10-user-002-transcript.md +│ └── ... +├── interviews/ # Processed interview snapshots + atomic insights +│ ├── 2026-02-10-user-001.md +│ ├── 2026-02-10-user-002.md +│ └── ... +└── personas/ # Synthesized persona documents + ├── data-driven-manager.md + └── ... +``` + +**Transcripts are markdown files.** Users paste or save their raw interview transcripts as `.md` files in `transcripts/`. The `transcript-insights` skill reads from here and writes structured output to `interviews/`. Raw transcripts are kept as source-of-truth — the processed insights are derived artifacts that can be regenerated. 
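The layout above can be created idempotently in one command — a minimal sketch assuming a POSIX shell, safe to re-run at any time:

```shell
# Idempotent scaffold for the research directory structure.
# mkdir -p creates missing parents and is a no-op when directories already exist.
mkdir -p docs/research/plans \
         docs/research/transcripts \
         docs/research/interviews \
         docs/research/personas
```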
+ +## How Research Compounds + +``` +/workflows:research (Plan) → docs/research/plans/ + ↓ + (conduct interviews externally) + ↓ + (save transcript to docs/research/transcripts/) + ↓ +/workflows:research (Process) → docs/research/interviews/ + ↓ +/workflows:research (Personas) → docs/research/personas/ + +--- PR 1 boundary (above) --- +--- PR 2 boundary (below) --- + +/workflows:brainstorm ← user-research-analyst auto-surfaces personas +/workflows:plan ← user-research-analyst auto-surfaces insights + ↓ + (build feature, ship, observe) + ↓ +/workflows:research (Plan) → new research to validate +``` + +## Key Decisions + +1. **Workflow command with phases** — `/workflows:research` is one command with three phases (Plan, Process, Personas), matching how other workflow commands work +2. **Three modular skills** — each phase is a standalone skill the workflow orchestrates +3. **YAML frontmatter on everything** — follows the `docs/solutions/` pattern so AI agents can filter by metadata +4. **Raw transcripts stored as markdown** — `docs/research/transcripts/` holds raw `.md` transcripts; `interviews/` holds processed insights derived from them +5. **Living personas** — personas update incrementally as new interviews are processed, with confidence tracking +6. **Methodology baked in** — skills embed Teresa Torres (story-based interviewing, interview snapshots, OST) and Mom Test (past behavior, no pitching) principles directly into their output templates +7. **`docs/research/` directory** — follows existing `docs/solutions/`, `docs/brainstorms/`, `docs/plans/` pattern +8. **Two-PR delivery** — PR 1: research workflow, skills, agent, and directory structure. PR 2: modify `/workflows:brainstorm` and `/workflows:plan` to auto-call `user-research-analyst`. Keeps PRs focused and independently shippable. +9. 
**Discovery playbook as reference** — the discovery playbook is bundled as `references/discovery-playbook.md` in the skills, giving AI access to the full methodology + +## Open Questions + +1. **Experiment design (stretch goal):** Should `/workflows:research experiment` be a fourth phase that generates hypotheses and suggests validation approaches (A/B tests, usage metrics to watch)? +2. **Cross-interview theming:** Should `persona-builder` also generate a `docs/research/themes.md` that tracks cross-cutting themes and their evidence strength? +3. **Transcript input:** Should `transcript-insights` accept only pasted text and file paths (simplest), or also handle URLs to transcript services? Start with text/file, extend later if needed. + +## Reference Materials + +The following will be bundled as `references/` in the skills: + +- **`discovery-playbook.md`** — Continuous Product Discovery Playbook (Teresa Torres + Mom Test methodology, interview structure, snapshot format, tagging taxonomy, Opportunity Solution Trees) +- Source: `/Users/matthewthompson/Downloads/discovery-playbook.md` + +Key concepts incorporated from the playbook: +- **Outcome-focused research plans** — tied to metrics, not features (Section 2.1) +- **Story-based interviewing** — "Tell me about the last time..." not "Would you use...?" 
(Section 3.3) +- **Mom Test principles** — past behavior, no pitching, redirect generalizations (Section 3.2) +- **Interview snapshots** — one-page synthesis with experience maps, done within 15 min (Section 4.2) +- **Atomic research nuggets** — smallest reusable insight units with tags (Section 5.5) +- **Highlight → Tag → Theme flow** — structured analysis progression (Sections 5.2-5.4) +- **Opportunity language** — needs, pain points, desires (not features) for OST compatibility (Section 6.1) + +## Stretch Goals (Future) + +- `/workflows:research experiment` — design experiments to validate hypotheses, suggest A/B tests and metrics to watch (OST "Experiments" layer) +- `/workflows:research validate` — compare feature usage data against research predictions +- Cross-interview theme tracking with `docs/research/themes.md` and confidence aggregation +- Integration with analytics MCP servers for automated pattern detection (per Every guide's "more coming soon") +- Opportunity Solution Tree visualization — structured view connecting outcomes → opportunities → solutions → experiments diff --git a/docs/plans/2026-02-11-feat-user-research-workflow-plan.md b/docs/plans/2026-02-11-feat-user-research-workflow-plan.md new file mode 100644 index 00000000..fd7b84be --- /dev/null +++ b/docs/plans/2026-02-11-feat-user-research-workflow-plan.md @@ -0,0 +1,643 @@ +--- +title: "feat: Add user research workflow" +type: feat +date: 2026-02-11 +source_brainstorm: docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md +--- + +# feat: Add User Research Workflow + +## Enhancement Summary + +**Deepened on:** 2026-02-11 +**Sections enhanced:** 7 +**Research agents used:** create-agent-skills evaluator, brainstorming skill evaluator, best-practices-researcher, architecture-strategist, code-simplicity-reviewer, pattern-recognition-specialist, persona-merge-logic analyst + +### Key Improvements +1. 
Added exact skill description strings and agent `` blocks (no longer deferred to implementation) +2. Added field-by-field persona merge specification with contradiction handling and Divergences section +3. Fixed blocking frontmatter gap: interview snapshots now output separate `role`, `company_type`, and `source_transcript` fields +4. Simplified Phase 0 to lightweight menu (not a full phase), reduced handoff from 5 to 3 options +5. Added Human Review Checklist requirement to all AI-generated research output +6. Resolved agent `model` field: use `inherit` (not `haiku`) per v2.23.1 policy +7. Added exact CHANGELOG entry text and pre-existing metadata drift fix + +### New Considerations Discovered +- AI-generated quotes must be verified against source transcripts (highest-risk failure mode) +- Persona contradictions are normal in qualitative research -- need a Divergences section, not silent count updates +- Pre-existing README count drift (says 25 commands/16 skills, actually 24/18) and marketplace.json version drift (2.31.0 vs plugin.json 2.31.1) must be fixed in Phase 4 + +--- + +## Overview + +Add a user research workflow to the compound-engineering plugin that makes user research a first-class input to AI-assisted development. Today, the plugin has zero research capabilities -- insights from user interviews sit in Google Docs and never reach the developer. This workflow closes that gap by providing tools to plan research, process transcripts into structured insights, and synthesize personas. + +**Scope:** PR 1 only. Creates the research workflow, three skills, one agent, and directory structure. PR 2 (integration with `/workflows:brainstorm` and `/workflows:plan` to auto-surface research) is deferred to a follow-up. + +**Methodology:** Teresa Torres' *Continuous Discovery Habits* (story-based interviewing, interview snapshots, Opportunity Solution Trees) and Rob Fitzpatrick's *The Mom Test* (past behavior over future speculation). 
A discovery playbook reference document is bundled with each skill. + +## Problem Statement + +The compound engineering plugin follows the workflow: brainstorm -> plan -> work -> review -> compound. But there is no research step. User insights from interviews, customer calls, and usability tests are disconnected from the development workflow. This means: + +- Feature decisions are made without grounding in user evidence +- Research artifacts (transcripts, notes) rot in Google Docs +- Personas don't exist or are static documents that never update +- The "compounding" philosophy breaks down at the research-to-development boundary + +## Proposed Solution + +Add a parallel research track that mirrors existing workflow patterns: + +``` +/workflows:research (Plan) -> docs/research/plans/ + | + (conduct interviews externally) + | + (save transcript to docs/research/transcripts/) + | +/workflows:research (Process) -> docs/research/interviews/ + | +/workflows:research (Personas) -> docs/research/personas/ +``` + +**Five new components:** +1. `/workflows:research` command -- orchestrates the research loop with 3 phases +2. `research-plan` skill -- creates structured research plans +3. `transcript-insights` skill -- processes transcripts into interview snapshots +4. `persona-builder` skill -- synthesizes personas from interviews +5. `user-research-analyst` agent -- searches research artifacts (created but not wired into other workflows until PR 2) + +**One reference file copied into each skill:** +- `references/discovery-playbook.md` -- Continuous Product Discovery Playbook (source: `~/Downloads/discovery-playbook.md`) + +## Technical Approach + +### Architecture + +All components follow established plugin patterns exactly: + +- **Workflow command**: Phase-based structure with `#$ARGUMENTS`, AskUserQuestion at decision points, skill loading via `"Load the X skill"` directive. Matches `workflows:brainstorm.md`. 
+- **Skills**: YAML frontmatter (`name`, `description`), reference file linking via `[file.md](./references/file.md)`, imperative voice. Matches `brainstorming/SKILL.md`. +- **Agent**: YAML frontmatter (`name`, `description`, `model: inherit`), `` block, grep-first search strategy, structured output format. Matches `learnings-researcher.md`. + +### Research Insights: Architecture + +**Agent model field:** Use `model: inherit` (not `haiku`). The CHANGELOG v2.23.1 established that all agents use `model: inherit` so they match the user's configured model. Only `lint` keeps `model: haiku`. The `learnings-researcher` still shows `model: haiku` in its file but this is stale -- follow the policy, not the stale file. ([Source: pattern-recognition-specialist, architecture-strategist]) + +**Discovery playbook duplication (3 copies):** This follows established convention -- every existing skill with references has them in its own `references/` directory. No skill shares references with another. The 3x duplication of the 415-line playbook is a maintenance cost but the convention is correct. All three copies must be byte-identical. Phase 5 verification should include a checksum comparison. ([Source: architecture-strategist, code-simplicity-reviewer]) + +**Skill frontmatter considerations:** Each skill should specify `allowed-tools` in frontmatter (Read, Write, Bash, Grep) to reduce user permission prompts. Consider `disable-model-invocation: true` since these skills write files (side effects). 
([Source: create-agent-skills evaluator]) + +**File path contracts:** Document these at the top of the workflow command so the coupling between command and skill output conventions is visible and maintainable: +- Plans: `docs/research/plans/YYYY-MM-DD--research-plan.md` +- Transcripts: `docs/research/transcripts/*.md` (user-provided) +- Interviews: `docs/research/interviews/YYYY-MM-DD-participant-NNN.md` +- Personas: `docs/research/personas/.md` + +([Source: brainstorming evaluator]) + +**Directory structure created silently at command start (not a "phase"):** +``` +docs/research/ + plans/ # Research plans with discussion guides + transcripts/ # Raw interview transcripts (user-provided .md files) + interviews/ # Processed interview snapshots (generated) + personas/ # Synthesized persona documents (generated) +``` + +### Standardized Terminology + +Use these terms consistently across all components: +- **Research plan** (not "plan document" or "discussion guide document") +- **Interview snapshot** (not "processed interview" or "interview file") +- **Persona** (not "persona document" or "persona file") +- **Transcript** (not "raw transcript" or "transcript file") + +([Source: create-agent-skills evaluator]) + +### Implementation Phases + +#### Phase 1: Reference File and Skills (3 skills + 1 reference) + +Create the three skills and their shared reference file. These are the core knowledge components. 
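The byte-identical requirement and Phase 5 checksum comparison can be a short shell check. This is a sketch only: it uses `cmp` (equivalent to comparing checksums for byte-identity) and builds a temporary fixture so it runs standalone; the real check would point at the three plugin skill paths instead of the fixture:

```shell
# Sketch: verify the three bundled copies of discovery-playbook.md are byte-identical.
# Fixture: a temporary skills tree with three identical copies (illustrative paths).
base=$(mktemp -d)/skills
for skill in research-plan transcript-insights persona-builder; do
  mkdir -p "$base/$skill/references"
  echo 'playbook v1' > "$base/$skill/references/discovery-playbook.md"
done

# The actual check: compare each copy against the research-plan copy.
ref="$base/research-plan/references/discovery-playbook.md"
status=0
for skill in transcript-insights persona-builder; do
  if cmp -s "$ref" "$base/$skill/references/discovery-playbook.md"; then
    echo "OK: $skill copy matches"
  else
    echo "DRIFT: $skill copy differs from research-plan copy" >&2
    status=1
  fi
done
```

A nonzero `status` would fail Phase 5 verification and signal that the copies have drifted.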
+ +**Tasks:** + +- [ ] Copy `~/Downloads/discovery-playbook.md` to three locations: + - `plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md` + - `plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md` + - `plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md` + +- [ ] Create `plugins/compound-engineering/skills/research-plan/SKILL.md` + - Frontmatter: + ```yaml + name: research-plan + description: "Create structured research plans with outcome-focused objectives, discussion guides, and screener questions. Use when planning user interviews, customer research, or discovery work." + ``` + - **Note: The current year is 2026.** (Include year note in skill body) + - Target length: ~200 lines + - Structure: Quick Start section, step-by-step Instructions, Output Template, Examples section + - Guides creating research plan at `docs/research/plans/YYYY-MM-DD--research-plan.md` + - Content: outcome-focused objectives, story-based discussion guide template, Mom Test principles, participant criteria, screener questions, post-interview checklist, "Three Most Important Things to Learn" section + - Output template with YAML frontmatter: `title`, `date`, `status: planned`, `outcome`, `hypotheses`, `participant_criteria`, `sample_size`, `screener_questions`, `interviews_completed: 0` + - References discovery playbook via `[discovery-playbook.md](./references/discovery-playbook.md)` + - Include `## Human Review Checklist` in the output template + +- [ ] Create `plugins/compound-engineering/skills/transcript-insights/SKILL.md` + - Frontmatter: + ```yaml + name: transcript-insights + description: "Process interview transcripts into structured snapshots with tagged insights, experience maps, and opportunity identification. Use when a transcript exists in docs/research/transcripts/ or when pasting interview content." 
+ ``` + - **Note: The current year is 2026.** + - Target length: ~300 lines + - Structure: Quick Start, Instructions, Tag Taxonomy, Output Template, Examples + - Accepts file path from `docs/research/transcripts/` OR pasted text as input. Use `$ARGUMENTS` for file path; prompt if empty. + - Asks which research plan the transcript belongs to (list plans by title from frontmatter, most recent first, cap at 5-7 plus "Ad-hoc / no plan"). Handle "no plan" as ad-hoc. + - Generates interview snapshot at `docs/research/interviews/YYYY-MM-DD-participant-NNN.md` + - Output includes: interview snapshot (Teresa Torres one-page format), experience map (timeline), atomic insights with two-tier tags, opportunities in OST language, hypothesis tracking (SUPPORTED/MIXED/CHALLENGED/NEW), behavioral observations + - **Interview frontmatter must output separate fields** (not a composite string): + ```yaml + participant_id: user-001 + role: Marketing Manager # Separate field (not composite) + company_type: B2B SaaS # Separate field (not composite) + date: 2026-02-10 + research_plan: dashboard-usability-study + source_transcript: 2026-02-10-user-001-transcript.md # Links back to source + focus: Dashboard usage patterns + duration_minutes: 30 + tags: [dashboard, export, morning-workflow] + ``` + - Tag taxonomy defined in skill: + - **Type tags (fixed set, exactly ONE per insight):** pain-point, need, desire, behavior, workaround, motivation + - **Topic tags (semi-open, 1-3 per insight):** lowercase, hyphenated, singular. Check existing interviews for existing tags before creating new ones. 
+ - References discovery playbook via `[discovery-playbook.md](./references/discovery-playbook.md)` + - Include `## Human Review Checklist` in output: + ``` + - [ ] All quotes verified against source transcript + - [ ] Experience map accurately reflects story arc + - [ ] Opportunities reflect participant needs, not assumed solutions + - [ ] Tags accurate and consistent with existing taxonomy + - [ ] No insights fabricated or composited from multiple participants + ``` + +- [ ] Create `plugins/compound-engineering/skills/persona-builder/SKILL.md` + - Frontmatter: + ```yaml + name: persona-builder + description: "Synthesize personas from processed interview snapshots with confidence tracking and evidence-backed opportunities. Use when processed interviews exist in docs/research/interviews/ or when building or updating personas." + ``` + - **Note: The current year is 2026.** + - Target length: ~250 lines + - Structure: Quick Start, Instructions (Create New / Merge Existing), Merge Specification, Output Template, Examples + - Reads processed interviews from `docs/research/interviews/` + - **Persona matching and merge flow** (see Persona Merge Specification below) + - Generates persona at `docs/research/personas/.md` + - Output includes: goals, frustrations, behaviors (with participant counts), opportunities table with evidence strength, quotes (cap at 5-7), Divergences section (when contradictions exist), source interview links + - Output YAML frontmatter: + ```yaml + name: The Data-Driven Manager + role: Marketing Manager + company_type: B2B SaaS + last_updated: 2026-02-10 + interview_count: 3 + confidence: medium + source_interviews: [user-001, user-003, user-005] + version: 1 + ``` + - Confidence thresholds: 1 = low, 2-3 = medium, 4+ = high + - References discovery playbook via `[discovery-playbook.md](./references/discovery-playbook.md)` + - Include `## Human Review Checklist` in output + +### Research Insights: Persona Merge Specification + +This is the 
highest-complexity logic in the plan. The following rules govern how persona-builder handles merging new interview data into existing personas. + +**Matching algorithm:** +1. Extract `role` and `company_type` from the new interview's frontmatter +2. Scan existing personas in `docs/research/personas/` for matches on both fields +3. **Exact match** on both fields: present as merge candidate with context (persona name, interview count, confidence, key characteristics) +4. **Partial match** (role matches, company_type differs or vice versa): present as possible candidate with differences highlighted +5. **No match**: offer to create new persona (ask user for persona name) +6. **Multiple matches**: present numbered list of candidates with differentiators, plus "Create new" option +7. User always confirms the choice via AskUserQuestion + +**Confirmation prompt must show:** existing persona name, current interview count, confidence level, 2-3 key characteristics. Show the new interview's role and focus for comparison. + +**Field-by-field update rules when merging:** + +| Field Category | Update Strategy | +|---------------|----------------| +| Frontmatter metadata (`last_updated`, `interview_count`, `confidence`, `version`, `source_interviews`) | Always auto-update. Increment version, append participant_id to source_interviews, recalculate confidence. | +| Persona name and role | Preserve unless user explicitly requests change. | +| Goals | Append new goals not already listed. Flag potential duplicates with `[Review: possible overlap with Goal #N]`. | +| Frustrations | Append new frustrations. Flag potential duplicates. | +| Behaviors | Update participant counts as `(N/M participants)` where M = total interview count. When a behavior is not mentioned, do NOT change the count (absence is not evidence). Add new behaviors. | +| Quotes | Add the single most representative new quote. Keep total at 5-7 max. Note "Additional quotes in source interviews." 
| Opportunities table | Add new rows. Update evidence strength counts for existing rows only when the new interview explicitly addresses that opportunity. |
+| Evidence section | Always append new participant_id and research plan. |
+
+**Contradiction handling:**
+When a new interview contradicts an existing finding, do NOT silently update counts. Instead:
+1. Keep both data points with their evidence counts
+2. Add to a `## Divergences` section in the persona:
+   ```
+   | Finding | Majority View | Minority View | Split |
+   |---------|--------------|---------------|-------|
+   | Morning dashboard check | Check first thing (3/4) | Check after standup (1/4) | 3:1 |
+   ```
+3. When divergences reach a 40/60 split or closer, flag for potential persona segmentation
+4. Surface contradictions in the merge confirmation prompt
+
+**Evidence strength thresholds:**
+- Weak: less than 33% of participants, or only 1 interview
+- Medium: 33% to under 67% of participants
+- Strong: 67% or more of participants
+
+**Hypothesis status transitions:**
+- SUPPORTED: 75% or more of evidence supports
+- MIXED: 40% to under 75% support
+- CHALLENGED: less than 40% support
+- NEW: emerged from this interview, no prior evidence
+
+([Source: persona-merge-logic analyst, best-practices-researcher, architecture-strategist])
+
+**Success criteria:**
+- Each skill's `name` matches its directory name
+- All `references/` files linked with `[filename.md](./references/filename.md)` syntax (no bare backticks)
+- Descriptions follow "Does X. Use when Y." pattern
+- Imperative voice throughout (no "you should")
+- Interview frontmatter outputs separate `role`, `company_type`, and `source_transcript` fields
+
+#### Phase 2: Workflow Command
+
+Create the orchestrating command that ties the skills together.
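As a sanity check on the numbers above, the confidence, evidence-strength, and hypothesis-status thresholds from the merge specification can be sketched as small pure functions. This is an illustrative sketch only — the function names are hypothetical, and the skill itself is prompt-driven and ships no code:

```python
def persona_confidence(interview_count: int) -> str:
    """Confidence thresholds: 1 = low, 2-3 = medium, 4+ = high."""
    if interview_count >= 4:
        return "high"
    if interview_count >= 2:
        return "medium"
    return "low"


def classify_evidence(supporting: int, total_participants: int) -> str:
    """Evidence strength: weak below 33%, medium 33-66%, strong 67%+.

    A single interview is always weak (absence of corroboration).
    """
    if total_participants <= 1:
        return "weak"
    share = supporting / total_participants
    if share >= 0.67:
        return "strong"
    if share >= 0.33:
        return "medium"
    return "weak"


def hypothesis_status(supporting: int, total_evidence: int) -> str:
    """Hypothesis transitions: SUPPORTED 75%+, MIXED 40-74%, CHALLENGED below 40%."""
    if total_evidence == 0:
        return "NEW"  # emerged from this interview, no prior evidence
    share = supporting / total_evidence
    if share >= 0.75:
        return "SUPPORTED"
    if share >= 0.40:
        return "MIXED"
    return "CHALLENGED"
```

Boundary cases resolve upward here (67% counts as strong, 75% as SUPPORTED), matching the "+" notation in the spec.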
+ +**Tasks:** + +- [ ] Create `plugins/compound-engineering/commands/workflows/research.md` + - Frontmatter: `name: workflows:research`, `description: Plan user research, process interview transcripts, and build personas from accumulated insights`, `argument-hint: "[plan|process|personas]"` + - Year note: "The current year is 2026." + - **File path contracts** documented at top of command (plans, transcripts, interviews, personas paths) + - Argument injection: ` #$ARGUMENTS ` + - If argument is empty, run phase selection. If argument matches a phase name, jump directly to that phase. If argument is unrecognized, show phase menu with note about valid arguments. + + **Directory setup (silent, always runs before any phase):** + - Create `docs/research/` directories with `mkdir -p` if they don't exist. This is boilerplate, not a named phase. + + **Phase selection (when no argument given):** + - Brief artifact status (2-3 lines max): "N plans, N transcripts (M unprocessed), N interviews, N personas" + - Unprocessed transcript detection: grep interview frontmatter for `source_transcript` field matching each transcript filename. Simpler fallback: count files in `transcripts/` minus files in `interviews/`. + - AskUserQuestion with three options: Plan, Process, Personas. Lead with recommendation based on state (e.g., unprocessed transcripts exist -> recommend Process). + + **Phase 1: Plan** + - Load the `research-plan` skill + - Skill handles all research plan creation logic + - **Return contract:** Skill creates a file at `docs/research/plans/YYYY-MM-DD--research-plan.md` + + **Phase 2: Process** + - Check for transcripts in `docs/research/transcripts/` + - If no transcripts: "No transcripts found in `docs/research/transcripts/`. Save your interview transcript as a `.md` file there, then re-run this phase." (Transcript format guidance belongs in the skill, not here.) 
+ - If exactly one unprocessed transcript: confirm with user before proceeding + - If multiple unprocessed transcripts: list them, ask user to select via AskUserQuestion + - Load the `transcript-insights` skill with selected transcript + - **Return contract:** Skill creates a file at `docs/research/interviews/YYYY-MM-DD-participant-NNN.md` + + **Phase 3: Personas** + - Check for processed interviews in `docs/research/interviews/` + - If no interviews: guide user to process transcripts first + - Load the `persona-builder` skill + - **Return contract:** Skill creates or updates a file at `docs/research/personas/.md` + + **Handoff (after any phase completes):** + - Announce the created/updated file path + - Use AskUserQuestion with three options: + 1. "Continue research" -- routes back to phase selection menu + 2. "Proceed to `/workflows:brainstorm`" -- hand off to brainstorm + 3. "Done for now" + +### Research Insights: Workflow Command + +**Orchestration vs. process knowledge separation:** The workflow command should handle ONLY flow control (which phase, what input). Skills handle "how to do the work." Move any logic about transcript format guidance, tag instructions, or processing methodology into the corresponding skill. ([Source: brainstorming evaluator]) + +**Graceful exit handling:** Each skill should have a stated exit condition. The workflow command should handle the case where a skill completes without producing output (user abandoned or input was invalid) by returning to the handoff menu. ([Source: brainstorming evaluator]) + +**Single-transcript confirmation:** When exactly one unprocessed transcript exists, do not auto-select. Present it with confirmation: "Found 1 unprocessed transcript: `filename.md`. Process this one?" This follows the brainstorming skill's principle of validating assumptions explicitly. 
([Source: brainstorming evaluator])
+
+**Success criteria:**
+- Phase transitions work with both menu selection and direct arguments
+- Phase selection is lightweight (2-3 lines of status, then AskUserQuestion)
+- Graceful handling of empty directories (no errors, clear guidance)
+- Handoff has exactly 3 options (matching brainstorm pattern)
+
+#### Phase 3: Research Agent
+
+Create the agent for searching research artifacts.
+
+**Tasks:**
+
+- [ ] Create `plugins/compound-engineering/agents/research/user-research-analyst.md`
+  - Frontmatter:
+    ```yaml
+    name: user-research-analyst
+    description: "Search research personas and interview insights for evidence relevant to the feature or task being planned. Use when planning user-facing features, evaluating design decisions, or brainstorming product improvements."
+    model: inherit
+    ```
+  - **Note: The current year is 2026.**
+  - Role preamble: "You are an expert user research analyst specializing in surfacing relevant personas, insights, and opportunities from the team's research corpus."
+  - Include 3 `<example>` blocks:
+
+    ```xml
+    <example>
+    Context: User is planning a new feature for onboarding.
+    user: "I want to redesign the onboarding flow"
+    assistant: "I'll use the user-research-analyst agent to search for relevant personas and interview insights about onboarding experiences."
+    <commentary>
+    Since the user is planning a user-facing feature, search research artifacts for relevant personas and insights before proceeding.
+    </commentary>
+    </example>
+
+    <example>
+    Context: User is debugging a user-facing issue with exports.
+    user: "Users are complaining about the export feature being hard to find"
+    assistant: "Let me use the user-research-analyst agent to find any interview insights about export workflows and pain points."
+    <commentary>
+    The user is investigating a UX problem. Search research for relevant behavioral observations and workarounds.
+    </commentary>
+    </example>
+
+    <example>
+    Context: User is brainstorming improvements to the dashboard.
+    user: "We want to make the dashboard more useful for our customers"
+    assistant: "I'll use the user-research-analyst agent to surface relevant personas, their dashboard usage patterns, and identified opportunities."
+    <commentary>
+    The user is exploring improvements to a user-facing feature. Research insights will ground the brainstorm in evidence.
+    </commentary>
+    </example>
+    ```
+
+  - Grep-first search strategy:
+    1. Extract keywords from feature/task description
+    2. Grep pre-filter `docs/research/personas/` and `docs/research/interviews/` in parallel (case-insensitive)
+    3. Read frontmatter of matched files (limit: 30 lines)
+    4. Score relevance based on keyword overlap with tags, role, opportunities
+    5. Full read of relevant files only (opportunities are in body content, not frontmatter -- grep body for opportunity keywords)
+    6. Return distilled summaries
+    7. **Fallback:** If grep returns fewer than 3 candidates, do a broader content search across all files
+    8. **Always check:** Read the most recent persona files regardless of keyword match (they are the primary synthesis artifacts)
+
+  - Structured output format following `learnings-researcher` pattern (adapt format during PR 2 integration as needed):
+    ```
+    ## User Research Findings
+
+    ### Search Context
+    - Feature/Task: [description]
+    - Keywords Used: [tags, roles, topics searched]
+    - Files Scanned: [X personas, Y interviews]
+    - Relevant Matches: [Z files]
+
+    ### Relevant Personas
+    #### [Persona Name] (confidence: high/medium/low)
+    - Role: [role]
+    - Key Insight: [most relevant finding for this task]
+    - Relevant Opportunities: [from opportunities table]
+    - Source Interviews: [list]
+
+    ### Key Quotes
+    - "[quote]" -- [participant_id], [context]
+
+    ### Research Gaps
+    - [Areas where research coverage is thin or missing]
+
+    ### Recommendations
+    - [Specific actions based on research findings]
+    ```
+
+  - Handle empty `docs/research/` gracefully: return "No user research data found.
Run `/workflows:research` to start building your research corpus." + - DO/DON'T efficiency guidelines matching `learnings-researcher` pattern + - **Integration Points** section at bottom: "Intended callers (to be wired in PR 2): `/workflows:brainstorm` Phase 1.1, `/workflows:plan` Step 1. Will run in parallel with `learnings-researcher` and `repo-research-analyst`." + +### Research Insights: Agent + +**Opportunities require body grep:** The `opportunities` data lives in persona document body tables, not frontmatter. The agent spec must note that opportunity searching requires body content grep, not just frontmatter grep. ([Source: architecture-strategist]) + +**Invocation interface:** Will be invoked as `Task user-research-analyst(feature_description)` following the same pattern as `Task learnings-researcher(feature_description)`. ([Source: architecture-strategist]) + +**Success criteria:** +- Agent follows grep-first pattern with fallback for sparse results +- Output format is structured and machine-consumable (for PR 2 integration) +- Handles empty research directories without errors +- Examples clearly demonstrate when to invoke +- Uses `model: inherit` per v2.23.1 policy + +#### Phase 4: Metadata Updates + +Update all plugin metadata files with correct counts. **Note:** Fix pre-existing count drift in README (currently says 25 commands / 16 skills, actually 24 / 18) and marketplace.json version drift (2.31.0 vs plugin.json 2.31.1). + +**Tasks:** + +- [ ] Update `plugins/compound-engineering/.claude-plugin/plugin.json` + - Bump version from `2.31.1` to `2.32.0` (MINOR: new components) + - Update description: `"AI-powered development tools. 
30 agents, 25 commands, 21 skills, 1 MCP server for code review, research, design, and workflow automation."` + +- [ ] Update `.claude-plugin/marketplace.json` + - Update compound-engineering plugin description: `"Includes 30 specialized agents, 25 commands, and 21 skills."` + - Update version to `2.32.0` (fixes pre-existing drift from 2.31.0) + +- [ ] Update `plugins/compound-engineering/README.md` + - **Fix pre-existing count errors** (currently says 25 commands, 16 skills) + - Update component count table: Agents 30, Commands 25, Skills 21 + - Add to Research agents table (currently 5, becomes 6): + ``` + | `user-research-analyst` | Search research artifacts for relevant personas and insights | + ``` + - Update Research section header count: "Research (6)" + - Add to Workflow Commands table: + ``` + | `/workflows:research` | Plan research, process transcripts, and build personas | + ``` + - Add new "User Research" skill category with 3 entries: + ``` + ### User Research + | Skill | Description | + |-------|-------------| + | `research-plan` | Create structured research plans with outcome-focused objectives | + | `transcript-insights` | Process interview transcripts into structured snapshots and insights | + | `persona-builder` | Synthesize insights across interviews into living persona documents | + ``` + +- [ ] Update `plugins/compound-engineering/CHANGELOG.md` + - Add exact entry: + ```markdown + ## [2.32.0] - 2026-02-11 + + ### Added + + - **`/workflows:research` command** - Plan user research, process interview transcripts, and build personas from accumulated insights + - **`research-plan` skill** - Create structured research plans with outcome-focused objectives and story-based discussion guides + - **`transcript-insights` skill** - Process interview transcripts into structured snapshots with tagged insights and experience maps + - **`persona-builder` skill** - Synthesize insights across interviews into living persona documents with confidence tracking + - 
**`user-research-analyst` agent** - Search research artifacts for relevant personas and insights (not yet wired into brainstorm/plan workflows -- see PR 2) + - **Discovery playbook reference** - Bundled Continuous Product Discovery Playbook (Teresa Torres + Mom Test methodology) as `references/discovery-playbook.md` in each research skill + ``` + +**Success criteria:** +- All four files updated with matching counts +- Version bumped consistently to 2.32.0 across plugin.json and marketplace.json +- Pre-existing README count errors corrected +- README tables include all new components in correct categories +- CHANGELOG follows Keep a Changelog format with exact entry above + +#### Phase 5: Verification + +Validate everything is correct before committing. + +**Tasks:** + +- [ ] Count components match descriptions: + ```bash + ls plugins/compound-engineering/agents/**/*.md | wc -l # Should be 30 + ls plugins/compound-engineering/commands/**/*.md | wc -l # Should be 25 + ls -d plugins/compound-engineering/skills/*/ | wc -l # Should be 21 + ``` + +- [ ] Validate JSON files: + ```bash + cat .claude-plugin/marketplace.json | jq . + cat plugins/compound-engineering/.claude-plugin/plugin.json | jq . 
+ ``` + +- [ ] Verify no bare backtick references in skills: + ```bash + grep -E '`(references|assets|scripts)/[^`]+`' plugins/compound-engineering/skills/research-plan/SKILL.md + grep -E '`(references|assets|scripts)/[^`]+`' plugins/compound-engineering/skills/transcript-insights/SKILL.md + grep -E '`(references|assets|scripts)/[^`]+`' plugins/compound-engineering/skills/persona-builder/SKILL.md + # All three should return nothing + ``` + +- [ ] Verify description counts match across files: + ```bash + grep "30.*agents" plugins/compound-engineering/.claude-plugin/plugin.json + grep "25 commands" plugins/compound-engineering/.claude-plugin/plugin.json + grep "21 skills" plugins/compound-engineering/.claude-plugin/plugin.json + ``` + +- [ ] Verify discovery playbook exists in all three skill directories and is identical: + ```bash + ls plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md + ls plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md + ls plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md + md5 plugins/compound-engineering/skills/*/references/discovery-playbook.md + # All three checksums should be identical + ``` + +- [ ] Verify interview frontmatter has separate role/company_type/source_transcript fields (spot-check SKILL.md templates) + +## Acceptance Criteria + +### Functional Requirements + +- [ ] `/workflows:research` presents a lightweight phase selection when run without arguments +- [ ] `/workflows:research plan` loads the research-plan skill and creates a plan document +- [ ] `/workflows:research process` loads transcript-insights skill and processes a transcript +- [ ] `/workflows:research personas` loads persona-builder skill and creates/updates a persona +- [ ] Directory setup creates `docs/research/` directories silently if they don't exist +- [ ] Phase selection shows brief artifact counts and recommends next logical phase +- [ ] Process phase 
lists unprocessed transcripts when no specific file is given +- [ ] Process phase confirms single-transcript selection (does not auto-select) +- [ ] Process phase handles "no transcripts exist" with clear guidance +- [ ] Personas phase handles "no interviews exist" with clear guidance +- [ ] Persona matching presents candidates with context (name, interview count, confidence) and asks user to confirm +- [ ] Persona merge follows field-by-field update rules (see Persona Merge Specification) +- [ ] Contradicting interview data produces a Divergences section (not silent count updates) +- [ ] Each phase ends with a 3-option handoff menu (continue research, brainstorm, done) +- [ ] All three skills include `## Human Review Checklist` in output +- [ ] All three skills reference `discovery-playbook.md` with proper markdown links +- [ ] `user-research-analyst` agent returns structured output with personas, quotes, confidence levels, and gaps +- [ ] Agent handles empty `docs/research/` gracefully +- [ ] Interview snapshots output separate `role`, `company_type`, and `source_transcript` frontmatter fields + +### Non-Functional Requirements + +- [ ] All skills use imperative voice (no "you should") +- [ ] All YAML frontmatter follows established patterns +- [ ] No bare backtick references to files in `references/` +- [ ] Agent uses `model: inherit` per v2.23.1 policy +- [ ] Agent uses grep-first search strategy with fallback for sparse results +- [ ] Each skill includes year note ("The current year is 2026.") + +### Quality Gates + +- [ ] Component counts in plugin.json, marketplace.json, and README.md all match actual file counts +- [ ] Version bumped to 2.32.0 in both plugin.json and marketplace.json +- [ ] Pre-existing README count errors corrected +- [ ] CHANGELOG.md documents all changes with exact entry text +- [ ] JSON files pass `jq` validation +- [ ] Skill compliance checklist passes (from CLAUDE.md) +- [ ] Discovery playbook checksums match across all 3 skill 
directories + +## Success Metrics + +- All 5 new components created and follow established patterns +- Component counts accurate across all metadata files (including fixing pre-existing drift) +- Each skill produces well-structured output following the templates in the brainstorm +- The workflow command successfully orchestrates all three phases +- Research artifacts use consistent YAML frontmatter enabling future agent search +- Interview frontmatter supports reliable persona matching (separate fields, not composite strings) + +## Dependencies and Prerequisites + +- **Discovery playbook source file**: `~/Downloads/discovery-playbook.md` must exist (already verified) + +## Future Considerations + +- **PR 2**: Wire `user-research-analyst` into `/workflows:brainstorm` (Phase 1) and `/workflows:plan` (Step 1) to auto-surface research during planning +- **Experiment phase**: `/workflows:research experiment` to design validation approaches (A/B tests, metrics) +- **Cross-interview theming**: `docs/research/themes.md` tracking cross-cutting themes with evidence strength +- **Multi-persona attribution**: Allow interviews to be linked to multiple personas (primary + secondary) +- **Batch processing UX**: Group-then-confirm approach when multiple unlinked interviews exist +- **Tag governance**: Consider `docs/research/taxonomy.md` after sufficient usage establishes patterns + +## Documentation Plan + +- [ ] README.md updated with all new components (Phase 4) +- [ ] CHANGELOG.md documents v2.32.0 changes (Phase 4) +- [ ] Run `claude /release-docs` after merging to update documentation site + +## References and Research + +### Internal References + +- Brainstorm: `docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md` +- Workflow command pattern: `plugins/compound-engineering/commands/workflows/brainstorm.md` +- Skill pattern: `plugins/compound-engineering/skills/brainstorming/SKILL.md` +- Agent pattern: 
`plugins/compound-engineering/agents/research/learnings-researcher.md` +- Plugin CLAUDE.md: `plugins/compound-engineering/CLAUDE.md` (versioning, compliance checklist) +- Root CLAUDE.md: `CLAUDE.md` (component update checklist) +- Institutional learning: `docs/solutions/plugin-versioning-requirements.md` + +### External References + +- Teresa Torres, *Continuous Discovery Habits* -- interview snapshots, OST, story-based interviewing +- Rob Fitzpatrick, *The Mom Test* -- past behavior focus, no pitching +- Discovery playbook: `~/Downloads/discovery-playbook.md` (bundled as reference) +- [AI-Assisted Qualitative Analysis Guide (SAGE, 2025)](https://journals.sagepub.com/doi/10.1177/16094069251354863) -- Human review requirements for AI-generated research +- [What we learned from creating a tagging taxonomy (Dovetail)](https://dovetail.com/blog/what-we-learned-creating-tagging-taxonomy/) -- Tag taxonomy design +- [3 Persona Types (NNGroup)](https://www.nngroup.com/articles/persona-types/) -- Confidence/sample size norms +- [Taxonomy for UX Research Repository (Condens)](https://condens.io/taxonomy-for-ux-research-repository/) -- Two-tier tag structure + +## Key Decisions Log + +| # | Decision | Rationale | +|---|----------|-----------| +| 1 | Workflow command with 3 phases | Matches existing `workflows:brainstorm` pattern | +| 2 | Three modular skills | Each phase is independently useful; follows plugin's specialized tool pattern | +| 3 | Discovery playbook duplicated per skill | Follows established convention (every skill has its own `references/`). 3x maintenance cost documented. | +| 4 | Separate `role` + `company_type` fields in interview frontmatter | Required for reliable persona matching. Composite strings cause impedance mismatch. | +| 5 | `source_transcript` field in interview frontmatter | Enables unprocessed transcript detection without fragile filename matching. 
| +| 6 | Two-tier tag taxonomy (type + topic) | Type tags (6 fixed values) enable structured analysis; topic tags enable free-form discovery. Grounded in Dovetail/Condens best practices. | +| 7 | Persona merge requires user confirmation with context | Prevents incorrect persona corruption; shows name, count, confidence in prompt. | +| 8 | Persona Divergences section for contradictions | Contradictions are normal in qualitative research -- silent count updates are misleading. | +| 9 | Confidence thresholds: 1=low, 2-3=medium, 4+=high | Adjusted from original (2-4=medium) per qualitative research norms. Teresa Torres recommends at least 3 for credible patterns. | +| 10 | Agent model: inherit | Per v2.23.1 policy. All agents use `model: inherit` except `lint`. | +| 11 | Lightweight phase selection (not "Smart Phase 0") | Reduced from full phase to brief status + AskUserQuestion. Matches brainstorm pattern simplicity. | +| 12 | 3-option handoff (not 5) | Matches brainstorm pattern. "Continue research" routes back to phase selection for sub-routing. | +| 13 | Human Review Checklist in all output | AI-generated quotes can be fabricated. Checklist ensures human verification of critical elements. | +| 14 | PR 1 scope: agent created but not wired in | Keeps PR focused; integration is a separate, lower-risk change. | +| 15 | PII note in skill docs | Recommend gitignoring transcripts directory; research ethics consideration. 
| From ac87be2171811b80a73dcd8aab2f04a48a59f5c3 Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Wed, 11 Feb 2026 11:09:00 -0500 Subject: [PATCH 02/13] feat: add user research workflow with 3 skills, 1 command, and 1 agent MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add a parallel research track to the compound engineering plugin: - `/workflows:research` command orchestrating plan/process/personas phases - `research-plan` skill for structured research plans with discussion guides - `transcript-insights` skill for processing transcripts into tagged snapshots - `persona-builder` skill for synthesizing personas with merge logic - `user-research-analyst` agent for searching research artifacts - Discovery playbook reference bundled with each research skill Fixes pre-existing README count drift (25→24 commands, 16→18 skills) and marketplace.json version drift (2.31.0→2.32.0). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.6 --- .claude-plugin/marketplace.json | 4 +- ...-02-11-feat-user-research-workflow-plan.md | 32 +- .../.claude-plugin/plugin.json | 4 +- plugins/compound-engineering/CHANGELOG.md | 18 + plugins/compound-engineering/README.md | 18 +- .../agents/research/user-research-analyst.md | 176 ++++++++ .../commands/workflows/research.md | 151 +++++++ .../skills/persona-builder/SKILL.md | 255 +++++++++++ .../references/discovery-playbook.md | 414 ++++++++++++++++++ .../skills/research-plan/SKILL.md | 223 ++++++++++ .../references/discovery-playbook.md | 414 ++++++++++++++++++ .../skills/transcript-insights/SKILL.md | 285 ++++++++++++ .../references/discovery-playbook.md | 414 ++++++++++++++++++ 13 files changed, 2384 insertions(+), 24 deletions(-) create mode 100644 plugins/compound-engineering/agents/research/user-research-analyst.md create mode 100644 plugins/compound-engineering/commands/workflows/research.md create mode 100644 
plugins/compound-engineering/skills/persona-builder/SKILL.md create mode 100644 plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md create mode 100644 plugins/compound-engineering/skills/research-plan/SKILL.md create mode 100644 plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md create mode 100644 plugins/compound-engineering/skills/transcript-insights/SKILL.md create mode 100644 plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md diff --git a/.claude-plugin/marketplace.json b/.claude-plugin/marketplace.json index a1b7be99..3aa92738 100644 --- a/.claude-plugin/marketplace.json +++ b/.claude-plugin/marketplace.json @@ -11,8 +11,8 @@ "plugins": [ { "name": "compound-engineering", - "description": "AI-powered development tools that get smarter with every use. Make each unit of engineering work easier than the last. Includes 29 specialized agents, 22 commands, and 19 skills.", - "version": "2.34.0", + "description": "AI-powered development tools that get smarter with every use. Make each unit of engineering work easier than the last. Includes 30 specialized agents, 23 commands, and 22 skills.", + "version": "2.35.0", "author": { "name": "Kieran Klaassen", "url": "https://github.com/kieranklaassen", diff --git a/docs/plans/2026-02-11-feat-user-research-workflow-plan.md b/docs/plans/2026-02-11-feat-user-research-workflow-plan.md index fd7b84be..505f4fcd 100644 --- a/docs/plans/2026-02-11-feat-user-research-workflow-plan.md +++ b/docs/plans/2026-02-11-feat-user-research-workflow-plan.md @@ -125,12 +125,12 @@ Create the three skills and their shared reference file. 
These are the core know **Tasks:** -- [ ] Copy `~/Downloads/discovery-playbook.md` to three locations: +- [x] Copy `~/Downloads/discovery-playbook.md` to three locations: - `plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md` - `plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md` - `plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md` -- [ ] Create `plugins/compound-engineering/skills/research-plan/SKILL.md` +- [x] Create `plugins/compound-engineering/skills/research-plan/SKILL.md` - Frontmatter: ```yaml name: research-plan @@ -145,7 +145,7 @@ Create the three skills and their shared reference file. These are the core know - References discovery playbook via `[discovery-playbook.md](./references/discovery-playbook.md)` - Include `## Human Review Checklist` in the output template -- [ ] Create `plugins/compound-engineering/skills/transcript-insights/SKILL.md` +- [x] Create `plugins/compound-engineering/skills/transcript-insights/SKILL.md` - Frontmatter: ```yaml name: transcript-insights @@ -183,7 +183,7 @@ Create the three skills and their shared reference file. These are the core know - [ ] No insights fabricated or composited from multiple participants ``` -- [ ] Create `plugins/compound-engineering/skills/persona-builder/SKILL.md` +- [x] Create `plugins/compound-engineering/skills/persona-builder/SKILL.md` - Frontmatter: ```yaml name: persona-builder @@ -277,7 +277,7 @@ Create the orchestrating command that ties the skills together. **Tasks:** -- [ ] Create `plugins/compound-engineering/commands/workflows/research.md` +- [x] Create `plugins/compound-engineering/commands/workflows/research.md` - Frontmatter: `name: workflows:research`, `description: Plan user research, process interview transcripts, and build personas from accumulated insights`, `argument-hint: "[plan|process|personas]"` - Year note: "The current year is 2026." 
- **File path contracts** documented at top of command (plans, transcripts, interviews, personas paths) @@ -338,7 +338,7 @@ Create the agent for searching research artifacts. **Tasks:** -- [ ] Create `plugins/compound-engineering/agents/research/user-research-analyst.md` +- [x] Create `plugins/compound-engineering/agents/research/user-research-analyst.md` - Frontmatter: ```yaml name: user-research-analyst @@ -432,15 +432,15 @@ Update all plugin metadata files with correct counts. **Note:** Fix pre-existing **Tasks:** -- [ ] Update `plugins/compound-engineering/.claude-plugin/plugin.json` +- [x] Update `plugins/compound-engineering/.claude-plugin/plugin.json` - Bump version from `2.31.1` to `2.32.0` (MINOR: new components) - Update description: `"AI-powered development tools. 30 agents, 25 commands, 21 skills, 1 MCP server for code review, research, design, and workflow automation."` -- [ ] Update `.claude-plugin/marketplace.json` +- [x] Update `.claude-plugin/marketplace.json` - Update compound-engineering plugin description: `"Includes 30 specialized agents, 25 commands, and 21 skills."` - Update version to `2.32.0` (fixes pre-existing drift from 2.31.0) -- [ ] Update `plugins/compound-engineering/README.md` +- [x] Update `plugins/compound-engineering/README.md` - **Fix pre-existing count errors** (currently says 25 commands, 16 skills) - Update component count table: Agents 30, Commands 25, Skills 21 - Add to Research agents table (currently 5, becomes 6): @@ -462,7 +462,7 @@ Update all plugin metadata files with correct counts. **Note:** Fix pre-existing | `persona-builder` | Synthesize insights across interviews into living persona documents | ``` -- [ ] Update `plugins/compound-engineering/CHANGELOG.md` +- [x] Update `plugins/compound-engineering/CHANGELOG.md` - Add exact entry: ```markdown ## [2.32.0] - 2026-02-11 @@ -490,20 +490,20 @@ Validate everything is correct before committing. 
**Tasks:** -- [ ] Count components match descriptions: +- [x] Count components match descriptions: ```bash ls plugins/compound-engineering/agents/**/*.md | wc -l # Should be 30 ls plugins/compound-engineering/commands/**/*.md | wc -l # Should be 25 ls -d plugins/compound-engineering/skills/*/ | wc -l # Should be 21 ``` -- [ ] Validate JSON files: +- [x] Validate JSON files: ```bash cat .claude-plugin/marketplace.json | jq . cat plugins/compound-engineering/.claude-plugin/plugin.json | jq . ``` -- [ ] Verify no bare backtick references in skills: +- [x] Verify no bare backtick references in skills: ```bash grep -E '`(references|assets|scripts)/[^`]+`' plugins/compound-engineering/skills/research-plan/SKILL.md grep -E '`(references|assets|scripts)/[^`]+`' plugins/compound-engineering/skills/transcript-insights/SKILL.md @@ -511,14 +511,14 @@ Validate everything is correct before committing. # All three should return nothing ``` -- [ ] Verify description counts match across files: +- [x] Verify description counts match across files: ```bash grep "30.*agents" plugins/compound-engineering/.claude-plugin/plugin.json grep "25 commands" plugins/compound-engineering/.claude-plugin/plugin.json grep "21 skills" plugins/compound-engineering/.claude-plugin/plugin.json ``` -- [ ] Verify discovery playbook exists in all three skill directories and is identical: +- [x] Verify discovery playbook exists in all three skill directories and is identical: ```bash ls plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md ls plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md @@ -527,7 +527,7 @@ Validate everything is correct before committing. 
# All three checksums should be identical ``` -- [ ] Verify interview frontmatter has separate role/company_type/source_transcript fields (spot-check SKILL.md templates) +- [x] Verify interview frontmatter has separate role/company_type/source_transcript fields (spot-check SKILL.md templates) ## Acceptance Criteria diff --git a/plugins/compound-engineering/.claude-plugin/plugin.json b/plugins/compound-engineering/.claude-plugin/plugin.json index 9b35c5a7..f145c8a1 100644 --- a/plugins/compound-engineering/.claude-plugin/plugin.json +++ b/plugins/compound-engineering/.claude-plugin/plugin.json @@ -1,7 +1,7 @@ { "name": "compound-engineering", - "version": "2.34.0", - "description": "AI-powered development tools. 29 agents, 22 commands, 19 skills, 1 MCP server for code review, research, design, and workflow automation.", + "version": "2.35.0", + "description": "AI-powered development tools. 30 agents, 23 commands, 22 skills, 1 MCP server for code review, research, design, and workflow automation.", "author": { "name": "Kieran Klaassen", "email": "kieran@every.to", diff --git a/plugins/compound-engineering/CHANGELOG.md b/plugins/compound-engineering/CHANGELOG.md index 6819c484..0365708a 100644 --- a/plugins/compound-engineering/CHANGELOG.md +++ b/plugins/compound-engineering/CHANGELOG.md @@ -5,6 +5,24 @@ All notable changes to the compound-engineering plugin will be documented in thi The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 
+## [2.35.0] - 2026-02-16 + +### Added + +- **`/workflows:research` command** - Plan user research, process interview transcripts, and build personas from accumulated insights +- **`research-plan` skill** - Create structured research plans with outcome-focused objectives and story-based discussion guides +- **`transcript-insights` skill** - Process interview transcripts into structured snapshots with tagged insights and experience maps +- **`persona-builder` skill** - Synthesize insights across interviews into living persona documents with confidence tracking +- **`user-research-analyst` agent** - Search research artifacts for relevant personas and insights +- **Discovery playbook reference** - Bundled Continuous Product Discovery Playbook (Teresa Torres + Mom Test methodology) + +### Changed + +- **`/workflows:brainstorm`** - Now runs `user-research-analyst` in parallel; silently skips when no research data exists +- **`/workflows:plan`** - Research context integrated into Step 1.6 consolidation + +--- + ## [2.34.0] - 2026-02-14 ### Added diff --git a/plugins/compound-engineering/README.md b/plugins/compound-engineering/README.md index ec1ad83b..db6bc6b5 100644 --- a/plugins/compound-engineering/README.md +++ b/plugins/compound-engineering/README.md @@ -6,9 +6,9 @@ AI-powered development tools that get smarter with every use. Make each unit of | Component | Count | |-----------|-------| -| Agents | 29 | -| Commands | 22 | -| Skills | 19 | +| Agents | 30 | +| Commands | 23 | +| Skills | 22 | | MCP Servers | 1 | ## Agents @@ -35,7 +35,7 @@ Agents are organized into categories for easier discovery. | `schema-drift-detector` | Detect unrelated schema.rb changes in PRs | | `security-sentinel` | Security audits and vulnerability assessments | -### Research (5) +### Research (6) | Agent | Description | |-------|-------------| @@ -44,6 +44,7 @@ Agents are organized into categories for easier discovery. 
| `git-history-analyzer` | Analyze git history and code evolution | | `learnings-researcher` | Search institutional learnings for relevant past solutions | | `repo-research-analyst` | Research repository structure and conventions | +| `user-research-analyst` | Search research artifacts for relevant personas and insights | ### Design (3) @@ -79,6 +80,7 @@ Core workflow commands use `workflows:` prefix to avoid collisions with built-in |---------|-------------| | `/workflows:brainstorm` | Explore requirements and approaches before planning | | `/workflows:plan` | Create implementation plans | +| `/workflows:research` | Plan research, process transcripts, and build personas | | `/workflows:review` | Run comprehensive code reviews | | `/workflows:work` | Execute work items systematically | | `/workflows:compound` | Document solved problems to compound team knowledge | @@ -155,6 +157,14 @@ Core workflow commands use `workflows:` prefix to avoid collisions with built-in |-------|-------------| | `agent-browser` | CLI-based browser automation using Vercel's agent-browser | +### User Research + +| Skill | Description | +|-------|-------------| +| `research-plan` | Create structured research plans with outcome-focused objectives | +| `transcript-insights` | Process interview transcripts into structured snapshots and insights | +| `persona-builder` | Synthesize insights across interviews into living persona documents | + ### Image Generation | Skill | Description | diff --git a/plugins/compound-engineering/agents/research/user-research-analyst.md b/plugins/compound-engineering/agents/research/user-research-analyst.md new file mode 100644 index 00000000..39456911 --- /dev/null +++ b/plugins/compound-engineering/agents/research/user-research-analyst.md @@ -0,0 +1,176 @@ +--- +name: user-research-analyst +description: "Search research personas and interview insights for evidence relevant to the feature or task being planned. 
Use when planning user-facing features, evaluating design decisions, or brainstorming product improvements." +model: inherit +--- + + + +Context: User is planning a new feature for onboarding. +user: "I want to redesign the onboarding flow" +assistant: "I'll use the user-research-analyst agent to search for relevant personas and interview insights about onboarding experiences." +Since the user is planning a user-facing feature, search research artifacts for relevant personas and insights before proceeding. + + +Context: User is debugging a user-facing issue with exports. +user: "Users are complaining about the export feature being hard to find" +assistant: "Let me use the user-research-analyst agent to find any interview insights about export workflows and pain points." +The user is investigating a UX problem. Search research for relevant behavioral observations and workarounds. + + +Context: User is brainstorming improvements to the dashboard. +user: "We want to make the dashboard more useful for our customers" +assistant: "I'll use the user-research-analyst agent to surface relevant personas, their dashboard usage patterns, and identified opportunities." +The user is exploring improvements to a user-facing feature. Research insights will ground the brainstorm in evidence. + + + +**Note: The current year is 2026.** + +You are an expert user research analyst specializing in surfacing relevant personas, insights, and opportunities from the team's research corpus. Your mission is to find and distill applicable research findings before feature work begins, grounding product decisions in user evidence. + +## Search Strategy (Grep-First Filtering) + +The `docs/research/` directory contains personas and interview snapshots with YAML frontmatter. 
Use this efficient strategy to find relevant research: + +### Step 1: Extract Keywords from Feature Description + +From the feature/task description, identify: +- **User activities**: e.g., "dashboard", "export", "onboarding", "reporting" +- **User roles**: e.g., "manager", "analyst", "founder" +- **Pain indicators**: e.g., "slow", "confusing", "hard to find", "broken" +- **Workflow terms**: e.g., "morning routine", "weekly review", "sharing" + +### Step 2: Grep Pre-Filter (Critical for Efficiency) + +Run multiple Grep calls in parallel across both research directories: + +```bash +# Search personas (run in PARALLEL, case-insensitive) +Grep: pattern="[keyword]" path=docs/research/personas/ output_mode=files_with_matches -i=true +Grep: pattern="tags:.*(keyword1|keyword2)" path=docs/research/interviews/ output_mode=files_with_matches -i=true +Grep: pattern="role:.*(keyword)" path=docs/research/interviews/ output_mode=files_with_matches -i=true +Grep: pattern="focus:.*(keyword)" path=docs/research/interviews/ output_mode=files_with_matches -i=true +``` + +**Note:** Opportunities data lives in persona document body tables, not frontmatter. Search persona body content for opportunity keywords: +```bash +Grep: pattern="need.*[keyword]" path=docs/research/personas/ output_mode=files_with_matches -i=true +``` + +Combine results from all Grep calls to get candidate files. 
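Outside the agent's Grep tool, the same pre-filter can be sketched with plain `grep`. This is an illustrative sketch only: the fixture files and the keyword list are hypothetical stand-ins for a real corpus and for the keywords extracted in Step 1.

```shell
# Sketch of the Step 2 pre-filter using plain grep (illustrative only).
dir=$(mktemp -d)
mkdir -p "$dir/personas" "$dir/interviews"

# Tiny fixture files so the sketch runs standalone.
printf 'role: "Marketing Manager"\n\nNeeds faster export.\n' > "$dir/personas/data-driven-manager.md"
printf 'tags: [export, reporting]\nrole: "Analyst"\n' > "$dir/interviews/2026-02-10-participant-001.md"

# Hypothetical keywords for an export-related feature (from Step 1).
keywords='export|dashboard'

# -r recursive, -l filenames only, -i case-insensitive, -E extended regex.
grep -rliE "$keywords" "$dir/personas"
grep -rliE "(tags|role|focus):.*($keywords)" "$dir/interviews"
```

Real runs would target `docs/research/personas/` and `docs/research/interviews/` and fire the keyword variants in parallel, as the tool-call examples above show.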
+ +### Step 3: Read Frontmatter of Candidates + +For each candidate file, read the frontmatter (limit: 30 lines): + +```bash +Read: [file_path] with limit:30 +``` + +**For personas, extract:** name, role, company_type, interview_count, confidence, source_interviews +**For interviews, extract:** participant_id, role, company_type, focus, tags + +### Step 4: Score and Rank Relevance + +Match frontmatter fields against the feature/task description: + +**Strong matches (prioritize):** +- `role` or `tags` directly match feature keywords +- `focus` describes the relevant workflow +- Persona opportunities mention the feature area + +**Moderate matches (include):** +- Related roles or workflows +- Tags overlap with feature domain + +**Weak matches (skip):** +- No overlapping keywords, roles, or workflows + +### Step 5: Full Read of Relevant Files + +For files that pass the filter, read the complete document to extract: +- Relevant opportunities from the opportunities table +- Key quotes related to the feature +- Behavioral observations that inform design +- Divergences that indicate split user needs + +### Step 6: Always Check Recent Personas + +Regardless of keyword match results, read the most recent persona files (by `last_updated`). Personas are the primary synthesis artifacts and may contain broadly relevant insights not captured by keyword search. + +### Step 7: Fallback for Sparse Results + +If Grep returns fewer than 3 candidate files, do a broader content search: +```bash +Grep: pattern="[any feature keyword]" path=docs/research/ output_mode=files_with_matches -i=true +``` + +### Step 8: Handle Empty Research Directory + +If `docs/research/` does not exist or contains no files, return: +"No user research data found. Run `/workflows:research` to start building your research corpus." 
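The frontmatter-limited reads in Step 3 above have a cheap shell analogue: take only the first 30 lines, then keep the fields of interest. A minimal sketch — the snapshot file and its field values are hypothetical:

```shell
# Build a throwaway interview snapshot to read against (hypothetical data).
dir=$(mktemp -d)
cat > "$dir/2026-02-10-participant-001.md" <<'EOF'
---
participant_id: user-001
role: "Marketing Manager"
company_type: "B2B SaaS"
focus: "reporting workflows"
tags: [export, dashboard]
---

Full snapshot body follows and is skipped at this stage...
EOF

# Read only the first 30 lines, then keep the frontmatter fields of interest.
head -n 30 "$dir/2026-02-10-participant-001.md" \
  | grep -E '^(participant_id|role|company_type|focus|tags):'
```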
+ +## Output Format + +```markdown +## User Research Findings + +### Search Context +- **Feature/Task:** [description] +- **Keywords Used:** [tags, roles, topics searched] +- **Files Scanned:** [X personas, Y interviews] +- **Relevant Matches:** [Z files] + +### Relevant Personas + +#### [Persona Name] (confidence: high/medium/low) +- **Role:** [role] +- **Key Insight:** [most relevant finding for this task] +- **Relevant Opportunities:** [from opportunities table] +- **Divergences:** [any split findings relevant to this feature] +- **Source Interviews:** [list] + +### Key Quotes +- "[quote]" -- [participant_id], [context] +- "[quote]" -- [participant_id], [context] + +### Behavioral Observations +- [Relevant behaviors, workarounds, or patterns from interviews] + +### Research Gaps +- [Areas where research coverage is thin or missing] +- [User roles or workflows not yet studied] + +### Recommendations +- [Specific actions based on research findings] +- [Suggested research to fill gaps before building] +``` + +## Efficiency Guidelines + +**DO:** +- Use Grep to pre-filter files BEFORE reading content +- Run multiple Grep calls in PARALLEL for different keywords +- Include body content Grep for opportunity keywords in personas +- Use `-i=true` for case-insensitive matching +- Always read recent persona files regardless of keyword match +- Do a broader Grep as fallback if fewer than 3 candidates found +- Distill findings into actionable insights +- Note research gaps explicitly + +**DON'T:** +- Read all files without pre-filtering +- Run Grep calls sequentially when they can be parallel +- Skip opportunity searching in persona body content +- Return raw document contents (distill instead) +- Include tangentially related findings +- Fabricate or extrapolate beyond what research data shows + +## Integration Points + +Intended callers (to be wired in PR 2): +- `/workflows:brainstorm` Phase 1.1 -- surface research before brainstorming +- `/workflows:plan` Step 1 -- inform 
planning with user evidence + +Will run in parallel with `learnings-researcher` and `repo-research-analyst` during planning phases. diff --git a/plugins/compound-engineering/commands/workflows/research.md b/plugins/compound-engineering/commands/workflows/research.md new file mode 100644 index 00000000..071a73ed --- /dev/null +++ b/plugins/compound-engineering/commands/workflows/research.md @@ -0,0 +1,151 @@ +--- +name: workflows:research +description: Plan user research, process interview transcripts, and build personas from accumulated insights +argument-hint: "[plan|process|personas]" +--- + +# Research Workflow + +**Note: The current year is 2026.** Use this when dating research documents. + +Orchestrate the user research loop: plan studies, process interview transcripts, and synthesize personas from accumulated insights. + +## File Path Contracts + +All research artifacts follow these path conventions: + +| Artifact | Path Pattern | +|----------|-------------| +| Research plans | `docs/research/plans/YYYY-MM-DD--research-plan.md` | +| Transcripts | `docs/research/transcripts/*.md` (user-provided) | +| Interview snapshots | `docs/research/interviews/YYYY-MM-DD-participant-NNN.md` | +| Personas | `docs/research/personas/.md` | + +## Directory Setup + +Create research directories if they do not exist: + +```bash +mkdir -p docs/research/plans docs/research/transcripts docs/research/interviews docs/research/personas +``` + +Run this silently before any phase. + +## Research Phase + + #$ARGUMENTS + +**If argument matches a phase name** (`plan`, `process`, or `personas`), jump directly to that phase below. + +**If argument is unrecognized**, show the phase selection menu with a note: "Valid arguments: `plan`, `process`, `personas`." 
+ +**If argument is empty**, run phase selection: + +### Phase Selection + +Show a brief artifact status (2-3 lines max): + +``` +Research status: +- N plans, N transcripts (M unprocessed), N interviews, N personas +``` + +**Counting unprocessed transcripts:** Count files in `docs/research/transcripts/`. Then check `docs/research/interviews/` frontmatter for `source_transcript` fields. Transcripts not referenced by any interview are unprocessed. Simpler fallback: count transcripts minus count of interviews. + +**Recommend the next logical phase** based on state: +- No plans exist → recommend Plan +- Unprocessed transcripts exist → recommend Process +- Interviews exist but no personas → recommend Personas +- All phases have artifacts → show neutral menu + +Use **AskUserQuestion** with three options: +1. **Plan** -- Create a new research plan with objectives and discussion guide +2. **Process** -- Process an interview transcript into a structured snapshot +3. **Personas** -- Build or update personas from processed interviews + +Lead with the recommended option. + +--- + +## Phase 1: Plan + +Load the `research-plan` skill. + +The skill handles all research plan creation logic including objective framing, discussion guide generation, and output file creation. + +**Return contract:** The skill creates a file at `docs/research/plans/YYYY-MM-DD--research-plan.md`. + +After the skill completes, proceed to **Handoff**. + +--- + +## Phase 2: Process + +### Check for Transcripts + +Look for `.md` files in `docs/research/transcripts/`. + +**If no transcripts exist:** +Report: "No transcripts found in `docs/research/transcripts/`. Save your interview transcript as a `.md` file there, then re-run this phase." +Proceed to **Handoff**. + +**If transcripts exist:** +Identify unprocessed transcripts (not yet referenced by any interview snapshot in `docs/research/interviews/`). + +**If no unprocessed transcripts:** +Report: "All transcripts have been processed. 
Add new transcripts to `docs/research/transcripts/` or re-process an existing one." +Proceed to **Handoff**. + +**If exactly one unprocessed transcript:** +Present it with confirmation via AskUserQuestion: "Found 1 unprocessed transcript: `[filename]`. Process this one?" +Do not auto-select. + +**If multiple unprocessed transcripts:** +List them and ask the user to select via AskUserQuestion. + +### Process Selected Transcript + +Load the `transcript-insights` skill with the selected transcript path. + +The skill handles all processing logic including plan linking, metadata gathering, insight extraction, and output file creation. + +**Return contract:** The skill creates a file at `docs/research/interviews/YYYY-MM-DD-participant-NNN.md`. + +After the skill completes, proceed to **Handoff**. + +--- + +## Phase 3: Personas + +### Check for Interviews + +Look for processed interviews in `docs/research/interviews/`. + +**If no interviews exist:** +Report: "No processed interviews found in `docs/research/interviews/`. Process transcripts first with `/workflows:research process`." +Proceed to **Handoff**. + +**If interviews exist:** +Load the `persona-builder` skill. + +The skill handles persona matching, creation, merging, and output file creation. + +**Return contract:** The skill creates or updates a file at `docs/research/personas/.md`. + +After the skill completes, proceed to **Handoff**. + +--- + +## Handoff + +Announce the created or updated file path. + +If the skill completed without producing output (user abandoned or input was invalid), skip the file announcement and proceed directly to the menu. + +Use **AskUserQuestion** with three options: + +1. **Continue research** -- Return to the phase selection menu +2. **Proceed to `/workflows:brainstorm`** -- Hand off to brainstorm workflow +3. **Done for now** + +If the user selects "Continue research", return to the **Phase Selection** section above. 
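The unprocessed-transcript check described in Phase 2 can be sketched in plain shell. A minimal sketch, assuming processed snapshots record their origin in a `source_transcript` frontmatter field (the fixture names are hypothetical):

```shell
# Fixtures: two transcripts, one already processed into a snapshot.
dir=$(mktemp -d)
mkdir -p "$dir/transcripts" "$dir/interviews"
printf 'raw interview text\n' > "$dir/transcripts/alice.md"
printf 'raw interview text\n' > "$dir/transcripts/bob.md"
printf 'source_transcript: alice.md\n' > "$dir/interviews/2026-02-10-participant-001.md"

# A transcript is unprocessed when no snapshot references it.
for t in "$dir/transcripts"/*.md; do
  name=$(basename "$t")
  grep -rq "source_transcript:.*$name" "$dir/interviews" || echo "unprocessed: $name"
done
```

This per-file check is more precise than the "transcripts minus interviews" fallback, since it also handles re-processed transcripts correctly.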
diff --git a/plugins/compound-engineering/skills/persona-builder/SKILL.md b/plugins/compound-engineering/skills/persona-builder/SKILL.md new file mode 100644 index 00000000..73d31bb4 --- /dev/null +++ b/plugins/compound-engineering/skills/persona-builder/SKILL.md @@ -0,0 +1,255 @@ +--- +name: persona-builder +description: "Synthesize personas from processed interview snapshots with confidence tracking and evidence-backed opportunities. Use when processed interviews exist in docs/research/interviews/ or when building or updating personas." +--- + +# Persona Builder + +**Note: The current year is 2026.** + +Synthesize personas from processed interview snapshots. Personas are living documents that grow more confident as interviews accumulate. Follow evidence-based persona construction with confidence tracking, opportunity tables, and contradiction handling via Divergences sections. + +**Reference:** [discovery-playbook.md](./references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. + +## Quick Start + +1. Read processed interviews from `docs/research/interviews/` +2. Match to existing personas or create new ones +3. Generate or update a persona at `docs/research/personas/.md` + +## Instructions + +### Step 1: Read Available Interviews + +Scan `docs/research/interviews/` for processed interview snapshots. Read frontmatter (first 30 lines) of each file to extract: +- `participant_id` +- `role` +- `company_type` +- `focus` +- `tags` + +If no interviews exist, report: "No processed interviews found in `docs/research/interviews/`. Run `/workflows:research process` to create interview snapshots from transcripts." + +Present the user with a summary of available interviews and ask which one(s) to incorporate. If the user invoked this skill from the workflow command, a specific interview may already be identified. + +### Step 2: Match to Existing Personas + +After identifying the interview to incorporate: + +1. 
Extract `role` and `company_type` from the interview's frontmatter +2. Scan existing personas in `docs/research/personas/` for matches + +**Matching algorithm:** + +| Match Type | Criteria | Action | +|-----------|----------|--------| +| Exact match | Both `role` AND `company_type` match | Present as merge candidate | +| Partial match | `role` matches, `company_type` differs (or vice versa) | Present as possible candidate with differences highlighted | +| No match | Neither field matches | Offer to create new persona | +| Multiple matches | More than one persona matches | Present numbered list with differentiators | + +3. Present match results to the user via AskUserQuestion + +**Confirmation prompt must show:** +- Existing persona name, current interview count, confidence level +- 2-3 key characteristics of the existing persona +- The new interview's role, company type, and focus +- Option to "Create new persona" (always available) + +The user always confirms the choice. Never auto-merge. + +### Step 3a: Create New Persona + +If creating a new persona: + +1. Ask the user for a persona name (suggest a descriptive archetype name like "The Data-Driven Manager" or "The Hands-On Founder") +2. Build the persona from the selected interview(s) +3. Set `confidence: low` (single interview), `version: 1` +4. Write to `docs/research/personas/.md` + +Ensure the `docs/research/personas/` directory exists before writing. + +### Step 3b: Merge into Existing Persona + +If merging into an existing persona, follow the field-by-field update rules below. + +Read the full existing persona document before merging. + +## Merge Specification + +### Field-by-Field Update Rules + +| Field Category | Update Strategy | +|---------------|----------------| +| **Metadata** (`last_updated`, `interview_count`, `confidence`, `version`, `source_interviews`) | Always auto-update. Increment version, append participant_id to source_interviews, recalculate confidence. 
| +| **Persona name and role** | Preserve unless user explicitly requests change. | +| **Goals** | Append new goals not already listed. Flag potential duplicates with `[Review: possible overlap with Goal #N]`. | +| **Frustrations** | Append new frustrations. Flag potential duplicates with `[Review: possible overlap with Frustration #N]`. | +| **Behaviors** | Update participant counts as `(N/M participants)` where M = total interview count. When a behavior is NOT mentioned in the new interview, do NOT change its count (absence is not evidence). Add new behaviors. | +| **Quotes** | Add the single most representative new quote. Keep total at 5-7 max. If at cap, note "Additional quotes in source interviews." | +| **Opportunities table** | Add new rows. Update evidence strength counts for existing rows only when the new interview explicitly addresses that opportunity. | +| **Evidence section** | Always append new participant_id and research plan. | + +### Confidence Thresholds + +| Interview Count | Confidence Level | +|----------------|-----------------| +| 1 | low | +| 2-3 | medium | +| 4+ | high | + +### Contradiction Handling + +When a new interview contradicts an existing finding, do NOT silently update counts. Instead: + +1. Keep both data points with their evidence counts +2. Add to the `## Divergences` section: + +```markdown +## Divergences + +| Finding | Majority View | Minority View | Split | +|---------|--------------|---------------|-------| +| [Topic] | [View] (N/M) | [Contradicting view] (N/M) | N:N | +``` + +3. When divergences reach 40/60 split or closer, flag for potential persona segmentation: `[Flag: Consider splitting this persona -- [finding] shows near-even split]` +4. 
Surface contradictions in the merge confirmation prompt so the user is aware before confirming + +### Evidence Strength Thresholds + +| Strength | Criteria | +|----------|---------| +| Weak | Less than 33% of participants, or only 1 interview | +| Medium | 33-66% of participants | +| Strong | 67%+ of participants | + +### Hypothesis Status Transitions + +| Status | Criteria | +|--------|---------| +| SUPPORTED | 75%+ of evidence supports | +| MIXED | 40-75% support | +| CHALLENGED | Less than 40% support | +| NEW | Emerged from this interview, no prior evidence | + +## Output Template + +```markdown +--- +name: "[Descriptive archetype name]" +role: "[Primary job title or function]" +company_type: "[Industry or company category]" +last_updated: YYYY-MM-DD +interview_count: N +confidence: low / medium / high +source_interviews: [user-001, user-003, user-005] +version: N +--- + +# [Persona Name] + +## Overview + +[2-3 paragraph narrative description of this persona -- who they are, what drives them, and how they work. Ground in evidence from interviews.] + +## Goals + +1. [Goal with evidence count] (N/M participants) +2. [Goal] (N/M participants) + +## Frustrations + +1. [Frustration with evidence count] (N/M participants) +2. [Frustration] (N/M participants) + +## Behaviors + +| Behavior | Frequency | Evidence | +|----------|-----------|----------| +| [What they do] | [Daily/Weekly/etc.] | (N/M participants) | +| [What they do] | [Frequency] | (N/M participants) | + +## Key Quotes + +> "[Representative quote]" +> -- user-001, [context] + +> "[Representative quote]" +> -- user-003, [context] + +[Cap at 5-7 quotes. Additional quotes in source interviews.] 
+ +## Opportunities + +| # | Opportunity | Evidence Strength | Participants | Key Quote | +|---|-----------|------------------|-------------|-----------| +| 1 | Users need a way to [outcome] | Strong / Medium / Weak | user-001, user-003 | "[Quote]" | +| 2 | Users need a way to [outcome] | Strong / Medium / Weak | user-005 | "[Quote]" | + +## Divergences + +_No divergences identified yet._ + +[Or, when contradictions exist:] + +| Finding | Majority View | Minority View | Split | +|---------|--------------|---------------|-------| +| [Topic] | [View] (N/M) | [Contradicting view] (N/M) | N:N | + +## Evidence + +| Participant | Research Plan | Date | Focus | +|------------|--------------|------|-------| +| user-001 | [plan-slug] | YYYY-MM-DD | [Interview focus] | +| user-003 | [plan-slug] | YYYY-MM-DD | [Interview focus] | + +## Human Review Checklist + +- [ ] Goals and frustrations grounded in interview evidence +- [ ] Behavior counts accurate (absence not counted as negative) +- [ ] Quotes are exact (verified against source interviews) +- [ ] Opportunities framed as needs, not solutions +- [ ] Divergences section reflects actual contradictions +- [ ] Confidence level matches interview count threshold +``` + +## Examples + +**Example persona creation (from single interview):** + +Interview frontmatter: `role: Marketing Manager`, `company_type: B2B SaaS` + +Suggested persona name: "The Data-Driven Manager" +Confidence: low (1 interview) +All behaviors listed as (1/1 participants) + +**Example merge scenario:** + +Existing persona: "The Data-Driven Manager" (2 interviews, medium confidence) +New interview: `role: Marketing Manager`, `company_type: B2B SaaS` + +Match type: Exact match +Confirmation prompt shows: +- "The Data-Driven Manager" -- 2 interviews, medium confidence +- Key characteristics: morning dashboard routine, exports data weekly, manages team of 5 +- New interview: Marketing Manager at B2B SaaS, focus: reporting workflows + +After merge: interview_count: 
3, confidence: medium, version: 3 + +**Example contradiction handling:** + +Existing finding: "Checks dashboard first thing in the morning" (2/2 participants) +New interview: Participant checks dashboard after standup, not first thing + +Result in Divergences table: + +| Finding | Majority View | Minority View | Split | +|---------|--------------|---------------|-------| +| Morning dashboard check | Check first thing (2/3) | Check after standup (1/3) | 2:1 | + +Behavior table updated: "Checks dashboard in the morning" (3/3 participants) -- all check it, timing differs. Divergence captures the timing disagreement. + +## Privacy Note + +Personas use anonymized participant IDs. Do not include real names or identifying details. The persona archetype name should be descriptive of the role, not the individual. diff --git a/plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md b/plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md new file mode 100644 index 00000000..ca626bc5 --- /dev/null +++ b/plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md @@ -0,0 +1,414 @@ +# Continuous Product Discovery Playbook +## A Best-Practices Guide for Product Managers & UX Researchers + +*Structuring Interviews, Extracting Insights from Transcripts, and Building a Sustainable Discovery System* + +--- + +## 1. Foundational Principles + +### 1.1 What Is Continuous Discovery? + +Continuous discovery is an approach where product teams maintain **at minimum, weekly touchpoints with customers**, conducting small research activities in pursuit of a desired product outcome (Teresa Torres, *Continuous Discovery Habits*). It replaces the outdated "big-bang research phase" with a persistent feedback loop that runs alongside delivery. 
+ +**Core tenets:** +- **Outcome-focused, not output-focused.** Every discovery activity ties back to a measurable outcome (e.g., reduce churn, increase activation), not a feature request. +- **Weekly cadence.** Small, frequent interactions compound into deep user intuition over time. +- **Cross-functional ownership.** Discovery is co-owned by the **product trio** — a Product Manager, a Designer/UX Researcher, and an Engineer — who participate together in interviews and synthesis. +- **Lightweight and sustainable.** Discovery should not require elaborate study designs every week. Adapt methods to fit the time available. + +### 1.2 Why It Matters + +- **Healthier backlog:** Prioritization is grounded in real user evidence, not opinions or loudest-voice requests. +- **Lower cost of learning:** You discover problems with rough sketches and conversations, not after building and shipping. +- **Less reactive culture:** Teams spot opportunities before they become urgent escalations. +- **Compounding product judgment:** Persistent exposure to customers builds stronger intuition across the entire team. + +--- + +## 2. Setting Up a Discovery System + +### 2.1 Start With a Clear Outcome + +Before touching an interview guide, align your product trio on: +- **What behavior are we trying to change or improve?** +- **How does this tie into our OKRs / North Star metric?** +- **What do we need to learn to make a better decision?** + +Ground discovery in an outcome, not a feature. This prevents the trap of running interviews to validate a solution you've already committed to. 
+ +### 2.2 Assemble the Product Trio + +| Role | Discovery Responsibility | +|---|---| +| **Product Manager** | Defines the outcome, prioritizes opportunities, owns the Opportunity Solution Tree | +| **UX Researcher / Designer** | Designs interview guides, moderates sessions, leads synthesis | +| **Engineer** | Assesses feasibility, participates in interviews (builds empathy for constraints and possibilities) | + +All three should attend interviews together whenever possible. Shared exposure eliminates the "telephone game" that happens when one person interviews and then reports findings to others. + +### 2.3 Automate Recruiting + +Recruiting is the #1 reason continuous interviewing fails. If scheduling is manual, the habit dies within weeks. Automate it so that interviews appear on your calendar every week without effort. + +**Proven recruiting channels:** + +| Channel | Best For | How It Works | +|---|---|---| +| **In-app intercepts** (e.g., Ethnio, Orbital) | Consumer & SaaS products | A pop-up screener appears inside the product; qualifying users schedule a call | +| **Customer support triggers** | Enterprise / B2B | Support agents flag specific scenarios and route users to the product team | +| **Insider connections** | Enterprise with named accounts | CSMs or account managers introduce product team to specific contacts | +| **Email campaigns** | Broad base | Targeted email to specific segments offering an incentive for 30 min of time | +| **Paid recruiting panels** | Hard-to-reach users | Services like UserInterviews, Respondent, or Prolific | + +**Key automation elements:** +- **Targeting:** Recruit the *right* users at the *right* time (e.g., users who completed onboarding 2+ weeks ago). +- **Screener questions:** Qualify in/out based on criteria relevant to your current outcome. +- **Automated reminders:** Email + SMS reminders reduce no-shows. +- **Self-scheduling:** Let participants pick from available calendar slots (Calendly, SavvyCal, etc.). 
+ +--- + +## 3. Structuring Discovery Interviews + +### 3.1 Research Questions vs. Interview Questions + +A critical distinction (from Teresa Torres): +- **Research questions** = what you want to *learn* (e.g., "How often do users watch Netflix?"). +- **Interview questions** = what you actually *ask* (e.g., "Tell me about the last time you watched Netflix."). + +Research questions often make terrible interview questions. They encourage short, speculative, System 1 answers. Transform your research questions into **story-based prompts** that ground the participant in specific past behavior. + +### 3.2 The Mom Test (Rob Fitzpatrick) + +Three rules to ensure you get truthful, useful data — even from people who want to be polite: + +1. **Talk about their life instead of your idea.** Don't pitch; explore their reality. +2. **Ask about specifics in the past instead of generics or opinions about the future.** "Tell me about the last time…" beats "Would you ever…?" +3. **Talk less and listen more.** Your job is to extract signal, not to fill silence. + +**Deflect bad data:** +- **Compliments** ("That sounds really cool!") → Redirect: "Thanks — but tell me more about how you handle this today." +- **Fluff / generalities** ("I usually…" / "I always…") → Anchor: "When did that last happen? Walk me through it." +- **Hypothetical promises** ("I would definitely pay for that") → Dig: "What have you tried so far to solve this?" + +**Pre-interview discipline:** Before every conversation, write down the **three most important things you need to learn**. This keeps interviews focused and prevents aimless chatting. + +### 3.3 Story-Based Interviewing (Teresa Torres) + +The most reliable method for uncovering goals, context, and unmet needs. Instead of asking about general experiences, ask for **specific stories about past behavior**. + +**Why stories work:** +- They activate **System 2 thinking** (deliberate recall), producing more accurate answers than fast System 1 generalizations. 
+- They surface **context** — when, where, why, what device, what mood, who else was involved. +- They reveal **needs, pain points, and desires** (collectively: **opportunities**) that the participant may not even be consciously aware of. + +**The core prompt structure:** + +> "Tell me about the last time you [did the relevant activity]." +> "Tell me about a specific time when [relevant scenario]." + +**Interview flow:** + +| Phase | Duration | Purpose | Techniques | +|---|---|---|---| +| **Warm-up** | 2–3 min | Build rapport, set expectations | Easy personal questions; explain the purpose; reassure there are no right/wrong answers | +| **Story collection** | 15–20 min | Collect 1–2 specific stories about past behavior | "Tell me about the last time…"; follow up with "What happened next?"; gently redirect generalizations back to the specific instance | +| **Deepening** | 5–8 min | Explore pain points, workarounds, emotional context | "Tell me more about that"; "Why was that important?"; "How did you feel at that point?"; "What did you do next?" | +| **Wrap-up** | 2–3 min | Catch anything missed; close gracefully | "Is there anything else I should have asked?"; "Who else should I talk to?" | + +**Active listening techniques:** +- **Echoing:** Repeat the participant's last few words as a question to prompt elaboration. +- **Mirroring:** Match their body language and tone to build trust. +- **Comfortable silence:** Don't rush to fill pauses — participants often volunteer their best insights after a beat of silence. +- **Redirect generalizations:** When participants drift into "I usually…" or "I tend to…", gently guide back: "Can you think of a specific time that happened?" + +### 3.4 Question Bank: Good vs. Bad Questions + +| ❌ Avoid (Speculative / Leading / Closed) | ✅ Use Instead (Story-Based / Open-Ended) | +|---|---| +| "Would you use a feature that does X?" | "Tell me about the last time you tried to accomplish [goal]. What happened?" 
| +| "Do you like our product?" | "Walk me through the last time you used [product]. Start from the beginning." | +| "How often do you do X?" | "Tell me about the most recent time you did X. When was it? What was happening?" | +| "What's your biggest pain point?" | "Tell me about a time when [relevant task] was really frustrating. What happened?" | +| "What would your dream product do?" | "How are you solving this problem today? What have you tried?" | +| "Would you pay $X for Y?" | "Where does the money come from for tools like this? What's the buying process?" | +| "Do you think having the button on the left makes you less likely to click?" | "Walk me through what you did on this page. Was it easy to complete your task? Why or why not?" | + +### 3.5 Preparing the Discussion Guide + +A discussion guide is **flexible, not a rigid script**. It ensures you cover key topics while leaving room to follow interesting threads. + +**Structure:** +1. **Research goal** (1–2 sentences): What outcome are we learning about? +2. **Screening criteria:** Who qualifies for this interview? +3. **Warm-up questions** (2–3): Easy openers to build rapport. +4. **Story prompts** (2–3): Core story-based questions tied to your research goal. +5. **Follow-up / probing questions** (5–8): Nested under each story prompt — use as needed. +6. **Wrap-up questions** (1–2): "Anything else?" and referral questions. +7. **Debrief checklist:** Reminders for what to capture in your interview snapshot immediately after. + +--- + +## 4. Synthesizing Interviews: The Interview Snapshot + +### 4.1 Why Immediate Synthesis Matters + +> "Synthesize each interview immediately after it ends. Capture your thoughts while they're fresh, rather than assuming you'll revisit the recording or notes later." — Teresa Torres + +Memory degrades rapidly. Schedule **15 minutes immediately after every interview** for synthesis. The product trio should do this together — co-creation builds shared understanding. 
+ +### 4.2 The Interview Snapshot (Teresa Torres) + +A **one-page summary** that makes each interview memorable, actionable, and reference-able. The product trio collaborates to complete it in 15–20 minutes post-interview. + +**Seven components:** + +| Component | What to Capture | +|---|---| +| **1. Name & Photo** | Identify and remember the participant | +| **2. Quick Facts** | Key context: role, segment, tenure, relevant demographics | +| **3. Memorable Quote** | A single quote that captures the essence of the story — helps trigger recall later | +| **4. Experience Map** | A simple visual timeline of the story they told (beginning → middle → end) with key moments marked | +| **5. Opportunities** | Unmet needs, pain points, and desires that surfaced during the story | +| **6. Insights** | Interesting learnings that aren't yet opportunities but may become relevant later | +| **7. Follow-up Items** | Open questions, things to verify, people to talk to next | + +**Templates available in:** Miro (Product Talk template), FigJam, Google Slides, PowerPoint, Keynote. + +**Key principle:** The snapshot is **synthesis, not transcription**. You are distilling meaning, not capturing every word. + +--- + +## 5. Extracting Insights from Transcripts + +### 5.1 Transcription First + +Before analysis, convert recordings to searchable text. This is the foundation for all downstream work. 
+ +| Method | Best For | Considerations | +|---|---|---| +| **Automated transcription** (Otter.ai, Rev, Dovetail, Condens) | Speed; most use cases | Review critical sections manually — AI struggles with names, jargon, crosstalk | +| **Human transcription** | High-stakes research; heavy accents/jargon | More accurate but slower and more expensive | +| **Hybrid** | Enterprise research | Auto-transcribe first, then human-proofread key sections | + +**Essential metadata per session:** +- Session ID (stable, unique code) +- Date and type (interview, usability test, support call) +- Participant profile fields (role, segment, plan tier, region) +- Moderator/researcher and study name +- Consent/usage notes +- Links to recording and transcript files + +### 5.2 Highlighting: Capture Atomic Evidence + +Before tagging or theming, **highlight** the meaningful moments in each transcript. Each highlight should be an **atomic evidence unit** — a single observation, quote, or behavior that can stand alone. + +**What to highlight:** +- Direct quotes expressing needs, pain points, or desires +- Descriptions of behavior (what the participant actually did) +- Emotional reactions (frustration, surprise, delight) +- Workarounds and hacks (signals of unmet needs) +- Contradictions between stated preferences and actual behavior + +**Principle:** Highlight first, tag second, synthesize third. Don't jump to themes too early. + +### 5.3 Coding / Tagging + +Coding (or tagging) is the process of labeling highlights to enable pattern discovery across multiple interviews. 
+ +**Two approaches:** + +| Approach | Description | When to Use | +|---|---|---| +| **Deductive (top-down)** | Define codes *before* reviewing data, based on research questions and hypotheses | When you have specific questions to answer; faster for time-constrained projects | +| **Inductive (bottom-up)** | Let codes emerge *from* the data as you review | When exploring new territory; prevents premature categorization | +| **Hybrid** | Start with a small set of deductive codes, then add inductive codes as new themes emerge | Most common in practice; balances speed and openness | + +**Practical tagging taxonomy:** + +| Tag Category | Examples | Purpose | +|---|---|---| +| **Descriptive** | Location, device, role, task, feature area | Organize by context | +| **Emotional** | Frustration, delight, confusion, surprise | Build empathy; identify emotional peaks | +| **Behavioral** | Workaround, abandonment, comparison shopping, habit | Surface actual behavior patterns | +| **Need/Pain Point** | Unmet need, pain point, desire, blocker | Feed directly into opportunities | +| **Evaluative** | Like, dislike, strong preference, indifference | Capture sentiment toward specific elements | + +**Best practices for a shared codebook:** +- Keep the tag set small (15–25 tags) and expand only when needed. +- Write a 1-sentence definition for each tag so teammates apply them consistently. +- Review and consolidate tags periodically — merge synonyms, retire unused tags. +- Use a shared tool (Dovetail, Condens, Notion, or even a spreadsheet) so the whole team sees the same taxonomy. + +### 5.4 Affinity Mapping & Thematic Analysis + +Once you've highlighted and tagged across multiple interviews, affinity mapping helps you see the patterns. + +**Step-by-step process:** + +1. **Gather all highlights** — Pull tagged quotes, observations, and notes from all interviews onto a shared surface (digital whiteboard, Miro, FigJam, or physical sticky notes). +2. 
**Group by similarity** — Move items that feel related near each other. Don't overthink categories yet — trust your intuition. +3. **Name the clusters** — Once groups form, give each a descriptive label that captures the theme (e.g., "Users distrust automated recommendations," "Onboarding feels overwhelming in week 1"). +4. **Look for hierarchy** — Some clusters may be sub-themes of larger themes. Nest them. +5. **Quantify (loosely)** — Note how many participants contributed to each theme and from which segments. This isn't statistical analysis — it's pattern recognition. +6. **Identify outliers** — Don't ignore insights that don't fit neatly. Outliers can signal emerging opportunities. +7. **Document** — Write a theme statement for each cluster, supported by 2–3 representative quotes with source references. + +**Watch out for bias:** +- **Confirmation bias:** Gravitating toward themes that confirm your hypotheses. +- **Recency bias:** Over-weighting the most recent interviews. +- **Loudness bias:** Giving more weight to articulate or emotionally expressive participants. + +Affinity mapping in a group (the product trio + stakeholders) helps counter individual bias through diverse perspectives. + +### 5.5 Atomic Research Nuggets + +For teams running continuous discovery over months/years, the **atomic research** approach (developed by Tomer Sharon and Daniel Pidcock) prevents insights from getting buried in reports. + +**What is a nugget?** +A nugget is the smallest indivisible unit of research insight: +- **Observation:** A single finding or fact (e.g., "3 of 5 users abandoned the wizard at step 3") +- **Evidence:** The source data that supports it (quote, timestamp, video clip) +- **Tags:** Metadata for searchability (feature area, user segment, research study) + +**Why nuggets work:** +- They are **reusable** across projects — you don't re-run the same research because someone didn't read last year's report. 
+- They are **searchable** — stakeholders can self-serve insights from a research repository. +- They are **composable** — multiple nuggets combine into higher-level insights and themes. + +**Storage:** Use a research repository tool (Dovetail, Condens, EnjoyHQ, Notion) or a structured spreadsheet with consistent tagging. + +--- + +## 6. From Insights to Action: The Opportunity Solution Tree + +### 6.1 What Is an Opportunity Solution Tree (OST)? + +The Opportunity Solution Tree, popularized by Teresa Torres, is a visual framework that connects: + +``` +Outcome (metric) + └── Opportunities (needs, pain points, desires) + └── Solutions (ideas to address opportunities) + └── Experiments (tests to validate solutions) +``` + +It ensures every solution traces back to a real customer opportunity, which traces back to a measurable business outcome. + +### 6.2 How to Build an OST + +1. **Set the outcome** — Place your target metric at the top of the tree (e.g., "Increase weekly active users by 15%"). +2. **Create an experience map** — Have each member of the product trio draw what they believe the current customer experience looks like. Merge into a shared map. Gaps in the map guide your interviews. +3. **Map opportunities from interview snapshots** — Every 3–4 interviews, review your snapshots and pull out the opportunities (needs, pain points, desires). Place them on the tree under the relevant moment in the experience map. +4. **Structure the opportunity space** — Group and nest related opportunities. Parent opportunities are broad (e.g., "Users struggle to find relevant content"); child opportunities are more specific (e.g., "Search results don't account for past viewing history"). +5. **Select a target opportunity** — Compare and contrast opportunities. Choose one that is solvable, impactful, and aligned with your outcome. +6. **Generate solutions** — Brainstorm multiple solutions for the target opportunity (divergent thinking). Don't commit to the first idea. +7. 
**Design experiments** — For each promising solution, identify the riskiest assumption and design a small test to validate or invalidate it.
+8. **Iterate** — As you learn, revise the tree. New interviews add new opportunities. Failed experiments redirect you to alternative solutions.
+
+### 6.3 Common Pitfalls
+
+- **Framing opportunities as solutions.** "We need a better search bar" is a solution. The opportunity is "Users can't find content relevant to their interests." Practice separating the two.
+- **Overreacting to the latest interview.** The tree prevents this by providing a big-picture view. One interview = one data point. Update the tree after every 3–4 interviews, not after every single one.
+- **Skipping opportunity mapping.** Teams that jump from interview to solution miss the chance to compare opportunities strategically.
+- **Setting the wrong outcome.** If your outcome isn't connected to business strategy, the whole tree drifts. Re-validate your outcome quarterly.
+
+---
+
+## 7. Tools for Continuous Discovery
+
+| Category | Tools | Purpose |
+|---|---|---|
+| **Recruiting & Scheduling** | Ethnio, Orbital, Great Question, Calendly, UserInterviews | Automate participant recruitment and scheduling |
+| **Video & Transcription** | Zoom, Grain, Otter.ai, Rev, Descript | Record interviews and generate transcripts |
+| **Research Repository & Analysis** | Dovetail, Condens, EnjoyHQ, Notion, Airtable | Store, tag, search, and synthesize research data |
+| **Synthesis & Mapping** | Miro, FigJam, MURAL | Interview snapshots, affinity maps, experience maps, OSTs |
+| **Opportunity Solution Trees** | Miro (Product Talk templates), Vistaly, ProductBoard | Visualize and manage the opportunity space |
+| **AI-Assisted Analysis** | Dovetail AI, Condens AI, ChatGPT, Claude | Auto-transcription, auto-tagging, summarization (always human-validate) |
+
+**A note on AI tools:** AI can speed up transcription, suggest tags, and draft theme summaries. However, **do not rely on AI exclusively for synthesis** (per Teresa Torres). The act of personally reviewing conversations and identifying patterns is where deep understanding forms. Use AI to surface things you might overlook, not to replace your thinking.
+
+---
+
+## 8. Building the Habit: Making Discovery Stick
+
+### 8.1 Weekly Cadence Template
+
+| Day | Activity | Time |
+|---|---|---|
+| **Monday** | Review upcoming interview schedule (auto-populated) | 5 min |
+| **Tuesday** | Conduct interview #1; complete interview snapshot | 45–60 min |
+| **Thursday** | Conduct interview #2; complete interview snapshot | 45–60 min |
+| **Friday** | Cross-interview synthesis: update OST, review patterns | 30–45 min |
+
+This is approximately **2–3 hours per week** — roughly 5–7% of each trio member's working hours.
+
+### 8.2 Protect Discovery Time
+
+- **Treat discovery like sprint planning** — it's not optional; it's on the calendar.
+- **Batch interviews** — Don't spread them across random slots. Dedicated blocks reduce context-switching.
+- **Rotate moderation** — Each trio member should take turns leading interviews to build shared capability.
+- **Share snapshots visibly** — Post them in a team channel (Slack, Teams) or a shared Miro board so stakeholders stay informed without attending every session.
+
+### 8.3 Scaling Across Teams
+
+- **Create a shared codebook** — Standard tags and definitions across teams enable cross-team insight discovery.
+- **Maintain a centralized research repository** — All snapshots, nuggets, and themes live in one searchable place.
+- **Run periodic "insight jams"** — Monthly sessions where multiple trios review each other's OSTs and cross-pollinate opportunities.
+- **Train PMs and designers on story-based interviewing** — The skill gap is the bottleneck, not the process.
+
+---
+
+## 9. 
Quick-Reference Checklists + +### Pre-Interview Checklist +- [ ] Outcome defined and agreed upon by the product trio +- [ ] Discussion guide prepared (2–3 story prompts, follow-up questions) +- [ ] Participant recruited and confirmed (screener passed) +- [ ] Recording tool set up and tested +- [ ] Trio roles assigned (moderator, note-taker, observer) +- [ ] Three most important learning goals written down (The Mom Test) + +### During-Interview Checklist +- [ ] Warm-up complete; participant is comfortable +- [ ] Collecting specific stories about past behavior (not opinions about the future) +- [ ] Redirecting generalizations back to specifics +- [ ] Using active listening (echoing, silence, "tell me more") +- [ ] Not pitching solutions or leading the witness +- [ ] Capturing timestamps of key moments for later reference + +### Post-Interview Checklist +- [ ] Interview snapshot completed within 15–20 minutes +- [ ] Opportunities and insights documented +- [ ] Experience map drawn for the story collected +- [ ] Snapshot shared with the team +- [ ] Follow-up items logged +- [ ] Opportunities added to the Opportunity Solution Tree (after every 3–4 interviews) + +### Transcript Analysis Checklist +- [ ] Transcript reviewed and cleaned (names, jargon corrected) +- [ ] Key moments highlighted as atomic evidence units +- [ ] Highlights tagged using shared codebook +- [ ] Themes identified through affinity mapping +- [ ] Themes documented with supporting quotes and source references +- [ ] Findings connected to existing opportunities on the OST +- [ ] Insights stored in research repository for future reference + +--- + +## 10. 
Recommended Reading & Sources + +| Resource | Author | Key Contribution | +|---|---|---| +| *Continuous Discovery Habits* | Teresa Torres | The definitive framework for weekly customer touchpoints, interview snapshots, and Opportunity Solution Trees | +| *The Mom Test* | Rob Fitzpatrick | Rules for asking questions that produce truthful, useful answers | +| Product Talk Blog (producttalk.org) | Teresa Torres | Story-based interviewing, opportunity mapping, and OST deep dives | +| NN/g User Interviews 101 | Nielsen Norman Group | Foundational interviewing methodology for UX researchers | +| *Thinking, Fast and Slow* | Daniel Kahneman | Understanding System 1 vs. System 2 thinking and why story-based questions produce better data | +| Atomic Research | Tomer Sharon & Daniel Pidcock | Breaking research into reusable, searchable nuggets | +| Dovetail/Condens Workflows | Various | Practical transcript-to-theme synthesis workflows | + +--- + +*This playbook is a living document. Update it as your team's discovery practice matures. The goal is not perfection — it's a sustainable habit of learning from your customers every single week.* diff --git a/plugins/compound-engineering/skills/research-plan/SKILL.md b/plugins/compound-engineering/skills/research-plan/SKILL.md new file mode 100644 index 00000000..2d4551ce --- /dev/null +++ b/plugins/compound-engineering/skills/research-plan/SKILL.md @@ -0,0 +1,223 @@ +--- +name: research-plan +description: "Create structured research plans with outcome-focused objectives, discussion guides, and screener questions. Use when planning user interviews, customer research, or discovery work." +--- + +# Research Plan + +**Note: The current year is 2026.** + +Create structured research plans grounded in Teresa Torres' Continuous Discovery Habits and Rob Fitzpatrick's Mom Test methodology. Plans focus on outcomes (not outputs), story-based interviewing, and past behavior over future speculation. 
+ +**Reference:** [discovery-playbook.md](./references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. + +## Quick Start + +1. Ask the user for the research objective (what outcome or decision this research will inform) +2. Identify target participants and screener criteria +3. Generate a research plan at `docs/research/plans/YYYY-MM-DD--research-plan.md` + +## Instructions + +### Step 1: Define the Research Objective + +Ask the user what outcome this research will inform. Reframe feature-level requests into outcome-level objectives. + +**Reframing examples:** +- "We want to add a dashboard" → "Understand how users monitor their key metrics and where current tools fall short" +- "Users want export to PDF" → "Understand the end-to-end workflow when users share data with stakeholders" + +Identify 2-4 hypotheses to test. Frame hypotheses as falsifiable statements about user behavior: +- "Users check their dashboard first thing in the morning" +- "Export is primarily used for sharing with non-users" + +### Step 2: Define the "Three Most Important Things to Learn" + +Distill the research objective into exactly three questions. These anchor every interview: +1. What is the current behavior? (past actions, not future intent) +2. What pain points exist in the current workflow? +3. What outcomes matter most to participants? + +### Step 3: Identify Participant Criteria + +Define who to interview: +- Role or job function +- Company type or industry +- Specific behaviors or usage patterns that qualify them +- Exclusion criteria (who should NOT be interviewed) + +Write 3-5 screener questions that filter for the right participants. Screeners should identify actual behavior, not self-reported preferences: +- "How many times did you export data last month?" (concrete, verifiable) +- NOT "Do you find exporting useful?" 
(opinion, not behavior) + +### Step 4: Create the Discussion Guide + +Build a story-based discussion guide following these principles: + +**Opening (2-3 minutes):** +- Establish rapport +- Explain the format: "Tell me about the last time you..." +- No pitching, no leading questions + +**Story Elicitation (15-20 minutes):** +- Start with a specific recent experience: "Walk me through the last time you [relevant activity]" +- Follow the story arc: trigger → actions → obstacles → outcome +- Drill into specifics with Mom Test questions: + - "What happened next?" + - "How did you handle that?" + - "What did you do instead?" (for workarounds) + - "Can you show me?" + +**Depth Probes (5-10 minutes):** +- Explore motivations: "Why was that important?" +- Surface latent needs: "What would change if that were easier?" +- Validate hypotheses with past behavior: "Has that happened before?" + +**Closing (2-3 minutes):** +- "Is there anything else about [topic] that I should have asked about?" +- Ask for referrals if recruiting more participants + +**Mom Test Rules (apply throughout):** +- Ask about past behavior, never future intent +- Ask about specifics, not generalizations +- Listen for emotional signals (frustration, excitement, resignation) +- Never pitch or describe a solution during the interview +- Compliments and hypothetical commitments are not data + +### Step 5: Set Sample Size and Schedule + +Recommend a sample size based on research goals: +- **Exploratory research** (understanding problem space): 5-8 participants +- **Evaluative research** (testing specific hypothesis): 3-5 participants +- **Continuous discovery** (ongoing learning): 1-2 per week + +### Step 6: Write the Plan + +Generate the research plan file at `docs/research/plans/YYYY-MM-DD--research-plan.md`. + +Ensure the `docs/research/plans/` directory exists before writing. 
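The directory check in Step 6 can be sketched in shell. This is a minimal, hypothetical scaffold: the `$(date +%F)` slug and the stub frontmatter are illustrative assumptions, and the skill itself fills in the full output template.

```shell
# Ensure the plans directory exists (no-op if already present)
mkdir -p docs/research/plans

# Date-stamped filename following the YYYY-MM-DD--research-plan.md convention
plan_file="docs/research/plans/$(date +%F)--research-plan.md"

# Refuse to clobber an existing plan for the same day
if [ -e "$plan_file" ]; then
  echo "Plan already exists: $plan_file" >&2
else
  # Stub frontmatter only; the real skill writes the full template
  printf '%s\n' '---' "date: $(date +%F)" 'status: planned' '---' > "$plan_file"
fi
```

Running it twice on the same day leaves the first file untouched, which matches the expectation that one plan anchors a study.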
+ +## Output Template + +```markdown +--- +title: "[Research objective - short descriptive title]" +date: YYYY-MM-DD +status: planned +outcome: "[The outcome or decision this research informs]" +hypotheses: + - "[Hypothesis 1 - falsifiable statement about user behavior]" + - "[Hypothesis 2]" +participant_criteria: "[Role/behavior/company type criteria]" +sample_size: N +interviews_completed: 0 +--- + +# [Research Plan Title] + +## Objective + +[1-2 paragraphs describing the research outcome, why it matters, and what decisions it will inform] + +## Three Most Important Things to Learn + +1. [Question about current behavior] +2. [Question about pain points] +3. [Question about desired outcomes] + +## Hypotheses + +| # | Hypothesis | Status | +|---|-----------|--------| +| 1 | [Falsifiable statement] | UNTESTED | +| 2 | [Falsifiable statement] | UNTESTED | + +## Participant Criteria + +**Include:** +- [Criterion 1 - based on behavior] +- [Criterion 2] + +**Exclude:** +- [Exclusion 1] + +### Screener Questions + +1. [Behavior-based screener question] +2. [Behavior-based screener question] +3. [Behavior-based screener question] + +## Discussion Guide + +### Opening (2-3 min) + +- Introduce yourself and the purpose (learning, not selling) +- "I'd love to hear about your experience with [topic]. There are no wrong answers." + +### Story Elicitation (15-20 min) + +**Primary story prompt:** +> "Walk me through the last time you [relevant activity]." + +**Follow-up probes:** +- "What happened next?" +- "How did you handle that?" +- "What were you trying to accomplish?" +- "What made that difficult?" +- "What did you do instead?" + +### Depth Probes (5-10 min) + +- [Hypothesis-specific probe 1] +- [Hypothesis-specific probe 2] +- "Why was that important to you?" +- "Has that happened before? How often?" + +### Closing (2-3 min) + +- "Is there anything about [topic] I should have asked?" +- "Who else should I talk to about this?" 
+ +## Post-Interview Checklist + +- [ ] Write interview snapshot within 24 hours (run `/workflows:research process`) +- [ ] Note top 3 surprises from this interview +- [ ] Update hypothesis status in this plan +- [ ] Identify follow-up questions for next interview +- [ ] Add new screener criteria if participant fit was imperfect + +## Schedule + +| # | Participant | Date | Status | +|---|-----------|------|--------| +| 1 | TBD | TBD | Scheduled | + +## Human Review Checklist + +- [ ] Objective is outcome-focused (not feature-focused) +- [ ] Hypotheses are falsifiable statements about behavior +- [ ] Screener questions ask about past behavior, not opinions +- [ ] Discussion guide follows story-based structure +- [ ] No leading questions or solution pitching in guide +- [ ] Sample size appropriate for research type +``` + +## Examples + +**Example objective reframing:** + +Input: "We need research for our new reporting feature" +Output objective: "Understand how teams currently create, share, and act on data reports, and where the workflow breaks down" + +**Example screener (good vs. bad):** + +| Quality | Question | +|---------|----------| +| Good | "How many reports did you create last month?" | +| Good | "Walk me through what you did after your last monthly review meeting." | +| Bad | "Do you think reporting is important?" | +| Bad | "Would you use a better reporting tool?" | + +## Privacy Note + +Consider adding `docs/research/transcripts/` to `.gitignore` if transcripts contain personally identifiable information. Research plans and processed insights (with anonymized participant IDs) are generally safe to commit. 
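A hedged shell sketch of that suggestion; the path follows this skill's layout, and the `grep` guard is an assumption added to keep repeated runs from duplicating the entry:

```shell
# Keep raw transcripts (potential PII) out of version control
entry="docs/research/transcripts/"

# Append only if the exact line is not already present; a missing .gitignore
# makes grep fail, which also triggers the append and creates the file
grep -qxF "$entry" .gitignore 2>/dev/null || printf '%s\n' "$entry" >> .gitignore
```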
diff --git a/plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md b/plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md new file mode 100644 index 00000000..ca626bc5 --- /dev/null +++ b/plugins/compound-engineering/skills/research-plan/references/discovery-playbook.md @@ -0,0 +1,414 @@ +# Continuous Product Discovery Playbook +## A Best-Practices Guide for Product Managers & UX Researchers + +*Structuring Interviews, Extracting Insights from Transcripts, and Building a Sustainable Discovery System* + +--- + +## 1. Foundational Principles + +### 1.1 What Is Continuous Discovery? + +Continuous discovery is an approach where product teams maintain **at minimum, weekly touchpoints with customers**, conducting small research activities in pursuit of a desired product outcome (Teresa Torres, *Continuous Discovery Habits*). It replaces the outdated "big-bang research phase" with a persistent feedback loop that runs alongside delivery. + +**Core tenets:** +- **Outcome-focused, not output-focused.** Every discovery activity ties back to a measurable outcome (e.g., reduce churn, increase activation), not a feature request. +- **Weekly cadence.** Small, frequent interactions compound into deep user intuition over time. +- **Cross-functional ownership.** Discovery is co-owned by the **product trio** — a Product Manager, a Designer/UX Researcher, and an Engineer — who participate together in interviews and synthesis. +- **Lightweight and sustainable.** Discovery should not require elaborate study designs every week. Adapt methods to fit the time available. + +### 1.2 Why It Matters + +- **Healthier backlog:** Prioritization is grounded in real user evidence, not opinions or loudest-voice requests. +- **Lower cost of learning:** You discover problems with rough sketches and conversations, not after building and shipping. +- **Less reactive culture:** Teams spot opportunities before they become urgent escalations. 
+- **Compounding product judgment:** Persistent exposure to customers builds stronger intuition across the entire team. + +--- + +## 2. Setting Up a Discovery System + +### 2.1 Start With a Clear Outcome + +Before touching an interview guide, align your product trio on: +- **What behavior are we trying to change or improve?** +- **How does this tie into our OKRs / North Star metric?** +- **What do we need to learn to make a better decision?** + +Ground discovery in an outcome, not a feature. This prevents the trap of running interviews to validate a solution you've already committed to. + +### 2.2 Assemble the Product Trio + +| Role | Discovery Responsibility | +|---|---| +| **Product Manager** | Defines the outcome, prioritizes opportunities, owns the Opportunity Solution Tree | +| **UX Researcher / Designer** | Designs interview guides, moderates sessions, leads synthesis | +| **Engineer** | Assesses feasibility, participates in interviews (builds empathy for constraints and possibilities) | + +All three should attend interviews together whenever possible. Shared exposure eliminates the "telephone game" that happens when one person interviews and then reports findings to others. + +### 2.3 Automate Recruiting + +Recruiting is the #1 reason continuous interviewing fails. If scheduling is manual, the habit dies within weeks. Automate it so that interviews appear on your calendar every week without effort. 
+ +**Proven recruiting channels:** + +| Channel | Best For | How It Works | +|---|---|---| +| **In-app intercepts** (e.g., Ethnio, Orbital) | Consumer & SaaS products | A pop-up screener appears inside the product; qualifying users schedule a call | +| **Customer support triggers** | Enterprise / B2B | Support agents flag specific scenarios and route users to the product team | +| **Insider connections** | Enterprise with named accounts | CSMs or account managers introduce product team to specific contacts | +| **Email campaigns** | Broad base | Targeted email to specific segments offering an incentive for 30 min of time | +| **Paid recruiting panels** | Hard-to-reach users | Services like UserInterviews, Respondent, or Prolific | + +**Key automation elements:** +- **Targeting:** Recruit the *right* users at the *right* time (e.g., users who completed onboarding 2+ weeks ago). +- **Screener questions:** Qualify in/out based on criteria relevant to your current outcome. +- **Automated reminders:** Email + SMS reminders reduce no-shows. +- **Self-scheduling:** Let participants pick from available calendar slots (Calendly, SavvyCal, etc.). + +--- + +## 3. Structuring Discovery Interviews + +### 3.1 Research Questions vs. Interview Questions + +A critical distinction (from Teresa Torres): +- **Research questions** = what you want to *learn* (e.g., "How often do users watch Netflix?"). +- **Interview questions** = what you actually *ask* (e.g., "Tell me about the last time you watched Netflix."). + +Research questions often make terrible interview questions. They encourage short, speculative, System 1 answers. Transform your research questions into **story-based prompts** that ground the participant in specific past behavior. + +### 3.2 The Mom Test (Rob Fitzpatrick) + +Three rules to ensure you get truthful, useful data — even from people who want to be polite: + +1. **Talk about their life instead of your idea.** Don't pitch; explore their reality. +2. 
**Ask about specifics in the past instead of generics or opinions about the future.** "Tell me about the last time…" beats "Would you ever…?" +3. **Talk less and listen more.** Your job is to extract signal, not to fill silence. + +**Deflect bad data:** +- **Compliments** ("That sounds really cool!") → Redirect: "Thanks — but tell me more about how you handle this today." +- **Fluff / generalities** ("I usually…" / "I always…") → Anchor: "When did that last happen? Walk me through it." +- **Hypothetical promises** ("I would definitely pay for that") → Dig: "What have you tried so far to solve this?" + +**Pre-interview discipline:** Before every conversation, write down the **three most important things you need to learn**. This keeps interviews focused and prevents aimless chatting. + +### 3.3 Story-Based Interviewing (Teresa Torres) + +The most reliable method for uncovering goals, context, and unmet needs. Instead of asking about general experiences, ask for **specific stories about past behavior**. + +**Why stories work:** +- They activate **System 2 thinking** (deliberate recall), producing more accurate answers than fast System 1 generalizations. +- They surface **context** — when, where, why, what device, what mood, who else was involved. +- They reveal **needs, pain points, and desires** (collectively: **opportunities**) that the participant may not even be consciously aware of. + +**The core prompt structure:** + +> "Tell me about the last time you [did the relevant activity]." +> "Tell me about a specific time when [relevant scenario]." 
+ +**Interview flow:** + +| Phase | Duration | Purpose | Techniques | +|---|---|---|---| +| **Warm-up** | 2–3 min | Build rapport, set expectations | Easy personal questions; explain the purpose; reassure there are no right/wrong answers | +| **Story collection** | 15–20 min | Collect 1–2 specific stories about past behavior | "Tell me about the last time…"; follow up with "What happened next?"; gently redirect generalizations back to the specific instance | +| **Deepening** | 5–8 min | Explore pain points, workarounds, emotional context | "Tell me more about that"; "Why was that important?"; "How did you feel at that point?"; "What did you do next?" | +| **Wrap-up** | 2–3 min | Catch anything missed; close gracefully | "Is there anything else I should have asked?"; "Who else should I talk to?" | + +**Active listening techniques:** +- **Echoing:** Repeat the participant's last few words as a question to prompt elaboration. +- **Mirroring:** Match their body language and tone to build trust. +- **Comfortable silence:** Don't rush to fill pauses — participants often volunteer their best insights after a beat of silence. +- **Redirect generalizations:** When participants drift into "I usually…" or "I tend to…", gently guide back: "Can you think of a specific time that happened?" + +### 3.4 Question Bank: Good vs. Bad Questions + +| ❌ Avoid (Speculative / Leading / Closed) | ✅ Use Instead (Story-Based / Open-Ended) | +|---|---| +| "Would you use a feature that does X?" | "Tell me about the last time you tried to accomplish [goal]. What happened?" | +| "Do you like our product?" | "Walk me through the last time you used [product]. Start from the beginning." | +| "How often do you do X?" | "Tell me about the most recent time you did X. When was it? What was happening?" | +| "What's your biggest pain point?" | "Tell me about a time when [relevant task] was really frustrating. What happened?" | +| "What would your dream product do?" 
| "How are you solving this problem today? What have you tried?" | +| "Would you pay $X for Y?" | "Where does the money come from for tools like this? What's the buying process?" | +| "Do you think having the button on the left makes you less likely to click?" | "Walk me through what you did on this page. Was it easy to complete your task? Why or why not?" | + +### 3.5 Preparing the Discussion Guide + +A discussion guide is **flexible, not a rigid script**. It ensures you cover key topics while leaving room to follow interesting threads. + +**Structure:** +1. **Research goal** (1–2 sentences): What outcome are we learning about? +2. **Screening criteria:** Who qualifies for this interview? +3. **Warm-up questions** (2–3): Easy openers to build rapport. +4. **Story prompts** (2–3): Core story-based questions tied to your research goal. +5. **Follow-up / probing questions** (5–8): Nested under each story prompt — use as needed. +6. **Wrap-up questions** (1–2): "Anything else?" and referral questions. +7. **Debrief checklist:** Reminders for what to capture in your interview snapshot immediately after. + +--- + +## 4. Synthesizing Interviews: The Interview Snapshot + +### 4.1 Why Immediate Synthesis Matters + +> "Synthesize each interview immediately after it ends. Capture your thoughts while they're fresh, rather than assuming you'll revisit the recording or notes later." — Teresa Torres + +Memory degrades rapidly. Schedule **15 minutes immediately after every interview** for synthesis. The product trio should do this together — co-creation builds shared understanding. + +### 4.2 The Interview Snapshot (Teresa Torres) + +A **one-page summary** that makes each interview memorable, actionable, and reference-able. The product trio collaborates to complete it in 15–20 minutes post-interview. + +**Seven components:** + +| Component | What to Capture | +|---|---| +| **1. Name & Photo** | Identify and remember the participant | +| **2. 
Quick Facts** | Key context: role, segment, tenure, relevant demographics | +| **3. Memorable Quote** | A single quote that captures the essence of the story — helps trigger recall later | +| **4. Experience Map** | A simple visual timeline of the story they told (beginning → middle → end) with key moments marked | +| **5. Opportunities** | Unmet needs, pain points, and desires that surfaced during the story | +| **6. Insights** | Interesting learnings that aren't yet opportunities but may become relevant later | +| **7. Follow-up Items** | Open questions, things to verify, people to talk to next | + +**Templates available in:** Miro (Product Talk template), FigJam, Google Slides, PowerPoint, Keynote. + +**Key principle:** The snapshot is **synthesis, not transcription**. You are distilling meaning, not capturing every word. + +--- + +## 5. Extracting Insights from Transcripts + +### 5.1 Transcription First + +Before analysis, convert recordings to searchable text. This is the foundation for all downstream work. + +| Method | Best For | Considerations | +|---|---|---| +| **Automated transcription** (Otter.ai, Rev, Dovetail, Condens) | Speed; most use cases | Review critical sections manually — AI struggles with names, jargon, crosstalk | +| **Human transcription** | High-stakes research; heavy accents/jargon | More accurate but slower and more expensive | +| **Hybrid** | Enterprise research | Auto-transcribe first, then human-proofread key sections | + +**Essential metadata per session:** +- Session ID (stable, unique code) +- Date and type (interview, usability test, support call) +- Participant profile fields (role, segment, plan tier, region) +- Moderator/researcher and study name +- Consent/usage notes +- Links to recording and transcript files + +### 5.2 Highlighting: Capture Atomic Evidence + +Before tagging or theming, **highlight** the meaningful moments in each transcript. 
Each highlight should be an **atomic evidence unit** — a single observation, quote, or behavior that can stand alone. + +**What to highlight:** +- Direct quotes expressing needs, pain points, or desires +- Descriptions of behavior (what the participant actually did) +- Emotional reactions (frustration, surprise, delight) +- Workarounds and hacks (signals of unmet needs) +- Contradictions between stated preferences and actual behavior + +**Principle:** Highlight first, tag second, synthesize third. Don't jump to themes too early. + +### 5.3 Coding / Tagging + +Coding (or tagging) is the process of labeling highlights to enable pattern discovery across multiple interviews. + +**Two approaches:** + +| Approach | Description | When to Use | +|---|---|---| +| **Deductive (top-down)** | Define codes *before* reviewing data, based on research questions and hypotheses | When you have specific questions to answer; faster for time-constrained projects | +| **Inductive (bottom-up)** | Let codes emerge *from* the data as you review | When exploring new territory; prevents premature categorization | +| **Hybrid** | Start with a small set of deductive codes, then add inductive codes as new themes emerge | Most common in practice; balances speed and openness | + +**Practical tagging taxonomy:** + +| Tag Category | Examples | Purpose | +|---|---|---| +| **Descriptive** | Location, device, role, task, feature area | Organize by context | +| **Emotional** | Frustration, delight, confusion, surprise | Build empathy; identify emotional peaks | +| **Behavioral** | Workaround, abandonment, comparison shopping, habit | Surface actual behavior patterns | +| **Need/Pain Point** | Unmet need, pain point, desire, blocker | Feed directly into opportunities | +| **Evaluative** | Like, dislike, strong preference, indifference | Capture sentiment toward specific elements | + +**Best practices for a shared codebook:** +- Keep the tag set small (15–25 tags) and expand only when needed. 
+- Write a 1-sentence definition for each tag so teammates apply them consistently. +- Review and consolidate tags periodically — merge synonyms, retire unused tags. +- Use a shared tool (Dovetail, Condens, Notion, or even a spreadsheet) so the whole team sees the same taxonomy. + +### 5.4 Affinity Mapping & Thematic Analysis + +Once you've highlighted and tagged across multiple interviews, affinity mapping helps you see the patterns. + +**Step-by-step process:** + +1. **Gather all highlights** — Pull tagged quotes, observations, and notes from all interviews onto a shared surface (digital whiteboard, Miro, FigJam, or physical sticky notes). +2. **Group by similarity** — Move items that feel related near each other. Don't overthink categories yet — trust your intuition. +3. **Name the clusters** — Once groups form, give each a descriptive label that captures the theme (e.g., "Users distrust automated recommendations," "Onboarding feels overwhelming in week 1"). +4. **Look for hierarchy** — Some clusters may be sub-themes of larger themes. Nest them. +5. **Quantify (loosely)** — Note how many participants contributed to each theme and from which segments. This isn't statistical analysis — it's pattern recognition. +6. **Identify outliers** — Don't ignore insights that don't fit neatly. Outliers can signal emerging opportunities. +7. **Document** — Write a theme statement for each cluster, supported by 2–3 representative quotes with source references. + +**Watch out for bias:** +- **Confirmation bias:** Gravitating toward themes that confirm your hypotheses. +- **Recency bias:** Over-weighting the most recent interviews. +- **Loudness bias:** Giving more weight to articulate or emotionally expressive participants. + +Affinity mapping in a group (the product trio + stakeholders) helps counter individual bias through diverse perspectives. 
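The loose quantification in step 5 can be sketched in code. This is a minimal illustration, not part of the playbook's tooling: it assumes hypothetical highlight records carrying `theme`, `participant`, and `segment` fields, and counts the unique participants (and segments) behind each theme.

```python
from collections import defaultdict

# Hypothetical highlight records; field names and values are illustrative.
highlights = [
    {"theme": "distrust-of-recommendations", "participant": "user-001", "segment": "free"},
    {"theme": "distrust-of-recommendations", "participant": "user-003", "segment": "pro"},
    {"theme": "onboarding-overwhelm", "participant": "user-001", "segment": "free"},
    {"theme": "distrust-of-recommendations", "participant": "user-001", "segment": "free"},
]

def participants_per_theme(items):
    """Count unique participants and segments contributing to each theme."""
    themes = defaultdict(lambda: {"participants": set(), "segments": set()})
    for h in items:
        themes[h["theme"]]["participants"].add(h["participant"])
        themes[h["theme"]]["segments"].add(h["segment"])
    return {
        t: {"participants": len(v["participants"]), "segments": sorted(v["segments"])}
        for t, v in themes.items()
    }

summary = participants_per_theme(highlights)
# e.g. summary["distrust-of-recommendations"]
#   -> {"participants": 2, "segments": ["free", "pro"]}
```

Note that counting unique participants, not raw highlights, guards against one talkative participant inflating a theme (the "loudness bias" above).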
+ +### 5.5 Atomic Research Nuggets + +For teams running continuous discovery over months/years, the **atomic research** approach (developed by Tomer Sharon and Daniel Pidcock) prevents insights from getting buried in reports. + +**What is a nugget?** +A nugget is the smallest indivisible unit of research insight: +- **Observation:** A single finding or fact (e.g., "3 of 5 users abandoned the wizard at step 3") +- **Evidence:** The source data that supports it (quote, timestamp, video clip) +- **Tags:** Metadata for searchability (feature area, user segment, research study) + +**Why nuggets work:** +- They are **reusable** across projects — you don't re-run the same research because someone didn't read last year's report. +- They are **searchable** — stakeholders can self-serve insights from a research repository. +- They are **composable** — multiple nuggets combine into higher-level insights and themes. + +**Storage:** Use a research repository tool (Dovetail, Condens, EnjoyHQ, Notion) or a structured spreadsheet with consistent tagging. + +--- + +## 6. From Insights to Action: The Opportunity Solution Tree + +### 6.1 What Is an Opportunity Solution Tree (OST)? + +The Opportunity Solution Tree, popularized by Teresa Torres, is a visual framework that connects: + +``` +Outcome (metric) + └── Opportunities (needs, pain points, desires) + └── Solutions (ideas to address opportunities) + └── Experiments (tests to validate solutions) +``` + +It ensures every solution traces back to a real customer opportunity, which traces back to a measurable business outcome. + +### 6.2 How to Build an OST + +1. **Set the outcome** — Place your target metric at the top of the tree (e.g., "Increase weekly active users by 15%"). +2. **Create an experience map** — Have each member of the product trio draw what they believe the current customer experience looks like. Merge into a shared map. Gaps in the map guide your interviews. +3. 
**Map opportunities from interview snapshots** — Every 3–4 interviews, review your snapshots and pull out the opportunities (needs, pain points, desires). Place them on the tree under the relevant moment in the experience map. +4. **Structure the opportunity space** — Group and nest related opportunities. Parent opportunities are broad (e.g., "Users struggle to find relevant content"); child opportunities are more specific (e.g., "Search results don't account for past viewing history"). +5. **Select a target opportunity** — Compare and contrast opportunities. Choose one that is solvable, impactful, and aligned with your outcome. +6. **Generate solutions** — Brainstorm multiple solutions for the target opportunity (divergent thinking). Don't commit to the first idea. +7. **Design experiments** — For each promising solution, identify the riskiest assumption and design a small test to validate or invalidate it. +8. **Iterate** — As you learn, revise the tree. New interviews add new opportunities. Failed experiments redirect you to alternative solutions. + +### 6.3 Common Pitfalls + +- **Framing opportunities as solutions.** "We need a better search bar" is a solution. The opportunity is "Users can't find content relevant to their interests." Practice separating the two. +- **Overreacting to the latest interview.** The tree prevents this by providing a big-picture view. One interview = one data point. Update the tree after every 3–4 interviews, not after every single one. +- **Skipping opportunity mapping.** Teams that jump from interview to solution miss the chance to compare opportunities strategically. +- **Setting the wrong outcome.** If your outcome isn't connected to business strategy, the whole tree drifts. Re-validate your outcome quarterly. + +--- + +## 7. 
Tools for Continuous Discovery + +| Category | Tools | Purpose | +|---|---|---| +| **Recruiting & Scheduling** | Ethnio, Orbital, Great Question, Calendly, UserInterviews | Automate participant recruitment and scheduling | +| **Video & Transcription** | Zoom, Grain, Otter.ai, Rev, Descript | Record interviews and generate transcripts | +| **Research Repository & Analysis** | Dovetail, Condens, EnjoyHQ, Notion, Airtable | Store, tag, search, and synthesize research data | +| **Synthesis & Mapping** | Miro, FigJam, MURAL | Interview snapshots, affinity maps, experience maps, OSTs | +| **Opportunity Solution Trees** | Miro (Product Talk templates), Vistaly, ProductBoard | Visualize and manage the opportunity space | +| **AI-Assisted Analysis** | Dovetail AI, Condens AI, ChatGPT, Claude | Auto-transcription, auto-tagging, summarization (always human-validate) | + +**A note on AI tools:** AI can speed up transcription, suggest tags, and draft theme summaries. However, **do not rely on AI exclusively for synthesis** (per Teresa Torres). The act of personally reviewing conversations and identifying patterns is where deep understanding forms. Use AI to surface things you might overlook, not to replace your thinking. + +--- + +## 8. Building the Habit: Making Discovery Stick + +### 8.1 Weekly Cadence Template + +| Day | Activity | Time | +|---|---|---| +| **Monday** | Review upcoming interview schedule (auto-populated) | 5 min | +| **Tuesday** | Conduct interview #1; complete interview snapshot | 45–60 min | +| **Thursday** | Conduct interview #2; complete interview snapshot | 45–60 min | +| **Friday** | Cross-interview synthesis: update OST, review patterns | 30–45 min | + +This is approximately **2–3 hours per week** — roughly 5–7% of a trio's working hours. + +### 8.2 Protect Discovery Time + +- **Treat discovery like sprint planning** — it's not optional; it's on the calendar. +- **Batch interviews** — Don't spread them across random slots.
Dedicated blocks reduce context-switching. +- **Rotate moderation** — Each trio member should take turns leading interviews to build shared capability. +- **Share snapshots visibly** — Post them in a team channel (Slack, Teams) or a shared Miro board so stakeholders stay informed without attending every session. + +### 8.3 Scaling Across Teams + +- **Create a shared codebook** — Standard tags and definitions across teams enable cross-team insight discovery. +- **Maintain a centralized research repository** — All snapshots, nuggets, and themes live in one searchable place. +- **Run periodic "insight jams"** — Monthly sessions where multiple trios review each other's OSTs and cross-pollinate opportunities. +- **Train PMs and designers on story-based interviewing** — The skill gap is the bottleneck, not the process. + +--- + +## 9. Quick-Reference Checklists + +### Pre-Interview Checklist +- [ ] Outcome defined and agreed upon by the product trio +- [ ] Discussion guide prepared (2–3 story prompts, follow-up questions) +- [ ] Participant recruited and confirmed (screener passed) +- [ ] Recording tool set up and tested +- [ ] Trio roles assigned (moderator, note-taker, observer) +- [ ] Three most important learning goals written down (The Mom Test) + +### During-Interview Checklist +- [ ] Warm-up complete; participant is comfortable +- [ ] Collecting specific stories about past behavior (not opinions about the future) +- [ ] Redirecting generalizations back to specifics +- [ ] Using active listening (echoing, silence, "tell me more") +- [ ] Not pitching solutions or leading the witness +- [ ] Capturing timestamps of key moments for later reference + +### Post-Interview Checklist +- [ ] Interview snapshot completed within 15–20 minutes +- [ ] Opportunities and insights documented +- [ ] Experience map drawn for the story collected +- [ ] Snapshot shared with the team +- [ ] Follow-up items logged +- [ ] Opportunities added to the Opportunity Solution Tree (after every 
3–4 interviews) + +### Transcript Analysis Checklist +- [ ] Transcript reviewed and cleaned (names, jargon corrected) +- [ ] Key moments highlighted as atomic evidence units +- [ ] Highlights tagged using shared codebook +- [ ] Themes identified through affinity mapping +- [ ] Themes documented with supporting quotes and source references +- [ ] Findings connected to existing opportunities on the OST +- [ ] Insights stored in research repository for future reference + +--- + +## 10. Recommended Reading & Sources + +| Resource | Author | Key Contribution | +|---|---|---| +| *Continuous Discovery Habits* | Teresa Torres | The definitive framework for weekly customer touchpoints, interview snapshots, and Opportunity Solution Trees | +| *The Mom Test* | Rob Fitzpatrick | Rules for asking questions that produce truthful, useful answers | +| Product Talk Blog (producttalk.org) | Teresa Torres | Story-based interviewing, opportunity mapping, and OST deep dives | +| NN/g User Interviews 101 | Nielsen Norman Group | Foundational interviewing methodology for UX researchers | +| *Thinking, Fast and Slow* | Daniel Kahneman | Understanding System 1 vs. System 2 thinking and why story-based questions produce better data | +| Atomic Research | Tomer Sharon & Daniel Pidcock | Breaking research into reusable, searchable nuggets | +| Dovetail/Condens Workflows | Various | Practical transcript-to-theme synthesis workflows | + +--- + +*This playbook is a living document. Update it as your team's discovery practice matures. 
The goal is not perfection — it's a sustainable habit of learning from your customers every single week.* diff --git a/plugins/compound-engineering/skills/transcript-insights/SKILL.md b/plugins/compound-engineering/skills/transcript-insights/SKILL.md new file mode 100644 index 00000000..a187861c --- /dev/null +++ b/plugins/compound-engineering/skills/transcript-insights/SKILL.md @@ -0,0 +1,285 @@ +--- +name: transcript-insights +description: "Process interview transcripts into structured snapshots with tagged insights, experience maps, and opportunity identification. Use when a transcript exists in docs/research/transcripts/ or when pasting interview content." +--- + +# Transcript Insights + +**Note: The current year is 2026.** + +Process raw interview transcripts into structured interview snapshots following Teresa Torres' one-page interview snapshot format. Extract atomic insights, map experience timelines, identify opportunities in Opportunity Solution Tree language, and track hypothesis status. + +**Reference:** [discovery-playbook.md](./references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. + +## Quick Start + +1. Accept a transcript file path or pasted content +2. Link to a research plan (or mark as ad-hoc) +3. Generate an interview snapshot at `docs/research/interviews/YYYY-MM-DD-participant-NNN.md` + +## Instructions + +### Step 1: Accept Input + +Check `$ARGUMENTS` for a file path. If empty, prompt: +- "Provide the path to a transcript in `docs/research/transcripts/`, or paste the transcript content directly." + +If a file path is given, read the transcript. If the file does not exist, report the error and stop. + +If content is pasted directly, proceed with that content (no file reference in output frontmatter). 
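As a rough illustration of the input rules above, the three cases (file path given, content pasted, neither) could be expressed as follows. This is a hypothetical Python sketch, not code shipped with the skill; `resolve_transcript` is an illustrative name.

```python
from pathlib import Path

def resolve_transcript(argument=None, pasted=None):
    """Return (content, source_filename) for Step 1.

    source_filename is None for pasted content, so the output
    frontmatter can omit the file reference.
    """
    if argument:
        path = Path(argument)
        if not path.exists():
            # Report the error and stop, per Step 1.
            raise FileNotFoundError(f"Transcript not found: {path}")
        return path.read_text(encoding="utf-8"), path.name
    if pasted:
        return pasted, None
    raise ValueError(
        "Provide a path in docs/research/transcripts/ or paste the transcript."
    )
```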
+ +### Step 2: Link to Research Plan + +List existing research plans by reading frontmatter from files in `docs/research/plans/`: +- Show title, date, and status for each plan +- Most recent first, cap at 7 entries +- Include "Ad-hoc / no plan" as the final option + +Use AskUserQuestion to ask which plan this transcript belongs to. Store the plan slug (filename without date prefix and extension) in the output frontmatter. + +If "Ad-hoc" is selected, set `research_plan: ad-hoc` in frontmatter. + +### Step 3: Gather Metadata + +Ask the user for participant metadata (use AskUserQuestion where appropriate): +- **Participant ID**: Suggest format `user-NNN` based on existing interviews +- **Role**: Job title or function (e.g., "Marketing Manager") +- **Company type**: Industry or company category (e.g., "B2B SaaS") +- **Interview focus**: Brief topic description +- **Duration**: Approximate length in minutes + +Check existing interviews in `docs/research/interviews/` for the next available participant number. + +### Step 4: Process the Transcript + +Read the full transcript and extract the following components: + +#### 4a: Interview Summary + +Write a 3-5 sentence summary capturing the key narrative arc. Focus on what the participant actually did (past behavior), not what they said they would do. + +#### 4b: Experience Map + +Create a timeline of the participant's experience as described in the interview. Follow the story arc: + +``` +Trigger → Context → Actions → Obstacles → Workarounds → Outcome +``` + +For each step, note: +- What happened (factual) +- How the participant felt (emotional signals) +- What tools or processes were involved + +#### 4c: Atomic Insights + +Extract individual insights from the transcript. Each insight must be: +- **Atomic**: One observation per insight +- **Evidence-based**: Tied to a specific quote or described behavior +- **Tagged**: With exactly ONE type tag and 1-3 topic tags + +Quote every insight. Use the participant's exact words. 
Do not paraphrase, composite, or fabricate quotes. If the insight comes from observed behavior rather than a direct quote, note it as `[Observed behavior]` instead. + +#### 4d: Opportunities + +Frame opportunities in Opportunity Solution Tree language: +- Opportunities are unmet needs, pain points, or desires -- NOT solutions +- "Users need a way to [outcome]" not "Build a [feature]" +- Rate evidence strength based on how directly the participant expressed the need + +#### 4e: Hypothesis Tracking + +If linked to a research plan, evaluate each hypothesis from the plan: +- **SUPPORTED**: This interview provides evidence supporting the hypothesis +- **CHALLENGED**: This interview provides evidence contradicting the hypothesis +- **MIXED**: Evidence is ambiguous or partially supports +- **NEW**: A new hypothesis emerged from this interview (not in original plan) +- **NO DATA**: This interview did not address this hypothesis + +Provide the specific evidence (quote or behavior) for each status assignment. + +#### 4f: Behavioral Observations + +Note non-verbal or contextual observations: +- Tools or screens the participant mentioned or demonstrated +- Emotional reactions (frustration, excitement, confusion) +- Workarounds or hacks they described +- Frequency indicators ("every day", "once a month", "whenever I need to") + +### Step 5: Write the Interview Snapshot + +Generate the file at `docs/research/interviews/YYYY-MM-DD-participant-NNN.md`. + +Ensure the `docs/research/interviews/` directory exists before writing. 
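The file-naming logic from Steps 3 and 5 could be sketched as below. This is a hedged illustration under the stated `YYYY-MM-DD-participant-NNN.md` convention; `next_participant_number` and `snapshot_path` are hypothetical helper names, not part of the plugin.

```python
import re
from datetime import date
from pathlib import Path

INTERVIEWS_DIR = Path("docs/research/interviews")

def next_participant_number(interviews_dir):
    """Scan existing snapshots (YYYY-MM-DD-participant-NNN.md) for the next number."""
    pattern = re.compile(r"participant-(\d+)\.md$")
    numbers = [
        int(m.group(1))
        for p in interviews_dir.glob("*.md")
        if (m := pattern.search(p.name))
    ]
    return max(numbers, default=0) + 1

def snapshot_path(interviews_dir=INTERVIEWS_DIR, on=None):
    """Build the output path, creating the directory first (per Step 5)."""
    interviews_dir.mkdir(parents=True, exist_ok=True)
    n = next_participant_number(interviews_dir)
    return interviews_dir / f"{(on or date.today()).isoformat()}-participant-{n:03d}.md"
```

On an empty directory this suggests participant 001; otherwise it continues from the highest existing number rather than filling gaps, which keeps IDs stable across deletions.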
+ +## Tag Taxonomy + +### Type Tags (Fixed Set) + +Assign exactly ONE type tag per insight: + +| Tag | Use When | +|-----|----------| +| `pain-point` | Participant describes frustration, difficulty, or failure | +| `need` | Participant expresses a requirement or necessity | +| `desire` | Participant wishes for something beyond basic needs | +| `behavior` | Participant describes what they actually do (neutral observation) | +| `workaround` | Participant describes a hack or alternative to compensate for a gap | +| `motivation` | Participant explains why something matters to them | + +### Topic Tags (Semi-Open) + +Assign 1-3 topic tags per insight: +- Lowercase, hyphenated, singular (e.g., `dashboard`, `data-export`, `morning-workflow`) +- Before creating a new topic tag, check existing interviews for established tags +- Grep `docs/research/interviews/` for `tags:` lines to find existing tags +- Prefer existing tags over creating new synonyms + +## Output Template + +```markdown +--- +participant_id: user-NNN +role: "[Job title or function]" +company_type: "[Industry or company category]" +date: YYYY-MM-DD +research_plan: "[plan-slug or ad-hoc]" +source_transcript: "[transcript-filename.md]" +focus: "[Brief topic description]" +duration_minutes: NN +tags: [topic-tag-1, topic-tag-2, topic-tag-3] +--- + +# Interview Snapshot: [Participant ID] + +## Summary + +[3-5 sentence narrative summary focusing on past behavior and key story arc] + +## Experience Map + +`[Trigger] → [Context] → [Actions] → [Obstacles] → [Workarounds] → [Outcome]` + +| Step | What Happened | Feeling | Tools/Process | +|------|--------------|---------|---------------| +| Trigger | [Event that started the workflow] | [Emotional state] | [Tool/process] | +| Action 1 | [What they did] | [Emotional state] | [Tool/process] | +| Obstacle | [What blocked them] | [Emotional state] | - | +| Workaround | [How they got around it] | [Emotional state] | [Tool/process] | +| Outcome | [End result] |
[Emotional state] | - | + +## Insights + +### Pain Points + +> "[Exact quote from transcript]" +- **Type:** pain-point +- **Topics:** [tag-1], [tag-2] +- **Context:** [Brief context for the quote] + +### Needs + +> "[Exact quote from transcript]" +- **Type:** need +- **Topics:** [tag-1] +- **Context:** [Brief context] + +### Behaviors + +> "[Exact quote or [Observed behavior] description]" +- **Type:** behavior +- **Topics:** [tag-1], [tag-2] +- **Context:** [Brief context] + +### Workarounds + +> "[Exact quote from transcript]" +- **Type:** workaround +- **Topics:** [tag-1] +- **Context:** [Brief context] + +### Desires + +> "[Exact quote from transcript]" +- **Type:** desire +- **Topics:** [tag-1] +- **Context:** [Brief context] + +### Motivations + +> "[Exact quote from transcript]" +- **Type:** motivation +- **Topics:** [tag-1] +- **Context:** [Brief context] + +## Opportunities + +Opportunities are unmet needs -- NOT solutions. + +| # | Opportunity | Evidence Strength | Quote | +|---|-----------|------------------|-------| +| 1 | Users need a way to [outcome] | Strong / Medium / Weak | "[Supporting quote]" | +| 2 | Users need a way to [outcome] | Strong / Medium / Weak | "[Supporting quote]" | + +**Evidence strength:** +- **Strong**: Participant explicitly described this need with emotional weight +- **Medium**: Participant mentioned this in passing or as part of a larger story +- **Weak**: Inferred from behavior or workaround, not directly stated + +## Hypothesis Tracking + +| # | Hypothesis | Status | Evidence | +|---|-----------|--------|----------| +| 1 | [From research plan] | SUPPORTED / CHALLENGED / MIXED / NEW / NO DATA | "[Quote or behavior]" | +| 2 | [From research plan] | SUPPORTED / CHALLENGED / MIXED / NEW / NO DATA | "[Quote or behavior]" | + +## Behavioral Observations + +- **Tools mentioned:** [List of tools, software, processes referenced] +- **Frequency indicators:** [How often activities occur] +- **Emotional signals:** [Notable reactions 
during interview] +- **Workaround patterns:** [Hacks or alternative approaches described] + +## Human Review Checklist + +- [ ] All quotes verified against source transcript +- [ ] Experience map accurately reflects story arc +- [ ] Opportunities reflect participant needs, not assumed solutions +- [ ] Tags accurate and consistent with existing taxonomy +- [ ] No insights fabricated or composited from multiple participants +``` + +## Examples + +**Example insight extraction:** + +Transcript excerpt: +> "Every morning I open three different tabs -- the dashboard, the Slack channel, and this spreadsheet I maintain. I basically copy numbers from the dashboard into my spreadsheet because the export never works right." + +Extracted insights: + +1. > "Every morning I open three different tabs -- the dashboard, the Slack channel, and this spreadsheet I maintain." + - **Type:** behavior + - **Topics:** morning-workflow, dashboard, multi-tool + - **Context:** Describing daily monitoring routine + +2. > "I basically copy numbers from the dashboard into my spreadsheet because the export never works right." + - **Type:** workaround + - **Topics:** data-export, dashboard + - **Context:** Manual data transfer to compensate for broken export + +Extracted opportunity: +- "Users need a reliable way to get dashboard data into their own tracking tools" (NOT "Build a better export button") + +**Example hypothesis tracking:** + +| Hypothesis | Status | Evidence | +|-----------|--------|----------| +| Users check dashboard first thing in the morning | SUPPORTED | "Every morning I open three different tabs -- the dashboard, the Slack channel, and this spreadsheet" | +| Export is primarily used for sharing with non-users | CHALLENGED | Export is used for personal tracking spreadsheet, not sharing | + +## Privacy Note + +Interview snapshots use anonymized participant IDs (user-001, user-002). Do not include real names, email addresses, or other PII in the snapshot. 
If the source transcript contains PII, strip it during processing. Consider adding `docs/research/transcripts/` to `.gitignore`. diff --git a/plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md b/plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md new file mode 100644 index 00000000..ca626bc5 --- /dev/null +++ b/plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md @@ -0,0 +1,414 @@ +# Continuous Product Discovery Playbook +## A Best-Practices Guide for Product Managers & UX Researchers + +*Structuring Interviews, Extracting Insights from Transcripts, and Building a Sustainable Discovery System* + +--- + +## 1. Foundational Principles + +### 1.1 What Is Continuous Discovery? + +Continuous discovery is an approach where product teams maintain **at minimum, weekly touchpoints with customers**, conducting small research activities in pursuit of a desired product outcome (Teresa Torres, *Continuous Discovery Habits*). It replaces the outdated "big-bang research phase" with a persistent feedback loop that runs alongside delivery. + +**Core tenets:** +- **Outcome-focused, not output-focused.** Every discovery activity ties back to a measurable outcome (e.g., reduce churn, increase activation), not a feature request. +- **Weekly cadence.** Small, frequent interactions compound into deep user intuition over time. +- **Cross-functional ownership.** Discovery is co-owned by the **product trio** — a Product Manager, a Designer/UX Researcher, and an Engineer — who participate together in interviews and synthesis. +- **Lightweight and sustainable.** Discovery should not require elaborate study designs every week. Adapt methods to fit the time available. + +### 1.2 Why It Matters + +- **Healthier backlog:** Prioritization is grounded in real user evidence, not opinions or loudest-voice requests. 
+- **Lower cost of learning:** You discover problems with rough sketches and conversations, not after building and shipping. +- **Less reactive culture:** Teams spot opportunities before they become urgent escalations. +- **Compounding product judgment:** Persistent exposure to customers builds stronger intuition across the entire team. + +--- + +## 2. Setting Up a Discovery System + +### 2.1 Start With a Clear Outcome + +Before touching an interview guide, align your product trio on: +- **What behavior are we trying to change or improve?** +- **How does this tie into our OKRs / North Star metric?** +- **What do we need to learn to make a better decision?** + +Ground discovery in an outcome, not a feature. This prevents the trap of running interviews to validate a solution you've already committed to. + +### 2.2 Assemble the Product Trio + +| Role | Discovery Responsibility | +|---|---| +| **Product Manager** | Defines the outcome, prioritizes opportunities, owns the Opportunity Solution Tree | +| **UX Researcher / Designer** | Designs interview guides, moderates sessions, leads synthesis | +| **Engineer** | Assesses feasibility, participates in interviews (builds empathy for constraints and possibilities) | + +All three should attend interviews together whenever possible. Shared exposure eliminates the "telephone game" that happens when one person interviews and then reports findings to others. + +### 2.3 Automate Recruiting + +Recruiting is the #1 reason continuous interviewing fails. If scheduling is manual, the habit dies within weeks. Automate it so that interviews appear on your calendar every week without effort. 
+ +**Proven recruiting channels:** + +| Channel | Best For | How It Works | +|---|---|---| +| **In-app intercepts** (e.g., Ethnio, Orbital) | Consumer & SaaS products | A pop-up screener appears inside the product; qualifying users schedule a call | +| **Customer support triggers** | Enterprise / B2B | Support agents flag specific scenarios and route users to the product team | +| **Insider connections** | Enterprise with named accounts | CSMs or account managers introduce product team to specific contacts | +| **Email campaigns** | Broad base | Targeted email to specific segments offering an incentive for 30 min of time | +| **Paid recruiting panels** | Hard-to-reach users | Services like UserInterviews, Respondent, or Prolific | + +**Key automation elements:** +- **Targeting:** Recruit the *right* users at the *right* time (e.g., users who completed onboarding 2+ weeks ago). +- **Screener questions:** Qualify in/out based on criteria relevant to your current outcome. +- **Automated reminders:** Email + SMS reminders reduce no-shows. +- **Self-scheduling:** Let participants pick from available calendar slots (Calendly, SavvyCal, etc.). + +--- + +## 3. Structuring Discovery Interviews + +### 3.1 Research Questions vs. Interview Questions + +A critical distinction (from Teresa Torres): +- **Research questions** = what you want to *learn* (e.g., "How often do users watch Netflix?"). +- **Interview questions** = what you actually *ask* (e.g., "Tell me about the last time you watched Netflix."). + +Research questions often make terrible interview questions. They encourage short, speculative, System 1 answers. Transform your research questions into **story-based prompts** that ground the participant in specific past behavior. + +### 3.2 The Mom Test (Rob Fitzpatrick) + +Three rules to ensure you get truthful, useful data — even from people who want to be polite: + +1. **Talk about their life instead of your idea.** Don't pitch; explore their reality. +2. 
**Ask about specifics in the past instead of generics or opinions about the future.** "Tell me about the last time…" beats "Would you ever…?" +3. **Talk less and listen more.** Your job is to extract signal, not to fill silence. + +**Deflect bad data:** +- **Compliments** ("That sounds really cool!") → Redirect: "Thanks — but tell me more about how you handle this today." +- **Fluff / generalities** ("I usually…" / "I always…") → Anchor: "When did that last happen? Walk me through it." +- **Hypothetical promises** ("I would definitely pay for that") → Dig: "What have you tried so far to solve this?" + +**Pre-interview discipline:** Before every conversation, write down the **three most important things you need to learn**. This keeps interviews focused and prevents aimless chatting. + +### 3.3 Story-Based Interviewing (Teresa Torres) + +The most reliable method for uncovering goals, context, and unmet needs. Instead of asking about general experiences, ask for **specific stories about past behavior**. + +**Why stories work:** +- They activate **System 2 thinking** (deliberate recall), producing more accurate answers than fast System 1 generalizations. +- They surface **context** — when, where, why, what device, what mood, who else was involved. +- They reveal **needs, pain points, and desires** (collectively: **opportunities**) that the participant may not even be consciously aware of. + +**The core prompt structure:** + +> "Tell me about the last time you [did the relevant activity]." +> "Tell me about a specific time when [relevant scenario]." 
+ +**Interview flow:** + +| Phase | Duration | Purpose | Techniques | +|---|---|---|---| +| **Warm-up** | 2–3 min | Build rapport, set expectations | Easy personal questions; explain the purpose; reassure there are no right/wrong answers | +| **Story collection** | 15–20 min | Collect 1–2 specific stories about past behavior | "Tell me about the last time…"; follow up with "What happened next?"; gently redirect generalizations back to the specific instance | +| **Deepening** | 5–8 min | Explore pain points, workarounds, emotional context | "Tell me more about that"; "Why was that important?"; "How did you feel at that point?"; "What did you do next?" | +| **Wrap-up** | 2–3 min | Catch anything missed; close gracefully | "Is there anything else I should have asked?"; "Who else should I talk to?" | + +**Active listening techniques:** +- **Echoing:** Repeat the participant's last few words as a question to prompt elaboration. +- **Mirroring:** Match their body language and tone to build trust. +- **Comfortable silence:** Don't rush to fill pauses — participants often volunteer their best insights after a beat of silence. +- **Redirect generalizations:** When participants drift into "I usually…" or "I tend to…", gently guide back: "Can you think of a specific time that happened?" + +### 3.4 Question Bank: Good vs. Bad Questions + +| ❌ Avoid (Speculative / Leading / Closed) | ✅ Use Instead (Story-Based / Open-Ended) | +|---|---| +| "Would you use a feature that does X?" | "Tell me about the last time you tried to accomplish [goal]. What happened?" | +| "Do you like our product?" | "Walk me through the last time you used [product]. Start from the beginning." | +| "How often do you do X?" | "Tell me about the most recent time you did X. When was it? What was happening?" | +| "What's your biggest pain point?" | "Tell me about a time when [relevant task] was really frustrating. What happened?" | +| "What would your dream product do?" 
| "How are you solving this problem today? What have you tried?" | +| "Would you pay $X for Y?" | "Where does the money come from for tools like this? What's the buying process?" | +| "Do you think having the button on the left makes you less likely to click?" | "Walk me through what you did on this page. Was it easy to complete your task? Why or why not?" | + +### 3.5 Preparing the Discussion Guide + +A discussion guide is **flexible, not a rigid script**. It ensures you cover key topics while leaving room to follow interesting threads. + +**Structure:** +1. **Research goal** (1–2 sentences): What outcome are we learning about? +2. **Screening criteria:** Who qualifies for this interview? +3. **Warm-up questions** (2–3): Easy openers to build rapport. +4. **Story prompts** (2–3): Core story-based questions tied to your research goal. +5. **Follow-up / probing questions** (5–8): Nested under each story prompt — use as needed. +6. **Wrap-up questions** (1–2): "Anything else?" and referral questions. +7. **Debrief checklist:** Reminders for what to capture in your interview snapshot immediately after. + +--- + +## 4. Synthesizing Interviews: The Interview Snapshot + +### 4.1 Why Immediate Synthesis Matters + +> "Synthesize each interview immediately after it ends. Capture your thoughts while they're fresh, rather than assuming you'll revisit the recording or notes later." — Teresa Torres + +Memory degrades rapidly. Schedule **15 minutes immediately after every interview** for synthesis. The product trio should do this together — co-creation builds shared understanding. + +### 4.2 The Interview Snapshot (Teresa Torres) + +A **one-page summary** that makes each interview memorable, actionable, and reference-able. The product trio collaborates to complete it in 15–20 minutes post-interview. + +**Seven components:** + +| Component | What to Capture | +|---|---| +| **1. Name & Photo** | Identify and remember the participant | +| **2. 
Quick Facts** | Key context: role, segment, tenure, relevant demographics | +| **3. Memorable Quote** | A single quote that captures the essence of the story — helps trigger recall later | +| **4. Experience Map** | A simple visual timeline of the story they told (beginning → middle → end) with key moments marked | +| **5. Opportunities** | Unmet needs, pain points, and desires that surfaced during the story | +| **6. Insights** | Interesting learnings that aren't yet opportunities but may become relevant later | +| **7. Follow-up Items** | Open questions, things to verify, people to talk to next | + +**Templates available in:** Miro (Product Talk template), FigJam, Google Slides, PowerPoint, Keynote. + +**Key principle:** The snapshot is **synthesis, not transcription**. You are distilling meaning, not capturing every word. + +--- + +## 5. Extracting Insights from Transcripts + +### 5.1 Transcription First + +Before analysis, convert recordings to searchable text. This is the foundation for all downstream work. + +| Method | Best For | Considerations | +|---|---|---| +| **Automated transcription** (Otter.ai, Rev, Dovetail, Condens) | Speed; most use cases | Review critical sections manually — AI struggles with names, jargon, crosstalk | +| **Human transcription** | High-stakes research; heavy accents/jargon | More accurate but slower and more expensive | +| **Hybrid** | Enterprise research | Auto-transcribe first, then human-proofread key sections | + +**Essential metadata per session:** +- Session ID (stable, unique code) +- Date and type (interview, usability test, support call) +- Participant profile fields (role, segment, plan tier, region) +- Moderator/researcher and study name +- Consent/usage notes +- Links to recording and transcript files + +### 5.2 Highlighting: Capture Atomic Evidence + +Before tagging or theming, **highlight** the meaningful moments in each transcript. 
Each highlight should be an **atomic evidence unit** — a single observation, quote, or behavior that can stand alone. + +**What to highlight:** +- Direct quotes expressing needs, pain points, or desires +- Descriptions of behavior (what the participant actually did) +- Emotional reactions (frustration, surprise, delight) +- Workarounds and hacks (signals of unmet needs) +- Contradictions between stated preferences and actual behavior + +**Principle:** Highlight first, tag second, synthesize third. Don't jump to themes too early. + +### 5.3 Coding / Tagging + +Coding (or tagging) is the process of labeling highlights to enable pattern discovery across multiple interviews. + +**Two approaches:** + +| Approach | Description | When to Use | +|---|---|---| +| **Deductive (top-down)** | Define codes *before* reviewing data, based on research questions and hypotheses | When you have specific questions to answer; faster for time-constrained projects | +| **Inductive (bottom-up)** | Let codes emerge *from* the data as you review | When exploring new territory; prevents premature categorization | +| **Hybrid** | Start with a small set of deductive codes, then add inductive codes as new themes emerge | Most common in practice; balances speed and openness | + +**Practical tagging taxonomy:** + +| Tag Category | Examples | Purpose | +|---|---|---| +| **Descriptive** | Location, device, role, task, feature area | Organize by context | +| **Emotional** | Frustration, delight, confusion, surprise | Build empathy; identify emotional peaks | +| **Behavioral** | Workaround, abandonment, comparison shopping, habit | Surface actual behavior patterns | +| **Need/Pain Point** | Unmet need, pain point, desire, blocker | Feed directly into opportunities | +| **Evaluative** | Like, dislike, strong preference, indifference | Capture sentiment toward specific elements | + +**Best practices for a shared codebook:** +- Keep the tag set small (15–25 tags) and expand only when needed. 
+- Write a 1-sentence definition for each tag so teammates apply them consistently. +- Review and consolidate tags periodically — merge synonyms, retire unused tags. +- Use a shared tool (Dovetail, Condens, Notion, or even a spreadsheet) so the whole team sees the same taxonomy. + +### 5.4 Affinity Mapping & Thematic Analysis + +Once you've highlighted and tagged across multiple interviews, affinity mapping helps you see the patterns. + +**Step-by-step process:** + +1. **Gather all highlights** — Pull tagged quotes, observations, and notes from all interviews onto a shared surface (digital whiteboard, Miro, FigJam, or physical sticky notes). +2. **Group by similarity** — Move items that feel related near each other. Don't overthink categories yet — trust your intuition. +3. **Name the clusters** — Once groups form, give each a descriptive label that captures the theme (e.g., "Users distrust automated recommendations," "Onboarding feels overwhelming in week 1"). +4. **Look for hierarchy** — Some clusters may be sub-themes of larger themes. Nest them. +5. **Quantify (loosely)** — Note how many participants contributed to each theme and from which segments. This isn't statistical analysis — it's pattern recognition. +6. **Identify outliers** — Don't ignore insights that don't fit neatly. Outliers can signal emerging opportunities. +7. **Document** — Write a theme statement for each cluster, supported by 2–3 representative quotes with source references. + +**Watch out for bias:** +- **Confirmation bias:** Gravitating toward themes that confirm your hypotheses. +- **Recency bias:** Over-weighting the most recent interviews. +- **Loudness bias:** Giving more weight to articulate or emotionally expressive participants. + +Affinity mapping in a group (the product trio + stakeholders) helps counter individual bias through diverse perspectives. 
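The "quantify (loosely)" step above — counting how many participants contributed to each theme — can be sketched in a few lines of code. This is an illustrative sketch, not part of the plugin: the `Highlight` data model, the `theme_coverage` helper, and the sample quotes are assumptions chosen to mirror the atomic-evidence-unit and tagging approach described in this section.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Highlight:
    """One atomic evidence unit: a tagged quote or observation from a transcript."""
    participant_id: str  # anonymized ID, e.g. "user-001"
    quote: str
    tags: list[str] = field(default_factory=list)

def theme_coverage(highlights: list[Highlight]) -> dict[str, int]:
    """Count how many distinct participants contributed evidence to each tag.

    This is pattern recognition, not statistics: a tag that appears across
    many participants is a stronger theme than one repeated by one person.
    """
    participants: dict[str, set[str]] = defaultdict(set)
    for h in highlights:
        for tag in h.tags:
            participants[tag].add(h.participant_id)
    return {tag: len(ids) for tag, ids in participants.items()}

# Hypothetical highlights from two interviews
highlights = [
    Highlight("user-001", "The export never works right", ["pain-point", "data-export"]),
    Highlight("user-002", "I copy numbers into my spreadsheet", ["workaround", "data-export"]),
    Highlight("user-002", "I update every opportunity on Mondays", ["reporting-burden"]),
]

print(theme_coverage(highlights))
# → {'pain-point': 1, 'data-export': 2, 'workaround': 1, 'reporting-burden': 1}
```

A count of 1 is not noise to discard — per the outlier guidance above, a theme voiced by a single participant can still signal an emerging opportunity worth tracking.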
+ +### 5.5 Atomic Research Nuggets + +For teams running continuous discovery over months/years, the **atomic research** approach (developed by Tomer Sharon and Daniel Pidcock) prevents insights from getting buried in reports. + +**What is a nugget?** +A nugget is the smallest indivisible unit of research insight: +- **Observation:** A single finding or fact (e.g., "3 of 5 users abandoned the wizard at step 3") +- **Evidence:** The source data that supports it (quote, timestamp, video clip) +- **Tags:** Metadata for searchability (feature area, user segment, research study) + +**Why nuggets work:** +- They are **reusable** across projects — you don't re-run the same research because someone didn't read last year's report. +- They are **searchable** — stakeholders can self-serve insights from a research repository. +- They are **composable** — multiple nuggets combine into higher-level insights and themes. + +**Storage:** Use a research repository tool (Dovetail, Condens, EnjoyHQ, Notion) or a structured spreadsheet with consistent tagging. + +--- + +## 6. From Insights to Action: The Opportunity Solution Tree + +### 6.1 What Is an Opportunity Solution Tree (OST)? + +The Opportunity Solution Tree, popularized by Teresa Torres, is a visual framework that connects: + +``` +Outcome (metric) + └── Opportunities (needs, pain points, desires) + └── Solutions (ideas to address opportunities) + └── Experiments (tests to validate solutions) +``` + +It ensures every solution traces back to a real customer opportunity, which traces back to a measurable business outcome. + +### 6.2 How to Build an OST + +1. **Set the outcome** — Place your target metric at the top of the tree (e.g., "Increase weekly active users by 15%"). +2. **Create an experience map** — Have each member of the product trio draw what they believe the current customer experience looks like. Merge into a shared map. Gaps in the map guide your interviews. +3. 
**Map opportunities from interview snapshots** — Every 3–4 interviews, review your snapshots and pull out the opportunities (needs, pain points, desires). Place them on the tree under the relevant moment in the experience map. +4. **Structure the opportunity space** — Group and nest related opportunities. Parent opportunities are broad (e.g., "Users struggle to find relevant content"); child opportunities are more specific (e.g., "Search results don't account for past viewing history"). +5. **Select a target opportunity** — Compare and contrast opportunities. Choose one that is solvable, impactful, and aligned with your outcome. +6. **Generate solutions** — Brainstorm multiple solutions for the target opportunity (divergent thinking). Don't commit to the first idea. +7. **Design experiments** — For each promising solution, identify the riskiest assumption and design a small test to validate or invalidate it. +8. **Iterate** — As you learn, revise the tree. New interviews add new opportunities. Failed experiments redirect you to alternative solutions. + +### 6.3 Common Pitfalls + +- **Framing opportunities as solutions.** "We need a better search bar" is a solution. The opportunity is "Users can't find content relevant to their interests." Practice separating the two. +- **Overreacting to the latest interview.** The tree prevents this by providing a big-picture view. One interview = one data point. Update the tree after every 3–4 interviews, not after every single one. +- **Skipping opportunity mapping.** Teams that jump from interview to solution miss the chance to compare opportunities strategically. +- **Setting the wrong outcome.** If your outcome isn't connected to business strategy, the whole tree drifts. Re-validate your outcome quarterly. + +--- + +## 7. 
Tools for Continuous Discovery

| Category | Tools | Purpose |
|---|---|---|
| **Recruiting & Scheduling** | Ethnio, Orbital, Great Question, Calendly, UserInterviews | Automate participant recruitment and scheduling |
| **Video & Transcription** | Zoom, Grain, Otter.ai, Rev, Descript | Record interviews and generate transcripts |
| **Research Repository & Analysis** | Dovetail, Condens, EnjoyHQ, Notion, Airtable | Store, tag, search, and synthesize research data |
| **Synthesis & Mapping** | Miro, FigJam, MURAL | Interview snapshots, affinity maps, experience maps, OSTs |
| **Opportunity Solution Trees** | Miro (Product Talk templates), Vistaly, ProductBoard | Visualize and manage the opportunity space |
| **AI-Assisted Analysis** | Dovetail AI, Condens AI, ChatGPT, Claude | Auto-transcription, auto-tagging, summarization (always human-validate) |

**A note on AI tools:** AI can speed up transcription, suggest tags, and draft theme summaries. However, **do not rely on AI exclusively for synthesis** (per Teresa Torres). The act of personally reviewing conversations and identifying patterns is where deep understanding forms. Use AI to surface things you might overlook, not to replace your thinking.

---

## 8. Building the Habit: Making Discovery Stick

### 8.1 Weekly Cadence Template

| Day | Activity | Time |
|---|---|---|
| **Monday** | Review upcoming interview schedule (auto-populated) | 5 min |
| **Tuesday** | Conduct interview #1; complete interview snapshot | 45–60 min |
| **Thursday** | Conduct interview #2; complete interview snapshot | 45–60 min |
| **Friday** | Cross-interview synthesis: update OST, review patterns | 30–45 min |

This is approximately **2–3 hours per week** — roughly 5–7% of a trio's working hours.

### 8.2 Protect Discovery Time

- **Treat discovery like sprint planning** — it's not optional; it's on the calendar.
- **Batch interviews** — Don't spread them across random slots.
Dedicated blocks reduce context-switching. +- **Rotate moderation** — Each trio member should take turns leading interviews to build shared capability. +- **Share snapshots visibly** — Post them in a team channel (Slack, Teams) or a shared Miro board so stakeholders stay informed without attending every session. + +### 8.3 Scaling Across Teams + +- **Create a shared codebook** — Standard tags and definitions across teams enable cross-team insight discovery. +- **Maintain a centralized research repository** — All snapshots, nuggets, and themes live in one searchable place. +- **Run periodic "insight jams"** — Monthly sessions where multiple trios review each other's OSTs and cross-pollinate opportunities. +- **Train PMs and designers on story-based interviewing** — The skill gap is the bottleneck, not the process. + +--- + +## 9. Quick-Reference Checklists + +### Pre-Interview Checklist +- [ ] Outcome defined and agreed upon by the product trio +- [ ] Discussion guide prepared (2–3 story prompts, follow-up questions) +- [ ] Participant recruited and confirmed (screener passed) +- [ ] Recording tool set up and tested +- [ ] Trio roles assigned (moderator, note-taker, observer) +- [ ] Three most important learning goals written down (The Mom Test) + +### During-Interview Checklist +- [ ] Warm-up complete; participant is comfortable +- [ ] Collecting specific stories about past behavior (not opinions about the future) +- [ ] Redirecting generalizations back to specifics +- [ ] Using active listening (echoing, silence, "tell me more") +- [ ] Not pitching solutions or leading the witness +- [ ] Capturing timestamps of key moments for later reference + +### Post-Interview Checklist +- [ ] Interview snapshot completed within 15–20 minutes +- [ ] Opportunities and insights documented +- [ ] Experience map drawn for the story collected +- [ ] Snapshot shared with the team +- [ ] Follow-up items logged +- [ ] Opportunities added to the Opportunity Solution Tree (after every 
3–4 interviews) + +### Transcript Analysis Checklist +- [ ] Transcript reviewed and cleaned (names, jargon corrected) +- [ ] Key moments highlighted as atomic evidence units +- [ ] Highlights tagged using shared codebook +- [ ] Themes identified through affinity mapping +- [ ] Themes documented with supporting quotes and source references +- [ ] Findings connected to existing opportunities on the OST +- [ ] Insights stored in research repository for future reference + +--- + +## 10. Recommended Reading & Sources + +| Resource | Author | Key Contribution | +|---|---|---| +| *Continuous Discovery Habits* | Teresa Torres | The definitive framework for weekly customer touchpoints, interview snapshots, and Opportunity Solution Trees | +| *The Mom Test* | Rob Fitzpatrick | Rules for asking questions that produce truthful, useful answers | +| Product Talk Blog (producttalk.org) | Teresa Torres | Story-based interviewing, opportunity mapping, and OST deep dives | +| NN/g User Interviews 101 | Nielsen Norman Group | Foundational interviewing methodology for UX researchers | +| *Thinking, Fast and Slow* | Daniel Kahneman | Understanding System 1 vs. System 2 thinking and why story-based questions produce better data | +| Atomic Research | Tomer Sharon & Daniel Pidcock | Breaking research into reusable, searchable nuggets | +| Dovetail/Condens Workflows | Various | Practical transcript-to-theme synthesis workflows | + +--- + +*This playbook is a living document. Update it as your team's discovery practice matures. 
The goal is not perfection — it's a sustainable habit of learning from your customers every single week.* From 694b46a8de4c16074cf7664e490e780954b655f3 Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Wed, 11 Feb 2026 20:35:02 -0500 Subject: [PATCH 03/13] docs: add sample research artifacts to test user research workflow MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Test the full research workflow (plan → process → personas) with real transcripts from SAM interviews, producing a research plan, two interview snapshots, and a synthesized persona. Co-Authored-By: Claude Opus 4.6 --- .../interviews/2026-01-29-participant-002.md | 169 +++++ .../interviews/2026-02-09-participant-001.md | 152 ++++ .../the-front-line-account-guardian.md | 101 +++ ...-management-effectiveness-research-plan.md | 128 ++++ ...e,_and_SelectHealth_accounts_transcript.md | 556 ++++++++++++++ ...rformance_review_with_Krista_transcript.md | 718 ++++++++++++++++++ 6 files changed, 1824 insertions(+) create mode 100644 docs/research/interviews/2026-01-29-participant-002.md create mode 100644 docs/research/interviews/2026-02-09-participant-001.md create mode 100644 docs/research/personas/the-front-line-account-guardian.md create mode 100644 docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md create mode 100644 docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md create mode 100644 docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md diff --git a/docs/research/interviews/2026-01-29-participant-002.md b/docs/research/interviews/2026-01-29-participant-002.md new file mode 100644 index 00000000..538a19c5 --- /dev/null +++ b/docs/research/interviews/2026-01-29-participant-002.md @@ -0,0 +1,169 @@ +--- +participant_id: user-002 +role: "Strategic Account Manager" +company_type: "Healthcare analytics 
vendor" +date: 2026-01-29 +research_plan: "account-management-effectiveness" +source_transcript: "2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md" +focus: "Tool fragmentation, user activity reporting, account health signals, leadership reporting burden" +duration_minutes: 26 +tags: [tool-fragmentation, user-activity, account-health, churn-risk, reporting-burden, contract-management] +--- + +# Interview Snapshot: user-002 + +## Summary + +This SAM described managing accounts across 8+ different tools (Salesforce, Program Manager, DPI, Jira, SharePoint, Confluence, Upslow, Dropbox, SFTP) and the resulting friction in day-to-day work. The conversation centered on building user activity reports for WellCare, Centene, and SelectHealth -- revealing that the SAM needs licensed-vs-active user data for contract negotiations but has no single place to find it. The participant articulated clear signals for account health (regular contact, responsiveness, silence as a warning) and described a weekly Monday ritual of updating every Salesforce opportunity only to still be asked the same questions by leadership. They validated the account health dashboard concept as valuable for leadership, SAMs, and sales alike. + +## Experience Map + +``` +Trigger → Context → Actions → Obstacles → Workarounds → Outcome +``` + +| Step | What Happened | Feeling | Tools/Process | +|------|--------------|---------|---------------| +| Trigger | Needed user activity reports for WellCare, Centene, and SelectHealth accounts | Practical, task-oriented | DPI, Metabase | +| Context | Manages accounts across 8+ different tools daily | Overwhelmed ("Oh, no... Good grief. 
Hang on.") | Salesforce, Program Manager, DPI, Jira, SharePoint, Confluence, Upslow, Dropbox, SFTP | +| Action 1 | Built filtered user reports together (active users, never-logged-in users, by account) | Engaged, collaborative | Metabase/DPI admin | +| Obstacle | Can't determine how many users are licensed per contract vs. how many exist in the system | Frustrated ("I need to know how many users they have registered under their contract, and I don't know how we do that") | Salesforce (contracts), DPI (users) | +| Action 2 | Updates every Salesforce opportunity every Monday with latest status for leadership | Exhausted ("it's exhausting from a Sam perspective") | Salesforce | +| Obstacle | Leadership still asks for status updates despite Monday updates | Frustrated ("I literally do this every single Monday") | Salesforce, email/Teams | +| Workaround | Maintains churn risk opportunities AND renewal notes separately in Salesforce to double-flag risks | Resigned ("there's two places that this says the exact same thing") | Salesforce | +| Action 3 | Reviewed account health dashboard prototype | Enthusiastic ("That is great... Yeah. That's excellent.") | Custom dashboard | +| Outcome | Received automated monthly email reports; agreed dashboard approach is valuable for leadership and SAMs | Positive | Metabase email subscription | + +## Insights + +### Pain Points + +> "Salesforce, program manager, decision point insights, Jira... SharePoint, Confluence... Upslow... Dropbox... SFTP crap." +- **Type:** pain-point +- **Topics:** tool-fragmentation +- **Context:** When asked how many systems they log into, the SAM listed 8-9 tools in rapid succession with visible exasperation + +> "I literally do this every single Monday. So it's hard, but I get it." 
+- **Type:** pain-point +- **Topics:** reporting-burden +- **Context:** Updates every Salesforce opportunity weekly, but leadership still asks for the same information + +> "a leader can't go through these one by one to see what they say" +- **Type:** pain-point +- **Topics:** reporting-burden, account-health +- **Context:** Explaining why leadership can't self-serve account status from Salesforce -- too many opportunities across too many accounts + +> "all I'm doing every week is giving you those notes and you're... she's managing five of me, and we all have 12, 15 accounts" +- **Type:** pain-point +- **Topics:** reporting-burden +- **Context:** Describing the scale of the status update burden -- 5 SAMs x 12-15 accounts each + +> "I had a customer where we didn't send messages for an entire quarter. And no one caught it. The customer caught it." +- **Type:** pain-point +- **Topics:** account-health, churn-risk +- **Context:** Describing a major service failure that went undetected internally until the customer escalated + +> "they've been asked for people to get access who already have access, so they don't even know who does and doesn't" +- **Type:** pain-point +- **Topics:** user-activity, contract-management +- **Context:** SelectHealth requesting access for users who already have accounts, showing they lack visibility into their own user base + +### Needs + +> "I need to know how many users they have registered under their contract, and I don't know how we do that." +- **Type:** need +- **Topics:** user-activity, contract-management +- **Context:** Needs licensed-vs-active user count per contract for renewal negotiations, but data lives in separate systems + +> "I need some sort of report that can tell me." 
+- **Type:** need +- **Topics:** user-activity, reporting-burden +- **Context:** Needs automated user activity reporting rather than manual data gathering across systems + +> "I also need to know if they are active because they have asked for who's not using this and who is." +- **Type:** need +- **Topics:** user-activity +- **Context:** Client (WellCare) specifically requesting active vs. inactive user data, which the SAM can't easily provide + +### Behaviors + +> "I go and update these every single Monday. I update every single opportunity that I have with what's the latest and greatest update so that leadership has it" +- **Type:** behavior +- **Topics:** reporting-burden +- **Context:** Weekly Monday ritual of updating all Salesforce opportunities with current status + +> "I have in the notes here, like, at risk, met with Centene, blah blah blah. So there's that. And then on top of that, I have the churn risk opportunity." +- **Type:** behavior +- **Topics:** churn-risk, account-health +- **Context:** Maintaining duplicate risk documentation -- both in renewal notes and as separate churn risk opportunities in Salesforce + +> "you can enter in risk scores on, like, where that's at" +- **Type:** behavior +- **Topics:** account-health +- **Context:** Using Salesforce risk score history feature to track account health over time + +### Workarounds + +> "I wanted to call it out. Like, hey. There is a churn risk for this... So there's two places that this says the exact same thing." +- **Type:** workaround +- **Topics:** churn-risk, reporting-burden +- **Context:** Duplicating churn risk documentation in both renewal opportunity notes and a separate churn risk opportunity to make sure it's visible + +> "my goal is to put into their upcoming renewal... You can have 50 users. Or they pay a fee of some time to maintain 40,000 users." 
+- **Type:** workaround +- **Topics:** contract-management, user-activity +- **Context:** Planning to use renewal negotiations to cap unlimited user growth at SelectHealth (40,000 users with no contractual limit) + +### Desires + +> "Can it be community instead of Jira? Because I know plan trying to get us completely out of Jira and selfishly, I hate Jira." +- **Type:** desire +- **Topics:** tool-fragmentation +- **Context:** Strong preference for Salesforce Community over Jira for task management; wants fewer tools, not more + +### Motivations + +> "knowing this information... help me so I can have a conversation with them." +- **Type:** motivation +- **Topics:** user-activity, contract-management +- **Context:** User activity data isn't just internal -- it's ammunition for contract negotiations with clients + +> "from a leadership perspective, your thing could be very helpful. From a Sam, there's aspects of it that would be helpful. And then for sure from sales, it would be good." +- **Type:** motivation +- **Topics:** account-health, dashboard +- **Context:** Validating the account health dashboard as valuable across three audiences: leadership, SAMs, and sales + +## Opportunities + +| # | Opportunity | Evidence Strength | Quote | +|---|-----------|------------------|-------| +| 1 | SAMs need a single view of account health that leadership can self-serve without asking for manual updates | Strong | "a leader can't go through these one by one... so it's just easier to ask a Sam, but it's exhausting" | +| 2 | SAMs need a way to see licensed-vs-active users per contract in one place | Strong | "I need to know how many users they have registered under their contract, and I don't know how we do that" | +| 3 | Teams need automated detection when service delivery fails (e.g., messages not sent for a quarter) | Strong | "we didn't send messages for an entire quarter. And no one caught it. The customer caught it." 
| +| 4 | SAMs need consolidated account data without logging into 8+ tools | Strong | Lists 8-9 tools when asked; visible exasperation | +| 5 | SAMs need a way to share user activity data with clients for joint account governance | Medium | "they've been asked for people to get access who already have access, so they don't even know who does and doesn't" | +| 6 | SAMs need automated churn risk detection based on usage signals so they can intervene early | Medium | Validated the concept of automated flags: "your thing could be very helpful" across three audiences | + +## Hypothesis Tracking + +| # | Hypothesis | Status | Evidence | +|---|-----------|--------|----------| +| 1 | SAMs spend a disproportionate amount of time on manual reporting and status updates | SUPPORTED | "I literally do this every single Monday" -- updates all opportunities weekly; "all I'm doing every week is giving you those notes"; leadership still asks despite documentation | +| 2 | Account health is primarily assessed through relationship signals rather than quantitative data | SUPPORTED | Health signals described are almost entirely relational: "regular contact," "responsive," "quiet," "fidgety and weird about contracts." Quantitative data is fragmented across 8+ tools | +| 3 | Higher product usage correlates with lower churn risk | SUPPORTED | Validated when shown the dashboard concept; agreed with the MAU axis as meaningful; "to the right is larger... K." (engaged with the metric) | +| 4 | Support ticket triage gaps are invisible to SAMs until customer escalation | SUPPORTED | "we didn't send messages for an entire quarter. And no one caught it. The customer caught it." 
-- direct evidence of invisible service failure | + +## Behavioral Observations + +- **Tools mentioned:** Salesforce (opportunities, risk scores, community/cases), Program Manager, DPI (Decision Point Insights), Jira, SharePoint, Confluence, Upslow (billing), Dropbox (analytics files), SFTP, Metabase (reporting), Microsoft Teams +- **Frequency indicators:** Updates every opportunity "every single Monday"; gets status requests from leadership weekly; SelectHealth adds new users "every single week" +- **Emotional signals:** Exasperation listing tools ("Oh, no... Good grief"); exhaustion about reporting burden ("it's exhausting"); enthusiasm about the dashboard concept ("That is great... Yeah. That's excellent"); frustration about redundant asks ("I literally do this every single Monday") +- **Workaround patterns:** Duplicating churn risk in two Salesforce locations for visibility; planning contract caps as workaround for unlimited user growth; building one-off Metabase reports to answer recurring data needs + +## Human Review Checklist + +- [ ] All quotes verified against source transcript +- [ ] Experience map accurately reflects story arc +- [ ] Opportunities reflect participant needs, not assumed solutions +- [ ] Tags accurate and consistent with existing taxonomy +- [ ] No insights fabricated or composited from multiple participants diff --git a/docs/research/interviews/2026-02-09-participant-001.md b/docs/research/interviews/2026-02-09-participant-001.md new file mode 100644 index 00000000..ee48c2f8 --- /dev/null +++ b/docs/research/interviews/2026-02-09-participant-001.md @@ -0,0 +1,152 @@ +--- +participant_id: user-001 +role: "Strategic Account Manager" +company_type: "Healthcare analytics vendor" +date: 2026-02-09 +research_plan: "account-management-effectiveness" +source_transcript: "2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md" +focus: "Account management leadership changes, account health dashboard 
feedback, support ticket triage"
+duration_minutes: 27
+tags: [account-health, management-style, dashboard, support-triage, churn-risk, product-usage]
+---
+
+# Interview Snapshot: user-001
+
+## Summary
+
+This SAM described the recent departure of their director of customer success and its impact on the team, contrasting the departed leader's punitive management style with a more supportive interim approach. The participant validated a prototype account health dashboard that plots accounts by monthly active users and renewal/churn status, confirming the core thesis that higher product usage correlates with lower churn risk. The conversation also surfaced ongoing support ticket triage failures where tickets auto-close after 5 days without proper routing, and QA constraints caused by offshore teams lacking access to production data.
+
+## Experience Map
+
+```
+Trigger → Context → Actions → Obstacles → Workarounds → Outcome
+```
+
+| Step | What Happened | Feeling | Tools/Process |
+|------|--------------|---------|---------------|
+| Trigger | Director of customer success departed; team restructured | Relieved but concerned ("we still need that person") | - |
+| Context | Team performance to plan had dropped from 80%+ to 70% under previous leadership | Validated ("it was part of my quarter one predictions") | Salesforce (renewals) |
+| Action 1 | Shown prototype account health dashboard plotting accounts by MAU and health status | Impressed ("I cannot believe you created it... 
this is what the director should have built") | Custom dashboard prototype | +| Action 2 | Reviewed specific accounts (Highmark, Elevance, Medical Mutual) on the dashboard | Engaged, correcting misclassifications | Dashboard, Salesforce | +| Obstacle | Metrics don't distinguish organic churn (member population decline) from voluntary churn | Constructive ("that's their business, not like the client said we don't want you") | - | +| Action 3 | Discussed support ticket triage failures -- tickets auto-closing, improper routing | Frustrated ("no one took the steps to...") | Community (Salesforce), Jira | +| Workaround | Creating duplicate Teams threads for every support case to ensure visibility | Resigned ("I'm just gonna double up") | Microsoft Teams, Community | +| Outcome | Agreed to continue reviewing dashboard; identified Jackie and Beth as next stakeholders | Positive, forward-looking | - | + +## Insights + +### Pain Points + +> "our meetings were, like, the tone of it was just like, you're a bad kid, problem child. You didn't write this note, and I need it this way." +- **Type:** pain-point +- **Topics:** management-style +- **Context:** Describing the departed director's approach to team meetings, which killed motivation + +> "I don't think the negative tone, like, motivated anyone to, like, go above and beyond. You know? You're just like, oh, they're churning. Not gonna try to save it." +- **Type:** pain-point +- **Topics:** management-style, churn-risk +- **Context:** Linking punitive management directly to reduced effort on saving at-risk accounts + +> "someone will... and I'm like, those people I could be the person and get it better triage sometimes... And I won't know about it." +- **Type:** pain-point +- **Topics:** support-triage +- **Context:** Support tickets not routed to the right people; SAM unaware of client issues + +> "Mark? Does Mark know he needs to create tickets on other boards?" 
+- **Type:** pain-point +- **Topics:** support-triage +- **Context:** New support staff not trained on cross-board ticket routing + +> "there's that rule for five days, no response on the ticket. So the... not gonna respond. Then it auto closes" +- **Type:** pain-point +- **Topics:** support-triage +- **Context:** Tickets auto-close without resolution because they weren't properly routed + +### Needs + +> "until there's more things structured, operationalized, I think for one person, sucks." +- **Type:** need +- **Topics:** account-health, operational-process +- **Context:** Acknowledging the account management role is too much for one person without better systems + +> "I think some improvements on the triaging because I'll notice, like, someone will... and I could be the person and get it better triage sometimes." +- **Type:** need +- **Topics:** support-triage +- **Context:** Better ticket triage routing so the right people see client issues + +### Behaviors + +> "Kim's style is like, hey. You got a million dollars coming up. Let me know how it's going. Like, and just open floor and keeping it light." +- **Type:** behavior +- **Topics:** management-style +- **Context:** Describing the effective interim manager's approach -- supportive, outcome-focused + +> "I should try to schedule, like, once you have a new release, set up time with product or even lead it yourself. Like, show them the new feature. Connect it with something they've mentioned in a meeting before. Offer trainings, send them things." +- **Type:** behavior +- **Topics:** product-usage, account-health +- **Context:** Describing proactive engagement tactics to drive product adoption and reduce churn + +> "prior to her coming in, we were always hitting the 80% threshold. 
If not achieving 85% to plan 90%" +- **Type:** behavior +- **Topics:** account-health, churn-risk +- **Context:** Historical team performance on renewal targets before leadership change + +### Workarounds + +> "every case, I'm gonna create a Teams thread just so people know that it's there." +- **Type:** workaround +- **Topics:** support-triage +- **Context:** Duplicating every support case into Teams because the ticketing system doesn't ensure visibility to the right people + +### Desires + +> "I could be the provider scorecard's project manager. I could also do QA." +- **Type:** desire +- **Topics:** product-quality, operational-process +- **Context:** SAM volunteering to take on product/QA responsibilities because current quality assurance is inadequate + +### Motivations + +> "the more people are using it, the less likely you are to churn. Like, in general." +- **Type:** motivation +- **Topics:** product-usage, churn-risk +- **Context:** Validating the core hypothesis that product usage is the key leading indicator of account health + +> "If it's more sticky, they've adopted it more. It meets 80% of what they need in a tool." +- **Type:** motivation +- **Topics:** product-usage, account-health +- **Context:** Explaining why usage correlates with retention -- it reflects genuine value delivery + +## Opportunities + +| # | Opportunity | Evidence Strength | Quote | +|---|-----------|------------------|-------| +| 1 | SAMs need a way to distinguish organic churn (member population changes) from voluntary churn (client dissatisfaction) | Strong | "that's their business... 
it's not like the client said we don't want you" | +| 2 | SAMs need a way to monitor support ticket status for their accounts without manually duplicating into Teams | Strong | "every case, I'm gonna create a Teams thread just so people know that it's there" | +| 3 | Account management needs a visual overview of account health that plots usage against renewal/churn status | Strong | "I cannot believe you, of all people, created it... this is what the director of customer success should have been partnering with ops to create" | +| 4 | SAMs need a way to identify which accounts have low product usage so they can proactively drive adoption | Medium | "as a Sam, there are no users are logging in. I should try to schedule, like, once you have a new release, set up time" | +| 5 | Support teams need clearer triage routing so tickets reach the right team before auto-close | Strong | "no one took the steps to... then it auto closes" | + +## Hypothesis Tracking + +| # | Hypothesis | Status | Evidence | +|---|-----------|--------|----------| +| 1 | SAMs spend a disproportionate amount of time on manual reporting and status updates | MIXED | Not directly discussed in this interview, but the SAM described duplicating support cases into Teams as a workaround -- a form of redundant manual work | +| 2 | Account health is primarily assessed through relationship signals rather than quantitative data | SUPPORTED | SAM validated the dashboard concept as novel -- "I cannot believe you created it... 
this is what the director should have built" -- implying this quantitative view didn't exist before | +| 3 | Higher product usage correlates with lower churn risk | SUPPORTED | "the more people are using it, the less likely you are to churn" and "If it's more sticky, they've adopted it more" | +| 4 | Support ticket triage gaps are invisible to SAMs until customer escalation | SUPPORTED | "And I won't know about it" -- SAM unaware of ticket issues; Blue Shield download error was set to auto-close until SAM "caught it, changed the status" after returning from PTO | + +## Behavioral Observations + +- **Tools mentioned:** Salesforce (renewals/opportunities), Jira (tickets), Community (Salesforce support cases), Microsoft Teams (informal communication), DPI (Decision Point Insights -- the analytics product), prototype account health dashboard +- **Frequency indicators:** "always hitting 80% threshold" (quarterly renewals), support ticket auto-close at 5 days, team meetings (regular cadence implied) +- **Emotional signals:** Relief about leadership change ("I feel good"), frustration with support triage ("what are you doing?"), genuine enthusiasm about dashboard prototype ("this is excellent"), resignation about workarounds ("I'm just gonna double up") +- **Workaround patterns:** Duplicating support cases into Teams threads for visibility; volunteering to do QA and PM work outside their role because current processes are inadequate + +## Human Review Checklist + +- [ ] All quotes verified against source transcript +- [ ] Experience map accurately reflects story arc +- [ ] Opportunities reflect participant needs, not assumed solutions +- [ ] Tags accurate and consistent with existing taxonomy +- [ ] No insights fabricated or composited from multiple participants diff --git a/docs/research/personas/the-front-line-account-guardian.md b/docs/research/personas/the-front-line-account-guardian.md new file mode 100644 index 00000000..bceecc3e --- /dev/null +++ 
b/docs/research/personas/the-front-line-account-guardian.md @@ -0,0 +1,101 @@ +--- +name: "The Front-Line Account Guardian" +role: "Strategic Account Manager" +company_type: "Healthcare analytics vendor" +last_updated: 2026-02-11 +interview_count: 2 +confidence: medium +source_interviews: [user-001, user-002] +version: 1 +--- + +# The Front-Line Account Guardian + +## Overview + +The Front-Line Account Guardian is a Strategic Account Manager at a healthcare analytics vendor who manages 12-15 enterprise accounts spanning health plans, provider networks, and corporate entities. They are the primary human interface between the company and its customers -- the first to notice when something's wrong and the last line of defense before churn. + +They operate across 8+ tools daily (Salesforce, Jira, DPI, SharePoint, Confluence, Dropbox, and more), piecing together a fragmented picture of account health from relationship signals, meeting tone, and scattered data. Despite being highly organized and proactive -- updating every Salesforce opportunity weekly, creating duplicate documentation for visibility, and maintaining direct client relationships -- they are buried under a reporting burden that leadership relies on because the systems themselves don't surface what matters. + +They assess account health primarily through relationship quality (responsiveness, meeting engagement, willingness to discuss contracts) rather than quantitative metrics, not because they distrust data, but because no single tool gives them a consolidated view. When shown a prototype that plots accounts by usage and health status, they immediately validated it and wished it had existed sooner. Their deepest frustration is doing work that systems should automate -- and still being asked for it manually. + +## Goals + +1. Keep accounts healthy and renewing by maintaining strong client relationships (2/2 participants) +2. 
Have a consolidated, at-a-glance view of account health without logging into 8+ tools (2/2 participants) +3. Reduce time spent on internal reporting so more time goes to client-facing work (2/2 participants) +4. Identify churn risks early enough to intervene effectively (2/2 participants) +5. Use data (user activity, contract utilization) as leverage in renewal negotiations (1/2 participants) + +## Frustrations + +1. Leadership asks for status updates that are already documented in Salesforce (2/2 participants) +2. Too many tools to log into daily to get a complete picture of accounts (1/2 participants) +3. Support tickets get improperly triaged or auto-close without resolution, and SAMs aren't notified (2/2 participants) +4. No single place to see licensed-vs-active users per contract (1/2 participants) +5. Punitive management styles kill motivation to go above and beyond on at-risk accounts (1/2 participants) +6. Service delivery failures (e.g., messages not sent for a quarter) go undetected until the customer escalates (1/2 participants) +7. 
QA limitations (offshore team can't access production data) mean SAMs catch bugs that should have been caught earlier (1/2 participants) + +## Behaviors + +| Behavior | Frequency | Evidence | +|----------|-----------|----------| +| Updates every Salesforce opportunity with latest status | Weekly (Mondays) | (1/2 participants) | +| Maintains duplicate risk documentation in multiple Salesforce locations | Ongoing | (1/2 participants) | +| Assesses account health through relationship signals (meeting tone, responsiveness, silence) | Continuous | (2/2 participants) | +| Proactively schedules product demos and trainings for clients with low usage | As needed | (1/2 participants) | +| Creates parallel Teams threads for support cases to ensure internal visibility | Per case | (1/2 participants) | +| Enters and updates risk scores in Salesforce risk score history | Ongoing | (1/2 participants) | +| Validates product prototypes with enthusiasm when they address real pain points | When shown | (2/2 participants) | + +## Key Quotes + +> "all I'm doing every week is giving you those notes and you're... she's managing five of me, and we all have 12, 15 accounts." +> -- user-002, on the reporting burden to leadership + +> "the more people are using it, the less likely you are to churn. Like, in general." +> -- user-001, validating usage as a leading indicator of account health + +> "I cannot believe you, of all people, created it... this is what the director of customer success should have been partnering with ops to create." +> -- user-001, reacting to the account health dashboard prototype + +> "we didn't send messages for an entire quarter. And no one caught it. The customer caught it." +> -- user-002, describing an invisible service delivery failure + +> "I literally do this every single Monday. So it's hard, but I get it." 
+> -- user-002, on the weekly Salesforce update ritual that leadership still asks about + +## Opportunities + +| # | Opportunity | Evidence Strength | Participants | Key Quote | +|---|-----------|------------------|-------------|-----------| +| 1 | SAMs need a self-serve account health view that leadership can access without asking for manual updates | Strong | user-001, user-002 | "a leader can't go through these one by one... so it's just easier to ask a Sam, but it's exhausting" | +| 2 | SAMs need automated detection of service delivery failures and churn risk signals | Strong | user-001, user-002 | "we didn't send messages for an entire quarter. And no one caught it." | +| 3 | SAMs need support ticket visibility -- proper routing and notification before auto-close | Strong | user-001, user-002 | "no one took the steps to... then it auto closes" / "And I won't know about it" | +| 4 | SAMs need consolidated account data without logging into 8+ fragmented tools | Medium | user-002 | Lists 8-9 tools with visible exasperation | +| 5 | SAMs need licensed-vs-active user data per contract for renewal negotiations | Medium | user-002 | "I need to know how many users they have registered under their contract, and I don't know how we do that" | +| 6 | SAMs need a way to distinguish organic churn (member population decline) from voluntary churn | Medium | user-001 | "that's their business... it's not like the client said we don't want you" | +| 7 | SAMs need a way to share user activity data with clients for joint account governance | Weak | user-002 | "they don't even know who does and doesn't" | + +## Divergences + +_No divergences identified yet._ + +Both participants are closely aligned on the core pain points (reporting burden, fragmented tools, support triage gaps) and the value of consolidated account health monitoring. No contradictions surfaced across the two interviews. 
+ +## Evidence + +| Participant | Research Plan | Date | Focus | +|------------|--------------|------|-------| +| user-001 | account-management-effectiveness | 2026-02-09 | Account management leadership changes, account health dashboard feedback, support ticket triage | +| user-002 | account-management-effectiveness | 2026-01-29 | Tool fragmentation, user activity reporting, account health signals, leadership reporting burden | + +## Human Review Checklist + +- [ ] Goals and frustrations grounded in interview evidence +- [ ] Behavior counts accurate (absence not counted as negative) +- [ ] Quotes are exact (verified against source interviews) +- [ ] Opportunities framed as needs, not solutions +- [ ] Divergences section reflects actual contradictions +- [ ] Confidence level matches interview count threshold diff --git a/docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md b/docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md new file mode 100644 index 00000000..5e193900 --- /dev/null +++ b/docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md @@ -0,0 +1,128 @@ +--- +title: "Account management team effectiveness and tooling" +date: 2026-02-11 +status: planned +outcome: "Inform the design of consolidated account health tooling and streamlined SAM workflows that reduce manual reporting, surface churn risks automatically, and give leadership visibility without burdening individual contributors" +hypotheses: + - "SAMs spend a disproportionate amount of time on manual reporting and status updates that leadership could get from existing systems" + - "Account health is primarily assessed through relationship signals (meeting tone, responsiveness) rather than quantitative data because quantitative data is fragmented across too many tools" + - "Higher product usage (monthly active users) correlates with lower churn risk, and SAMs intuitively know this but lack a systematic way to monitor it" 
+ - "Support ticket triage and resolution gaps are invisible to SAMs and product teams until the customer escalates"
+participant_criteria: "Strategic Account Managers (SAMs), Account Managers, and CS leadership managing healthcare/enterprise analytics accounts"
+sample_size: 6
+interviews_completed: 2
+---
+
+# Account Management Team Effectiveness and Tooling
+
+## Objective
+
+Understand how account management teams currently assess account health, manage their workflows across fragmented tools, and communicate status to leadership -- to inform building consolidated tooling that automates churn risk detection, surfaces upsell opportunities, and reduces the manual reporting burden on SAMs.
+
+This research will inform both product decisions (what to build in an account health system) and process decisions (how to restructure workflows so SAMs spend more time with customers and less time on internal reporting).
+
+## Three Most Important Things to Learn
+
+1. **Current behavior:** How do SAMs actually assess whether an account is healthy or at risk today? What signals do they use, and where do those signals live across their tool stack?
+2. **Pain points:** Where do SAMs lose the most time in their weekly workflow, and what are the biggest gaps between what leadership needs to know and what's easily accessible?
+3. **Desired outcomes:** What would "good" look like for SAMs and their leaders? What would change about their day-to-day if account health were automatically monitored?
+ +## Hypotheses + +| # | Hypothesis | Status | +|---|-----------|--------| +| 1 | SAMs spend a disproportionate amount of time on manual reporting and status updates that leadership could get from existing systems | UNTESTED | +| 2 | Account health is primarily assessed through relationship signals (meeting tone, responsiveness) rather than quantitative data because quantitative data is fragmented across too many tools | UNTESTED | +| 3 | Higher product usage (monthly active users) correlates with lower churn risk, and SAMs intuitively know this but lack a systematic way to monitor it | UNTESTED | +| 4 | Support ticket triage and resolution gaps are invisible to SAMs and product teams until the customer escalates | UNTESTED | + +## Participant Criteria + +**Include:** +- Strategic Account Managers or Account Managers who manage 5+ accounts +- CS leaders who manage teams of SAMs and need cross-account visibility +- Delivery or product managers who interact with account health data +- People who have experienced at least one churn or churn risk in the past 6 months + +**Exclude:** +- Sales-only roles who don't manage ongoing accounts +- SAMs with fewer than 6 months tenure (not enough workflow patterns established) +- Offshore support staff (different tool access and workflows) + +### Screener Questions + +1. "How many accounts do you currently manage or oversee?" +2. "How many different tools do you log into on a typical Monday to stay on top of your accounts?" +3. "In the last 3 months, how many times has leadership asked you for an account status update that you'd already documented somewhere?" +4. "Describe the last account you considered at risk for churn -- how did you first know something was wrong?" + +## Discussion Guide + +### Opening (2-3 min) + +- Introduce yourself and the purpose: learning about how account management works day-to-day, not evaluating anyone's performance +- "I'd love to hear about your actual experience managing accounts. 
There are no wrong answers -- I'm trying to understand the real workflow, not the ideal one." + +### Story Elicitation (15-20 min) + +**Primary story prompt:** +> "Walk me through your last Monday morning. From when you sat down at your desk, what did you do first to get up to speed on your accounts?" + +**Follow-up probes:** +- "What happened next?" +- "Which tool did you open first? Why that one?" +- "How did you know where things stood with [specific account]?" +- "What were you trying to figure out?" +- "Was there anything you couldn't find or had to piece together from multiple places?" + +**Second story prompt (churn/risk specific):** +> "Tell me about the last time you realized an account was in trouble. Walk me through how you first noticed." + +**Follow-up probes:** +- "What was the first signal that something was off?" +- "How long had it been going on before you noticed?" +- "What did you do about it?" +- "Who else needed to know? How did you communicate it?" +- "Was there anything that could have alerted you earlier?" + +### Depth Probes (5-10 min) + +- "You mentioned updating [Salesforce/Jira/etc.] -- how much time do you spend on that per week?" +- "When leadership asks for a status update, what do they actually need that they can't get themselves?" +- "Has there been a time when a support ticket or product issue affected an account and you didn't know about it until the customer told you?" +- "If something could automatically flag at-risk accounts for you, what signals would you trust it to look at?" +- "Why was [that specific workflow/workaround] important to you?" + +### Closing (2-3 min) + +- "Is there anything about how you manage accounts that I should have asked about?" +- "If you could change one thing about your tools or process tomorrow, what would it be?" +- "Who else on the team should I talk to about this? Anyone who does things very differently from you?" 
+ +## Post-Interview Checklist + +- [ ] Write interview snapshot within 24 hours (run `/workflows:research process`) +- [ ] Note top 3 surprises from this interview +- [ ] Update hypothesis status in this plan +- [ ] Identify follow-up questions for next interview +- [ ] Add new screener criteria if participant fit was imperfect + +## Schedule + +| # | Participant | Date | Status | +|---|-----------|------|--------| +| 1 | Krista (SAM - WellCare/Centene/SelectHealth) | 2026-02-09 | Completed (transcript available) | +| 2 | Ashley (SAM - WellCare/Centene/SelectHealth) | 2026-01-29 | Completed (transcript available) | +| 3 | TBD | TBD | Not scheduled | +| 4 | TBD | TBD | Not scheduled | +| 5 | TBD | TBD | Not scheduled | +| 6 | TBD | TBD | Not scheduled | + +## Human Review Checklist + +- [ ] Objective is outcome-focused (not feature-focused) +- [ ] Hypotheses are falsifiable statements about behavior +- [ ] Screener questions ask about past behavior, not opinions +- [ ] Discussion guide follows story-based structure +- [ ] No leading questions or solution pitching in guide +- [ ] Sample size appropriate for research type diff --git a/docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md b/docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md new file mode 100644 index 00000000..71e82f05 --- /dev/null +++ b/docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md @@ -0,0 +1,556 @@ +--- +title: "User activity report for WellCare, Centene, and SelectHealth accounts" +id: faafe8f5-1d43-4979-9d0f-e7c70092d222 +created_at: 2026-01-29T18:56:32.272Z +updated_at: 2026-01-29T19:22:41.841Z +source: granola +type: transcript +linked_note: 2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts.md +--- +# User activity report for WellCare, Centene, and 
SelectHealth accounts — Transcript + +**You:** **[18:56:32]** Many different systems do you have to log into to find all the right data that you wanna know about your accounts. + +**Other:** **[18:56:43]** Oh, no. Are we only talking about data as in no. Okay. Alright. Well, Salesforce, + +**You:** **[18:56:59]** Yep. + +**Other:** **[18:57:00]** program manager, + +**You:** **[18:57:02]** Okay. + +**Other:** **[18:57:03]** decision point insights, + +**You:** **[18:57:07]** Okay. + +**Other:** **[18:57:08]** Jira, + +**You:** **[18:57:10]** Yep. + +**Other:** **[18:57:14]** Good grief. Hang on. Let's see. I feel like I mean, just Microsoft in general, like SharePoint, Confluence, + +**You:** **[18:57:24]** Yeah. Yep. + +**Other:** **[18:57:27]** So there's five. Upslow, + +**You:** **[18:57:30]** What's that one? + +**Other:** **[18:57:30]** Billing. So there's six. + +**You:** **[18:57:33]** Okay. + +**Other:** **[18:57:38]** Dropbox, seven. + +**You:** **[18:57:40]** What's in Dropbox? + +**Other:** **[18:57:42]** Analytics garbage. + +**You:** **[18:57:46]** Yeah. They had it. I told them. They they've been on that garbage for + +**Other:** **[18:57:46]** Not garbage, but yeah. + +**You:** **[18:57:50]** a while. + +**Other:** **[18:57:52]** Yes. So we're at seven with Dropbox. I mean, SFTP crap. I luckily haven't had to log on there for a bit because they're now starting to do that, but I have access to it. + +**You:** **[18:58:08]** Okay. + +**Other:** **[18:58:09]** That's eight That's what I can think of. Right now. + +**You:** **[18:58:19]** Off the top of your head, pretty good list. + +**Other:** **[18:58:20]** Yeah. + +**You:** **[18:58:22]** Okay. Cool. + +**Other:** **[18:58:22]** Yep. + +**You:** **[18:58:25]** Alright. Well, I would love because I feel like I I have not as big of a purview or like I have my focus is narrower, but, yeah, I I feel the same pain at least in my, like, little world of just + +**Other:** **[18:58:40]** Yeah. 
+ +**You:** **[18:58:42]** focusing on the predict. Folks. And so alright, I have something I wanna show you in a sec, but I am gonna fix this first. Okay. Do you want everyone in WellCare + +**Other:** **[18:58:53]** Yes + +**You:** **[18:58:56]** Do you want everybody who's active? Or you only want active even if they're think there was one that we made one that was like non care managers or network users. + +**Other:** **[18:59:07]** I think we don't just want that. Like, I need to know + +**You:** **[18:59:08]** You need everybody. + +**Other:** **[18:59:12]** I need everyone that we have, like, licenses for because they're paying other license + +**You:** **[18:59:13]** Okay? + +**Other:** **[18:59:17]** level. + +**You:** **[18:59:17]** Okay. Cool. + +**Other:** **[18:59:18]** So whether they're active or not. But I also need to know if they are active because they have asked for who's not using this and who is. + +**You:** **[18:59:23]** Got it. Okay. Cool. Who let's do two things. Let me alright. Alright. Role name contains corp. Is active is true. Let's take this role name out. + +**Other:** **[18:59:45]** Report another. + +**You:** **[18:59:47]** Okay. + +**Other:** **[18:59:49]** Sorry. There was one more. Nine. Got you another one. + +**You:** **[18:59:57]** Alright. How about this? This is something that's pretty good. The I'll have to take out filter username. Email, we just need to make sure it's not + +**Other:** **[19:00:16]** The searching point? + +**You:** **[19:00:20]** yeah. + +**Other:** **[19:00:21]** Or impulse. + +**You:** **[19:00:22]** Email. Visit user user user does not contain decision point or impulse. Okay. Add Enhance. Enhance. + +**Other:** **[19:00:46]** Sweet. + +**You:** **[19:00:46]** Enhance. Alright. There you go. And then so this here is How can I make the alright? Here, we're gonna I'm gonna save this. And we're gonna call save this as a new question. We're gonna call + +**Other:** **[19:01:08]** K. 
+ +**You:** **[19:01:13]** active WellCare users all. + +**Other:** **[19:01:16]** K. + +**You:** **[19:01:17]** All WellCare active, non impulse, Okay. That's one thing. Save. Add this to a dashboard. Ashley. Okay. And I'm gonna take this one out. And then the next thing I'm gonna do is I'm gonna email all this to you, so we don't have to worry about how it looks here. + +**Other:** **[19:01:45]** Okay. + +**You:** **[19:01:46]** It's just gonna be emailed every time. Let's do another one. Which is Do you want all of this with somebody who's logged in in the last three months. + +**Other:** **[19:02:09]** Yeah. + +**You:** **[19:02:11]** When is that the good time is that a good time, Mark? Or you want people who have not + +**Other:** **[19:02:14]** Yeah. I think that that's fair. We we can start with three months. Well, + +**You:** **[19:02:18]** Is it any better? Because, like, it's not + +**Other:** **[19:02:21]** the last login, can I + +**You:** **[19:02:21]** or both? Yeah. I'm gonna just make another one so that we can just + +**Other:** **[19:02:24]** filter by blanks? + +**You:** **[19:02:27]** get another XLS so that you don't have to do your own filtering. + +**Other:** **[19:02:27]** Okay. + +**You:** **[19:02:30]** Unless you whatever. It's you can do your own here, but we could also just make it so that it's + +**Other:** **[19:02:34]** Yeah. Okay. + +**You:** **[19:02:38]** let me + +**Other:** **[19:02:38]** Sure. + +**You:** **[19:02:39]** let's just do that. Okay. So editor, do you want + +**Other:** **[19:02:45]** Let's do have not. + +**You:** **[19:02:45]** people who have or have not logged in? Okay. Last login. + +**Other:** **[19:02:54]** Never. Should be never. If you can. + +**You:** **[19:03:00]** We can. Exclude oh, is empty. Last login. Exclude. Okay. Save. Wait. Cancel. Okay. 427 people who've never logged in. + +**Other:** **[19:03:33]** K. + +**You:** **[19:03:36]** And are + +**Other:** **[19:03:39]** Okay. 
+ +**You:** **[19:03:39]** active. + +**Other:** **[19:03:42]** K. + +**You:** **[19:03:42]** Okay. + +**Other:** **[19:03:44]** Yep. Because then I can go to him and say, what's get rid of these people. + +**You:** **[19:03:48]** Yeah. And there yeah. I can't I don't know what they're a little weird because they have the SSO login. So you wanna just they have, like, the provider network user. Do know how that works? For that, like, weird other side of the thing that Brad works with? + +**Other:** **[19:04:06]** So yeah. So then in curious because if there's 427 people who are active, And I remember you giving me a list where there was only, like, 47 users, actually. + +**You:** **[19:04:19]** Yeah. Yeah. Yeah. That's because it was, like, all people who were not those were only for, like, Sunshine Health. Or Centene I remember there was, like, one Centene Corp one that we, like, tracked all the way down. + +**Other:** **[19:04:39]** So I think what I need is I need to know how many users they have registered under their contract, and I don't know how we do that. + +**You:** **[19:04:47]** Right. + +**Other:** **[19:04:51]** So because they they're only allowed 75, so they + +**You:** **[19:04:51]** Well, yeah, we can we can + +**Other:** **[19:04:55]** clearly have more than that. But + +**You:** **[19:04:56]** One Right. And this is all the whole thing, and they have, like, all the, like, SSO login. There's a whole their whole network team basically logs in to decision point. And some of them, like, only do it, like, once. They're like, I'm going to see doctor so and so Let me print out their reports, and then I'm out. So here's what I'm gonna do. I'm gonna give you all And then I would check with Brad on, like, how we would wanna break it up And I can tell you how some of them let's do let's do this at least. Okay. So we're gonna save This is a new question. It's gonna be called all active users all and never logged in. Okay. + +**Other:** **[19:05:50]** k. 
+ +**You:** **[19:05:52]** And who have never logged in. Okay. We're gonna add this to dashboard. Okay. I think we should go back to that list. That we made earlier and then we should filter out those ones again because it's Okay. I don't think you want any care managers MEM. Centene, + +**Other:** **[19:07:05]** Yes. + +**You:** **[19:07:05]** because I think it's you're the you're a corp. Right? Okay. + +**Other:** **[19:07:13]** I mean, yes, the specific contract is court. + +**You:** **[19:07:18]** Okay. This is the 42 rows. This is the 42 rows. + +**Other:** **[19:07:19]** While you're looking at this, okay. Okay. + +**You:** **[19:07:24]** So I'm gonna save it as a new one, and this is gonna be + +**Other:** **[19:07:25]** Okay. No offense. Centimeters. + +**You:** **[19:07:28]** as a new one. All active WellCare's Centene. + +**Other:** **[19:07:36]** Okay. + +**You:** **[19:07:38]** Okay. + +**Other:** **[19:07:44]** One other question for you because so Centene and then SelectHealth is my only other analytics customer. + +**You:** **[19:07:51]** Okay. + +**Other:** **[19:07:55]** Can I get one for Select? Because + +**You:** **[19:07:55]** Yeah. + +**Other:** **[19:07:59]** I think they have 40,000 users, and they for new users, I feel like, every single week. We have nothing in the contract that prevents them from doing this. So my goal is to put into their + +**You:** **[19:08:15]** Yeah. + +**Other:** **[19:08:15]** upcoming renewal No. We're done. We're done with this. + +**You:** **[19:08:19]** Or it's really yeah. Or yeah. + +**Other:** **[19:08:19]** You can have 50 users. + +**You:** **[19:08:22]** Or it's + +**Other:** **[19:08:25]** Or they pay a fee of some time to maintain + +**You:** **[19:08:25]** yeah. They can tell you that. + +**Other:** **[19:08:28]** 40,000 users. + +**You:** **[19:08:29]** Yeah. Exactly. Let's let me I can figure this out. 
But not all active, but I + +**Other:** **[19:08:35]** But knowing this information, well, + +**You:** **[19:08:36]** corp. Yeah. Yeah. Yeah. I always tell. + +**Other:** **[19:08:40]** help me so I can have a conversation with them. + +**You:** **[19:08:42]** No. Like, we need it. You're like Do you does this person who's never logged in really need + +**Other:** **[19:08:44]** Right. + +**You:** **[19:08:47]** Okay. I'm gonna edit this. This is gonna be your Centene tab. + +**Other:** **[19:08:47]** Right. And then on top of that, they've been asked for people to get access who already have access, so they don't even know + +**You:** **[19:08:55]** Yeah. Yeah. Yeah. + +**Other:** **[19:08:59]** who does and doesn't. So I need some sort of report that can tell me. + +**You:** **[19:09:00]** Yeah. Yep. Alright. Alright. Edit. Client ID. Client name, client I gotta do ID. Gotta figure out which one it is. Alright. While we're back to philosophizing, what while we're waiting. How do you characterize what is a good account or an account that's like fine versus an account that's in trouble. + +**Other:** **[19:10:07]** Good question. So, I mean, I have regular contact with + +**You:** **[19:10:16]** Yep. + +**Other:** **[19:10:18]** all of my accounts. + +**You:** **[19:10:21]** So some of it's a feel. + +**Other:** **[19:10:22]** Yeah. So I would know if something was off just by that, but I mean, obviously, if they tell me point blank, + +**You:** **[19:10:32]** Right. + +**Other:** **[19:10:32]** hey. We're not happy about this. Hey. Whatever. Another one that's easy to identify is if we've recently had a pretty big issue. For them. That + +**You:** **[19:10:43]** Like, in a case, + +**Other:** **[19:10:44]** yeah. + +**You:** **[19:10:44]** Yeah. Yeah. Yeah. + +**Other:** **[19:10:46]** Mean, like, for I know your analytics, but a good example of for engagement, I had a customer where we didn't send messages for an entire quarter. And no one no one caught it. 
The customer caught it. + +**You:** **[19:10:57]** No. No one knew. Yeah. + +**Other:** **[19:10:59]** So for me, it's like, oh, yep. This is a this is a big risk, folks, because we were done. So that's that's an easy way Silence + +**You:** **[19:11:12]** Yeah. + +**Other:** **[19:11:12]** if I, like, get on meetings. And there's some customers that just don't talk. Right? They're they're happy and whatever, but I feel like you get on meetings and you're asking them questions and they can't answer questions or they're just really quiet, + +**You:** **[19:11:23]** We'll get back to you. Or yeah. I'm not sure. + +**Other:** **[19:11:26]** Yeah. + +**You:** **[19:11:28]** Oh, we gotta ask him about it. + +**Other:** **[19:11:28]** Not very, like, yeah, responsive or committal to anything. That's another key indicator that something's usually wrong. When we talk about, like, contracts, if they're I guess it goes back to the nonresponsive, but when they they get kind of, like, fidgety and weird about talking about contracts, What else? For a good account, I feel like you're having regular contact with them Your phone calls are productive. They are interested in knowing more, learning more, or they're content with what they have and + +**You:** **[19:12:13]** Yeah. + +**Other:** **[19:12:14]** and they're happy. + +**You:** **[19:12:15]** Right. They're like, + +**Other:** **[19:12:16]** And they express as much. Like, hey. We don't need any more, but + +**You:** **[19:12:19]** We will yeah. You guys are great. We love you. + +**Other:** **[19:12:20]** we feel great about what we have. Yeah. + +**You:** **[19:12:23]** Li like, let us but this goes back to enjoying you while we're + +**Other:** **[19:12:27]** Yeah. + +**You:** **[19:12:27]** doing our other jobs. + +**Other:** **[19:12:29]** Yep. Yep. + +**You:** **[19:12:31]** Know. Some + +**Other:** **[19:12:32]** So I felt like those are kind of the big the big things. 
I feel like most customers will be transparent but there are definitely warning signs. Like, what I mentioned. + +**You:** **[19:12:42]** Okay. Cool. Alright. Here's what I got. I got now here is you have active WellCare users, active you WellCare users who've never logged in. Active WellCare users with Centene Corp, + +**Other:** **[19:12:52]** And it's Okay. + +**You:** **[19:12:55]** and now you have active select users. + +**Other:** **[19:12:56]** Okay. Perfect. That is great. + +**You:** **[19:12:59]** Alright. And then now I'm going to + +**Other:** **[19:13:05]** And is this + +**You:** **[19:13:08]** Auto + +**Other:** **[19:13:09]** is this in DPI, like, when I log in? + +**You:** **[19:13:11]** No. I'm gonna email this to you. + +**Other:** **[19:13:12]** Okay. Gotcha. + +**You:** **[19:13:17]** Every month. + +**Other:** **[19:13:17]** K. + +**You:** **[19:13:19]** And I'm gonna make sure I get filter values. I'm gonna attach the files as or the results the files to results. I'm gonna have them send only attachments. No charts. That's fine. And gonna send this email now. + +**Other:** **[19:13:39]** K. + +**You:** **[19:13:40]** Then you'll let me know. Okay. + +**Other:** **[19:13:48]** Alright. + +**You:** **[19:13:51]** Done. I think it think it might come from either support or Metabase. + +**Other:** **[19:14:07]** K. I haven't gotten anything yet, but I'm guessing because there's files that attached, it'll take a sec. + +**You:** **[19:14:13]** Alright. Here's something I've been working on. + +**Other:** **[19:14:13]** K. + +**You:** **[19:14:16]** Because I have also been struggling with this. This is a big chart and all of these dots are + +**Other:** **[19:14:27]** K. + +**You:** **[19:14:28]** DPI customers. And so here's, like, CalOptima, They're looking pretty good. The way that chart works is it takes left to right is monthly active users. So like, seventy five. + +**Other:** **[19:14:50]** So to the right is larger. K. 
+ +**You:** **[19:14:53]** Yeah. And then yeah, the Calyto has got 46. And then you go into, like, Horizon Blue Cross Blue Shield's got one. + +**Other:** **[19:15:01]** Okay. + +**You:** **[19:15:02]** And then top section are people with upsell. Account or upsell opportunities in Salesforce. Middle is renewals and just like I think, like regular business, new business. And then the bottom section are ones with churn. Or downgrade opportunities. + +**Other:** **[19:15:24]** Okay. + +**You:** **[19:15:26]** What is this one? Amended scope ARR. Yeah. I guess this is, like, down. Yeah. And then let's do which one? Okay. Care first. My old. My old crew. Let's look at one here. VNS is new. Regional Blue Shield is also new. Would something like this, like this your triple s This also shows the Jira tickets. So, like, the quarterly model refresh, so the data ops, like, the the monthly refreshes, and then if any of them are blocked, or whatever. It also shows STARZ data. I don't know. This one doesn't have STARS data for some reason, but is something like this helpful + +**Other:** **[19:16:26]** And this would only be for analytics. Right? + +**You:** **[19:16:29]** Well, if this is helpful for analytics, we can try to, like, continue to expand and then just have different views of, like, only Ashley's accounts or like, whatever. We'd have to figure out what the right metrics are to, like, put them in these different zones. Like, + +**Other:** **[19:16:46]** Yes. + +**You:** **[19:16:47]** analytics is nice because there's at least, like, monthly active users, and that's, like, a an easy to tell metric of like, hey, if you have 46 people logging in, that's a good sign. If you have + +**Other:** **[19:16:59]** Yeah. + +**You:** **[19:16:59]** zero or one, bad sign. + +**Other:** **[19:17:03]** Okay. 
So what + +**You:** **[19:17:09]** Well, we're gonna + +**Other:** **[19:17:09]** if a Sam was + +**You:** **[19:17:11]** the way that I'm thinking about it, you're probably not the best. Well, I think you're the best example because we wanna make people more like you. But, like, what I'm thinking is not everybody is actually meeting with their customers regularly. + +**Other:** **[19:17:23]** Yeah. + +**You:** **[19:17:24]** And then my goal is that they don't ever log in to this, + +**Other:** **[19:17:25]** Yeah. + +**You:** **[19:17:28]** The system detects like, here's a churn risk because, like, the the monthly + +**Other:** **[19:17:31]** Yeah. + +**You:** **[19:17:34]** active users have dropped. Like, get in contact with this person immediately and it either makes like a Jira card for them and then we can track the Jira board. Like, the Jira board of like, churn risk accounts or like upsell check ins then there's just a big board and then you guys actually check that Jira board and it just gets assigned to like I don't even know. Whoever is someone who's not checking in with their accounts regularly. + +**Other:** **[19:17:59]** Can it be community instead of Jira? Because I know plan trying to get us completely out of Jira and + +**You:** **[19:18:06]** All in + +**Other:** **[19:18:06]** selfishly, I hate Jira. + +**You:** **[19:18:07]** hey. That's fine. That's fine. I can't I'm not allowed to leave Jira, + +**Other:** **[19:18:09]** So yeah. Right. Right. + +**You:** **[19:18:13]** but I can I can save you, save other people? + +**Other:** **[19:18:14]** But for a Sam, yeah. + +**You:** **[19:18:17]** Is it a Salesforce? Is it a commute there are there, like, cases that are basically internal only? I guess I could ask Amir, though. + +**Other:** **[19:18:21]** Yep. Yes. Yeah. There's internal only quesos + +**You:** **[19:18:27]** Smooth. + +**Other:** **[19:18:28]** There's just, like, a check mark box in community. 
So you can do internal, and then there's external facing ones. I think it would be helpful for two things. One, oh, one other question I have before I jump into that. How would it be getting updated? Like, is it automatic based on + +**You:** **[19:18:45]** That's what yeah. It's pulling Salesforce data + +**Other:** **[19:18:46]** okay. + +**You:** **[19:18:48]** It's pulling the DPI data. It's pulling the data, like, + +**Other:** **[19:18:51]** It's not like a Sam has to go in and update anything. + +**You:** **[19:18:53]** no. + +**Other:** **[19:18:54]** Okay. + +**You:** **[19:18:54]** The whole point is, like, we're doing this for Sam's and wanna do this for salespeople and basically say, like, + +**Other:** **[19:18:55]** So I think yeah. + +**You:** **[19:18:59]** you guys are already using Salesforce. This is just something that says, here is automated upsell identified opportunities. Go qualify this lead. Like, go like, here's somebody who has caps. They should have Haas. They have low Haas score. + +**Other:** **[19:19:09]** Yeah. + +**You:** **[19:19:12]** Talk to them about it. + +**Other:** **[19:19:13]** Okay. + +**You:** **[19:19:14]** Immediately. And then, like, + +**Other:** **[19:19:15]** Yeah. Okay. + +**You:** **[19:19:16]** they don't have to log in to that, but, like, it's helpful if you do. + +**Other:** **[19:19:16]** Right. So one thing I think would be really helpful is we as Sam's, constantly get asked questions about customers like, are they at risk for churn? Me your top three customers that are at risk, blah blah blah blah blah. It's like, go into Salesforce. But the problem is in Salesforce, like, they have to go into each account individually. Look at the opportunities, see, like, what the notes are in there. So it's very difficult for a leader to do that, so it's just easier to ask + +**You:** **[19:19:47]** Right. 
+ +**Other:** **[19:19:49]** a Sam, but it's exhausting from a Sam perspective when + +**You:** **[19:19:50]** Right. + +**Other:** **[19:19:52]** all I'm doing every week is giving you those notes + +**You:** **[19:19:54]** One what's the how do you figure out if like, how do you look at an account and their opportunities? + +**Other:** **[19:19:56]** and you're + +**You:** **[19:20:01]** Like, let's pick one. Here. Eleventh. No. + +**Other:** **[19:20:05]** want me to share my screen? + +**You:** **[19:20:06]** Here. Yeah. You share your screen. + +**Other:** **[19:20:10]** Let me share. Like, I'll show you. I mean, every here's a good one. So Centene corporate, here's the account. You go to opportunities. And then in here, like, there's a shit ton of opportunities. Right? Like, a leader can't go through these one by one to see what they say. So these two, like, I happen to enter in a churn risk because I do think there's a high probability they will churn. + +**You:** **[19:20:34]** Yeah. Yep. + +**Other:** **[19:20:37]** But, like, this is the renewal, and I have in the oh, sorry. Wrong renewal. Where in the hell is oh, right here. Here's the renewal. And I have in the notes here, like, at risk, met with + +**You:** **[19:20:53]** Yeah. Yeah. Yeah. Yeah. + +**Other:** **[19:20:54]** Centene, blah blah blah blah blah blah blah. So there's that. And then on top of that, I have the churn risk opportunity. So there's two places + +**You:** **[19:21:03]** Right. + +**Other:** **[19:21:05]** that this says this exact same thing. But I wanted to call it out. Like, hey. There is a churn risk for this lung. You also and I go and update these every single Monday. I update every single opportunity that I have + +**You:** **[19:21:17]** Right. + +**Other:** **[19:21:19]** with what's the latest and greatest update so that leadership has it, + +**You:** **[19:21:22]** And then they feel and then everyone still ask you. + +**Other:** **[19:21:23]** Yes. 
And then everyone still asks. + +**You:** **[19:21:24]** And then yeah. + +**Other:** **[19:21:27]** Like, today, I got a question today. Hey. Give me this. And I'm like, I I literally do this every single Monday. So it it's it's hard, but I get it. From a leader perspective, it's like, she's managing + +**You:** **[19:21:39]** Right. + +**Other:** **[19:21:39]** five of me, and we all have + +**You:** **[19:21:41]** Right. + +**Other:** **[19:21:42]** 12, 15 accounts. So she doesn't know. Right? So I get why she asks, but just frustrating from a Sam. And then another thing we do is, like, risk score history. So you can enter in risk scores + +**You:** **[19:21:52]** Yeah. + +**Other:** **[19:21:54]** on, like, where that's at. So that's another place + +**You:** **[19:21:57]** Yeah. Yeah. + +**Other:** **[19:21:58]** too that it could maybe pull information for your report. So I think from a leadership perspective, your thing could be very helpful. From a Sam, there's + +**You:** **[19:22:05]** Right. Yeah. Yeah. + +**Other:** **[19:22:07]** aspects of it that would be helpful. And then for sure from sales, it would be good. + +**You:** **[19:22:10]** Okay. Good. Okay. Sweet. Thank you. Has been super helpful. Okay. + +**Other:** **[19:22:16]** Yeah. Yeah. + +**You:** **[19:22:18]** I will, catch up with you later. + +**Other:** **[19:22:19]** Okay. K. Sounds good. Thanks. + +**You:** **[19:22:22]** You get the email? Did it actually work? + +**Other:** **[19:22:23]** Let me look really quick. Yeah. I did get it. + +**You:** **[19:22:27]** Alright. Check it later. + +**Other:** **[19:22:28]** Okay. Sounds good. + +**You:** **[19:22:29]** Alright. Later. + +**Other:** **[19:22:31]** K. Bye. + +**You:** **[19:22:32]** Bye. 
diff --git a/docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md b/docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md new file mode 100644 index 00000000..e7d565b9 --- /dev/null +++ b/docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md @@ -0,0 +1,718 @@ +--- +title: "Holly's departure and account management performance review with Krista" +id: 77647056-1119-4ab0-80f6-83b68d45074d +created_at: 2026-02-09T19:58:45.536Z +updated_at: 2026-02-09T20:27:50.197Z +source: granola +type: transcript +linked_note: 2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista.md +--- +# Holly's departure and account management performance review with Krista — Transcript + +**Other:** **[20:00:27]** Matt, did you go away? + +**You:** **[20:00:33]** Hello. Sorry. Hold on one sec. Okay. Okay. Hey. + +**Other:** **[20:00:50]** How you doing? + +**You:** **[20:00:51]** Good. How are you? + +**Other:** **[20:00:51]** It's this one. Great. Great. Great. Great. + +**You:** **[20:00:56]** Did you so one was two two ones for me. Was wanna check-in how you work because I I heard heard from a bird that Holly's not out or Holly's out. + +**Other:** **[20:01:08]** Let's correct. She's out. As of last Thursday. + +**You:** **[20:01:14]** How are you? How are you holding up? Alright. + +**Other:** **[20:01:17]** No. I got the news while I was away. Like, Jackie called me. And I was like, she was my proxy. So I was like, oh, shit. So, like, she would not call me. + +**You:** **[20:01:25]** Someone someone yeah. Like, she would not call you unless something blew up. + +**Other:** **[20:01:26]** So I was like, what? My clients are freaking ridiculous. I left in every like, I leave at a good spot. I'm like, I never, like, leave the reds. You know what I mean? 
I was like, how It's like, Norden. And then she sent me a text. She was like, Krista, you must call me right now. And I was like, oh, shit. And then she was like, Krista, we just got an email. And I was like, o m g. + +**You:** **[20:01:54]** So + +**Other:** **[20:01:55]** It was part of my quarter one predictions. I don't know if you know any back story, but I was just like, I don't know. I don't know how perception is with the other leaders. Like, if she plays nice and, like, is moving things forward, Like, I I the effectiveness wasn't coming across super strong, not just from, like, a personable, like, interpersonal, like, do I like working with her? + +**You:** **[20:02:15]** Yeah. Right. Right. Right. + +**Other:** **[20:02:17]** But I was like, I don't like, it seems like Kim is doing a lot of these, like, trying to piece together some of the things and then tackling, like, the bigger escalations, which maybe, to be fair, was just breaking up the the bandwidth of work. + +**You:** **[20:02:30]** Right. + +**Other:** **[20:02:31]** But then, like, with our 70% to plan quarter four, I was like, prior to her coming in, we were always hitting the 80% threshold. If not achieving 85% to plan 90% as of vertical of rev like, getting our attention. Then our renewals done. She comes in, and now we're 70%. Not saying, like, + +**You:** **[20:02:55]** Right now, yeah, it's + +**Other:** **[20:02:56]** she was to blame, but, like, if + +**You:** **[20:02:57]** brought in right. There's, like, + +**Other:** **[20:02:58]** if + +**You:** **[20:02:59]** yeah. + +**Other:** **[20:03:00]** if I'm a bigger, like, leader, I'm just gonna be like, what's cause and effect? + +**You:** **[20:03:02]** I could try it. Yeah. + +**Other:** **[20:03:05]** They were hitting 80%. You come in, and now there's a million germs? + +**You:** **[20:03:09]** Right. + +**Other:** **[20:03:09]** So I was just like, I know. 
I feel like things a little cutthroat and whether it was her her because of her or not, + +**You:** **[20:03:17]** Right. + +**Other:** **[20:03:19]** I feel like it was was risky. I felt like all through quarter one, it's just been like, I don't know what's gonna happen. + +**You:** **[20:03:24]** Right. + +**Other:** **[20:03:26]** And I feel like, yeah. But I don't know if something incidentally, like, it's never really just one thing. You know what I mean? Like, it's all unless, like, you I don't know. + +**You:** **[20:03:33]** What exactly is, like well, yeah. Like, + +**Other:** **[20:03:36]** Yeah. + +**You:** **[20:03:37]** one thing, like, becomes the reason. There's something that become the reason, but it was like and that's kinda what I took at least in my impression that I shared it with, like, at least and then I I don't know what it does, but, like, the people I talked to are Phil and Brian. And I was like, I was concerned about account management, + +**Other:** **[20:03:59]** Yeah. + +**You:** **[20:04:00]** for a while. Holly came + +**Other:** **[20:04:03]** Yeah. Because + +**You:** **[20:04:04]** and then, like, + +**Other:** **[20:04:04]** yeah, No. + +**You:** **[20:04:05]** my concern did not go away. Like, it actually, like, my my concern increased, and it was, like, already high. + +**Other:** **[20:04:09]** Yeah. + +**You:** **[20:04:13]** And I think, like, all that I all and I don't know anything is that it was, like, I think even if you're because you are always, like, you know, the people who make these decisions are two, three steps removed. + +**Other:** **[20:04:30]** Yeah. + +**You:** **[20:04:31]** Seeing her, and there can be very different impression of, like, her from one on one, but it's like, + +**Other:** **[20:04:35]** Yeah. + +**You:** **[20:04:35]** hey. People are leaving. On her team are leaving. 
+ +**Other:** **[20:04:39]** That like, either immediately bolted to other teams + +**You:** **[20:04:41]** The bad job In a bad job market. + +**Other:** **[20:04:44]** Yeah. People left. + +**You:** **[20:04:46]** Voluntary, like, + +**Other:** **[20:04:47]** Who she picked up on. + +**You:** **[20:04:49]** in a bad job market. + +**Other:** **[20:04:50]** Yeah. + +**You:** **[20:04:51]** It's not usual. True numbers, like, + +**Other:** **[20:04:55]** Yeah. + +**You:** **[20:04:57]** are bad. And then I I think + +**Other:** **[20:04:59]** And, like, even in meetings, again, I think her work like, I think the role as it is now is current is currently and I said that to Kim. I was like, + +**You:** **[20:05:09]** Right. Yeah. + +**Other:** **[20:05:11]** until there's more things structured, operationalized, + +**You:** **[20:05:14]** Right. + +**Other:** **[20:05:15]** I I think for one person, sucks. + +**You:** **[20:05:18]** Right. + +**Other:** **[20:05:18]** But in meetings, like, even with, like, other leaders, like, she'd be multitasking. And, like, you tell and not, like, driving things forward or coming at it from, like, a directive. She'd just be like, oh, it's in their plate. Like, from my perspective, I have nothing to do. And it's like, that's not I don't know how that + +**You:** **[20:05:37]** Right. + +**Other:** **[20:05:38]** others perceived it. Like, I can only assume they were just like, not really present and, like, willing to learn and + +**You:** **[20:05:46]** Right. + +**Other:** **[20:05:47]** yeah. + +**You:** **[20:05:47]** Yeah. No. I yeah. I did not, Yeah. I got that. And it I I I just was sorry that it, like, took so long, and then it's sort of like it does feel, like, a little bit, like, hey. Good thing. Like, it should not have persisted. It's not persisting anymore. But then it's, like, also, like, hey. Like, kinda like two steps. + +**Other:** **[20:06:10]** Yeah. Yeah. + +**You:** **[20:06:11]** Like, okay. Well, Yeah. 
We're not. Like, it like, we still need help. + +**Other:** **[20:06:17]** Yeah. We still need that person. I don't think it can be Kim. + +**You:** **[20:06:17]** Right. + +**Other:** **[20:06:21]** Although, in her limited capacity, I have found to be more effective + +**You:** **[20:06:26]** Right now. Yeah. Exactly. + +**Other:** **[20:06:27]** Like, when I needed help, + +**You:** **[20:06:27]** Right. + +**Other:** **[20:06:31]** and, like, kept things like, even though we were experiencing the churns, like, our meetings were, like, the tone of it was just like, you're a bad kid, problem child. You didn't write this note, and I need it this way. And then Kim's style is like, hey. You got a million dollars coming up. Let me know how it's going. Like, + +**You:** **[20:06:49]** Yeah. Yeah. Right. + +**Other:** **[20:06:50]** and just open floor and keeping it, like, light. Although, like, we get it. We have to + +**You:** **[20:06:51]** Right. + +**Other:** **[20:06:55]** get numbers. We gotta get things to sign. But it was just, like, a completely different way + +**You:** **[20:06:56]** Yeah. + +**Other:** **[20:07:00]** of managing + +**You:** **[20:07:01]** Right. + +**Other:** **[20:07:02]** And I can't I I don't think the negative tone, like, motivated anyone to, like, + +**You:** **[20:07:07]** Right. Exactly. Right. Yeah. Exactly. That + +**Other:** **[20:07:09]** to go above and beyond. You know? You're just like, oh, they're they're churning. + +**You:** **[20:07:10]** right. Right. What part yeah. It's one of those things you put, like, an I've been + +**Other:** **[20:07:14]** Not gonna try to save it. 
+ +**You:** **[20:07:19]** different context, but what I've seen is those things are taught like, they are those are rarely are like, it's and I again, I've been in some pretty bad work environments, but usually, even in bad work environments, like, types of grading personalities are tolerated like + +**Other:** **[20:07:44]** If you hear + +**You:** **[20:07:45]** Right. They're yeah. Exactly. They're they're + +**Other:** **[20:07:46]** You gotta there's + +**You:** **[20:07:47]** correlated assuming, like, above average performance or, like you know? And that, like, it's like, alright. Like, it's not good, but it's like, okay. But it's like, as soon as the something doesn't it's like, hey. That's not, like, that's not working. And, like, are quitting and the numbers aren't where they're supposed to be. Like so it's it you kinda put a target on your back. You know, in those types of situations. + +**Other:** **[20:08:11]** Yeah. Yeah. So I felt that way for a while, but then I'm like, is this just my personal, like, + +**You:** **[20:08:19]** No. + +**Other:** **[20:08:20]** I would love to see it change. And yeah, I put it down in my predictions. I thought it would be, like, March, like, once our fiscals went live. Like, I thought they would give it a little bit more + +**You:** **[20:08:30]** Yeah. + +**Other:** **[20:08:33]** time. But they promoted Beth and Jackie, and then the following two days, K. We don't need Holly anymore. We put some people in place to kind of + +**You:** **[20:08:42]** Oh, so who is it? I didn't see who it so + +**Other:** **[20:08:43]** Jackie is promoted. She two people report to her. + +**You:** **[20:08:48]** okay. + +**Other:** **[20:08:50]** Might be more now that Holly's not there, and they haven't backed + +**You:** **[20:08:50]** Right. + +**Other:** **[20:08:53]** obviously, backfilled the role. And then Beth McDaniel, she is new. + +**You:** **[20:08:55]** Oh, yeah. 
+ +**Other:** **[20:08:56]** But she's more tenured + +**You:** **[20:08:57]** Yeah. + +**Other:** **[20:08:58]** Definitely, like, a bet way better. I mean, an amazing personality. So and, like, + +**You:** **[20:09:00]** Yeah. + +**Other:** **[20:09:04]** trying to learn, I've no I'm reporting to her. They asked me today, which is funny because, like, Holly was just just like, when she told me, she was like, I just wanna let you know because reporting changes. And then I had a meeting with Kim, and she was like, you know, like, we've had con conversations leadership wise and, like, you know, I think you're very buttoned up. You've been here a while. + +**You:** **[20:09:21]** I think you're great. + +**Other:** **[20:09:22]** You you can successful. Like, would you wanna be a people manager? And I was like, thank you for at least asking me because that is, like, nice. But and then I was + +**You:** **[20:09:26]** I think so. + +**Other:** **[20:09:31]** like, for just full transparency, like, I don't have any interest in it. I really like my life. + +**You:** **[20:09:32]** Right. Yeah. + +**Other:** **[20:09:36]** Now. And I speak holding that much, like, responsibility freaks me out. And I don't wanna have meetings with leaders all day. I like just meeting with clients and being an individual contributor. But I was like, I appreciate, like, it is nice to be asked + +**You:** **[20:09:48]** Yeah. No. That's the difference between yeah. + +**Other:** **[20:09:50]** and to be considered. But Holly was just, like, made it seem like mean, I got, like, what, meets expectations essentially for my + +**You:** **[20:09:57]** Well, I think yeah. Well, I think because it was, like, and usual usual at least, I don't know. Again, + +**Other:** **[20:10:04]** Different context. + +**You:** **[20:10:05]** like those people tend to be paranoid. Right? And you don't like you're not interested in promoting, like, you're like, I need Chris. 
Like, I can't, like, + +**Other:** **[20:10:14]** Yeah. Yeah. + +**You:** **[20:10:15]** I can't have this promoted because, a, she's, a threat because she'd be good. And then, like, what are they gonna think of me? And then also, like, I don't usually, those people are + +**Other:** **[20:10:25]** You need aces in their places. + +**You:** **[20:10:26]** Yeah. Exactly. Yeah. Exactly. + +**Other:** **[20:10:27]** Yeah. Yeah. So I feel good. And then they had they might promote someone else. They said they might have three managers at that level, and I just told them Hala. But Hala's kind of could could be a different story now that Hala's out of the picture. + +**You:** **[20:10:40]** Right. + +**Other:** **[20:10:44]** But Holly was like, I'm just interested in making money. Like, I don't wanna I don't wanna be leader. I would just wanna focus on, like, renewals because that's kinda how we make + +**You:** **[20:10:49]** Yeah. Right. Right. + +**Other:** **[20:10:52]** we that's how we make our extra money is, like, hitting renewals. + +**You:** **[20:10:53]** Yeah. Right. Exactly. No. Yeah. That makes that makes sense. + +**Other:** **[20:10:59]** Yeah. So crazy crazy that occurred on my birthday. Like, I cannot + +**You:** **[20:11:01]** Birthday. Oh, yeah. + +**Other:** **[20:11:05]** My phone was blowing up. Like, my friends because I was visiting friends, they were like I was like, + +**You:** **[20:11:11]** Because when I went out to because you were there because I went up to Boston. I was there when Monica was gone, and Brian, Monica, and I went out to dinner. And that was her last night. And we're because we're, like, we were talking about that. + +**Other:** **[20:11:21]** Yes. + +**You:** **[20:11:25]** And Brian told me a little bit ago, and then he's like, hold on. Let me text, like, Monica. Should text her. And he he said, like like, he texted her, and, like, as soon as he put his phone down, was like dot dot dot. 
Like, she was, like, right off the response back. + +**Other:** **[20:11:42]** Yeah. + +**You:** **[20:11:44]** So + +**Other:** **[20:11:45]** It's crazy. + +**You:** **[20:11:47]** yeah. It's nuts. Speaking of + +**Other:** **[20:11:51]** Alice. + +**You:** **[20:11:53]** and I'll I'll go No. No. I mean, it's this is all, like, hopefully, a good thing. But I have something I'm curious on your opinion on. This is totally different, totally random. + +**Other:** **[20:12:00]** Okay. + +**You:** **[20:12:03]** Okay. Alright. So I was when I was trying to figure out account management and because I was like, I need help. Like, I know Krista's got her stuff together, but, I don't know what the heck is going on anywhere else. And, like, I'm + +**Other:** **[20:12:15]** Mhmm. + +**You:** **[20:12:18]** nervous. And so I came up with something that I was like, more just, like, for DPI, but this is bay all these dots are accounts. + +**Other:** **[20:12:29]** Oh, + +**You:** **[20:12:30]** And so this is, like, Molina And it shows them on the perspective of, like, whether they're healthy, like, sure like, whether they have renewals, like or upsells, + +**Other:** **[20:12:43]** Mhmm. + +**You:** **[20:12:45]** in this top top section. If you have churns, you're in the bottom section. And then left to right is monthly active users in DPI. + +**Other:** **[20:12:54]** Oh, yeah. Yeah. + +**You:** **[20:12:54]** So you kinda tell, like, where people are. And then, the I also brought in so these are all the, like, opportunities that are in Salesforce. And then the work is these are all the Jira tickets across the client analytics board and the DPI or in the monthly refreshes. You could kinda see, like, what's going on, like, what is happening. And then STARZ data. So, like, where are they good, where they need help. So you kinda + +**Other:** **[20:13:29]** Yeah. 
+ +**You:** **[20:13:32]** know what's going on when you're talking to them and then being able to see the breakdown, like, by contract. And see which ones are good or bad. + +**Other:** **[20:13:41]** How how are the stars being? Is it just caps ratings? What is + +**You:** **[20:13:46]** These are, yeah, these are basically all the I gotta figure out. These are + +**Other:** **[20:13:51]** Are those their master cap score? + +**You:** **[20:13:52]** these are overall these are contracts or overall ratings. + +**Other:** **[20:13:53]** Mhmm. + +**You:** **[20:13:57]** Then these are, like, the average of them. I think so. But I gotta figure this. I gotta figure out. How this exactly works. But, yeah, that's a good point. I should I need to figure that out. But This was and then I guess, yeah, the other thing that was going on was basically try to start to identify, like, opportunities Like, hey. Should we talk to them about Haas? Like, you know, if they don't have Haas, talk to them about Haas. If they do have Haas, like, talk about them inflection points or something else. So it's all about, like, also where we should be thinking about opportunities with them. But wanted to get your perspective from, like, just this like, a Sam perspective. Does this let me actually see. Let's see. Who Highmark actually has a chur, and this is out of the ching. So they have + +**Other:** **[20:15:03]** What was the churn? + +**You:** **[20:15:04]** renewal down they have downgrade. + +**Other:** **[20:15:06]** Oh, yeah. Because that was based on number lives. Like, sorry. That's that's their business. Like, that's + +**You:** **[20:15:13]** Right. + +**Other:** **[20:15:14]** it's not like + +**You:** **[20:15:15]** So this is not a this is not a really good thing. Like, they're actually doing + +**Other:** **[20:15:16]** Highmark came and said I only want yeah. + +**You:** **[20:15:20]** well. Like, this is not, like, okay. I'm gonna go too. 
+ +**Other:** **[20:15:23]** Oh, yeah. Yeah. I see what you're getting at. Like but + +**You:** **[20:15:24]** It's like, + +**Other:** **[20:15:26]** it's not like the client said we have + +**You:** **[20:15:26]** do you + +**Other:** **[20:15:28]** 200,000 lives, and now we only want a 100,000 of them. It's like, no. They're base population + +**You:** **[20:15:34]** It's going down. + +**Other:** **[20:15:35]** external outside of us is going down. + +**You:** **[20:15:37]** Right. + +**Other:** **[20:15:38]** So it's more like if they remove the contract or a line of business, like, that level + +**You:** **[20:15:42]** That's, like, from, like, over above, like, + +**Other:** **[20:15:43]** like, a downgrade, But + +**You:** **[20:15:46]** like, 20% of the contract value of, like, + +**Other:** **[20:15:48]** yeah, + +**You:** **[20:15:50]** total overall value or thing. It's like yeah. There's there should be, like, some limiters in there. + +**Other:** **[20:15:53]** Because Memorize also can go on the opposite Sometimes we get a upsell, but it's not that no one had to work really hard for that. It would + +**You:** **[20:16:04]** Right. Right. Right. + +**Other:** **[20:16:05]** like, it's organic, but it's not based on, like, a driver of us as a vendor. It's just they have more members now. + +**You:** **[20:16:11]** Right. And then, yeah, then I have, like so then Medical Mutual of Omaha + +**Other:** **[20:16:17]** Mhmm. + +**You:** **[20:16:18]** of Ohio. Sorry. What do they have? They had a downgrade interesting, for twenty five, twenty six. I don't know what that one is. But I was trying to do, like, and then Geisinger and, obviously, like, these Mercy Health New England, Florida Blue, like, Providence, and Elevate like, Elevance I don't know. Do you agree with, like, them being in the, like, higher risk perspective? + +**Other:** **[20:16:51]** Yeah. I feel like they're mix and match. And they'll change things on a whim. 
Like, + +**You:** **[20:16:56]** Right. + +**Other:** **[20:16:58]** if you have 10 programs now, yeah, they're gonna renew some of them, and they're gonna get rid of some, and they're gonna get new ones. Know what I mean? It's just always changing. But they're consistent for, like, I'd say within maybe, like, the sixtieth percentage of their total revenue. Like, that's pretty much always gonna renew. But some of it is pretty variable just because they'll be like, + +**You:** **[20:17:19]** Right. They're all good. I like new things. Yeah. + +**Other:** **[20:17:22]** no. Don't need this anymore. So they're not a risk client. Like, they they've even said, like, they could not find them platform, at least on the engagement side. It does what we do. They like it. They're gonna keep using it, but it's the programs themselves that are variable. + +**You:** **[20:17:37]** Right. + +**Other:** **[20:17:39]** Like, what are they gonna do this year? + +**You:** **[20:17:41]** Yep. No. That that's helpful. And then I have, + +**Other:** **[20:17:44]** But this is a good like, I like where you're going. I cannot believe you, of all people, created it. I mean, I can, but it's, like, not your like, this is what the director of customer success should have been partnering with with ops or someone to create. + +**You:** **[20:17:55]** No. Yeah. Feel like yeah. This is this is kind of like, also, like, do you you kinda get closer and then you're like, I think I actually can, like, do some of this. And then it's also helpful because then you get to show it to people and they're like, oh, like, yeah, that makes sense or, like, that doesn't make sense. + +**Other:** **[20:18:12]** Yeah. + +**You:** **[20:18:14]** But I would say, like, would + +**Other:** **[20:18:18]** It's hitting, like, 80% of what you need. + +**You:** **[20:18:20]** Yeah. Exactly. 
And then I + +**Other:** **[20:18:21]** If you're just thinking of DPI clients, + +**You:** **[20:18:24]** well, this is basically right now because I don't have a good + +**Other:** **[20:18:26]** analytics. + +**You:** **[20:18:28]** I don't know if other other people might have different metrics. Like, monthly average users doesn't make sense for engagement. Like, I guess it might be, like, number of programs or something. Or, like, the quality of their programs. But because, like, I didn't have, like, a good matrix kinda, like, a two by two to, like, put them up against. And I also well, I I care most about DPI, obviously. But no. + +**Other:** **[20:18:53]** Yeah. + +**You:** **[20:18:54]** I care about the enterprise. But + +**Other:** **[20:18:54]** I mean, although analytics + +**You:** **[20:18:56]** then I + +**Other:** **[20:18:57]** is a smaller amount of revenue, it is the most consistent. Like, + +**You:** **[20:19:00]** Yeah. No. Exactly. And I think well, then and then I was wondering about this because do you agree with this? Like, basically, like, and I think some of the metrics reported, but, basically, the more you're the more people are using it, the less likely you are to churn. Like, in general. + +**Other:** **[20:19:15]** Yes. + +**You:** **[20:19:17]** And I guess the question, like, + +**Other:** **[20:19:20]** If it's more sticky, + +**You:** **[20:19:20]** yeah. + +**Other:** **[20:19:22]** they've adopted it more. It meets + +**You:** **[20:19:24]** And then that's the question of, like, hey. + +**Other:** **[20:19:25]** 80% of what they need in a tool. It'll never be per like, + +**You:** **[20:19:28]** No. And it's more like, how do you move + +**Other:** **[20:19:29]** every time we train a call team, they're like, well, we have this system. And I'm like, we're never gonna be that system. + +**You:** **[20:19:33]** Right. Exactly. Exactly. 
+ +**Other:** **[20:19:34]** Like, + +**You:** **[20:19:36]** I think it's, like, just moving it. Like, how do we move people from, like, this left side over to the right side more? Like, there's always gonna be ups and downs, it'll be different path. But, like, at least now you kind of like, hey. We know + +**Other:** **[20:19:49]** Yeah. + +**You:** **[20:19:50]** like, this is kinda what we're shooting for. + +**Other:** **[20:19:51]** Or, you know, as a Sam, there are no users are logging in. I should try to schedule, like, once + +**You:** **[20:19:57]** Right. Exactly. Like, + +**Other:** **[20:20:01]** you have a new release, set up time with + +**You:** **[20:20:02]** right. + +**Other:** **[20:20:03]** product or even leave it yourself. Like, show them the new feature. + +**You:** **[20:20:06]** Exact + +**Other:** **[20:20:07]** Connect it with something they've mentioned in a meeting before. + +**You:** **[20:20:07]** Yeah. Right. + +**Other:** **[20:20:10]** Offer trainings, + +**You:** **[20:20:11]** Okay. + +**Other:** **[20:20:13]** send them things. I know. Yeah. That's excellent. + +**You:** **[20:20:17]** And then so who is who it's like, Jackie and Beth. Probably would be more interested in this sort of stuff you think or at least + +**Other:** **[20:20:24]** Pretty yeah. In the current structure, yes. + +**You:** **[20:20:29]** alright. I'll set up time with + +**Other:** **[20:20:30]** They would be the ones to take it, but you can always run it by me. + +**You:** **[20:20:34]** Obviously, no. Yeah. That's, exactly what I did. + +**Other:** **[20:20:35]** Yeah. Because we have this meeting reoccurring. + +**You:** **[20:20:38]** Exactly. Exactly. Alright. Sweet. I will go work on that, and I wonder what the the Jira synced a while ago. That's interesting. But, alright. That's what I got. + +**Other:** **[20:20:52]** Yeah. Oh, also, like, you should I you saw it today on the HXI. I'm trying to train my clients on community. 
Like, you need to choose the right things. + +**You:** **[20:21:04]** I you're doing a really good job with that. + +**Other:** **[20:21:06]** Thank you. I do think some improvements on the triaging because I'll notice, like, someone will I and I'm like, those people I could be the person and get it better triage sometimes. + +**You:** **[20:21:15]** Yeah. And that I know. I know. That's why I'm I'm like, + +**Other:** **[20:21:19]** And I won't know about it. + +**You:** **[20:21:21]** this is, like, one of the things where they're like, they took it away, and I'm like, alright. Like, + +**Other:** **[20:21:21]** If you until you + +**You:** **[20:21:25]** you're running it. And then, + +**Other:** **[20:21:27]** Yeah. Because it's like, does what's his name? + +**You:** **[20:21:31]** Mark Yeah. I know. + +**Other:** **[20:21:32]** Mark? Does Mark know he needs to create tickets on other boards? Like, + +**You:** **[20:21:33]** Yeah. I think I think + +**Other:** **[20:21:36]** I all I see, and I'm I think he's new. + +**You:** **[20:21:39]** yeah. + +**Other:** **[20:21:39]** He just replies, like, to the client, like, this is being worked on. + +**You:** **[20:21:42]** Right. + +**Other:** **[20:21:43]** And then there's that rule for five days, no response on the ticket. So the + +**You:** **[20:21:43]** Yeah. Yeah. Yeah. + +**Other:** **[20:21:47]** not gonna respond. Then it auto closes, and I'm like, + +**You:** **[20:21:48]** What are you doing? Yeah. I know. I know. That's what + +**Other:** **[20:21:50]** no one took the steps to + +**You:** **[20:21:52]** I think that's what, like, Sunny yeah. + +**Other:** **[20:21:53]** it needs + +**You:** **[20:21:54]** I'm gonna I'll always say it to, like, Silee and because I have time with them. + +**Other:** **[20:22:00]** Yeah. + +**You:** **[20:22:01]** And let me say, like, hey. Alright. Hey. At Sai Lee and Sunny. 
+ +**Other:** **[20:22:06]** Maybe there's more options needed in those drop downs that then gets it + +**You:** **[20:22:10]** Yeah. Wanted to + +**Other:** **[20:22:11]** triaged. There's, like, DPI enhancement, DPI bug, I don't we'd have to look at previous types of tickets we used to get on the support side or at least you know, that channel we used to have, and we'd be like, issue. Issue. + +**You:** **[20:22:24]** I know. Yeah. Exactly. + +**Other:** **[20:22:27]** Classify them and give the the clients more drop downs. + +**You:** **[20:22:29]** That's what we did. And that's what yeah. That's what but, like and then but then they're like, no. This the came in. They're like, these are the + +**Other:** **[20:22:35]** Yeah. These are the works the work streams. + +**You:** **[20:22:36]** you get them. Right. + +**Other:** **[20:22:40]** I'm like, but not everything fits in there. + +**You:** **[20:22:42]** They're like, they + +**Other:** **[20:22:42]** And then simple things like a DPI user, I'm like, that's + +**You:** **[20:22:43]** yeah. + +**Other:** **[20:22:45]** should be three days max. I can go in and do it right now. + +**You:** **[20:22:48]** I know. That's why and it's, like, it's, like, the bad thing of, like, that's when did did you put it? Or yeah. It's the chatter. + +**Other:** **[20:23:00]** Yeah. So my only one right now is that + +**You:** **[20:23:02]** There you go. + +**Other:** **[20:23:03]** yeah. Because now I'm like, I'm just gonna double up. I'm gonna, like, every case, I'm gonna create a Teams thread just so people know + +**You:** **[20:23:07]** I know. Exactly. We're + +**Other:** **[20:23:10]** that it's there. + +**You:** **[20:23:11]** yeah. Exactly. Seems like + +**Other:** **[20:23:12]** Yeah. The + +**You:** **[20:23:16]** DPI tickets are being are not being triaged properly, but then they get auto closed after five days. + +**Other:** **[20:23:29]** Yeah. 
Like or an example because people kept asking about that, that you know, the for Blue Shield California downloading the segment, they got + +**You:** **[20:23:39]** So I + +**Other:** **[20:23:39]** thrown an error. + +**You:** **[20:23:40]** so I I I figured + +**Other:** **[20:23:41]** But that was that was set to auto close. And I just came back from PTO + +**You:** **[20:23:42]** I Right. Yeah. Yeah. Yeah. I just what the what the f? + +**Other:** **[20:23:45]** caught it, changed the status. Yeah. + +**You:** **[20:23:49]** And that I never heard it. Like, I didn't hear about it. And I didn't so + +**Other:** **[20:23:51]** Because just since, I feel like they're just, like, + +**You:** **[20:23:53]** and that one + +**Other:** **[20:23:56]** well, + +**You:** **[20:23:56]** I'm really hoping + +**Other:** **[20:23:57]** someone reaches back out. + +**You:** **[20:23:57]** I we kinda also well, yeah, then it's, like, kinda lucky time You gotta take what you can get. What's he gonna say? Shoot. I was able to figure out what's going on. There was a problem with exporting for dynamic columns. That they fixed, and then I think that accidentally inadvertently broke something else. We also don't have the best QA. We don't have good QA. Like, in general, and it's like it sucks. No. It no. It's you and it's me. And then it's like, I'm and I have been like I'm like, there's a reason I have my job and not someone else's job. In QA because I'm not good at it. Like, I need help. And + +**Other:** **[20:24:32]** QA is me. Or the client? + +**You:** **[20:24:48]** it's we got, like, some QA person, but they're not + +**Other:** **[20:24:50]** You gotta learn that it takes a while. + +**You:** **[20:24:51]** It's not great. And they're they're offshore. Looking offshore, and they so they can only do it in in test. So they don't have access to staging. They don't + +**Other:** **[20:25:01]** Oh, yeah. Because a lot of clients, they can't have their data. 
+ +**You:** **[20:25:03]** all the problems happen in real data. + +**Other:** **[20:25:05]** In prod. But a lot of clients, Blue Shield included, they have a clause in their SOW that + +**You:** **[20:25:10]** Right. Exactly. + +**Other:** **[20:25:12]** no offshore folks are allowed to work on it. + +**You:** **[20:25:14]** Exactly. But that's why it's because no so no offshore people have access to any PHI because + +**Other:** **[20:25:15]** Yeah. + +**You:** **[20:25:19]** you can't, like, delineate between and so it sucks. So yeah. Anyway, + +**Other:** **[20:25:25]** You know what? Just move me over. + +**You:** **[20:25:26]** it's my + +**Other:** **[20:25:28]** I I think at this point, I + +**You:** **[20:25:28]** it's + +**Other:** **[20:25:30]** if and I would leave the Callaway Management as whole. But I could be the provider scorecard's project manager I could also do QA. + +**You:** **[20:25:39]** You wanted me like, I mean, if you want to, like, there because there's, like, this whole delivery + +**Other:** **[20:25:39]** In product all the time. You can match my base salary + +**You:** **[20:25:45]** Yeah. There's a whole delivery org. I don't know. Do you wanna hang out with engineers though? I don't know how much + +**Other:** **[20:25:50]** Yeah. Because I know you have to be very explicit with them. + +**You:** **[20:25:51]** like, yeah. + +**Other:** **[20:25:54]** Actually, I'd ask Keana if she thinks I interact with the engineer as well. Because I'm always like, + +**You:** **[20:25:58]** I think you do because, like, a lot of it is it's having you have to be comfortable being, like, why am I the only person talking in this situation all the time? They're like, oh, yeah. We + +**Other:** **[20:26:07]** Yeah. + +**You:** **[20:26:08]** don't usually talk. Like yeah. So and then you have to be, yeah, very explicit. I mean, if you want the delivery role, + +**Other:** **[20:26:13]** Is it posted? 
+ +**You:** **[20:26:14]** I'm sure like, that that I don't think I don't know. It's in Arrow Koala's role, but Sunny or not Manju. Reached out to me asking about who, like, I didn't really know who that was, but, I mean, if you want, + +**Other:** **[20:26:32]** I'd be interested in reading it. Like, + +**You:** **[20:26:34]** alright. + +**Other:** **[20:26:36]** it'd probably be, like, a six month wait. + +**You:** **[20:26:36]** I was just saying yeah. Because I think they + +**Other:** **[20:26:41]** I'd love to leave, Clydes. And just be the we, like, also could go to a + +**You:** **[20:26:45]** it would + +**Other:** **[20:26:47]** client meeting and help someone out, like an account manager to talk about things. Like, could still be hybrid. But I would love Not that I'm unhappy in my current role, especially given all that just occurred in the last week. + +**You:** **[20:26:59]** Right. + +**Other:** **[20:27:01]** But I've I've constantly said, like, Zoey, should I just be the PM for this? Because I'm the only one who seems to get it. + +**You:** **[20:27:05]** I know. Exactly. + +**Other:** **[20:27:10]** Yeah. Okay. + +**You:** **[20:27:12]** On. I'm + +**Other:** **[20:27:12]** That's cool. + +**You:** **[20:27:13]** get it for at at Krista Schindler. Okay. Alright. I'll figure it out. + +**Other:** **[20:27:35]** Okay. Cool. + +**You:** **[20:27:36]** You rock. I'll talk to you later. + +**Other:** **[20:27:39]** Alright. See you, Matt. + +**You:** **[20:27:40]** Alright. Bye. 
From 1f9c52c449dd2b50ef195abfbf8ca7850a0a75c0 Mon Sep 17 00:00:00 2001 From: Claude Date: Fri, 13 Feb 2026 14:13:29 +0000 Subject: [PATCH 04/13] Add brainstorm for user research integration into brainstorm and plan workflows MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Explores 16 integration ideas across 5 tiers, narrows to 6 items in a phased approach: - Phase 1: Wire user-research-analyst into brainstorm Phase 1.1 and plan Step 1 - Phase 2: Deeper brainstorm integrations (opportunity-driven initiation, research-informed questions, persona-grounded evaluation, evidence in capture docs) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude https://claude.ai/code/session_014DYcTSd4s126qgbtYiXaVA --- ...esearch-workflow-integration-brainstorm.md | 184 ++++++++++++++++++ 1 file changed, 184 insertions(+) create mode 100644 docs/brainstorms/2026-02-13-user-research-workflow-integration-brainstorm.md diff --git a/docs/brainstorms/2026-02-13-user-research-workflow-integration-brainstorm.md b/docs/brainstorms/2026-02-13-user-research-workflow-integration-brainstorm.md new file mode 100644 index 00000000..4981d560 --- /dev/null +++ b/docs/brainstorms/2026-02-13-user-research-workflow-integration-brainstorm.md @@ -0,0 +1,184 @@ +--- +date: 2026-02-13 +topic: user-research-workflow-integration +--- + +# User Research Integration into Brainstorm & Plan Workflows + +## What We're Building + +Wire user research artifacts (personas, interview insights, opportunities) into the brainstorm and plan workflows so that product decisions are grounded in user evidence rather than assumptions. + +Today, `/workflows:research` produces structured artifacts in `docs/research/` (personas, interview snapshots, research plans) and the `user-research-analyst` agent knows how to search them — but nothing in brainstorm or plan actually calls it. 
The research workflow hands off to brainstorm, but brainstorm doesn't consume the output. This integration closes that gap.
+
+## Why This Approach
+
+We considered three scoping strategies:
+
+1. **All at once** — wire basics + deeper integrations + plan + work in a single pass. Rejected: too broad, hard to validate incrementally.
+2. **Brainstorm-only** — focus exclusively on brainstorm, defer plan. Rejected: the plan wiring is trivial and documented as a TODO already.
+3. **Phased approach** (chosen) — Phase 1 wires the basics into both brainstorm and plan. Phase 2 layers in deeper brainstorm integrations. This lets us ship value quickly and validate each layer before adding the next.
+
+## Key Decisions
+
+- **Phased delivery**: Phase 1 (basic wiring) then Phase 2 (deeper brainstorm integrations)
+- **Brainstorm + Plan first**: These are where research evidence has the highest leverage. Work workflow integration deferred.
+- **Parallel execution**: `user-research-analyst` runs in parallel with existing agents — no serial bottleneck added
+- **Graceful degradation**: When `docs/research/` is empty, the agent returns a note suggesting `/workflows:research` — no errors, no blocking
+
+## Phase 1: Basic Wiring
+
+### 1a. Wire `user-research-analyst` into Brainstorm Phase 1.1
+
+**Current state:** Brainstorm Phase 1.1 runs only `repo-research-analyst`.
+
+**Change:** Add `user-research-analyst` as a parallel agent in Phase 1.1. The brainstorm would run:
+
+```
+- Task repo-research-analyst("Understand existing patterns related to: <topic>")
+- Task user-research-analyst("Surface research relevant to: <topic>")
+```
+
+**What this unlocks:** Before the collaborative dialogue begins, the brainstormer knows which personas are relevant, what opportunities exist, and what pain points users have expressed. The "understand the idea" conversation becomes evidence-informed.
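+
+As an illustrative sketch (the headings, persona names, and insight details here are invented, not a prescribed format), the pre-dialogue summary might look like:
+
+```
+## User Research Context
+- Relevant personas: "Account Manager" (daily dashboard user)
+- Key pain point: interviews describe manual export workarounds
+- Research gaps: no coverage yet of admin users
+```
+
+The exact shape can follow whatever output format the `user-research-analyst` agent defines; the point is that Phase 1.2 opens with shared, evidence-backed context rather than assumptions.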
+ +**Output flow:** Present a brief summary of relevant research findings (personas, key insights, research gaps) before starting the collaborative dialogue in Phase 1.2. Even without the deeper Phase 2 integrations, this gives the user and the brainstorm process shared context from real user evidence. + +**Files to modify:** +- `plugins/compound-engineering/commands/workflows/brainstorm.md` — Phase 1.1 +- `plugins/compound-engineering/agents/research/user-research-analyst.md` — remove "to be wired in PR 2" note + +### 1b. Wire `user-research-analyst` into Plan Step 1 + +**Current state:** Plan Step 1 runs `repo-research-analyst` + `learnings-researcher` in parallel. + +**Change:** Add `user-research-analyst` as a third parallel agent: + +``` +- Task repo-research-analyst(feature_description) +- Task learnings-researcher(feature_description) +- Task user-research-analyst(feature_description) +``` + +**Step 1.6 (Consolidate Research) update:** Add "User Research Findings" as a consolidation category alongside repo patterns and institutional learnings. Structure: + +- Relevant personas and their relationship to this feature +- Key insights and quotes from interviews +- Research gaps (areas where coverage is thin) + +**Files to modify:** +- `plugins/compound-engineering/commands/workflows/plan.md` — Step 1 and Step 1.6 +- `plugins/compound-engineering/agents/research/user-research-analyst.md` — remove "to be wired in PR 2" note + +## Phase 2: Deeper Brainstorm Integrations + +### 2a. Opportunity-Driven Brainstorm Initiation + +**When:** Brainstorm starts with no feature description or a vague one. + +**Behavior:** Check persona opportunity tables for unaddressed opportunities. Present them as starting points: + +> "Your research has identified these opportunities: +> 1. [Persona X] needs faster data export (high frequency, low satisfaction) +> 2. [Persona Y] struggles with team collaboration features +> 3. 
[Persona Z] wants better mobile support +> +> Would you like to explore one of these, or describe something else?" + +This flips brainstorming from "what should we build?" to "your users told you what they need." + +**Dependency:** Requires persona opportunity tables to be populated — the `persona-builder` skill must have run at least once via `/workflows:research personas`. When no personas exist, this step is skipped gracefully (same as Phase 1 graceful degradation). + +**Files to modify:** +- `plugins/compound-engineering/commands/workflows/brainstorm.md` — new Phase 0.5 between assessment and Phase 1 +- May need a lightweight "opportunity scanner" helper or extend `user-research-analyst` output + +### 2b. Research-Informed Question Generation + +**When:** After `user-research-analyst` surfaces findings in Phase 1.1, entering Phase 1.2 (Collaborative Dialogue). + +**Behavior:** Use research findings to shape questions. Instead of generic questions: + +- Generic: "Who are the users of this feature?" +- Research-informed: "Your research shows two user types here — [Persona A] who uses this daily vs [Persona B] who uses it quarterly. Should we optimize for one or both?" + +- Generic: "What problem does this solve?" +- Research-informed: "Interviews show users currently work around this by [workaround]. Should we replace that workaround entirely, or build alongside it?" + +**Files to modify:** +- `plugins/compound-engineering/commands/workflows/brainstorm.md` — Phase 1.2 guidelines +- `plugins/compound-engineering/skills/brainstorming/SKILL.md` — add "research-informed questioning" technique + +### 2c. Persona-Grounded Approach Evaluation + +**When:** Phase 2 (Explore Approaches), evaluating 2-3 approaches. 
+ +**Behavior:** Evaluate each approach against relevant personas: + +> **Approach A: Simple Export Button** +> - Serves [Persona X] well (matches their "quick export" workflow from interviews) +> - Doesn't address [Persona Y]'s need for scheduled exports +> +> **Approach B: Export Configuration Panel** +> - Addresses both [Persona X] and [Persona Y] +> - Higher complexity; [Persona X] may find it slower than current workaround + +This makes trade-off discussions concrete and user-grounded instead of hypothetical. + +**Files to modify:** +- `plugins/compound-engineering/commands/workflows/brainstorm.md` — Phase 2 guidelines +- `plugins/compound-engineering/skills/brainstorming/SKILL.md` — add persona-grounded evaluation pattern + +### 2d. Research Evidence in Brainstorm Capture + +**When:** Phase 3 (Capture the Design), writing the brainstorm document. + +**Behavior:** Add a "Research Evidence" section to the brainstorm document template: + +```markdown +## Research Evidence + +### Relevant Personas +- **[Persona Name]** (confidence: high) — [one-line relevance] + +### Key Quotes +- "[quote]" — participant NNN, on [topic] + +### Opportunities Addressed +- [Opportunity from persona table] → addressed by [decision] + +### Research Gaps +- [Areas where we're making assumptions without research backing] +``` + +This creates a traceable chain: research → brainstorm decisions → plan → implementation. + +**Files to modify:** +- `plugins/compound-engineering/commands/workflows/brainstorm.md` — Phase 3 template +- `plugins/compound-engineering/skills/brainstorming/SKILL.md` — update design doc structure + +## Open Questions + +- **`deepen-plan` coverage: resolved.** `deepen-plan` explicitly runs ALL agents from ALL sources with no relevance filtering (Step 5 rule: "Do NOT filter agents by relevance - run them ALL"). So `user-research-analyst` will be picked up automatically — no changes needed there. 
+- **Opportunity tracking:** Should we track which opportunities have been addressed across brainstorms? This was deferred (Tier 5) but could be lightweight metadata in the brainstorm YAML frontmatter. +- **Research freshness:** Should we warn when persona data is stale (>90 days)? Deferred but low-effort to add. + +## Deferred Ideas (Future Phases) + +These were explored but deferred based on priority: + +| Idea | Tier | Reason Deferred | +|------|------|----------------| +| Persona-informed stakeholder analysis in plan | 3 | Plan wiring covers the basics; deeper integration later | +| Research-backed acceptance criteria in plan | 3 | Valuable but complex; needs careful design | +| Research gap detection warnings | 3 | Nice-to-have; basic wiring surfaces gaps naturally | +| Persona context during work setup | 4 | Light touch; work executes plans that already have context | +| Persona-driven test scenarios | 4 | Interesting but speculative; validate Phase 1-2 first | +| Research-to-brainstorm handoff with context | 5 | Can be addressed when improving research workflow | +| Brainstorm-to-research feedback loop | 5 | Closes the loop; depends on Phase 2 working well | +| Opportunity tracking across workflows | 5 | Needs Phase 2a working first | +| Research freshness indicators | 5 | Low-effort but low-priority | + +## Next Steps + +→ `/workflows:plan` for Phase 1 implementation (basic wiring into brainstorm + plan) +→ Validate Phase 1 works with real research data before starting Phase 2 From f2a281bcb87fe4b227e5ae2ed2aa043050a06179 Mon Sep 17 00:00:00 2001 From: Claude Date: Fri, 13 Feb 2026 14:18:41 +0000 Subject: [PATCH 05/13] chore: update bun.lock format https://claude.ai/code/session_014DYcTSd4s126qgbtYiXaVA --- bun.lock | 1 + 1 file changed, 1 insertion(+) diff --git a/bun.lock b/bun.lock index 26361fc8..3a07728e 100644 --- a/bun.lock +++ b/bun.lock @@ -1,5 +1,6 @@ { "lockfileVersion": 1, + "configVersion": 0, "workspaces": { "": { "name": "compound-plugin", 
From f360bb52c8aa44be0f4b367b93f4f0091de33c35 Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Fri, 13 Feb 2026 13:29:24 -0500 Subject: [PATCH 06/13] =?UTF-8?q?refactor:=20review=20fixes=20=E2=80=94=20?= =?UTF-8?q?remove=20PII=20sample=20data,=20deduplicate=20playbook,=20wire?= =?UTF-8?q?=20research=20into=20workflows?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Remove 6 sample data files containing real names, company names, and confidential discussions (P1 privacy blocker) - Add docs/research/transcripts/*.md to .gitignore - Deduplicate discovery-playbook.md (keep canonical copy in research-plan, reference from other skills via relative path) - Wire user-research-analyst into brainstorm Phase 1.1 and plan Step 1 as parallel agent with silent graceful degradation when no data exists - Strengthen PII stripping guidance in transcript-insights skill - Simplify persona-builder evidence strength / hypothesis status tables - Fix research command phase recommendation to prioritize unprocessed transcripts over missing plans Co-Authored-By: Claude Opus 4.6 --- .gitignore | 3 + docs/research/interviews/.gitkeep | 0 .../interviews/2026-01-29-participant-002.md | 169 ----- .../interviews/2026-02-09-participant-001.md | 152 ---- docs/research/personas/.gitkeep | 0 .../the-front-line-account-guardian.md | 101 --- docs/research/plans/.gitkeep | 0 ...-management-effectiveness-research-plan.md | 128 ---- docs/research/transcripts/.gitkeep | 0 ...e,_and_SelectHealth_accounts_transcript.md | 556 -------------- ...rformance_review_with_Krista_transcript.md | 718 ------------------ .../agents/research/user-research-analyst.md | 8 +- .../commands/workflows/brainstorm.md | 7 +- .../commands/workflows/plan.md | 3 + .../commands/workflows/research.md | 4 +- .../skills/persona-builder/SKILL.md | 24 +- .../references/discovery-playbook.md | 414 ---------- .../skills/transcript-insights/SKILL.md | 11 +- .../references/discovery-playbook.md | 
414 ---------- 19 files changed, 36 insertions(+), 2676 deletions(-) create mode 100644 docs/research/interviews/.gitkeep delete mode 100644 docs/research/interviews/2026-01-29-participant-002.md delete mode 100644 docs/research/interviews/2026-02-09-participant-001.md create mode 100644 docs/research/personas/.gitkeep delete mode 100644 docs/research/personas/the-front-line-account-guardian.md create mode 100644 docs/research/plans/.gitkeep delete mode 100644 docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md create mode 100644 docs/research/transcripts/.gitkeep delete mode 100644 docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md delete mode 100644 docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md delete mode 100644 plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md delete mode 100644 plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md diff --git a/.gitignore b/.gitignore index f8f7b971..7dc2ae43 100644 --- a/.gitignore +++ b/.gitignore @@ -3,3 +3,6 @@ node_modules/ .codex/ todos/ + +# Research data - transcripts contain raw interview data with PII +docs/research/transcripts/*.md diff --git a/docs/research/interviews/.gitkeep b/docs/research/interviews/.gitkeep new file mode 100644 index 00000000..e69de29b diff --git a/docs/research/interviews/2026-01-29-participant-002.md b/docs/research/interviews/2026-01-29-participant-002.md deleted file mode 100644 index 538a19c5..00000000 --- a/docs/research/interviews/2026-01-29-participant-002.md +++ /dev/null @@ -1,169 +0,0 @@ ---- -participant_id: user-002 -role: "Strategic Account Manager" -company_type: "Healthcare analytics vendor" -date: 2026-01-29 -research_plan: "account-management-effectiveness" -source_transcript: 
"2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md" -focus: "Tool fragmentation, user activity reporting, account health signals, leadership reporting burden" -duration_minutes: 26 -tags: [tool-fragmentation, user-activity, account-health, churn-risk, reporting-burden, contract-management] ---- - -# Interview Snapshot: user-002 - -## Summary - -This SAM described managing accounts across 8+ different tools (Salesforce, Program Manager, DPI, Jira, SharePoint, Confluence, Upslow, Dropbox, SFTP) and the resulting friction in day-to-day work. The conversation centered on building user activity reports for WellCare, Centene, and SelectHealth -- revealing that the SAM needs licensed-vs-active user data for contract negotiations but has no single place to find it. The participant articulated clear signals for account health (regular contact, responsiveness, silence as a warning) and described a weekly Monday ritual of updating every Salesforce opportunity only to still be asked the same questions by leadership. They validated the account health dashboard concept as valuable for leadership, SAMs, and sales alike. - -## Experience Map - -``` -Trigger → Context → Actions → Obstacles → Workarounds → Outcome -``` - -| Step | What Happened | Feeling | Tools/Process | -|------|--------------|---------|---------------| -| Trigger | Needed user activity reports for WellCare, Centene, and SelectHealth accounts | Practical, task-oriented | DPI, Metabase | -| Context | Manages accounts across 8+ different tools daily | Overwhelmed ("Oh, no... Good grief. Hang on.") | Salesforce, Program Manager, DPI, Jira, SharePoint, Confluence, Upslow, Dropbox, SFTP | -| Action 1 | Built filtered user reports together (active users, never-logged-in users, by account) | Engaged, collaborative | Metabase/DPI admin | -| Obstacle | Can't determine how many users are licensed per contract vs. 
how many exist in the system | Frustrated ("I need to know how many users they have registered under their contract, and I don't know how we do that") | Salesforce (contracts), DPI (users) | -| Action 2 | Updates every Salesforce opportunity every Monday with latest status for leadership | Exhausted ("it's exhausting from a Sam perspective") | Salesforce | -| Obstacle | Leadership still asks for status updates despite Monday updates | Frustrated ("I literally do this every single Monday") | Salesforce, email/Teams | -| Workaround | Maintains churn risk opportunities AND renewal notes separately in Salesforce to double-flag risks | Resigned ("there's two places that this says the exact same thing") | Salesforce | -| Action 3 | Reviewed account health dashboard prototype | Enthusiastic ("That is great... Yeah. That's excellent.") | Custom dashboard | -| Outcome | Received automated monthly email reports; agreed dashboard approach is valuable for leadership and SAMs | Positive | Metabase email subscription | - -## Insights - -### Pain Points - -> "Salesforce, program manager, decision point insights, Jira... SharePoint, Confluence... Upslow... Dropbox... SFTP crap." -- **Type:** pain-point -- **Topics:** tool-fragmentation -- **Context:** When asked how many systems they log into, the SAM listed 8-9 tools in rapid succession with visible exasperation - -> "I literally do this every single Monday. So it's hard, but I get it." -- **Type:** pain-point -- **Topics:** reporting-burden -- **Context:** Updates every Salesforce opportunity weekly, but leadership still asks for the same information - -> "a leader can't go through these one by one to see what they say" -- **Type:** pain-point -- **Topics:** reporting-burden, account-health -- **Context:** Explaining why leadership can't self-serve account status from Salesforce -- too many opportunities across too many accounts - -> "all I'm doing every week is giving you those notes and you're... 
she's managing five of me, and we all have 12, 15 accounts" -- **Type:** pain-point -- **Topics:** reporting-burden -- **Context:** Describing the scale of the status update burden -- 5 SAMs x 12-15 accounts each - -> "I had a customer where we didn't send messages for an entire quarter. And no one caught it. The customer caught it." -- **Type:** pain-point -- **Topics:** account-health, churn-risk -- **Context:** Describing a major service failure that went undetected internally until the customer escalated - -> "they've been asked for people to get access who already have access, so they don't even know who does and doesn't" -- **Type:** pain-point -- **Topics:** user-activity, contract-management -- **Context:** SelectHealth requesting access for users who already have accounts, showing they lack visibility into their own user base - -### Needs - -> "I need to know how many users they have registered under their contract, and I don't know how we do that." -- **Type:** need -- **Topics:** user-activity, contract-management -- **Context:** Needs licensed-vs-active user count per contract for renewal negotiations, but data lives in separate systems - -> "I need some sort of report that can tell me." -- **Type:** need -- **Topics:** user-activity, reporting-burden -- **Context:** Needs automated user activity reporting rather than manual data gathering across systems - -> "I also need to know if they are active because they have asked for who's not using this and who is." -- **Type:** need -- **Topics:** user-activity -- **Context:** Client (WellCare) specifically requesting active vs. inactive user data, which the SAM can't easily provide - -### Behaviors - -> "I go and update these every single Monday. 
I update every single opportunity that I have with what's the latest and greatest update so that leadership has it" -- **Type:** behavior -- **Topics:** reporting-burden -- **Context:** Weekly Monday ritual of updating all Salesforce opportunities with current status - -> "I have in the notes here, like, at risk, met with Centene, blah blah blah. So there's that. And then on top of that, I have the churn risk opportunity." -- **Type:** behavior -- **Topics:** churn-risk, account-health -- **Context:** Maintaining duplicate risk documentation -- both in renewal notes and as separate churn risk opportunities in Salesforce - -> "you can enter in risk scores on, like, where that's at" -- **Type:** behavior -- **Topics:** account-health -- **Context:** Using Salesforce risk score history feature to track account health over time - -### Workarounds - -> "I wanted to call it out. Like, hey. There is a churn risk for this... So there's two places that this says the exact same thing." -- **Type:** workaround -- **Topics:** churn-risk, reporting-burden -- **Context:** Duplicating churn risk documentation in both renewal opportunity notes and a separate churn risk opportunity to make sure it's visible - -> "my goal is to put into their upcoming renewal... You can have 50 users. Or they pay a fee of some time to maintain 40,000 users." -- **Type:** workaround -- **Topics:** contract-management, user-activity -- **Context:** Planning to use renewal negotiations to cap unlimited user growth at SelectHealth (40,000 users with no contractual limit) - -### Desires - -> "Can it be community instead of Jira? Because I know plan trying to get us completely out of Jira and selfishly, I hate Jira." -- **Type:** desire -- **Topics:** tool-fragmentation -- **Context:** Strong preference for Salesforce Community over Jira for task management; wants fewer tools, not more - -### Motivations - -> "knowing this information... help me so I can have a conversation with them." 
-- **Type:** motivation -- **Topics:** user-activity, contract-management -- **Context:** User activity data isn't just internal -- it's ammunition for contract negotiations with clients - -> "from a leadership perspective, your thing could be very helpful. From a Sam, there's aspects of it that would be helpful. And then for sure from sales, it would be good." -- **Type:** motivation -- **Topics:** account-health, dashboard -- **Context:** Validating the account health dashboard as valuable across three audiences: leadership, SAMs, and sales - -## Opportunities - -| # | Opportunity | Evidence Strength | Quote | -|---|-----------|------------------|-------| -| 1 | SAMs need a single view of account health that leadership can self-serve without asking for manual updates | Strong | "a leader can't go through these one by one... so it's just easier to ask a Sam, but it's exhausting" | -| 2 | SAMs need a way to see licensed-vs-active users per contract in one place | Strong | "I need to know how many users they have registered under their contract, and I don't know how we do that" | -| 3 | Teams need automated detection when service delivery fails (e.g., messages not sent for a quarter) | Strong | "we didn't send messages for an entire quarter. And no one caught it. The customer caught it." 
| -| 4 | SAMs need consolidated account data without logging into 8+ tools | Strong | Lists 8-9 tools when asked; visible exasperation | -| 5 | SAMs need a way to share user activity data with clients for joint account governance | Medium | "they've been asked for people to get access who already have access, so they don't even know who does and doesn't" | -| 6 | SAMs need automated churn risk detection based on usage signals so they can intervene early | Medium | Validated the concept of automated flags: "your thing could be very helpful" across three audiences | - -## Hypothesis Tracking - -| # | Hypothesis | Status | Evidence | -|---|-----------|--------|----------| -| 1 | SAMs spend a disproportionate amount of time on manual reporting and status updates | SUPPORTED | "I literally do this every single Monday" -- updates all opportunities weekly; "all I'm doing every week is giving you those notes"; leadership still asks despite documentation | -| 2 | Account health is primarily assessed through relationship signals rather than quantitative data | SUPPORTED | Health signals described are almost entirely relational: "regular contact," "responsive," "quiet," "fidgety and weird about contracts." Quantitative data is fragmented across 8+ tools | -| 3 | Higher product usage correlates with lower churn risk | SUPPORTED | Validated when shown the dashboard concept; agreed with the MAU axis as meaningful; "to the right is larger... K." (engaged with the metric) | -| 4 | Support ticket triage gaps are invisible to SAMs until customer escalation | SUPPORTED | "we didn't send messages for an entire quarter. And no one caught it. The customer caught it." 
-- direct evidence of invisible service failure | - -## Behavioral Observations - -- **Tools mentioned:** Salesforce (opportunities, risk scores, community/cases), Program Manager, DPI (Decision Point Insights), Jira, SharePoint, Confluence, Upslow (billing), Dropbox (analytics files), SFTP, Metabase (reporting), Microsoft Teams -- **Frequency indicators:** Updates every opportunity "every single Monday"; gets status requests from leadership weekly; SelectHealth adds new users "every single week" -- **Emotional signals:** Exasperation listing tools ("Oh, no... Good grief"); exhaustion about reporting burden ("it's exhausting"); enthusiasm about the dashboard concept ("That is great... Yeah. That's excellent"); frustration about redundant asks ("I literally do this every single Monday") -- **Workaround patterns:** Duplicating churn risk in two Salesforce locations for visibility; planning contract caps as workaround for unlimited user growth; building one-off Metabase reports to answer recurring data needs - -## Human Review Checklist - -- [ ] All quotes verified against source transcript -- [ ] Experience map accurately reflects story arc -- [ ] Opportunities reflect participant needs, not assumed solutions -- [ ] Tags accurate and consistent with existing taxonomy -- [ ] No insights fabricated or composited from multiple participants diff --git a/docs/research/interviews/2026-02-09-participant-001.md b/docs/research/interviews/2026-02-09-participant-001.md deleted file mode 100644 index ee48c2f8..00000000 --- a/docs/research/interviews/2026-02-09-participant-001.md +++ /dev/null @@ -1,152 +0,0 @@ ---- -participant_id: user-001 -role: "Strategic Account Manager" -company_type: "Healthcare analytics vendor" -date: 2026-02-09 -research_plan: "account-management-effectiveness" -source_transcript: "2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md" -focus: "Account management leadership changes, account health dashboard 
feedback, support ticket triage" -duration_minutes: 27 -tags: [account-health, management-style, dashboard, support-triage, churn-risk, product-usage] ---- - -# Interview Snapshot: user-001 - -## Summary - -This SAM described the recent departure of their director of customer success and its impact on the team, contrasting the departed leader's punitive management style with a more supportive interim approach. The participant validated a prototype account health dashboard that plots accounts by monthly active users and renewal/churn status, confirming the core thesis that higher product usage correlates with lower churn risk. The conversation also surfaced ongoing support ticket triage failures where tickets auto-close after 5 days without proper routing, and QA constraints caused by offshore teams lacking access to production data. - -## Experience Map - -``` -Trigger → Context → Actions → Obstacles → Workarounds → Outcome -``` - -| Step | What Happened | Feeling | Tools/Process | -|------|--------------|---------|---------------| -| Trigger | Director of customer success departed; team restructured | Relieved but concerned ("we still need that person") | - | -| Context | Team performance had dropped from 80%+ to 70% to plan under previous leadership | Validated ("it was part of my quarter one predictions") | Salesforce (renewals) | -| Action 1 | Shown prototype account health dashboard plotting accounts by MAU and health status | Impressed ("I cannot believe you created it... 
this is what the director should have built") | Custom dashboard prototype | -| Action 2 | Reviewed specific accounts (Highmark, Elevance, Medical Mutual) on the dashboard | Engaged, correcting misclassifications | Dashboard, Salesforce | -| Obstacle | Metrics don't distinguish organic churn (member population decline) from voluntary churn | Constructive ("that's their business, not like the client said we don't want you") | - | -| Action 3 | Discussed support ticket triage failures -- tickets auto-closing, improper routing | Frustrated ("no one took the steps to...") | Community (Salesforce), Jira | -| Workaround | Creating duplicate Teams threads for every support case to ensure visibility | Resigned ("I'm just gonna double up") | Microsoft Teams, Community | -| Outcome | Agreed to continue reviewing dashboard; identified Jackie and Beth as next stakeholders | Positive, forward-looking | - | - -## Insights - -### Pain Points - -> "our meetings were, like, the tone of it was just like, you're a bad kid, problem child. You didn't write this note, and I need it this way." -- **Type:** pain-point -- **Topics:** management-style -- **Context:** Describing the departed director's approach to team meetings, which killed motivation - -> "I don't think the negative tone, like, motivated anyone to, like, go above and beyond. You know? You're just like, oh, they're churning. Not gonna try to save it." -- **Type:** pain-point -- **Topics:** management-style, churn-risk -- **Context:** Linking punitive management directly to reduced effort on saving at-risk accounts - -> "someone will... and I'm like, those people I could be the person and get it better triage sometimes... And I won't know about it." -- **Type:** pain-point -- **Topics:** support-triage -- **Context:** Support tickets not routed to the right people; SAM unaware of client issues - -> "Mark? Does Mark know he needs to create tickets on other boards?" 
-- **Type:** pain-point -- **Topics:** support-triage -- **Context:** New support staff not trained on cross-board ticket routing - -> "there's that rule for five days, no response on the ticket. So the... not gonna respond. Then it auto closes" -- **Type:** pain-point -- **Topics:** support-triage -- **Context:** Tickets auto-close without resolution because they weren't properly routed - -### Needs - -> "until there's more things structured, operationalized, I think for one person, sucks." -- **Type:** need -- **Topics:** account-health, operational-process -- **Context:** Acknowledging the account management role is too much for one person without better systems - -> "I think some improvements on the triaging because I'll notice, like, someone will... and I could be the person and get it better triage sometimes." -- **Type:** need -- **Topics:** support-triage -- **Context:** Better ticket triage routing so the right people see client issues - -### Behaviors - -> "Kim's style is like, hey. You got a million dollars coming up. Let me know how it's going. Like, and just open floor and keeping it light." -- **Type:** behavior -- **Topics:** management-style -- **Context:** Describing the effective interim manager's approach -- supportive, outcome-focused - -> "I should try to schedule, like, once you have a new release, set up time with product or even lead it yourself. Like, show them the new feature. Connect it with something they've mentioned in a meeting before. Offer trainings, send them things." -- **Type:** behavior -- **Topics:** product-usage, account-health -- **Context:** Describing proactive engagement tactics to drive product adoption and reduce churn - -> "prior to her coming in, we were always hitting the 80% threshold. 
If not achieving 85% to plan 90%" -- **Type:** behavior -- **Topics:** account-health, churn-risk -- **Context:** Historical team performance on renewal targets before leadership change - -### Workarounds - -> "every case, I'm gonna create a Teams thread just so people know that it's there." -- **Type:** workaround -- **Topics:** support-triage -- **Context:** Duplicating every support case into Teams because the ticketing system doesn't ensure visibility to the right people - -### Desires - -> "I could be the provider scorecard's project manager. I could also do QA." -- **Type:** desire -- **Topics:** product-quality, operational-process -- **Context:** SAM volunteering to take on product/QA responsibilities because current quality assurance is inadequate - -### Motivations - -> "the more people are using it, the less likely you are to churn. Like, in general." -- **Type:** motivation -- **Topics:** product-usage, churn-risk -- **Context:** Validating the core hypothesis that product usage is the key leading indicator of account health - -> "If it's more sticky, they've adopted it more. It meets 80% of what they need in a tool." -- **Type:** motivation -- **Topics:** product-usage, account-health -- **Context:** Explaining why usage correlates with retention -- it reflects genuine value delivery - -## Opportunities - -| # | Opportunity | Evidence Strength | Quote | -|---|-----------|------------------|-------| -| 1 | SAMs need a way to distinguish organic churn (member population changes) from voluntary churn (client dissatisfaction) | Strong | "that's their business... 
it's not like the client said we don't want you" | -| 2 | SAMs need a way to monitor support ticket status for their accounts without manually duplicating into Teams | Strong | "every case, I'm gonna create a Teams thread just so people know that it's there" | -| 3 | Account management needs a visual overview of account health that plots usage against renewal/churn status | Strong | "I cannot believe you, of all people, created it... this is what the director of customer success should have been partnering with ops to create" | -| 4 | SAMs need a way to identify which accounts have low product usage so they can proactively drive adoption | Medium | "as a Sam, there are no users are logging in. I should try to schedule, like, once you have a new release, set up time" | -| 5 | Support teams need clearer triage routing so tickets reach the right team before auto-close | Strong | "no one took the steps to... then it auto closes" | - -## Hypothesis Tracking - -| # | Hypothesis | Status | Evidence | -|---|-----------|--------|----------| -| 1 | SAMs spend a disproportionate amount of time on manual reporting and status updates | MIXED | Not directly discussed in this interview, but the SAM described duplicating support cases into Teams as a workaround -- a form of redundant manual work | -| 2 | Account health is primarily assessed through relationship signals rather than quantitative data | SUPPORTED | SAM validated the dashboard concept as novel -- "I cannot believe you, of all people, created it... this is what the director of customer success should have been partnering with ops to create" -- implying this quantitative view didn't exist before | -| 3 | Higher product usage correlates with lower churn risk | SUPPORTED | "the more people are using it, the less likely you are to churn" and "If it's more sticky, they've adopted it more" | -| 4 | Support ticket triage gaps are invisible to SAMs until customer escalation | SUPPORTED | "And I won't know about it" -- SAM unaware of ticket issues; Blue Shield download error was set to auto-close until SAM "caught it, changed the status" after returning from PTO | - -## Behavioral Observations - -- **Tools mentioned:** Salesforce (renewals/opportunities), Jira (tickets), Community (Salesforce support cases), Microsoft Teams (informal communication), DPI (Decision Point Insights -- the analytics product), prototype account health dashboard -- **Frequency indicators:** "always hitting 80% threshold" (quarterly renewals), support ticket auto-close at 5 days, team meetings (regular cadence implied) -- **Emotional signals:** Relief about leadership change ("I feel good"), frustration with support triage ("what are you doing?"), genuine enthusiasm about dashboard prototype ("this is excellent"), resignation about workarounds ("I'm just gonna double up") -- **Workaround patterns:** Duplicating support cases into Teams threads for visibility; volunteering to do QA and PM work outside their role because current processes are inadequate - -## Human Review Checklist - -- [ ] All quotes verified against source transcript -- [ ] Experience map accurately reflects story arc -- [ ] Opportunities reflect participant needs, not assumed solutions -- [ ] Tags accurate and consistent with existing taxonomy -- [ ] No insights fabricated or composited from multiple participants diff --git a/docs/research/personas/.gitkeep new file mode 100644 index 00000000..e69de29b diff --git a/docs/research/personas/the-front-line-account-guardian.md
b/docs/research/personas/the-front-line-account-guardian.md deleted file mode 100644 index bceecc3e..00000000 --- a/docs/research/personas/the-front-line-account-guardian.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -name: "The Front-Line Account Guardian" -role: "Strategic Account Manager" -company_type: "Healthcare analytics vendor" -last_updated: 2026-02-11 -interview_count: 2 -confidence: medium -source_interviews: [user-001, user-002] -version: 1 ---- - -# The Front-Line Account Guardian - -## Overview - -The Front-Line Account Guardian is a Strategic Account Manager at a healthcare analytics vendor who manages 12-15 enterprise accounts spanning health plans, provider networks, and corporate entities. They are the primary human interface between the company and its customers -- the first to notice when something's wrong and the last line of defense before churn. - -They operate across 8+ tools daily (Salesforce, Jira, DPI, SharePoint, Confluence, Dropbox, and more), piecing together a fragmented picture of account health from relationship signals, meeting tone, and scattered data. Despite being highly organized and proactive -- updating every Salesforce opportunity weekly, creating duplicate documentation for visibility, and maintaining direct client relationships -- they are buried under a reporting burden that leadership relies on because the systems themselves don't surface what matters. - -They assess account health primarily through relationship quality (responsiveness, meeting engagement, willingness to discuss contracts) rather than quantitative metrics, not because they distrust data, but because no single tool gives them a consolidated view. When shown a prototype that plots accounts by usage and health status, they immediately validated it and wished it had existed sooner. Their deepest frustration is doing work that systems should automate -- and still being asked for it manually. - -## Goals - -1. 
Keep accounts healthy and renewing by maintaining strong client relationships (2/2 participants) -2. Have a consolidated, at-a-glance view of account health without logging into 8+ tools (2/2 participants) -3. Reduce time spent on internal reporting so more time goes to client-facing work (2/2 participants) -4. Identify churn risks early enough to intervene effectively (2/2 participants) -5. Use data (user activity, contract utilization) as leverage in renewal negotiations (1/2 participants) - -## Frustrations - -1. Leadership asks for status updates that are already documented in Salesforce (2/2 participants) -2. Too many tools to log into daily to get a complete picture of accounts (1/2 participants) -3. Support tickets get improperly triaged or auto-close without resolution, and SAMs aren't notified (2/2 participants) -4. No single place to see licensed-vs-active users per contract (1/2 participants) -5. Punitive management styles kill motivation to go above and beyond on at-risk accounts (1/2 participants) -6. Service delivery failures (e.g., messages not sent for a quarter) go undetected until the customer escalates (1/2 participants) -7. 
QA limitations (offshore team can't access production data) mean SAMs catch bugs that should have been caught earlier (1/2 participants) - -## Behaviors - -| Behavior | Frequency | Evidence | -|----------|-----------|----------| -| Updates every Salesforce opportunity with latest status | Weekly (Mondays) | (1/2 participants) | -| Maintains duplicate risk documentation in multiple Salesforce locations | Ongoing | (1/2 participants) | -| Assesses account health through relationship signals (meeting tone, responsiveness, silence) | Continuous | (2/2 participants) | -| Proactively schedules product demos and trainings for clients with low usage | As needed | (1/2 participants) | -| Creates parallel Teams threads for support cases to ensure internal visibility | Per case | (1/2 participants) | -| Enters and updates risk scores in Salesforce risk score history | Ongoing | (1/2 participants) | -| Validates product prototypes with enthusiasm when they address real pain points | When shown | (2/2 participants) | - -## Key Quotes - -> "all I'm doing every week is giving you those notes and you're... she's managing five of me, and we all have 12, 15 accounts." -> -- user-002, on the reporting burden to leadership - -> "the more people are using it, the less likely you are to churn. Like, in general." -> -- user-001, validating usage as a leading indicator of account health - -> "I cannot believe you, of all people, created it... this is what the director of customer success should have been partnering with ops to create." -> -- user-001, reacting to the account health dashboard prototype - -> "we didn't send messages for an entire quarter. And no one caught it. The customer caught it." -> -- user-002, describing an invisible service delivery failure - -> "I literally do this every single Monday. So it's hard, but I get it." 
-> -- user-002, on the weekly Salesforce update ritual that leadership still asks about - -## Opportunities - -| # | Opportunity | Evidence Strength | Participants | Key Quote | -|---|-----------|------------------|-------------|-----------| -| 1 | SAMs need a self-serve account health view that leadership can access without asking for manual updates | Strong | user-001, user-002 | "a leader can't go through these one by one... so it's just easier to ask a Sam, but it's exhausting" | -| 2 | SAMs need automated detection of service delivery failures and churn risk signals | Strong | user-001, user-002 | "we didn't send messages for an entire quarter. And no one caught it." | -| 3 | SAMs need support ticket visibility -- proper routing and notification before auto-close | Strong | user-001, user-002 | "no one took the steps to... then it auto closes" / "And I won't know about it" | -| 4 | SAMs need consolidated account data without logging into 8+ fragmented tools | Medium | user-002 | Lists 8-9 tools with visible exasperation | -| 5 | SAMs need licensed-vs-active user data per contract for renewal negotiations | Medium | user-002 | "I need to know how many users they have registered under their contract, and I don't know how we do that" | -| 6 | SAMs need a way to distinguish organic churn (member population decline) from voluntary churn | Medium | user-001 | "that's their business... it's not like the client said we don't want you" | -| 7 | SAMs need a way to share user activity data with clients for joint account governance | Weak | user-002 | "they don't even know who does and doesn't" | - -## Divergences - -_No divergences identified yet._ - -Both participants are closely aligned on the core pain points (reporting burden, fragmented tools, support triage gaps) and the value of consolidated account health monitoring. No contradictions surfaced across the two interviews. 
- -## Evidence - -| Participant | Research Plan | Date | Focus | -|------------|--------------|------|-------| -| user-001 | account-management-effectiveness | 2026-02-09 | Account management leadership changes, account health dashboard feedback, support ticket triage | -| user-002 | account-management-effectiveness | 2026-01-29 | Tool fragmentation, user activity reporting, account health signals, leadership reporting burden | - -## Human Review Checklist - -- [ ] Goals and frustrations grounded in interview evidence -- [ ] Behavior counts accurate (absence not counted as negative) -- [ ] Quotes are exact (verified against source interviews) -- [ ] Opportunities framed as needs, not solutions -- [ ] Divergences section reflects actual contradictions -- [ ] Confidence level matches interview count threshold diff --git a/docs/research/plans/.gitkeep b/docs/research/plans/.gitkeep new file mode 100644 index 00000000..e69de29b diff --git a/docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md b/docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md deleted file mode 100644 index 5e193900..00000000 --- a/docs/research/plans/2026-02-11-account-management-effectiveness-research-plan.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: "Account management team effectiveness and tooling" -date: 2026-02-11 -status: planned -outcome: "Inform the design of consolidated account health tooling and streamlined SAM workflows that reduce manual reporting, surface churn risks automatically, and give leadership visibility without burdening individual contributors" -hypotheses: - - "SAMs spend a disproportionate amount of time on manual reporting and status updates that leadership could get from existing systems" - - "Account health is primarily assessed through relationship signals (meeting tone, responsiveness) rather than quantitative data because quantitative data is fragmented across too many tools" - - "Higher product usage 
(monthly active users) correlates with lower churn risk, and SAMs intuitively know this but lack a systematic way to monitor it" - "Support ticket triage and resolution gaps are invisible to SAMs and product teams until the customer escalates" -participant_criteria: "Strategic Account Managers (SAMs), Account Managers, and CS leadership managing healthcare/enterprise analytics accounts" -sample_size: 6 -interviews_completed: 2 --- - -# Account Management Team Effectiveness and Tooling - -## Objective - -Understand how account management teams currently assess account health, manage their workflows across fragmented tools, and communicate status to leadership -- to inform building consolidated tooling that automates churn risk detection, surfaces upsell opportunities, and reduces the manual reporting burden on SAMs. - -This research will inform both product decisions (what to build in an account health system) and process decisions (how to restructure workflows so SAMs spend more time with customers and less time on internal reporting). - -## Three Most Important Things to Learn - -1. **Current behavior:** How do SAMs actually assess whether an account is healthy or at risk today? What signals do they use, and where do those signals live across their tool stack? -2. **Pain points:** Where do SAMs lose the most time in their weekly workflow, and what are the biggest gaps between what leadership needs to know and what's easily accessible? -3. **Desired outcomes:** What would "good" look like for SAMs and their leaders? What would change about their day-to-day if account health were automatically monitored?
- -## Hypotheses - -| # | Hypothesis | Status | -|---|-----------|--------| -| 1 | SAMs spend a disproportionate amount of time on manual reporting and status updates that leadership could get from existing systems | UNTESTED | -| 2 | Account health is primarily assessed through relationship signals (meeting tone, responsiveness) rather than quantitative data because quantitative data is fragmented across too many tools | UNTESTED | -| 3 | Higher product usage (monthly active users) correlates with lower churn risk, and SAMs intuitively know this but lack a systematic way to monitor it | UNTESTED | -| 4 | Support ticket triage and resolution gaps are invisible to SAMs and product teams until the customer escalates | UNTESTED | - -## Participant Criteria - -**Include:** -- Strategic Account Managers or Account Managers who manage 5+ accounts -- CS leaders who manage teams of SAMs and need cross-account visibility -- Delivery or product managers who interact with account health data -- People who have experienced at least one churn or churn risk in the past 6 months - -**Exclude:** -- Sales-only roles who don't manage ongoing accounts -- SAMs with fewer than 6 months tenure (not enough workflow patterns established) -- Offshore support staff (different tool access and workflows) - -### Screener Questions - -1. "How many accounts do you currently manage or oversee?" -2. "How many different tools do you log into on a typical Monday to stay on top of your accounts?" -3. "In the last 3 months, how many times has leadership asked you for an account status update that you'd already documented somewhere?" -4. "Describe the last account you considered at risk for churn -- how did you first know something was wrong?" - -## Discussion Guide - -### Opening (2-3 min) - -- Introduce yourself and the purpose: learning about how account management works day-to-day, not evaluating anyone's performance -- "I'd love to hear about your actual experience managing accounts. 
There are no wrong answers -- I'm trying to understand the real workflow, not the ideal one." - -### Story Elicitation (15-20 min) - -**Primary story prompt:** -> "Walk me through your last Monday morning. From when you sat down at your desk, what did you do first to get up to speed on your accounts?" - -**Follow-up probes:** -- "What happened next?" -- "Which tool did you open first? Why that one?" -- "How did you know where things stood with [specific account]?" -- "What were you trying to figure out?" -- "Was there anything you couldn't find or had to piece together from multiple places?" - -**Second story prompt (churn/risk specific):** -> "Tell me about the last time you realized an account was in trouble. Walk me through how you first noticed." - -**Follow-up probes:** -- "What was the first signal that something was off?" -- "How long had it been going on before you noticed?" -- "What did you do about it?" -- "Who else needed to know? How did you communicate it?" -- "Was there anything that could have alerted you earlier?" - -### Depth Probes (5-10 min) - -- "You mentioned updating [Salesforce/Jira/etc.] -- how much time do you spend on that per week?" -- "When leadership asks for a status update, what do they actually need that they can't get themselves?" -- "Has there been a time when a support ticket or product issue affected an account and you didn't know about it until the customer told you?" -- "If something could automatically flag at-risk accounts for you, what signals would you trust it to look at?" -- "Why was [that specific workflow/workaround] important to you?" - -### Closing (2-3 min) - -- "Is there anything about how you manage accounts that I should have asked about?" -- "If you could change one thing about your tools or process tomorrow, what would it be?" -- "Who else on the team should I talk to about this? Anyone who does things very differently from you?" 
- -## Post-Interview Checklist - -- [ ] Write interview snapshot within 24 hours (run `/workflows:research process`) -- [ ] Note top 3 surprises from this interview -- [ ] Update hypothesis status in this plan -- [ ] Identify follow-up questions for next interview -- [ ] Add new screener criteria if participant fit was imperfect - -## Schedule - -| # | Participant | Date | Status | -|---|-----------|------|--------| -| 1 | Krista (SAM - WellCare/Centene/SelectHealth) | 2026-02-09 | Completed (transcript available) | -| 2 | Ashley (SAM - WellCare/Centene/SelectHealth) | 2026-01-29 | Completed (transcript available) | -| 3 | TBD | TBD | Not scheduled | -| 4 | TBD | TBD | Not scheduled | -| 5 | TBD | TBD | Not scheduled | -| 6 | TBD | TBD | Not scheduled | - -## Human Review Checklist - -- [ ] Objective is outcome-focused (not feature-focused) -- [ ] Hypotheses are falsifiable statements about behavior -- [ ] Screener questions ask about past behavior, not opinions -- [ ] Discussion guide follows story-based structure -- [ ] No leading questions or solution pitching in guide -- [ ] Sample size appropriate for research type diff --git a/docs/research/transcripts/.gitkeep b/docs/research/transcripts/.gitkeep new file mode 100644 index 00000000..e69de29b diff --git a/docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md b/docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md deleted file mode 100644 index 71e82f05..00000000 --- a/docs/research/transcripts/2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts_transcript.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -title: "User activity report for WellCare, Centene, and SelectHealth accounts" -id: faafe8f5-1d43-4979-9d0f-e7c70092d222 -created_at: 2026-01-29T18:56:32.272Z -updated_at: 2026-01-29T19:22:41.841Z -source: granola -type: transcript -linked_note: 
2026-01-29_User_activity_report_for_WellCare,_Centene,_and_SelectHealth_accounts.md ---- -# User activity report for WellCare, Centene, and SelectHealth accounts — Transcript - -**You:** **[18:56:32]** Many different systems do you have to log into to find all the right data that you wanna know about your accounts. - -**Other:** **[18:56:43]** Oh, no. Are we only talking about data as in no. Okay. Alright. Well, Salesforce, - -**You:** **[18:56:59]** Yep. - -**Other:** **[18:57:00]** program manager, - -**You:** **[18:57:02]** Okay. - -**Other:** **[18:57:03]** decision point insights, - -**You:** **[18:57:07]** Okay. - -**Other:** **[18:57:08]** Jira, - -**You:** **[18:57:10]** Yep. - -**Other:** **[18:57:14]** Good grief. Hang on. Let's see. I feel like I mean, just Microsoft in general, like SharePoint, Confluence, - -**You:** **[18:57:24]** Yeah. Yep. - -**Other:** **[18:57:27]** So there's five. Upslow, - -**You:** **[18:57:30]** What's that one? - -**Other:** **[18:57:30]** Billing. So there's six. - -**You:** **[18:57:33]** Okay. - -**Other:** **[18:57:38]** Dropbox, seven. - -**You:** **[18:57:40]** What's in Dropbox? - -**Other:** **[18:57:42]** Analytics garbage. - -**You:** **[18:57:46]** Yeah. They had it. I told them. They they've been on that garbage for - -**Other:** **[18:57:46]** Not garbage, but yeah. - -**You:** **[18:57:50]** a while. - -**Other:** **[18:57:52]** Yes. So we're at seven with Dropbox. I mean, SFTP crap. I luckily haven't had to log on there for a bit because they're now starting to do that, but I have access to it. - -**You:** **[18:58:08]** Okay. - -**Other:** **[18:58:09]** That's eight That's what I can think of. Right now. - -**You:** **[18:58:19]** Off the top of your head, pretty good list. - -**Other:** **[18:58:20]** Yeah. - -**You:** **[18:58:22]** Okay. Cool. - -**Other:** **[18:58:22]** Yep. - -**You:** **[18:58:25]** Alright. 
Well, I would love because I feel like I I have not as big of a purview or like I have my focus is narrower, but, yeah, I I feel the same pain at least in my, like, little world of just - -**Other:** **[18:58:40]** Yeah. - -**You:** **[18:58:42]** focusing on the predict. Folks. And so alright, I have something I wanna show you in a sec, but I am gonna fix this first. Okay. Do you want everyone in WellCare - -**Other:** **[18:58:53]** Yes - -**You:** **[18:58:56]** Do you want everybody who's active? Or you only want active even if they're think there was one that we made one that was like non care managers or network users. - -**Other:** **[18:59:07]** I think we don't just want that. Like, I need to know - -**You:** **[18:59:08]** You need everybody. - -**Other:** **[18:59:12]** I need everyone that we have, like, licenses for because they're paying other license - -**You:** **[18:59:13]** Okay? - -**Other:** **[18:59:17]** level. - -**You:** **[18:59:17]** Okay. Cool. - -**Other:** **[18:59:18]** So whether they're active or not. But I also need to know if they are active because they have asked for who's not using this and who is. - -**You:** **[18:59:23]** Got it. Okay. Cool. Who let's do two things. Let me alright. Alright. Role name contains corp. Is active is true. Let's take this role name out. - -**Other:** **[18:59:45]** Report another. - -**You:** **[18:59:47]** Okay. - -**Other:** **[18:59:49]** Sorry. There was one more. Nine. Got you another one. - -**You:** **[18:59:57]** Alright. How about this? This is something that's pretty good. The I'll have to take out filter username. Email, we just need to make sure it's not - -**Other:** **[19:00:16]** The searching point? - -**You:** **[19:00:20]** yeah. - -**Other:** **[19:00:21]** Or impulse. - -**You:** **[19:00:22]** Email. Visit user user user does not contain decision point or impulse. Okay. Add Enhance. Enhance. - -**Other:** **[19:00:46]** Sweet. - -**You:** **[19:00:46]** Enhance. Alright. 
There you go. And then so this here is How can I make the alright? Here, we're gonna I'm gonna save this. And we're gonna call save this as a new question. We're gonna call - -**Other:** **[19:01:08]** K. - -**You:** **[19:01:13]** active WellCare users all. - -**Other:** **[19:01:16]** K. - -**You:** **[19:01:17]** All WellCare active, non impulse, Okay. That's one thing. Save. Add this to a dashboard. Ashley. Okay. And I'm gonna take this one out. And then the next thing I'm gonna do is I'm gonna email all this to you, so we don't have to worry about how it looks here. - -**Other:** **[19:01:45]** Okay. - -**You:** **[19:01:46]** It's just gonna be emailed every time. Let's do another one. Which is Do you want all of this with somebody who's logged in in the last three months. - -**Other:** **[19:02:09]** Yeah. - -**You:** **[19:02:11]** When is that the good time is that a good time, Mark? Or you want people who have not - -**Other:** **[19:02:14]** Yeah. I think that that's fair. We we can start with three months. Well, - -**You:** **[19:02:18]** Is it any better? Because, like, it's not - -**Other:** **[19:02:21]** the last login, can I - -**You:** **[19:02:21]** or both? Yeah. I'm gonna just make another one so that we can just - -**Other:** **[19:02:24]** filter by blanks? - -**You:** **[19:02:27]** get another XLS so that you don't have to do your own filtering. - -**Other:** **[19:02:27]** Okay. - -**You:** **[19:02:30]** Unless you whatever. It's you can do your own here, but we could also just make it so that it's - -**Other:** **[19:02:34]** Yeah. Okay. - -**You:** **[19:02:38]** let me - -**Other:** **[19:02:38]** Sure. - -**You:** **[19:02:39]** let's just do that. Okay. So editor, do you want - -**Other:** **[19:02:45]** Let's do have not. - -**You:** **[19:02:45]** people who have or have not logged in? Okay. Last login. - -**Other:** **[19:02:54]** Never. Should be never. If you can. - -**You:** **[19:03:00]** We can. Exclude oh, is empty. 
Last login. Exclude. Okay. Save. Wait. Cancel. Okay. 427 people who've never logged in. - -**Other:** **[19:03:33]** K. - -**You:** **[19:03:36]** And are - -**Other:** **[19:03:39]** Okay. - -**You:** **[19:03:39]** active. - -**Other:** **[19:03:42]** K. - -**You:** **[19:03:42]** Okay. - -**Other:** **[19:03:44]** Yep. Because then I can go to him and say, what's get rid of these people. - -**You:** **[19:03:48]** Yeah. And there yeah. I can't I don't know what they're a little weird because they have the SSO login. So you wanna just they have, like, the provider network user. Do know how that works? For that, like, weird other side of the thing that Brad works with? - -**Other:** **[19:04:06]** So yeah. So then in curious because if there's 427 people who are active, And I remember you giving me a list where there was only, like, 47 users, actually. - -**You:** **[19:04:19]** Yeah. Yeah. Yeah. That's because it was, like, all people who were not those were only for, like, Sunshine Health. Or Centene I remember there was, like, one Centene Corp one that we, like, tracked all the way down. - -**Other:** **[19:04:39]** So I think what I need is I need to know how many users they have registered under their contract, and I don't know how we do that. - -**You:** **[19:04:47]** Right. - -**Other:** **[19:04:51]** So because they they're only allowed 75, so they - -**You:** **[19:04:51]** Well, yeah, we can we can - -**Other:** **[19:04:55]** clearly have more than that. But - -**You:** **[19:04:56]** One Right. And this is all the whole thing, and they have, like, all the, like, SSO login. There's a whole their whole network team basically logs in to decision point. And some of them, like, only do it, like, once. They're like, I'm going to see doctor so and so Let me print out their reports, and then I'm out. So here's what I'm gonna do. 
I'm gonna give you all And then I would check with Brad on, like, how we would wanna break it up And I can tell you how some of them let's do let's do this at least. Okay. So we're gonna save This is a new question. It's gonna be called all active users all and never logged in. Okay. - -**Other:** **[19:05:50]** k. - -**You:** **[19:05:52]** And who have never logged in. Okay. We're gonna add this to dashboard. Okay. I think we should go back to that list. That we made earlier and then we should filter out those ones again because it's Okay. I don't think you want any care managers MEM. Centene, - -**Other:** **[19:07:05]** Yes. - -**You:** **[19:07:05]** because I think it's you're the you're a corp. Right? Okay. - -**Other:** **[19:07:13]** I mean, yes, the specific contract is court. - -**You:** **[19:07:18]** Okay. This is the 42 rows. This is the 42 rows. - -**Other:** **[19:07:19]** While you're looking at this, okay. Okay. - -**You:** **[19:07:24]** So I'm gonna save it as a new one, and this is gonna be - -**Other:** **[19:07:25]** Okay. No offense. Centimeters. - -**You:** **[19:07:28]** as a new one. All active WellCare's Centene. - -**Other:** **[19:07:36]** Okay. - -**You:** **[19:07:38]** Okay. - -**Other:** **[19:07:44]** One other question for you because so Centene and then SelectHealth is my only other analytics customer. - -**You:** **[19:07:51]** Okay. - -**Other:** **[19:07:55]** Can I get one for Select? Because - -**You:** **[19:07:55]** Yeah. - -**Other:** **[19:07:59]** I think they have 40,000 users, and they for new users, I feel like, every single week. We have nothing in the contract that prevents them from doing this. So my goal is to put into their - -**You:** **[19:08:15]** Yeah. - -**Other:** **[19:08:15]** upcoming renewal No. We're done. We're done with this. - -**You:** **[19:08:19]** Or it's really yeah. Or yeah. - -**Other:** **[19:08:19]** You can have 50 users. 
- -**You:** **[19:08:22]** Or it's - -**Other:** **[19:08:25]** Or they pay a fee of some time to maintain - -**You:** **[19:08:25]** yeah. They can tell you that. - -**Other:** **[19:08:28]** 40,000 users. - -**You:** **[19:08:29]** Yeah. Exactly. Let's let me I can figure this out. But not all active, but I - -**Other:** **[19:08:35]** But knowing this information, well, - -**You:** **[19:08:36]** corp. Yeah. Yeah. Yeah. I always tell. - -**Other:** **[19:08:40]** help me so I can have a conversation with them. - -**You:** **[19:08:42]** No. Like, we need it. You're like Do you does this person who's never logged in really need - -**Other:** **[19:08:44]** Right. - -**You:** **[19:08:47]** Okay. I'm gonna edit this. This is gonna be your Centene tab. - -**Other:** **[19:08:47]** Right. And then on top of that, they've been asked for people to get access who already have access, so they don't even know - -**You:** **[19:08:55]** Yeah. Yeah. Yeah. - -**Other:** **[19:08:59]** who does and doesn't. So I need some sort of report that can tell me. - -**You:** **[19:09:00]** Yeah. Yep. Alright. Alright. Edit. Client ID. Client name, client I gotta do ID. Gotta figure out which one it is. Alright. While we're back to philosophizing, what while we're waiting. How do you characterize what is a good account or an account that's like fine versus an account that's in trouble. - -**Other:** **[19:10:07]** Good question. So, I mean, I have regular contact with - -**You:** **[19:10:16]** Yep. - -**Other:** **[19:10:18]** all of my accounts. - -**You:** **[19:10:21]** So some of it's a feel. - -**Other:** **[19:10:22]** Yeah. So I would know if something was off just by that, but I mean, obviously, if they tell me point blank, - -**You:** **[19:10:32]** Right. - -**Other:** **[19:10:32]** hey. We're not happy about this. Hey. Whatever. Another one that's easy to identify is if we've recently had a pretty big issue. For them. 
That - -**You:** **[19:10:43]** Like, in a case, - -**Other:** **[19:10:44]** yeah. - -**You:** **[19:10:44]** Yeah. Yeah. Yeah. - -**Other:** **[19:10:46]** Mean, like, for I know your analytics, but a good example of for engagement, I had a customer where we didn't send messages for an entire quarter. And no one no one caught it. The customer caught it. - -**You:** **[19:10:57]** No. No one knew. Yeah. - -**Other:** **[19:10:59]** So for me, it's like, oh, yep. This is a this is a big risk, folks, because we were done. So that's that's an easy way Silence - -**You:** **[19:11:12]** Yeah. - -**Other:** **[19:11:12]** if I, like, get on meetings. And there's some customers that just don't talk. Right? They're they're happy and whatever, but I feel like you get on meetings and you're asking them questions and they can't answer questions or they're just really quiet, - -**You:** **[19:11:23]** We'll get back to you. Or yeah. I'm not sure. - -**Other:** **[19:11:26]** Yeah. - -**You:** **[19:11:28]** Oh, we gotta ask him about it. - -**Other:** **[19:11:28]** Not very, like, yeah, responsive or committal to anything. That's another key indicator that something's usually wrong. When we talk about, like, contracts, if they're I guess it goes back to the nonresponsive, but when they they get kind of, like, fidgety and weird about talking about contracts, What else? For a good account, I feel like you're having regular contact with them Your phone calls are productive. They are interested in knowing more, learning more, or they're content with what they have and - -**You:** **[19:12:13]** Yeah. - -**Other:** **[19:12:14]** and they're happy. - -**You:** **[19:12:15]** Right. They're like, - -**Other:** **[19:12:16]** And they express as much. Like, hey. We don't need any more, but - -**You:** **[19:12:19]** We will yeah. You guys are great. We love you. - -**Other:** **[19:12:20]** we feel great about what we have. Yeah. 
-
-**You:** **[19:12:23]** Li like, let us but this goes back to enjoying you while we're
-
-**Other:** **[19:12:27]** Yeah.
-
-**You:** **[19:12:27]** doing our other jobs.
-
-**Other:** **[19:12:29]** Yep. Yep.
-
-**You:** **[19:12:31]** Know. Some
-
-**Other:** **[19:12:32]** So I felt like those are kind of the big the big things. I feel like most customers will be transparent but there are definitely warning signs. Like, what I mentioned.
-
-**You:** **[19:12:42]** Okay. Cool. Alright. Here's what I got. I got now here is you have active WellCare users, active you WellCare users who've never logged in. Active WellCare users with Centene Corp,
-
-**Other:** **[19:12:52]** And it's Okay.
-
-**You:** **[19:12:55]** and now you have active select users.
-
-**Other:** **[19:12:56]** Okay. Perfect. That is great.
-
-**You:** **[19:12:59]** Alright. And then now I'm going to
-
-**Other:** **[19:13:05]** And is this
-
-**You:** **[19:13:08]** Auto
-
-**Other:** **[19:13:09]** is this in DPI, like, when I log in?
-
-**You:** **[19:13:11]** No. I'm gonna email this to you.
-
-**Other:** **[19:13:12]** Okay. Gotcha.
-
-**You:** **[19:13:17]** Every month.
-
-**Other:** **[19:13:17]** K.
-
-**You:** **[19:13:19]** And I'm gonna make sure I get filter values. I'm gonna attach the files as or the results the files to results. I'm gonna have them send only attachments. No charts. That's fine. And gonna send this email now.
-
-**Other:** **[19:13:39]** K.
-
-**You:** **[19:13:40]** Then you'll let me know. Okay.
-
-**Other:** **[19:13:48]** Alright.
-
-**You:** **[19:13:51]** Done. I think it think it might come from either support or Metabase.
-
-**Other:** **[19:14:07]** K. I haven't gotten anything yet, but I'm guessing because there's files that attached, it'll take a sec.
-
-**You:** **[19:14:13]** Alright. Here's something I've been working on.
-
-**Other:** **[19:14:13]** K.
-
-**You:** **[19:14:16]** Because I have also been struggling with this. This is a big chart and all of these dots are
-
-**Other:** **[19:14:27]** K.
-
-**You:** **[19:14:28]** DPI customers. And so here's, like, CalOptima, They're looking pretty good. The way that chart works is it takes left to right is monthly active users. So like, seventy five.
-
-**Other:** **[19:14:50]** So to the right is larger. K.
-
-**You:** **[19:14:53]** Yeah. And then yeah, the Calyto has got 46. And then you go into, like, Horizon Blue Cross Blue Shield's got one.
-
-**Other:** **[19:15:01]** Okay.
-
-**You:** **[19:15:02]** And then top section are people with upsell. Account or upsell opportunities in Salesforce. Middle is renewals and just like I think, like regular business, new business. And then the bottom section are ones with churn. Or downgrade opportunities.
-
-**Other:** **[19:15:24]** Okay.
-
-**You:** **[19:15:26]** What is this one? Amended scope ARR. Yeah. I guess this is, like, down. Yeah. And then let's do which one? Okay. Care first. My old. My old crew. Let's look at one here. VNS is new. Regional Blue Shield is also new. Would something like this, like this your triple s This also shows the Jira tickets. So, like, the quarterly model refresh, so the data ops, like, the the monthly refreshes, and then if any of them are blocked, or whatever. It also shows STARZ data. I don't know. This one doesn't have STARS data for some reason, but is something like this helpful
-
-**Other:** **[19:16:26]** And this would only be for analytics. Right?
-
-**You:** **[19:16:29]** Well, if this is helpful for analytics, we can try to, like, continue to expand and then just have different views of, like, only Ashley's accounts or like, whatever. We'd have to figure out what the right metrics are to, like, put them in these different zones. Like,
-
-**Other:** **[19:16:46]** Yes.
-
-**You:** **[19:16:47]** analytics is nice because there's at least, like, monthly active users, and that's, like, a an easy to tell metric of like, hey, if you have 46 people logging in, that's a good sign. If you have
-
-**Other:** **[19:16:59]** Yeah.
-
-**You:** **[19:16:59]** zero or one, bad sign.
-
-**Other:** **[19:17:03]** Okay. So what
-
-**You:** **[19:17:09]** Well, we're gonna
-
-**Other:** **[19:17:09]** if a Sam was
-
-**You:** **[19:17:11]** the way that I'm thinking about it, you're probably not the best. Well, I think you're the best example because we wanna make people more like you. But, like, what I'm thinking is not everybody is actually meeting with their customers regularly.
-
-**Other:** **[19:17:23]** Yeah.
-
-**You:** **[19:17:24]** And then my goal is that they don't ever log in to this,
-
-**Other:** **[19:17:25]** Yeah.
-
-**You:** **[19:17:28]** The system detects like, here's a churn risk because, like, the the monthly
-
-**Other:** **[19:17:31]** Yeah.
-
-**You:** **[19:17:34]** active users have dropped. Like, get in contact with this person immediately and it either makes like a Jira card for them and then we can track the Jira board. Like, the Jira board of like, churn risk accounts or like upsell check ins then there's just a big board and then you guys actually check that Jira board and it just gets assigned to like I don't even know. Whoever is someone who's not checking in with their accounts regularly.
-
-**Other:** **[19:17:59]** Can it be community instead of Jira? Because I know plan trying to get us completely out of Jira and
-
-**You:** **[19:18:06]** All in
-
-**Other:** **[19:18:06]** selfishly, I hate Jira.
-
-**You:** **[19:18:07]** hey. That's fine. That's fine. I can't I'm not allowed to leave Jira,
-
-**Other:** **[19:18:09]** So yeah. Right. Right.
-
-**You:** **[19:18:13]** but I can I can save you, save other people?
-
-**Other:** **[19:18:14]** But for a Sam, yeah.
-
-**You:** **[19:18:17]** Is it a Salesforce? Is it a commute there are there, like, cases that are basically internal only? I guess I could ask Amir, though.
-
-**Other:** **[19:18:21]** Yep. Yes. Yeah. There's internal only quesos
-
-**You:** **[19:18:27]** Smooth.
-
-**Other:** **[19:18:28]** There's just, like, a check mark box in community. So you can do internal, and then there's external facing ones. I think it would be helpful for two things. One, oh, one other question I have before I jump into that. How would it be getting updated? Like, is it automatic based on
-
-**You:** **[19:18:45]** That's what yeah. It's pulling Salesforce data
-
-**Other:** **[19:18:46]** okay.
-
-**You:** **[19:18:48]** It's pulling the DPI data. It's pulling the data, like,
-
-**Other:** **[19:18:51]** It's not like a Sam has to go in and update anything.
-
-**You:** **[19:18:53]** no.
-
-**Other:** **[19:18:54]** Okay.
-
-**You:** **[19:18:54]** The whole point is, like, we're doing this for Sam's and wanna do this for salespeople and basically say, like,
-
-**Other:** **[19:18:55]** So I think yeah.
-
-**You:** **[19:18:59]** you guys are already using Salesforce. This is just something that says, here is automated upsell identified opportunities. Go qualify this lead. Like, go like, here's somebody who has caps. They should have Haas. They have low Haas score.
-
-**Other:** **[19:19:09]** Yeah.
-
-**You:** **[19:19:12]** Talk to them about it.
-
-**Other:** **[19:19:13]** Okay.
-
-**You:** **[19:19:14]** Immediately. And then, like,
-
-**Other:** **[19:19:15]** Yeah. Okay.
-
-**You:** **[19:19:16]** they don't have to log in to that, but, like, it's helpful if you do.
-
-**Other:** **[19:19:16]** Right. So one thing I think would be really helpful is we as Sam's, constantly get asked questions about customers like, are they at risk for churn? Me your top three customers that are at risk, blah blah blah blah blah. It's like, go into Salesforce. But the problem is in Salesforce, like, they have to go into each account individually. Look at the opportunities, see, like, what the notes are in there. So it's very difficult for a leader to do that, so it's just easier to ask
-
-**You:** **[19:19:47]** Right.
-
-**Other:** **[19:19:49]** a Sam, but it's exhausting from a Sam perspective when
-
-**You:** **[19:19:50]** Right.
-
-**Other:** **[19:19:52]** all I'm doing every week is giving you those notes
-
-**You:** **[19:19:54]** One what's the how do you figure out if like, how do you look at an account and their opportunities?
-
-**Other:** **[19:19:56]** and you're
-
-**You:** **[19:20:01]** Like, let's pick one. Here. Eleventh. No.
-
-**Other:** **[19:20:05]** want me to share my screen?
-
-**You:** **[19:20:06]** Here. Yeah. You share your screen.
-
-**Other:** **[19:20:10]** Let me share. Like, I'll show you. I mean, every here's a good one. So Centene corporate, here's the account. You go to opportunities. And then in here, like, there's a shit ton of opportunities. Right? Like, a leader can't go through these one by one to see what they say. So these two, like, I happen to enter in a churn risk because I do think there's a high probability they will churn.
-
-**You:** **[19:20:34]** Yeah. Yep.
-
-**Other:** **[19:20:37]** But, like, this is the renewal, and I have in the oh, sorry. Wrong renewal. Where in the hell is oh, right here. Here's the renewal. And I have in the notes here, like, at risk, met with
-
-**You:** **[19:20:53]** Yeah. Yeah. Yeah. Yeah.
-
-**Other:** **[19:20:54]** Centene, blah blah blah blah blah blah blah. So there's that. And then on top of that, I have the churn risk opportunity. So there's two places
-
-**You:** **[19:21:03]** Right.
-
-**Other:** **[19:21:05]** that this says this exact same thing. But I wanted to call it out. Like, hey. There is a churn risk for this lung. You also and I go and update these every single Monday. I update every single opportunity that I have
-
-**You:** **[19:21:17]** Right.
-
-**Other:** **[19:21:19]** with what's the latest and greatest update so that leadership has it,
-
-**You:** **[19:21:22]** And then they feel and then everyone still ask you.
-
-**Other:** **[19:21:23]** Yes. And then everyone still asks.
-
-**You:** **[19:21:24]** And then yeah.
-
-**Other:** **[19:21:27]** Like, today, I got a question today. Hey. Give me this. And I'm like, I I literally do this every single Monday. So it it's it's hard, but I get it. From a leader perspective, it's like, she's managing
-
-**You:** **[19:21:39]** Right.
-
-**Other:** **[19:21:39]** five of me, and we all have
-
-**You:** **[19:21:41]** Right.
-
-**Other:** **[19:21:42]** 12, 15 accounts. So she doesn't know. Right? So I get why she asks, but just frustrating from a Sam. And then another thing we do is, like, risk score history. So you can enter in risk scores
-
-**You:** **[19:21:52]** Yeah.
-
-**Other:** **[19:21:54]** on, like, where that's at. So that's another place
-
-**You:** **[19:21:57]** Yeah. Yeah.
-
-**Other:** **[19:21:58]** too that it could maybe pull information for your report. So I think from a leadership perspective, your thing could be very helpful. From a Sam, there's
-
-**You:** **[19:22:05]** Right. Yeah. Yeah.
-
-**Other:** **[19:22:07]** aspects of it that would be helpful. And then for sure from sales, it would be good.
-
-**You:** **[19:22:10]** Okay. Good. Okay. Sweet. Thank you. Has been super helpful. Okay.
-
-**Other:** **[19:22:16]** Yeah. Yeah.
-
-**You:** **[19:22:18]** I will, catch up with you later.
-
-**Other:** **[19:22:19]** Okay. K. Sounds good. Thanks.
-
-**You:** **[19:22:22]** You get the email? Did it actually work?
-
-**Other:** **[19:22:23]** Let me look really quick. Yeah. I did get it.
-
-**You:** **[19:22:27]** Alright. Check it later.
-
-**Other:** **[19:22:28]** Okay. Sounds good.
-
-**You:** **[19:22:29]** Alright. Later.
-
-**Other:** **[19:22:31]** K. Bye.
-
-**You:** **[19:22:32]** Bye.
diff --git a/docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md b/docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md
deleted file mode 100644
index e7d565b9..00000000
--- a/docs/research/transcripts/2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista_transcript.md
+++ /dev/null
@@ -1,718 +0,0 @@
----
-title: "Holly's departure and account management performance review with Krista"
-id: 77647056-1119-4ab0-80f6-83b68d45074d
-created_at: 2026-02-09T19:58:45.536Z
-updated_at: 2026-02-09T20:27:50.197Z
-source: granola
-type: transcript
-linked_note: 2026-02-09_Holly's_departure_and_account_management_performance_review_with_Krista.md
----
-# Holly's departure and account management performance review with Krista — Transcript
-
-**Other:** **[20:00:27]** Matt, did you go away?
-
-**You:** **[20:00:33]** Hello. Sorry. Hold on one sec. Okay. Okay. Hey.
-
-**Other:** **[20:00:50]** How you doing?
-
-**You:** **[20:00:51]** Good. How are you?
-
-**Other:** **[20:00:51]** It's this one. Great. Great. Great. Great.
-
-**You:** **[20:00:56]** Did you so one was two two ones for me. Was wanna check-in how you work because I I heard heard from a bird that Holly's not out or Holly's out.
-
-**Other:** **[20:01:08]** Let's correct. She's out. As of last Thursday.
-
-**You:** **[20:01:14]** How are you? How are you holding up? Alright.
-
-**Other:** **[20:01:17]** No. I got the news while I was away. Like, Jackie called me. And I was like, she was my proxy. So I was like, oh, shit. So, like, she would not call me.
-
-**You:** **[20:01:25]** Someone someone yeah. Like, she would not call you unless something blew up.
-
-**Other:** **[20:01:26]** So I was like, what? My clients are freaking ridiculous. I left in every like, I leave at a good spot. I'm like, I never, like, leave the reds. You know what I mean? I was like, how It's like, Norden. And then she sent me a text. She was like, Krista, you must call me right now. And I was like, oh, shit. And then she was like, Krista, we just got an email. And I was like, o m g.
-
-**You:** **[20:01:54]** So
-
-**Other:** **[20:01:55]** It was part of my quarter one predictions. I don't know if you know any back story, but I was just like, I don't know. I don't know how perception is with the other leaders. Like, if she plays nice and, like, is moving things forward, Like, I I the effectiveness wasn't coming across super strong, not just from, like, a personable, like, interpersonal, like, do I like working with her?
-
-**You:** **[20:02:15]** Yeah. Right. Right. Right.
-
-**Other:** **[20:02:17]** But I was like, I don't like, it seems like Kim is doing a lot of these, like, trying to piece together some of the things and then tackling, like, the bigger escalations, which maybe, to be fair, was just breaking up the the bandwidth of work.
-
-**You:** **[20:02:30]** Right.
-
-**Other:** **[20:02:31]** But then, like, with our 70% to plan quarter four, I was like, prior to her coming in, we were always hitting the 80% threshold. If not achieving 85% to plan 90% as of vertical of rev like, getting our attention. Then our renewals done. She comes in, and now we're 70%. Not saying, like,
-
-**You:** **[20:02:55]** Right now, yeah, it's
-
-**Other:** **[20:02:56]** she was to blame, but, like, if
-
-**You:** **[20:02:57]** brought in right. There's, like,
-
-**Other:** **[20:02:58]** if
-
-**You:** **[20:02:59]** yeah.
-
-**Other:** **[20:03:00]** if I'm a bigger, like, leader, I'm just gonna be like, what's cause and effect?
-
-**You:** **[20:03:02]** I could try it. Yeah.
-
-**Other:** **[20:03:05]** They were hitting 80%. You come in, and now there's a million germs?
-
-**You:** **[20:03:09]** Right.
-
-**Other:** **[20:03:09]** So I was just like, I know. I feel like things a little cutthroat and whether it was her her because of her or not,
-
-**You:** **[20:03:17]** Right.
-
-**Other:** **[20:03:19]** I feel like it was was risky. I felt like all through quarter one, it's just been like, I don't know what's gonna happen.
-
-**You:** **[20:03:24]** Right.
-
-**Other:** **[20:03:26]** And I feel like, yeah. But I don't know if something incidentally, like, it's never really just one thing. You know what I mean? Like, it's all unless, like, you I don't know.
-
-**You:** **[20:03:33]** What exactly is, like well, yeah. Like,
-
-**Other:** **[20:03:36]** Yeah.
-
-**You:** **[20:03:37]** one thing, like, becomes the reason. There's something that become the reason, but it was like and that's kinda what I took at least in my impression that I shared it with, like, at least and then I I don't know what it does, but, like, the people I talked to are Phil and Brian. And I was like, I was concerned about account management,
-
-**Other:** **[20:03:59]** Yeah.
-
-**You:** **[20:04:00]** for a while. Holly came
-
-**Other:** **[20:04:03]** Yeah. Because
-
-**You:** **[20:04:04]** and then, like,
-
-**Other:** **[20:04:04]** yeah, No.
-
-**You:** **[20:04:05]** my concern did not go away. Like, it actually, like, my my concern increased, and it was, like, already high.
-
-**Other:** **[20:04:09]** Yeah.
-
-**You:** **[20:04:13]** And I think, like, all that I all and I don't know anything is that it was, like, I think even if you're because you are always, like, you know, the people who make these decisions are two, three steps removed.
-
-**Other:** **[20:04:30]** Yeah.
-
-**You:** **[20:04:31]** Seeing her, and there can be very different impression of, like, her from one on one, but it's like,
-
-**Other:** **[20:04:35]** Yeah.
-
-**You:** **[20:04:35]** hey. People are leaving. On her team are leaving.
-
-**Other:** **[20:04:39]** That like, either immediately bolted to other teams
-
-**You:** **[20:04:41]** The bad job In a bad job market.
-
-**Other:** **[20:04:44]** Yeah. People left.
-
-**You:** **[20:04:46]** Voluntary, like,
-
-**Other:** **[20:04:47]** Who she picked up on.
-
-**You:** **[20:04:49]** in a bad job market.
-
-**Other:** **[20:04:50]** Yeah.
-
-**You:** **[20:04:51]** It's not usual. True numbers, like,
-
-**Other:** **[20:04:55]** Yeah.
-
-**You:** **[20:04:57]** are bad. And then I I think
-
-**Other:** **[20:04:59]** And, like, even in meetings, again, I think her work like, I think the role as it is now is current is currently and I said that to Kim. I was like,
-
-**You:** **[20:05:09]** Right. Yeah.
-
-**Other:** **[20:05:11]** until there's more things structured, operationalized,
-
-**You:** **[20:05:14]** Right.
-
-**Other:** **[20:05:15]** I I think for one person, sucks.
-
-**You:** **[20:05:18]** Right.
-
-**Other:** **[20:05:18]** But in meetings, like, even with, like, other leaders, like, she'd be multitasking. And, like, you tell and not, like, driving things forward or coming at it from, like, a directive. She'd just be like, oh, it's in their plate. Like, from my perspective, I have nothing to do. And it's like, that's not I don't know how that
-
-**You:** **[20:05:37]** Right.
-
-**Other:** **[20:05:38]** others perceived it. Like, I can only assume they were just like, not really present and, like, willing to learn and
-
-**You:** **[20:05:46]** Right.
-
-**Other:** **[20:05:47]** yeah.
-
-**You:** **[20:05:47]** Yeah. No. I yeah. I did not, Yeah. I got that. And it I I I just was sorry that it, like, took so long, and then it's sort of like it does feel, like, a little bit, like, hey. Good thing. Like, it should not have persisted. It's not persisting anymore. But then it's, like, also, like, hey. Like, kinda like two steps.
-
-**Other:** **[20:06:10]** Yeah. Yeah.
-
-**You:** **[20:06:11]** Like, okay. Well, Yeah. We're not. Like, it like, we still need help.
-
-**Other:** **[20:06:17]** Yeah. We still need that person. I don't think it can be Kim.
-
-**You:** **[20:06:17]** Right.
-
-**Other:** **[20:06:21]** Although, in her limited capacity, I have found to be more effective
-
-**You:** **[20:06:26]** Right now. Yeah. Exactly.
-
-**Other:** **[20:06:27]** Like, when I needed help,
-
-**You:** **[20:06:27]** Right.
-
-**Other:** **[20:06:31]** and, like, kept things like, even though we were experiencing the churns, like, our meetings were, like, the tone of it was just like, you're a bad kid, problem child. You didn't write this note, and I need it this way. And then Kim's style is like, hey. You got a million dollars coming up. Let me know how it's going. Like,
-
-**You:** **[20:06:49]** Yeah. Yeah. Right.
-
-**Other:** **[20:06:50]** and just open floor and keeping it, like, light. Although, like, we get it. We have to
-
-**You:** **[20:06:51]** Right.
-
-**Other:** **[20:06:55]** get numbers. We gotta get things to sign. But it was just, like, a completely different way
-
-**You:** **[20:06:56]** Yeah.
-
-**Other:** **[20:07:00]** of managing
-
-**You:** **[20:07:01]** Right.
-
-**Other:** **[20:07:02]** And I can't I I don't think the negative tone, like, motivated anyone to, like,
-
-**You:** **[20:07:07]** Right. Exactly. Right. Yeah. Exactly. That
-
-**Other:** **[20:07:09]** to go above and beyond. You know? You're just like, oh, they're they're churning.
-
-**You:** **[20:07:10]** right. Right. What part yeah. It's one of those things you put, like, an I've been
-
-**Other:** **[20:07:14]** Not gonna try to save it.
-
-**You:** **[20:07:19]** different context, but what I've seen is those things are taught like, they are those are rarely are like, it's and I again, I've been in some pretty bad work environments, but usually, even in bad work environments, like, types of grading personalities are tolerated like
-
-**Other:** **[20:07:44]** If you hear
-
-**You:** **[20:07:45]** Right. They're yeah. Exactly. They're they're
-
-**Other:** **[20:07:46]** You gotta there's
-
-**You:** **[20:07:47]** correlated assuming, like, above average performance or, like you know? And that, like, it's like, alright. Like, it's not good, but it's like, okay. But it's like, as soon as the something doesn't it's like, hey. That's not, like, that's not working. And, like, are quitting and the numbers aren't where they're supposed to be. Like so it's it you kinda put a target on your back. You know, in those types of situations.
-
-**Other:** **[20:08:11]** Yeah. Yeah. So I felt that way for a while, but then I'm like, is this just my personal, like,
-
-**You:** **[20:08:19]** No.
-
-**Other:** **[20:08:20]** I would love to see it change. And yeah, I put it down in my predictions. I thought it would be, like, March, like, once our fiscals went live. Like, I thought they would give it a little bit more
-
-**You:** **[20:08:30]** Yeah.
-
-**Other:** **[20:08:33]** time. But they promoted Beth and Jackie, and then the following two days, K. We don't need Holly anymore. We put some people in place to kind of
-
-**You:** **[20:08:42]** Oh, so who is it? I didn't see who it so
-
-**Other:** **[20:08:43]** Jackie is promoted. She two people report to her.
-
-**You:** **[20:08:48]** okay.
-
-**Other:** **[20:08:50]** Might be more now that Holly's not there, and they haven't backed
-
-**You:** **[20:08:50]** Right.
-
-**Other:** **[20:08:53]** obviously, backfilled the role. And then Beth McDaniel, she is new.
-
-**You:** **[20:08:55]** Oh, yeah.
-
-**Other:** **[20:08:56]** But she's more tenured
-
-**You:** **[20:08:57]** Yeah.
-
-**Other:** **[20:08:58]** Definitely, like, a bet way better. I mean, an amazing personality. So and, like,
-
-**You:** **[20:09:00]** Yeah.
-
-**Other:** **[20:09:04]** trying to learn, I've no I'm reporting to her. They asked me today, which is funny because, like, Holly was just just like, when she told me, she was like, I just wanna let you know because reporting changes. And then I had a meeting with Kim, and she was like, you know, like, we've had con conversations leadership wise and, like, you know, I think you're very buttoned up. You've been here a while.
-
-**You:** **[20:09:21]** I think you're great.
-
-**Other:** **[20:09:22]** You you can successful. Like, would you wanna be a people manager? And I was like, thank you for at least asking me because that is, like, nice. But and then I was
-
-**You:** **[20:09:26]** I think so.
-
-**Other:** **[20:09:31]** like, for just full transparency, like, I don't have any interest in it. I really like my life.
-
-**You:** **[20:09:32]** Right. Yeah.
-
-**Other:** **[20:09:36]** Now. And I speak holding that much, like, responsibility freaks me out. And I don't wanna have meetings with leaders all day. I like just meeting with clients and being an individual contributor. But I was like, I appreciate, like, it is nice to be asked
-
-**You:** **[20:09:48]** Yeah. No. That's the difference between yeah.
-
-**Other:** **[20:09:50]** and to be considered. But Holly was just, like, made it seem like mean, I got, like, what, meets expectations essentially for my
-
-**You:** **[20:09:57]** Well, I think yeah. Well, I think because it was, like, and usual usual at least, I don't know. Again,
-
-**Other:** **[20:10:04]** Different context.
-
-**You:** **[20:10:05]** like those people tend to be paranoid. Right? And you don't like you're not interested in promoting, like, you're like, I need Chris. Like, I can't, like,
-
-**Other:** **[20:10:14]** Yeah. Yeah.
-
-**You:** **[20:10:15]** I can't have this promoted because, a, she's, a threat because she'd be good. And then, like, what are they gonna think of me? And then also, like, I don't usually, those people are
-
-**Other:** **[20:10:25]** You need aces in their places.
-
-**You:** **[20:10:26]** Yeah. Exactly. Yeah. Exactly.
-
-**Other:** **[20:10:27]** Yeah. Yeah. So I feel good. And then they had they might promote someone else. They said they might have three managers at that level, and I just told them Hala. But Hala's kind of could could be a different story now that Hala's out of the picture.
-
-**You:** **[20:10:40]** Right.
-
-**Other:** **[20:10:44]** But Holly was like, I'm just interested in making money. Like, I don't wanna I don't wanna be leader. I would just wanna focus on, like, renewals because that's kinda how we make
-
-**You:** **[20:10:49]** Yeah. Right. Right.
-
-**Other:** **[20:10:52]** we that's how we make our extra money is, like, hitting renewals.
-
-**You:** **[20:10:53]** Yeah. Right. Exactly. No. Yeah. That makes that makes sense.
-
-**Other:** **[20:10:59]** Yeah. So crazy crazy that occurred on my birthday. Like, I cannot
-
-**You:** **[20:11:01]** Birthday. Oh, yeah.
-
-**Other:** **[20:11:05]** My phone was blowing up. Like, my friends because I was visiting friends, they were like I was like,
-
-**You:** **[20:11:11]** Because when I went out to because you were there because I went up to Boston. I was there when Monica was gone, and Brian, Monica, and I went out to dinner. And that was her last night. And we're because we're, like, we were talking about that.
-
-**Other:** **[20:11:21]** Yes.
-
-**You:** **[20:11:25]** And Brian told me a little bit ago, and then he's like, hold on. Let me text, like, Monica. Should text her. And he he said, like like, he texted her, and, like, as soon as he put his phone down, was like dot dot dot. Like, she was, like, right off the response back.
-
-**Other:** **[20:11:42]** Yeah.
-
-**You:** **[20:11:44]** So
-
-**Other:** **[20:11:45]** It's crazy.
-
-**You:** **[20:11:47]** yeah. It's nuts. Speaking of
-
-**Other:** **[20:11:51]** Alice.
-
-**You:** **[20:11:53]** and I'll I'll go No. No. I mean, it's this is all, like, hopefully, a good thing. But I have something I'm curious on your opinion on. This is totally different, totally random.
-
-**Other:** **[20:12:00]** Okay.
-
-**You:** **[20:12:03]** Okay. Alright. So I was when I was trying to figure out account management and because I was like, I need help. Like, I know Krista's got her stuff together, but, I don't know what the heck is going on anywhere else. And, like, I'm
-
-**Other:** **[20:12:15]** Mhmm.
-
-**You:** **[20:12:18]** nervous. And so I came up with something that I was like, more just, like, for DPI, but this is bay all these dots are accounts.
-
-**Other:** **[20:12:29]** Oh,
-
-**You:** **[20:12:30]** And so this is, like, Molina And it shows them on the perspective of, like, whether they're healthy, like, sure like, whether they have renewals, like or upsells,
-
-**Other:** **[20:12:43]** Mhmm.
-
-**You:** **[20:12:45]** in this top top section. If you have churns, you're in the bottom section. And then left to right is monthly active users in DPI.
-
-**Other:** **[20:12:54]** Oh, yeah. Yeah.
-
-**You:** **[20:12:54]** So you kinda tell, like, where people are. And then, the I also brought in so these are all the, like, opportunities that are in Salesforce. And then the work is these are all the Jira tickets across the client analytics board and the DPI or in the monthly refreshes. You could kinda see, like, what's going on, like, what is happening. And then STARZ data. So, like, where are they good, where they need help. So you kinda
-
-**Other:** **[20:13:29]** Yeah.
-
-**You:** **[20:13:32]** know what's going on when you're talking to them and then being able to see the breakdown, like, by contract. And see which ones are good or bad.
-
-**Other:** **[20:13:41]** How how are the stars being? Is it just caps ratings? What is
-
-**You:** **[20:13:46]** These are, yeah, these are basically all the I gotta figure out. These are
-
-**Other:** **[20:13:51]** Are those their master cap score?
-
-**You:** **[20:13:52]** these are overall these are contracts or overall ratings.
-
-**Other:** **[20:13:53]** Mhmm.
-
-**You:** **[20:13:57]** Then these are, like, the average of them. I think so. But I gotta figure this. I gotta figure out. How this exactly works. But, yeah, that's a good point. I should I need to figure that out. But This was and then I guess, yeah, the other thing that was going on was basically try to start to identify, like, opportunities Like, hey. Should we talk to them about Haas? Like, you know, if they don't have Haas, talk to them about Haas. If they do have Haas, like, talk about them inflection points or something else. So it's all about, like, also where we should be thinking about opportunities with them. But wanted to get your perspective from, like, just this like, a Sam perspective. Does this let me actually see. Let's see. Who Highmark actually has a chur, and this is out of the ching. So they have
-
-**Other:** **[20:15:03]** What was the churn?
-
-**You:** **[20:15:04]** renewal down they have downgrade.
-
-**Other:** **[20:15:06]** Oh, yeah. Because that was based on number lives. Like, sorry. That's that's their business. Like, that's
-
-**You:** **[20:15:13]** Right.
-
-**Other:** **[20:15:14]** it's not like
-
-**You:** **[20:15:15]** So this is not a this is not a really good thing. Like, they're actually doing
-
-**Other:** **[20:15:16]** Highmark came and said I only want yeah.
-
-**You:** **[20:15:20]** well. Like, this is not, like, okay. I'm gonna go too.
-
-**Other:** **[20:15:23]** Oh, yeah. Yeah. I see what you're getting at. Like but
-
-**You:** **[20:15:24]** It's like,
-
-**Other:** **[20:15:26]** it's not like the client said we have
-
-**You:** **[20:15:26]** do you
-
-**Other:** **[20:15:28]** 200,000 lives, and now we only want a 100,000 of them. It's like, no. They're base population
-
-**You:** **[20:15:34]** It's going down.
-
-**Other:** **[20:15:35]** external outside of us is going down.
-
-**You:** **[20:15:37]** Right.
-
-**Other:** **[20:15:38]** So it's more like if they remove the contract or a line of business, like, that level
-
-**You:** **[20:15:42]** That's, like, from, like, over above, like,
-
-**Other:** **[20:15:43]** like, a downgrade, But
-
-**You:** **[20:15:46]** like, 20% of the contract value of, like,
-
-**Other:** **[20:15:48]** yeah,
-
-**You:** **[20:15:50]** total overall value or thing. It's like yeah. There's there should be, like, some limiters in there.
-
-**Other:** **[20:15:53]** Because Memorize also can go on the opposite Sometimes we get a upsell, but it's not that no one had to work really hard for that. It would
-
-**You:** **[20:16:04]** Right. Right. Right.
-
-**Other:** **[20:16:05]** like, it's organic, but it's not based on, like, a driver of us as a vendor. It's just they have more members now.
-
-**You:** **[20:16:11]** Right. And then, yeah, then I have, like so then Medical Mutual of Omaha
-
-**Other:** **[20:16:17]** Mhmm.
-
-**You:** **[20:16:18]** of Ohio. Sorry. What do they have? They had a downgrade interesting, for twenty five, twenty six. I don't know what that one is. But I was trying to do, like, and then Geisinger and, obviously, like, these Mercy Health New England, Florida Blue, like, Providence, and Elevate like, Elevance I don't know. Do you agree with, like, them being in the, like, higher risk perspective?
-
-**Other:** **[20:16:51]** Yeah. I feel like they're mix and match. And they'll change things on a whim. Like,
-
-**You:** **[20:16:56]** Right.
-
-**Other:** **[20:16:58]** if you have 10 programs now, yeah, they're gonna renew some of them, and they're gonna get rid of some, and they're gonna get new ones. Know what I mean? It's just always changing. But they're consistent for, like, I'd say within maybe, like, the sixtieth percentage of their total revenue. Like, that's pretty much always gonna renew. But some of it is pretty variable just because they'll be like,
-
-**You:** **[20:17:19]** Right. They're all good. I like new things. Yeah.
-
-**Other:** **[20:17:22]** no. Don't need this anymore. So they're not a risk client. Like, they they've even said, like, they could not find them platform, at least on the engagement side. It does what we do. They like it. They're gonna keep using it, but it's the programs themselves that are variable.
-
-**You:** **[20:17:37]** Right.
-
-**Other:** **[20:17:39]** Like, what are they gonna do this year?
-
-**You:** **[20:17:41]** Yep. No. That that's helpful. And then I have,
-
-**Other:** **[20:17:44]** But this is a good like, I like where you're going. I cannot believe you, of all people, created it. I mean, I can, but it's, like, not your like, this is what the director of customer success should have been partnering with with ops or someone to create.
-
-**You:** **[20:17:55]** No. Yeah. Feel like yeah. This is this is kind of like, also, like, do you you kinda get closer and then you're like, I think I actually can, like, do some of this. And then it's also helpful because then you get to show it to people and they're like, oh, like, yeah, that makes sense or, like, that doesn't make sense.
-
-**Other:** **[20:18:12]** Yeah.
-
-**You:** **[20:18:14]** But I would say, like, would
-
-**Other:** **[20:18:18]** It's hitting, like, 80% of what you need.
-
-**You:** **[20:18:20]** Yeah. Exactly. And then I
-
-**Other:** **[20:18:21]** If you're just thinking of DPI clients,
-
-**You:** **[20:18:24]** well, this is basically right now because I don't have a good
-
-**Other:** **[20:18:26]** analytics.
-
-**You:** **[20:18:28]** I don't know if other other people might have different metrics. Like, monthly average users doesn't make sense for engagement. Like, I guess it might be, like, number of programs or something. Or, like, the quality of their programs. But because, like, I didn't have, like, a good matrix kinda, like, a two by two to, like, put them up against. And I also well, I I care most about DPI, obviously. But no.
-
-**Other:** **[20:18:53]** Yeah.
-
-**You:** **[20:18:54]** I care about the enterprise. But
-
-**Other:** **[20:18:54]** I mean, although analytics
-
-**You:** **[20:18:56]** then I
-
-**Other:** **[20:18:57]** is a smaller amount of revenue, it is the most consistent. Like,
-
-**You:** **[20:19:00]** Yeah. No. Exactly. And I think well, then and then I was wondering about this because do you agree with this? Like, basically, like, and I think some of the metrics reported, but, basically, the more you're the more people are using it, the less likely you are to churn. Like, in general.
-
-**Other:** **[20:19:15]** Yes.
-
-**You:** **[20:19:17]** And I guess the question, like,
-
-**Other:** **[20:19:20]** If it's more sticky,
-
-**You:** **[20:19:20]** yeah.
-
-**Other:** **[20:19:22]** they've adopted it more. It meets
-
-**You:** **[20:19:24]** And then that's the question of, like, hey.
-
-**Other:** **[20:19:25]** 80% of what they need in a tool. It'll never be per like,
-
-**You:** **[20:19:28]** No. And it's more like, how do you move
-
-**Other:** **[20:19:29]** every time we train a call team, they're like, well, we have this system. And I'm like, we're never gonna be that system.
-
-**You:** **[20:19:33]** Right. Exactly. Exactly.
- -**Other:** **[20:19:34]** Like, - -**You:** **[20:19:36]** I think it's, like, just moving it. Like, how do we move people from, like, this left side over to the right side more? Like, there's always gonna be ups and downs, it'll be different path. But, like, at least now you kind of like, hey. We know - -**Other:** **[20:19:49]** Yeah. - -**You:** **[20:19:50]** like, this is kinda what we're shooting for. - -**Other:** **[20:19:51]** Or, you know, as a Sam, there are no users are logging in. I should try to schedule, like, once - -**You:** **[20:19:57]** Right. Exactly. Like, - -**Other:** **[20:20:01]** you have a new release, set up time with - -**You:** **[20:20:02]** right. - -**Other:** **[20:20:03]** product or even leave it yourself. Like, show them the new feature. - -**You:** **[20:20:06]** Exact - -**Other:** **[20:20:07]** Connect it with something they've mentioned in a meeting before. - -**You:** **[20:20:07]** Yeah. Right. - -**Other:** **[20:20:10]** Offer trainings, - -**You:** **[20:20:11]** Okay. - -**Other:** **[20:20:13]** send them things. I know. Yeah. That's excellent. - -**You:** **[20:20:17]** And then so who is who it's like, Jackie and Beth. Probably would be more interested in this sort of stuff you think or at least - -**Other:** **[20:20:24]** Pretty yeah. In the current structure, yes. - -**You:** **[20:20:29]** alright. I'll set up time with - -**Other:** **[20:20:30]** They would be the ones to take it, but you can always run it by me. - -**You:** **[20:20:34]** Obviously, no. Yeah. That's, exactly what I did. - -**Other:** **[20:20:35]** Yeah. Because we have this meeting reoccurring. - -**You:** **[20:20:38]** Exactly. Exactly. Alright. Sweet. I will go work on that, and I wonder what the the Jira synced a while ago. That's interesting. But, alright. That's what I got. - -**Other:** **[20:20:52]** Yeah. Oh, also, like, you should I you saw it today on the HXI. I'm trying to train my clients on community. 
Like, you need to choose the right things. - -**You:** **[20:21:04]** I you're doing a really good job with that. - -**Other:** **[20:21:06]** Thank you. I do think some improvements on the triaging because I'll notice, like, someone will I and I'm like, those people I could be the person and get it better triage sometimes. - -**You:** **[20:21:15]** Yeah. And that I know. I know. That's why I'm I'm like, - -**Other:** **[20:21:19]** And I won't know about it. - -**You:** **[20:21:21]** this is, like, one of the things where they're like, they took it away, and I'm like, alright. Like, - -**Other:** **[20:21:21]** If you until you - -**You:** **[20:21:25]** you're running it. And then, - -**Other:** **[20:21:27]** Yeah. Because it's like, does what's his name? - -**You:** **[20:21:31]** Mark Yeah. I know. - -**Other:** **[20:21:32]** Mark? Does Mark know he needs to create tickets on other boards? Like, - -**You:** **[20:21:33]** Yeah. I think I think - -**Other:** **[20:21:36]** I all I see, and I'm I think he's new. - -**You:** **[20:21:39]** yeah. - -**Other:** **[20:21:39]** He just replies, like, to the client, like, this is being worked on. - -**You:** **[20:21:42]** Right. - -**Other:** **[20:21:43]** And then there's that rule for five days, no response on the ticket. So the - -**You:** **[20:21:43]** Yeah. Yeah. Yeah. - -**Other:** **[20:21:47]** not gonna respond. Then it auto closes, and I'm like, - -**You:** **[20:21:48]** What are you doing? Yeah. I know. I know. That's what - -**Other:** **[20:21:50]** no one took the steps to - -**You:** **[20:21:52]** I think that's what, like, Sunny yeah. - -**Other:** **[20:21:53]** it needs - -**You:** **[20:21:54]** I'm gonna I'll always say it to, like, Silee and because I have time with them. - -**Other:** **[20:22:00]** Yeah. - -**You:** **[20:22:01]** And let me say, like, hey. Alright. Hey. At Sai Lee and Sunny. 
- -**Other:** **[20:22:06]** Maybe there's more options needed in those drop downs that then gets it - -**You:** **[20:22:10]** Yeah. Wanted to - -**Other:** **[20:22:11]** triaged. There's, like, DPI enhancement, DPI bug, I don't we'd have to look at previous types of tickets we used to get on the support side or at least you know, that channel we used to have, and we'd be like, issue. Issue. - -**You:** **[20:22:24]** I know. Yeah. Exactly. - -**Other:** **[20:22:27]** Classify them and give the the clients more drop downs. - -**You:** **[20:22:29]** That's what we did. And that's what yeah. That's what but, like and then but then they're like, no. This the came in. They're like, these are the - -**Other:** **[20:22:35]** Yeah. These are the works the work streams. - -**You:** **[20:22:36]** you get them. Right. - -**Other:** **[20:22:40]** I'm like, but not everything fits in there. - -**You:** **[20:22:42]** They're like, they - -**Other:** **[20:22:42]** And then simple things like a DPI user, I'm like, that's - -**You:** **[20:22:43]** yeah. - -**Other:** **[20:22:45]** should be three days max. I can go in and do it right now. - -**You:** **[20:22:48]** I know. That's why and it's, like, it's, like, the bad thing of, like, that's when did did you put it? Or yeah. It's the chatter. - -**Other:** **[20:23:00]** Yeah. So my only one right now is that - -**You:** **[20:23:02]** There you go. - -**Other:** **[20:23:03]** yeah. Because now I'm like, I'm just gonna double up. I'm gonna, like, every case, I'm gonna create a Teams thread just so people know - -**You:** **[20:23:07]** I know. Exactly. We're - -**Other:** **[20:23:10]** that it's there. - -**You:** **[20:23:11]** yeah. Exactly. Seems like - -**Other:** **[20:23:12]** Yeah. The - -**You:** **[20:23:16]** DPI tickets are being are not being triaged properly, but then they get auto closed after five days. - -**Other:** **[20:23:29]** Yeah. 
Like or an example because people kept asking about that, that you know, the for Blue Shield California downloading the segment, they got - -**You:** **[20:23:39]** So I - -**Other:** **[20:23:39]** thrown an error. - -**You:** **[20:23:40]** so I I I figured - -**Other:** **[20:23:41]** But that was that was set to auto close. And I just came back from PTO - -**You:** **[20:23:42]** I Right. Yeah. Yeah. Yeah. I just what the what the f? - -**Other:** **[20:23:45]** caught it, changed the status. Yeah. - -**You:** **[20:23:49]** And that I never heard it. Like, I didn't hear about it. And I didn't so - -**Other:** **[20:23:51]** Because just since, I feel like they're just, like, - -**You:** **[20:23:53]** and that one - -**Other:** **[20:23:56]** well, - -**You:** **[20:23:56]** I'm really hoping - -**Other:** **[20:23:57]** someone reaches back out. - -**You:** **[20:23:57]** I we kinda also well, yeah, then it's, like, kinda lucky time You gotta take what you can get. What's he gonna say? Shoot. I was able to figure out what's going on. There was a problem with exporting for dynamic columns. That they fixed, and then I think that accidentally inadvertently broke something else. We also don't have the best QA. We don't have good QA. Like, in general, and it's like it sucks. No. It no. It's you and it's me. And then it's like, I'm and I have been like I'm like, there's a reason I have my job and not someone else's job. In QA because I'm not good at it. Like, I need help. And - -**Other:** **[20:24:32]** QA is me. Or the client? - -**You:** **[20:24:48]** it's we got, like, some QA person, but they're not - -**Other:** **[20:24:50]** You gotta learn that it takes a while. - -**You:** **[20:24:51]** It's not great. And they're they're offshore. Looking offshore, and they so they can only do it in in test. So they don't have access to staging. They don't - -**Other:** **[20:25:01]** Oh, yeah. Because a lot of clients, they can't have their data. 
- -**You:** **[20:25:03]** all the problems happen in real data. - -**Other:** **[20:25:05]** In prod. But a lot of clients, Blue Shield included, they have a clause in their SOW that - -**You:** **[20:25:10]** Right. Exactly. - -**Other:** **[20:25:12]** no offshore folks are allowed to work on it. - -**You:** **[20:25:14]** Exactly. But that's why it's because no so no offshore people have access to any PHI because - -**Other:** **[20:25:15]** Yeah. - -**You:** **[20:25:19]** you can't, like, delineate between and so it sucks. So yeah. Anyway, - -**Other:** **[20:25:25]** You know what? Just move me over. - -**You:** **[20:25:26]** it's my - -**Other:** **[20:25:28]** I I think at this point, I - -**You:** **[20:25:28]** it's - -**Other:** **[20:25:30]** if and I would leave the Callaway Management as whole. But I could be the provider scorecard's project manager I could also do QA. - -**You:** **[20:25:39]** You wanted me like, I mean, if you want to, like, there because there's, like, this whole delivery - -**Other:** **[20:25:39]** In product all the time. You can match my base salary - -**You:** **[20:25:45]** Yeah. There's a whole delivery org. I don't know. Do you wanna hang out with engineers though? I don't know how much - -**Other:** **[20:25:50]** Yeah. Because I know you have to be very explicit with them. - -**You:** **[20:25:51]** like, yeah. - -**Other:** **[20:25:54]** Actually, I'd ask Keana if she thinks I interact with the engineer as well. Because I'm always like, - -**You:** **[20:25:58]** I think you do because, like, a lot of it is it's having you have to be comfortable being, like, why am I the only person talking in this situation all the time? They're like, oh, yeah. We - -**Other:** **[20:26:07]** Yeah. - -**You:** **[20:26:08]** don't usually talk. Like yeah. So and then you have to be, yeah, very explicit. I mean, if you want the delivery role, - -**Other:** **[20:26:13]** Is it posted? 
- -**You:** **[20:26:14]** I'm sure like, that that I don't think I don't know. It's in Arrow Koala's role, but Sunny or not Manju. Reached out to me asking about who, like, I didn't really know who that was, but, I mean, if you want, - -**Other:** **[20:26:32]** I'd be interested in reading it. Like, - -**You:** **[20:26:34]** alright. - -**Other:** **[20:26:36]** it'd probably be, like, a six month wait. - -**You:** **[20:26:36]** I was just saying yeah. Because I think they - -**Other:** **[20:26:41]** I'd love to leave, Clydes. And just be the we, like, also could go to a - -**You:** **[20:26:45]** it would - -**Other:** **[20:26:47]** client meeting and help someone out, like an account manager to talk about things. Like, could still be hybrid. But I would love Not that I'm unhappy in my current role, especially given all that just occurred in the last week. - -**You:** **[20:26:59]** Right. - -**Other:** **[20:27:01]** But I've I've constantly said, like, Zoey, should I just be the PM for this? Because I'm the only one who seems to get it. - -**You:** **[20:27:05]** I know. Exactly. - -**Other:** **[20:27:10]** Yeah. Okay. - -**You:** **[20:27:12]** On. I'm - -**Other:** **[20:27:12]** That's cool. - -**You:** **[20:27:13]** get it for at at Krista Schindler. Okay. Alright. I'll figure it out. - -**Other:** **[20:27:35]** Okay. Cool. - -**You:** **[20:27:36]** You rock. I'll talk to you later. - -**Other:** **[20:27:39]** Alright. See you, Matt. - -**You:** **[20:27:40]** Alright. Bye. 
diff --git a/plugins/compound-engineering/agents/research/user-research-analyst.md b/plugins/compound-engineering/agents/research/user-research-analyst.md
index 39456911..fe8ed4b4 100644
--- a/plugins/compound-engineering/agents/research/user-research-analyst.md
+++ b/plugins/compound-engineering/agents/research/user-research-analyst.md
@@ -169,8 +169,8 @@ If `docs/research/` does not exist or contains no files, return:
 
 ## Integration Points
 
-Intended callers (to be wired in PR 2):
-- `/workflows:brainstorm` Phase 1.1 -- surface research before brainstorming
-- `/workflows:plan` Step 1 -- inform planning with user evidence
+This agent is called by:
+- `/workflows:brainstorm` Phase 1.1 -- surfaces research before brainstorming
+- `/workflows:plan` Step 1 -- informs planning with user evidence
 
-Will run in parallel with `learnings-researcher` and `repo-research-analyst` during planning phases.
+Runs in parallel with `learnings-researcher` and `repo-research-analyst` during planning phases.
diff --git a/plugins/compound-engineering/commands/workflows/brainstorm.md b/plugins/compound-engineering/commands/workflows/brainstorm.md
index b4f3a0f6..d41be77a 100644
--- a/plugins/compound-engineering/commands/workflows/brainstorm.md
+++ b/plugins/compound-engineering/commands/workflows/brainstorm.md
@@ -37,14 +37,17 @@ Use **AskUserQuestion tool** to suggest: "Your requirements seem detailed enough
 
 ### Phase 1: Understand the Idea
 
-#### 1.1 Repository Research (Lightweight)
+#### 1.1 Repository & User Research (Lightweight)
 
-Run a quick repo scan to understand existing patterns:
+Run these agents **in parallel** to understand existing patterns and user context:
 
 - Task repo-research-analyst("Understand existing patterns related to: ")
+- Task user-research-analyst("Surface research relevant to: ")
 
 Focus on: similar features, established patterns, CLAUDE.md guidance.
 
+If `user-research-analyst` returns relevant findings (personas, insights, opportunities), briefly summarize them before starting the collaborative dialogue. If no research data exists, skip the summary silently and proceed directly to the collaborative dialogue -- do not mention the absence of research or suggest running `/workflows:research`.
+
 #### 1.2 Collaborative Dialogue
 
 Use the **AskUserQuestion tool** to ask questions **one at a time**.
diff --git a/plugins/compound-engineering/commands/workflows/plan.md b/plugins/compound-engineering/commands/workflows/plan.md
index 631bccc6..f2f319af 100644
--- a/plugins/compound-engineering/commands/workflows/plan.md
+++ b/plugins/compound-engineering/commands/workflows/plan.md
@@ -76,10 +76,12 @@ Run these agents **in parallel** to gather local context:
 
 - Task repo-research-analyst(feature_description)
 - Task learnings-researcher(feature_description)
+- Task user-research-analyst(feature_description)
 
 **What to look for:**
 
 - **Repo research:** existing patterns, CLAUDE.md guidance, technology familiarity, pattern consistency
 - **Learnings:** documented solutions in `docs/solutions/` that might apply (gotchas, patterns, lessons learned)
+- **User research:** relevant personas, interview insights, opportunities, and research gaps from `docs/research/`
 
 These findings inform the next step.
@@ -114,6 +116,7 @@ After all research steps complete, consolidate findings:
 
 - Document relevant file paths from repo research (e.g., `app/services/example_service.rb:42`)
 - **Include relevant institutional learnings** from `docs/solutions/` (key insights, gotchas to avoid)
+- **If user research findings were returned**: include relevant personas and their relationship to this feature, key insights and quotes from interviews, research gaps (areas where coverage is thin). If no research data was found, skip this bullet silently.
 - Note external documentation URLs and best practices (if external research was done)
 - List related issues or PRs discovered
 - Capture CLAUDE.md conventions
diff --git a/plugins/compound-engineering/commands/workflows/research.md b/plugins/compound-engineering/commands/workflows/research.md
index 071a73ed..6f226bb4 100644
--- a/plugins/compound-engineering/commands/workflows/research.md
+++ b/plugins/compound-engineering/commands/workflows/research.md
@@ -53,9 +53,9 @@ Research status:
 **Counting unprocessed transcripts:** Count files in `docs/research/transcripts/`. Then check `docs/research/interviews/` frontmatter for `source_transcript` fields. Transcripts not referenced by any interview are unprocessed. Simpler fallback: count transcripts minus count of interviews.
 
 **Recommend the next logical phase** based on state:
-- No plans exist → recommend Plan
-- Unprocessed transcripts exist → recommend Process
+- Unprocessed transcripts exist → recommend Process (ready-to-process data takes priority)
 - Interviews exist but no personas → recommend Personas
+- No plans and no transcripts → recommend Plan
 - All phases have artifacts → show neutral menu
 
 Use **AskUserQuestion** with three options:
diff --git a/plugins/compound-engineering/skills/persona-builder/SKILL.md b/plugins/compound-engineering/skills/persona-builder/SKILL.md
index 73d31bb4..96ce83cd 100644
--- a/plugins/compound-engineering/skills/persona-builder/SKILL.md
+++ b/plugins/compound-engineering/skills/persona-builder/SKILL.md
@@ -9,7 +9,7 @@ description: "Synthesize personas from processed interview snapshots with confid
 
 Synthesize personas from processed interview snapshots. Personas are living documents that grow more confident as interviews accumulate. Follow evidence-based persona construction with confidence tracking, opportunity tables, and contradiction handling via Divergences sections.
 
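The unprocessed-transcript count described in the `/workflows:research` hunk above can be sketched in code. This is a minimal sketch, not part of the plugin: the function name `unprocessed_transcripts`, the `.md` extension, and the regex-based frontmatter parsing are illustrative assumptions; the command only specifies the two directories and the `source_transcript` field.

```python
# Sketch of the counting logic: a transcript is unprocessed when no
# interview's frontmatter references it via `source_transcript`.
# Assumptions (not in the command text): files are .md, and frontmatter
# is a leading YAML block fenced by `---` lines.
import re
from pathlib import Path

def unprocessed_transcripts(root: Path) -> list[Path]:
    transcripts = sorted((root / "docs/research/transcripts").glob("*.md"))
    referenced = set()
    for interview in (root / "docs/research/interviews").glob("*.md"):
        text = interview.read_text(encoding="utf-8")
        match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
        if not match:
            continue  # no frontmatter, so it cannot reference a transcript
        field = re.search(r"^source_transcript:\s*(\S+)", match.group(1), re.MULTILINE)
        if field:
            referenced.add(Path(field.group(1)).name)
    return [t for t in transcripts if t.name not in referenced]
```

The command's "simpler fallback" (transcript count minus interview count) trades accuracy for speed; the frontmatter walk stays correct when one transcript yields several interviews, or an interview has no transcript.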
-**Reference:** [discovery-playbook.md](./references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. +**Reference:** [discovery-playbook.md](../research-plan/references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. ## Quick Start @@ -116,22 +116,18 @@ When a new interview contradicts an existing finding, do NOT silently update cou 3. When divergences reach 40/60 split or closer, flag for potential persona segmentation: `[Flag: Consider splitting this persona -- [finding] shows near-even split]` 4. Surface contradictions in the merge confirmation prompt so the user is aware before confirming -### Evidence Strength Thresholds +### Evidence Strength -| Strength | Criteria | -|----------|---------| -| Weak | Less than 33% of participants, or only 1 interview | -| Medium | 33-66% of participants | -| Strong | 67%+ of participants | +- **Weak**: Only 1 participant, or a small minority +- **Medium**: Roughly half of participants +- **Strong**: Most participants (clear majority) -### Hypothesis Status Transitions +### Hypothesis Status -| Status | Criteria | -|--------|---------| -| SUPPORTED | 75%+ of evidence supports | -| MIXED | 40-75% support | -| CHALLENGED | Less than 40% support | -| NEW | Emerged from this interview, no prior evidence | +- **SUPPORTED**: Most evidence supports +- **MIXED**: Evidence is split +- **CHALLENGED**: Most evidence contradicts +- **NEW**: Emerged from this interview, no prior evidence ## Output Template diff --git a/plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md b/plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md deleted file mode 100644 index ca626bc5..00000000 --- a/plugins/compound-engineering/skills/persona-builder/references/discovery-playbook.md +++ /dev/null @@ -1,414 +0,0 @@ -# Continuous Product Discovery Playbook -## A Best-Practices Guide for Product Managers & UX 
Researchers - -*Structuring Interviews, Extracting Insights from Transcripts, and Building a Sustainable Discovery System* - ---- - -## 1. Foundational Principles - -### 1.1 What Is Continuous Discovery? - -Continuous discovery is an approach where product teams maintain **at minimum, weekly touchpoints with customers**, conducting small research activities in pursuit of a desired product outcome (Teresa Torres, *Continuous Discovery Habits*). It replaces the outdated "big-bang research phase" with a persistent feedback loop that runs alongside delivery. - -**Core tenets:** -- **Outcome-focused, not output-focused.** Every discovery activity ties back to a measurable outcome (e.g., reduce churn, increase activation), not a feature request. -- **Weekly cadence.** Small, frequent interactions compound into deep user intuition over time. -- **Cross-functional ownership.** Discovery is co-owned by the **product trio** — a Product Manager, a Designer/UX Researcher, and an Engineer — who participate together in interviews and synthesis. -- **Lightweight and sustainable.** Discovery should not require elaborate study designs every week. Adapt methods to fit the time available. - -### 1.2 Why It Matters - -- **Healthier backlog:** Prioritization is grounded in real user evidence, not opinions or loudest-voice requests. -- **Lower cost of learning:** You discover problems with rough sketches and conversations, not after building and shipping. -- **Less reactive culture:** Teams spot opportunities before they become urgent escalations. -- **Compounding product judgment:** Persistent exposure to customers builds stronger intuition across the entire team. - ---- - -## 2. 
Setting Up a Discovery System - -### 2.1 Start With a Clear Outcome - -Before touching an interview guide, align your product trio on: -- **What behavior are we trying to change or improve?** -- **How does this tie into our OKRs / North Star metric?** -- **What do we need to learn to make a better decision?** - -Ground discovery in an outcome, not a feature. This prevents the trap of running interviews to validate a solution you've already committed to. - -### 2.2 Assemble the Product Trio - -| Role | Discovery Responsibility | -|---|---| -| **Product Manager** | Defines the outcome, prioritizes opportunities, owns the Opportunity Solution Tree | -| **UX Researcher / Designer** | Designs interview guides, moderates sessions, leads synthesis | -| **Engineer** | Assesses feasibility, participates in interviews (builds empathy for constraints and possibilities) | - -All three should attend interviews together whenever possible. Shared exposure eliminates the "telephone game" that happens when one person interviews and then reports findings to others. - -### 2.3 Automate Recruiting - -Recruiting is the #1 reason continuous interviewing fails. If scheduling is manual, the habit dies within weeks. Automate it so that interviews appear on your calendar every week without effort. 
- -**Proven recruiting channels:** - -| Channel | Best For | How It Works | -|---|---|---| -| **In-app intercepts** (e.g., Ethnio, Orbital) | Consumer & SaaS products | A pop-up screener appears inside the product; qualifying users schedule a call | -| **Customer support triggers** | Enterprise / B2B | Support agents flag specific scenarios and route users to the product team | -| **Insider connections** | Enterprise with named accounts | CSMs or account managers introduce product team to specific contacts | -| **Email campaigns** | Broad base | Targeted email to specific segments offering an incentive for 30 min of time | -| **Paid recruiting panels** | Hard-to-reach users | Services like UserInterviews, Respondent, or Prolific | - -**Key automation elements:** -- **Targeting:** Recruit the *right* users at the *right* time (e.g., users who completed onboarding 2+ weeks ago). -- **Screener questions:** Qualify in/out based on criteria relevant to your current outcome. -- **Automated reminders:** Email + SMS reminders reduce no-shows. -- **Self-scheduling:** Let participants pick from available calendar slots (Calendly, SavvyCal, etc.). - ---- - -## 3. Structuring Discovery Interviews - -### 3.1 Research Questions vs. Interview Questions - -A critical distinction (from Teresa Torres): -- **Research questions** = what you want to *learn* (e.g., "How often do users watch Netflix?"). -- **Interview questions** = what you actually *ask* (e.g., "Tell me about the last time you watched Netflix."). - -Research questions often make terrible interview questions. They encourage short, speculative, System 1 answers. Transform your research questions into **story-based prompts** that ground the participant in specific past behavior. - -### 3.2 The Mom Test (Rob Fitzpatrick) - -Three rules to ensure you get truthful, useful data — even from people who want to be polite: - -1. **Talk about their life instead of your idea.** Don't pitch; explore their reality. -2. 
**Ask about specifics in the past instead of generics or opinions about the future.** "Tell me about the last time…" beats "Would you ever…?" -3. **Talk less and listen more.** Your job is to extract signal, not to fill silence. - -**Deflect bad data:** -- **Compliments** ("That sounds really cool!") → Redirect: "Thanks — but tell me more about how you handle this today." -- **Fluff / generalities** ("I usually…" / "I always…") → Anchor: "When did that last happen? Walk me through it." -- **Hypothetical promises** ("I would definitely pay for that") → Dig: "What have you tried so far to solve this?" - -**Pre-interview discipline:** Before every conversation, write down the **three most important things you need to learn**. This keeps interviews focused and prevents aimless chatting. - -### 3.3 Story-Based Interviewing (Teresa Torres) - -The most reliable method for uncovering goals, context, and unmet needs. Instead of asking about general experiences, ask for **specific stories about past behavior**. - -**Why stories work:** -- They activate **System 2 thinking** (deliberate recall), producing more accurate answers than fast System 1 generalizations. -- They surface **context** — when, where, why, what device, what mood, who else was involved. -- They reveal **needs, pain points, and desires** (collectively: **opportunities**) that the participant may not even be consciously aware of. - -**The core prompt structure:** - -> "Tell me about the last time you [did the relevant activity]." -> "Tell me about a specific time when [relevant scenario]." 
- -**Interview flow:** - -| Phase | Duration | Purpose | Techniques | -|---|---|---|---| -| **Warm-up** | 2–3 min | Build rapport, set expectations | Easy personal questions; explain the purpose; reassure there are no right/wrong answers | -| **Story collection** | 15–20 min | Collect 1–2 specific stories about past behavior | "Tell me about the last time…"; follow up with "What happened next?"; gently redirect generalizations back to the specific instance | -| **Deepening** | 5–8 min | Explore pain points, workarounds, emotional context | "Tell me more about that"; "Why was that important?"; "How did you feel at that point?"; "What did you do next?" | -| **Wrap-up** | 2–3 min | Catch anything missed; close gracefully | "Is there anything else I should have asked?"; "Who else should I talk to?" | - -**Active listening techniques:** -- **Echoing:** Repeat the participant's last few words as a question to prompt elaboration. -- **Mirroring:** Match their body language and tone to build trust. -- **Comfortable silence:** Don't rush to fill pauses — participants often volunteer their best insights after a beat of silence. -- **Redirect generalizations:** When participants drift into "I usually…" or "I tend to…", gently guide back: "Can you think of a specific time that happened?" - -### 3.4 Question Bank: Good vs. Bad Questions - -| ❌ Avoid (Speculative / Leading / Closed) | ✅ Use Instead (Story-Based / Open-Ended) | -|---|---| -| "Would you use a feature that does X?" | "Tell me about the last time you tried to accomplish [goal]. What happened?" | -| "Do you like our product?" | "Walk me through the last time you used [product]. Start from the beginning." | -| "How often do you do X?" | "Tell me about the most recent time you did X. When was it? What was happening?" | -| "What's your biggest pain point?" | "Tell me about a time when [relevant task] was really frustrating. What happened?" | -| "What would your dream product do?" 
| "How are you solving this problem today? What have you tried?" | -| "Would you pay $X for Y?" | "Where does the money come from for tools like this? What's the buying process?" | -| "Do you think having the button on the left makes you less likely to click?" | "Walk me through what you did on this page. Was it easy to complete your task? Why or why not?" | - -### 3.5 Preparing the Discussion Guide - -A discussion guide is **flexible, not a rigid script**. It ensures you cover key topics while leaving room to follow interesting threads. - -**Structure:** -1. **Research goal** (1–2 sentences): What outcome are we learning about? -2. **Screening criteria:** Who qualifies for this interview? -3. **Warm-up questions** (2–3): Easy openers to build rapport. -4. **Story prompts** (2–3): Core story-based questions tied to your research goal. -5. **Follow-up / probing questions** (5–8): Nested under each story prompt — use as needed. -6. **Wrap-up questions** (1–2): "Anything else?" and referral questions. -7. **Debrief checklist:** Reminders for what to capture in your interview snapshot immediately after. - ---- - -## 4. Synthesizing Interviews: The Interview Snapshot - -### 4.1 Why Immediate Synthesis Matters - -> "Synthesize each interview immediately after it ends. Capture your thoughts while they're fresh, rather than assuming you'll revisit the recording or notes later." — Teresa Torres - -Memory degrades rapidly. Schedule **15 minutes immediately after every interview** for synthesis. The product trio should do this together — co-creation builds shared understanding. - -### 4.2 The Interview Snapshot (Teresa Torres) - -A **one-page summary** that makes each interview memorable, actionable, and reference-able. The product trio collaborates to complete it in 15–20 minutes post-interview. - -**Seven components:** - -| Component | What to Capture | -|---|---| -| **1. Name & Photo** | Identify and remember the participant | -| **2. 
Quick Facts** | Key context: role, segment, tenure, relevant demographics | -| **3. Memorable Quote** | A single quote that captures the essence of the story — helps trigger recall later | -| **4. Experience Map** | A simple visual timeline of the story they told (beginning → middle → end) with key moments marked | -| **5. Opportunities** | Unmet needs, pain points, and desires that surfaced during the story | -| **6. Insights** | Interesting learnings that aren't yet opportunities but may become relevant later | -| **7. Follow-up Items** | Open questions, things to verify, people to talk to next | - -**Templates available in:** Miro (Product Talk template), FigJam, Google Slides, PowerPoint, Keynote. - -**Key principle:** The snapshot is **synthesis, not transcription**. You are distilling meaning, not capturing every word. - ---- - -## 5. Extracting Insights from Transcripts - -### 5.1 Transcription First - -Before analysis, convert recordings to searchable text. This is the foundation for all downstream work. - -| Method | Best For | Considerations | -|---|---|---| -| **Automated transcription** (Otter.ai, Rev, Dovetail, Condens) | Speed; most use cases | Review critical sections manually — AI struggles with names, jargon, crosstalk | -| **Human transcription** | High-stakes research; heavy accents/jargon | More accurate but slower and more expensive | -| **Hybrid** | Enterprise research | Auto-transcribe first, then human-proofread key sections | - -**Essential metadata per session:** -- Session ID (stable, unique code) -- Date and type (interview, usability test, support call) -- Participant profile fields (role, segment, plan tier, region) -- Moderator/researcher and study name -- Consent/usage notes -- Links to recording and transcript files - -### 5.2 Highlighting: Capture Atomic Evidence - -Before tagging or theming, **highlight** the meaningful moments in each transcript. 
Each highlight should be an **atomic evidence unit** — a single observation, quote, or behavior that can stand alone. - -**What to highlight:** -- Direct quotes expressing needs, pain points, or desires -- Descriptions of behavior (what the participant actually did) -- Emotional reactions (frustration, surprise, delight) -- Workarounds and hacks (signals of unmet needs) -- Contradictions between stated preferences and actual behavior - -**Principle:** Highlight first, tag second, synthesize third. Don't jump to themes too early. - -### 5.3 Coding / Tagging - -Coding (or tagging) is the process of labeling highlights to enable pattern discovery across multiple interviews. - -**Two approaches:** - -| Approach | Description | When to Use | -|---|---|---| -| **Deductive (top-down)** | Define codes *before* reviewing data, based on research questions and hypotheses | When you have specific questions to answer; faster for time-constrained projects | -| **Inductive (bottom-up)** | Let codes emerge *from* the data as you review | When exploring new territory; prevents premature categorization | -| **Hybrid** | Start with a small set of deductive codes, then add inductive codes as new themes emerge | Most common in practice; balances speed and openness | - -**Practical tagging taxonomy:** - -| Tag Category | Examples | Purpose | -|---|---|---| -| **Descriptive** | Location, device, role, task, feature area | Organize by context | -| **Emotional** | Frustration, delight, confusion, surprise | Build empathy; identify emotional peaks | -| **Behavioral** | Workaround, abandonment, comparison shopping, habit | Surface actual behavior patterns | -| **Need/Pain Point** | Unmet need, pain point, desire, blocker | Feed directly into opportunities | -| **Evaluative** | Like, dislike, strong preference, indifference | Capture sentiment toward specific elements | - -**Best practices for a shared codebook:** -- Keep the tag set small (15–25 tags) and expand only when needed. 
-- Write a 1-sentence definition for each tag so teammates apply them consistently. -- Review and consolidate tags periodically — merge synonyms, retire unused tags. -- Use a shared tool (Dovetail, Condens, Notion, or even a spreadsheet) so the whole team sees the same taxonomy. - -### 5.4 Affinity Mapping & Thematic Analysis - -Once you've highlighted and tagged across multiple interviews, affinity mapping helps you see the patterns. - -**Step-by-step process:** - -1. **Gather all highlights** — Pull tagged quotes, observations, and notes from all interviews onto a shared surface (digital whiteboard, Miro, FigJam, or physical sticky notes). -2. **Group by similarity** — Move items that feel related near each other. Don't overthink categories yet — trust your intuition. -3. **Name the clusters** — Once groups form, give each a descriptive label that captures the theme (e.g., "Users distrust automated recommendations," "Onboarding feels overwhelming in week 1"). -4. **Look for hierarchy** — Some clusters may be sub-themes of larger themes. Nest them. -5. **Quantify (loosely)** — Note how many participants contributed to each theme and from which segments. This isn't statistical analysis — it's pattern recognition. -6. **Identify outliers** — Don't ignore insights that don't fit neatly. Outliers can signal emerging opportunities. -7. **Document** — Write a theme statement for each cluster, supported by 2–3 representative quotes with source references. - -**Watch out for bias:** -- **Confirmation bias:** Gravitating toward themes that confirm your hypotheses. -- **Recency bias:** Over-weighting the most recent interviews. -- **Loudness bias:** Giving more weight to articulate or emotionally expressive participants. - -Affinity mapping in a group (the product trio + stakeholders) helps counter individual bias through diverse perspectives. 
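The "quantify (loosely)" step of affinity mapping can be sketched as a short script over tagged highlights. This is a minimal sketch, not part of any repository tool's API: the flat `highlights` list (participant ID, segment, theme) is a hypothetical export shape, using anonymized IDs like `user-001`; adapt it to whatever your tool actually produces.

```python
# Loose quantification of affinity-mapped themes: how many participants
# touched each theme, and from which segments. Pattern recognition, not
# statistics -- the counts only flag where evidence is accumulating.
from collections import defaultdict

# Hypothetical flat export of tagged highlights (one atomic evidence unit each).
highlights = [
    {"participant": "user-001", "segment": "enterprise", "theme": "distrusts automated recommendations"},
    {"participant": "user-001", "segment": "enterprise", "theme": "onboarding overwhelming in week 1"},
    {"participant": "user-002", "segment": "self-serve", "theme": "distrusts automated recommendations"},
    {"participant": "user-003", "segment": "self-serve", "theme": "onboarding overwhelming in week 1"},
]

# Group by theme; sets de-duplicate repeat mentions by the same participant.
themes = defaultdict(lambda: {"participants": set(), "segments": set()})
for h in highlights:
    themes[h["theme"]]["participants"].add(h["participant"])
    themes[h["theme"]]["segments"].add(h["segment"])

for name, data in sorted(themes.items()):
    print(f"{name}: {len(data['participants'])} participants "
          f"({', '.join(sorted(data['segments']))})")
```

Keeping the counts per participant (not per highlight) avoids over-weighting talkative interviewees, which is the loudness bias noted above.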
- -### 5.5 Atomic Research Nuggets - -For teams running continuous discovery over months/years, the **atomic research** approach (developed by Tomer Sharon and Daniel Pidcock) prevents insights from getting buried in reports. - -**What is a nugget?** -A nugget is the smallest indivisible unit of research insight: -- **Observation:** A single finding or fact (e.g., "3 of 5 users abandoned the wizard at step 3") -- **Evidence:** The source data that supports it (quote, timestamp, video clip) -- **Tags:** Metadata for searchability (feature area, user segment, research study) - -**Why nuggets work:** -- They are **reusable** across projects — you don't re-run the same research because someone didn't read last year's report. -- They are **searchable** — stakeholders can self-serve insights from a research repository. -- They are **composable** — multiple nuggets combine into higher-level insights and themes. - -**Storage:** Use a research repository tool (Dovetail, Condens, EnjoyHQ, Notion) or a structured spreadsheet with consistent tagging. - ---- - -## 6. From Insights to Action: The Opportunity Solution Tree - -### 6.1 What Is an Opportunity Solution Tree (OST)? - -The Opportunity Solution Tree, popularized by Teresa Torres, is a visual framework that connects: - -``` -Outcome (metric) - └── Opportunities (needs, pain points, desires) - └── Solutions (ideas to address opportunities) - └── Experiments (tests to validate solutions) -``` - -It ensures every solution traces back to a real customer opportunity, which traces back to a measurable business outcome. - -### 6.2 How to Build an OST - -1. **Set the outcome** — Place your target metric at the top of the tree (e.g., "Increase weekly active users by 15%"). -2. **Create an experience map** — Have each member of the product trio draw what they believe the current customer experience looks like. Merge into a shared map. Gaps in the map guide your interviews. -3. 
**Map opportunities from interview snapshots** — Every 3–4 interviews, review your snapshots and pull out the opportunities (needs, pain points, desires). Place them on the tree under the relevant moment in the experience map. -4. **Structure the opportunity space** — Group and nest related opportunities. Parent opportunities are broad (e.g., "Users struggle to find relevant content"); child opportunities are more specific (e.g., "Search results don't account for past viewing history"). -5. **Select a target opportunity** — Compare and contrast opportunities. Choose one that is solvable, impactful, and aligned with your outcome. -6. **Generate solutions** — Brainstorm multiple solutions for the target opportunity (divergent thinking). Don't commit to the first idea. -7. **Design experiments** — For each promising solution, identify the riskiest assumption and design a small test to validate or invalidate it. -8. **Iterate** — As you learn, revise the tree. New interviews add new opportunities. Failed experiments redirect you to alternative solutions. - -### 6.3 Common Pitfalls - -- **Framing opportunities as solutions.** "We need a better search bar" is a solution. The opportunity is "Users can't find content relevant to their interests." Practice separating the two. -- **Overreacting to the latest interview.** The tree prevents this by providing a big-picture view. One interview = one data point. Update the tree after every 3–4 interviews, not after every single one. -- **Skipping opportunity mapping.** Teams that jump from interview to solution miss the chance to compare opportunities strategically. -- **Setting the wrong outcome.** If your outcome isn't connected to business strategy, the whole tree drifts. Re-validate your outcome quarterly. - ---- - -## 7. 
Tools for Continuous Discovery - -| Category | Tools | Purpose | -|---|---|---| -| **Recruiting & Scheduling** | Ethnio, Orbital, Great Question, Calendly, UserInterviews | Automate participant recruitment and scheduling | -| **Video & Transcription** | Zoom, Grain, Otter.ai, Rev, Descript | Record interviews and generate transcripts | -| **Research Repository & Analysis** | Dovetail, Condens, EnjoyHQ, Notion, Airtable | Store, tag, search, and synthesize research data | -| **Synthesis & Mapping** | Miro, FigJam, MURAL | Interview snapshots, affinity maps, experience maps, OSTs | -| **Opportunity Solution Trees** | Miro (Product Talk templates), Vistaly, ProductBoard | Visualize and manage the opportunity space | -| **AI-Assisted Analysis** | Dovetail AI, Condens AI, ChatGPT, Claude | Auto-transcription, auto-tagging, summarization (always human-validate) | - -**A note on AI tools:** AI can speed up transcription, suggest tags, and draft theme summaries. However, **do not rely on AI exclusively for synthesis** (per Teresa Torres). The act of personally reviewing conversations and identifying patterns is where deep understanding forms. Use AI to surface things you might overlook, not to replace your thinking. - ---- - -## 8. Building the Habit: Making Discovery Stick - -### 8.1 Weekly Cadence Template - -| Day | Activity | Time | -|---|---|---| -| **Monday** | Review upcoming interview schedule (auto-populated) | 5 min | -| **Tuesday** | Conduct interview #1; complete interview snapshot | 45–60 min | -| **Thursday** | Conduct interview #2; complete interview snapshot | 45–60 min | -| **Friday** | Cross-interview synthesis: update OST, review patterns | 30–45 min | - -This is approximately **2–3 hours per week** — roughly 5–7% of a trio's working hours. - -### 8.2 Protect Discovery Time - -- **Treat discovery like sprint planning** — it's not optional; it's on the calendar. -- **Batch interviews** — Don't spread them across random slots. 
Dedicated blocks reduce context-switching. -- **Rotate moderation** — Each trio member should take turns leading interviews to build shared capability. -- **Share snapshots visibly** — Post them in a team channel (Slack, Teams) or a shared Miro board so stakeholders stay informed without attending every session. - -### 8.3 Scaling Across Teams - -- **Create a shared codebook** — Standard tags and definitions across teams enable cross-team insight discovery. -- **Maintain a centralized research repository** — All snapshots, nuggets, and themes live in one searchable place. -- **Run periodic "insight jams"** — Monthly sessions where multiple trios review each other's OSTs and cross-pollinate opportunities. -- **Train PMs and designers on story-based interviewing** — The skill gap is the bottleneck, not the process. - ---- - -## 9. Quick-Reference Checklists - -### Pre-Interview Checklist -- [ ] Outcome defined and agreed upon by the product trio -- [ ] Discussion guide prepared (2–3 story prompts, follow-up questions) -- [ ] Participant recruited and confirmed (screener passed) -- [ ] Recording tool set up and tested -- [ ] Trio roles assigned (moderator, note-taker, observer) -- [ ] Three most important learning goals written down (The Mom Test) - -### During-Interview Checklist -- [ ] Warm-up complete; participant is comfortable -- [ ] Collecting specific stories about past behavior (not opinions about the future) -- [ ] Redirecting generalizations back to specifics -- [ ] Using active listening (echoing, silence, "tell me more") -- [ ] Not pitching solutions or leading the witness -- [ ] Capturing timestamps of key moments for later reference - -### Post-Interview Checklist -- [ ] Interview snapshot completed within 15–20 minutes -- [ ] Opportunities and insights documented -- [ ] Experience map drawn for the story collected -- [ ] Snapshot shared with the team -- [ ] Follow-up items logged -- [ ] Opportunities added to the Opportunity Solution Tree (after every 
3–4 interviews) - -### Transcript Analysis Checklist -- [ ] Transcript reviewed and cleaned (names, jargon corrected) -- [ ] Key moments highlighted as atomic evidence units -- [ ] Highlights tagged using shared codebook -- [ ] Themes identified through affinity mapping -- [ ] Themes documented with supporting quotes and source references -- [ ] Findings connected to existing opportunities on the OST -- [ ] Insights stored in research repository for future reference - ---- - -## 10. Recommended Reading & Sources - -| Resource | Author | Key Contribution | -|---|---|---| -| *Continuous Discovery Habits* | Teresa Torres | The definitive framework for weekly customer touchpoints, interview snapshots, and Opportunity Solution Trees | -| *The Mom Test* | Rob Fitzpatrick | Rules for asking questions that produce truthful, useful answers | -| Product Talk Blog (producttalk.org) | Teresa Torres | Story-based interviewing, opportunity mapping, and OST deep dives | -| NN/g User Interviews 101 | Nielsen Norman Group | Foundational interviewing methodology for UX researchers | -| *Thinking, Fast and Slow* | Daniel Kahneman | Understanding System 1 vs. System 2 thinking and why story-based questions produce better data | -| Atomic Research | Tomer Sharon & Daniel Pidcock | Breaking research into reusable, searchable nuggets | -| Dovetail/Condens Workflows | Various | Practical transcript-to-theme synthesis workflows | - ---- - -*This playbook is a living document. Update it as your team's discovery practice matures. 
The goal is not perfection — it's a sustainable habit of learning from your customers every single week.* diff --git a/plugins/compound-engineering/skills/transcript-insights/SKILL.md b/plugins/compound-engineering/skills/transcript-insights/SKILL.md index a187861c..a3c643b2 100644 --- a/plugins/compound-engineering/skills/transcript-insights/SKILL.md +++ b/plugins/compound-engineering/skills/transcript-insights/SKILL.md @@ -9,7 +9,7 @@ description: "Process interview transcripts into structured snapshots with tagge Process raw interview transcripts into structured interview snapshots following Teresa Torres' one-page interview snapshot format. Extract atomic insights, map experience timelines, identify opportunities in Opportunity Solution Tree language, and track hypothesis status. -**Reference:** [discovery-playbook.md](./references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. +**Reference:** [discovery-playbook.md](../research-plan/references/discovery-playbook.md) -- Continuous Product Discovery Playbook with detailed methodology. ## Quick Start @@ -282,4 +282,11 @@ Extracted opportunity: ## Privacy Note -Interview snapshots use anonymized participant IDs (user-001, user-002). Do not include real names, email addresses, or other PII in the snapshot. If the source transcript contains PII, strip it during processing. Consider adding `docs/research/transcripts/` to `.gitignore`. +Interview snapshots use anonymized participant IDs (user-001, user-002). Do not include real names, email addresses, or other identifying information in the snapshot output. 
When processing transcripts: + +- **Replace all real names** with anonymized IDs (e.g., "user-001") in quotes and context +- **Replace company names** with generic descriptors (e.g., "a regional health plan") unless the company is public knowledge and relevant to the insight +- **Strip identifying details** from the `source_transcript` frontmatter field -- use a descriptive slug, not the original filename if it contains names +- **Quotes must be exact** from the transcript, but with PII replaced inline (e.g., `"[user-001] said the export was broken"`) + +Transcripts in `docs/research/transcripts/` contain raw interview data with PII and MUST NOT be committed to public repositories. The `.gitignore` includes `docs/research/transcripts/*.md` by default. diff --git a/plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md b/plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md deleted file mode 100644 index ca626bc5..00000000 --- a/plugins/compound-engineering/skills/transcript-insights/references/discovery-playbook.md +++ /dev/null @@ -1,414 +0,0 @@ -# Continuous Product Discovery Playbook -## A Best-Practices Guide for Product Managers & UX Researchers - -*Structuring Interviews, Extracting Insights from Transcripts, and Building a Sustainable Discovery System* - ---- - -## 1. Foundational Principles - -### 1.1 What Is Continuous Discovery? - -Continuous discovery is an approach where product teams maintain **at minimum, weekly touchpoints with customers**, conducting small research activities in pursuit of a desired product outcome (Teresa Torres, *Continuous Discovery Habits*). It replaces the outdated "big-bang research phase" with a persistent feedback loop that runs alongside delivery. - -**Core tenets:** -- **Outcome-focused, not output-focused.** Every discovery activity ties back to a measurable outcome (e.g., reduce churn, increase activation), not a feature request. 
-- **Weekly cadence.** Small, frequent interactions compound into deep user intuition over time. -- **Cross-functional ownership.** Discovery is co-owned by the **product trio** — a Product Manager, a Designer/UX Researcher, and an Engineer — who participate together in interviews and synthesis. -- **Lightweight and sustainable.** Discovery should not require elaborate study designs every week. Adapt methods to fit the time available. - -### 1.2 Why It Matters - -- **Healthier backlog:** Prioritization is grounded in real user evidence, not opinions or loudest-voice requests. -- **Lower cost of learning:** You discover problems with rough sketches and conversations, not after building and shipping. -- **Less reactive culture:** Teams spot opportunities before they become urgent escalations. -- **Compounding product judgment:** Persistent exposure to customers builds stronger intuition across the entire team. - ---- - -## 2. Setting Up a Discovery System - -### 2.1 Start With a Clear Outcome - -Before touching an interview guide, align your product trio on: -- **What behavior are we trying to change or improve?** -- **How does this tie into our OKRs / North Star metric?** -- **What do we need to learn to make a better decision?** - -Ground discovery in an outcome, not a feature. This prevents the trap of running interviews to validate a solution you've already committed to. - -### 2.2 Assemble the Product Trio - -| Role | Discovery Responsibility | -|---|---| -| **Product Manager** | Defines the outcome, prioritizes opportunities, owns the Opportunity Solution Tree | -| **UX Researcher / Designer** | Designs interview guides, moderates sessions, leads synthesis | -| **Engineer** | Assesses feasibility, participates in interviews (builds empathy for constraints and possibilities) | - -All three should attend interviews together whenever possible. 
Shared exposure eliminates the "telephone game" that happens when one person interviews and then reports findings to others. - -### 2.3 Automate Recruiting - -Recruiting is the #1 reason continuous interviewing fails. If scheduling is manual, the habit dies within weeks. Automate it so that interviews appear on your calendar every week without effort. - -**Proven recruiting channels:** - -| Channel | Best For | How It Works | -|---|---|---| -| **In-app intercepts** (e.g., Ethnio, Orbital) | Consumer & SaaS products | A pop-up screener appears inside the product; qualifying users schedule a call | -| **Customer support triggers** | Enterprise / B2B | Support agents flag specific scenarios and route users to the product team | -| **Insider connections** | Enterprise with named accounts | CSMs or account managers introduce product team to specific contacts | -| **Email campaigns** | Broad base | Targeted email to specific segments offering an incentive for 30 min of time | -| **Paid recruiting panels** | Hard-to-reach users | Services like UserInterviews, Respondent, or Prolific | - -**Key automation elements:** -- **Targeting:** Recruit the *right* users at the *right* time (e.g., users who completed onboarding 2+ weeks ago). -- **Screener questions:** Qualify in/out based on criteria relevant to your current outcome. -- **Automated reminders:** Email + SMS reminders reduce no-shows. -- **Self-scheduling:** Let participants pick from available calendar slots (Calendly, SavvyCal, etc.). - ---- - -## 3. Structuring Discovery Interviews - -### 3.1 Research Questions vs. Interview Questions - -A critical distinction (from Teresa Torres): -- **Research questions** = what you want to *learn* (e.g., "How often do users watch Netflix?"). -- **Interview questions** = what you actually *ask* (e.g., "Tell me about the last time you watched Netflix."). - -Research questions often make terrible interview questions. They encourage short, speculative, System 1 answers. 
Transform your research questions into **story-based prompts** that ground the participant in specific past behavior. - -### 3.2 The Mom Test (Rob Fitzpatrick) - -Three rules to ensure you get truthful, useful data — even from people who want to be polite: - -1. **Talk about their life instead of your idea.** Don't pitch; explore their reality. -2. **Ask about specifics in the past instead of generics or opinions about the future.** "Tell me about the last time…" beats "Would you ever…?" -3. **Talk less and listen more.** Your job is to extract signal, not to fill silence. - -**Deflect bad data:** -- **Compliments** ("That sounds really cool!") → Redirect: "Thanks — but tell me more about how you handle this today." -- **Fluff / generalities** ("I usually…" / "I always…") → Anchor: "When did that last happen? Walk me through it." -- **Hypothetical promises** ("I would definitely pay for that") → Dig: "What have you tried so far to solve this?" - -**Pre-interview discipline:** Before every conversation, write down the **three most important things you need to learn**. This keeps interviews focused and prevents aimless chatting. - -### 3.3 Story-Based Interviewing (Teresa Torres) - -The most reliable method for uncovering goals, context, and unmet needs. Instead of asking about general experiences, ask for **specific stories about past behavior**. - -**Why stories work:** -- They activate **System 2 thinking** (deliberate recall), producing more accurate answers than fast System 1 generalizations. -- They surface **context** — when, where, why, what device, what mood, who else was involved. -- They reveal **needs, pain points, and desires** (collectively: **opportunities**) that the participant may not even be consciously aware of. - -**The core prompt structure:** - -> "Tell me about the last time you [did the relevant activity]." -> "Tell me about a specific time when [relevant scenario]." 
- -**Interview flow:** - -| Phase | Duration | Purpose | Techniques | -|---|---|---|---| -| **Warm-up** | 2–3 min | Build rapport, set expectations | Easy personal questions; explain the purpose; reassure there are no right/wrong answers | -| **Story collection** | 15–20 min | Collect 1–2 specific stories about past behavior | "Tell me about the last time…"; follow up with "What happened next?"; gently redirect generalizations back to the specific instance | -| **Deepening** | 5–8 min | Explore pain points, workarounds, emotional context | "Tell me more about that"; "Why was that important?"; "How did you feel at that point?"; "What did you do next?" | -| **Wrap-up** | 2–3 min | Catch anything missed; close gracefully | "Is there anything else I should have asked?"; "Who else should I talk to?" | - -**Active listening techniques:** -- **Echoing:** Repeat the participant's last few words as a question to prompt elaboration. -- **Mirroring:** Match their body language and tone to build trust. -- **Comfortable silence:** Don't rush to fill pauses — participants often volunteer their best insights after a beat of silence. -- **Redirect generalizations:** When participants drift into "I usually…" or "I tend to…", gently guide back: "Can you think of a specific time that happened?" - -### 3.4 Question Bank: Good vs. Bad Questions - -| ❌ Avoid (Speculative / Leading / Closed) | ✅ Use Instead (Story-Based / Open-Ended) | -|---|---| -| "Would you use a feature that does X?" | "Tell me about the last time you tried to accomplish [goal]. What happened?" | -| "Do you like our product?" | "Walk me through the last time you used [product]. Start from the beginning." | -| "How often do you do X?" | "Tell me about the most recent time you did X. When was it? What was happening?" | -| "What's your biggest pain point?" | "Tell me about a time when [relevant task] was really frustrating. What happened?" | -| "What would your dream product do?" 
| "How are you solving this problem today? What have you tried?" | -| "Would you pay $X for Y?" | "Where does the money come from for tools like this? What's the buying process?" | -| "Do you think having the button on the left makes you less likely to click?" | "Walk me through what you did on this page. Was it easy to complete your task? Why or why not?" | - -### 3.5 Preparing the Discussion Guide - -A discussion guide is **flexible, not a rigid script**. It ensures you cover key topics while leaving room to follow interesting threads. - -**Structure:** -1. **Research goal** (1–2 sentences): What outcome are we learning about? -2. **Screening criteria:** Who qualifies for this interview? -3. **Warm-up questions** (2–3): Easy openers to build rapport. -4. **Story prompts** (2–3): Core story-based questions tied to your research goal. -5. **Follow-up / probing questions** (5–8): Nested under each story prompt — use as needed. -6. **Wrap-up questions** (1–2): "Anything else?" and referral questions. -7. **Debrief checklist:** Reminders for what to capture in your interview snapshot immediately after. - ---- - -## 4. Synthesizing Interviews: The Interview Snapshot - -### 4.1 Why Immediate Synthesis Matters - -> "Synthesize each interview immediately after it ends. Capture your thoughts while they're fresh, rather than assuming you'll revisit the recording or notes later." — Teresa Torres - -Memory degrades rapidly. Schedule **15 minutes immediately after every interview** for synthesis. The product trio should do this together — co-creation builds shared understanding. - -### 4.2 The Interview Snapshot (Teresa Torres) - -A **one-page summary** that makes each interview memorable, actionable, and reference-able. The product trio collaborates to complete it in 15–20 minutes post-interview. - -**Seven components:** - -| Component | What to Capture | -|---|---| -| **1. Name & Photo** | Identify and remember the participant | -| **2. 
Quick Facts** | Key context: role, segment, tenure, relevant demographics | -| **3. Memorable Quote** | A single quote that captures the essence of the story — helps trigger recall later | -| **4. Experience Map** | A simple visual timeline of the story they told (beginning → middle → end) with key moments marked | -| **5. Opportunities** | Unmet needs, pain points, and desires that surfaced during the story | -| **6. Insights** | Interesting learnings that aren't yet opportunities but may become relevant later | -| **7. Follow-up Items** | Open questions, things to verify, people to talk to next | - -**Templates available in:** Miro (Product Talk template), FigJam, Google Slides, PowerPoint, Keynote. - -**Key principle:** The snapshot is **synthesis, not transcription**. You are distilling meaning, not capturing every word. - ---- - -## 5. Extracting Insights from Transcripts - -### 5.1 Transcription First - -Before analysis, convert recordings to searchable text. This is the foundation for all downstream work. - -| Method | Best For | Considerations | -|---|---|---| -| **Automated transcription** (Otter.ai, Rev, Dovetail, Condens) | Speed; most use cases | Review critical sections manually — AI struggles with names, jargon, crosstalk | -| **Human transcription** | High-stakes research; heavy accents/jargon | More accurate but slower and more expensive | -| **Hybrid** | Enterprise research | Auto-transcribe first, then human-proofread key sections | - -**Essential metadata per session:** -- Session ID (stable, unique code) -- Date and type (interview, usability test, support call) -- Participant profile fields (role, segment, plan tier, region) -- Moderator/researcher and study name -- Consent/usage notes -- Links to recording and transcript files - -### 5.2 Highlighting: Capture Atomic Evidence - -Before tagging or theming, **highlight** the meaningful moments in each transcript. 
Each highlight should be an **atomic evidence unit** — a single observation, quote, or behavior that can stand alone. - -**What to highlight:** -- Direct quotes expressing needs, pain points, or desires -- Descriptions of behavior (what the participant actually did) -- Emotional reactions (frustration, surprise, delight) -- Workarounds and hacks (signals of unmet needs) -- Contradictions between stated preferences and actual behavior - -**Principle:** Highlight first, tag second, synthesize third. Don't jump to themes too early. - -### 5.3 Coding / Tagging - -Coding (or tagging) is the process of labeling highlights to enable pattern discovery across multiple interviews. - -**Two approaches:** - -| Approach | Description | When to Use | -|---|---|---| -| **Deductive (top-down)** | Define codes *before* reviewing data, based on research questions and hypotheses | When you have specific questions to answer; faster for time-constrained projects | -| **Inductive (bottom-up)** | Let codes emerge *from* the data as you review | When exploring new territory; prevents premature categorization | -| **Hybrid** | Start with a small set of deductive codes, then add inductive codes as new themes emerge | Most common in practice; balances speed and openness | - -**Practical tagging taxonomy:** - -| Tag Category | Examples | Purpose | -|---|---|---| -| **Descriptive** | Location, device, role, task, feature area | Organize by context | -| **Emotional** | Frustration, delight, confusion, surprise | Build empathy; identify emotional peaks | -| **Behavioral** | Workaround, abandonment, comparison shopping, habit | Surface actual behavior patterns | -| **Need/Pain Point** | Unmet need, pain point, desire, blocker | Feed directly into opportunities | -| **Evaluative** | Like, dislike, strong preference, indifference | Capture sentiment toward specific elements | - -**Best practices for a shared codebook:** -- Keep the tag set small (15–25 tags) and expand only when needed. 
-- Write a 1-sentence definition for each tag so teammates apply them consistently. -- Review and consolidate tags periodically — merge synonyms, retire unused tags. -- Use a shared tool (Dovetail, Condens, Notion, or even a spreadsheet) so the whole team sees the same taxonomy. - -### 5.4 Affinity Mapping & Thematic Analysis - -Once you've highlighted and tagged across multiple interviews, affinity mapping helps you see the patterns. - -**Step-by-step process:** - -1. **Gather all highlights** — Pull tagged quotes, observations, and notes from all interviews onto a shared surface (digital whiteboard, Miro, FigJam, or physical sticky notes). -2. **Group by similarity** — Move items that feel related near each other. Don't overthink categories yet — trust your intuition. -3. **Name the clusters** — Once groups form, give each a descriptive label that captures the theme (e.g., "Users distrust automated recommendations," "Onboarding feels overwhelming in week 1"). -4. **Look for hierarchy** — Some clusters may be sub-themes of larger themes. Nest them. -5. **Quantify (loosely)** — Note how many participants contributed to each theme and from which segments. This isn't statistical analysis — it's pattern recognition. -6. **Identify outliers** — Don't ignore insights that don't fit neatly. Outliers can signal emerging opportunities. -7. **Document** — Write a theme statement for each cluster, supported by 2–3 representative quotes with source references. - -**Watch out for bias:** -- **Confirmation bias:** Gravitating toward themes that confirm your hypotheses. -- **Recency bias:** Over-weighting the most recent interviews. -- **Loudness bias:** Giving more weight to articulate or emotionally expressive participants. - -Affinity mapping in a group (the product trio + stakeholders) helps counter individual bias through diverse perspectives. 
- -### 5.5 Atomic Research Nuggets - -For teams running continuous discovery over months/years, the **atomic research** approach (developed by Tomer Sharon and Daniel Pidcock) prevents insights from getting buried in reports. - -**What is a nugget?** -A nugget is the smallest indivisible unit of research insight: -- **Observation:** A single finding or fact (e.g., "3 of 5 users abandoned the wizard at step 3") -- **Evidence:** The source data that supports it (quote, timestamp, video clip) -- **Tags:** Metadata for searchability (feature area, user segment, research study) - -**Why nuggets work:** -- They are **reusable** across projects — you don't re-run the same research because someone didn't read last year's report. -- They are **searchable** — stakeholders can self-serve insights from a research repository. -- They are **composable** — multiple nuggets combine into higher-level insights and themes. - -**Storage:** Use a research repository tool (Dovetail, Condens, EnjoyHQ, Notion) or a structured spreadsheet with consistent tagging. - ---- - -## 6. From Insights to Action: The Opportunity Solution Tree - -### 6.1 What Is an Opportunity Solution Tree (OST)? - -The Opportunity Solution Tree, popularized by Teresa Torres, is a visual framework that connects: - -``` -Outcome (metric) - └── Opportunities (needs, pain points, desires) - └── Solutions (ideas to address opportunities) - └── Experiments (tests to validate solutions) -``` - -It ensures every solution traces back to a real customer opportunity, which traces back to a measurable business outcome. - -### 6.2 How to Build an OST - -1. **Set the outcome** — Place your target metric at the top of the tree (e.g., "Increase weekly active users by 15%"). -2. **Create an experience map** — Have each member of the product trio draw what they believe the current customer experience looks like. Merge into a shared map. Gaps in the map guide your interviews. -3. 
**Map opportunities from interview snapshots** — Every 3–4 interviews, review your snapshots and pull out the opportunities (needs, pain points, desires). Place them on the tree under the relevant moment in the experience map. -4. **Structure the opportunity space** — Group and nest related opportunities. Parent opportunities are broad (e.g., "Users struggle to find relevant content"); child opportunities are more specific (e.g., "Search results don't account for past viewing history"). -5. **Select a target opportunity** — Compare and contrast opportunities. Choose one that is solvable, impactful, and aligned with your outcome. -6. **Generate solutions** — Brainstorm multiple solutions for the target opportunity (divergent thinking). Don't commit to the first idea. -7. **Design experiments** — For each promising solution, identify the riskiest assumption and design a small test to validate or invalidate it. -8. **Iterate** — As you learn, revise the tree. New interviews add new opportunities. Failed experiments redirect you to alternative solutions. - -### 6.3 Common Pitfalls - -- **Framing opportunities as solutions.** "We need a better search bar" is a solution. The opportunity is "Users can't find content relevant to their interests." Practice separating the two. -- **Overreacting to the latest interview.** The tree prevents this by providing a big-picture view. One interview = one data point. Update the tree after every 3–4 interviews, not after every single one. -- **Skipping opportunity mapping.** Teams that jump from interview to solution miss the chance to compare opportunities strategically. -- **Setting the wrong outcome.** If your outcome isn't connected to business strategy, the whole tree drifts. Re-validate your outcome quarterly. - ---- - -## 7. 
Tools for Continuous Discovery - -| Category | Tools | Purpose | -|---|---|---| -| **Recruiting & Scheduling** | Ethnio, Orbital, Great Question, Calendly, UserInterviews | Automate participant recruitment and scheduling | -| **Video & Transcription** | Zoom, Grain, Otter.ai, Rev, Descript | Record interviews and generate transcripts | -| **Research Repository & Analysis** | Dovetail, Condens, EnjoyHQ, Notion, Airtable | Store, tag, search, and synthesize research data | -| **Synthesis & Mapping** | Miro, FigJam, MURAL | Interview snapshots, affinity maps, experience maps, OSTs | -| **Opportunity Solution Trees** | Miro (Product Talk templates), Vistaly, ProductBoard | Visualize and manage the opportunity space | -| **AI-Assisted Analysis** | Dovetail AI, Condens AI, ChatGPT, Claude | Auto-transcription, auto-tagging, summarization (always human-validate) | - -**A note on AI tools:** AI can speed up transcription, suggest tags, and draft theme summaries. However, **do not rely on AI exclusively for synthesis** (per Teresa Torres). The act of personally reviewing conversations and identifying patterns is where deep understanding forms. Use AI to surface things you might overlook, not to replace your thinking. - ---- - -## 8. Building the Habit: Making Discovery Stick - -### 8.1 Weekly Cadence Template - -| Day | Activity | Time | -|---|---|---| -| **Monday** | Review upcoming interview schedule (auto-populated) | 5 min | -| **Tuesday** | Conduct interview #1; complete interview snapshot | 45–60 min | -| **Thursday** | Conduct interview #2; complete interview snapshot | 45–60 min | -| **Friday** | Cross-interview synthesis: update OST, review patterns | 30–45 min | - -This is approximately **2–3 hours per week** — roughly 5–7% of a trio's working hours. - -### 8.2 Protect Discovery Time - -- **Treat discovery like sprint planning** — it's not optional; it's on the calendar. -- **Batch interviews** — Don't spread them across random slots.
Dedicated blocks reduce context-switching. -- **Rotate moderation** — Each trio member should take turns leading interviews to build shared capability. -- **Share snapshots visibly** — Post them in a team channel (Slack, Teams) or a shared Miro board so stakeholders stay informed without attending every session. - -### 8.3 Scaling Across Teams - -- **Create a shared codebook** — Standard tags and definitions across teams enable cross-team insight discovery. -- **Maintain a centralized research repository** — All snapshots, nuggets, and themes live in one searchable place. -- **Run periodic "insight jams"** — Monthly sessions where multiple trios review each other's OSTs and cross-pollinate opportunities. -- **Train PMs and designers on story-based interviewing** — The skill gap is the bottleneck, not the process. - ---- - -## 9. Quick-Reference Checklists - -### Pre-Interview Checklist -- [ ] Outcome defined and agreed upon by the product trio -- [ ] Discussion guide prepared (2–3 story prompts, follow-up questions) -- [ ] Participant recruited and confirmed (screener passed) -- [ ] Recording tool set up and tested -- [ ] Trio roles assigned (moderator, note-taker, observer) -- [ ] Three most important learning goals written down (The Mom Test) - -### During-Interview Checklist -- [ ] Warm-up complete; participant is comfortable -- [ ] Collecting specific stories about past behavior (not opinions about the future) -- [ ] Redirecting generalizations back to specifics -- [ ] Using active listening (echoing, silence, "tell me more") -- [ ] Not pitching solutions or leading the witness -- [ ] Capturing timestamps of key moments for later reference - -### Post-Interview Checklist -- [ ] Interview snapshot completed within 15–20 minutes -- [ ] Opportunities and insights documented -- [ ] Experience map drawn for the story collected -- [ ] Snapshot shared with the team -- [ ] Follow-up items logged -- [ ] Opportunities added to the Opportunity Solution Tree (after every 
3–4 interviews) - -### Transcript Analysis Checklist -- [ ] Transcript reviewed and cleaned (names, jargon corrected) -- [ ] Key moments highlighted as atomic evidence units -- [ ] Highlights tagged using shared codebook -- [ ] Themes identified through affinity mapping -- [ ] Themes documented with supporting quotes and source references -- [ ] Findings connected to existing opportunities on the OST -- [ ] Insights stored in research repository for future reference - ---- - -## 10. Recommended Reading & Sources - -| Resource | Author | Key Contribution | -|---|---|---| -| *Continuous Discovery Habits* | Teresa Torres | The definitive framework for weekly customer touchpoints, interview snapshots, and Opportunity Solution Trees | -| *The Mom Test* | Rob Fitzpatrick | Rules for asking questions that produce truthful, useful answers | -| Product Talk Blog (producttalk.org) | Teresa Torres | Story-based interviewing, opportunity mapping, and OST deep dives | -| NN/g User Interviews 101 | Nielsen Norman Group | Foundational interviewing methodology for UX researchers | -| *Thinking, Fast and Slow* | Daniel Kahneman | Understanding System 1 vs. System 2 thinking and why story-based questions produce better data | -| Atomic Research | Tomer Sharon & Daniel Pidcock | Breaking research into reusable, searchable nuggets | -| Dovetail/Condens Workflows | Various | Practical transcript-to-theme synthesis workflows | - ---- - -*This playbook is a living document. Update it as your team's discovery practice matures. The goal is not perfection — it's a sustainable habit of learning from your customers every single week.* From ab115781e75a20f69a9e507ddb68f8b9fa01d229 Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Fri, 13 Feb 2026 14:47:15 -0500 Subject: [PATCH 07/13] docs: compound learnings from user research workflow integration MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Document 4 patterns learned during review: 1. 
"Do no harm" integration — producer handles absence with message, consumer handles absence with silence 2. PII-safe research artifacts — gitignore before create, MUST not consider 3. Reference file deduplication — one canonical copy, relative paths 4. Phase recommendation priority — actionable data over missing prerequisites Co-Authored-By: Claude Opus 4.6 --- ...rkflow-phases-with-graceful-degradation.md | 178 ++++++++++++++++++ 1 file changed, 178 insertions(+) create mode 100644 docs/solutions/integration-issues/adding-optional-workflow-phases-with-graceful-degradation.md diff --git a/docs/solutions/integration-issues/adding-optional-workflow-phases-with-graceful-degradation.md b/docs/solutions/integration-issues/adding-optional-workflow-phases-with-graceful-degradation.md new file mode 100644 index 00000000..376a2482 --- /dev/null +++ b/docs/solutions/integration-issues/adding-optional-workflow-phases-with-graceful-degradation.md @@ -0,0 +1,178 @@ +--- +title: "Adding optional workflow phases with graceful degradation" +date: 2026-02-13 +category: integration-issues +tags: + - workflow-orchestration + - privacy-by-design + - feature-integration + - graceful-degradation + - deduplication + - plugin-development +severity: medium +component: workflows +solution_type: pattern +--- + +# Adding Optional Workflow Phases with Graceful Degradation + +## Problem + +When adding a new optional workflow phase (Research) that feeds into existing phases (Brainstorm, Plan), four integration problems surfaced during review: + +1. **PII in sample data** — real names, company names, and confidential discussions were committed to a public-facing repo as sample research artifacts +2. **Reference file duplication** — a 414-line reference file was copied identically into 3 skill directories (~30% of the PR's line count) +3. 
**Noisy degradation** — the integration into brainstorm/plan workflows could mention missing research data to ALL users, not just those who opted into research +4. **Weak privacy language** — skills said "Consider adding transcripts to .gitignore" instead of "MUST NOT commit" + +## Solution 1: "Do No Harm" Integration Pattern + +When adding an optional agent to existing workflows, two layers of graceful degradation are needed: + +**Layer 1 — The agent handles empty data:** +```markdown +## Step 8: Handle Empty Research Directory + +If `docs/research/` does not exist or contains no files, return: +"No user research data found." +``` + +**Layer 2 — The calling workflow skips silently:** +```markdown +If `user-research-analyst` returns relevant findings (personas, insights, +opportunities), briefly summarize them before starting the collaborative +dialogue. If no research data exists, skip the summary silently and proceed +directly to the collaborative dialogue — do not mention the absence of +research or suggest running `/workflows:research`. +``` + +The agent that produces optional data handles the "no data" case with a message. The workflows that consume optional data handle it with **silence**. This prevents cascading "you should do research" messages. + +**Implementation pattern:** +- Run the new agent in **parallel** with existing agents (no serial bottleneck) +- Use conditional language in the consuming workflow: "If findings were returned... If not, skip silently" +- Explicitly instruct "do not mention the absence" to prevent well-meaning suggestions + +## Solution 2: PII-Safe Research Artifacts + +Raw interview transcripts contain PII and must never reach version control. 
+ +**File structure:** +``` +docs/research/ +├── plans/ # Committed — research plans with hypotheses +├── transcripts/ # GITIGNORED — raw interview data with PII +├── interviews/ # Committed — anonymized snapshots (user-001, user-002) +└── personas/ # Committed — synthesized persona documents +``` + +**Key rules:** +- `.gitignore` must include `docs/research/transcripts/*.md` BEFORE any transcripts are created +- Skills must give explicit PII stripping instructions: replace names inline in quotes, anonymize company names, sanitize filenames +- Use "MUST NOT be committed to public repositories" — not "consider" or "should" +- Sample data for testing must use synthetic data, never real interview content + +**What went wrong:** Sample data files were created with real names ("Krista," "Holly," "Beth"), real companies ("WellCare," "Centene," "Highmark"), and confidential personnel discussions. These were committed to the branch and only caught during code review. + +## Solution 3: Reference File Deduplication + +When multiple skills need the same reference material, maintain ONE canonical copy. + +**Before (3 copies, 1,242 lines):** +``` +skills/ +├── research-plan/references/discovery-playbook.md # 414 lines +├── transcript-insights/references/discovery-playbook.md # 414 lines (duplicate) +└── persona-builder/references/discovery-playbook.md # 414 lines (duplicate) +``` + +**After (1 copy, 414 lines):** +``` +skills/ +├── research-plan/references/discovery-playbook.md # 414 lines (canonical) +├── transcript-insights/SKILL.md # references ../research-plan/references/ +└── persona-builder/SKILL.md # references ../research-plan/references/ +``` + +In each non-canonical SKILL.md: +```markdown +**Reference:** [discovery-playbook.md](../research-plan/references/discovery-playbook.md) +``` + +Relative paths work because Claude Code follows markdown links when loading skill context. 
+ +## Solution 4: Phase Recommendation Priority + +When a workflow command recommends the next phase, prioritize actionable data over missing prerequisites. + +**Wrong order:** +``` +- No plans exist → recommend Plan +- Unprocessed transcripts exist → recommend Process +``` + +**Right order:** +``` +- Unprocessed transcripts exist → recommend Process (ready-to-process data takes priority) +- Interviews exist but no personas → recommend Personas +- No plans and no transcripts → recommend Plan +``` + +Users who drop a transcript into the folder and run `/workflows:research` should be guided to process it — not steered back to create a plan first. Always offer an ad-hoc option (`research_plan: ad-hoc`) so no phase is a hard prerequisite for another. + +## Prevention Strategies + +### 1. PII in Sample Data + +**Prevention:** Create `.gitignore` entries for data directories BEFORE creating the directories. Use only synthetic data in committed samples. + +**Checklist item:** `[ ] All sample data uses fictional names/companies only (no real PII)` + +**Detection:** `grep -riE '[A-Z][a-z]+ (said|mentioned|discussed)' docs/research/` in pre-commit or CI. + +### 2. Reference File Duplication + +**Prevention:** Before copying a reference file into a second skill, stop and use a relative path instead. + +**Checklist item:** `[ ] No reference files duplicated across skills (use relative paths to canonical copy)` + +**Detection:** `find plugins/compound-engineering/skills -name "*.md" -exec md5sum {} \; | sort | uniq -w32 -d` + +### 3. "Do No Harm" Not Verified Until Review + +**Prevention:** When modifying brainstorm/plan/work workflows, explicitly test with an empty `docs/research/` directory and confirm zero behavioral change. + +**Checklist item:** `[ ] Workflow changes verified to produce no output difference when optional data is absent` + +**Detection:** Run the modified workflow in a fresh repo without the feature's data. 
Confirm no new prompts, messages, or suggestions appear. + +### 4. Weak Privacy Language + +**Prevention:** Use RFC 2119 language: MUST/MUST NOT for security and privacy requirements. Never use "consider," "should," or "recommended" for PII handling. + +**Checklist item:** `[ ] Privacy/security requirements use MUST/MUST NOT language (not "consider" or "should")` + +**Detection:** `grep -rn 'consider.*gitignore\|should.*PII\|recommended.*privacy' skills/` + +## Related Documentation + +- `docs/solutions/plugin-versioning-requirements.md` — Plugin versioning and multi-file update patterns +- `plugins/compound-engineering/CLAUDE.md` — Plugin development conventions, skill compliance checklist +- `docs/brainstorms/2026-02-13-user-research-workflow-integration-brainstorm.md` — Integration design decisions +- `docs/brainstorms/2026-02-10-user-research-workflow-brainstorm.md` — Original workflow design + +## Files Modified + +| File | Change | +|------|--------| +| `commands/workflows/brainstorm.md` | Added `user-research-analyst` to Phase 1.1 with silent degradation | +| `commands/workflows/plan.md` | Added `user-research-analyst` to Step 1 and Step 1.6 with conditional inclusion | +| `commands/workflows/research.md` | Fixed phase recommendation to prioritize unprocessed transcripts | +| `agents/research/user-research-analyst.md` | Removed "to be wired in PR 2" TODO, updated Integration Points | +| `skills/transcript-insights/SKILL.md` | Strengthened PII guidance from "Consider" to "MUST NOT" | +| `skills/persona-builder/SKILL.md` | Simplified evidence strength/hypothesis status tables; deduplicated playbook reference | +| `.gitignore` | Added `docs/research/transcripts/*.md` | + +## Key Takeaway + +The pattern for adding optional workflow phases: **the producer handles absence with a message; the consumer handles absence with silence.** This ensures the feature enhances workflows for adopters without degrading them for everyone else. 
From c6fa9f4977bad5fdf1ef489ae9500e2bfb60045e Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Fri, 13 Feb 2026 21:39:04 -0500 Subject: [PATCH 08/13] fix: handle inline transcripts and empty plans in research workflow Research process phase stalled when used as the first research action. Two fixes: (1) workflows:research now saves inline transcript content to a file before processing instead of requiring pre-existing files, and (2) transcript-insights skill gracefully handles empty plans directory by defaulting to ad-hoc. Closes EveryInc/compound-engineering-plugin#187 Co-Authored-By: Claude Opus 4.6 --- plugins/compound-engineering/CHANGELOG.md | 5 +++++ .../commands/workflows/research.md | 13 ++++++++++++- .../skills/transcript-insights/SKILL.md | 6 ++++++ 3 files changed, 23 insertions(+), 1 deletion(-) diff --git a/plugins/compound-engineering/CHANGELOG.md b/plugins/compound-engineering/CHANGELOG.md index 0365708a..f34255c6 100644 --- a/plugins/compound-engineering/CHANGELOG.md +++ b/plugins/compound-engineering/CHANGELOG.md @@ -21,6 +21,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - **`/workflows:brainstorm`** - Now runs `user-research-analyst` in parallel; silently skips when no research data exists - **`/workflows:plan`** - Research context integrated into Step 1.6 consolidation +### Fixed + +- **`/workflows:research` command** — Process phase now handles inline transcript content (saves to file before processing) instead of requiring pre-existing files in `docs/research/transcripts/` +- **`transcript-insights` skill** — Step 2 (Link to Research Plan) now gracefully handles empty `docs/research/plans/` directory by defaulting to ad-hoc instead of stalling + --- ## [2.34.0] - 2026-02-14 diff --git a/plugins/compound-engineering/commands/workflows/research.md b/plugins/compound-engineering/commands/workflows/research.md index 6f226bb4..5e96dcf9 100644 --- a/plugins/compound-engineering/commands/workflows/research.md 
+++ b/plugins/compound-engineering/commands/workflows/research.md @@ -81,12 +81,23 @@ After the skill completes, proceed to **Handoff**. ## Phase 2: Process +### Check for Inline Content + +If the research phase argument contains more than just the word "process" (i.e., transcript content was provided inline): + +1. Extract the transcript content from the argument (everything after "process") +2. Look for a meeting title or date in the content to generate a filename. Use the format: `YYYY-MM-DD__transcript.md`. If no title or date is found, use today's date with a generic slug (e.g., `2026-02-13_interview_transcript.md`) +3. Save the content to `docs/research/transcripts/[filename]` +4. Skip the transcript selection step below — proceed directly to **Process Selected Transcript** with this file path + ### Check for Transcripts +**If inline content was already saved above, skip this section.** + Look for `.md` files in `docs/research/transcripts/`. **If no transcripts exist:** -Report: "No transcripts found in `docs/research/transcripts/`. Save your interview transcript as a `.md` file there, then re-run this phase." +Report: "No transcripts found in `docs/research/transcripts/`. Save your interview transcript as a `.md` file there, or pass the content inline: `/workflows:research process [transcript content]`." Proceed to **Handoff**. **If transcripts exist:** diff --git a/plugins/compound-engineering/skills/transcript-insights/SKILL.md b/plugins/compound-engineering/skills/transcript-insights/SKILL.md index a3c643b2..ccf4d665 100644 --- a/plugins/compound-engineering/skills/transcript-insights/SKILL.md +++ b/plugins/compound-engineering/skills/transcript-insights/SKILL.md @@ -30,6 +30,12 @@ If content is pasted directly, proceed with that content (no file reference in o ### Step 2: Link to Research Plan +Check for files in `docs/research/plans/`. + +**If no plans exist:** +Set `research_plan: ad-hoc` in frontmatter and proceed to Step 3. 
+ +**If plans exist:** List existing research plans by reading frontmatter from files in `docs/research/plans/`: - Show title, date, and status for each plan - Most recent first, cap at 7 entries From 88329ce3ba5229f5e0d6ed82a13f00b1f8d0098b Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Fri, 13 Feb 2026 21:44:32 -0500 Subject: [PATCH 09/13] docs: add solution doc for research workflow first-use failure Documents the root cause, fix, and prevention checklist for the workflow-skill input mismatch that caused /workflows:research process to stall on first use with empty directories. Co-Authored-By: Claude Opus 4.6 --- ...orkflow-skill-transcript-input-mismatch.md | 128 ++++++++++++++++++ 1 file changed, 128 insertions(+) create mode 100644 docs/solutions/integration-issues/workflow-skill-transcript-input-mismatch.md diff --git a/docs/solutions/integration-issues/workflow-skill-transcript-input-mismatch.md b/docs/solutions/integration-issues/workflow-skill-transcript-input-mismatch.md new file mode 100644 index 00000000..f0dd6b1b --- /dev/null +++ b/docs/solutions/integration-issues/workflow-skill-transcript-input-mismatch.md @@ -0,0 +1,128 @@ +--- +title: "/workflows:research process stalls on inline transcript with empty plans directory" +date: 2026-02-13 +category: integration-issues +tags: [workflow-automation, research-workflow, transcript-processing, state-handling, first-use-failure] +severity: high +component: plugins/compound-engineering/commands/workflows/research.md + plugins/compound-engineering/skills/transcript-insights/SKILL.md +resolution_time: 30 minutes +--- + +# /workflows:research process stalls on inline transcript with empty plans directory + +## Problem + +`/workflows:research process this transcript [content]` stalls when used as the first research action. Claude creates directories but never produces output. Brainstorm and plan workflows are unaffected. + +**Symptom:** The model appears stuck after creating `docs/research/` directories. 
No transcript is saved, no interview snapshot is generated. + +**Trigger:** Empty research directories (no prior transcripts, plans, or interviews). + +## Root Cause + +Two cascading bugs at the integration boundary between the workflow command and its downstream skill: + +**Bug 1: Workflow command doesn't pass inline content to the skill.** +Phase 2 of `research.md` only checks for `.md` files in `docs/research/transcripts/`. When the directory is empty, it reports "no transcripts found" and exits. The `transcript-insights` skill supports inline content (Step 1: "If content is pasted directly, proceed"), but the workflow command never provides a path for inline content to reach the skill. + +**Bug 2: Skill stalls on empty plans directory.** +Even if Bug 1 were fixed, `transcript-insights` Step 2 tries to list research plans from `docs/research/plans/` with no empty-state fallback. On first use, the directory is empty and the model stalls trying to reconcile the instruction to "list plans" with nothing to list. + +**Pattern:** This is a "first-use failure" — everything works once artifacts exist from prior runs, but the first invocation with empty directories fails. + +## Solution + +### Fix 1: research.md — Add inline content handling + +Added a "Check for Inline Content" section before "Check for Transcripts" in Phase 2: + +```markdown +### Check for Inline Content + +If the research phase argument contains more than just the word "process" +(i.e., transcript content was provided inline): + +1. Extract the transcript content from the argument (everything after "process") +2. Look for a meeting title or date in the content to generate a filename. + Use the format: YYYY-MM-DD__transcript.md +3. Save the content to docs/research/transcripts/[filename] +4. Skip the transcript selection step — proceed directly to Process Selected Transcript +``` + +Updated "Check for Transcripts" with a guard: "If inline content was already saved above, skip this section." 
Also updated the error message to mention the inline option. + +### Fix 2: transcript-insights/SKILL.md — Add empty-state handling + +Replaced the unconditional plan listing in Step 2 with: + +```markdown +Check for files in docs/research/plans/. + +**If no plans exist:** +Set research_plan: ad-hoc in frontmatter and proceed to Step 3. + +**If plans exist:** +List existing research plans... +[existing flow unchanged] +``` + +Removed the AskUserQuestion confirmation for empty state — the user already committed to processing by providing a transcript. Just default to ad-hoc silently. + +## Why It Works + +- **Inline content becomes first-class:** The workflow now extracts, saves, and passes inline content through to the skill, matching what the skill already documented as supported input. +- **Empty state is a non-event:** When no plans exist, the skill defaults to ad-hoc without blocking. The user can create plans later. +- **Backward compatible:** The existing file-based flow is untouched. The inline path only activates when the argument contains more than "process". +- **Converges at the same point:** Both the inline and file-based paths meet at "Process Selected Transcript" with a file path, so all downstream logic is shared. + +## Why Other Workflows Were Unaffected + +| Workflow | Why it works | +|----------|-------------| +| `/workflows:brainstorm` | Accepts inline descriptions directly — no file dependency | +| `/workflows:plan` | Has explicit fallback: "If no brainstorm found, run idea refinement" | +| `/workflows:research` (phase menu) | Just counts files — 0 is valid | +| `/workflows:research plan` | Creates from scratch — no dependency on existing artifacts | +| `/workflows:research personas` | Explicitly handles empty state: "No processed interviews found" | + +## Prevention: First-Use Failure Checklist + +This class of bug happens when workflow commands and skills have mismatched input handling or missing empty-state fallbacks. 
When writing new workflow commands or skills, check: + +### Input Contract +- [ ] Every supported input format (file path, inline content, empty) is documented in both the workflow command AND the skill +- [ ] If a skill says it accepts inline content, the workflow command has a path to pass it through +- [ ] Empty input is handled explicitly (not silently ignored) + +### Empty-State Handling +- [ ] Every instruction that reads from a directory has an "If empty" branch +- [ ] Empty-state messages guide the user to a next action (not just "not found") +- [ ] Default behavior exists for first-use (e.g., ad-hoc tagging, skip to next step) + +### First-Run Test +- [ ] Can a user run this workflow with NO prior artifacts and succeed? +- [ ] All directories are created upfront (mkdir -p in Directory Setup) +- [ ] File selection handles 0, 1, and N files explicitly + +### Integration Boundary +- [ ] Workflow command documents what it passes to the skill +- [ ] Skill documents what it expects to receive +- [ ] Return contract is documented (what file gets created, what frontmatter fields) + +**Core insight:** Design workflows for the worst case (empty, first-run) first, then optimize for the common case (existing artifacts). 
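As one way to make the empty-state checklist items mechanically checkable, here is a toy lint sketch. The `docs/research/` directory layout and the "If no ... exist" phrasing are assumptions about local conventions, not part of the documented fix:

```python
import re

# Toy empty-state lint: for every research directory a workflow/skill document
# reads from, require an explicit "If no ... exist" (or "If empty") branch.
# The directory pattern and phrasings are illustrative assumptions.
SKILL_TEXT = """
### Step 2: Link to Research Plan
Check for files in docs/research/plans/.
**If no plans exist:** set research_plan: ad-hoc and proceed.
### Step 3: Highlight
Read docs/research/transcripts/ and highlight key moments.
"""

def missing_empty_state(doc: str) -> list[str]:
    """Return directories the doc reads from without an empty-state branch."""
    dirs = set(re.findall(r"docs/research/(\w+)/", doc))
    gaps = []
    for d in dirs:
        # Look for an empty branch mentioning the directory's noun,
        # e.g. "If no plans exist" for docs/research/plans/.
        noun = d.rstrip("s")  # crude singular/plural match
        if not re.search(rf"If no \w*{noun}\w* exist|If empty", doc, re.I):
            gaps.append(f"docs/research/{d}/")
    return gaps

print(missing_empty_state(SKILL_TEXT))  # ['docs/research/transcripts/']
```

A real check would walk the skills/ and commands/ trees instead of a hardcoded string; the point is that every instruction reading a directory should pair with an explicit empty branch.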
+ +## Files Changed + +| File | Change | +|------|--------| +| `plugins/compound-engineering/commands/workflows/research.md` | Added inline content handling before file check | +| `plugins/compound-engineering/skills/transcript-insights/SKILL.md` | Added empty-state handling for plans directory | +| `plugins/compound-engineering/.claude-plugin/plugin.json` | Version 2.32.0 → 2.32.1 | +| `plugins/compound-engineering/CHANGELOG.md` | Added Fixed section for 2.32.1 | + +## References + +- Issue: [EveryInc/compound-engineering-plugin#187](https://github.com/EveryInc/compound-engineering-plugin/issues/187) +- Fix plan: `docs/plans/2026-02-13-fix-research-process-first-action-plan.md` +- Original feature plan: `docs/plans/2026-02-11-feat-user-research-workflow-plan.md` +- Plugin versioning guide: `docs/solutions/plugin-versioning-requirements.md` From 47610ea79394d1d9d04dc0fbf1071141ff059836 Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Fri, 13 Feb 2026 22:17:52 -0500 Subject: [PATCH 10/13] fix: restructure research workflow routing to prevent phase selection leak The model was running Phase Selection (artifact status checks) even when inline transcript content was provided. Replaced soft "jump to Phase 2" instruction with numbered rules (Rule 1-4) with explicit "follow FIRST match and STOP" semantics. Inline content handling now happens at the routing point itself rather than deferring to a distant section. 
Co-Authored-By: Claude Opus 4.6 --- .../commands/workflows/research.md | 35 ++++++++++++++----- 1 file changed, 26 insertions(+), 9 deletions(-) diff --git a/plugins/compound-engineering/commands/workflows/research.md b/plugins/compound-engineering/commands/workflows/research.md index 5e96dcf9..fdbb1a48 100644 --- a/plugins/compound-engineering/commands/workflows/research.md +++ b/plugins/compound-engineering/commands/workflows/research.md @@ -31,18 +31,38 @@ mkdir -p docs/research/plans docs/research/transcripts docs/research/interviews Run this silently before any phase. -## Research Phase +## Routing #$ARGUMENTS -**If argument matches a phase name** (`plan`, `process`, or `personas`), jump directly to that phase below. +Read the content inside ``. Follow the FIRST matching rule below and STOP — do not continue to later rules. -**If argument is unrecognized**, show the phase selection menu with a note: "Valid arguments: `plan`, `process`, `personas`." +### Rule 1: Inline transcript content -**If argument is empty**, run phase selection: +If the argument contains multi-line content (a transcript, meeting notes, interview text — anything beyond a single keyword or short phrase): + +This IS inline transcript content. Do NOT check artifact status. Do NOT show phase selection. Handle it immediately: + +1. Extract the meeting title and date from the content to generate a filename: `YYYY-MM-DD__transcript.md`. If no date is found, use today's date. +2. Save the full content to `docs/research/transcripts/[filename]` +3. Jump to **Process Selected Transcript** in Phase 2 below with this file path + +### Rule 2: Phase name keyword + +If the argument is exactly `plan`, `process`, or `personas`, jump to that phase below. + +### Rule 3: Unrecognized argument + +If the argument is a short unrecognized string, show the phase selection menu with a note: "Valid arguments: `plan`, `process`, `personas`." 
+ +### Rule 4: Empty argument + +If the argument is empty, run phase selection: ### Phase Selection +**SKIP this section if Rule 1 matched above (inline content).** Only run this when the argument was empty. + Show a brief artifact status (2-3 lines max): ``` @@ -83,12 +103,9 @@ After the skill completes, proceed to **Handoff**. ### Check for Inline Content -If the research phase argument contains more than just the word "process" (i.e., transcript content was provided inline): +If arriving here from **Rule 1** in Routing, the transcript has already been saved. Skip directly to **Process Selected Transcript** below. -1. Extract the transcript content from the argument (everything after "process") -2. Look for a meeting title or date in the content to generate a filename. Use the format: `YYYY-MM-DD__transcript.md`. If no title or date is found, use today's date with a generic slug (e.g., `2026-02-13_interview_transcript.md`) -3. Save the content to `docs/research/transcripts/[filename]` -4. Skip the transcript selection step below — proceed directly to **Process Selected Transcript** with this file path +If the argument starts with "process" followed by substantial content, strip the "process" prefix, save the content as a transcript file (using the same naming logic from Rule 1), and skip to **Process Selected Transcript**. 
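Taken together, Rules 1-4 amount to a first-match dispatcher plus a filename heuristic. A rough Python sketch of that logic — the word-count threshold, helper names, and the `interview` slug are illustrative assumptions, not part of the command file:

```python
import re
from datetime import date

PHASE_KEYWORDS = {"plan", "process", "personas"}

def route(argument: str) -> str:
    """First matching rule wins; later rules are never reached (Rules 1-4)."""
    arg = argument.strip()
    # Rule 1: multi-line or long free text is treated as inline transcript content
    if "\n" in arg or len(arg.split()) > 8:
        return "save-transcript-then-process"
    # Rule 2: exact phase keyword
    if arg in PHASE_KEYWORDS:
        return f"phase:{arg}"
    # Rule 3: short unrecognized string -> menu plus "Valid arguments" note
    if arg:
        return "menu-with-note"
    # Rule 4: empty argument -> phase selection
    return "phase-selection"

def transcript_filename(content: str, slug: str = "interview") -> str:
    """Derive YYYY-MM-DD_<slug>_transcript.md, defaulting to today's date."""
    match = re.search(r"\d{4}-\d{2}-\d{2}", content)
    day = match.group(0) if match else date.today().isoformat()
    return f"{day}_{slug}_transcript.md"
```

The point of the sketch is the "STOP after first match" shape: a chain of early returns makes it structurally impossible to fall through into phase selection once inline content is detected.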
### Check for Transcripts From 14e4e5a7687017d9970cde3e6d92787fda55869e Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Fri, 13 Feb 2026 22:30:51 -0500 Subject: [PATCH 11/13] Co-Authored-By: Claude Opus 4.6 --- .claude/settings.json | 5 + ...02-13-documentation-workflow-brainstorm.md | 67 +++++ ...-namespaced-extension-system-brainstorm.md | 136 ++++++++++ ...-02-13-feat-documentation-workflow-plan.md | 122 +++++++++ ...3-feat-namespaced-extension-system-plan.md | 238 +++++++++++++++++ ...-fix-research-process-first-action-plan.md | 109 ++++++++ .../interviews/2026-01-13-participant-001.md | 181 +++++++++++++ .../the-sales-operations-strategist.md | 92 +++++++ docs/research/transcripts/1.md | 252 ++++++++++++++++++ 9 files changed, 1202 insertions(+) create mode 100644 .claude/settings.json create mode 100644 docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md create mode 100644 docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md create mode 100644 docs/plans/2026-02-13-feat-documentation-workflow-plan.md create mode 100644 docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md create mode 100644 docs/plans/2026-02-13-fix-research-process-first-action-plan.md create mode 100644 docs/research/interviews/2026-01-13-participant-001.md create mode 100644 docs/research/personas/the-sales-operations-strategist.md create mode 100644 docs/research/transcripts/1.md diff --git a/.claude/settings.json b/.claude/settings.json new file mode 100644 index 00000000..eca411a1 --- /dev/null +++ b/.claude/settings.json @@ -0,0 +1,5 @@ +{ + "enabledPlugins": { + "compound-engineering@every-marketplace": true + } +} diff --git a/docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md b/docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md new file mode 100644 index 00000000..596cd37e --- /dev/null +++ b/docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md @@ -0,0 +1,67 @@ +# Documentation Workflow Brainstorm + 
+**Date:** 2026-02-13 +**Status:** Ready for planning + +## What We're Building + +A `/workflows:document` command that automatically updates project documentation after a feature is implemented and reviewed. It fills the gap between code review and knowledge capture in the workflow chain: + +``` +Research → Brainstorm → Plan → Work → Review → Document → Compound +``` + +**Scope:** Full project documentation — README, CHANGELOG, API docs, user guides, and inline code docs. Not limited to the plugin itself. + +## Why This Approach + +The compound-engineering workflow chain currently has no step for updating user-facing documentation. After `/workflows:work` finishes implementation and `/workflows:review` validates the code, documentation updates are left as a manual afterthought. This creates a gap where features ship without corresponding docs. + +A phased workflow command (not an agent) was chosen because: +- It follows the established workflow pattern (phase-based, skill-loading, handoff points) +- It fits naturally in the chain between Review and Compound +- A single workflow is simpler to maintain than multiple specialist agents +- The propose-then-confirm model gives users control without being tedious + +## Key Decisions + +1. **Form factor:** New workflow command (`/workflows:document`), not an agent +2. **Chain position:** After Review, before Compound +3. **Discovery method:** Git diff + chain docs (brainstorm/plan) for full context +4. **Autonomy model:** Propose-then-confirm — analyze what needs updating, present a plan, get approval, then execute +5. 
**Documentation scope:** Full project docs (README, CHANGELOG, API docs, user guides, inline code docs) + +## Design + +### Phase 1: Discovery + +Analyze the codebase to understand what was built and what docs need updating: + +- **Git diff analysis:** Read the diff between current branch and main to identify what changed +- **Chain doc lookup:** Find and read any brainstorm/plan documents for this feature (auto-detect from `docs/brainstorms/` and `docs/plans/` by date or topic) +- **Doc inventory:** Scan the project for existing documentation files (README, CHANGELOG, API docs, guides, etc.) +- **Gap analysis:** Compare what was built against what's documented + +### Phase 2: Proposal + +Present a structured proposal to the user: + +- List each doc file that needs updating +- For each file, describe what changes are needed (new section, updated section, new entry, etc.) +- Flag any docs that should be created (e.g., "No API docs exist yet — should we create one?") +- Use `AskUserQuestion` to get approval (approve all, select specific items, or skip) + +### Phase 3: Execution + +Make the approved documentation changes: + +- Update each approved doc file +- Follow existing doc conventions (detect style from existing content) +- After all updates, show a summary of what was changed +- Offer handoff to `/workflows:compound` for knowledge capture + +## Open Questions + +1. Should the workflow also update inline code comments/docstrings, or just standalone doc files? +2. Should it create a documentation PR comment summarizing what was updated (useful for team visibility)? +3. How should it handle projects with no existing docs — offer to scaffold a basic doc structure? 
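Phase 1's discovery inputs reduce to a diff listing plus a documentation glob. A hedged sketch of that gathering step — the skip lists, glob patterns, and function names are assumptions for illustration, not a fixed implementation:

```python
import subprocess
from pathlib import Path

SKIP_SUFFIXES = (".lock", ".sum")        # lock files: noise, not substance
SKIP_PARTS = {"test", "tests", "spec"}   # test-only changes rarely need docs

def relevant(files: list[str]) -> list[str]:
    """Filter test files and lock files out of a changed-file listing."""
    return [
        f for f in files
        if not f.endswith(SKIP_SUFFIXES)
        and SKIP_PARTS.isdisjoint(Path(f).parts)
    ]

def changed_files(base: str = "main") -> list[str]:
    """Files changed on this branch relative to the merge base with `base`."""
    listing = subprocess.run(
        ["git", "diff", f"{base}...HEAD", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return relevant(listing)

def doc_inventory(root: str = ".") -> list[Path]:
    """Documentation files the proposal phase should consider updating."""
    base = Path(root)
    found = set(base.glob("README*")) | set(base.glob("CHANGELOG*"))
    found |= set(base.glob("docs/**/*.md"))
    return sorted(found)
```

The gap analysis then compares `changed_files()` against `doc_inventory()`: code that changed with no corresponding doc touched is a candidate for the Phase 2 proposal.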
diff --git a/docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md b/docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md new file mode 100644 index 00000000..c8a9af2f --- /dev/null +++ b/docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md @@ -0,0 +1,136 @@ +# Namespaced Extension System for Compound Engineering Plugin + +**Date:** 2026-02-13 +**Status:** Brainstorm +**Author:** Matthew Thompson + +## What We're Building + +A general extensibility system that lets users create optional plugins ("extensions") that work alongside the core compound-engineering plugin. Extensions follow a naming convention (`compound-engineering-`) and are distributed through the same marketplace. They enable three categories of customization: + +1. **Custom agents/skills** - Specialized agents and skills for specific domains (e.g., a Phoenix reviewer, a Terraform skill) +2. **Framework packs** - Bundled sets of agents + skills + commands for a specific stack (e.g., "Rails Pack" with Rails-specific reviewers, generators, and conventions) +3. **Convention configs** - Team/personal rules and style preferences delivered as curated CLAUDE.md snippets that influence how existing core agents behave + +## Why This Approach + +We chose a **namespaced extension system** (Approach 2) over alternatives because: + +- **Works within current Claude Code spec** - No custom fields or spec changes needed. The marketplace already supports multiple plugins (coding-tutor proves this). +- **Convention over configuration** - Naming patterns (`compound-engineering-*`) and tags create clear relationships without formal dependency mechanisms. +- **Seamless experience** - The primary success metric. Extensions should feel like a natural part of the core with no conflicts or configuration headaches. +- **Upgradable** - Can graduate to a formal manifest system (Approach 3) later if the ecosystem grows large enough to need it. 
+ +### Rejected Alternatives + +- **Flat plugin ecosystem** - Too loose. No way to signal which plugins complement the core. Users would have to guess. +- **Pack manifest system** - Adds `extends` field not in the Claude Code spec. Premature complexity for current ecosystem size. + +## Key Decisions + +### 1. Naming Convention + +All extensions use the pattern: `compound-engineering-` + +Examples: +- `compound-engineering-rails` (Rails framework pack) +- `compound-engineering-security` (security-focused agents) +- `compound-engineering-every-conventions` (team convention config) + +This makes extensions immediately identifiable and groups them naturally in marketplace listings. + +### 2. Convention Configs via CLAUDE.md + +Convention configs are plugins that ship curated CLAUDE.md instructions rather than (or in addition to) agents. Claude already reads CLAUDE.md files as context, so this is the most natural mechanism. No new infrastructure needed. + +A convention config plugin structure: +``` +plugins/compound-engineering-my-team/ +├── .claude-plugin/plugin.json +├── CLAUDE.md # Team conventions that agents read +└── README.md +``` + +### 3. Discovery via Tags and Browse Command + +- Extensions use a shared tag (e.g., `compound-engineering-extension`) in marketplace.json +- A `/extensions` command (or similar) lets users browse available extensions with descriptions +- The marketplace listing groups extensions visually + +### 4. Distribution Through Same Marketplace + +All extensions live in the same marketplace repo (`every-marketplace`). This provides: +- One-stop browsing +- Consistent quality (maintainers can review contributions) +- Simple installation (`claude /plugin install compound-engineering-rails`) + +### 5. No Formal Dependencies + +Claude Code doesn't support plugin dependencies. Each extension must function independently - it can complement the core but shouldn't break without it. This is a platform constraint we accept. + +### 6. 
Component Namespacing + +To avoid name collisions between extensions: +- Agents: prefix or suffix with pack name (e.g., `rails-model-reviewer` not just `reviewer`) +- Skills: use descriptive names (e.g., `rails-generators` not just `generators`) +- Commands: use pack prefix (e.g., `rails:scaffold` not just `scaffold`) + +## Extension Categories + +### Framework Packs +A framework pack bundles domain-specific tooling for a technology stack. + +Example: `compound-engineering-rails` +``` +plugins/compound-engineering-rails/ +├── .claude-plugin/plugin.json +├── CLAUDE.md # Rails conventions and preferences +├── agents/ +│ ├── rails-model-reviewer.md +│ ├── rails-migration-checker.md +│ └── rails-performance-agent.md +├── skills/ +│ └── rails-generators/SKILL.md +└── README.md +``` + +### Custom Agents/Skills +Individual agents or skills for specific needs. + +Example: `compound-engineering-security` +``` +plugins/compound-engineering-security/ +├── .claude-plugin/plugin.json +├── agents/ +│ ├── owasp-scanner.md +│ └── dependency-auditor.md +├── skills/ +│ └── threat-model/SKILL.md +└── README.md +``` + +### Convention Configs +Team or personal preferences that shape agent behavior. + +Example: `compound-engineering-every-conventions` +``` +plugins/compound-engineering-every-conventions/ +├── .claude-plugin/plugin.json +├── CLAUDE.md # Every's coding standards, style preferences, etc. +└── README.md +``` + +## Open Questions + +1. **Quality control** - Should there be a review process for community-contributed extensions, or is it open contribution? +2. **Versioning alignment** - Should extensions declare which version of the core they're designed for, even informally? +3. **Starter template** - Should we provide a `/create-extension` command or template repo to scaffold new extensions? +4. **Testing** - How do we verify extensions don't conflict with each other or the core? +5. 
**Documentation** - Should the docs site auto-generate pages for extensions, or is README.md sufficient? + +## Next Steps + +- Plan the implementation: directory structure, marketplace.json changes, example extensions +- Build 1-2 example extensions to validate the pattern +- Create documentation for extension authors +- Consider a `/create-extension` scaffolding command diff --git a/docs/plans/2026-02-13-feat-documentation-workflow-plan.md b/docs/plans/2026-02-13-feat-documentation-workflow-plan.md new file mode 100644 index 00000000..93785d5d --- /dev/null +++ b/docs/plans/2026-02-13-feat-documentation-workflow-plan.md @@ -0,0 +1,122 @@ +--- +title: "feat: Add documentation workflow command" +type: feat +date: 2026-02-13 +--- + +# feat: Add documentation workflow command + +## Overview + +Add a `/workflows:document` command that updates project documentation after a feature is implemented and reviewed. This fills the gap between `/workflows:review` and `/workflows:compound` in the workflow chain: + +``` +Research → Brainstorm → Plan → Work → Review → Document → Compound +``` + +**Brainstorm:** [docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md](../brainstorms/2026-02-13-documentation-workflow-brainstorm.md) + +## Problem Statement + +The workflow chain has no step for updating user-facing documentation. Features ship without corresponding doc updates because it's left as a manual afterthought. A structured, propose-then-confirm workflow would make documentation a natural part of the development cycle. + +## Proposed Solution + +A single workflow command file at `plugins/compound-engineering/commands/workflows/document.md` that follows the established phase-based pattern. Three phases: Discovery → Proposal → Execution. + +## Implementation + +### File to create + +#### `plugins/compound-engineering/commands/workflows/document.md` + +The workflow command. 
Structure: + +```yaml +--- +name: workflows:document +description: Update project documentation after implementation and review +argument-hint: "[optional: path to brainstorm or plan doc, or PR number]" +--- +``` + +**Phase 1: Discovery** + +Gather context about what was built and what docs exist: + +1. **Determine diff base** — Check for PR (via `gh pr view`), fall back to `main`/`master` +2. **Git diff analysis** — Run `git diff ...HEAD --stat` then read changed files to understand what was built. Filter out test files, generated code, and lock files to keep scope manageable +3. **Chain doc lookup** — Search `docs/brainstorms/` and `docs/plans/` for recent documents matching the current feature (by date, branch name, or topic). If `$ARGUMENTS` provides a path, use that directly. If nothing found, proceed with diff-only mode +4. **Doc inventory** — Use Glob to find existing documentation files: `README*`, `CHANGELOG*`, `docs/**/*.md`, `API.md`, `GUIDE.md`, any `**/README.md` in subdirectories. Note their last-modified dates and sizes +5. **Gap analysis** — Compare what was built (from diff + chain docs) against what's documented. Identify: new features not mentioned in README, missing CHANGELOG entry, outdated API docs, new public APIs without docstrings + +**Phase 2: Proposal** + +Present a structured update plan to the user: + +- For each doc file that needs changes, show: file path, what kind of update (new section, updated section, new entry), and a 1-line summary of the change +- Flag any new docs that should be created (e.g., "No CHANGELOG exists — create one?") +- Flag docs that might need deletion or archival (if features were removed) +- Use `AskUserQuestion` with options: + 1. **Approve all** — Execute all proposed updates + 2. **Select items** — Choose which updates to apply (use multiSelect) + 3. **Skip documentation** — Exit without changes + 4. 
**Refine proposal** — Ask for adjustments + +**Guardrails to prevent overwriting user content:** +- Only modify sections relevant to the changed code — never rewrite entire files +- When updating an existing section, show the diff preview before applying +- Detect and preserve custom sections (anything not generated by this workflow) +- For README: append new feature sections, don't restructure existing content + +**Phase 3: Execution** + +Make the approved changes: + +1. For each approved update, read the target file, make the change, write it back +2. Match existing doc style (detect heading levels, tone, formatting from surrounding content) +3. For CHANGELOG: use Keep a Changelog format if one exists, otherwise detect existing format +4. After all updates, show a summary: files changed, sections added/updated +5. Offer handoff via `AskUserQuestion`: + 1. **Continue to `/workflows:compound`** — Document solved problems for team knowledge + 2. **Review changes** — Load `document-review` skill for quality pass + 3. **Done** — Documentation complete + +### Edge cases to handle + +- **No existing docs:** Offer to scaffold a minimal doc structure (README + CHANGELOG) rather than silently failing +- **No git diff:** If on main with no changes, check `$ARGUMENTS` for a PR number. If nothing, tell the user and exit +- **Doc-only changes in diff:** Detect and exit early — "Changes are documentation-only, nothing additional to document" +- **Massive diffs (50+ files):** Summarize by directory/component rather than file-by-file. 
Focus on public API changes +- **No chain docs found:** Proceed with diff-only mode, mention that brainstorm/plan context would improve results + +### Files to update (plugin metadata) + +After creating the command, update these files per the plugin's versioning requirements: + +- [ ] `plugins/compound-engineering/.claude-plugin/plugin.json` — bump minor version, update command count in description +- [ ] `.claude-plugin/marketplace.json` — update description with new command count +- [ ] `plugins/compound-engineering/README.md` — add `/workflows:document` to commands list +- [ ] `plugins/compound-engineering/CHANGELOG.md` — add entry under new version + +### Optional: Update review workflow handoff + +Update `plugins/compound-engineering/commands/workflows/review.md` to offer `/workflows:document` as a next step after review completes. Add an option in the final handoff section. + +## Acceptance Criteria + +- [ ] `/workflows:document` command exists and loads correctly +- [ ] Phase 1 discovers changed files via git diff and finds chain docs when available +- [ ] Phase 2 presents a clear proposal listing each doc update needed +- [ ] User can approve all, select specific items, or skip +- [ ] Phase 3 makes only approved changes without overwriting unrelated content +- [ ] Handoff to `/workflows:compound` works +- [ ] Plugin metadata (version, counts, changelog) updated correctly +- [ ] Works in diff-only mode when no chain docs exist + +## References + +- Workflow pattern: `plugins/compound-engineering/commands/workflows/work.md` +- Handoff pattern: `plugins/compound-engineering/commands/workflows/compound.md` +- Doc style skills: `plugins/compound-engineering/skills/document-review/`, `plugins/compound-engineering/skills/every-style-editor/` +- Plugin versioning: `docs/solutions/plugin-versioning-requirements.md` diff --git a/docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md b/docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md new file 
mode 100644 index 00000000..2d435008 --- /dev/null +++ b/docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md @@ -0,0 +1,238 @@ +--- +title: "feat: Namespaced Extension System" +type: feat +date: 2026-02-13 +brainstorm: docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md +related: plans/grow-your-own-garden-plugin-architecture.md +--- + +# Namespaced Extension System + +## Overview + +Create an extension ecosystem where users can install optional plugins that complement the core compound-engineering plugin. Extensions follow the naming convention `compound-engineering-`, live in the same marketplace, and enable three categories of customization: custom agents/skills, framework packs, and convention configs (delivered as CLAUDE.md snippets). + +## Problem Statement / Motivation + +The compound-engineering plugin is monolithic — 30 agents, 25 commands, 21 skills. Users working in Rails don't need Python reviewers, and vice versa. Teams want their own conventions baked in. The "Grow Your Own Garden" plan identified this problem but proposed a complex growth-loop mechanism. This plan takes a simpler approach: let people install optional extensions from the same marketplace. + +The infrastructure already works — `coding-tutor` proves multiple plugins can coexist in the marketplace. We just need conventions, templates, and a discovery mechanism. + +## Proposed Solution + +### What Claude Code Already Handles + +These are platform-level capabilities we don't need to build: + +- **Installation**: `claude /plugin install ` works for any plugin in a marketplace +- **Component discovery**: Agents, commands, skills auto-discovered from standard directories +- **CLAUDE.md loading**: All CLAUDE.md files in installed plugins are read as context +- **Plugin isolation**: Each plugin is cached independently, no cross-plugin file access +- **Sandboxing**: User permission model applies to all plugin components equally + +### What We Build + +1. 
**Extension template** — Scaffold structure for creating extensions
+2. **Example extensions** — Two reference implementations (framework pack + convention config)
+3. **`/extensions` command** — Browse available extensions from the marketplace
+4. **Marketplace entries** — Add extensions to marketplace.json with shared tags
+5. **Author guide** — Documentation for creating and submitting extensions
+6. **Naming validation** — Script to check naming conventions and detect collisions
+
+## Technical Considerations
+
+### Naming Conventions
+
+| Component | Pattern | Example |
+|-----------|---------|---------|
+| Plugin name | `compound-engineering-<domain>` | `compound-engineering-rails` |
+| Agent names | `<domain>-<purpose>` | `rails-model-reviewer` |
+| Skill names | `<domain>-<skill>` | `rails-generators` |
+| Command names | `<domain>:<command>` | `rails:scaffold` |
+| Marketplace tag | `compound-engineering-extension` | — |
+
+### Extension Types
+
+**Framework Pack** — Bundled agents + skills + commands + CLAUDE.md for a stack:
+```
+plugins/compound-engineering-rails/
+├── .claude-plugin/plugin.json
+├── CLAUDE.md # Rails conventions (prefer RSpec, follow Rails Way, etc.)
+├── README.md
+├── agents/
+│   ├── rails-model-reviewer.md
+│   └── rails-migration-checker.md
+├── commands/
+│   └── rails-console.md
+└── skills/
+    └── rails-generators/
+        └── SKILL.md
+```
+
+**Convention Config** — Just CLAUDE.md with team/personal rules:
+```
+plugins/compound-engineering-every-conventions/
+├── .claude-plugin/plugin.json
+├── CLAUDE.md # Team coding standards, PR conventions, etc.
+└── README.md
+```
+
+**Custom Agents/Skills** — Individual specialized components:
+```
+plugins/compound-engineering-security/
+├── .claude-plugin/plugin.json
+├── README.md
+└── agents/
+    ├── owasp-scanner.md
+    └── dependency-auditor.md
+```
+
+### CLAUDE.md Convention Configs
+
+Claude Code loads all CLAUDE.md files from installed plugins as context. Priority order (per Claude Code docs): project CLAUDE.md > user CLAUDE.md > plugin CLAUDE.md. 
This means: + +- Extension conventions apply globally when installed +- Project-level CLAUDE.md can always override extension conventions +- Multiple extension CLAUDE.md files all load (no conflict resolution needed — Claude synthesizes instructions naturally) + +**Guidelines for convention config authors:** +- Keep CLAUDE.md under 2KB (respect token budget) +- Use clear section headers so instructions are scannable +- Prefix rules with context: "When working on Rails code..." rather than absolute rules +- Document which core agents the conventions influence + +### Collision Avoidance + +- Naming convention is the primary defense (convention over enforcement) +- Validation script checks for collisions against core plugin components at submission time +- Component names must not match any existing agent/command/skill in core or other extensions +- If collision detected, author must rename before merging + +### Plugin.json for Extensions (Minimal) + +```json +{ + "name": "compound-engineering-rails", + "version": "1.0.0", + "description": "Rails framework pack for compound-engineering. 
Adds Rails-specific code review agents, generators, and conventions.", + "author": { + "name": "Author Name" + }, + "keywords": ["compound-engineering-extension", "rails", "ruby", "framework-pack"] +} +``` + +Required fields: `name`, `version`, `description`, `author` +Required keyword: `compound-engineering-extension` (for discovery) + +## Acceptance Criteria + +- [ ] Extension template exists with scaffold script or documented structure +- [ ] At least one example extension (`compound-engineering-rails`) is functional +- [ ] At least one convention config extension exists as a reference +- [ ] `/extensions` command lists available extensions with descriptions and install commands +- [ ] Extensions install alongside core plugin without conflicts +- [ ] marketplace.json includes extension entries with `compound-engineering-extension` tag +- [ ] Author guide documents naming conventions, structure, and submission process +- [ ] Validation script detects naming collisions against core plugin components + +## Success Metrics + +- Extensions install and work alongside the core plugin with zero configuration +- A new extension can be created from template in under 10 minutes +- `/extensions` command provides enough info to decide whether to install + +## Dependencies & Risks + +**Dependencies:** +- Claude Code plugin system continues to support multiple plugins per marketplace (currently works) +- CLAUDE.md files from plugins continue to be loaded as context (currently works) + +**Risks:** +- **Token budget**: Multiple extension CLAUDE.md files could consume too much context. Mitigation: 2KB guideline for convention configs. +- **Name collisions**: Convention-based naming can't prevent all collisions. Mitigation: Validation script checks at submission time. +- **Core plugin changes**: Core agent renames could collide with extensions. Mitigation: Extensions use domain-prefixed names that won't overlap with core's generic names. 
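Before the phased implementation, it helps to pin down what the validation script must reject. A minimal sketch of the manifest checks — the field names mirror the minimal plugin.json above, while the semver check and error strings are illustrative assumptions:

```python
import json
import re

REQUIRED_FIELDS = ("name", "version", "description", "author")
REQUIRED_KEYWORD = "compound-engineering-extension"

def validate_manifest(raw: str) -> list[str]:
    """Return a list of problems with an extension's plugin.json (empty = OK)."""
    problems = []
    try:
        manifest = json.loads(raw)
    except json.JSONDecodeError as err:
        return [f"invalid JSON: {err}"]
    # All four required fields must be present
    for field in REQUIRED_FIELDS:
        if field not in manifest:
            problems.append(f"missing required field: {field}")
    # Naming convention is the primary collision defense
    if not str(manifest.get("name", "")).startswith("compound-engineering-"):
        problems.append("name must start with 'compound-engineering-'")
    if not re.fullmatch(r"\d+\.\d+\.\d+", str(manifest.get("version", ""))):
        problems.append("version should be semver (MAJOR.MINOR.PATCH)")
    # Discovery keyword powers the /extensions listing
    if REQUIRED_KEYWORD not in manifest.get("keywords", []):
        problems.append(f"keywords must include '{REQUIRED_KEYWORD}'")
    return problems
```

Component-name collision checks against the core plugin would sit on top of this, comparing each extension's `agents/`, `commands/`, and `skills/` entries to the core's.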
+ +## Implementation + +### Phase 1: Template and Example Extension + +Create the extension template structure and build `compound-engineering-rails` as the reference implementation. + +**Files to create:** + +1. `plugins/compound-engineering-rails/.claude-plugin/plugin.json` — Minimal manifest +2. `plugins/compound-engineering-rails/CLAUDE.md` — Rails conventions +3. `plugins/compound-engineering-rails/README.md` — Usage documentation +4. `plugins/compound-engineering-rails/agents/rails-model-reviewer.md` — Example agent +5. `plugins/compound-engineering-rails/agents/rails-migration-checker.md` — Example agent +6. `plugins/compound-engineering-rails/skills/rails-generators/SKILL.md` — Example skill + +### Phase 2: Convention Config Example + +Create `compound-engineering-every-conventions` as a reference convention config. + +**Files to create:** + +1. `plugins/compound-engineering-every-conventions/.claude-plugin/plugin.json` +2. `plugins/compound-engineering-every-conventions/CLAUDE.md` — Every's coding standards +3. `plugins/compound-engineering-every-conventions/README.md` + +### Phase 3: Discovery Command + +Build the `/extensions` command that reads marketplace.json and displays available extensions. + +**Files to create:** + +1. `plugins/compound-engineering/commands/extensions.md` — Browse command + +**Command behavior:** +- Reads marketplace.json +- Filters plugins with `compound-engineering-extension` keyword/tag +- Displays each extension: name, description, component counts, install command +- Groups by type (framework pack, convention config, custom agents) if possible + +### Phase 4: Marketplace and Documentation + +Update marketplace.json with new extensions and create the author guide. + +**Files to update:** + +1. `.claude-plugin/marketplace.json` — Add extension entries +2. `plugins/compound-engineering/.claude-plugin/plugin.json` — Update description +3. 
`plugins/compound-engineering/README.md` — Add "Extensions" section + +**Files to create:** + +1. `docs/guides/creating-extensions.md` — Author guide with naming conventions, structure, submission process + +### Phase 5: Validation + +Create a validation script that checks extension compliance. + +**Files to create:** + +1. `scripts/validate-extension.sh` — Checks naming, structure, collisions + +**Validation checks:** +- Plugin name starts with `compound-engineering-` +- `compound-engineering-extension` keyword present in plugin.json +- No component names collide with core plugin components +- Required files exist (.claude-plugin/plugin.json, README.md) +- plugin.json is valid JSON with required fields + +## References & Research + +### Internal References + +- Brainstorm: `docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md` +- Related plan: `plans/grow-your-own-garden-plugin-architecture.md` +- Example plugin: `plugins/coding-tutor/` (minimal plugin structure) +- Core plugin: `plugins/compound-engineering/` (full plugin structure) +- Marketplace: `.claude-plugin/marketplace.json` + +### External References + +- [Claude Code Plugin Documentation](https://docs.claude.com/en/docs/claude-code/plugins) +- [Plugin Marketplace Documentation](https://docs.claude.com/en/docs/claude-code/plugin-marketplaces) +- [Plugin Reference](https://docs.claude.com/en/docs/claude-code/plugins-reference) diff --git a/docs/plans/2026-02-13-fix-research-process-first-action-plan.md b/docs/plans/2026-02-13-fix-research-process-first-action-plan.md new file mode 100644 index 00000000..995de2b1 --- /dev/null +++ b/docs/plans/2026-02-13-fix-research-process-first-action-plan.md @@ -0,0 +1,109 @@ +--- +title: "fix: Research process fails when transcript is first action" +type: fix +date: 2026-02-13 +--- + +# fix: Research process fails when transcript is first action + +## Overview + +`/workflows:research` stalls when a user tries to process a transcript as their first 
research action. Two cascading bugs: (1) Phase 2 doesn't handle inline transcript content, and (2) the transcript-insights skill has no empty-state handling for the plans directory. + +Related issue: EveryInc/compound-engineering-plugin#187 + +## Problem Statement + +When a user runs `/workflows:research process this transcript [content]` with empty research directories: + +1. Phase 2 checks `docs/research/transcripts/` for files → finds nothing → reports "no transcripts found" and exits +2. Even if the transcript were saved first, the transcript-insights skill tries to list research plans from an empty `docs/research/plans/` directory with no fallback + +The workflow command and the skill have **mismatched input handling** — the skill supports inline content (Step 1: "If content is pasted directly, proceed with that content") but the workflow command never passes inline content through. + +## Proposed Solution + +Two targeted edits to two files. No new files needed. + +### Fix 1: `plugins/compound-engineering/commands/workflows/research.md` + +**Location:** Phase 2: Process section (lines 82-114) + +**Change:** Add an inline content check before the file-based transcript check. Insert a new subsection before "### Check for Transcripts": + +```markdown +### Check for Inline Content + +If the research phase argument contains more than just the word "process" (i.e., transcript content was provided inline): +1. Extract the transcript content from the argument (everything after "process") +2. Generate a filename from the meeting title or date: `YYYY-MM-DD__transcript.md` +3. Save to `docs/research/transcripts/[filename]` +4. Skip the transcript selection step — proceed directly to "Process Selected Transcript" with this file path +``` + +Then update the existing "Check for Transcripts" section to say: + +```markdown +### Check for Transcripts + +**If inline content was already handled above, skip this section.** + +Look for `.md` files in `docs/research/transcripts/`. 
+
+[... rest unchanged ...]
+```
+
+### Fix 2: `plugins/compound-engineering/skills/transcript-insights/SKILL.md`
+
+**Location:** Step 2: Link to Research Plan (lines 31-40)
+
+**Change:** Add empty-state handling before the existing list instruction. Replace the current Step 2 with:
+
+```markdown
+### Step 2: Link to Research Plan
+
+Check for files in `docs/research/plans/`.
+
+**If no plans exist:**
+Skip the plan listing. Use AskUserQuestion to confirm: "No research plans found. This will be tagged as ad-hoc research. Continue?"
+If confirmed, set `research_plan: ad-hoc` in frontmatter and proceed to Step 3.
+
+**If plans exist:**
+List existing research plans by reading frontmatter from files in `docs/research/plans/`:
+- Show title, date, and status for each plan
+- Most recent first, cap at 7 entries
+- Include "Ad-hoc / no plan" as the final option
+
+Use AskUserQuestion to ask which plan this transcript belongs to. Store the plan slug (filename without date prefix and extension) in the output frontmatter.
+
+If "Ad-hoc" is selected, set `research_plan: ad-hoc` in frontmatter. 
+
+```
+
+## Acceptance Criteria
+
+- [x] `/workflows:research process this transcript [inline content]` saves transcript to file and processes it — even with empty research directories
+- [x] `/workflows:research process` (no inline content) still works with existing file-based flow
+- [x] transcript-insights skill handles empty `docs/research/plans/` gracefully by defaulting to ad-hoc
+- [x] transcript-insights skill still lists plans when they exist
+- [x] No changes to brainstorm, plan, or other workflow commands
+
+## Files to Edit
+
+| File | Lines | Change |
+|------|-------|--------|
+| `plugins/compound-engineering/commands/workflows/research.md` | 82-90 | Add inline content handling before file check |
+| `plugins/compound-engineering/skills/transcript-insights/SKILL.md` | 31-40 | Add empty-state handling for plans directory |
+
+## Plugin Metadata Updates
+
+This is a patch fix (no new components), so:
+
+- [x] Bump patch version in `plugins/compound-engineering/.claude-plugin/plugin.json`
+- [x] Add CHANGELOG entry under `### Fixed`
+- [x] No README or marketplace.json changes needed (component counts unchanged)
+
+## References
+
+- Issue: EveryInc/compound-engineering-plugin#187
+- Workflow command: `plugins/compound-engineering/commands/workflows/research.md:82-114`
+- Skill: `plugins/compound-engineering/skills/transcript-insights/SKILL.md:31-40`
+- Good empty-state pattern: `plugins/compound-engineering/agents/research/user-research-analyst.md` (Step 8)
diff --git a/docs/research/interviews/2026-01-13-participant-001.md b/docs/research/interviews/2026-01-13-participant-001.md
new file mode 100644
index 00000000..85d84b9f
--- /dev/null
+++ b/docs/research/interviews/2026-01-13-participant-001.md
@@ -0,0 +1,181 @@
+---
+participant_id: user-001
+role: "Sales Analytics"
+company_type: "Healthcare SaaS"
+date: 2026-01-13
+research_plan: ad-hoc
+source_transcript: "1.md"
+focus: "Strategic accounts dashboard demo and upsell opportunity planning" 
+duration_minutes: 20 +tags: [upsell-strategy, sales-enablement, account-management, dashboard, salesforce] +--- + +# Interview Snapshot: user-001 + +## Summary + +This was a collaborative working session where the interviewer demoed a self-built strategic accounts dashboard to a colleague with deep Salesforce and sales pipeline expertise. The dashboard pulls Jira and Salesforce data to surface upsell opportunities and account health signals. The participant validated the approach but challenged the routing strategy, arguing that qualified upsell leads should go to experienced closers (Tony and Kathy) rather than overwhelmed Strategic Account Managers (SAMs) who lack product knowledge. They also identified that upsells should be created as Salesforce opportunities rather than Jira tickets, and offered to help configure Salesforce product data as a first step. The session revealed a significant capability gap in the SAM role and a data-rich but under-leveraged Stars Compare benchmarking tool. 
+ +## Experience Map + +``` +Trigger → Context → Actions → Obstacles → Workarounds → Outcome +``` + +| Step | What Happened | Feeling | Tools/Process | +|------|--------------|---------|---------------| +| Trigger | Interviewer built a strategic accounts dashboard as a side project | Motivated, proactive | Claude, N8N, Jira API, Salesforce API | +| Context | SAMs are overwhelmed, untrained on products, and struggling with account management | Frustrated (interviewer), empathetic (participant) | Jira, Salesforce, Confluence | +| Action 1 | Demoed dashboard showing account health, momentum scoring, and upsell signals | Excited, hopeful | Strategic Accounts Dashboard (custom) | +| Action 2 | Showed Stars Compare lead generation list built with Clay | Impressed (participant) | Clay, Stars Compare tool | +| Obstacle | SAMs don't know products well enough to sell upsells even with tools | Skeptical, concerned | - | +| Workaround | Participant suggested routing leads to experienced closers (Tony/Kathy) instead of SAMs | Aligned, strategic | Salesforce opportunity routing | +| Outcome | Agreed on next steps: product configurator in Salesforce, present at February onsite | Energized, collaborative | Salesforce, onsite meeting | + +## Insights + +### Pain Points + +> "I don't know what the hell. I have no idea what a population health director even does, and I'm sitting in this meeting. And I'm having all these people talk at me about caps Files. I don't. I'm Googling caps." +- **Type:** pain-point +- **Topics:** product-knowledge, onboarding +- **Context:** Interviewer describing SAM experience with unfamiliar healthcare products (secondhand account) + +> "If we do spin this up and have this running, wouldn't they still run to that same issue? Are you the one going in there and, like, greasing the wheels?" 
+- **Type:** pain-point +- **Topics:** sales-enablement, account-management +- **Context:** Participant questioning whether a dashboard alone solves the SAM capability gap + +> "We also don't have the best, like, easily accessible analytics in general. And then it's like, hey, I'm looking at somebody, like, I can't tell if they have CAPS or HAAS." +- **Type:** pain-point +- **Topics:** analytics-access, product-visibility +- **Context:** Interviewer describing difficulty identifying which products accounts have + +### Needs + +> "Can they do that without there being an opportunity first? Like, don't you want to, instead of creating a JIRA ticket for that? If it's potentially an upsell, shouldn't it be an opportunity instead?" +- **Type:** need +- **Topics:** salesforce, upsell-strategy +- **Context:** Participant identifying that upsells need to follow proper Salesforce opportunity workflow + +> "You're going to have to eventually. It's going to have to be an opportunity anyways if it materializes into revenue for us to capture, so we might as well just make it an op." +- **Type:** need +- **Topics:** salesforce, pipeline-management +- **Context:** Participant explaining why opportunities should be created from the start rather than Jira tickets + +> "One is just mapping this out. On paper or just in an whatever you prefer. And then the second is like how we want to create that UI for them." +- **Type:** need +- **Topics:** planning, dashboard +- **Context:** Participant identifying that the tool needs proper planning before deployment + +### Behaviors + +> "Why don't we just hand the best leads to the best closers?" +- **Type:** behavior +- **Topics:** sales-routing, upsell-strategy +- **Context:** Participant proposing a strategic shift from SAM-driven to specialist-driven upsells + +> "If these guys are already our customers, the whole sales motion is a lot easier because they're not a cold lead, in a sense. 
You have Kathy and Tony, who are ideally, they actually know the product way more than the SAMs do, who are like, overwhelmed and catching up."
+- **Type:** behavior
+- **Topics:** upsell-strategy, sales-enablement
+- **Context:** Participant contrasting experienced closers with overwhelmed SAMs for existing customer expansion
+
+> "We can put that in Salesforce. Product configurators. Those two are the key things."
+- **Type:** behavior
+- **Topics:** salesforce, product-configuration
+- **Context:** Participant identifying where product data should live in Salesforce
+
+> "And it gets in qualification so it doesn't hurt their win rates or whatever."
+- **Type:** behavior
+- **Topics:** pipeline-management, salesforce
+- **Context:** Participant explaining the qualification stage protects sales metrics
+
+### Workarounds
+
+> "This is a little thing that I built. Strategic accounts Dashboard."
+- **Type:** workaround
+- **Topics:** dashboard, side-project
+- **Context:** Interviewer built a custom dashboard as a side project because no adequate tool existed
+
+> [Observed behavior -- no verbatim quote: interviewer built custom integrations pulling Jira and Salesforce data via N8N because native tools didn't provide the combined view needed]
+- **Type:** workaround
+- **Topics:** data-integration, n8n
+- **Context:** N8N workflow automation used to combine data from multiple sources that don't natively connect
+
+> "I got a hold of the Stars Compare tool data. And then I was able to calculate their intra state performance at a national level and their performance at a state level."
+- **Type:** workaround
+- **Topics:** lead-generation, stars-compare
+- **Context:** Interviewer manually mined Stars Compare benchmarking data to generate cold outreach leads using Clay
+
+### Desires
+
+> "Can you connect this to an LLM that just gives them that recommendation?" 
+- **Type:** desire +- **Topics:** ai-automation, sales-enablement +- **Context:** Participant immediately seeing the potential for AI-powered upsell recommendations + +> "I mean, this is like a top tier list to use." +- **Type:** desire +- **Topics:** lead-generation, stars-compare +- **Context:** Participant expressing enthusiasm for the data-driven prospect list + +### Motivations + +> "I'm kind of done. I'm exhausted from the account manager saying, like, they have no, like, because they're like, hey, we're doing our best." +- **Type:** motivation +- **Topics:** sales-enablement, account-management +- **Context:** Interviewer's frustration driving them to build tools rather than continue complaining about SAM performance + +> "I would love to, rather than start, like, complaining about, like, hey, the SAMs aren't doing right, be like, hey, I built you a tool. I'm going to give you the exact playbooks to run." +- **Type:** motivation +- **Topics:** sales-enablement, playbook +- **Context:** Interviewer motivated by empowering SAMs with concrete playbooks rather than criticism + +> "I think it's a good pipeline generation move." +- **Type:** motivation +- **Topics:** pipeline-management, upsell-strategy +- **Context:** Participant validating the strategic value of automated opportunity generation + +## Opportunities + +Opportunities are unmet needs -- NOT solutions. + +| # | Opportunity | Evidence Strength | Quote | +|---|-----------|------------------|-------| +| 1 | Users need a way to see which products each account has at a glance | Strong | "I can't tell if they have CAPS or HAAS. Like, there's a lot of problems in general" | +| 2 | Users need a way to route qualified upsell leads to the right seller based on expertise | Strong | "Why don't we just hand the best leads to the best closers?" | +| 3 | Users need a way to automatically identify accounts that are good candidates for specific product upsells | Strong | "I want it to actually run. 
And then I want it to create a... expansion ticket that gets assigned" | +| 4 | Users need a way to arm sellers with product-specific talking points and competitive benchmarks for upsell conversations | Strong | "This is the scores where they're low. This is their benchmarks. These are their competitors who are doing better than them. Like talk about this." | +| 5 | Users need a way to mine benchmarking data (Stars Compare) for outbound lead generation | Medium | "Have we actually mined the data and say, like, hey, here's the people who should be taking it?" | +| 6 | Users need a way to onboard SAMs on complex healthcare products without repeated training sessions | Medium | "I train all them, like, literally six times, and then like, no one there, and still every time there's a couple" | +| 7 | Users need a way to track account health momentum alongside expansion opportunities in one view | Medium | "How many monthly active users do they have? Like, are they actually using the product? Are they getting value out of it?" | + +**Evidence strength:** +- **Strong**: Participant explicitly described this need with emotional weight +- **Medium**: Participant mentioned this in passing or as part of a larger story +- **Weak**: Inferred from behavior or workaround, not directly stated + +## Hypothesis Tracking + +| # | Hypothesis | Status | Evidence | +|---|-----------|--------|----------| +| 1 | SAMs lack the product knowledge to effectively sell upsells | NEW | "They're just, like, catching up... I don't know what the hell. I have no idea what a population health director even does" | +| 2 | Automated upsell identification would increase pipeline generation | NEW | "I think it's a good pipeline generation move" | +| 3 | Experienced closers (not SAMs) should handle qualified upsell leads | NEW | "Why don't we just hand the best leads to the best closers?" | +| 4 | Stars Compare benchmarking data is an underleveraged asset for lead generation | NEW | "No, we haven't. 
They should. I've requested something like that before." | +| 5 | Upsells should be tracked as Salesforce opportunities from the start, not Jira tickets | NEW | "It's going to have to be an opportunity anyways if it materializes into revenue for us to capture" | + +## Behavioral Observations + +- **Tools mentioned:** Salesforce (opportunities, product configurators, qualification stages), Jira, N8N (workflow automation), Clay (lead enrichment), Stars Compare (benchmarking tool), Claude (AI for building dashboard), Confluence (playbook documentation), Slack +- **Frequency indicators:** SAM training happened "literally six times"; dashboard and analytics checked regularly; Stars Compare data collected periodically +- **Emotional signals:** Interviewer frustrated with SAM capability gap ("I'm exhausted"); participant impressed by dashboard ("this is dope", "I think they should make you VP of marketing"); both energized by the upsell automation vision; participant skeptical about SAM-driven approach ("wouldn't they still run to that same issue?") +- **Workaround patterns:** Built entire custom dashboard as a "side project" because no adequate tool existed; used Clay to enrich Stars Compare data into actionable lead lists; N8N used to bridge data silos between Jira and Salesforce; manual product identification because no consolidated view exists + +## Human Review Checklist + +- [ ] All quotes verified against source transcript +- [ ] Experience map accurately reflects story arc +- [ ] Opportunities reflect participant needs, not assumed solutions +- [ ] Tags accurate and consistent with existing taxonomy +- [ ] No insights fabricated or composited from multiple participants diff --git a/docs/research/personas/the-sales-operations-strategist.md b/docs/research/personas/the-sales-operations-strategist.md new file mode 100644 index 00000000..30aec86e --- /dev/null +++ b/docs/research/personas/the-sales-operations-strategist.md @@ -0,0 +1,92 @@ +--- +name: "The Sales 
Operations Strategist"
+role: "Sales Analytics"
+company_type: "Healthcare SaaS"
+last_updated: 2026-02-13
+interview_count: 1
+confidence: low
+source_interviews: [user-001]
+version: 1
+---
+
+# The Sales Operations Strategist
+
+## Overview
+
+The Sales Operations Strategist is a process-oriented sales professional who thinks in systems rather than individual deals. They have deep expertise in CRM configuration, pipeline stages, and sales workflows -- and they instinctively evaluate new tools through the lens of "does this fit into the existing sales process correctly?" Rather than getting excited about flashy dashboards, they ask the hard questions: who actually runs the plays, does the data flow into the right systems, and will this scale beyond a side project.
+
+They occupy a unique position between sales execution and operations. They know Salesforce intimately -- product configurators, qualification stages, opportunity routing -- and they use this knowledge to challenge naive assumptions about who should handle what. When presented with a tool that automates upsell identification, their first instinct is not "cool, let's deploy it" but "who are we routing these leads to, and can those people actually close them?" They advocate for matching lead quality to seller capability, preferring experienced closers over generalists who are still learning the product.
+
+This persona is pragmatic and collaborative. They validate good ideas quickly ("this is dope", "I think it's a good pipeline generation move") but immediately pivot to execution concerns: Salesforce configuration, opportunity creation workflows, and realistic seller capability. They are a critical ally for product and analytics teams building internal tools -- they won't block innovation, but they will insist it plugs into real sales processes correctly.
+
+## Goals
+
+1. Route qualified upsell leads to sellers who can actually close them (1/1 participants)
+2. 
Ensure all revenue-generating activities are tracked as Salesforce opportunities from the start (1/1 participants) +3. Leverage existing customer relationships for easier expansion sales motions (1/1 participants) +4. Get product data properly configured in Salesforce (product configurators, licenses, active users) (1/1 participants) + +## Frustrations + +1. SAMs lack product knowledge and are overwhelmed, yet they're expected to handle upsells (1/1 participants) +2. Tools alone don't solve capability gaps -- people still need to know what they're selling (1/1 participants) +3. Upsell workflows that bypass Salesforce opportunity tracking create data and attribution problems (1/1 participants) +4. No one has systematically mined benchmarking data (Stars Compare) for lead generation despite repeated requests (1/1 participants) + +## Behaviors + +| Behavior | Frequency | Evidence | +|----------|-----------|----------| +| Evaluates new tools against existing Salesforce workflows before endorsing | Per interaction | (1/1 participants) | +| Challenges routing assumptions -- asks "who actually runs this play?" | Per interaction | (1/1 participants) | +| Advocates for qualification-stage opportunities to protect win rates | Per interaction | (1/1 participants) | +| Quickly identifies CRM configuration steps needed to operationalize ideas | Per interaction | (1/1 participants) | +| Validates good ideas fast, then pivots to execution concerns | Per interaction | (1/1 participants) | + +## Key Quotes + +> "Why don't we just hand the best leads to the best closers?" +> -- user-001, proposing specialist-driven upsells over SAM-driven approach + +> "If we do spin this up and have this running, wouldn't they still run to that same issue? Are you the one going in there and, like, greasing the wheels?" +> -- user-001, questioning whether tools alone solve the SAM capability gap + +> "You're going to have to eventually. 
It's going to have to be an opportunity anyways if it materializes into revenue for us to capture, so we might as well just make it an op." +> -- user-001, explaining why upsells must be Salesforce opportunities from the start + +> "If these guys are already our customers, the whole sales motion is a lot easier because they're not a cold lead, in a sense." +> -- user-001, distinguishing expansion sales from new logo acquisition + +> "I think it's a good pipeline generation move." +> -- user-001, validating automated upsell opportunity creation + +## Opportunities + +| # | Opportunity | Evidence Strength | Participants | Key Quote | +|---|-----------|------------------|-------------|-----------| +| 1 | Users need a way to route qualified upsell leads to the right seller based on expertise | Weak | user-001 | "Why don't we just hand the best leads to the best closers?" | +| 2 | Users need a way to see which products each account has at a glance | Weak | user-001 | "I can't tell if they have CAPS or HAAS" | +| 3 | Users need a way to automatically identify accounts that are good candidates for specific product upsells | Weak | user-001 | "I want it to actually run. And then I want it to create a... expansion ticket" | +| 4 | Users need a way to arm sellers with product-specific talking points and competitive benchmarks | Weak | user-001 | "This is the scores where they're low. These are their competitors who are doing better than them." | +| 5 | Users need a way to mine benchmarking data for outbound lead generation | Weak | user-001 | "Have we actually mined the data and say, like, here's the people who should be taking it?" | +| 6 | Users need a way to onboard SAMs on complex healthcare products without repeated training | Weak | user-001 | "I train all them, like, literally six times" | +| 7 | Users need a way to track account health momentum alongside expansion opportunities in one view | Weak | user-001 | "How many monthly active users do they have? 
Are they actually using the product?" | + +## Divergences + +_No divergences identified yet._ + +## Evidence + +| Participant | Research Plan | Date | Focus | +|------------|--------------|------|-------| +| user-001 | ad-hoc | 2026-01-13 | Strategic accounts dashboard demo and upsell opportunity planning | + +## Human Review Checklist + +- [ ] Goals and frustrations grounded in interview evidence +- [ ] Behavior counts accurate (absence not counted as negative) +- [ ] Quotes are exact (verified against source interviews) +- [ ] Opportunities framed as needs, not solutions +- [ ] Divergences section reflects actual contradictions +- [ ] Confidence level matches interview count threshold diff --git a/docs/research/transcripts/1.md b/docs/research/transcripts/1.md new file mode 100644 index 00000000..f23fcb02 --- /dev/null +++ b/docs/research/transcripts/1.md @@ -0,0 +1,252 @@ +Meeting Title: Strategic accounts dashboard and upsell opportunity planning with sales analytics team +Date: Jan 13 + +Transcript: + +Them: Hey, man, what's going on? +Me: What's up, dude? How are you? +Them: Nothing much. Happy Tuesday. +Me: Not true. Not true. Especially. +Them: Yeah, you're right. +Me: I see you popping around on this open community office hours all the time. +Them: Yeah. You caught me. +Me: That was the hardest. +Them: It was good. Got some time to rest, recharge, ski a little. How about you? +Me: Oh, nice. Where are you skiing? +Them: I'm down in Montana. Or up in Montana, I should say. I'm in California right now. Yeah. +Me: Whoa. +Them: I wish we went to Kalispell, which is, like, on the other side of Montana. Whitefishes. That's rice. +Me: Okay, got it, got it, got it. +Them: Cage. +Me: I'm looking at. Things that are. I'm trying to make this list for you, and it's not right yet, but it will be. +Them: Oh, yes. Oh, yes. +Me: So we had. We moved into our house, and it's, like, right next to Emily's parents. This is a couple years. This is, like, in Covid. 
+Them: Oh, nice. +Me: But across the street. +Them: Okay? +Me: There was this really nice set of neighbors. They're really cool, but they're older. And they had a kid who was like a junior in high school when moved in. Then he graduated. The older girls going, but basically the kid graduated, and they were like, we're going to Montana. We're going to bozeman. And the mom was, like, needing. She's like, my older daughter is probably never going to have kids. +Them: Yeah. +Me: Our younger son is. Should not be having kids right now. I need grandkids. And I was like. I was like, you got him. We need help. You want to come over? +Them: Yeah. Yeah. +Me: And so she was like fairy God across the street. +Them: Run them out. Yeah. That's awesome, though. +Me: It was great. And they're like, we're moving to Bozeman. Do you want to come visit? I was like, do not. Just one of those. Like, I'm not. Because, like, now they're, like, fancy. Or at least the husband's fancy. +Them: Yeah, throw that out there. Yeah. +Me: Don't be like, oh, yeah, no, sorry. I won't be in Jackson Hole that time of year. We can't. I'm like, I have nothing better to do. I will book a ticket right now if you invite me to. I can stand your nice house and, yeah, come. And so we went over there. We went in the summer and it was beautiful. +Them: That's literally what it is. Yeah. Yeah, exactly. Take the kids too. Yeah. Oh. What do you think? Yeah. In summer it's incredible. +Me: It was amazing. It was nuts. It was so. +Them: Do you get a chance to go to Paradise Valley right next to it, or no? +Me: Yes. Yes. We went all the way into Yellowstone and just, like, it was. It was stunning. It was so cool. +Them: Yeah. I went to a wedding there. I think it was two years ago, and it was just crazy, you know? Apparently John Mayer, like, lives out there. And plays at local bars. +Me: That's what they were telling, like. Well, she's like, Glenn Close is, like, their neighbor. Like, all these, like, famous people. 
Bozeman's crazy. It's a micropolis. I had never heard it because the guy who, like, used to be in real estate. +Them: Holy crap. Yeah. +Me: It's like the fastest growing Micropolis. Whatever. But it's this crazy thing where, like. They were, like, doing this where, like, you buy a house for $750,000. +Them: Yeah. +Me: When you buy, like, a piece of land with a house on it, you level the house immediately, and then you build a $2 million house on top of it. And then there's, like, all these, like, beautiful home, but it's like, all the, like, housing crisis problems, like, but, like, in tiny. +Them: Yeah, exactly. +Me: Like you're seeing crazy houses go up and there's all these restrictions and zoning laws and all this sort of stuff. +Them: Oh, for sure. +Me: And then. And then it's like Van Life in the Park. +Them: Yeah. It's so gnarly. +Me: It's, like, all white and her daughter's vandalized girl. +Them: Yeah. Dude, it's. It's. +Me: And so, yeah, crazy. I'm like, this dude not wearing news. Like, riding a bike without shoes is not living in that house. Like, where does this roll? +Them: Yeah. No 100%. Every time. Yeah, and my fiance went to school there. She went to MSU back before it blew up. Like, yeah, before Covid and before everyone found out about Bozeman. And, yeah, every time we've been now, you're totally right. It's like 50 new little housing development things and then, like, a whole street. Row full of RVs and cars and people. Car, camera. It's so. It's so wild. +Me: All right. Yeah, exactly. It's crazy. Y. Eah. And they're like. Because, I guess. Well, is that. Is that. Or is it Eastern or is it. +Them: But it's so. It's pretty. It's eastern Montana. +Me: Yeah, yeah, that's what. Yeah, the. I mean, they're like. You probably know it. Like, there's like a sushi place on the. Like, one back from. And then like four, four rows back, there's, like the park and that's like, that's where they are, right? They're like. 
Six blocks from, like, the main saloon strip. Like, I could. I could, like, picture it like, I walk there because there's only one main drag. +Them: Yeah, I wouldn't trust it. Yeah. Yeah. Yeah, well, you know, like, that whole area too. It's just like, what's next? So Bozeman blew up. Then I guess Livingston, which is closer to Yellowstone's, probably gonna be next. And Butte? I don't know if you had a chance to go to Butte, but. +Me: Yeah. Yeah. Yeah. Yeah. No, we did not go there, but. +Them: You didn't miss out. I wasn't a fan of you. +Me: It was. Well, it's funny, too, because I used to, like. That was, like, the joke. I used to work in television, and we used, like, all these, like, broadcast TV stations, and everybody would make fun of Bozeman Butte because it was like, really? They're like. He's like, there's literally one. +Them: Yeah. Yeah. +Me: Light in my town. Like, he's like, in the whole town. And, like, you know, all these, like, people from Seattle are showing up, and we're like. We're like. Butte's like a strong cbs. Like, it's looking pretty good. +Them: Yeah. Yeah. Viewed as. I think I saw a comment. It was like the only place in Montana where two people will just fight for absolutely no reason. Can only go down and be. That's true. +Me: Exactly. No. Yeah, exactly. Yeah. +Them: But, you know, I think it's gonna do turnaround, like they all will at this point. They're all gonna. +Me: Well, no, I think. I mean, that's. And I think they kind of know it, too, because. We don't want you here. Like, we moved to Monta from you and like, we found this, like. +Them: Yeah, watch you here. +Me: Boss place to live, and we just didn't tell anybody. And now, like, you guys? Yeah. +Them: Everybody knows. Freaking Yellowstone. That's. That show just, like, ruined it. +Me: Yeah, exactly. +Them: For some people, I guess. But. But yeah. You wanted to talk about showing. +Me: Yeah. Let's. Let me see if I. Yeah, let me see if this is actually working right now. 
I think it might actually be. +Them: Analytics products. +Me: Yep. Sweet. Okay, cool. Let me see. If I'm going to upload, this is probably wrong. I haven't even checked. But. I will see if we can get. This to work, okay? All. Right. Let's check this. I'm putting it in. All right. Let me share screen. This is not right, but let me see if this would be helpful. Account id. So here's like, the account id, here's Apex Health Solutions, and then we need to find out. This product will be what ultimately becomes either like caps or pass or medication adherence. +Them: Exactly. Where did you get this from? Salesforce. +Me: All. Right. So I'm going to bring you in under. +Them: Oh, this is from jira. Okay. +Me: Well, it's both. So here's my side project that I've been working on in the backgrounds. And so I'm going to show you and swear you to secrecy until such time as I can at least get from. Validation from somebody that I'm not going to get fired. This will actually help. All right, so I'll tell you, I have struggled with. The Sam's in general. +Them: Yeah. +Me: At least specifically as it relates to, like, products and, like, all the change that's going on in there and, like, hard job and, like, there's a lot of stuff being thrown at them. Like, all of the engagement ones are now in charge of analytics. Like, I train. I train all them, like, literally six times, and then like, and, like, no one there, and still every time there's a couple, it's getting better. +Them: Still guts it. +Me: But it's just like, it's been rough. +Them: Yeah. +Me: And somebody like, I don't have. We also don't have the best, like, easily accessible analytics in general. And then it's like, hey, I'm looking at somebody, like, I can't tell if they have CAPS or haas. Like, there's a lot of problems in general, and some of it we're trying to, like, productize. But where I want to get to. +Them: Sure. +Me: This is a little. Thing. Yeah, this is a little thing that I built. 
Strategic accounts Dashboard. +Them: There you go. +Me: But I basically want to. +Them: Oh, perfect. +Me: Kind of pull all this data in, and basically, this is probably like a scoring mechanism. +Them: Yeah. +Me: But I don't know or how long it's going to take. But basically, it's like it pulls in JIRA data, it pulls in Salesforce data. So here's Kansas. It's just adding up all the revs. This is the tickets that are associated with it. So this is actually like the JIRA data. +Them: This looks really cool. I like this. +Me: So this is like all the data refreshes, the monthly refreshes. So this is like someplace where you. Because, like, they have to go to Jira and, like, I don't know, like, they have to, like, figure out where is this pulling from. +Them: Yeah. +Me: The client analytics data, like, so the client analytics data is, like, flowing into here. And it basically just says, like, hey, here's all this with the overarching goal of basically, like, saying anything that has either, like, an upsell opportunity or renewal will, like, get them in this, like, top right box of, like, positive momentum. And then, like, you know, hey, here's like, some other things. +Them: Yeah. Okay? Grow zone. Yeah. Downgrades, returns would be in the. In the bottom area. +Me: And then so here, like government employee Health Association. This has a churn opportunity that has an opportunity that was like during the date timeline between 23 and then, like, A 2026. And so. This puts them in this bottom box, and at least we know. +Them: Down. +Me: But my overarching goal to, like, start to combine these so we can start to say like, and then basically, this momentum thing is basically, like, how many monthly active users do they have? Like, are they actually using the product? Are they getting value out of it? +Them: Yeah. +Me: And then where I'm really going is, like, it's going to run and there's going to be a library of expansion opportunities. Like, do you have caps? 
It's like, you should get Haas. Like, do you have, like, do you have, like, this? +Them: So. So literally on that point, is there a tab or area where you see what product they have and then, like, how it was set up, or is it just captured as an opportunity, like, +Me: Right now. It just pulls in opportunities and account data, and then it basically computes those, like, +Them: Dan. Claude is good. +Me: Right. +Them: Yeah, it's so good. So I see your vision. Yeah. Oh, go ahead, go ahead. +Me: There. That's okay. But basically, what it, like, I want to. And this is, like, create Jira ticket. But basically, like, the goal is that I want it to, like, actually run. And then I want it to create a Jira, like, expansion ticket that gets assigned to a SAM that's like, hey, like, run this playbook. Like, talk to them about like talk to them about this? +Them: Can they? Can they do that without there being an opportunity first? Like, don't you want to, instead of creating a JIRA ticket for that? If it's potentially an upsell, shouldn't it be an opportunity instead, is what I'm saying. +Me: Yeah, like one of the. One of the two, like, and I don't know, like, which one they want to use. Like, I think, like, probably like an opportunity. The only thing. Yeah, I guess that would be, like. Is it, like, count as an opportunity if it's like. +Them: You're going to have to eventually. So it's a great question. +Me: This is like a push. This is a push. +Them: You're going to. It's going to have to be an opportunity anyways if it materializes into revenue for us to capture, so we might as well just make it an op. +Me: Yeah, that's a good point. Yeah. +Them: And it gets in qualification so it doesn't hurt their win rates or whatever. So I see, I see your vision. So you basically this platform here will pull in what products they have, upsells and downturns and then product specific data about like the number of licenses and active users and everything in between, okay? Cool, cool. 
+Me: Right. Right. And this is basically. But this is kind of, like, at least for me, that it's helpful. But, like, what I really want this to do is I really want this to be, like, a flexible tool that people can use to start, like, creating these qualification opportunities, because, like, they should, like, like, they're kind of like, deer in headlights. At least from what I've seen, it's like. I don't know. +Them: Yeah. Yeah. And why is that? So, like, they're supposed to be on the front line. +Me: They're just, like, catching up. They're like, I don't know what the hell. I have no idea what a population health director even does, and I'm sitting in this meeting. And I'm having all these people talk at me about caps Files. I don't. I'm Googling caps. I don't know what that means. Like, apparently it's a training thing, and I think it's like, also, it was previously an implementation job, and that got like, +Them: So it's a training thing. +Me: Pulled away from them. But then. But then, like, nobody, like, filled the gap there. +Them: Yeah. But here's my question for you, though, on that. If we do run this or if we do spin this up and have this running, wouldn't they still run to that same issue? Are you the one going in there and, like, greasing the wheels? +Me: Well, no, I didn't. That's why. That's why it's like. That's why it's like arming this combination of, like, this and a cat made, but it's like. You've been assigned. You're qualifying. We have qualified this client for the Haas Upsell. This is what the so is. Go to this Confluence page. +Them: Yeah. +Me: Call this person present this deck. Once you get them to say yes, because. And then, like, also what's in here, because we actually have data from the stars. Compare tool. Be like, this is the. This is the scores where they're low. This is their benchmarks. These are their competitors who are doing better than them. Like talk about this. +Them: I see. Yeah. +Me: Your first presentation. 
+Them: Your talking points, and you have, like, basically. Yeah, so you have a runway of, like, presenting. You're creating an upsell, and you're basically, like, greasing the wheels. +Me: Yeah. I feel like these are. These are the latest. Like, these are the latest case studies. This is what the HAAS is like. Present this deck. And talk about these studies. +Them: Yeah. +Me: And say like here we thought about this from you. We saw the latest stars, compare like data. We know that you guys are suffering in this area. We know it's hard. We've had success. Do you want to like. We would love to talk to you more about it. +Them: Can you connect this to an LLM that just gives them that recommendation? +Me: Yeah, that's what I'm doing. +Them: Ok? +Me: And so that's like this. This is like the UI part of it, but the other fun stuff. +Them: Ok? +Me: That you start to do. +Them: Yeah, go ahead. +Me: I don't know if you mess with n8n much. +Them: I have when you and I were working on it. I think it's a great tool. +Me: But basically, that's what this. That's what this all runs off of. +Them: But were you thinking of using n8n as a trigger to create a new upsell op, like automatically, or is it going to be okay? +Me: Yeah. Yeah. And basically, that's what the. +Them: How do you. +Me: So, like, even here, like, right? +Them: How do you make sure? +Me: All. Right. So then. Here's. Why is this. There we go. Okay. So, like, here's the one that pulls the Jira tickets. It's a webhook, so it's responding to the actual thing, but you can also set this up. To where you're adding in. LLM. +Them: Yeah. +Me: An A1 and then whatever. +Them: Because that's. That's what you'd have to train to tell it. +Me: But then. And the tool is like this. You can, like, add. Yeah. Call n8n workflow tool. And so you could say here, like, pull the JIRA ticket. You could, like, pull all the Jira tickets if you want. You can. +Them: You would have to. 
Can you also pull Is it just one poll, or can you pull Salesforce data as well? +Me: No. Then you have as many tools as you want. So you could say, like, call the and I There's a. This is the Sam Cron which pulls all Salesforce data. +Them: Okay? So this all makes sense. I think we need to start with two things if we want to actually do this. One is just mapping this out. On paper or just in an. I don't know, whatever you prefer. +Me: Right. +Them: And then the second is like how we want to create that UI for them. I mean. Claude did a very good MVP from what I could see, like. +Me: No. Well, that's what I think, like, and honestly. My overarching hope is that it might be. Hey, look, if it's crazy successful and everybody wants access to this little, like, tool, like, great, yeah, we'll figure out a way to deploy it and get people access to it. +Them: Yeah. +Me: But my other hope is that it's like, we already have enough places. Like, maybe I'll just have that on my screen. But, like, what we really need to do is, like, the whole point is that you don't actually have to look at this, just look at all the tools you're normally doing. Just like. Hey, like, here's the salesforce. Like how many of these qualified leads actually turn out? Or, like, okay, well, we need to do a better job of making these qualifications or, like, maybe doing a better job of making these opportunities because, like, Like, like, basically, like have our target rate be. Or like target metric be. Like, how many qualified leads become advanced in the next stage? And then that's sort of the goal that we keep. Going against. And I like my hope would be. +Them: Do you mean, qualified opportunities? +Me: Yes. Well, sorry. Yes. How many? Like, because you said the qualification, basically. Where the qualification, whatever the opportunity that we make, that doesn't count, then it progresses to the next stage where it starts to actually be like, hey, this is like, industry. Yeah. +Them: Yeah, yeah. 
That's actually exactly it. Yeah. Qualification discovery. +Me: Yeah. +Them: Okay? So. +Me: But this is, like, total side project. You're the first person I talk to about it, but I think, like, this is what I'm hoping to do because, like, I want to actually start hitting more of, like, our metrics. And then, like, I'm kind of done. I'm exhausted from the account manager. Saying, like, they have no, like, because they're like, hey, we're doing our best. But, like, the ones who know what they're doing are doing great. And then I'm worried about. +Them: No, this is dope. +Me: I would love to. Rather than start, like, complaining about, like, hey, the SAMs aren't doing right. Be like, hey, I built you a tool. I'm going to give you the exact playbooks to run. Talk to them about these upsell opportunities. They're already in Salesforce like, and just, like, give us feedback if they're doing well or not. +Them: Here's the thing. What about bringing sales, like the analytics sales guys, Tony and Kathy instead? +Me: Yeah. +Them: Is they? Why are we making it like SAM focused with these guys? +Me: That's a good point. Yeah, we could definitely. Yeah, we could definitely do it that way, too. +Them: You tell me. Like those two have the most experience selling. +Me: No, they are definitely the better. Yeah, they're definitely the best ones. And I think. +Them: So my opinion. +Me: Yeah, I'm actually curious. I'll be curious as to if they're like, I already know this, or like, this, like, already, like. +Them: Yeah. +Me: Well, I guess it's helpful, too, because there are they're not taking over all the accounts. And this is actually a better idea because we're coming up in February in Boston and they're already talking about other people are going to start. +Them: Yeah. +Me: Like, these are all the current client analytics customers I haven't brought in for non analytics customers. And like, where would be a good place to start? But that would be sort of like the next. 
+Them: Yeah. Before we get to that step. Here's. Here's my thoughts on it. If these guys are already our customers, the whole sales motion is a lot easier because they're not a cold lead, in a sense. You have Kathy and Tony, who are ideally, they actually know the product way more than the SAMs, who are like, overwhelmed and catching up. +Me: Cool. +Them: Why don't we just hand the best leads to the best closers? +Me: Yeah. That's a good point. Well, I think that, too. And then the other question that I had on my other side project was, do you know how much if there is anything been done with. The Stars Compare tool. I know that, like, it's kind of like we push people to it, we ask people, like, we market it, and we basically have it as, like, a lead generator, like, if they do it. But have we actually mined the data and say, like, hey, here's the people who should be taking it. +Them: That's it? Yeah. No, we haven't. They should. I've requested something like that before. +Me: Okay? Here. I'll at least give this to you. This is Clay, which is another fun little tool. This is all for free. +Them: Yeah. +Me: I've not messed with it, but basically what I did is I got a hold of which data. Okay, this is. I got a hold of. The stars. Compare tool. Data. And then I was able to. For everybody. Calculate. Their intra state. Basically like their performance at a national level and their performance at a state level. +Them: Compared to each other, right? Yeah. +Me: And then I basically just said, here's all the list. This is everybody. Where is it? Contract summary. I think I put it. +Them: I mean, this is like a top tier list to use. +Me: This is everybody. This is everybody who. Shoot. I can't remember which one I did it in, but anyway. It was everybody who is in the bottom. State quartile. +Them: Yeah. +Me: And then. I took out some current customers and I took out some ones that I knew as prospects. And then. I just said, like, I want to find the directors of quality. 
At these companies, and this is the list that popped out, and this is their emails. +Them: Dude, this is like. I think they should make you VP of marketing. But this list. For? No, but actually, like, this list is exactly what analytics needs to be hitting. +Me: But this is not. This is like, not. This was pretty straightforward. To generate because we did all the hard work by, like, getting all the data and then. +Them: Yeah. Yeah, like, we got all those responses. I just don't know if anyone ever, like, follow us up with them directly. +Me: I think there's some of it, but, like, I feel like basically we should just have this be like, I don't know how to. +Them: And, like, whether they go down a rabbit hole. +Me: But basically, like, yeah, I think, like, this is, like, cold outreach campaign to basically just say, like, book time with. Tony or, like, book time with, you know? +Them: Yeah, like, I mean, almost be automated. +Me: Totally. +Them: But I think this is good. I mean, this is also good. On top of that. And I don't know how we can tie this in directly to what you showed me on Nan. And the lower you have, because those were existing clients. This is like a new logo, like. +Me: This is new logo. Yeah, this is new logo. Sdr. +Them: Yeah. Which is cool. Like, that's still more. That's still more of. +Me: Yeah. +Them: It's just more white space for us to attack. So I know we don't have too much time. What do you need for me to help you? +Me: I'm going to get you the list of products to people and then. Or products to accounts and then do you need anything else to make that, like. Yeah, the Salesforce account one. +Them: We can put that in Salesforce. Product configurators. Those two are the key things. What would be nice is like any comments or details about like specific configurations or license. +Me: Like, why the business, Medicare, Medicaid. +Them: I think we already have that under assets anyways, but it would be like. +Me: Okay? 
Well, I think they actually. And then total allowed lives. +Them: Yeah, well, no, like, licenses, maybe. +Me: Yeah. Licenses. Yeah, we should. +Them: And then active users. +Me: Yeah. We can definitely. I will get those things. Okay. +Them: That would be. That would be the first step. And then after that, what do you want to do next? +Me: I think I just want to get this as a field at the account level in Salesforce. +Them: What do you want to call. +Me: What is the portals call it? What does the portals call? Like account. +Them: It's a product configurator. +Me: Do they have it at the account level or is it okay? +Them: They do. +Me: I would just want to like that thing. And I would. Whatever they have it. Like I would want to replicate basically exactly their currency. +Them: Just to, like, share my screen with you. You're saying. This. One sec. This list here. +Me: Exactly. Yep. Solutions. +Them: Yeah. +Me: Yeah, exactly. +Them: Renewal letters. Premium billing ed. Yeah, so? +Me: It would be predict and it would be the product would be caps Haas medication adherence. +Them: Yeah. Perfect. +Me: Yeah. +Them: Yeah. So that's what I can do for you. Okay, so that's first. What. What's the next step after that? Like, where do you want to then switch over to N8 N? +Me: On the. Yeah. So I think. Well, actually, I think. I'm going to pass that list by Dan Reddy probably actually is he thinks still like a reasonable. Cold. Outreach person. To talk to. Why? Does. Ok? Ay, there we go. +Them: Daniel Reddy is currently unavailable. +Me: To. Show you something. Okay? I don't know where. Where I went. Yeah. +Them: Yeah. I don't know what happened there. +Me: And a ton, I think. Yeah. Next step would be. Let's. I like the idea about moving it to, like, the best closers. I want to talk to Phil Brian about it. I want to make sure that. And then I'll talk to Kathy and Tony and make sure that they have, like, the right that they're, like, on board for these sort of things. 
I don't know if. You felt it might actually be better, that's what I'll do. I'm going to frame it. I'm going to show it off. At the onsite in early February. And then assuming they're good with it, like, I'll be able to meet with enough people and kind of show it off. To like. I don't think it's hard to make opportunities like in Salesforce. With AI Like, I just would need you to help me make sure that we get it, like, with this mapping and this sort of, like, configuration. +Them: Yeah. Yeah, exactly. +Me: But that'll be the goal. It'll basically be like, hey. Let's identify upsell opportunities. And put them into the qualification stage. And. Have them be associated with the right accounts and then we'll let them decide of, like, hey, you know, is it Kathy and Tony, or is it, like, move towards. But I feel like that'll be helpful too, because, like, we're literally doing this on site training. For the salespeople. To teach them about, like, even what the hell, like, predict product line. And like is. And like, who are these customers? And I think if we could at the end say, like, hey, now we have an automated tool. That's going to take the guesswork out of, like, This is the right opportunity to present to this client. Like, hey, like, if someone's new and they're not doing well in caps. Do caps, like, right? +Them: Here it caps to you, yeah. +Me: Can't, like, talk to them about caps today. +Them: Yeah. +Me: And say these things. +Them: I think it's a good pipeline generation move. +Me: Yeah. +Them: Yeah, for sure. +Me: Okay, sweet. +Them: Awesome, man. I'm a little bit over, but, yeah, gave me that list, and then I'll go from there. +Me: I'm going to go through that list, and then. Yeah, we'll talk in. Yeah, early February will be the latest. Well, we'll talk between them, but that's like I. I want to get. Now, I have, like, a little deadline to get this out. +Them: Go. Sounds good. +Me: Sweet. +Them: Alrighty. Catch you later. 
\ No newline at end of file From b76a28dfc381b110c2efd0d2ac87bba8a5c95686 Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Mon, 16 Feb 2026 06:10:52 -0500 Subject: [PATCH 12/13] chore: revert unnecessary bun.lock format change Co-Authored-By: Warp --- bun.lock | 1 - 1 file changed, 1 deletion(-) diff --git a/bun.lock b/bun.lock index 3a07728e..26361fc8 100644 --- a/bun.lock +++ b/bun.lock @@ -1,6 +1,5 @@ { "lockfileVersion": 1, - "configVersion": 0, "workspaces": { "": { "name": "compound-plugin", From 07e30feb83cda88cd7187688c91657eb7133c20e Mon Sep 17 00:00:00 2001 From: Matt Thompson Date: Mon, 16 Feb 2026 15:15:19 -0500 Subject: [PATCH 13/13] chore: removed test transcripts / artifacts --- ...02-13-documentation-workflow-brainstorm.md | 67 ----- ...-namespaced-extension-system-brainstorm.md | 136 ---------- ...-02-13-feat-documentation-workflow-plan.md | 122 --------- ...3-feat-namespaced-extension-system-plan.md | 238 ----------------- .../interviews/2026-01-13-participant-001.md | 181 ------------- .../the-sales-operations-strategist.md | 92 ------- docs/research/transcripts/1.md | 252 ------------------ 7 files changed, 1088 deletions(-) delete mode 100644 docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md delete mode 100644 docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md delete mode 100644 docs/plans/2026-02-13-feat-documentation-workflow-plan.md delete mode 100644 docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md delete mode 100644 docs/research/interviews/2026-01-13-participant-001.md delete mode 100644 docs/research/personas/the-sales-operations-strategist.md delete mode 100644 docs/research/transcripts/1.md diff --git a/docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md b/docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md deleted file mode 100644 index 596cd37e..00000000 --- a/docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md +++ /dev/null @@ -1,67 +0,0 @@ -# 
Documentation Workflow Brainstorm - -**Date:** 2026-02-13 -**Status:** Ready for planning - -## What We're Building - -A `/workflows:document` command that automatically updates project documentation after a feature is implemented and reviewed. It fills the gap between code review and knowledge capture in the workflow chain: - -``` -Research → Brainstorm → Plan → Work → Review → Document → Compound -``` - -**Scope:** Full project documentation — README, CHANGELOG, API docs, user guides, and inline code docs. Not limited to the plugin itself. - -## Why This Approach - -The compound-engineering workflow chain currently has no step for updating user-facing documentation. After `/workflows:work` finishes implementation and `/workflows:review` validates the code, documentation updates are left as a manual afterthought. This creates a gap where features ship without corresponding docs. - -A phased workflow command (not an agent) was chosen because: -- It follows the established workflow pattern (phase-based, skill-loading, handoff points) -- It fits naturally in the chain between Review and Compound -- A single workflow is simpler to maintain than multiple specialist agents -- The propose-then-confirm model gives users control without being tedious - -## Key Decisions - -1. **Form factor:** New workflow command (`/workflows:document`), not an agent -2. **Chain position:** After Review, before Compound -3. **Discovery method:** Git diff + chain docs (brainstorm/plan) for full context -4. **Autonomy model:** Propose-then-confirm — analyze what needs updating, present a plan, get approval, then execute -5. 
**Documentation scope:** Full project docs (README, CHANGELOG, API docs, user guides, inline code docs) - -## Design - -### Phase 1: Discovery - -Analyze the codebase to understand what was built and what docs need updating: - -- **Git diff analysis:** Read the diff between current branch and main to identify what changed -- **Chain doc lookup:** Find and read any brainstorm/plan documents for this feature (auto-detect from `docs/brainstorms/` and `docs/plans/` by date or topic) -- **Doc inventory:** Scan the project for existing documentation files (README, CHANGELOG, API docs, guides, etc.) -- **Gap analysis:** Compare what was built against what's documented - -### Phase 2: Proposal - -Present a structured proposal to the user: - -- List each doc file that needs updating -- For each file, describe what changes are needed (new section, updated section, new entry, etc.) -- Flag any docs that should be created (e.g., "No API docs exist yet — should we create one?") -- Use `AskUserQuestion` to get approval (approve all, select specific items, or skip) - -### Phase 3: Execution - -Make the approved documentation changes: - -- Update each approved doc file -- Follow existing doc conventions (detect style from existing content) -- After all updates, show a summary of what was changed -- Offer handoff to `/workflows:compound` for knowledge capture - -## Open Questions - -1. Should the workflow also update inline code comments/docstrings, or just standalone doc files? -2. Should it create a documentation PR comment summarizing what was updated (useful for team visibility)? -3. How should it handle projects with no existing docs — offer to scaffold a basic doc structure? 
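The Phase 1 discovery steps above could be sketched from the shell roughly as follows. This is an illustrative sketch only — the exclusion patterns and `-maxdepth` limit are assumptions, and the real command would also handle the PR-based diff base rather than always falling back to `main`/`master`:

```shell
# Sketch of the Phase 1 discovery steps (patterns are illustrative).

# Diff base: merge-base against main, falling back to master
base=$(git merge-base HEAD main 2>/dev/null || git merge-base HEAD master)

# Git diff analysis, excluding lock files and tests
git diff --stat "$base"...HEAD -- . ':(exclude)*.lock' ':(exclude)*test*'

# Chain doc lookup: most recent brainstorms and plans
ls -t docs/brainstorms/*.md docs/plans/*.md 2>/dev/null | head -n 5

# Doc inventory: existing documentation files
find . -maxdepth 3 \( -name 'README*' -o -name 'CHANGELOG*' -o -path './docs/*.md' \) 2>/dev/null
```

The gap analysis itself stays with the model — these commands only gather the raw inputs (diff, chain docs, doc inventory) that Phase 1 compares.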
diff --git a/docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md b/docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md deleted file mode 100644 index c8a9af2f..00000000 --- a/docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md +++ /dev/null @@ -1,136 +0,0 @@ -# Namespaced Extension System for Compound Engineering Plugin - -**Date:** 2026-02-13 -**Status:** Brainstorm -**Author:** Matthew Thompson - -## What We're Building - -A general extensibility system that lets users create optional plugins ("extensions") that work alongside the core compound-engineering plugin. Extensions follow a naming convention (`compound-engineering-`) and are distributed through the same marketplace. They enable three categories of customization: - -1. **Custom agents/skills** - Specialized agents and skills for specific domains (e.g., a Phoenix reviewer, a Terraform skill) -2. **Framework packs** - Bundled sets of agents + skills + commands for a specific stack (e.g., "Rails Pack" with Rails-specific reviewers, generators, and conventions) -3. **Convention configs** - Team/personal rules and style preferences delivered as curated CLAUDE.md snippets that influence how existing core agents behave - -## Why This Approach - -We chose a **namespaced extension system** (Approach 2) over alternatives because: - -- **Works within current Claude Code spec** - No custom fields or spec changes needed. The marketplace already supports multiple plugins (coding-tutor proves this). -- **Convention over configuration** - Naming patterns (`compound-engineering-*`) and tags create clear relationships without formal dependency mechanisms. -- **Seamless experience** - The primary success metric. Extensions should feel like a natural part of the core with no conflicts or configuration headaches. -- **Upgradable** - Can graduate to a formal manifest system (Approach 3) later if the ecosystem grows large enough to need it. 
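To make the naming-and-tags convention concrete, a marketplace listing for one extension might look something like the sketch below. The field set is abbreviated and illustrative — it is not the full marketplace.json schema, and the description text is hypothetical:

```json
{
  "name": "compound-engineering-rails",
  "description": "Rails framework pack for compound-engineering: reviewers, generators, conventions",
  "tags": ["compound-engineering-extension", "framework-pack"]
}
```

The `compound-engineering-` prefix and the shared `compound-engineering-extension` tag are what let the marketplace group extensions next to the core without any formal dependency mechanism.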
- -### Rejected Alternatives - -- **Flat plugin ecosystem** - Too loose. No way to signal which plugins complement the core. Users would have to guess. -- **Pack manifest system** - Adds `extends` field not in the Claude Code spec. Premature complexity for current ecosystem size. - -## Key Decisions - -### 1. Naming Convention - -All extensions use the pattern: `compound-engineering-` - -Examples: -- `compound-engineering-rails` (Rails framework pack) -- `compound-engineering-security` (security-focused agents) -- `compound-engineering-every-conventions` (team convention config) - -This makes extensions immediately identifiable and groups them naturally in marketplace listings. - -### 2. Convention Configs via CLAUDE.md - -Convention configs are plugins that ship curated CLAUDE.md instructions rather than (or in addition to) agents. Claude already reads CLAUDE.md files as context, so this is the most natural mechanism. No new infrastructure needed. - -A convention config plugin structure: -``` -plugins/compound-engineering-my-team/ -├── .claude-plugin/plugin.json -├── CLAUDE.md # Team conventions that agents read -└── README.md -``` - -### 3. Discovery via Tags and Browse Command - -- Extensions use a shared tag (e.g., `compound-engineering-extension`) in marketplace.json -- A `/extensions` command (or similar) lets users browse available extensions with descriptions -- The marketplace listing groups extensions visually - -### 4. Distribution Through Same Marketplace - -All extensions live in the same marketplace repo (`every-marketplace`). This provides: -- One-stop browsing -- Consistent quality (maintainers can review contributions) -- Simple installation (`claude /plugin install compound-engineering-rails`) - -### 5. No Formal Dependencies - -Claude Code doesn't support plugin dependencies. Each extension must function independently - it can complement the core but shouldn't break without it. This is a platform constraint we accept. - -### 6. 
Component Namespacing - -To avoid name collisions between extensions: -- Agents: prefix or suffix with pack name (e.g., `rails-model-reviewer` not just `reviewer`) -- Skills: use descriptive names (e.g., `rails-generators` not just `generators`) -- Commands: use pack prefix (e.g., `rails:scaffold` not just `scaffold`) - -## Extension Categories - -### Framework Packs -A framework pack bundles domain-specific tooling for a technology stack. - -Example: `compound-engineering-rails` -``` -plugins/compound-engineering-rails/ -├── .claude-plugin/plugin.json -├── CLAUDE.md # Rails conventions and preferences -├── agents/ -│ ├── rails-model-reviewer.md -│ ├── rails-migration-checker.md -│ └── rails-performance-agent.md -├── skills/ -│ └── rails-generators/SKILL.md -└── README.md -``` - -### Custom Agents/Skills -Individual agents or skills for specific needs. - -Example: `compound-engineering-security` -``` -plugins/compound-engineering-security/ -├── .claude-plugin/plugin.json -├── agents/ -│ ├── owasp-scanner.md -│ └── dependency-auditor.md -├── skills/ -│ └── threat-model/SKILL.md -└── README.md -``` - -### Convention Configs -Team or personal preferences that shape agent behavior. - -Example: `compound-engineering-every-conventions` -``` -plugins/compound-engineering-every-conventions/ -├── .claude-plugin/plugin.json -├── CLAUDE.md # Every's coding standards, style preferences, etc. -└── README.md -``` - -## Open Questions - -1. **Quality control** - Should there be a review process for community-contributed extensions, or is it open contribution? -2. **Versioning alignment** - Should extensions declare which version of the core they're designed for, even informally? -3. **Starter template** - Should we provide a `/create-extension` command or template repo to scaffold new extensions? -4. **Testing** - How do we verify extensions don't conflict with each other or the core? -5. 
**Documentation** - Should the docs site auto-generate pages for extensions, or is README.md sufficient? - -## Next Steps - -- Plan the implementation: directory structure, marketplace.json changes, example extensions -- Build 1-2 example extensions to validate the pattern -- Create documentation for extension authors -- Consider a `/create-extension` scaffolding command diff --git a/docs/plans/2026-02-13-feat-documentation-workflow-plan.md b/docs/plans/2026-02-13-feat-documentation-workflow-plan.md deleted file mode 100644 index 93785d5d..00000000 --- a/docs/plans/2026-02-13-feat-documentation-workflow-plan.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: "feat: Add documentation workflow command" -type: feat -date: 2026-02-13 ---- - -# feat: Add documentation workflow command - -## Overview - -Add a `/workflows:document` command that updates project documentation after a feature is implemented and reviewed. This fills the gap between `/workflows:review` and `/workflows:compound` in the workflow chain: - -``` -Research → Brainstorm → Plan → Work → Review → Document → Compound -``` - -**Brainstorm:** [docs/brainstorms/2026-02-13-documentation-workflow-brainstorm.md](../brainstorms/2026-02-13-documentation-workflow-brainstorm.md) - -## Problem Statement - -The workflow chain has no step for updating user-facing documentation. Features ship without corresponding doc updates because it's left as a manual afterthought. A structured, propose-then-confirm workflow would make documentation a natural part of the development cycle. - -## Proposed Solution - -A single workflow command file at `plugins/compound-engineering/commands/workflows/document.md` that follows the established phase-based pattern. Three phases: Discovery → Proposal → Execution. - -## Implementation - -### File to create - -#### `plugins/compound-engineering/commands/workflows/document.md` - -The workflow command. 
Structure: - -```yaml ---- -name: workflows:document -description: Update project documentation after implementation and review -argument-hint: "[optional: path to brainstorm or plan doc, or PR number]" ---- -``` - -**Phase 1: Discovery** - -Gather context about what was built and what docs exist: - -1. **Determine diff base** — Check for PR (via `gh pr view`), fall back to `main`/`master` -2. **Git diff analysis** — Run `git diff ...HEAD --stat` then read changed files to understand what was built. Filter out test files, generated code, and lock files to keep scope manageable -3. **Chain doc lookup** — Search `docs/brainstorms/` and `docs/plans/` for recent documents matching the current feature (by date, branch name, or topic). If `$ARGUMENTS` provides a path, use that directly. If nothing found, proceed with diff-only mode -4. **Doc inventory** — Use Glob to find existing documentation files: `README*`, `CHANGELOG*`, `docs/**/*.md`, `API.md`, `GUIDE.md`, any `**/README.md` in subdirectories. Note their last-modified dates and sizes -5. **Gap analysis** — Compare what was built (from diff + chain docs) against what's documented. Identify: new features not mentioned in README, missing CHANGELOG entry, outdated API docs, new public APIs without docstrings - -**Phase 2: Proposal** - -Present a structured update plan to the user: - -- For each doc file that needs changes, show: file path, what kind of update (new section, updated section, new entry), and a 1-line summary of the change -- Flag any new docs that should be created (e.g., "No CHANGELOG exists — create one?") -- Flag docs that might need deletion or archival (if features were removed) -- Use `AskUserQuestion` with options: - 1. **Approve all** — Execute all proposed updates - 2. **Select items** — Choose which updates to apply (use multiSelect) - 3. **Skip documentation** — Exit without changes - 4. 
**Refine proposal** — Ask for adjustments - -**Guardrails to prevent overwriting user content:** -- Only modify sections relevant to the changed code — never rewrite entire files -- When updating an existing section, show the diff preview before applying -- Detect and preserve custom sections (anything not generated by this workflow) -- For README: append new feature sections, don't restructure existing content - -**Phase 3: Execution** - -Make the approved changes: - -1. For each approved update, read the target file, make the change, write it back -2. Match existing doc style (detect heading levels, tone, formatting from surrounding content) -3. For CHANGELOG: use Keep a Changelog format if one exists, otherwise detect existing format -4. After all updates, show a summary: files changed, sections added/updated -5. Offer handoff via `AskUserQuestion`: - 1. **Continue to `/workflows:compound`** — Document solved problems for team knowledge - 2. **Review changes** — Load `document-review` skill for quality pass - 3. **Done** — Documentation complete - -### Edge cases to handle - -- **No existing docs:** Offer to scaffold a minimal doc structure (README + CHANGELOG) rather than silently failing -- **No git diff:** If on main with no changes, check `$ARGUMENTS` for a PR number. If nothing, tell the user and exit -- **Doc-only changes in diff:** Detect and exit early — "Changes are documentation-only, nothing additional to document" -- **Massive diffs (50+ files):** Summarize by directory/component rather than file-by-file. 
Focus on public API changes -- **No chain docs found:** Proceed with diff-only mode, mention that brainstorm/plan context would improve results - -### Files to update (plugin metadata) - -After creating the command, update these files per the plugin's versioning requirements: - -- [ ] `plugins/compound-engineering/.claude-plugin/plugin.json` — bump minor version, update command count in description -- [ ] `.claude-plugin/marketplace.json` — update description with new command count -- [ ] `plugins/compound-engineering/README.md` — add `/workflows:document` to commands list -- [ ] `plugins/compound-engineering/CHANGELOG.md` — add entry under new version - -### Optional: Update review workflow handoff - -Update `plugins/compound-engineering/commands/workflows/review.md` to offer `/workflows:document` as a next step after review completes. Add an option in the final handoff section. - -## Acceptance Criteria - -- [ ] `/workflows:document` command exists and loads correctly -- [ ] Phase 1 discovers changed files via git diff and finds chain docs when available -- [ ] Phase 2 presents a clear proposal listing each doc update needed -- [ ] User can approve all, select specific items, or skip -- [ ] Phase 3 makes only approved changes without overwriting unrelated content -- [ ] Handoff to `/workflows:compound` works -- [ ] Plugin metadata (version, counts, changelog) updated correctly -- [ ] Works in diff-only mode when no chain docs exist - -## References - -- Workflow pattern: `plugins/compound-engineering/commands/workflows/work.md` -- Handoff pattern: `plugins/compound-engineering/commands/workflows/compound.md` -- Doc style skills: `plugins/compound-engineering/skills/document-review/`, `plugins/compound-engineering/skills/every-style-editor/` -- Plugin versioning: `docs/solutions/plugin-versioning-requirements.md` diff --git a/docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md b/docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md deleted file 
mode 100644 index 2d435008..00000000 --- a/docs/plans/2026-02-13-feat-namespaced-extension-system-plan.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -title: "feat: Namespaced Extension System" -type: feat -date: 2026-02-13 -brainstorm: docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md -related: plans/grow-your-own-garden-plugin-architecture.md ---- - -# Namespaced Extension System - -## Overview - -Create an extension ecosystem where users can install optional plugins that complement the core compound-engineering plugin. Extensions follow the naming convention `compound-engineering-<domain>`, live in the same marketplace, and enable three categories of customization: custom agents/skills, framework packs, and convention configs (delivered as CLAUDE.md snippets). - -## Problem Statement / Motivation - -The compound-engineering plugin is monolithic — 30 agents, 25 commands, 21 skills. Users working in Rails don't need Python reviewers, and vice versa. Teams want their own conventions baked in. The "Grow Your Own Garden" plan identified this problem but proposed a complex growth-loop mechanism. This plan takes a simpler approach: let people install optional extensions from the same marketplace. - -The infrastructure already works — `coding-tutor` proves multiple plugins can coexist in the marketplace. We just need conventions, templates, and a discovery mechanism. - -## Proposed Solution - -### What Claude Code Already Handles - -These are platform-level capabilities we don't need to build: - -- **Installation**: `claude /plugin install <plugin-name>` works for any plugin in a marketplace -- **Component discovery**: Agents, commands, skills auto-discovered from standard directories -- **CLAUDE.md loading**: All CLAUDE.md files in installed plugins are read as context -- **Plugin isolation**: Each plugin is cached independently, no cross-plugin file access -- **Sandboxing**: User permission model applies to all plugin components equally - -### What We Build - -1.
**Extension template** — Scaffold structure for creating extensions -2. **Example extensions** — Two reference implementations (framework pack + convention config) -3. **`/extensions` command** — Browse available extensions from the marketplace -4. **Marketplace entries** — Add extensions to marketplace.json with shared tags -5. **Author guide** — Documentation for creating and submitting extensions -6. **Naming validation** — Script to check naming conventions and detect collisions - -## Technical Considerations - -### Naming Conventions - -| Component | Pattern | Example | -|-----------|---------|---------| -| Plugin name | `compound-engineering-<domain>` | `compound-engineering-rails` | -| Agent names | `<domain>-<agent-name>` | `rails-model-reviewer` | -| Skill names | `<domain>-<skill-name>` | `rails-generators` | -| Command names | `<domain>:<command-name>` | `rails:scaffold` | -| Marketplace tag | `compound-engineering-extension` | — | - -### Extension Types - -**Framework Pack** — Bundled agents + skills + commands + CLAUDE.md for a stack: -``` -plugins/compound-engineering-rails/ -├── .claude-plugin/plugin.json -├── CLAUDE.md # Rails conventions (prefer RSpec, follow Rails Way, etc.) -├── README.md -├── agents/ -│ ├── rails-model-reviewer.md -│ └── rails-migration-checker.md -├── commands/ -│ └── rails-console.md -└── skills/ - └── rails-generators/ - └── SKILL.md -``` - -**Convention Config** — Just CLAUDE.md with team/personal rules: -``` -plugins/compound-engineering-every-conventions/ -├── .claude-plugin/plugin.json -├── CLAUDE.md # Team coding standards, PR conventions, etc. -└── README.md -``` - -**Custom Agents/Skills** — Individual specialized components: -``` -plugins/compound-engineering-security/ -├── .claude-plugin/plugin.json -├── README.md -└── agents/ - ├── owasp-scanner.md - └── dependency-auditor.md -``` - -### CLAUDE.md Convention Configs - -Claude Code loads all CLAUDE.md files from installed plugins as context. Priority order (per Claude Code docs): project CLAUDE.md > user CLAUDE.md > plugin CLAUDE.md.
This means: - -- Extension conventions apply globally when installed -- Project-level CLAUDE.md can always override extension conventions -- Multiple extension CLAUDE.md files all load (no conflict resolution needed — Claude synthesizes instructions naturally) - -**Guidelines for convention config authors:** -- Keep CLAUDE.md under 2KB (respect token budget) -- Use clear section headers so instructions are scannable -- Prefix rules with context: "When working on Rails code..." rather than absolute rules -- Document which core agents the conventions influence - -### Collision Avoidance - -- Naming convention is the primary defense (convention over enforcement) -- Validation script checks for collisions against core plugin components at submission time -- Component names must not match any existing agent/command/skill in core or other extensions -- If collision detected, author must rename before merging - -### Plugin.json for Extensions (Minimal) - -```json -{ - "name": "compound-engineering-rails", - "version": "1.0.0", - "description": "Rails framework pack for compound-engineering. 
Adds Rails-specific code review agents, generators, and conventions.", - "author": { - "name": "Author Name" - }, - "keywords": ["compound-engineering-extension", "rails", "ruby", "framework-pack"] -} -``` - -Required fields: `name`, `version`, `description`, `author` -Required keyword: `compound-engineering-extension` (for discovery) - -## Acceptance Criteria - -- [ ] Extension template exists with scaffold script or documented structure -- [ ] At least one example extension (`compound-engineering-rails`) is functional -- [ ] At least one convention config extension exists as a reference -- [ ] `/extensions` command lists available extensions with descriptions and install commands -- [ ] Extensions install alongside core plugin without conflicts -- [ ] marketplace.json includes extension entries with `compound-engineering-extension` tag -- [ ] Author guide documents naming conventions, structure, and submission process -- [ ] Validation script detects naming collisions against core plugin components - -## Success Metrics - -- Extensions install and work alongside the core plugin with zero configuration -- A new extension can be created from template in under 10 minutes -- `/extensions` command provides enough info to decide whether to install - -## Dependencies & Risks - -**Dependencies:** -- Claude Code plugin system continues to support multiple plugins per marketplace (currently works) -- CLAUDE.md files from plugins continue to be loaded as context (currently works) - -**Risks:** -- **Token budget**: Multiple extension CLAUDE.md files could consume too much context. Mitigation: 2KB guideline for convention configs. -- **Name collisions**: Convention-based naming can't prevent all collisions. Mitigation: Validation script checks at submission time. -- **Core plugin changes**: Core agent renames could collide with extensions. Mitigation: Extensions use domain-prefixed names that won't overlap with core's generic names. 
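The collision and naming mitigations above can be sketched in code. This is a minimal, illustrative Python version of the checks the validation script would run; the function name, directory layout, and error messages are assumptions, not the final `scripts/validate-extension.sh`:

```python
import json
from pathlib import Path

REQUIRED_FIELDS = ("name", "version", "description", "author")
EXTENSION_KEYWORD = "compound-engineering-extension"

def validate_extension(ext_dir: Path, core_dir: Path) -> list[str]:
    """Return validation failures; an empty list means the extension passes."""
    manifest_path = ext_dir / ".claude-plugin" / "plugin.json"
    if not manifest_path.is_file():
        return [f"missing {manifest_path}"]

    errors = []
    if not (ext_dir / "README.md").is_file():
        errors.append("missing README.md")

    # plugin.json must be valid JSON with the required fields
    try:
        manifest = json.loads(manifest_path.read_text())
    except json.JSONDecodeError as exc:
        return errors + [f"plugin.json is not valid JSON: {exc}"]
    for field in REQUIRED_FIELDS:
        if field not in manifest:
            errors.append(f"plugin.json missing '{field}'")

    # Plugin name must follow the compound-engineering- prefix convention
    name = manifest.get("name", "")
    if not name.startswith("compound-engineering-"):
        errors.append(f"name '{name}' lacks the compound-engineering- prefix")

    # Discovery keyword must be present so /extensions can find it
    if EXTENSION_KEYWORD not in manifest.get("keywords", []):
        errors.append(f"keywords missing '{EXTENSION_KEYWORD}'")

    # Component filenames must not collide with core plugin components
    for kind in ("agents", "commands", "skills"):
        ext_components = ext_dir / kind
        if not ext_components.is_dir():
            continue
        for component in sorted(ext_components.iterdir()):
            if (core_dir / kind / component.name).exists():
                errors.append(f"'{component.name}' collides with a core {kind} component")

    return errors
```

Run at submission time, checks like these catch prefix and collision problems before merge rather than relying on convention alone.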
- -## Implementation - -### Phase 1: Template and Example Extension - -Create the extension template structure and build `compound-engineering-rails` as the reference implementation. - -**Files to create:** - -1. `plugins/compound-engineering-rails/.claude-plugin/plugin.json` — Minimal manifest -2. `plugins/compound-engineering-rails/CLAUDE.md` — Rails conventions -3. `plugins/compound-engineering-rails/README.md` — Usage documentation -4. `plugins/compound-engineering-rails/agents/rails-model-reviewer.md` — Example agent -5. `plugins/compound-engineering-rails/agents/rails-migration-checker.md` — Example agent -6. `plugins/compound-engineering-rails/skills/rails-generators/SKILL.md` — Example skill - -### Phase 2: Convention Config Example - -Create `compound-engineering-every-conventions` as a reference convention config. - -**Files to create:** - -1. `plugins/compound-engineering-every-conventions/.claude-plugin/plugin.json` -2. `plugins/compound-engineering-every-conventions/CLAUDE.md` — Every's coding standards -3. `plugins/compound-engineering-every-conventions/README.md` - -### Phase 3: Discovery Command - -Build the `/extensions` command that reads marketplace.json and displays available extensions. - -**Files to create:** - -1. `plugins/compound-engineering/commands/extensions.md` — Browse command - -**Command behavior:** -- Reads marketplace.json -- Filters plugins with `compound-engineering-extension` keyword/tag -- Displays each extension: name, description, component counts, install command -- Groups by type (framework pack, convention config, custom agents) if possible - -### Phase 4: Marketplace and Documentation - -Update marketplace.json with new extensions and create the author guide. - -**Files to update:** - -1. `.claude-plugin/marketplace.json` — Add extension entries -2. `plugins/compound-engineering/.claude-plugin/plugin.json` — Update description -3. 
`plugins/compound-engineering/README.md` — Add "Extensions" section - -**Files to create:** - -1. `docs/guides/creating-extensions.md` — Author guide with naming conventions, structure, submission process - -### Phase 5: Validation - -Create a validation script that checks extension compliance. - -**Files to create:** - -1. `scripts/validate-extension.sh` — Checks naming, structure, collisions - -**Validation checks:** -- Plugin name starts with `compound-engineering-` -- `compound-engineering-extension` keyword present in plugin.json -- No component names collide with core plugin components -- Required files exist (.claude-plugin/plugin.json, README.md) -- plugin.json is valid JSON with required fields - -## References & Research - -### Internal References - -- Brainstorm: `docs/brainstorms/2026-02-13-namespaced-extension-system-brainstorm.md` -- Related plan: `plans/grow-your-own-garden-plugin-architecture.md` -- Example plugin: `plugins/coding-tutor/` (minimal plugin structure) -- Core plugin: `plugins/compound-engineering/` (full plugin structure) -- Marketplace: `.claude-plugin/marketplace.json` - -### External References - -- [Claude Code Plugin Documentation](https://docs.claude.com/en/docs/claude-code/plugins) -- [Plugin Marketplace Documentation](https://docs.claude.com/en/docs/claude-code/plugin-marketplaces) -- [Plugin Reference](https://docs.claude.com/en/docs/claude-code/plugins-reference) diff --git a/docs/research/interviews/2026-01-13-participant-001.md b/docs/research/interviews/2026-01-13-participant-001.md deleted file mode 100644 index 85d84b9f..00000000 --- a/docs/research/interviews/2026-01-13-participant-001.md +++ /dev/null @@ -1,181 +0,0 @@ ---- -participant_id: user-001 -role: "Strategic Account Manager" -company_type: "Healthcare SaaS" -date: 2026-01-13 -research_plan: ad-hoc -source_transcript: "1.md" -focus: "Strategic accounts dashboard demo and upsell opportunity planning" -duration_minutes: 20 -tags: [upsell-strategy, 
sales-enablement, account-management, dashboard, salesforce] ---- - -# Interview Snapshot: user-001 - -## Summary - -This was a collaborative working session where the interviewer demoed a self-built strategic accounts dashboard to a colleague with deep Salesforce and sales pipeline expertise. The dashboard pulls Jira and Salesforce data to surface upsell opportunities and account health signals. The participant validated the approach but challenged the routing strategy, arguing that qualified upsell leads should go to experienced closers (Tony and Kathy) rather than overwhelmed Strategic Account Managers (SAMs) who lack product knowledge. They also identified that upsells should be created as Salesforce opportunities rather than Jira tickets, and offered to help configure Salesforce product data as a first step. The session revealed a significant capability gap in the SAM role and a data-rich but under-leveraged Stars Compare benchmarking tool. - -## Experience Map - -``` -Trigger → Context → Actions → Obstacles → Workarounds → Outcome -``` - -| Step | What Happened | Feeling | Tools/Process | -|------|--------------|---------|---------------| -| Trigger | Interviewer built a strategic accounts dashboard as a side project | Motivated, proactive | Claude, N8N, Jira API, Salesforce API | -| Context | SAMs are overwhelmed, untrained on products, and struggling with account management | Frustrated (interviewer), empathetic (participant) | Jira, Salesforce, Confluence | -| Action 1 | Demoed dashboard showing account health, momentum scoring, and upsell signals | Excited, hopeful | Strategic Accounts Dashboard (custom) | -| Action 2 | Showed Stars Compare lead generation list built with Clay | Impressed (participant) | Clay, Stars Compare tool | -| Obstacle | SAMs don't know products well enough to sell upsells even with tools | Skeptical, concerned | - | -| Workaround | Participant suggested routing leads to experienced closers (Tony/Kathy) instead of SAMs | Aligned, 
strategic | Salesforce opportunity routing | -| Outcome | Agreed on next steps: product configurator in Salesforce, present at February onsite | Energized, collaborative | Salesforce, onsite meeting | - -## Insights - -### Pain Points - -> "I don't know what the hell. I have no idea what a population health director even does, and I'm sitting in this meeting. And I'm having all these people talk at me about caps Files. I don't. I'm Googling caps." -- **Type:** pain-point -- **Topics:** product-knowledge, onboarding -- **Context:** Interviewer describing SAM experience with unfamiliar healthcare products (secondhand account) - -> "If we do spin this up and have this running, wouldn't they still run to that same issue? Are you the one going in there and, like, greasing the wheels?" -- **Type:** pain-point -- **Topics:** sales-enablement, account-management -- **Context:** Participant questioning whether a dashboard alone solves the SAM capability gap - -> "We also don't have the best, like, easily accessible analytics in general. And then it's like, hey, I'm looking at somebody, like, I can't tell if they have CAPS or HAAS." -- **Type:** pain-point -- **Topics:** analytics-access, product-visibility -- **Context:** Interviewer describing difficulty identifying which products accounts have - -### Needs - -> "Can they do that without there being an opportunity first? Like, don't you want to, instead of creating a JIRA ticket for that? If it's potentially an upsell, shouldn't it be an opportunity instead?" -- **Type:** need -- **Topics:** salesforce, upsell-strategy -- **Context:** Participant identifying that upsells need to follow proper Salesforce opportunity workflow - -> "You're going to have to eventually. It's going to have to be an opportunity anyways if it materializes into revenue for us to capture, so we might as well just make it an op." 
-- **Type:** need -- **Topics:** salesforce, pipeline-management -- **Context:** Participant explaining why opportunities should be created from the start rather than Jira tickets - -> "One is just mapping this out. On paper or just in an whatever you prefer. And then the second is like how we want to create that UI for them." -- **Type:** need -- **Topics:** planning, dashboard -- **Context:** Participant identifying that the tool needs proper planning before deployment - -### Behaviors - -> "Why don't we just hand the best leads to the best closers?" -- **Type:** behavior -- **Topics:** sales-routing, upsell-strategy -- **Context:** Participant proposing a strategic shift from SAM-driven to specialist-driven upsells - -> "If these guys are already our customers, the whole sales motion is a lot easier because they're not a cold lead, in a sense. You have Kathy and Tony, who are ideally, they actually know the product way more than the SAMs do, who are like, overwhelmed and catching up." -- **Type:** behavior -- **Topics:** upsell-strategy, sales-enablement -- **Context:** Participant contrasting experienced closers with overwhelmed SAMs for existing customer expansion - -> "We can put that in Salesforce. Product configurators. Those two are the key things." -- **Type:** behavior -- **Topics:** salesforce, product-configuration -- **Context:** Participant identifying where product data should live in Salesforce - -> "And it gets in qualification so it doesn't hurt their win rates or whatever." -- **Type:** behavior -- **Topics:** pipeline-management, salesforce -- **Context:** Participant explaining the qualification stage protects sales metrics - -### Workarounds - -> "This is a little thing that I built. Strategic accounts Dashboard." 
-- **Type:** workaround -- **Topics:** dashboard, side-project -- **Context:** Interviewer built a custom dashboard as a side project because no adequate tool existed - -> "I basically copy numbers from the dashboard into my spreadsheet because the export never works right." [Observed behavior: interviewer built custom integrations pulling Jira and Salesforce data via N8N because native tools didn't provide the combined view needed] -- **Type:** workaround -- **Topics:** data-integration, n8n -- **Context:** N8N workflow automation used to combine data from multiple sources that don't natively connect - -> "I got a hold of the Stars Compare tool data. And then I was able to calculate their intra state performance at a national level and their performance at a state level." -- **Type:** workaround -- **Topics:** lead-generation, stars-compare -- **Context:** Interviewer manually mined Stars Compare benchmarking data to generate cold outreach leads using Clay - -### Desires - -> "Can you connect this to an LLM that just gives them that recommendation?" -- **Type:** desire -- **Topics:** ai-automation, sales-enablement -- **Context:** Participant immediately seeing the potential for AI-powered upsell recommendations - -> "I mean, this is like a top tier list to use." -- **Type:** desire -- **Topics:** lead-generation, stars-compare -- **Context:** Participant expressing enthusiasm for the data-driven prospect list - -### Motivations - -> "I'm kind of done. I'm exhausted from the account manager saying, like, they have no, like, because they're like, hey, we're doing our best." -- **Type:** motivation -- **Topics:** sales-enablement, account-management -- **Context:** Interviewer's frustration driving them to build tools rather than continue complaining about SAM performance - -> "I would love to, rather than start, like, complaining about, like, hey, the SAMs aren't doing right, be like, hey, I built you a tool. I'm going to give you the exact playbooks to run." 
-- **Type:** motivation -- **Topics:** sales-enablement, playbook -- **Context:** Interviewer motivated by empowering SAMs with concrete playbooks rather than criticism - -> "I think it's a good pipeline generation move." -- **Type:** motivation -- **Topics:** pipeline-management, upsell-strategy -- **Context:** Participant validating the strategic value of automated opportunity generation - -## Opportunities - -Opportunities are unmet needs -- NOT solutions. - -| # | Opportunity | Evidence Strength | Quote | -|---|-----------|------------------|-------| -| 1 | Users need a way to see which products each account has at a glance | Strong | "I can't tell if they have CAPS or HAAS. Like, there's a lot of problems in general" | -| 2 | Users need a way to route qualified upsell leads to the right seller based on expertise | Strong | "Why don't we just hand the best leads to the best closers?" | -| 3 | Users need a way to automatically identify accounts that are good candidates for specific product upsells | Strong | "I want it to actually run. And then I want it to create a... expansion ticket that gets assigned" | -| 4 | Users need a way to arm sellers with product-specific talking points and competitive benchmarks for upsell conversations | Strong | "This is the scores where they're low. This is their benchmarks. These are their competitors who are doing better than them. Like talk about this." | -| 5 | Users need a way to mine benchmarking data (Stars Compare) for outbound lead generation | Medium | "Have we actually mined the data and say, like, hey, here's the people who should be taking it?" 
| -| 6 | Users need a way to onboard SAMs on complex healthcare products without repeated training sessions | Medium | "I train all them, like, literally six times, and then like, no one there, and still every time there's a couple" | -| 7 | Users need a way to track account health momentum alongside expansion opportunities in one view | Medium | "How many monthly active users do they have? Like, are they actually using the product? Are they getting value out of it?" | - -**Evidence strength:** -- **Strong**: Participant explicitly described this need with emotional weight -- **Medium**: Participant mentioned this in passing or as part of a larger story -- **Weak**: Inferred from behavior or workaround, not directly stated - -## Hypothesis Tracking - -| # | Hypothesis | Status | Evidence | -|---|-----------|--------|----------| -| 1 | SAMs lack the product knowledge to effectively sell upsells | NEW | "They're just, like, catching up... I don't know what the hell. I have no idea what a population health director even does" | -| 2 | Automated upsell identification would increase pipeline generation | NEW | "I think it's a good pipeline generation move" | -| 3 | Experienced closers (not SAMs) should handle qualified upsell leads | NEW | "Why don't we just hand the best leads to the best closers?" | -| 4 | Stars Compare benchmarking data is an underleveraged asset for lead generation | NEW | "No, we haven't. They should. I've requested something like that before." 
| -| 5 | Upsells should be tracked as Salesforce opportunities from the start, not Jira tickets | NEW | "It's going to have to be an opportunity anyways if it materializes into revenue for us to capture" | - -## Behavioral Observations - -- **Tools mentioned:** Salesforce (opportunities, product configurators, qualification stages), Jira, N8N (workflow automation), Clay (lead enrichment), Stars Compare (benchmarking tool), Claude (AI for building dashboard), Confluence (playbook documentation), Slack -- **Frequency indicators:** SAM training happened "literally six times"; dashboard and analytics checked regularly; Stars Compare data collected periodically -- **Emotional signals:** Interviewer frustrated with SAM capability gap ("I'm exhausted"); participant impressed by dashboard ("this is dope", "I think they should make you VP of marketing"); both energized by the upsell automation vision; participant skeptical about SAM-driven approach ("wouldn't they still run to that same issue?") -- **Workaround patterns:** Built entire custom dashboard as a "side project" because no adequate tool existed; used Clay to enrich Stars Compare data into actionable lead lists; N8N used to bridge data silos between Jira and Salesforce; manual product identification because no consolidated view exists - -## Human Review Checklist - -- [ ] All quotes verified against source transcript -- [ ] Experience map accurately reflects story arc -- [ ] Opportunities reflect participant needs, not assumed solutions -- [ ] Tags accurate and consistent with existing taxonomy -- [ ] No insights fabricated or composited from multiple participants diff --git a/docs/research/personas/the-sales-operations-strategist.md b/docs/research/personas/the-sales-operations-strategist.md deleted file mode 100644 index 30aec86e..00000000 --- a/docs/research/personas/the-sales-operations-strategist.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -name: "The Sales Operations Strategist" -role: "Strategic Account Manager" 
-company_type: "Healthcare SaaS" -last_updated: 2026-02-13 -interview_count: 1 -confidence: low -source_interviews: [user-001] -version: 1 ---- - -# The Sales Operations Strategist - -## Overview - -The Sales Operations Strategist is a process-oriented sales professional who thinks in systems rather than individual deals. They have deep expertise in CRM configuration, pipeline stages, and sales workflows -- and they instinctively evaluate new tools through the lens of "does this fit into the existing sales process correctly?" Rather than getting excited about flashy dashboards, they ask the hard questions: who actually runs the plays, does the data flow into the right systems, and will this scale beyond a side project. - -They occupy a unique position between sales execution and operations. They know Salesforce intimately -- product configurators, qualification stages, opportunity routing -- and they use this knowledge to challenge naive assumptions about who should handle what. When presented with a tool that automates upsell identification, their first instinct is not "cool, let's deploy it" but "who are we routing these leads to, and can those people actually close them?" They advocate for matching lead quality to seller capability, preferring experienced closers over generalists who are still learning the product. - -This persona is pragmatic and collaborative. They validate good ideas quickly ("this is dope", "I think it's a good pipeline generation move") but immediately pivot to execution concerns: Salesforce configuration, opportunity creation workflows, and realistic seller capability. They are a critical ally for product and analytics teams building internal tools -- they won't block innovation, but they will insist it plugs into real sales processes correctly. - -## Goals - -1. Route qualified upsell leads to sellers who can actually close them (1/1 participants) -2. 
Ensure all revenue-generating activities are tracked as Salesforce opportunities from the start (1/1 participants)
-3. Leverage existing customer relationships for easier expansion sales motions (1/1 participants)
-4. Get product data properly configured in Salesforce (product configurators, licenses, active users) (1/1 participants)
-
-## Frustrations
-
-1. SAMs lack product knowledge and are overwhelmed, yet they're expected to handle upsells (1/1 participants)
-2. Tools alone don't solve capability gaps -- people still need to know what they're selling (1/1 participants)
-3. Upsell workflows that bypass Salesforce opportunity tracking create data and attribution problems (1/1 participants)
-4. No one has systematically mined benchmarking data (Stars Compare) for lead generation despite repeated requests (1/1 participants)
-
-## Behaviors
-
-| Behavior | Frequency | Evidence |
-|----------|-----------|----------|
-| Evaluates new tools against existing Salesforce workflows before endorsing | Per interaction | (1/1 participants) |
-| Challenges routing assumptions -- asks "who actually runs this play?" | Per interaction | (1/1 participants) |
-| Advocates for qualification-stage opportunities to protect win rates | Per interaction | (1/1 participants) |
-| Quickly identifies CRM configuration steps needed to operationalize ideas | Per interaction | (1/1 participants) |
-| Validates good ideas fast, then pivots to execution concerns | Per interaction | (1/1 participants) |
-
-## Key Quotes
-
-> "Why don't we just hand the best leads to the best closers?"
-> -- user-001, proposing specialist-driven upsells over SAM-driven approach
-
-> "If we do spin this up and have this running, wouldn't they still run to that same issue? Are you the one going in there and, like, greasing the wheels?"
-> -- user-001, questioning whether tools alone solve the SAM capability gap
-
-> "You're going to have to eventually. It's going to have to be an opportunity anyways if it materializes into revenue for us to capture, so we might as well just make it an op."
-> -- user-001, explaining why upsells must be Salesforce opportunities from the start
-
-> "If these guys are already our customers, the whole sales motion is a lot easier because they're not a cold lead, in a sense."
-> -- user-001, distinguishing expansion sales from new logo acquisition
-
-> "I think it's a good pipeline generation move."
-> -- user-001, validating automated upsell opportunity creation
-
-## Opportunities
-
-| # | Opportunity | Evidence Strength | Participants | Key Quote |
-|---|-----------|------------------|-------------|-----------|
-| 1 | Users need a way to route qualified upsell leads to the right seller based on expertise | Weak | user-001 | "Why don't we just hand the best leads to the best closers?" |
-| 2 | Users need a way to see which products each account has at a glance | Weak | user-001 | "I can't tell if they have CAPS or HAAS" |
-| 3 | Users need a way to automatically identify accounts that are good candidates for specific product upsells | Weak | user-001 | "I want it to actually run. And then I want it to create a... expansion ticket" |
-| 4 | Users need a way to arm sellers with product-specific talking points and competitive benchmarks | Weak | user-001 | "This is the scores where they're low. These are their competitors who are doing better than them." |
-| 5 | Users need a way to mine benchmarking data for outbound lead generation | Weak | user-001 | "Have we actually mined the data and say, like, here's the people who should be taking it?" |
-| 6 | Users need a way to onboard SAMs on complex healthcare products without repeated training | Weak | user-001 | "I train all them, like, literally six times" |
-| 7 | Users need a way to track account health momentum alongside expansion opportunities in one view | Weak | user-001 | "How many monthly active users do they have? Are they actually using the product?" |
-
-## Divergences
-
-_No divergences identified yet._
-
-## Evidence
-
-| Participant | Research Plan | Date | Focus |
-|------------|--------------|------|-------|
-| user-001 | ad-hoc | 2026-01-13 | Strategic accounts dashboard demo and upsell opportunity planning |
-
-## Human Review Checklist
-
-- [ ] Goals and frustrations grounded in interview evidence
-- [ ] Behavior counts accurate (absence not counted as negative)
-- [ ] Quotes are exact (verified against source interviews)
-- [ ] Opportunities framed as needs, not solutions
-- [ ] Divergences section reflects actual contradictions
-- [ ] Confidence level matches interview count threshold
diff --git a/docs/research/transcripts/1.md b/docs/research/transcripts/1.md
deleted file mode 100644
index f23fcb02..00000000
--- a/docs/research/transcripts/1.md
+++ /dev/null
@@ -1,252 +0,0 @@
-Meeting Title: Strategic accounts dashboard and upsell opportunity planning with sales analytics team
-Date: Jan 13
-
-Transcript:
-
-Them: Hey, man, what's going on?
-Me: What's up, dude? How are you?
-Them: Nothing much. Happy Tuesday.
-Me: Not true. Not true. Especially.
-Them: Yeah, you're right.
-Me: I see you popping around on this open community office hours all the time.
-Them: Yeah. You caught me.
-Me: That was the hardest.
-Them: It was good. Got some time to rest, recharge, ski a little. How about you?
-Me: Oh, nice. Where are you skiing?
-Them: I'm down in Montana. Or up in Montana, I should say. I'm in California right now. Yeah.
-Me: Whoa.
-Them: I wish we went to Kalispell, which is, like, on the other side of Montana. Whitefishes. That's rice.
-Me: Okay, got it, got it, got it.
-Them: Cage.
-Me: I'm looking at. Things that are. I'm trying to make this list for you, and it's not right yet, but it will be.
-Them: Oh, yes. Oh, yes.
-Me: So we had. We moved into our house, and it's, like, right next to Emily's parents. This is a couple years. This is, like, in Covid.
-Them: Oh, nice.
-Me: But across the street.
-Them: Okay?
-Me: There was this really nice set of neighbors. They're really cool, but they're older. And they had a kid who was like a junior in high school when moved in. Then he graduated. The older girls going, but basically the kid graduated, and they were like, we're going to Montana. We're going to bozeman. And the mom was, like, needing. She's like, my older daughter is probably never going to have kids.
-Them: Yeah.
-Me: Our younger son is. Should not be having kids right now. I need grandkids. And I was like. I was like, you got him. We need help. You want to come over?
-Them: Yeah. Yeah.
-Me: And so she was like fairy God across the street.
-Them: Run them out. Yeah. That's awesome, though.
-Me: It was great. And they're like, we're moving to Bozeman. Do you want to come visit? I was like, do not. Just one of those. Like, I'm not. Because, like, now they're, like, fancy. Or at least the husband's fancy.
-Them: Yeah, throw that out there. Yeah.
-Me: Don't be like, oh, yeah, no, sorry. I won't be in Jackson Hole that time of year. We can't. I'm like, I have nothing better to do. I will book a ticket right now if you invite me to. I can stand your nice house and, yeah, come. And so we went over there. We went in the summer and it was beautiful.
-Them: That's literally what it is. Yeah. Yeah, exactly. Take the kids too. Yeah. Oh. What do you think? Yeah. In summer it's incredible.
-Me: It was amazing. It was nuts. It was so.
-Them: Do you get a chance to go to Paradise Valley right next to it, or no?
-Me: Yes. Yes. We went all the way into Yellowstone and just, like, it was. It was stunning. It was so cool.
-Them: Yeah. I went to a wedding there. I think it was two years ago, and it was just crazy, you know? Apparently John Mayer, like, lives out there. And plays at local bars.
-Me: That's what they were telling, like. Well, she's like, Glenn Close is, like, their neighbor. Like, all these, like, famous people. Bozeman's crazy. It's a micropolis. I had never heard it because the guy who, like, used to be in real estate.
-Them: Holy crap. Yeah.
-Me: It's like the fastest growing Micropolis. Whatever. But it's this crazy thing where, like. They were, like, doing this where, like, you buy a house for $750,000.
-Them: Yeah.
-Me: When you buy, like, a piece of land with a house on it, you level the house immediately, and then you build a $2 million house on top of it. And then there's, like, all these, like, beautiful home, but it's like, all the, like, housing crisis problems, like, but, like, in tiny.
-Them: Yeah, exactly.
-Me: Like you're seeing crazy houses go up and there's all these restrictions and zoning laws and all this sort of stuff.
-Them: Oh, for sure.
-Me: And then. And then it's like Van Life in the Park.
-Them: Yeah. It's so gnarly.
-Me: It's, like, all white and her daughter's vandalized girl.
-Them: Yeah. Dude, it's. It's.
-Me: And so, yeah, crazy. I'm like, this dude not wearing news. Like, riding a bike without shoes is not living in that house. Like, where does this roll?
-Them: Yeah. No 100%. Every time. Yeah, and my fiance went to school there. She went to MSU back before it blew up. Like, yeah, before Covid and before everyone found out about Bozeman. And, yeah, every time we've been now, you're totally right. It's like 50 new little housing development things and then, like, a whole street. Row full of RVs and cars and people. Car, camera. It's so. It's so wild.
-Me: All right. Yeah, exactly. It's crazy. Y. Eah. And they're like. Because, I guess. Well, is that. Is that. Or is it Eastern or is it.
-Them: But it's so. It's pretty. It's eastern Montana.
-Me: Yeah, yeah, that's what. Yeah, the. I mean, they're like. You probably know it. Like, there's like a sushi place on the. Like, one back from. And then like four, four rows back, there's, like the park and that's like, that's where they are, right? They're like. Six blocks from, like, the main saloon strip. Like, I could. I could, like, picture it like, I walk there because there's only one main drag.
-Them: Yeah, I wouldn't trust it. Yeah. Yeah. Yeah, well, you know, like, that whole area too. It's just like, what's next? So Bozeman blew up. Then I guess Livingston, which is closer to Yellowstone's, probably gonna be next. And Butte? I don't know if you had a chance to go to Butte, but.
-Me: Yeah. Yeah. Yeah. Yeah. No, we did not go there, but.
-Them: You didn't miss out. I wasn't a fan of you.
-Me: It was. Well, it's funny, too, because I used to, like. That was, like, the joke. I used to work in television, and we used, like, all these, like, broadcast TV stations, and everybody would make fun of Bozeman Butte because it was like, really? They're like. He's like, there's literally one.
-Them: Yeah. Yeah.
-Me: Light in my town. Like, he's like, in the whole town. And, like, you know, all these, like, people from Seattle are showing up, and we're like. We're like. Butte's like a strong cbs. Like, it's looking pretty good.
-Them: Yeah. Yeah. Viewed as. I think I saw a comment. It was like the only place in Montana where two people will just fight for absolutely no reason. Can only go down and be. That's true.
-Me: Exactly. No. Yeah, exactly. Yeah.
-Them: But, you know, I think it's gonna do turnaround, like they all will at this point. They're all gonna.
-Me: Well, no, I think. I mean, that's. And I think they kind of know it, too, because. We don't want you here. Like, we moved to Monta from you and like, we found this, like.
-Them: Yeah, watch you here.
-Me: Boss place to live, and we just didn't tell anybody. And now, like, you guys? Yeah.
-Them: Everybody knows. Freaking Yellowstone. That's. That show just, like, ruined it.
-Me: Yeah, exactly.
-Them: For some people, I guess. But. But yeah. You wanted to talk about showing.
-Me: Yeah. Let's. Let me see if I. Yeah, let me see if this is actually working right now. I think it might actually be.
-Them: Analytics products.
-Me: Yep. Sweet. Okay, cool. Let me see. If I'm going to upload, this is probably wrong. I haven't even checked. But. I will see if we can get. This to work, okay? All. Right. Let's check this. I'm putting it in. All right. Let me share screen. This is not right, but let me see if this would be helpful. Account id. So here's like, the account id, here's Apex Health Solutions, and then we need to find out. This product will be what ultimately becomes either like caps or pass or medication adherence.
-Them: Exactly. Where did you get this from? Salesforce.
-Me: All. Right. So I'm going to bring you in under.
-Them: Oh, this is from jira. Okay.
-Me: Well, it's both. So here's my side project that I've been working on in the backgrounds. And so I'm going to show you and swear you to secrecy until such time as I can at least get from. Validation from somebody that I'm not going to get fired. This will actually help. All right, so I'll tell you, I have struggled with. The Sam's in general.
-Them: Yeah.
-Me: At least specifically as it relates to, like, products and, like, all the change that's going on in there and, like, hard job and, like, there's a lot of stuff being thrown at them. Like, all of the engagement ones are now in charge of analytics. Like, I train. I train all them, like, literally six times, and then like, and, like, no one there, and still every time there's a couple, it's getting better.
-Them: Still guts it.
-Me: But it's just like, it's been rough.
-Them: Yeah.
-Me: And somebody like, I don't have. We also don't have the best, like, easily accessible analytics in general. And then it's like, hey, I'm looking at somebody, like, I can't tell if they have CAPS or haas. Like, there's a lot of problems in general, and some of it we're trying to, like, productize. But where I want to get to.
-Them: Sure.
-Me: This is a little. Thing. Yeah, this is a little thing that I built. Strategic accounts Dashboard.
-Them: There you go.
-Me: But I basically want to.
-Them: Oh, perfect.
-Me: Kind of pull all this data in, and basically, this is probably like a scoring mechanism.
-Them: Yeah.
-Me: But I don't know or how long it's going to take. But basically, it's like it pulls in JIRA data, it pulls in Salesforce data. So here's Kansas. It's just adding up all the revs. This is the tickets that are associated with it. So this is actually like the JIRA data.
-Them: This looks really cool. I like this.
-Me: So this is like all the data refreshes, the monthly refreshes. So this is like someplace where you. Because, like, they have to go to Jira and, like, I don't know, like, they have to, like, figure out where is this pulling from.
-Them: Yeah.
-Me: The client analytics data, like, so the client analytics data is, like, flowing into here. And it basically just says, like, hey, here's all this with the overarching goal of basically, like, saying anything that has either, like, an upsell opportunity or renewal will, like, get them in this, like, top right box of, like, positive momentum. And then, like, you know, hey, here's like, some other things.
-Them: Yeah. Okay? Grow zone. Yeah. Downgrades, returns would be in the. In the bottom area.
-Me: And then so here, like government employee Health Association. This has a churn opportunity that has an opportunity that was like during the date timeline between 23 and then, like, A 2026. And so. This puts them in this bottom box, and at least we know.
-Them: Down.
-Me: But my overarching goal to, like, start to combine these so we can start to say like, and then basically, this momentum thing is basically, like, how many monthly active users do they have? Like, are they actually using the product? Are they getting value out of it?
-Them: Yeah.
-Me: And then where I'm really going is, like, it's going to run and there's going to be a library of expansion opportunities. Like, do you have caps? It's like, you should get Haas. Like, do you have, like, do you have, like, this?
-Them: So. So literally on that point, is there a tab or area where you see what product they have and then, like, how it was set up, or is it just captured as an opportunity, like,
-Me: Right now. It just pulls in opportunities and account data, and then it basically computes those, like,
-Them: Dan. Claude is good.
-Me: Right.
-Them: Yeah, it's so good. So I see your vision. Yeah. Oh, go ahead, go ahead.
-Me: There. That's okay. But basically, what it, like, I want to. And this is, like, create Jira ticket. But basically, like, the goal is that I want it to, like, actually run. And then I want it to create a Jira, like, expansion ticket that gets assigned to a SAM that's like, hey, like, run this playbook. Like, talk to them about like talk to them about this?
-Them: Can they? Can they do that without there being an opportunity first? Like, don't you want to, instead of creating a JIRA ticket for that? If it's potentially an upsell, shouldn't it be an opportunity instead, is what I'm saying.
-Me: Yeah, like one of the. One of the two, like, and I don't know, like, which one they want to use. Like, I think, like, probably like an opportunity. The only thing. Yeah, I guess that would be, like. Is it, like, count as an opportunity if it's like.
-Them: You're going to have to eventually. So it's a great question.
-Me: This is like a push. This is a push.
-Them: You're going to. It's going to have to be an opportunity anyways if it materializes into revenue for us to capture, so we might as well just make it an op.
-Me: Yeah, that's a good point. Yeah.
-Them: And it gets in qualification so it doesn't hurt their win rates or whatever. So I see, I see your vision. So you basically this platform here will pull in what products they have, upsells and downturns and then product specific data about like the number of licenses and active users and everything in between, okay? Cool, cool.
-Me: Right. Right. And this is basically. But this is kind of, like, at least for me, that it's helpful. But, like, what I really want this to do is I really want this to be, like, a flexible tool that people can use to start, like, creating these qualification opportunities, because, like, they should, like, like, they're kind of like, deer in headlights. At least from what I've seen, it's like. I don't know.
-Them: Yeah. Yeah. And why is that? So, like, they're supposed to be on the front line.
-Me: They're just, like, catching up. They're like, I don't know what the hell. I have no idea what a population health director even does, and I'm sitting in this meeting. And I'm having all these people talk at me about caps Files. I don't. I'm Googling caps. I don't know what that means. Like, apparently it's a training thing, and I think it's like, also, it was previously an implementation job, and that got like,
-Them: So it's a training thing.
-Me: Pulled away from them. But then. But then, like, nobody, like, filled the gap there.
-Them: Yeah. But here's my question for you, though, on that. If we do run this or if we do spin this up and have this running, wouldn't they still run to that same issue? Are you the one going in there and, like, greasing the wheels?
-Me: Well, no, I didn't. That's why. That's why it's like. That's why it's like arming this combination of, like, this and a cat made, but it's like. You've been assigned. You're qualifying. We have qualified this client for the Haas Upsell. This is what the so is. Go to this Confluence page.
-Them: Yeah.
-Me: Call this person present this deck. Once you get them to say yes, because. And then, like, also what's in here, because we actually have data from the stars. Compare tool. Be like, this is the. This is the scores where they're low. This is their benchmarks. These are their competitors who are doing better than them. Like talk about this.
-Them: I see. Yeah.
-Me: Your first presentation.
-Them: You're talking points, and you have, like, basically. Yeah, so you have a Runway of, like, presenting. You're creating an upsell, and you're basically, like, greasing the wheels.
-Me: Yeah. I feel like these are. These are the latest. Like, these are the latest case studies. This is what the HAAS is like. Present this deck. And talk about these studies.
-Them: Yeah.
-Me: And say like here we thought about this from you. We saw the latest stars, compare like data. We know that you guys are suffering in this area. We know it's hard. We've had success. Do you want to like. We would love to talk to you more about it.
-Them: Can you connect this to an LLM that just gives them that recommendation?
-Me: Yeah, that's what I'm doing.
-Them: Ok?
-Me: And so that's like this. This is like the UI part of it, but the other fun stuff.
-Them: Ok?
-Me: That you start to do.
-Them: Yeah, go ahead.
-Me: I don't know if you mess with N8N much.
-Them: I have when you and I were working on it. I think it's a great tool.
-Me: But basically, that's what this. That's what this all runs off of.
-Them: But were you thinking of using NAN as a trigger to create a new upsell op, like automatically, or is it going to be okay?
-Me: Yeah. Yeah. And basically, that's what the.
-Them: How do you.
-Me: So, like, even here, like, right?
-Them: How do you make sure?
-Me: All. Right. So then. Here's. Why is this. There we go. Okay. So, like, here's the one that pulls the Jira tickets. It's a web hook, so it's responding to the actual thing, but you can also set this up. To where you're adding in. LLM.
-Them: Yeah.
-Me: An A1 and then whatever.
-Them: Because that's. That's what you'd have to train to tell it.
-Me: But then. And the tool is like this. You can, like, add. Yeah. Call N8N workflow tool. And so you could say here, like, pull the JIRA ticket. You could, like, pull all the Jira tickets if you want. You can.
-Them: You would have to. Can you also pull Is it just one poll, or can you pull Salesforce data as well?
-Me: No. Then you have as many tools as you want. So you could say, like, call the and I There's a. This is the Sam Cron which pulls all Salesforce data.
-Them: Okay? So this all makes sense. I think we need to start with two things if we want to actually do this. One is just mapping this out. On paper or just in an. I don't know, whatever you prefer.
-Me: Right.
-Them: And then the second is like how we want to create that UI for them. I mean. Claude did a very good MVP from what I could see, like.
-Me: No. Well, that's what I think, like, and honestly. My overarching hope is that it might be. Hey, look, if it's crazy successful and everybody wants access to this little, like, tool, like, great, yeah, we'll figure out a way to deploy it and get people access to it.
-Them: Yeah.
-Me: But my other hope is that it's like, we already have enough places. Like, maybe I'll just have that on my screen. But, like, what we really need to do is, like, the whole point is that you don't actually have to look at this, just look at all the tools you're normally doing. Just like. Hey, like, here's the salesforce. Like how many of these qualified leads actually turn out? Or, like, okay, well, we need to do a better job of making these qualifications or, like, maybe doing a better job of making these opportunities because, like, Like, like, basically, like have our target rate be. Or like target metric be. Like, how many qualified leads become advanced in the next stage? And then that's sort of the goal that we keep. Going against. And I like my hope would be.
-Them: Do you mean, qualified opportunities?
-Me: Yes. Well, sorry. Yes. How many? Like, because you said the qualification, basically. Where the qualification, whatever the opportunity that we make, that doesn't count, then it progresses to the next stage where it starts to actually be like, hey, this is like, industry. Yeah.
-Them: Yeah, yeah. That's actually exactly it. Yeah. Qualification discovery.
-Me: Yeah.
-Them: Okay? So.
-Me: But this is, like, total side project. You're the first person I talk to about it, but I think, like, this is what I'm hoping to do because, like, I want to actually start hitting more of, like, our metrics. And then, like, I'm kind of done. I'm exhausted from the account manager. Saying, like, they have no, like, because they're like, hey, we're doing our best. But, like, the ones who know what they're doing are doing great. And then I'm worried about.
-Them: No, this is dope.
-Me: I would love to. Rather than start, like, complaining about, like, hey, the sands aren't doing right. Be like, hey, I built you a tool. I'm going to give you the exact playbooks to run. Talk to them about these upsell opportunities. They're already in Salesforce like, and just, like, give us feedback if they're doing well or not.
-Them: Here's the thing. What about bringing sales, like the analytics sales guys, Tony and Kathy instead?
-Me: Y. Eah.
-Them: Is they? Why are we making it like Sam focused with these guys?
-Me: That's a good point. Yeah, we could definitely. Yeah, we could definitely do it that way, too.
-Them: You tell me. Like those two have the most experience selling.
-Me: No, they are definitely the better. Yeah, they're definitely the best ones. And I think.
-Them: So my opinion.
-Me: Yeah, I'm actually curious. I'll be curious as to if they're like, I already know this, or like, this, like, already, like.
-Them: Yeah.
-Me: Well, I guess it's helpful, too, because there are they're not taking over all the accounts. And this is actually a better idea because we're coming up in February in Boston and they're already talking about other people are going to start.
-Them: Yeah.
-Me: Like, these are all the current client analytics customers I haven't brought in for non analytics customers. And like, where would be a good place to start? But that would be sort of like the next.
-Them: Yeah. Before we get to that step. Here's. Here's my thoughts on it. If these guys are already our customers, The whole sales motion is a lot easier because they're not a cold bleed, in a sense. You have Kathy and Tony, who are ideally, they actually know the product way more in the Sam zoo, who are like, overwhelmed and catching up.
-Me: Call.
-Them: Why don't we just hand the best leads to the best closers?
-Me: Yeah. That's a good point. Well, I think that, too. And then the other question that I had on my other side project was, do you know how much if there is anything been done with. The Stars Compare tool. I know that, like, it's kind of like we push people to it, we ask people, like, we market it, and we basically have it as, like, a lead generator, like, if they do it. But have we actually mined the data and say, like, hey, here's the people who should be taking it.
-Them: That's it? Yeah. No, we haven't. They should. I've requested something like that before.
-Me: Okay? Here. I'll at least give this to you. This is clay, which is another fun little tool. This is all for free.
-Them: Yeah.
-Me: I've not messed with it, but basically what I did is I got a hold of which data. Okay, this is. I got a hold of. The stars. Compare tool. Data. And then I was able to. For everybody. Calculate. Their intra state. Basically like their performance at a national level and their performance at a state level.
-Them: Compared to each other, right? Yeah.
-Me: And then I basically just said, here's all the list. This is everybody. Where is it? Contract summary. I think I put it.
-Them: I mean, this is like a top tier list to use.
-Me: This is everybody. This is everybody who. Shoot. I can't remember which one I did it in, but anyway. It was everybody who is in the bottom. State quartile.
-Them: Yeah.
-Me: And then. I took out some current customers and I took out some ones that I knew as prospects. And then. I just said, like, I want to find the directors of quality. At these companies, and this is the list that popped out, and this is their emails.
-Them: Dude, this is like. I think they should make you VP of marketing. But this list. For? No, but actually, like, this list is exactly what analytics needs to be hitting.
-Me: But this is not. This is like, not. This was pretty straightforward. To generate because we did all the hard work by, like, getting all the data and then.
-Them: Yeah. Yeah, like, we got all those responses. I just don't know if anyone ever, like, follow us up with them directly.
-Me: I think there's some of it, but, like, I feel like basically we should just have this be like, I don't know how to.
-Them: And, like, whether they go down a rabbit hole.
-Me: But basically, like, yeah, I think, like, this is, like, cold outreach campaign to basically just say, like, book time with. Tony or, like, book time with, you know?
-Them: Yeah, like, I mean, almost be automated.
-Me: Totally.
-Them: But I think this is good. I mean, this is also good. On top of that. And I don't know how we can tie this in directly to what you showed me on Nan. And the lower you have, because those were existing clients. This is like a new logo, like.
-Me: This is new logo. Yeah, this is new logo. Sdr.
-Them: Yeah. Which is cool. Like, that's still more. That's still more of.
-Me: Yeah.
-Them: It's just more white space for us to attack. So I know we don't have too much time. What do you need for me to help you?
-Me: I'm going to get you the list of products to people and then. Or products to accounts and then do you need anything else to make that, like. Yeah, the Salesforce account one.
-Them: We can put that in Salesforce. Product configurators. Those two are the key things. What would be nice is like any comments or details about like specific configurations or license.
-Me: Like, why the business, Medicare, Medicaid.
-Them: I think we already have that under assets anyways, but it would be like.
-Me: Okay? Well, I think they actually. And then total allowed lives.
-Them: Yeah, well, no, like, licenses, maybe.
-Me: Yeah. Licenses. Yeah, we should.
-Them: And then active users.
-Me: Yeah. We can definitely. I will get those things. Okay.
-Them: That would be. That would be the first step. And then after that, what do you want to do next?
-Me: I think I just want to get this as a field at the account level in Salesforce.
-Them: What do you want to call.
-Me: What is the portals call it? What does the portals call? Like account.
-Them: It's a product configurator.
-Me: Do they have it at the account level or is it okay?
-Them: They do.
-Me: I would just want to like that thing. And I would. Whatever they have it. Like I would want to replicate basically exactly their currency.
-Them: Just to, like, share my screen with you. You're saying. This. One sec. This list here.
-Me: Exactly. Yep. Solutions.
-Them: Yeah.
-Me: Yeah, exactly.
-Them: Renewal letters. Premium billing ed. Yeah, so?
-Me: It would be predict and it would be the product would be caps Haas medication adherence.
-Them: Yeah. Perfect.
-Me: Yeah.
-Them: Yeah. So that's what I can do for you. Okay, so that's first. What. What's the next step after that? Like, where do you want to then switch over to N8 N?
-Me: On the. Yeah. So I think. Well, actually, I think. I'm going to pass that list by Dan Reddy probably actually is he thinks still like a reasonable. Cold. Outreach person. To talk to. Why? Does. Ok? Ay, there we go.
-Them: Daniel Reddy is currently unavailable.
-Me: To. Show you something. Okay? I don't know where. Where I went. Yeah.
-Them: Yeah. I don't know what happened there.
-Me: And a ton, I think. Yeah. Next step would be. Let's. I like the idea about moving it to, like, the best closers. I want to talk to Phil Brian about it. I want to make sure that. And then I'll talk to Kathy and Tony and make sure that they have, like, the right that they're, like, on board for these sort of things. I don't know if. You felt it might actually be better, that's what I'll do. I'm going to frame it. I'm going to show it off. At the onsite in early February. And then assuming they're good with it, like, I'll be able to meet with enough people and kind of show it off. To like. I don't think it's hard to make opportunities like in Salesforce. With AI Like, I just would need you to help me make sure that we get it, like, with this mapping and this sort of, like, configuration.
-Them: Yeah. Yeah, exactly.
-Me: But that'll be the goal. It'll basically be like, hey. Let's identify upsell opportunities. And put them into the qualification stage. And. Have them be associated with the right accounts and then we'll let them decide of, like, hey, you know, is it Kathy and Tony, or is it, like, move towards. But I feel like that'll be helpful too, because, like, we're literally doing this on site training. For the salespeople. To teach them about, like, even what the hell, like, predict product line. And like is. And like, who are these customers? And I think if we could at the end say, like, hey, now we have an automated tool. That's going to take the guesswork out of, like, This is the right opportunity to present to this client. Like, hey, like, if someone's new and they're not doing well in caps. Do caps, like, right?
-Them: Here it caps to you, yeah.
-Me: Can't, like, talk to them about caps today.
-Them: Yeah.
-Me: And say these things.
-Them: I think it's a good pipeline generation move.
-Me: Yeah.
-Them: Yeah, for sure.
-Me: Okay, sweet.
-Them: Awesome, man. I'm a little bit over, but, yeah, gave me that list, and then I'll go from there.
-Me: I'm going to go through that list, and then. Yeah, we'll talk in. Yeah, early February will be the latest. Well, we'll talk between them, but that's like I. I want to get. Now, I have, like, a little deadline to get this out.
-Them: Go. Sounds good.
-Me: Sweet.
-Them: Alrighty. Catch you later.
\ No newline at end of file