Anduril Activation Plan
Task 2: 15-minute domain engineer activation. Existing customer. 20 SEs active, domain engineers treating Flow as extra admin work. Goal: behavior change, not purchase decision.
Before the call: 15-min setup
Same sandbox as Rivian. Pre-open the same four tabs — the domain engineer will be watching your screen and needs to see Flow in the context of their subsystem, not the full system tree.
app.flowengineering.com → candidate-2 org → candidate-2-demo-project. Left sidebar → Requirements → Tree mode. Click into the Cooling Subsystem node, then switch to Table mode. This is the scoped view — only the 12 requirements that affect this subsystem. REQ-35, REQ-28, REQ-34 show as inherited.
Left sidebar → Parameters → Analysis. Pre-load Heating Model (Python/GitHub), Cooling Model, Reliability Model, Main Chamber Volume (Onshape). Leave parked on Heating Model. When the engineer's Python script runs, Flow reads the output number and stores it in the spec automatically — that's what we show in Beat 2.
Left sidebar → Verification → Testing. Confirm 'Automated Run — TC11 PASS' is visible and the Iterations dropdown shows saved baselines. This is where their test results auto-link to requirements.
Duplicate Tab 1. REQ-35 (power budget) lives here — Claude updates it from 1950 W → 2150 W during Beat 5b. Refresh this tab live so they see the number change.
Open Claude Code with projects/cowork-flow-interview/ as root. Type “what flow tools do you have?” and confirm 10 flow_* tools appear. Beat 5 framing here is different from Rivian — frame it as a question they'd ask before committing a change, not a governance story for their VP.
- REQ-28 — temperature accuracy (inherited by Cooling from root)
- REQ-34 — temperature uniformity (inherited by Cooling)
- REQ-35 — total power budget: 1950 W actual, 2500 W cap (Claude updates this in Beat 5b)
- REQ-37 — humidity sensor MTBF: 40,000 hrs (side-panel drill-down in Beat 1)
- REQ-39 — system R90/C90 ≥ 20,000 hrs, actual ~54,000 hrs (Beat 5a pre-commit query)
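For prep, the arithmetic behind the Beat 5a query ("which component MTBF is the biggest risk if it degrades 20%?") can be sanity-checked with a toy series-reliability model. This is a sketch under stated assumptions — a simple series system where failure rates add, with illustrative component names and MTBF values — not Flow's actual R90/C90 math or the sandbox's real components.

```python
# Toy series-reliability sketch (assumption: failure rates add, so
# system MTBF = 1 / sum(1/MTBF_i)). Component names and values are
# illustrative, not from the candidate-2 sandbox.

def system_mtbf(component_mtbfs: dict[str, float]) -> float:
    """System MTBF for components in series."""
    return 1.0 / sum(1.0 / m for m in component_mtbfs.values())

def worst_degradation(component_mtbfs: dict[str, float], factor: float = 0.8) -> str:
    """Return the component whose 20% MTBF drop hurts system MTBF most."""
    def degraded(name: str) -> float:
        adjusted = {k: (v * factor if k == name else v)
                    for k, v in component_mtbfs.items()}
        return system_mtbf(adjusted)
    return min(component_mtbfs, key=degraded)

if __name__ == "__main__":
    mtbfs = {"humidity_sensor": 40_000.0, "heater": 120_000.0, "controller": 200_000.0}
    print(round(system_mtbf(mtbfs)))   # → 26087 (hours, baseline)
    print(worst_degradation(mtbfs))    # → humidity_sensor (lowest MTBF dominates in series)
```

The takeaway for the demo: in a series model the lowest-MTBF component dominates, which is why Claude's answer to the Beat 5a prompt should point at the weakest link rather than the largest subsystem.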
Branch on their resistance
After discovery, ask one opening question to find where the friction lives. Branch based on what they say.
If they say "I don't have time to update the spec mid-sprint":
"You don't file anything. Your Python script runs, Flow reads the output number, and the spec updates. If you're already running the script, the update already happened. Zero extra steps."
- → Beat 2 — show how running the script auto-updates the spec
- → Beat 3 — show how passing tests auto-link to specs
Skip context-setting, go straight to running the script live
If they say "that's the systems engineer's job, not mine":
"The systems engineer (SE) writes the specs — that's their job. But to keep those specs accurate, they need your actual numbers. Right now they email you asking for updates. When your script runs into Flow automatically, they can just look it up. You get fewer interruption emails, not more work."
- → Beat 1 — show the scoped spec view first
- → Beat 2 — show how the script run IS the spec update
Lead with the interruptions they stop getting, not the new tasks they'd take on
If they say "we tried a tool like this before and nobody used it":
"That tool made engineers manually file information — and engineers don't have time for that, so nobody filed anything. Flow reads the information out of the work they're already doing: the Python scripts they already run, the tests they already execute. The AI moment you're about to see removes the last bit of manual effort."
- → Beat 5 — AI moment first
- → Beat 2 — show how the script run IS the spec update
Frame it as AI eliminating the filing cost, not engineers adopting a new habit
Demo Run Sheet — Anduril (Task 2) (20 min)
Sales Feed principles throughout: outcome framing before screen share, aha-moment management, silence as a tool, constant check-ins, flat-reaction recovery, and naming their next step before they do.
Interviewer plays an Anduril domain engineer — a specialist like a thermal engineer, electrical engineer, or software engineer, probably in thermal or power systems given the sandbox — in sprint crunch. Flow is already licensed. The systems engineering (SE) team uses it. This engineer has been told to log updates into Flow but sees it as the SE's job. Their objection is not price — it's time. "I have 14 PRs queued. When do I have time to update the spec?" Win condition: show that their existing work already IS the update. They don't file the spec manually. Their scripts and test runs do it automatically. The AI moment is a question they'd ask before a commit, not a governance story for their VP. Close with a concrete live action, not a vague next step.
Opening: Discovery first (2 min, no screen share)
Manufacture discovery with 3 targeted questions. Their answers become your “you said...” ammunition for every beat.
"From me" or "from git" = your Beat 2 setup. They're manually translating their own work into Flow instead of it happening automatically.
"After" or "usually after" = Beat 1 lands hard. They're building against stale constraints.
"Someone cross-references it" or "I ping the SE" = Beat 3. The friction is that passing tests don't automatically mark requirements as covered.
“Based on what you just said, [their specific answer], that's exactly what I want to show you. 20 minutes, focused on those problems. Not a feature tour.”
They agree. Now share your screen.
The 5 Beats
Log in to app.flowengineering.com → select the candidate-2 org → open candidate-2-demo-project. Left sidebar → Requirements. Switch view mode to Tree. Click into the Cooling Subsystem node so the filtered Table view shows only its requirements.
The Requirements view in Table mode, scoped to a single subsystem, is what a specialist engineer opens day-to-day. (At Anduril that's whoever owns the subsystem — thermal, electrical, software.) It shows only the requirements that belong to their work. Not 400 system requirements — 12, 15, whatever their subsystem owns. Each row shows value, stage, owner, and whether it's been verified.
Replaces the workflow where an engineer opens a 400-row spec export in DOORS or Confluence and hunts for rows marked with their name. The subsystem filter is automatic — when the systems engineer assigns ownership, the specialist's filtered view updates. When a top-level spec changes and affects this subsystem, it shows up automatically as an inherited row (REQ-35, REQ-28, REQ-34 in the demo) — the engineer didn't have to ask or go find it.
Beat 1 reframes the 'extra admin work' objection. If the engineer opens Flow and sees only their 12 requirements, it's a filter, not a burden. This beat is fast (2 min) because it exists to land a single idea: Flow isn't a second place to update things — it's a scoped view of the things you already own.
Left sidebar → Parameters → Analysis. List of analysis models down the left side: Heating Model (Python script on GitHub), Cooling Model, Reliability Model, Main Chamber Volume (Onshape CAD), and a few others. Pre-load the Heating Model detail pane. Leave it parked there.
Flow's connection layer to the tools engineers already use. Each 'model' in the list is just a pointer to a real file or script elsewhere — a Python script in GitHub, a CAD file in Onshape (or NX, CATIA, SolidWorks), a calc in Excel or MATLAB. Flow reads the output numbers from each script or file and keeps a live record of them in the spec. The engineer runs their script in their normal tool; Flow reads the result automatically.
When the Python thermal script runs and outputs max_temp_achieved_c = 233.3, Flow picks up that number. Any spec requirement that references that measurement now knows whether it passes or fails — without any manual data entry. When someone updates a CAD file and re-syncs, Flow pulls the new mass automatically. The engineer doesn't export anything, fill in any fields, or ping anyone.
This beat directly addresses the 'extra admin work' objection. The objection is real if the engineer imagines manually filing updates. The reframe: their work already IS the update. If they run their thermal model in the normal CI loop, the output lands in Flow without an extra step. Beat 2 shows this mechanically — run once, Flow updates. That's the behavior change.
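The Beat 2 mechanics can be sketched as a toy thermal script whose output Flow would pick up. This is an illustrative sketch, not Flow's documented ingestion contract: the JSON-on-stdout shape, the function name, and the thermal constant are assumptions. Only the output key `max_temp_achieved_c` and the 233.3 value come from the demo.

```python
import json

# Hypothetical sketch of the Heating Model script from Beat 2.
# Assumption: Flow reads a JSON key/value from the script's output;
# the actual integration contract is not shown in this plan.

def run_thermal_model(power_w: float, ambient_c: float = 25.0) -> float:
    """Toy steady-state estimate: temperature rise proportional to power."""
    thermal_resistance_c_per_w = 0.1068  # illustrative constant, chosen to match the demo value
    return ambient_c + power_w * thermal_resistance_c_per_w

if __name__ == "__main__":
    result = {"max_temp_achieved_c": round(run_thermal_model(1950.0), 1)}
    print(json.dumps(result))  # → {"max_temp_achieved_c": 233.3}
```

The point to land live: the engineer runs this exactly as they already do in CI; the print is the "filing." There is no Flow-specific step in their loop.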
Left sidebar → Verification → Testing. Full test plan with four test cases. Find 'Automated Run — TC11 PASS' in the Last Run column — point at that row. Above the test list, find the Iterations dropdown (default: 'Current Iteration'). Both the test list and the dropdown should be visible without scrolling.
Flow's test ledger. Every test case — script, bench procedure, SITL run, HIL on Speedgoat, or Crucible cross-domain event — is a row. The Last Run column shows what happened most recently and includes the CI run ID. When a test runs and passes, Flow links the result to the requirements it covers automatically. The PR that ran the test IS the audit trail.
Replaces the workflow where an engineer passes a test, mentions it in Slack, and waits for the systems engineer to manually look up which spec items it covered. With Flow, the linkage is automatic: run the test, the result lands in the ledger, and the spec items it covers flip to verified — automatically. The engineer doesn't file anything. They don't ping anyone.
This beat addresses the second most common complaint: 'I run the test. I tell the systems engineer. They update the tracker. Why is that my job?' Beat 3 shows it isn't — the test run does it automatically. The Iterations dropdown shows a bonus: when the gate review comes, the evidence package is already assembled. The engineer contributed to it just by running their normal tests.
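What "run the test, the spec flips to verified" implies a CI step emits is something like the payload below. This is a hypothetical sketch: the field names (`test_case`, `result`, `requirement_ids`, `ci_run_id`) and the JSON shape are assumptions, not Flow's actual API. The test-case and requirement IDs match the demo sandbox; `CI_RUN_ID` is a stand-in for whatever run identifier the pipeline already exposes.

```python
import json
import os

# Hypothetical sketch of a CI step's result payload for Flow's test ledger.
# Assumptions: field names and shape are illustrative; the CI run ID comes
# from an environment variable the pipeline already sets.

def build_test_result(test_case: str, passed: bool, requirement_ids: list[str]) -> dict:
    return {
        "test_case": test_case,                          # e.g. "TC11" from the demo
        "result": "PASS" if passed else "FAIL",
        "requirement_ids": requirement_ids,              # spec items this run covers
        "ci_run_id": os.environ.get("CI_RUN_ID", "local"),  # the PR/run ref IS the audit trail
    }

if __name__ == "__main__":
    payload = build_test_result("TC11", True, ["REQ-33", "REQ-35"])
    print(json.dumps(payload))
```

The design point for the room: the engineer's pipeline already produces everything in this payload; linking it to requirements is metadata, not new work.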
Still on the Testing page from Beat 3. Click the Iterations dropdown at the top of the test list. Show the saved baselines: 'CoDR Entry (10 Dec 2025),' etc. Switch between a prior baseline and Current, then switch back. The point is how fast the view changes.
Flow's named-baseline mechanism. Every gate ceremony (SRR, PDR, CDR) snapshots the entire requirements + verification state at that moment. You switch between any baseline and the current state with a single click. 'Is this what was signed at PDR?' is a one-click diff, not a three-day doc assembly exercise.
Turns gate prep from a week-long document sprint into a dashboard query. The PDR readiness score is a live number. The committee sees 94% before they walk in the room. The two open items are specific and named, not buried in a changelog. Domain engineers contributed to that score by running their normal tests — they don't have to assemble evidence separately.
Beat 4 is the 'you're already done' beat. Most domain engineers have been part of gate prep nightmares — a week before PDR, someone sends a spreadsheet and asks everyone to fill in test results. With named baselines, that spreadsheet never gets sent. Your sprint work is the evidence. The beat is short because the idea lands fast once they've seen Beats 2 and 3.
Beat 5: The AI moment (3 min, the payoff)
Leave the Flow UI visible on the left. Bring Claude into view on the right.
"Pari published the AI Systems Engineering Handbook two weeks ago. His thesis: AI reduces design time 10x, creates 10x more artifacts, creates 100x the alignment problem. For engineers in the trenches that alignment problem looks like: 'I changed something, I'm not sure what it touches, I should probably check with the systems engineer.' What you've seen in the last 10 minutes is the structured data layer that makes the next 3 minutes possible — because Claude can only reason about your specs if your specs are a live database, not a PDF."
Prompt 1 (Beat 5a). Paste into Claude Code on the right side of the screen:
"REQ-39 requires system R90/C90 ≥ 20,000 hrs. What's our current margin, and which component MTBF is the biggest risk if it degrades 20%?"
Prompt 2 (Beat 5b). Paste into Claude Code:
"REQ-35 caps total power at 2500 W and we're at 1950 W. I want 200 W more heating headroom. Propose a new value for REQ-35, explain the tradeoff against REQ-33 and REQ-28, and if I approve, update the value and move the requirement to In Review. Keep the proposal to 5 bullets and then the two actions."
Prompt 3. Paste into Claude Code:
"Now create a verification test run for the Ramp Rate Performance test case, link it to REQ-33 and REQ-35, and set the result to PENDING so the ME team can pick it up."
Close: name the next steps for them (90 sec)
- "Pick one requirement that's currently stale in your sprint. We'll link your last test run to it right now. 90 seconds. You'll have done the thing you said takes too long."
- "If every engineer on your team did what you just watched once per sprint, the SE stops asking for updates — because Flow already has them. The activation isn't a training program. It's showing this view to five people."
- "The AI moment you just saw — that's the payoff for having structured requirements data. It only works because you and your team ran your models and tests into Flow instead of into a spreadsheet."
“Here's what I've seen happen at this point. Your team needs to go discuss internally. You've probably got questions to take back to colleagues who weren't on this call. So let me stop here and ask directly: what did you see today that was most relevant to where you are? And what would need to be true for this to be worth a deeper conversation?”
Stop talking. Wait for them.
Meta connection: why this prep IS the demonstration
The activation problem at Anduril is the same problem Pari is solving at the company level: structured data unlocks everything downstream, but only if the people closest to the work contribute to it. My prep workflow for this interview is the same pattern — research monorepo, handbook ingested, sandbox reverse-engineered, agents reasoning over structured context to write the script and the MCP. The insight isn't that AI is powerful. It's that AI is only as useful as the data structure underneath it. That's what Flow gives domain engineers access to — and what an FDE's job is to make real at the team level, not just at the VP level.
If the reaction is flat at any point
- “I want to make sure I'm showing you the right things. What I just showed, does that map to a problem you actually have, or am I off base?”
- “You seem like you're thinking about something. What is it?”
- “Normally at this point people have one of two reactions. Either this is exactly what they've been looking for, or it's not the right fit at this stage. Which is it for you?”
Resistance cheat sheet
Domain engineers resist differently than buyers. These aren't price objections — they're time and ownership objections.
| Resistance | Reframe |
|---|---|
| I don't have time to update the spec mid-sprint | You don't update the spec manually. Your Python script does it. Your test script does it. If you're already running those, Flow already has the update. Nothing extra. |
| That's the systems engineer's job, not mine | The systems engineer (SE) owns the spec structure. You own the actual numbers. When your thermal script runs into Flow automatically, the SE doesn't have to email you asking for the latest value — they can just look it up. That's fewer interruptions for you, not more work. |
| We tried a tool like this before and nobody used it | That tool asked engineers to manually file information — and they didn't, because filing it cost more time than it was worth. Flow reads it out of the scripts and tests they're already running. That's the whole point of Beats 2 and 3 — your script run IS the update. |
| My test scripts already link to Jira tickets, not requirements | The CI commit that runs the test is the same ref Flow tracks. You don't reroute your pipeline — Flow reads the same PR/commit reference. Both linkages can coexist. |
| I can't modify my CI pipeline to add a webhook | You don't modify your pipeline. Flow's model integrations pull outputs on-demand when you re-run. The trigger is you, not a webhook. Start manual, automate later. |
| What if I update Flow and the SE overrides it? | Flow has a Change Request layer. If something you've contributed to gets changed, the CR creates an audit trail. You're not losing ownership — you're gaining a paper trail when others touch your subsystem. |
| I'll look at it after this sprint | That's fine — and expected. The close here isn't 'start today.' It's: before you leave, pick one stale requirement and link your last test run to it. 90 seconds. So you know what it feels like before the sprint ends. |
End with the artifact
flow.thedefrag.ai
“Everything I used to prep this — the MCP, the activation plan, the sandbox walkthrough — is live at flow.thedefrag.ai. If you want to show another engineer what we just walked through, that's the link. The activation plan for your team is already built. You don't have to explain it — you just share the URL.”
Different close than Rivian. Not a leave-behind for evaluation — it's a tool they can forward to the next engineer without scheduling another call.