Rivian Run Sheet
Task 1: 20-minute first sales call. Audience: Rivian Head of Systems Engineering evaluating purchase. Goal: close on next steps, not a feature tour.
Before the call: 15-min setup
Screen share demo. Four browser tabs pre-opened so every beat snaps instantly — no loading, no hunting. The sandbox project is a temperature/humidity test chamber called TestEquity 123H.
app.flowengineering.com → candidate-2 org → candidate-2-demo-project. Left sidebar → Requirements → switch view to Tree. Expand the full hierarchy: TestEquity 123H at root, Chamber Enclosure, Control & Sensors, Humidity Control (Cooling + Heating as children).
Left sidebar → Parameters → Analysis. Click each model once to pre-load: Heating Model (Python/GitHub), Cooling Model, Reliability Model, Main Chamber Volume (Onshape). Leave parked on Heating Model.
Left sidebar → Verification → Testing. Confirm you can see 'Automated Run — TC11 PASS' in the Last Run column and the Iterations dropdown at the top without scrolling.
Duplicate Tab 1, click into Cooling Subsystem node, switch to Table mode. REQ-35 (power budget) lives here — Claude updates it from 1950 W → 2150 W during Beat 5b, then you refresh this tab live.
Open Claude Code with projects/cowork-flow-interview/ as root — this loads the flow-demo MCP automatically. Type “what flow tools do you have?” and confirm 10 flow_* tools appear. Flow UI left 60%, Claude Code right 40% — both visible at once.
Know these five requirements cold:
- REQ-28 — temperature accuracy (inherited by Cooling from root)
- REQ-34 — temperature uniformity (inherited by Cooling)
- REQ-35 — total power budget: 1950 W actual, 2500 W cap (Claude updates this in Beat 5b)
- REQ-37 — humidity sensor MTBF: 40,000 hrs (click for side-panel drill-down in Beat 2)
- REQ-39 — system R90/C90 ≥ 20,000 hrs actual ~54,000 hrs (Beat 5a reasons about this margin)
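Beat 5a's margin question is worth being able to sanity-check yourself. A minimal sketch, assuming a series system where component failure rates add — only the 40,000-hr humidity sensor MTBF comes from the sandbox; the other component figures are hypothetical placeholders:

```python
# Series-system MTBF sensitivity sketch.
# Only the 40,000 hr humidity sensor MTBF comes from the sandbox;
# the other component figures are hypothetical placeholders.
mtbf_hrs = {
    "humidity_sensor": 40_000,
    "compressor": 120_000,      # hypothetical
    "heater_element": 200_000,  # hypothetical
}

def system_mtbf(mtbfs):
    """Failure rates add in a series system: 1/MTBF_sys = sum(1/MTBF_i)."""
    return 1.0 / sum(1.0 / m for m in mtbfs.values())

baseline = system_mtbf(mtbf_hrs)

# Degrade each component's MTBF by 20% in turn; the biggest drop in
# system MTBF identifies the riskiest component.
impact = {}
for name in mtbf_hrs:
    degraded = dict(mtbf_hrs, **{name: 0.8 * mtbf_hrs[name]})
    impact[name] = baseline - system_mtbf(degraded)

riskiest = max(impact, key=impact.get)
print(f"baseline system MTBF ~ {baseline:,.0f} hrs, riskiest: {riskiest}")
```

With these placeholder numbers the lowest-MTBF component (the humidity sensor) dominates the 20% degradation, which is the shape of answer you want Claude to produce in Beat 5a.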
Branch on their answer
After “How do you manage requirements today?” — branch immediately. The whole demo shifts shape based on which path you're on.
If they name DOORS (or a similar requirements tool):
"DOORS is a document tool pretending to be a system. It captures requirements well but doesn't know what's verifying them. With Flow, the spec and the verification evidence are the same data — two views."
- → Beat 1 — Report view
- → Beat 2 — flow-down
- → Beat 3 — design values
Skip Beat 5 unless they ask
If they describe a manually updated status column or spreadsheet:
"Right now you have a Status column someone manually updates. With Flow, it updates when the test passes. The spec regenerates every time you open the page. Nobody maintains it."
- → Beat 3 — CI/CD for hardware
- → Beat 4 — verification dashboard
Beat 5 if they light up
If adoption is the pain they name:
"The reason engineers don't update it is that updating costs more time than the information is worth. When Claude reads and writes requirements on their behalf, the adoption problem inverts. The tool becomes valuable because the effort disappears."
- → Beat 5 — AI demo first
- → Governance talking point
Frame Flow AI as the adoption engine
Demo Run Sheet — Rivian (20 min)
Sales Feed principles throughout: outcome framing before screen share, aha-moment management, silence as a tool, constant check-ins, flat-reaction recovery, name their next step before they do.
Task 1: new prospect sales call. You are pitching Flow to Rivian as if they have never bought it. The interviewer plays Rivian Head of Systems Engineering evaluating whether to purchase. Background context (do NOT say in the demo): Rivian is actually a real Flow customer with 1,500 engineers on the platform — so the persona will be knowledgeable and realistic, not a blank slate. That makes this harder, not easier. Do not reference Rivian as a customer story; use Joby or Astranis instead. See /prep for full attendee bios.
Opening: Discovery first (2 min, no screen share)
Manufacture discovery with 3 targeted questions. Their answers become your “you said...” ammunition for every beat.
- Their answer is your Beat 2 setup: whatever number they give, that's the gap Flow closes.
- "Mostly manual" or "takes a week" is your Beat 4 setup.
- "Mostly systems engineers" is your Beat 1 and Beat 5 setup — the adoption pain.
“Based on what you just said, [their specific answer], that's exactly what I want to show you. 20 minutes, focused on those problems. Not a feature tour.”
They agree. Now share your screen.
The 5 Beats
Beat 1: Report view
Log in to app.flowengineering.com → select the candidate-2 org → open candidate-2-demo-project. Left sidebar → Requirements. Top of the requirements panel, switch view mode to Tree. Expand the root so the full hierarchy is visible: TestEquity 123H at top, then Chamber Enclosure, Control & Sensors, Humidity Control (which has Cooling and Heating as children).
The Requirements view is the central artifact in Flow — it's where every spec, constraint, and design criterion for the product lives. Tree mode renders that as the system architecture: each node is a subsystem, each subsystem holds the requirements that scope what it must do. The view-mode switcher (Tree / Table / Report / Design Reviews) at the top is critical to understand. Same underlying data, four ways to look at it. Tree shows hierarchy. Table shows a flat list with columns. Report renders it as the numbered spec document a VP would sign. Design Reviews shows it organized around gate ceremonies.
It replaces the dual-tool setup most teams have today (Word/Confluence spec doc + DOORS or Jira for requirements tracking). One place. The numbered document and the live database are the same data. Each node rolls up a count of how many requirements it contains, so a systems engineer can see at a glance which subsystems are heavy and which are light.
Beat 1 lands the core thesis: the spec doc isn't dead, it just stopped being the source of truth. You start here because if the interviewer doesn't accept that requirements can be a living artifact (not a quarterly Word doc), nothing else in the demo lands.
Beat 2: Flow-down
Duplicate Tab 1. In the requirements panel, click into the Cooling Subsystem node (not the root). Switch view mode from Tree to Table. You're now seeing only the requirements scoped to Cooling — not the whole system, just one subsystem's slice. Columns: Value, Stage, Change Requests, Owner.
Table mode is what a domain engineer (the Cooling lead, the BMS lead, the thermal team) actually opens day-to-day. It's a flat list of just the requirements they own or inherit, with the values they care about visible without clicking. The key visual is the inheritance indicator on certain rows (REQ-35, REQ-28, REQ-34) — those constraints didn't originate at the Cooling level, they came down from the parent system.
This shows requirements flow-down without the Cooling lead having to assemble it manually. When the Head of Systems Engineering changes REQ-28 at the root level, the Cooling lead's view updates. Click any single requirement and a side panel opens showing value, verification status, owner, stage, and change-request history — one record for everything about that requirement.
Beat 2 closes the loop on Beat 1. If the spec is the live artifact (Beat 1), this is what propagation looks like in practice. The interviewer will be testing whether you can articulate trace-by-default — automatic — vs. trace-by-audit (find the breakage after the fact). For Rivian specifically, this is where a cell chemistry change at the system level reaches the pack thermal lead automatically instead of through email.
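The flow-down mechanic can be shown in a few lines if the interviewer pushes on how inheritance works. A conceptual sketch, not Flow's actual data model — node names follow the sandbox hierarchy, but the REQ-28 values and REQ-41 are hypothetical illustrations:

```python
# Conceptual sketch of requirements flow-down -- NOT Flow's real data model.
# A subsystem's working view = its own requirements plus everything
# inherited from ancestor nodes, resolved at read time (trace-by-default).
tree = {
    "TestEquity 123H": {"parent": None, "reqs": {"REQ-28": "±0.5 °C accuracy"}},
    "Humidity Control": {"parent": "TestEquity 123H", "reqs": {}},
    "Cooling": {"parent": "Humidity Control", "reqs": {"REQ-41": "condensate drain"}},
}

def effective_reqs(node):
    """Walk up the tree, collecting own + inherited requirements."""
    reqs = {}
    while node is not None:
        for rid, val in tree[node]["reqs"].items():
            reqs.setdefault(rid, (val, node))  # nearest definition wins
        node = tree[node]["parent"]
    return reqs

view = effective_reqs("Cooling")
# REQ-28 appears in Cooling's view tagged with its origin at the root:
print(view["REQ-28"])   # ('±0.5 °C accuracy', 'TestEquity 123H')

# Edit at the root; the Cooling lead's next read reflects it automatically.
tree["TestEquity 123H"]["reqs"]["REQ-28"] = "±0.3 °C accuracy"
print(effective_reqs("Cooling")["REQ-28"][0])   # ±0.3 °C accuracy
```

The point of the sketch: the child never copies the constraint, so there is nothing to go stale — the opposite of the email-forwarded spreadsheet.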
Beat 3: Design values
Left sidebar → Parameters → Analysis. You'll see a list of analysis models down the left side: Heating Model (linked to a Python script on GitHub), Cooling Model, Reliability Model, Main Chamber Volume (linked to an Onshape CAD file), and a few others. Click each model once to pre-load its detail pane (analysis models can take a beat to render the first time). Leave it parked on the Heating Model once everything has loaded.
This is Flow's connection layer to the engineering tools your team already uses. Each 'model' in this list isn't something you build inside Flow — it's a pointer to a real artifact that lives somewhere else: a Python script in GitHub, a part in Onshape, a sheet in Excel, a calc in MATLAB. Flow reads the inputs and outputs of each model and exposes them as Design Values — numbers that the requirements record knows about and references.
When the Python thermal model runs, the output (max_temp_achieved_c = 233.3) becomes a Design Value inside Flow. The requirements that reference that Design Value automatically know whether they pass or fail. When someone bumps a parameter in Onshape and re-syncs, Flow pulls the new mass automatically. Verification stops being a quarterly campaign and becomes a continuous side-effect of designing.
This is Beat 3 because it's where most demos either land the aha or lose the room. If they get this — that the analysis tools your team already runs become the verification engine, with no rip-and-replace — every other beat is downstream of this insight. For Rivian specifically, they already have SimOS chaining thermal/structural/electrical sims on commit hooks. The compute is solved. What Flow adds is requirements traceability on top of that pipeline.
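The design-value mechanic — a model output becoming a live pass/fail signal — is easy to show in miniature. A sketch under assumptions: REQ-30 and its threshold are hypothetical; only `max_temp_achieved_c = 233.3` comes from the sandbox's heating model:

```python
# Sketch: requirements reference design values and recompute pass/fail
# whenever a model run publishes a new output. REQ-30 and its 200.0
# threshold are hypothetical; 233.3 is the sandbox heating-model output.
design_values = {}

requirements = {
    # req_id: (design value it references, check on that value)
    "REQ-30": ("max_temp_achieved_c", lambda v: v >= 200.0),
}

def publish(name, value):
    """A model run (Python on GitHub, an Onshape sync, ...) lands its output here."""
    design_values[name] = value

def status(req_id):
    dv_name, check = requirements[req_id]
    if dv_name not in design_values:
        return "UNVERIFIED"
    return "PASS" if check(design_values[dv_name]) else "FAIL"

print(status("REQ-30"))            # UNVERIFIED -- no model run yet
publish("max_temp_achieved_c", 233.3)
print(status("REQ-30"))            # PASS -- verification as a side-effect
```

Nobody edited a Status column between those two prints — the model run did. That is the whole "continuous side-effect of designing" claim in twenty lines.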
Beat 4: Verification dashboard
Left sidebar → Verification → Testing. You'll see the Full Verification Test plan with four test cases listed. One of them shows 'Automated Run — TC11 PASS' in the Last Run column — that's the visual you'll point at. Above the test list, find the Iterations dropdown. By default it shows 'Current Iteration.' Click it and you'll see saved baselines, including 'CoDR Entry (10 Dec 2025).' Make sure you can see both the test list and the dropdown without scrolling.
The Verification view is Flow's test ledger. Every test case (whether it's a Python script, a bench procedure, a tech with a multimeter, a CI job) is a row. The Last Run column shows what happened most recently. The Iterations dropdown is the most important thing on this page — it's how Flow does named baselines. Every design review (CoDR, PDR, CDR, Design Freeze) creates a snapshot of the entire requirements + verification state at that moment, and you can switch between any baseline and current with a click.
Replaces the manual evidence-assembly that happens before every gate. Today the evidence package is a person pulling test reports, model outputs, and sign-offs from a dozen tools into a single presentation. With Flow, the evidence is the system state. 'Is this what I approved at CoDR?' becomes a one-click diff — switch the iteration to CoDR Entry, see what changed, switch back to current.
Beat 4 closes the verification loop and sets up Beat 5. If Beat 3 showed how analysis outputs flow into requirements, Beat 4 shows how test results do the same — and how named baselines turn audit prep from a week-long sprint into a query. For Rivian, the named-baseline story matters because Design Freeze Reviews require sign-off across 12+ subsystem leads, each of whom currently aggregates their own evidence.
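"Is this what I approved at CoDR?" is structurally just a diff between two snapshots. A minimal sketch — the baseline name comes from the sandbox's Iterations dropdown, but the requirement values below are hypothetical (except the 1950 W → 2150 W move, which mirrors Beat 5b):

```python
# Sketch: a named baseline is a frozen snapshot of requirement state;
# the gate question is a dict diff. Values are hypothetical except the
# REQ-35 change (1950 W -> 2150 W), which mirrors Beat 5b.
codr_entry = {"REQ-33": "5 °C/min", "REQ-35": "1950 W"}
current    = {"REQ-33": "5 °C/min", "REQ-35": "2150 W", "REQ-41": "new"}

def diff(baseline, cur):
    """What changed, appeared, and disappeared since the baseline?"""
    changed = {k: (baseline[k], cur[k])
               for k in baseline if k in cur and baseline[k] != cur[k]}
    added   = {k: cur[k] for k in cur if k not in baseline}
    removed = {k: baseline[k] for k in baseline if k not in cur}
    return changed, added, removed

changed, added, removed = diff(codr_entry, current)
print(changed)  # {'REQ-35': ('1950 W', '2150 W')}
print(added)    # {'REQ-41': 'new'}
```

That one-screen diff is what replaces the week of evidence assembly before a gate.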
Beat 5: The AI moment (3 min, the payoff)
Leave the Flow UI visible on the left. Bring Claude into view on the right.
"Pari published the AI Systems Engineering Handbook two weeks ago. His thesis: AI reduces design time 10x, creates 10x more artifacts, creates 100x the alignment problem. Solution isn't smarter AI — it's structured requirements data AI agents can reason over. What you just saw in the last 15 minutes is what makes the next 3 minutes possible. This is Agentic Systems Engineering in practice."
Beat 5a — paste into Claude Code: "REQ-39 requires system R90/C90 ≥ 20,000 hrs. What's our current margin, and which component MTBF is the biggest risk if it degrades 20%?"
Beat 5b — paste into Claude Code: "REQ-35 caps total power at 2500 W and we're at 1950 W. I want 200 W more heating headroom. Propose a new value for REQ-35, explain the tradeoff against REQ-33 and REQ-28, and if I approve, update the value and move the requirement to In Review. Keep the proposal to 5 bullets and then the two actions."
Beat 5c — paste into Claude Code: "Now create a verification test run for the Ramp Rate Performance test case, link it to REQ-33 and REQ-35, and set the result to PENDING so the ME team can pick it up."
Close: name the next steps for them (90 sec)
- "You watched me do a tradeoff analysis and propose a requirement change in 90 seconds. Run ten of those a day, you run more V-cycles before R2 freeze."
- "When REQ-35 changed, Claude flagged that REQ-28 is already FAILED. That's a pack-to-vehicle integration surprise caught in V1, not V3."
- "Named baselines, change-request trail, agent-authored proposals. One dashboard, one question answerable in one click."
“Here's what I've seen happen at this point. Your team needs to go discuss internally. You've probably got questions to take back to colleagues who weren't on this call. So let me stop here and ask directly: what did you see today that was most relevant to where you are? And what would need to be true for this to be worth a deeper conversation?”
Stop talking. Wait for them.
Meta connection: why this prep IS the demonstration
Pari's thesis says you need structured requirements data for AI agents to work. My cowork monorepo is the same idea applied to knowledge work — a structured, context-rich research system where Flow's handbook is ingested, the sandbox is reverse-engineered into a walkthrough, decisions are version-controlled, and AI agents reasoned over that structure to write the MCP server, the script, and the slides. I didn't build a custom model. I built the data structure. That's exactly what Flow does for hardware teams. When this comes up in the demo, it's not an analogy — it's proof I think in the same paradigm Pari is staking the company on.
If the reaction is flat at any point
- “I want to make sure I'm showing you the right things. What I just showed, does that map to a problem you actually have, or am I off base?”
- “You seem like you're thinking about something. What is it?”
- “Normally at this point people have one of two reactions. Either this is exactly what they've been looking for, or it's not the right fit at this stage. Which is it for you?”
Objections cheat sheet
| Objection | Pivot |
|---|---|
| We already use DOORS / Jama | Flow's import tools round-trip DOORS ReqIF exports. The question isn't rip-and-replace — it's 'do my next ten V-cycles run faster here.' |
| Our scripts are proprietary | They stay in your GitHub. Flow reads outputs via your runners. Nothing leaves your network. |
| How do I get mechanical engineers to actually use it? | Their CAD tool stays primary — Flow reads parameters on commit. They touch Flow at review, not mid-design. (Full Task 2 answer if they push.) |
| Pricing | Token-based pricing shift is mid-flight. Trial is flat monthly. Long-term quote once we see real load. |
| ISO 26262 / DO-178C / AS9100 compliance | Handbook Vol 2 Ch. 8 covers iterative certification. Flow stores the trace chain. Iterations = signed baselines per gate. The evidence is a side-effect of development, not a quarterly assembly. |
| Did you write this MCP with an LLM? | Yes — paired with Claude Code on the scaffold, then hardened against the live sandbox. About a day. Any FDE I'd hire should work this way. |
| What if AI changes something without approval? | The Flow API returns 200 whether a stage transition went through or opened a CR. I caught this reading back every write — the API silently gates everything through governance even if the client assumes otherwise. That's the right failure mode. |
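The governance objection answer describes a real client-side pattern worth being able to sketch on the spot: never trust the write's status code; read the record back and report what actually happened. A sketch against a stubbed client — the method names, fields, and CR behavior here are hypothetical illustrations, not Flow's actual API:

```python
# Read-back verification: a 200 on the write does not mean the write
# took effect -- governance may have routed it into a change request.
# This client is a stub; method and field names are hypothetical.
class StubFlowClient:
    """Simulates an API that accepts a stage transition but gates it
    behind a change request instead of applying it directly."""
    def __init__(self):
        self.record = {"stage": "Draft", "open_crs": []}

    def set_stage(self, req_id, stage):
        self.record["open_crs"].append(f"CR for {req_id} -> {stage}")
        return 200  # returns 200 either way

    def get(self, req_id):
        return dict(self.record)

def set_stage_verified(client, req_id, stage):
    """Write, then read back. Report what actually happened."""
    assert client.set_stage(req_id, stage) == 200
    after = client.get(req_id)
    if after["stage"] == stage:
        return "applied"
    if after["open_crs"]:
        return "gated: " + after["open_crs"][-1]
    return "rejected silently"

client = StubFlowClient()
print(set_stage_verified(client, "REQ-35", "In Review"))
# gated: CR for REQ-35 -> In Review
```

This is the pattern that caught the silent-gating behavior: the agent learns whether its action was applied or intercepted, and can surface that to the engineer instead of assuming success.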
End with the artifact
flow.thedefrag.ai
“What I built for this conversation is live at flow.thedefrag.ai. The MCP source, the Claude Code plugin, the Rivian and Anduril prospect pages — all there. That's what I'd hand a prospect at the end of an FDE engagement: a working integration they can pull and run against their own project, not a slide deck.”
If they ask “what would you do first as an FDE?” — “Harden the MCP into a reference server your customers fork in a weekend. Expose change-requests and approvals as first-class tools so the agent can walk the full governance cycle. Per-user OAuth so the audit trail attributes to the engineer, not a service account.”