Ask why before how. Cite every claim. Ship the artifact.
Every SimSo engagement runs the same seven-stage workflow. Each stage produces a tangible artifact you can review. Most teams stop at stage six. We don't.
-
Stage 1: Problem framing
The instinct on a revenue problem is to jump to "raise prices" or "drive more leads." We don't. We ask what specifically is breaking. And most of the time the answer is a single moment in the workflow: the recommendation moment, the renewal moment, the handoff. Narrow framing produces narrow design briefs. Output: a one-paragraph problem statement everyone in the room can repeat back.
-
Stage 2: Evidence gathering
Every claim that lands in the business case carries a named source. The rule: if a number can't be cited, it doesn't go in the deck. This matters less for visual polish and more for the moment a skeptical board member asks "where does that figure come from?" You want the answer to be one source away, not five. Output: a sourced evidence appendix.
-
Stage 3: Option generation
Three or more candidate plays, each genuinely defensible. The point of listing real alternatives is that a board member who walks in asking "why not just do what [competitor] does?" gets a real answer in the document, not a defensive non-answer. Output: a comparison matrix of alternatives.
-
Stage 4: Selection & justification
We run the alternatives through explicit filters. Does it defend against the competition? Does it leverage assets you already own? Does it compound over time? Is it executable with the team you have? The chosen option is the one that passes all four. The filters are surfaced explicitly so a reader can rerun our logic and either agree or push back. Output: a "why we chose X" memo.
-
Stage 5: Economic modeling
Investment is stated in plain numbers. Not "around $50K," but $76K one-time plus $46K annual. Revenue lift is stated with the math shown. Not "significant uplift," but "4,200 visits × 40% lift on flagged cases × $95 average yield ≈ $160K." Transparency on the math is the strongest signal you can send that the numbers are real. Output: a payback model with sensitivity ranges.
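The payback logic above is simple enough to audit by hand. A minimal sketch, using only the figures quoted in this section; the ±20% lift range is illustrative, not a number from any engagement:

```python
def payback_months(one_time, annual_cost, annual_lift):
    """Months to recover the one-time investment from net annual gain."""
    net = annual_lift - annual_cost
    if net <= 0:
        return float("inf")  # never pays back at this lift level
    return 12 * one_time / net

VISITS, YIELD_PER_VISIT = 4_200, 95   # flagged cases x average yield
ONE_TIME, ANNUAL = 76_000, 46_000     # stated investment

for lift in (0.32, 0.40, 0.48):       # base case, bracketed by +/- 20%
    annual_lift = VISITS * lift * YIELD_PER_VISIT
    months = payback_months(ONE_TIME, ANNUAL, annual_lift)
    print(f"lift {lift:.0%}: ~${annual_lift / 1000:.0f}K/yr, "
          f"payback {months:.1f} months")
```

At the base-case 40% lift this reproduces the ≈$160K figure and pays back the one-time investment in about eight months; the bracket shows how sensitive that is to the lift assumption, which is exactly what a sensitivity range in the deck should expose.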
-
Stage 6: Rollout design
Phased plan with named owners per milestone and at least two revocable checkpoints. The board approves a foundation, not an unbroken twelve-month commitment. And the off-ramps are designed in from day one. This turns a hard yes into an easy yes. Output: a quarter-by-quarter plan with KPI gates.
-
Stage 7: Demonstration
The unusual part. Most engagements stop at stage six. You hand the board a plan and ask for a check. We go further and actually build the things the plan depends on. A working engine, a working dashboard, a working agent. Not a mock, not a screenshot, not a future promise. Something a reviewer can click through and verify. This is what separates "we should build an AI engine" from "here is the engine; watch it work." Output: deployable artifacts that prove each claim.
Where business cases fall apart in review
In our experience, business cases fall apart in board reviews for a small set of repeatable reasons. The seven-stage workflow exists to inoculate against them.
The evidence has holes
One or two claims lack citations and the whole document gets questioned. We handle this by citing everything in stage 2. Including the things that feel "obvious."
The alternatives are strawmen
"We could do nothing, or we could do our thing." Boards see through this. Stage 3 forces real alternatives onto the page and stage 4 explains why each one loses, which strengthens the chosen option rather than weakening it.
The math is suspicious
Round numbers, optimistic denominators, no sensitivity. Stage 5 shows the derivation in plain English so a reviewer can audit it themselves.
The rollout is all-or-nothing
"Give us the budget for 12 months and we'll come back." Boards hate this. Stage 6 designs revocable checkpoints in from day one.
There's no proof it can work
Everything is future tense. Stage 7 ships working artifacts that turn "I believe you" into "I just saw it."
The hand-off fails
The strategy document gets approved and immediately goes stale because nobody knows how to operate it. SimSo engagements end with the team running the system, not with us holding the only working copy.
Ask why before how, cite every claim, generate real alternatives, select with explicit filters, show the math, structure the rollout around revocable commitments, and prove each claim with a working artifact.
That's it. Not veterinary-specific. Not AI-specific. Works for any board-level commitment.
See the method applied.
The PawStyle case study walks through every stage of this workflow on a real business problem. With all four production artifacts available to click through.