Case study artifact · PawStyle by MV Clinic

What happens at 30 cases.

The engine's compounding value doesn't show up on the first visit. It shows up at the cohort level. This page takes 30 synthetic patient cases, runs each through the engine, and then runs real in-browser machine learning (k-means clustering and a decision-tree risk classifier) against the outputs.

How this works

The cohort below is 30 synthetic patient cases spanning six realistic archetypes (overdue seniors, geriatric patients with symptoms, mid-life breed-predisposed, young acute presentations, puppy/routine wellness, and recurrent-issue patients). Each was processed through the engine's recommendation logic, then passed through a realistic acceptance simulation.
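The acceptance simulation isn't specified in detail on this page. A minimal sketch of one plausible shape, with illustrative factor names and rates (none taken from the engine's actual model):

```javascript
// Hypothetical per-item acceptance simulation. Each recommended item
// converts with a probability nudged by simple client factors; the
// field names and rate adjustments here are illustrative only.
function simulateAcceptance(patient, recommendations, rand = Math.random) {
  const accepted = [];
  for (const rec of recommendations) {
    let p = 0.5;                                 // base acceptance probability
    if (patient.overdueMonths >= 12) p -= 0.15;  // lapsed clients convert less
    if (patient.symptomatic) p += 0.2;           // visible signs motivate owners
    if (rec.price > 300) p -= 0.1;               // sticker shock on big-ticket items
    p = Math.max(0.05, Math.min(0.95, p));       // clamp to a sane range
    if (rand() < p) accepted.push(rec);
  }
  return accepted;
}
```

Passing a stubbed `rand` makes runs reproducible, which is useful when the same cohort needs to produce stable charts.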

Everything from the charts down is computed client-side in your browser from the cohort JSON. The k-means clusters and the decision tree are trained every page load. No server, no hidden backend. Inspect cohort.js to read the ML code.


1. Cohort composition

Cohort of 30 patients. The engine processes a realistic mix. This is what 30 routine visits at the clinic look like.

Patient archetype distribution

Breakdown

Overall risk level distribution

The engine assigns an overall risk level to each patient based on signalment, visit gap, and presenting signs. This is what the engine flagged across the cohort.
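As a rough illustration of how signalment, visit gap, and presenting signs could map to three risk levels (this is not the engine's actual logic, just a stand-in rule set with illustrative thresholds):

```javascript
// Illustrative rule-based risk assignment. Three output levels match
// the three classes the decision tree later learns to predict.
function assignRisk(p) {
  if (p.symptomatic && p.ageYears >= 8) return "high";  // geriatric + presenting signs
  if (p.symptomatic || p.gapMonths >= 12 || p.ageYears >= 10) return "moderate";
  return "low";                                         // young, current, asymptomatic
}
```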

2. K-means clustering (k=4)

Each patient is represented as a 9-dimensional feature vector (age, weight, visit gap, number of recommendations, projected revenue, symptomatic flag, senior flag, overdue flag, species). K-means finds four natural groupings. The scatter plot below projects those onto the two most interpretable axes, age and visit gap, but the clustering uses all nine dimensions.
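A compact sketch of the k-means step as described, assuming feature vectors are already normalized to comparable scales (cohort.js has the page's real implementation; this version seeds centroids with the first k points for determinism):

```javascript
// Lloyd's algorithm: assign each point to its nearest centroid, then
// move each centroid to the mean of its assigned points; repeat.
function kmeans(points, k, iters = 50) {
  const dist2 = (a, b) => a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0);
  let centroids = points.slice(0, k).map(p => p.slice()); // naive seeding
  let labels = new Array(points.length).fill(0);
  for (let it = 0; it < iters; it++) {
    // assignment step
    labels = points.map(p => {
      let best = 0, bd = Infinity;
      centroids.forEach((c, j) => {
        const d = dist2(p, c);
        if (d < bd) { bd = d; best = j; }
      });
      return best;
    });
    // update step: centroid = mean of its members (empty clusters keep position)
    centroids = centroids.map((c, j) => {
      const members = points.filter((_, i) => labels[i] === j);
      if (!members.length) return c;
      return c.map((_, d) => members.reduce((s, m) => s + m[d], 0) / members.length);
    });
  }
  return { labels, centroids };
}
```

A production version would normalize each feature, use k-means++ seeding, and stop early when assignments stabilize.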

Auto-derived cluster labels

Labels above come from each cluster's dominant features. The engine is picking up on patterns no single visit would reveal.

3. Risk classifier (decision tree)

A shallow decision tree (max depth 4, information-gain splits) trained on the same 9-dimensional feature vectors. It learns to predict the engine's risk assignment from signalment alone, which means that once trained, it can score new patients instantly with no API call needed. The tree structure below is the real trained tree.
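A sketch of the same kind of learner: information-gain splits on numeric features, capped at depth 4. Function and variable names are illustrative; cohort.js holds the page's actual code:

```javascript
// Shannon entropy of a label array, in bits.
function entropy(labels) {
  const counts = {};
  labels.forEach(l => { counts[l] = (counts[l] || 0) + 1; });
  return Object.values(counts).reduce((h, c) => {
    const p = c / labels.length;
    return h - p * Math.log2(p);
  }, 0);
}

// Recursively grow a tree: try every midpoint threshold on every
// feature, keep the split with the largest information gain.
function buildTree(X, y, depth = 0, maxDepth = 4) {
  const majority = y.slice().sort((a, b) =>
    y.filter(v => v === a).length - y.filter(v => v === b).length).pop();
  if (depth === maxDepth || new Set(y).size === 1) return { leaf: majority };
  let best = null;
  for (let f = 0; f < X[0].length; f++) {
    const values = [...new Set(X.map(r => r[f]))].sort((a, b) => a - b);
    for (let i = 1; i < values.length; i++) {
      const t = (values[i - 1] + values[i]) / 2;
      const li = [], ri = [];
      X.forEach((r, j) => (r[f] < t ? li : ri).push(j));
      if (!li.length || !ri.length) continue;
      const gain = entropy(y)
        - (li.length / y.length) * entropy(li.map(j => y[j]))
        - (ri.length / y.length) * entropy(ri.map(j => y[j]));
      if (!best || gain > best.gain) best = { gain, f, t, li, ri };
    }
  }
  if (!best || best.gain <= 0) return { leaf: majority };
  return {
    feature: best.f, threshold: best.t,
    left: buildTree(best.li.map(j => X[j]), best.li.map(j => y[j]), depth + 1, maxDepth),
    right: buildTree(best.ri.map(j => X[j]), best.ri.map(j => y[j]), depth + 1, maxDepth),
  };
}

// Walk the tree to score a new feature vector, no API call needed.
function predict(node, x) {
  while (!("leaf" in node)) node = x[node.feature] < node.threshold ? node.left : node.right;
  return node.leaf;
}
```

Because the tree is just nested thresholds, it can be rendered directly as the explainable structure the section shows.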

Classifier stats (computed at page load): training accuracy, tree depth ≤ 4, 3 classes.

Production note: A real deployment would train this on hundreds of the clinic's own accepted/declined cases with cross-validation. At 30 synthetic samples, the accuracy number is indicative, not a rigorous holdout metric. The value here is demonstrating that the engine's recommendations can be distilled into a fast, explainable model the clinic owns and inspects.

4. Revenue translation

The dollar math on this cohort, and what it scales to annually. Baseline capture of 16% is the industry figure cited in the case study (dvm360); the engine-guided capture rate is what the acceptance simulation produced across the 30 cases.

Metric cards (values computed at page load): total recommended, engine-captured, capture rate, cohort uplift (vs. 16%), per-visit average, annualized (flagged share).

How the annualized number is computed: per-visit uplift on this cohort × (4,200 annual wellness visits × 40% flagged share) = annual projection. The 40% flagged share is the case-study assumption: only visits the engine flags get the full lift, not all 4,200 visits.
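That formula as straight arithmetic. The 16% baseline, 4,200 annual visits, and 40% flagged share come from the case study; the cohort figures in the example are placeholders, not the page's live values:

```javascript
// Annualized uplift: per-visit uplift over the 16% baseline, scaled to
// the flagged share of annual wellness visits.
function annualize(cohort) {
  const baselineCaptured = cohort.recommended * 0.16;      // industry baseline capture
  const perVisitUplift = (cohort.captured - baselineCaptured) / cohort.visits;
  const flaggedVisitsPerYear = 4200 * 0.40;                // only flagged visits get the lift
  return perVisitUplift * flaggedVisitsPerYear;
}

// Placeholder example: $9,000 recommended, $4,500 captured, 30 visits.
// Baseline: 9000 × 0.16 = $1,440; uplift ≈ $102/visit; annualized ≈ $171,360.
```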

5. Key insights

The operational takeaways. Where the capture gap is concentrated, which clients decline for which reasons, and which breed/age combinations are under-recommended.

Capture rate by visit-gap cohort

Clients overdue 12+ months convert differently than clients seen in the last 6 months. This is where the clinic's reminder-timing strategy lives.

Decline reasons: where the revenue leaks

Of the recommended diagnostics that didn't convert, these are the reasons. The distribution tells the marketing team which framing to lead with.

By revenue at risk

Worst-capture breed × age cells

Where the clinic's capture rate is lowest. Ranked by volume-weighted impact. These are the cells that would benefit most from engine-surfaced prompts.

Breed / age band | Recommended | Captured | Rate

6. All 30 cases

Every patient in the cohort, their engine-assigned cluster, and their individual economics.

# | Patient | Age | Gap | Risk | Cluster | Recommended | Captured
Up next

From recommendation to recovery.

Even the best engine doesn't help if declined items disappear into the void. The follow-up agent closes the loop, parsing the doctor's SOAP note and drafting the right outbound communication for each declined item.