LINCR turns continuous intraop monitoring + intervention data into the three things every anesthesia group needs and almost none can produce at scale: automated quality-improvement reports for the medical director, safety alerts for events that need clinical review, and defensibility documentation the malpractice carrier can stand behind. Every output starts from the same foundation — a clinical differential per event, generated the way a CAA actually reasons at the bedside.
This demo runs against the MOVER dataset (UC Irvine, 55,483 anesthesia cases). We sample 300 cases, detect every IOH event (sustained MAP <65 mmHg for ≥5 min), generate a differential per event, and aggregate up into the QI · safety · defensibility outputs you'd hand to a customer.
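The detection rule above can be sketched as a single scan over a minute-sampled MAP trace. This is a minimal illustration, not LINCR's implementation; the function name, the one-sample-per-minute assumption, and the half-open (start, end) minute windows are ours.

```python
import math

def detect_ioh_events(t_min, map_mmhg, threshold=65.0, min_duration=5):
    """Detect sustained IOH: MAP below `threshold` for >= `min_duration`
    consecutive minutes. `t_min` is an increasing list of minute stamps,
    `map_mmhg` the matching MAP samples. Returns half-open (start, end)
    minute windows."""
    events, start = [], None
    for t, m in zip(t_min, map_mmhg):
        if m is not None and not math.isnan(m) and m < threshold:
            start = t if start is None else start   # open or extend a run
        else:
            if start is not None and t - start >= min_duration:
                events.append((start, t))           # run just ended, long enough
            start = None
    if start is not None and t_min[-1] - start + 1 >= min_duration:
        events.append((start, t_min[-1] + 1))       # run reaches end of record
    return events
```

A 6-minute dip below 65 mmHg yields one event; a 4-minute dip yields none, because the sustain requirement filters out transient excursions.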
Each trace is the actual recorded value from the MOVER dataset. Pink shaded windows are detected IOH events (sustained MAP <65 mmHg for ≥5 minutes). Drug administrations are stem markers along the bottom.
This is the foundational unit of LINCR's output. For each IOH event, the platform triangulates across all available vitals + intervention signals and produces a structured differential: the most-likely cause as primary, a competing hypothesis as secondary when evidence supports it, and explicit rule-outs for the rest. Each line carries the evidence FOR and AGAINST that diagnosis. This is what flows downstream into the QI report, the safety alert, and the defensibility record.
Generated by a Python orchestrator (differential_engine.py) that wraps the signal-extraction layer. Output: primary + optional secondary + ruled-out + the lab/EKG signals that would resolve any remaining ambiguity. M2+ adds EHR lab integration; M3+ adds EKG waveform analysis and CF2 counterfactual scoring. The differential output is structured (JSON), so downstream rendering is trivially adaptable to any reporting surface (PDF for the QI committee, dashboard for the medical director, expert-witness exhibit for the malpractice attorney).
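The structured differential record might look like the sketch below. Field names, the dataclass layout, and the case id are illustrative assumptions, not LINCR's actual schema; the point is that primary, secondary, rule-outs, per-line evidence, and the resolving lab/EKG signals all live in one serializable object.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class DifferentialLine:
    diagnosis: str
    evidence_for: list        # signals supporting this diagnosis
    evidence_against: list    # signals cutting against it

@dataclass
class EventDifferential:
    case_id: str
    event_window: tuple       # (start_min, end_min) of the IOH event
    primary: DifferentialLine
    secondary: Optional[DifferentialLine] = None
    ruled_out: list = field(default_factory=list)
    resolving_signals: list = field(default_factory=list)  # labs/EKG that would settle ambiguity (M2-M3+)

# hypothetical example record
record = EventDifferential(
    case_id="CASE-0042",
    event_window=(112, 121),
    primary=DifferentialLine("vasodilation",
                             ["propofol bolus 2 min prior", "HR preserved"],
                             ["no SVV elevation"]),
    ruled_out=["bradycardia", "hypovolemia", "myocardial depression"],
    resolving_signals=["lactate", "Hgb trend"],
)
print(json.dumps(asdict(record), indent=2))
```

Because the record is plain JSON once serialized, each downstream surface is just a different renderer over the same payload.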
Full decision logic, signal definitions, and priority order are documented in section 02B below.
For each detected IOH event (sustained MAP <65 mmHg for ≥5 minutes), the heuristic computes eight signals from the surrounding minutes of vitals + intervention record, then applies a priority-ordered decision tree. Bradycardia is checked first as a deterministic override. Hypovolemia and myocardial depression both score multiple weak signals — the distinguishing feature is whether HR rose (compensation) or fell (failure) in response to the MAP drop. Vasodilation is the default when HR is preserved. Unattributable is surfaced honestly when key data is missing, rather than forcing an endotype on insufficient evidence; in the audit it doubles as a data-quality flag for the customer.
The audit triangulates across 13 intraoperative vitals + intervention channels. Where a definitive lab or waveform signal would normally apply (lactate, base deficit, Hgb trend, troponin, BNP, EKG ST/rhythm, echo), we flag it explicitly — those require EHR + waveform integration shipping in M2–M3+. The rules below operate on what continuous monitoring alone provides.
Bradycardia first because the HR threshold is unambiguous and missing it has high consequence (different drug class entirely). Hypovolemia next because volume-loss signals (SVV, EBL, pulse-pressure narrowing, CVP drop, urine output drop) are highly specific when present — a positive SVV or active EBL is hard to mistake for anything else. Myocardial depression after hypovolemia because their signatures overlap (both can show ETCO2 drop and pulse-pressure narrowing); the distinguishing feature is whether HR rose (compensation) or fell (failure), and whether CVP rose with falling MAP (cardiogenic). Vasodilation last among the affirmative endotypes because it requires drug evidence to fire — we don't default to it just because HR is preserved. Unattributable when no rule fires with evidence, so the system never claims an endotype the underlying data cannot support.
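The priority order above can be sketched as a single function. Signal names, the two-signal minimum for hypovolemia, and the missing-data check are our assumptions for illustration; the documented rule set in section 02B is authoritative.

```python
def classify_endotype(s):
    """Priority-ordered endotype decision for one IOH event.
    `s` maps signal names to True/False/None; None means the channel
    was missing from the record."""
    if s.get("hr_below_brady_threshold"):           # deterministic override, checked first
        return "bradycardia"
    volume_signals = [s.get("svv_elevated"), s.get("active_ebl"),
                      s.get("pulse_pressure_narrowed"), s.get("cvp_dropped"),
                      s.get("urine_output_dropped")]
    if sum(bool(v) for v in volume_signals) >= 2 and s.get("hr_rose"):
        return "hypovolemia"                        # compensation: HR up as MAP falls
    if s.get("hr_fell") and s.get("cvp_rose"):
        return "myocardial depression"              # failure: HR down, CVP up with falling MAP
    if s.get("hr_preserved") and s.get("vasodilator_given"):
        return "vasodilation"                       # requires drug evidence to fire
    if any(v is None for v in volume_signals):
        return "unattributable (missing data)"      # doubles as a data-quality flag
    return "unattributable"
```

Note the ordering does real work: a case with both volume-loss signals and a vasodilator on board resolves to hypovolemia, because the more specific rule fires first.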
The medical director and the QI committee don't need 419 individual differentials. They need to know what's actually driving IOH burden across the group, where the outliers are, and which one protocol change has the most leverage. This is the next layer up.
From the differentials of N events across N cases.
Events that fall outside expected severity or duration patterns. These are the cases the QI committee should review first — the ones most likely to involve a near-miss or a documentation gap that matters.
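One way to operationalize "outside expected severity or duration patterns" is a z-score cut on a per-event hypotensive burden metric. The burden definition (depth below 65 mmHg × duration) and the 2-sigma threshold are illustrative choices on our part, not LINCR's published method.

```python
import statistics

def flag_outlier_events(events, z_cut=2.0):
    """Flag events whose hypotensive burden sits more than `z_cut`
    standard deviations above the group mean. `events` is a list of
    dicts with `duration_min` and `mean_map` keys (assumed shape)."""
    burdens = [(65 - e["mean_map"]) * e["duration_min"] for e in events]
    mu = statistics.mean(burdens)
    sigma = statistics.pstdev(burdens) or 1.0   # avoid divide-by-zero on uniform data
    return [e for e, b in zip(events, burdens) if (b - mu) / sigma > z_cut]
```

Against a pool of typical events, a single deep, prolonged event surfaces at the top of the review queue while the routine ones stay below the cut.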
Highest-leverage change identified by the audit. Based on which endotype dominates the burden + what intervention pattern in the data correlates with the events.
The differentials per event and the site-level patterns above are the raw material. The customer-facing deliverables are below — what actually shows up in the medical director's inbox, the chief of anesthesia's safety review queue, and the malpractice carrier's expert-witness file.
All three outputs derive from the same structured differential JSON produced for each event. Adding a new output surface is a rendering layer change, not a model retrain — e.g., a CMS PSI reporting pack, a research dataset export for an academic partner, or a real-time SMS to the on-call attending all reuse the same primitive. The data layer (LINCR's differential record) is the durable asset; the surfaces are commodity.
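The rendering-layer claim can be made concrete with two toy surfaces over the same record. Both helpers, the field names, and the severity rule are hypothetical; the takeaway is that neither touches the model — each is a pure function of the differential JSON.

```python
def render_qi_line(d):
    """One-line QI report entry from a differential record (illustrative)."""
    return f"{d['case_id']}: {d['primary']} ({d['duration_min']} min IOH)"

def render_alert(d):
    """Safety-alert payload for the review queue, from the same record."""
    return {"severity": "review" if d["primary"] != "vasodilation" else "routine",
            "case": d["case_id"], "cause": d["primary"]}

# hypothetical record shape
d = {"case_id": "CASE-0042", "primary": "hypovolemia", "duration_min": 12}
print(render_qi_line(d))   # text surface for the QI committee
print(render_alert(d))     # structured surface for the dashboard/alert queue
```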