// CASE STUDIES

The Receipts.

Four systems built inside the Tesseract ecosystem. What they solve, how they were built, and what they proved.

Case Study 01
The InDecision Engine
// The Challenge

7 years of market analysis, 300+ investor calls, and one recurring problem — I could see setups clearly but couldn't systematically articulate what made them high-conviction versus noise. Everything lived in my head.

// What Was Built

A 6-factor weighted scoring model: Daily Pattern (30%), Volume (25%), Timeframe Alignment (20%), Technical Confluence (15%), Market Timing (10%), and Risk Context as a qualitative override. Each factor is scored 0-100; the spread between the bull and bear composite scores determines the conviction tier.
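The weighting and spread logic above can be sketched in a few lines. This is a minimal illustration, not InDecision's implementation: only the weights, the 0-100 scale, and the HC ≥15 spread threshold come from the text; the field names and tier labels are assumptions.

```python
# Hypothetical field names; weights and the HC >= 15 spread gate are from the text.
WEIGHTS = {
    "daily_pattern": 0.30,
    "volume": 0.25,
    "timeframe_alignment": 0.20,
    "technical_confluence": 0.15,
    "market_timing": 0.10,
}

def weighted_score(factors: dict) -> float:
    """Each factor is scored 0-100; returns the weighted composite."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def conviction(bull: dict, bear: dict):
    """Spread between bull and bear composites determines the tier."""
    spread = weighted_score(bull) - weighted_score(bear)
    tier = "high-conviction" if spread >= 15 else "noise"  # assumed tier labels
    return spread, tier
```

Risk Context sits outside the weighted sum as a qualitative override, which is why it carries no weight here.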

// The Numbers
82.5% · Directional accuracy
6 · Weighted factors
v5 · Current version
Live · Polymarket deployment
// The Lesson

The value of a framework is not the output — it's the structured diagnostic. When a trade fails, InDecision tells you exactly which factor was wrong. That's the compound learning loop that pure intuition can never produce.
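One way to picture that diagnostic loop in code. This attribution rule (ranking factors by their weighted contribution to the losing side's score) is a hypothetical sketch, not InDecision's actual post-trade logic.

```python
# Assumed attribution rule: the factor that contributed most weight to the
# wrong side's composite is the first suspect when a trade fails.
WEIGHTS = {
    "daily_pattern": 0.30,
    "volume": 0.25,
    "timeframe_alignment": 0.20,
    "technical_confluence": 0.15,
    "market_timing": 0.10,
}

def diagnose(wrong_side_scores: dict) -> list:
    """Rank factors by weighted contribution to the losing call, largest first."""
    contributions = {f: WEIGHTS[f] * s for f, s in wrong_side_scores.items()}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
```

Pure intuition has no equivalent of this ranked list; that's the compound learning loop the lesson refers to.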

Case Study 02
Soul — The Digital Twin
// The Challenge

Every pipeline I built was orphaned after it ran. Claude Code sessions start and end. Cron jobs fire into the void. No persistent context, no ongoing intelligence, no way to orchestrate between systems. I needed a nervous system, not more tools.

// What Was Built

Soul — a persistent 24/7 communication and session agent running on a Mac Mini. Not a chatbot. The always-on interface between Knox and the system. Soul manages Discord and Telegram sessions, routes Knox's directives, and maintains persistent session context across all channels. The September attempt broke because I was treating agent sessions like API calls. The fix was persistence — a layer that never resets, never forgets the last session, never loses the thread.
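The persistence fix described above can be sketched as a session store that writes through to disk, so a process restart resumes exactly where it left off. This is an illustrative toy, assuming a local JSON file; the file name and structure are hypothetical, not Soul's internals.

```python
import json
from pathlib import Path

class SessionStore:
    """Per-channel context that survives process death: never resets,
    never forgets the last session."""

    def __init__(self, path: str = "soul_sessions.json"):  # hypothetical path
        self.path = Path(path)
        self.sessions = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def update(self, channel: str, context: dict) -> None:
        self.sessions[channel] = context
        self.path.write_text(json.dumps(self.sessions))  # write-through to disk

    def resume(self, channel: str) -> dict:
        return self.sessions.get(channel, {})
```

The contrast with the failed September attempt: an API-call mindset keeps state only in memory, so every session starts from zero.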

// The Numbers
24/7 · Always running, no restarts
16 · Incidents caught by Sentinel
0 · Unplanned outages
3 · Channels managed simultaneously
// The Lesson

Persistence is not a feature. It's the foundation. A session that resets is a session that forgets. The system that never sleeps isn't impressive because it's always on — it's impressive because it always knows where it left off.

Case Study 03
The Polymarket Bot
// The Challenge

I had a trading framework with 82.5% accuracy, but I was still executing manually — analyzing setups, placing bets, closing positions by hand. The bottleneck was me. At scale, the advantage disappears if execution speed doesn't match signal speed.

// What Was Built

PolyEdge v4 — fully autonomous prediction market trader. Scans Polymarket binary options, runs each setup through InDecision, executes on high-conviction signals (HC ≥15 spread), manages position sizing, and triggers Post Mortem AI on every close. Direct InDecision Engine integration with no human in the loop.
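The scan → score → execute → post-mortem loop reads naturally as a pipeline. A hedged sketch, assuming hypothetical function shapes; only the HC ≥15 execution threshold and the post-mortem-on-every-close behavior come from the text.

```python
# score, execute, and post_mortem are injected stand-ins for the real
# InDecision, order, and Post Mortem AI components (all hypothetical here).
def run_cycle(markets, score, execute, post_mortem, hc_threshold=15):
    """One pass over the market scan: trade only high-conviction spreads,
    then trigger a post mortem on every closed fill."""
    fills = []
    for market in markets:
        spread, direction = score(market)      # InDecision bull/bear spread
        if abs(spread) >= hc_threshold:        # HC >= 15 gate, no human in the loop
            fills.append(execute(market, direction))
    for fill in fills:
        if fill.get("closed"):
            post_mortem(fill)                  # fires on every close
    return fills
```

The design point from the lesson is visible in the signature: `score` (the model) is a required input, the loop itself is just plumbing.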

// The Numbers
v4.0 · Current version deployed
HC ≥15 · Execution threshold
3 · New market categories
Auto · Post mortem on every close
// The Lesson

Automation without a framework is just fast mistakes. The bot is only as good as InDecision. The model came first. The automation made it scale.

Case Study 04
The Harness
// The Challenge

A fleet of 54 apps and a growing agent roster had one systemic vulnerability: trust. Agents were trusted to read their context — they could skip it. Trusted to complete tasks — they could stop halfway. Trusted to stay within budget — spend was invisible until the invoice. The system was capable. It wasn't provable.

// What Was Built

Six mechanical enforcement gates running underneath all 54 apps. Not prompt engineering. Not convention. Infrastructure with receipts.

01 / Credit Preflight
Before any Opus-class session initiates, the harness queries the API credit balance and projects the daily burn rate. On first deployment it returned a live WARN: $5.43 remaining at $10.91/day. Caught before it killed a session.

02 / Akashic Read Receipt
Every agent must read its knowledge base before work begins. A receipt keyed to the session ID is required at dispatch. No receipt: blocked, logged, alerted. The constraint is mechanical, not instructional.

03 / Independent Completion Verification
When an agent marks a task done, a separate Haiku-class verifier checks the output against the acceptance criteria before it archives. Agents don't grade their own work.

04 / Per-Agent Cost Attribution
Every API call is attributed to a specific agent and model tier. Haiku for observation. Sonnet for planning. Opus for directives only. The cost model is enforced, not assumed.

05 / Skill Drift Governance
66 skills entered a 90-day freshness window on April 13, 2026. Skills without a successful invocation in 90 days are flagged for review. The library compounds. It doesn't rot silently.

06 / Peer Agent Bus
The E-Board routes peer-to-peer without Knox as broker. The harness logs every exchange. Knox only sees escalations.
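The first gate is simple enough to sketch. A minimal illustration of a credit-preflight check, assuming hypothetical inputs and thresholds; the dollar figures in the text came from a live run, not from this code.

```python
# Hypothetical preflight gate: block or warn before an expensive session
# starts. balance/daily_burn would come from the billing API in practice.
def credit_preflight(balance: float, daily_burn: float, min_days: float = 1.0):
    """Project runway from current balance and burn rate."""
    days_left = balance / daily_burn if daily_burn > 0 else float("inf")
    if days_left < min_days:
        return ("WARN", f"${balance:.2f} remaining at ${daily_burn:.2f}/day")
    return ("OK", f"{days_left:.1f} days of runway")
```

Run against the numbers from the first deployment, $5.43 at $10.91/day projects under half a day of runway, which is exactly the WARN case.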

// The Numbers
77 · Tests passing
6 · Enforcements
1 · Session to build
31 · Files shipped
// The Lesson

Capability is not the same as reliability. A capable system does what you ask when everything goes right. A reliable system proves what it did — even when something went wrong. The Harness is the difference between a fleet that runs and a fleet that can be trusted.