// THE STACK

Every Architectural Choice Has a Reason.

The tech decisions behind 54 apps, four layers, and a system that doesn't break.

// THE FOUR LAYERS
Agent Layer
Soul + Claude Code

Persistent context plus on-demand build agent. Soul runs 24/7 and maintains state. Claude Code is spawned for implementation tasks and closes after each session. The split is intentional — always-on orchestration, on-demand execution.

Intelligence Layer
InDecision Engine + Content Flywheel

Structured scoring over vibes. InDecision removes intuition as the primary input and replaces it with a 6-factor weighted model. Content Flywheel applies the same Gather → Synthesize → Deliver discipline to content production.
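Mechanically, a 6-factor weighted model reduces to a dot product of ratings and weights. A minimal sketch — the factor names and weights below are hypothetical, not InDecision's actual configuration:

```python
# Hypothetical 6-factor weighted scoring model (illustrative names/weights,
# not InDecision's real configuration). Weights sum to 1.0 so the output
# stays on the same 1-10 scale as the input ratings.
WEIGHTS = {
    "impact": 0.25,
    "effort": 0.20,
    "urgency": 0.15,
    "alignment": 0.15,
    "risk": 0.15,
    "reversibility": 0.10,
}

def score(factors: dict[str, float]) -> float:
    """Combine 1-10 factor ratings into a single weighted score."""
    if set(factors) != set(WEIGHTS):
        raise ValueError("all six factors must be rated")
    return sum(WEIGHTS[name] * rating for name, rating in factors.items())

print(score({"impact": 8, "effort": 6, "urgency": 7, "alignment": 9,
             "risk": 4, "reversibility": 5}))
```

Because every decision is forced through the same six inputs, two options can be compared by a single number instead of by gut feel.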

Build Layer
Next.js + FastAPI + Docker

Static export for web — zero server cost, instant CDN delivery. Python for AI and data pipelines — the language the models speak natively. Docker isolates every service so a broken container doesn't cascade into a broken system.

Monitoring Layer
Invictus Sentinel + Mission Control

Catch incidents before Knox does. Sentinel watches every service, tunnel, and pipeline independently via launchd — so if Soul goes down, monitoring stays up. Mission Control provides a single-pane-of-glass view across all 54 apps.
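That independence comes from launchd itself: a job with `KeepAlive` is restarted by the OS whenever it exits, regardless of what else is running. A minimal sketch of such a job definition — the label and paths here are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Illustrative job definition; label and script path are assumptions -->
    <key>Label</key>
    <string>com.tesseract.sentinel</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/env</string>
        <string>python3</string>
        <string>/path/to/sentinel.py</string>
    </array>
    <!-- launchd restarts the process if it ever exits -->
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```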

// FULL STACK BREAKDOWN
Frontend
Next.js 14 App Router — `output: 'export'`, fully static, zero server cost
Tailwind CSS v3 — Utility-first styling with custom design tokens
TypeScript — Type safety across every component and utility
Framer Motion — Animation and transition layer
Vercel — Deploy target: GitHub → auto-deploy on main merge
Backend & AI
Python + FastAPI — All AI pipelines and internal APIs
Anthropic Claude API — Primary LLM: reasoning, synthesis, code generation
Custom prompt layers — Framework-aware prompts built per pipeline
Structured output parsing — Pydantic models for reliable AI output
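The structured output parsing entry can be sketched as follows, assuming Pydantic v2 — the model fields are illustrative, not the actual pipeline schema:

```python
# Sketch of structured output parsing (field names are illustrative).
# The LLM is prompted to return JSON; Pydantic validates it into a typed
# object, so malformed output fails loudly instead of corrupting a pipeline.
from pydantic import BaseModel, ValidationError

class Decision(BaseModel):
    verdict: str         # e.g. "ship" or "hold"
    confidence: float    # 0.0 - 1.0
    reasons: list[str]

raw = '{"verdict": "ship", "confidence": 0.82, "reasons": ["low risk"]}'

try:
    decision = Decision.model_validate_json(raw)
    print(decision.verdict, decision.confidence)
except ValidationError as e:
    print("model returned malformed output:", e)
```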
Data & State
PostgreSQL — Persistent relational data for trading and analytics
YAML/JSON flat files — Soul workspace and memory files
No ORM — Direct queries: performance over abstraction
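The no-ORM pattern is plain parameterized SQL. A runnable sketch — shown against in-memory SQLite so it is self-contained, though the production stack would drive a PostgreSQL client the same way, and the table here is hypothetical:

```python
# "No ORM" means plain SQL with parameter binding: no query builder,
# no lazy-loading magic, one explicit query per question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, pnl REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("AAPL", 120.5), ("AAPL", -30.0), ("TSLA", 75.0)])

# One explicit aggregate instead of an ORM relationship traversal:
row = conn.execute(
    "SELECT symbol, SUM(pnl) FROM trades WHERE symbol = ? GROUP BY symbol",
    ("AAPL",),
).fetchone()
print(row)  # -> ('AAPL', 90.5)
```

The trade-off is stated plainly in the table above: you write more SQL by hand, and in exchange every query's cost and shape is visible at the call site.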
Infrastructure
Mac Mini M4 Pro — Always-on host: 24GB RAM, 1TB SSD
Cloudflare Tunnels — Secure remote access, no exposed ports
Docker — Containerized services with isolated blast radii
GitHub Actions — CI/CD for all web properties
launchd — macOS-native service manager for cron jobs
// WHY NOT X
Why not AWS/cloud?

Cost, latency, and unnecessary complexity. The Mac Mini handles 54 apps without breaking a sweat. Cloud would add a monthly bill, an extra network hop, and a management plane that solves a problem that doesn't exist at this scale.

Why not LangChain?

Too opinionated. LangChain imposes abstractions that fight custom pipeline logic. Building custom prompt layers takes longer upfront but gives total control over behavior, cost, and debugging. No magic boxes.
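A minimal sketch of what a custom prompt layer over the direct API can look like — the framework text, model ID, and helper names here are assumptions for illustration, not Tesseract's actual prompts:

```python
# Hypothetical "framework-aware" prompt layer: a plain function composes the
# system prompt and messages, with no chain abstraction in between.
FRAMEWORKS = {
    "flywheel": "Work in three phases: Gather, Synthesize, Deliver.",
}

def build_prompt(framework: str, task: str) -> dict:
    """Compose the system prompt and user message for one pipeline run."""
    return {
        "system": FRAMEWORKS[framework],
        "messages": [{"role": "user", "content": task}],
    }

def run(task: str, framework: str = "flywheel") -> str:
    """Send one composed prompt directly to the Anthropic API."""
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=512,
        **build_prompt(framework, task),
    )
    return response.content[0].text
```

Because `build_prompt` is a pure function, prompt behavior can be unit-tested and diffed like any other code, which is the control a chain framework would hide.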

Why not microservices at scale?

Overkill for a single operator. Microservice overhead — service discovery, inter-service auth, distributed tracing — is designed for teams of 50+. The Tesseract stack uses isolated blast radii to get the key benefit without the overhead.

Why not a managed LLM platform?

Direct API access beats every wrapper. Managed platforms add cost, add latency, and limit control over model parameters, system prompts, and output formatting. Direct Anthropic API is the only rational choice at this layer.

// EXPLORE FURTHER

See the Full Architecture

Four layers, 54 apps, one persistent agent — the AIOS architecture that makes the stack coherent.

VIEW PLATFORM ARCHITECTURE →