AI Signal Post-Mortems: The Competitive Intelligence Methodology Most Systems Skip
Intelligence practitioners spend enormous effort studying external adversaries. Almost none apply the same systematic apparatus to studying themselves. Alpha Journal is what that looks like when you do.

The most dangerous blind spot in any intelligence operation isn't the adversary you've misread. It's the instrument you trust without questioning. Analysts who spend years mapping competitor behavior will rarely turn the same apparatus on themselves — on their own methods, their own signal sources, their own pattern of judgment. The assumption is that the tool is clean. It almost never is.
Systematic traders make this mistake constantly. They build sophisticated signal architectures, backtest obsessively, then deploy — and treat the system as a black box from that point forward. Win or lose, the machine runs. The feedback loop closes on outcomes but never on the signal machinery itself. Which means the system degrades silently, and no one knows until the drawdown is deep enough to force attention.
Alpha Journal is the corrective. It's a post-mortem system built not to study markets, but to study the signal engine studying markets. The distinction matters more than it sounds.
The Intelligence Discipline Most Systems Skip
Competitive intelligence at its core is a feedback architecture. You observe, you structure, you extract patterns, you adjust. The cycle is the methodology — not the individual insight, but the compounding of insights over time into a model that gets more accurate with use.
Most trading systems capture the observation and the outcome but skip the structured analysis layer entirely. A trade fires. It wins or loses. The outcome feeds back into a performance dashboard. End of loop. This is intelligence collection without intelligence processing — data accumulation mistaken for understanding.
What's missing is exactly what any serious CI operation would flag immediately: attribution. Who or what was responsible for the outcome? Under what conditions did the signal perform? Where did it degrade and why? Without that analysis, you're not learning from the system — you're just watching it.
The InDecision Framework treats decisions as data points in a structured process. Signal post-mortems take that logic one layer deeper: they treat every trade the system touches — wins, losses, and deliberate passes — as evidence about the signal machinery itself. Every outcome is testimony. The question is whether you're listening systematically or just reacting to the loudest signals.
What a Signal Post-Mortem Actually Is
Alpha Journal runs 11 modules in sequence, among them pull_trades, attribution, postmortem, health_report, degradation_alert, and auto_weight_adjuster, with the remaining stages handling downstream synthesis. The pipeline isn't just logging; it's structured analysis under a specific epistemological constraint: no LLM in the measurement layer.
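For orientation, here is a minimal sketch of what that sequencing could look like, assuming a shared-context runner that threads each module's output to the stages after it. The module names come from the pipeline above; the runner itself is illustrative, not Alpha Journal's actual orchestration code.

```python
from typing import Callable

# Illustrative runner (assumption): each module reads the shared context
# and writes its output back under its own name, so later stages can
# consume earlier results without hidden coupling.
Stage = tuple[str, Callable[[dict], object]]

def run_pipeline(stages: list[Stage]) -> dict:
    context: dict = {}
    for name, module in stages:
        context[name] = module(context)
    return context

# Ordering as described in the article: measurement first, synthesis last.
# pipeline = [("pull_trades", pull_trades), ("attribution", attribution),
#             ("postmortem", postmortem), ("health_report", health_report),
#             ("degradation_alert", degradation_alert),
#             ("auto_weight_adjuster", auto_weight_adjuster), ...]
```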
That constraint is deliberate and important. The attribution module computes signal factor win rates from actual trade outcomes — deterministic math, not language model inference. This is the equivalent of a CI operation insisting on primary sources before interpretation. You don't let the analyst rewrite the raw intercept. You compute the facts first, then apply judgment to what the facts mean.
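A minimal sketch of that deterministic attribution, assuming a simple trade record with a realized pnl and the list of factors that fired on entry; both field names are illustrative, not Alpha Journal's actual schema.

```python
from collections import defaultdict

def factor_win_rates(trades: list[dict]) -> dict[str, float]:
    """Deterministic attribution: win rate per signal factor, computed
    directly from closed-trade outcomes. No model inference anywhere."""
    wins: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for trade in trades:
        for factor in trade["factors"]:  # factors that fired on entry
            totals[factor] += 1
            if trade["pnl"] > 0:
                wins[factor] += 1
    return {f: wins[f] / totals[f] for f in totals}

# Example: two trades sharing a momentum factor, one winner.
trades = [
    {"pnl": 120.0, "factors": ["momentum", "volume_spike"]},
    {"pnl": -45.0, "factors": ["momentum"]},
]
print(factor_win_rates(trades))  # {'momentum': 0.5, 'volume_spike': 1.0}
```

The point of keeping this layer as plain arithmetic is that any downstream interpretation, human or model-generated, starts from numbers that cannot have been confabulated.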
The post-mortem module works in compound mode. Rather than generating individual trade stories — which create narrative but not pattern — it clusters trades over a 72-hour window by dominant factor and outcome. Losses that share a dominant signal factor get analyzed together. Winners that shared conditions get analyzed together. The system is looking for systemic behavior, not individual events.
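The clustering step might look something like the sketch below, again with an assumed trade schema; closed_at, dominant_factor, and pnl are illustrative field names.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=72)  # the compounding window described above

def cluster_trades(trades: list[dict], now: datetime) -> dict[tuple, list[dict]]:
    """Group recent trades by (dominant_factor, outcome) so the post-mortem
    analyzes shared behavior rather than individual trade stories."""
    recent = [t for t in trades if now - t["closed_at"] <= WINDOW]
    clusters: dict[tuple, list[dict]] = {}
    for t in recent:
        key = (t["dominant_factor"], "win" if t["pnl"] > 0 else "loss")
        clusters.setdefault(key, []).append(t)
    return clusters
```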
This is how intelligence analysis is supposed to work. A single anomaly is noise. Three anomalies with a shared characteristic are a pattern. Ten patterns with structural similarity are a capability. The clustering logic forces the system to aggregate before it interprets, which means the post-mortems surface real patterns rather than confabulated explanations for individual outcomes.
Ghost P&L extends this further. The system doesn't just analyze trades it took — it analyzes signals it declined to act on. Every skipped trade has an opportunity cost, and that cost is measured explicitly. If the system's filters are systematically rejecting winning setups, the ghost P&L line will show it before the performance metrics catch up. This is the intelligence equivalent of studying your own intelligence gaps — tracking not just what you know, but what you failed to know, and what that failure cost.
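A sketch of how that ledger could be kept, assuming each declined signal records which filter rejected it and the P&L it would have produced; how that hypothetical P&L gets replayed against realized prices is its own design question, elided here.

```python
from collections import defaultdict

def ghost_pnl_by_filter(skipped: list[dict]) -> dict[str, float]:
    """Sum the hypothetical P&L of declined signals per rejecting filter
    (field names are assumptions). A filter with a large positive ghost
    line is systematically vetoing winning setups and deserves scrutiny."""
    ledger: dict[str, float] = defaultdict(float)
    for signal in skipped:
        ledger[signal["rejected_by"]] += signal["hypothetical_pnl"]
    return dict(ledger)
```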
The Feedback Architecture
Degradation detection must run below the level of human attention. A feedback loop that requires a person to notice the problem first isn't a feedback loop — it's a post-mortem waiting to happen. The detection tier should be automated. The response tier requires judgment. These are not the same job.
A win rate below 45% for three consecutive weeks triggers a degradation alert. The alert doesn't just flag the problem; it auto-drafts a pull request to adjust signal weights. The system detects its own degradation and generates the corrective intervention in the same pipeline run.
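In sketch form, with the trigger numbers from above and an assumed per-factor shrink rule standing in for the real weight-adjustment logic, which the article does not specify:

```python
DEGRADATION_THRESHOLD = 0.45   # weekly win-rate floor
CONSECUTIVE_WEEKS = 3          # how long degradation must persist

def is_degraded(weekly_win_rates: list[float]) -> bool:
    """True when the last three completed weeks all fell below the floor."""
    recent = weekly_win_rates[-CONSECUTIVE_WEEKS:]
    return (len(recent) == CONSECUTIVE_WEEKS
            and all(rate < DEGRADATION_THRESHOLD for rate in recent))

def draft_weight_adjustment(weights: dict[str, float],
                            factor_win_rates: dict[str, float]) -> dict[str, float]:
    """Illustrative corrective (assumption): halve the weight of any factor
    running below the floor. Only the trigger and the drafted pull request
    are documented; this shrink rule stands in for the real adjuster."""
    return {factor: weight * (0.5 if factor_win_rates.get(factor, 1.0)
                              < DEGRADATION_THRESHOLD else 1.0)
            for factor, weight in weights.items()}
```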
That's a closed feedback loop operating below the level of human judgment. Not autonomous — the PR still requires review and merge — but the detection and initial response are handled without a human needing to notice the problem first. This matters because human attention is the bottleneck in most CI operations. Analysts miss patterns not because the data isn't there, but because the data never surfaces to a layer where human attention can be applied.
The architecture solves this by automating the detection tier while keeping human judgment in the response tier. The system knows when it's degrading. The analyst decides what to do about it.
304 tests covering 96% of the codebase enforce a different kind of discipline: the system must behave correctly even when it's producing bad outputs. A signal post-mortem system that works when signals are performing is useless — that's when you don't need it. The test coverage exists to ensure the degradation detection, ghost P&L calculation, and auto-weight drafting all fire correctly when the system is under stress. Adversarial conditions are exactly when the feedback architecture needs to hold.
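A pytest-style sketch of what that discipline looks like in practice, exercising the is_degraded check from the earlier sketch; file and module names here are hypothetical.

```python
# test_degradation.py -- hypothetical test file: the alert must fire
# precisely when the system is performing badly, because that is the
# only time the feedback architecture earns its keep.
from degradation import is_degraded  # hypothetical module path

def test_alert_fires_after_three_bad_weeks():
    # Three completed weeks under the 45% floor: the alert must fire.
    assert is_degraded([0.52, 0.44, 0.41, 0.39])

def test_recovery_week_resets_the_count():
    # A single week back above the floor breaks the consecutive run.
    assert not is_degraded([0.44, 0.41, 0.47])
```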
Health reporting runs as a separate module after postmortem synthesis. The distinction between a health report and a post-mortem is precision: a post-mortem attributes causality, a health report surfaces current state. Running them as separate modules with separate outputs means the degradation alert has clean signal to act on — it's reading the health report, not trying to extract operational state from a narrative analysis document. Separation of function is a basic tradecraft principle that most systems ignore because it requires more engineering.
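One way to keep that separation honest is a typed report the alert can read field by field; the field set below is an assumption for illustration, not Alpha Journal's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HealthReport:
    """State, not story: structured fields the degradation alert reads
    directly, with causal narrative left to the post-mortem module."""
    week: str          # period the report covers
    win_rate: float    # realized win rate over the period
    trade_count: int   # trades closed in the window
    ghost_pnl: float   # opportunity cost of declined signals
```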
What This Means for Systematic Intelligence
The principle generalizes beyond trading. Any system that generates signals — a competitive intelligence function, a market research operation, a product analytics stack — has the same structural problem. The signals age. The conditions that made them valid shift. The weights assigned to different sources become miscalibrated over time as the environment changes. Without a systematic post-mortem layer, the degradation is invisible until it's consequential.
Jeremy Knox has argued that competitive intelligence is prophecy — not prediction, but the rigorous structuring of what you observe into forward-looking pattern. The architecture of that argument depends on the instrument being trustworthy. A CI function that studies competitors without studying itself is like an intelligence service that vets foreign sources but not its own analysts. The external picture may be accurate. The internal filter is the unknown.
Signal post-mortems are what it looks like to apply the full CI methodology — including the inward-facing discipline. Systematic observation applied to your own signal machinery. Structured analysis that forces attribution before interpretation. Pattern extraction at the cluster level, not the individual trade level. Feedback that closes the loop on the instrument, not just the outcomes the instrument produces.
Most trading systems study markets. The edge is in studying the signal machinery that studies markets.
The difference between a system that degrades silently and one that self-corrects isn't raw capability — it's architecture. The feedback loop has to be built in deliberately, with measurement before interpretation and detection before response. Alpha Journal is one implementation of that architecture. The methodology it embodies is older than algorithmic trading, older than quantitative finance. It's what serious intelligence operations have always done when they take the quality of their own instruments seriously.
The question isn't whether your signals are good. The question is whether you have a system in place to know when they stop being good — before the market tells you.