The question I kept coming back to: Claude can reason. Can it reason about trades?

The thing about trading decisions is that they're not purely quantitative. Signal strength matters. Regime context matters. What happened in the last three trades matters. These are exactly the things a language model is good at — synthesising messy context into a single judgment call.

So I built Archer.

The architecture

Python handles everything that needs to be fast. It scans BTC and ETH markets every five minutes, computes ATR, RSI, MACD, order book imbalance, and funding rate. It applies signal filters. If the composite score clears 0.45 and the minimum ATR threshold is met, the signal gets passed to Claude.
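A minimal sketch of that filter stage. The field names, weights, and the indicator blend are my assumptions for illustration; only the 0.45 composite score gate and the ATR floor come from the design above.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    atr: float             # Average True Range
    rsi: float             # Relative Strength Index, 0..100
    macd_hist: float       # MACD histogram value
    book_imbalance: float  # order book bid/ask imbalance, -1..1
    funding_rate: float

def composite_score(s: Snapshot) -> float:
    """Blend normalised indicator readings into a single 0..1 score.
    Weights here are placeholders, not the live values."""
    rsi_edge = abs(s.rsi - 50) / 50          # distance from neutral
    macd_edge = min(abs(s.macd_hist), 1.0)   # clamp histogram magnitude
    flow_edge = (s.book_imbalance + 1) / 2   # map -1..1 onto 0..1
    return 0.4 * rsi_edge + 0.3 * macd_edge + 0.3 * flow_edge

def should_escalate(s: Snapshot, min_atr: float) -> bool:
    # Only signals clearing both gates are passed to Claude.
    return composite_score(s) >= 0.45 and s.atr >= min_atr
```

The point of the two-gate check: a strong score in a dead-volatility market never reaches the model, so Claude only ever reasons about signals worth the API call.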

Claude then sees the market snapshot, the last few journal entries, and any skills it has built from past trades. It reasons about the signal: Is the regime trending or ranging? Does this pattern match anything that worked before? Is there a reason to skip even though the score is high enough? It returns a decision — TRADE or SKIP — with reasoning.
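The decision step reduces to a contract: assemble the context, ask for a structured verdict, and treat anything malformed as a SKIP. The JSON shape and function names below are assumptions; the inputs (snapshot, journal, skills) and the TRADE/SKIP outcome are from the design above.

```python
import json

def build_context(snapshot: dict, journal_tail: list[str], skills: list[str]) -> str:
    """Assemble the prompt context sent to the model (illustrative layout)."""
    return (
        "Market snapshot:\n" + json.dumps(snapshot, indent=2) + "\n\n"
        "Recent journal entries:\n" + "\n".join(journal_tail) + "\n\n"
        "Learned skills:\n" + "\n".join(skills) + "\n\n"
        'Reply with JSON: {"decision": "TRADE" | "SKIP", "reasoning": "..."}'
    )

def parse_decision(raw: str) -> tuple[str, str]:
    """Validate the model's reply; anything malformed defaults to SKIP."""
    try:
        obj = json.loads(raw)
        decision = obj.get("decision", "SKIP")
        if decision not in ("TRADE", "SKIP"):
            decision = "SKIP"
        return decision, obj.get("reasoning", "")
    except json.JSONDecodeError:
        return "SKIP", "unparseable reply"
```

Defaulting to SKIP on any parse failure keeps the failure mode conservative: a confused model can never accidentally trade.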

If Claude says trade, Python applies all risk guards before any order goes to the exchange. Claude cannot override the 5% daily loss limit, the 4:1 TP:SL minimum, or the Kelly-based position size. The reasoning lives in Claude. The execution discipline lives in Python.
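A sketch of that guard layer. The function names and the 25% Kelly cap are my assumptions; the 5% daily loss limit, the 4:1 TP:SL minimum, and Kelly-based sizing come from the design above. Note the guards take no argument describing what Claude decided.

```python
def passes_guards(equity: float, day_start_equity: float,
                  tp_distance: float, sl_distance: float) -> bool:
    """Hard checks applied after Claude's decision, before any order."""
    daily_loss = (day_start_equity - equity) / day_start_equity
    if daily_loss >= 0.05:             # 5% daily loss limit
        return False
    if tp_distance < 4 * sl_distance:  # 4:1 reward-to-risk minimum
        return False
    return True

def kelly_fraction(win_rate: float, payoff: float) -> float:
    """Classic Kelly: f* = p - (1 - p) / b, floored at zero."""
    return max(0.0, win_rate - (1 - win_rate) / payoff)

def position_size(equity: float, win_rate: float, payoff: float,
                  cap: float = 0.25) -> float:
    # Fractional Kelly: cap the fraction so a bad win-rate estimate
    # can't produce an oversized order. The cap value is illustrative.
    return equity * min(kelly_fraction(win_rate, payoff), cap)
```

Because `passes_guards` never sees the model's reasoning, there is no prompt, however persuasive, that can loosen the limits.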

What it does at midnight

Every ten trades, or whenever the daily drawdown hits 3%, Archer runs a reflection. Claude reviews the journal — wins, losses, patterns, mistakes — and updates its skill set. Skills are stored to disk. They get retrieved in future decisions. The idea is that the agent gets better over time, not just because it has seen more market data, but because the model has explicitly documented what it learned.
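The trigger and the store can be sketched in a few lines. The file layout and names are assumptions; the ten-trade and 3% drawdown triggers come from the design above.

```python
import json
from pathlib import Path

SKILLS_PATH = Path("skills.json")  # illustrative location

def should_reflect(trades_since_reflection: int, daily_drawdown: float) -> bool:
    # Reflect on a fixed cadence, or early when drawdown accelerates.
    return trades_since_reflection >= 10 or daily_drawdown >= 0.03

def save_skills(skills: list[dict]) -> None:
    """Persist the model's documented lessons between runs."""
    SKILLS_PATH.write_text(json.dumps(skills, indent=2))

def load_skills() -> list[dict]:
    """Retrieve stored skills for injection into future decisions."""
    if SKILLS_PATH.exists():
        return json.loads(SKILLS_PATH.read_text())
    return []
```

Triggering on drawdown as well as trade count means the agent reflects soonest exactly when it is doing worst.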

Whether this compounds meaningfully at low trade frequency, I'm still measuring. What I know is that the journal entries are genuinely useful — Claude's reasoning on skipped trades is often more interesting than on the ones it took.

The constraint I won't lift

Claude cannot place orders directly. It cannot access the exchange API. It cannot disable circuit breakers. Every execution pathway goes through Python guards that don't care what Claude decided. This isn't distrust — it's architecture. The model's value is in reasoning, not in bypassing the guardrails that make it safe to run overnight while you're asleep.

Running Archer on $17.27 of capital. Real exchange. Real fees. The number is small enough that the cost of being wrong is a learning fee, not a catastrophe.