Decision layer for autonomous agents

Evaluate actions before they execute.

Every agent should ask one question before acting: should this execute?

One call. A structured verdict.

Send a proposed action — with cost, confidence, and a target — and get back a go/no-go decision in milliseconds. No model, no config, no overhead.

POST /v1/should-execute
Request
{
  "action": "call_api",
  "target": "https://api.openai.com/v1/chat",
  "expected_cost": 0.002,
  "confidence": 0.6,
  "context": "user wants summary"
}
Response
{
  "should_execute": true,
  "confidence": 0.82,
  "risk": "low",
  "reason": "Low cost and high
    likelihood of success",
  "decision_factors": {
    "cost_score": 0.9,
    "confidence_score": 0.7,
    "risk_score": 0.2
  }
}
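
For reference, here is a minimal TypeScript client for the call above. It is a sketch, not an official SDK: the base URL (api.example.com), the Bearer auth header, and the Verdict type name are assumptions; only the path and the payload shapes come from the examples.

// A minimal client sketch. The base URL, the Bearer auth header, and the
// Verdict type name are assumptions; the path and payload shapes mirror
// the examples above.
type Verdict = {
  should_execute: boolean;
  confidence: number;
  risk: string;
  reason: string;
  decision_factors: {
    cost_score: number;
    confidence_score: number;
    risk_score: number;
  };
};

type ProposedAction = {
  action: string;
  target: string;
  expected_cost: number;
  confidence: number;
  context: string;
};

async function shouldExecute(action: ProposedAction): Promise<Verdict> {
  const res = await fetch("https://api.example.com/v1/should-execute", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY", // hypothetical auth scheme
    },
    body: JSON.stringify(action),
  });
  if (!res.ok) throw new Error(`should-execute failed: ${res.status}`);
  return (await res.json()) as Verdict;
}

// Gate an agent step on the verdict before acting.
const verdict = await shouldExecute({
  action: "call_api",
  target: "https://api.openai.com/v1/chat",
  expected_cost: 0.002,
  confidence: 0.6,
  context: "user wants summary",
});
if (!verdict.should_execute) console.log(`Skipped: ${verdict.reason}`);

Gate each agent step on the returned verdict, and fall back to a human or a cheaper plan when it comes back false.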

Simple rules, clear outcomes.

Three factors combine into a single score. Above 0.6, execute. At or below, don't.

1. Cost evaluation: actions over $0.01 are penalised; lower cost raises the score.
2. Confidence check: agent confidence below 0.5 is a soft veto; uncertainty feeds directly into the verdict.
3. Target risk scoring: known domains score low risk; unknown targets raise it automatically.
4. Decision threshold: score > 0.6 → should_execute: true; otherwise false, with a plain-English reason (sketched below).
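
The exact formulas and weights behind these factors aren't published. Below is a TypeScript sketch of how the four rules might compose, under assumed equal weights and an assumed domain allowlist (KNOWN_DOMAINS); only the $0.01 penalty, the 0.5 soft veto, and the 0.6 threshold are taken from the list above.

// Illustrative sketch only: the published rules are the $0.01 cost penalty,
// the 0.5 confidence soft veto, known-domain risk scoring, and the 0.6
// threshold. Every concrete formula, weight, and domain below is assumed.

const KNOWN_DOMAINS = new Set(["api.openai.com"]); // assumed allowlist

function scoreAction(a: {
  target: string;
  expected_cost: number;
  confidence: number;
}) {
  // 1. Cost: over $0.01 is penalised; cheaper actions score higher.
  const cost_score = a.expected_cost > 0.01 ? 0.3 : 1 - a.expected_cost / 0.01;

  // 2. Confidence: below 0.5 acts as a soft veto (heavy penalty, not a hard stop).
  const confidence_score = a.confidence < 0.5 ? a.confidence * 0.5 : a.confidence;

  // 3. Target risk: known domains score low; unknown targets raise it.
  const risk_score = KNOWN_DOMAINS.has(new URL(a.target).hostname) ? 0.2 : 0.8;

  // 4. Combine (equal weights assumed) and apply the published 0.6 threshold.
  const score = (cost_score + confidence_score + (1 - risk_score)) / 3;
  return { should_execute: score > 0.6, score, cost_score, confidence_score, risk_score };
}

On the example request above (cost 0.002, confidence 0.6, a known domain), this sketch lands at roughly 0.73, comfortably over the threshold, which matches the should_execute: true verdict shown earlier.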