## Configuration

| Flag | Default | Description |
| --- | --- | --- |
| `-db` | `audit.db` | SQLite audit database path |
| `-receipt-db` | `receipts.db` | SQLite receipt store path |
| `-key` | (ephemeral) | Ed25519 private key PEM file for signing receipts |
| `-taxonomy` | (none) | Taxonomy mappings JSON file for action classification |
| `-rules` | (built-in defaults) | Policy rules YAML file |
| `-name` | (inferred from command) | Server name for the audit trail |
| `-issuer` | `did:agent:mcp-proxy` | Issuer DID for receipts |
| `-issuer-name` | (none) | Issuer name, e.g. Claude Code or Codex |
| `-issuer-model` | (none) | AI model identifier, e.g. `claude-sonnet-4-6`. Static per session; omit if the client can switch models mid-session |
| `-operator-id` | (none) | Operator DID (organisation running the agent), e.g. `did:web:anthropic.com` |
| `-operator-name` | (none) | Operator name, e.g. Anthropic |
| `-principal` | `did:user:unknown` | Principal DID for receipts |
| `-chain` | (auto UUID) | Chain ID for receipt chaining |
| `-http` | `127.0.0.1:0` | HTTP address for the approval endpoint (default: random port, logged to stderr) |
| `-approval-timeout` | `1m0s` | Maximum time to wait for HTTP approval before a paused call is auto-denied |

Rules are defined in YAML and control what happens when a tool call matches:

```yaml
rules:
  - name: block_destructive_ops
    description: Block delete operations on sensitive tools
    enabled: true
    tool_pattern: "delete_*"
    server_pattern: "*postgres*"
    operation_types: [delete]
    min_risk_score: 70
    action: block
  - name: pause_high_risk
    description: Require approval for high-risk operations
    enabled: true
    min_risk_score: 50
    action: pause
```
| Field | Required | Description |
| --- | --- | --- |
| `name` | yes | Unique rule identifier |
| `description` | no | Human-readable description |
| `enabled` | yes | Whether the rule is active |
| `tool_pattern` | no | Glob pattern matching the tool name (case-insensitive) |
| `server_pattern` | no | Glob pattern matching the server name |
| `operation_types` | no | Filter by operation type: `read`, `write`, `delete`, `execute` |
| `min_risk_score` | no | Minimum risk score (0-100) to match |
| `action` | yes | One of `pass`, `flag`, `pause`, `block` |
| Action | Behavior |
| --- | --- |
| `pass` | Log only, forward normally |
| `flag` | Log with highlight, forward normally |
| `pause` | Hold for HTTP approval (configurable timeout, auto-denied on timeout) |
| `block` | Reject immediately with error |

When multiple rules match, the most restrictive action wins (block > pause > flag > pass).
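
The matching and precedence logic can be sketched in Python (a hypothetical reimplementation for illustration; field names follow the rule schema above, but the proxy's actual code may differ):

```python
from fnmatch import fnmatch

# Severity order for the four actions; the highest wins when several rules match.
SEVERITY = {"pass": 0, "flag": 1, "pause": 2, "block": 3}

def rule_matches(rule, tool, server, op_type, risk_score):
    """Check one rule against a tool call. All filter fields are optional."""
    if not rule.get("enabled", False):
        return False
    if "tool_pattern" in rule and not fnmatch(tool.lower(), rule["tool_pattern"].lower()):
        return False
    if "server_pattern" in rule and not fnmatch(server.lower(), rule["server_pattern"].lower()):
        return False
    if "operation_types" in rule and op_type not in rule["operation_types"]:
        return False
    if "min_risk_score" in rule and risk_score < rule["min_risk_score"]:
        return False
    return True

def decide(rules, tool, server, op_type, risk_score):
    """Most restrictive action among matching rules; 'pass' if none match."""
    actions = [r["action"] for r in rules
               if rule_matches(r, tool, server, op_type, risk_score)]
    return max(actions, key=SEVERITY.__getitem__, default="pass")
```

With the two example rules above, a `delete_table` call against a postgres server at risk 75 matches both rules, and `block` wins over `pause`.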

Risk scores range from 0 to 100, computed from:

| Factor | Score | Condition |
| --- | --- | --- |
| Operation type | 0-40 | read=0, write=20, execute=30, delete=40 |
| Sensitive keywords | +30 | Tool name contains: auth, credential, password, token, secret, key |
| SQL without WHERE | +30 | Arguments contain UPDATE/DELETE/TRUNCATE without WHERE |
| Config modification | +20 | Tool name contains: config, setting |
| External messaging | +15 | Tool name starts with: `send_`, `post_` |
| Unknown operation | +10 | Fallback if classification fails |
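
The additive scoring can be sketched as follows (illustrative only, mirroring the factor table; the proxy's real scorer may differ in details):

```python
import re

SENSITIVE = ("auth", "credential", "password", "token", "secret", "key")
BASE = {"read": 0, "write": 20, "execute": 30, "delete": 40}

def risk_score(tool, op_type, args_text=""):
    """Additive risk score, clamped to 100; unknown op types fall back to +10."""
    score = BASE.get(op_type, 10)
    name = tool.lower()
    if any(k in name for k in SENSITIVE):
        score += 30
    # UPDATE/DELETE/TRUNCATE statement with no WHERE clause in the arguments.
    if re.search(r"\b(UPDATE|DELETE|TRUNCATE)\b", args_text, re.I) \
            and not re.search(r"\bWHERE\b", args_text, re.I):
        score += 30
    if "config" in name or "setting" in name:
        score += 20
    if name.startswith(("send_", "post_")):
        score += 15
    return min(score, 100)
```

For example, `delete_user_token` scores 40 (delete) + 30 (contains "token") = 70, enough to trigger the `block_destructive_ops` rule above.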

Tool names are classified by prefix (case-insensitive):

| Type | Prefixes |
| --- | --- |
| delete | `delete_`, `remove_`, `drop_`, `destroy_`, `purge_` |
| execute | `run_`, `exec_`, `invoke_`, `call_`, `trigger_` |
| write | `create_`, `update_`, `set_`, `add_`, `put_`, `edit_`, `modify_`, `write_` |
| read | `get_`, `read_`, `list_`, `search_`, `describe_`, `show_` |
| unknown | (fallback) |

MCP tool names are automatically stripped of their `mcp__<server>__` prefix before classification. For example, `mcp__github-audited__create_branch` is classified as `create_branch` (write). This means taxonomy mappings and policy rules use bare tool names.
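
Prefix stripping and classification can be sketched with a pair of hypothetical helpers:

```python
PREFIXES = {
    "delete":  ("delete_", "remove_", "drop_", "destroy_", "purge_"),
    "execute": ("run_", "exec_", "invoke_", "call_", "trigger_"),
    "write":   ("create_", "update_", "set_", "add_", "put_", "edit_", "modify_", "write_"),
    "read":    ("get_", "read_", "list_", "search_", "describe_", "show_"),
}

def strip_mcp_prefix(name):
    """Drop a leading mcp__<server>__ wrapper, keeping the bare tool name."""
    if name.startswith("mcp__"):
        parts = name.split("__", 2)
        if len(parts) == 3:
            return parts[2]
    return name

def classify(name):
    """Prefix-based operation type (case-insensitive), 'unknown' as fallback."""
    bare = strip_mcp_prefix(name).lower()
    for op, prefixes in PREFIXES.items():
        if bare.startswith(prefixes):
            return op
    return "unknown"
```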

For more precise classification than prefix-based inference, provide a `-taxonomy` JSON file mapping tool names to action types:

```json
{
  "mappings": [
    {"tool_name": "merge_pull_request", "action_type": "data.api.write"},
    {"tool_name": "list_issues", "action_type": "data.api.read"},
    {"tool_name": "delete_file", "action_type": "data.api.delete"}
  ]
}
```

Available action types include `filesystem.file.*`, `system.*`, and `data.api.*` (read, write, delete). See the taxonomy spec for the full list.

A bundled `configs/github_taxonomy.json` provides mappings for GitHub MCP server tools.
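
Parsing such a mapping file is straightforward; this hypothetical helper turns the JSON shown above into a lookup table, with the taxonomy taking precedence over prefix inference:

```python
import json

def load_taxonomy(text):
    """Parse a -taxonomy JSON document into a {tool_name: action_type} map."""
    data = json.loads(text)
    return {m["tool_name"]: m["action_type"] for m in data["mappings"]}

def lookup(taxonomy, tool, fallback="unknown"):
    """Exact taxonomy mapping first; caller's prefix-inferred type otherwise."""
    return taxonomy.get(tool, fallback)
```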

When a tool call is paused by a policy rule:

  1. The proxy logs an approval ID and waits up to the configured -approval-timeout duration

  2. The approval URL and token are logged to stderr at startup, in two formats:

    ```
    mcp-proxy: approvals at http://127.0.0.1:59850 (token: 5fce4e79...)
    {"event":"approval_endpoint","url":"http://127.0.0.1:59850","token":"5fce4e79..."}
    ```

    The JSON line is a stable contract for tooling that wants to discover the endpoint without parsing the human line.

  3. Approve or deny via HTTP (substitute the URL from your stderr, and export the token first):

```sh
# Copy the token from the stderr line above (or parse the JSON event).
export APPROVAL_TOKEN=5fce4e79...

# Approve
curl -X POST http://127.0.0.1:59850/api/tool-calls/{id}/approve \
  -H "Authorization: Bearer $APPROVAL_TOKEN"

# Deny
curl -X POST http://127.0.0.1:59850/api/tool-calls/{id}/deny \
  -H "Authorization: Bearer $APPROVAL_TOKEN"
```
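
Tooling can discover the endpoint from the stable JSON event in step 2 rather than scraping the human-readable line; a hypothetical helper that scans the proxy's stderr:

```python
import json

def find_approval_endpoint(stderr_lines):
    """Return (url, token) from the approval_endpoint JSON event, or None."""
    for line in stderr_lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip the human-readable variant
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if event.get("event") == "approval_endpoint":
            return event["url"], event["token"]
    return None
```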

If no response arrives before -approval-timeout elapses, the call is automatically denied.

Paused calls return a JSON-RPC error with structured details in error.data, including:

  • status (denied or timed_out)
  • rule_name
  • risk_score
  • approval_id
  • approval_url
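
A client can surface these fields to the user; for example, this hypothetical helper formats the `error.data` payload of a denied or timed-out call:

```python
def describe_denial(rpc_error):
    """Summarise a paused-call denial from a JSON-RPC error object."""
    data = rpc_error.get("data", {})
    status = data.get("status", "unknown")
    verb = "timed out" if status == "timed_out" else "denied"
    return (f"call {data.get('approval_id', '?')} {verb} "
            f"by rule {data.get('rule_name', '?')} "
            f"(risk {data.get('risk_score', '?')})")
```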

If you need a stable, predictable URL, pin the port with `-http 127.0.0.1:8080` (or any free port).

The proxy redacts sensitive data before storage using two passes:

JSON-aware redaction replaces values of sensitive keys including: password, token, api_key, secret, authorization, private_key, access_token, jwt, database_url, ssh_key, connection_string, and others (42 keys total).

Pattern-based redaction matches known secret formats:

  • GitHub PATs and OAuth tokens (ghp_*, gho_*)
  • OpenAI/Anthropic API keys (sk-*)
  • AWS access keys (AKIA*)
  • Bearer tokens
  • Slack tokens (xox*)
  • PEM private key blocks
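
The two passes can be sketched as follows (an illustrative subset: the real key list has 42 entries and the real patterns are broader):

```python
import re

# Hypothetical subset of the proxy's sensitive-key list and secret formats.
SENSITIVE_KEYS = {"password", "token", "api_key", "secret", "authorization"}
SECRET_PATTERNS = [
    re.compile(r"gh[po]_[A-Za-z0-9]{20,}"),   # GitHub PATs / OAuth tokens
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),     # OpenAI/Anthropic-style API keys
    re.compile(r"AKIA[A-Z0-9]{16}"),          # AWS access key IDs
]

def redact(obj):
    """Pass 1: JSON-aware key redaction. Pass 2: pattern matching in strings."""
    if isinstance(obj, dict):
        return {k: ("[REDACTED]" if k.lower() in SENSITIVE_KEYS else redact(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact(v) for v in obj]
    if isinstance(obj, str):
        for pat in SECRET_PATTERNS:
            obj = pat.sub("[REDACTED]", obj)
        return obj
    return obj
```

The key-aware pass catches secrets in well-named fields; the pattern pass catches known token formats embedded in free-form strings.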

Set the `BEACON_ENCRYPTION_KEY` environment variable to enable AES-256-GCM encryption of all stored audit data:

```sh
BEACON_ENCRYPTION_KEY="my-passphrase" mcp-proxy node server.js
```

Key derivation uses Argon2id (t=1, m=64MB, p=4). Encrypted fields are stored with an `enc:` prefix and transparently decrypted on retrieval.

## Multiple MCP clients running the proxy simultaneously

Each mcp-proxy instance binds its own HTTP server for the approval workflow. By default the OS picks a random free port, so multiple instances (Claude Desktop + Claude Code, Codex alongside either, etc.) coexist without configuration. The actual address is logged to stderr at startup.

If you want a fixed port for each client (e.g. so an external approval UI can connect reliably), pin it with -http:

```sh
# Pin one client to 8080
mcp-proxy -name github -http 127.0.0.1:8080 ... /path/to/server

# Pin another to 8081
mcp-proxy -name github -http 127.0.0.1:8081 ... /path/to/server
```

Each instance can also use separate -db and -receipt-db paths if you want isolated audit trails, or share the same databases if you want a unified log.