Quick Start

Get TameFlare running in under 5 minutes. Zero code changes to your agent.

Prerequisites

Requirement    Version    Check
Node.js        ≥ 18       node --version

That's it. The CLI handles local setup - no git clone, no build step, no Docker.

OS support: Linux, macOS, Windows. All CLI commands work in PowerShell, CMD, and Bash.


1. Install the CLI

npm install -g @tameflare/cli

This installs the tf command globally. No binary downloads, no Docker - the gateway runs in the cloud.

2. Log in

tf login

This opens your browser to tameflare.com. Log in (or register), then click Authorize to connect the CLI to your account. Credentials are saved to ~/.tameflare/credentials.json.

Tip
Don't have an account yet? Register at tameflare.com/register - the Starter tier is free with 3 gateways and 1,000 actions/month.

3. Create a gateway in the dashboard

Go to tameflare.com/dashboard/gateways and click Create gateway. Name your gateway (e.g. "dev" or "production"), and the wizard will walk you through:

  1. Connectors - select which APIs this gateway can access (GitHub, OpenAI, Anthropic, Stripe, Slack, MCP, Webhook, Generic HTTP). Click a connector to paste its API key and test the connection.
  2. Access Rules - per connector: toggle action categories on/off, set per-action overrides (allow / deny / require approval).
  3. Notifications - TameFlare dashboard (always on) + optional Slack.
  4. Review - pre-flight checklist, then click Create gateway.

4. Initialize locally

tf init

Since you're logged in, the CLI automatically fetches your gateways and selects one (or prompts you to pick if you have multiple). Use tf init --list to explicitly choose.

This creates a .tf/config.yaml with your gateway credentials.
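As a rough sketch, the generated file might look something like this (the field names below are illustrative assumptions, not the documented schema):

```yaml
# .tf/config.yaml - illustrative layout only; actual keys may differ
gateway:
  id: gw_xxx          # gateway ID from the dashboard
  token: gwtk_xxx     # proxy credential used by `tf run`
proxy: https://proxy.tameflare.com
```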

Note
CI/CD alternative: If you can't use tf login (headless environment), use tf init --gateway-id gw_xxx --gateway-token gwtk_xxx with the credentials from the dashboard.

5. Run your agent

tf run -- python agent.py

All outbound HTTP traffic from agent.py is now routed through the TameFlare cloud proxy at proxy.tameflare.com. The agent never sees real API keys.

The tf run command:

  1. Sets HTTP_PROXY / HTTPS_PROXY (and lowercase variants) to https://{gateway_token}@proxy.tameflare.com
  2. Spawns your process with the proxy env vars set
  3. Every HTTP request is routed through the cloud gateway
  4. The gateway parses, permission-checks, and logs each request
  5. Connectors and permissions are managed via the CLI or dashboard
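The env-var plumbing in steps 1-2 can be sketched in a few lines of Python (a minimal illustration, not the CLI's actual implementation; gwtk_xxx is a placeholder token):

```python
import os

def build_proxy_env(gateway_token: str) -> dict:
    """Return a copy of the environment with proxy vars pointing at the
    TameFlare cloud gateway, as `tf run` sets up before spawning the child."""
    proxy_url = f"https://{gateway_token}@proxy.tameflare.com"
    env = os.environ.copy()
    for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
        env[var] = proxy_url
    return env

# The CLI then spawns the agent with these vars set, e.g.:
# subprocess.run(["python", "agent.py"], env=build_proxy_env("gwtk_xxx"))
```

Most HTTP client libraries honor these environment variables automatically, which is why the agent itself needs no code changes.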

6. Set permissions

Manage per-action permission rules directly from the CLI:

tf permissions set github "github.*" allow              # Allow all GitHub actions
tf permissions set github "github.repo.delete" deny     # But deny repo deletion
tf permissions set openai "openai.chat.*" require_approval  # Gate chat completions
tf permissions list                                     # View all rules
tf permissions delete github "github.repo.delete"       # Remove a rule

Rules are enforced in real time by the cloud proxy. The most specific pattern wins:

Pattern               Matches               Specificity
github.repo.delete    Exact action          Highest
github.repo.*         All repo actions      Medium
github.*              All GitHub actions    Lowest

Decisions: allow (forward with credentials), deny (block with 403), require_approval (hold connection, wait for human).
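"Most specific pattern wins" can be illustrated with a small resolver (a simplified sketch: specificity here is counted as the number of literal, non-wildcard segments, which may differ from TameFlare's actual tie-breaking rules):

```python
from fnmatch import fnmatch

def resolve(action: str, rules: dict) -> str:
    """Return the decision from the most specific matching pattern."""
    matches = [(pattern, decision) for pattern, decision in rules.items()
               if fnmatch(action, pattern)]
    if not matches:
        return "deny"  # deny-all default: no rule, no access
    # more literal (non-wildcard) segments = more specific
    best = max(matches, key=lambda m: sum(seg != "*" for seg in m[0].split(".")))
    return best[1]

rules = {
    "github.*": "allow",           # allow all GitHub actions...
    "github.repo.delete": "deny",  # ...but never delete repos
}
```

Here `resolve("github.repo.delete", rules)` returns `"deny"` even though `github.*` also matches, because the exact pattern is more specific.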

You can also manage permissions in the dashboard gateway wizard under Access Rules.

7. Monitor

Open the Traffic page in the dashboard to see live requests, or use the CLI:

tf status                       # Gateway config + connectivity
tf logs                         # Opens dashboard traffic view

Tip
The proxy works with any process that makes HTTP calls - Python, Node.js, Go, Rust, shell scripts, LangChain, CrewAI, n8n, Claude Code, MCP servers, etc. No SDK integration required.

How it works

TameFlare uses a cloud proxy model. There is no binary to download, no server to deploy, no Docker container to run.

You (developer)
  │
  │  tf run -- python agent.py
  ▼
┌─────────────────────────────────────────────────────┐
│  proxy.tameflare.com (cloud gateway)                │
│                                                     │
│  1. Identifies gateway by token in proxy auth       │
│  2. Loads connectors + permissions from dashboard   │
│  3. Intercepts all HTTP/HTTPS from your process     │
│                                                     │
│  domain → connector → parse action → check perms    │
│                                                     │
│  allowed:          inject creds → forward           │
│  denied:           return 403                       │
│  require approval: hold connection → wait           │
└─────────────────────────────────────────────────────┘
  │
  ▼
External APIs (GitHub, Stripe, Slack, OpenAI...)

Key architecture points

  • The gateway runs in the cloud at proxy.tameflare.com. The CLI sets HTTP_PROXY / HTTPS_PROXY (and lowercase variants) and spawns your process - all HTTP traffic is routed through the cloud proxy.
  • The dashboard at tameflare.com is the control plane. Gateway configuration, audit logs, and approval workflows are managed there.
  • The CLI is lightweight. tf login authenticates, tf init picks a gateway, tf run proxies traffic. No local binary needed.

Key concepts

  • Deny-all default - no connector configured = no access. Your process cannot reach any domain you haven't explicitly set up.
  • Connectors parse raw HTTP requests into structured actions (e.g., github.pr.merge, mcp.tools.create_issue).
  • Permissions are per-gateway, per-connector, per-action. Most specific rule wins.
  • Credential vault stores API keys encrypted (AES-256-GCM). Injected by the proxy at request time - the agent process never sees real keys.
  • Kill switch blocks all traffic instantly. Managed from the dashboard.
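To make the connector concept concrete, here is a hypothetical routing table mapping raw HTTP requests to structured action names (the routes below are invented for this sketch, not TameFlare's real tables):

```python
import re

# Hypothetical routing rules: (method, host, path pattern) -> action name.
ROUTES = [
    ("PUT",  "api.github.com", re.compile(r"^/repos/[^/]+/[^/]+/pulls/\d+/merge$"), "github.pr.merge"),
    ("POST", "api.github.com", re.compile(r"^/repos/[^/]+/[^/]+/issues$"),          "github.issue.create"),
    ("POST", "api.openai.com", re.compile(r"^/v1/chat/completions$"),               "openai.chat.completion"),
]

def parse_action(method: str, host: str, path: str):
    """Map a raw request to an action name, or None if no connector matches."""
    for m, h, pattern, action in ROUTES:
        if method == m and host == h and pattern.match(path):
            return action
    return None  # unknown request -> blocked by the deny-all default
```

For example, `parse_action("PUT", "api.github.com", "/repos/acme/app/pulls/42/merge")` returns `"github.pr.merge"`, which is then checked against the gateway's permission rules.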

Example: govern a LangChain agent

# 1. Install the CLI
npm install -g @tameflare/cli
 
# 2. Log in (one-time per machine)
tf login
 
# 3. Create a gateway at tameflare.com/dashboard/gateways
#    - Add GitHub connector, set "Require approval" for merges
#    - Add OpenAI connector, set "Allow all"
#    - Paste your API keys into the vault
 
# 4. Initialize locally
tf init
 
# 5. Run your agent
tf run -- python langchain_agent.py
 
# 6. Fine-tune permissions from the CLI
tf permissions set github "github.pr.merge" require_approval
tf permissions set openai "openai.*" allow
 
# 7. Watch the traffic in the dashboard
tf logs

Output:

14:32:01 | langchain-prod | openai.chat.completion | ALLOW  | 245ms
14:32:03 | langchain-prod | github.issue.create    | ALLOW  | 89ms
14:32:05 | langchain-prod | github.pr.merge        | HOLD   | waiting...
14:32:18 | langchain-prod | github.pr.merge        | ALLOW  | 112ms (approved)

Troubleshooting

Dashboard shows no traffic - Make sure you've created a gateway in the dashboard and are running your process with tf run -- <command>.

tf init fails - Check that Node.js ≥ 18 is installed and you've run tf login first.

Agent can't reach APIs - The gateway uses deny-all by default. Add a connector for each API domain your agent needs in the dashboard wizard.

Gateway not enforcing rules - The cloud proxy caches config from the dashboard. If you changed rules, restart the process or wait a few minutes for config cache refresh.

Slack notifications not sending - Check that Slack is enabled in the gateway wizard's notification step and the bot is invited to the target channel.