AI tools need access to internal resources, models, and APIs. Traditional approaches force a choice between security and velocity. These gateways eliminate that tradeoff.
Two open-source gateways built on OpenZiti route AI clients to tools and models through an encrypted overlay: cryptographic identity, end-to-end encryption, no shared API keys, no open ports, no VPN.
Both gateways share the same OpenZiti zero-trust foundation. They work independently, but they're designed to work together.
A single OpenZiti identity gives an agent access to specific LLM models and specific MCP tools. No separate credentials for each system.
Trace a request from agent through LLM call to tool invocation and back. See the full picture of what your AI workflows are doing.
Consistent policies across model access and tool access. Same identity model, same enforcement approach, same audit trail.
Zero-trust access to MCP tool servers from Claude Desktop, Cursor, VS Code, and any MCP-compatible client.
Wrap any MCP server with a single mcp-bridge command. No code changes to your server.
Combine local stdio servers and remote zrok shares into a single connection for your client.
Your clients see a clean, unified toolset regardless of how many backends you run. Tools are namespaced automatically - no collisions, no manual prefixing.
Permission filtering removes tools from the registry entirely. Not checked at runtime - gone from the schema.
Each client gets dedicated backend connections. One client's crash or misbehavior never affects another.
No listening ports. Nothing to scan, nothing to probe. If you're not authorized, the service doesn't exist.
# Aggregate multiple backends with filtering
backends:
  - id: "files"
    transport:
      type: "stdio"
      command: "mcp-filesystem-server"
    tools:
      mode: "allow"
      list: ["read_*", "list_*"]
  - id: "github"
    transport:
      type: "zrok"
      share_token: "abc123def"
    tools:
      mode: "deny"
      list: ["delete_*", "drop_*"]
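The allow/deny lists in the config above boil down to schema-time filtering: disallowed tools never appear in the registry a client sees, so there is nothing left to check at call time. A minimal sketch in Python (the real bridge is written in Go, and these helper names are hypothetical):

```python
import fnmatch

# Schema-time filtering sketch (illustrative only, not the Go source).
# Tools filtered out here never reach the client's tool registry.
def filter_tools(tool_names, mode, patterns):
    matched = lambda name: any(fnmatch.fnmatch(name, p) for p in patterns)
    if mode == "allow":
        return [t for t in tool_names if matched(t)]
    return [t for t in tool_names if not matched(t)]
```

Applied to the "github" backend above, `delete_repo` would simply be absent from the advertised schema rather than rejected at invocation.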
# Wrap any MCP server in one command
mcp-bridge run /path/to/mcp-server

# Connect from Claude Desktop
mcp-tools run <share-token>
View on GitHub
OpenAI-compatible proxy with semantic routing and zero-trust networking. Change your base_url and everything else works.
Route across OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Google Vertex AI, Ollama, and any OpenAI-compatible endpoint without changing client code.
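Because the gateway speaks the OpenAI wire format, switching a client over is a one-line base-URL change. A minimal sketch using only the Python standard library (the localhost:8080 address is a placeholder, not a documented default, and no request is actually sent here):

```python
import json
import urllib.request

# Swap the vendor endpoint for wherever llm-gateway is listening.
# http://localhost:8080/v1 is an assumed address, not a documented default.
BASE_URL = "http://localhost:8080/v1"   # was: https://api.openai.com/v1

payload = {
    "model": "llama3",  # the gateway routes this to a backing provider
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Everything else about the request shape is unchanged OpenAI format.
print(req.full_url)
```

Existing SDKs and clients that accept a configurable base URL work the same way.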
Picks the best model per request. Three-layer cascade: heuristics, embeddings, optional LLM classifier.
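The cascade can be pictured with a toy sketch (purely illustrative Python; the helpers, thresholds, and model names are all hypothetical, not the gateway's actual API):

```python
# Illustrative three-layer routing cascade. The stubs below stand in
# for real embedding lookups and an optional LLM classifier.

def embedding_scores(prompt):
    # Stub for cosine similarity against per-category centroids.
    return {"chat-model": 0.9 if "explain" in prompt else 0.3}

def llm_classify(prompt):
    # Stub for an optional LLM-based classifier.
    return "general-model"

def route(prompt):
    # Layer 1: cheap heuristics (keywords, length) catch obvious cases.
    if "```" in prompt or "def " in prompt:
        return "code-model"
    if len(prompt.split()) < 8:
        return "small-model"
    # Layer 2: embedding similarity, accepted only above a threshold.
    best, score = max(embedding_scores(prompt).items(), key=lambda kv: kv[1])
    if score > 0.8:
        return best
    # Layer 3: fall back to the LLM classifier for ambiguous prompts.
    return llm_classify(prompt)
```

Each layer is more expensive than the last, so most requests are routed before the classifier ever runs.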
Distribute requests across multiple Ollama instances with health checks and automatic failover.
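In spirit, health-aware failover looks like this (an illustrative sketch, not the gateway's source; the function names are hypothetical):

```python
# Round-robin selection over a pool of Ollama instances, skipping any
# that fail their health check (illustrative sketch only).

def pick_backend(backends, is_healthy, start=0):
    """Walk the pool round-robin from `start`, returning the first
    backend whose health check passes."""
    n = len(backends)
    for i in range(n):
        candidate = backends[(start + i) % n]
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backends available")
```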
Connect to models on other machines via zrok. No open ports, no VPN, no firewall rules.
PII detection, content safety filtering, topic allow/deny lists, and prompt injection detection.
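As a flavor of what a PII pre-filter does, here is a hypothetical regex-based sketch (the gateway's real detectors may use entirely different techniques and patterns):

```python
import re

# Hypothetical regex-based PII redaction; illustrative patterns only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    # Replace each detected span with a labeled placeholder before the
    # prompt ever reaches a provider.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```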
Consistent streaming behavior whether you're hitting OpenAI, Ollama, or anything in between.
Three deployment modes: public, private, and reserved shares.
# Point it at your providers
providers:
  open_ai:
    api_key: "${OPENAI_API_KEY}"
  anthropic:
    api_key: "${ANTHROPIC_API_KEY}"
  bedrock:
    region: "us-east-1"
    profile: "default"
  ollama:
    base_url: "http://localhost:11434"

llm-gateway run config.yaml
View on GitHub
Both projects are Apache 2.0, written in Go, and ship as single binaries with no runtime dependencies. They work with the tools you already use - no code changes, no new SDKs, no workflow disruption.
The fastest path to getting hands-on:
go install github.com/openziti/llm-gateway/cmd/llm-gateway@latest
Create a config pointing at your local Ollama and run it. Any OpenAI-compatible client can talk to it. Takes about two minutes.
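A minimal config for that local-Ollama setup might look like this (a sketch assuming the same provider schema as the example above; adjust base_url for your instance):

```yaml
# Minimal llm-gateway config pointing at a local Ollama
providers:
  ollama:
    base_url: "http://localhost:11434"
```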
Getting started guide: install mcp-bridge and mcp-tools, wrap an MCP server, and connect from Claude Desktop.
Getting started guide

zrok provides a user experience layer for OpenZiti. It handles network configuration, identity provisioning, and share management automatically - so both gateways can offer encrypted, identity-based connectivity without impacting velocity.