MCP Gateway + LLM Gateway. Identity-First Connectivity™ for AI.

AI tools need access to internal resources, models, and APIs. Traditional approaches force a choice between security and velocity. These gateways eliminate that tradeoff.

[Architecture diagram: AI clients connecting through the OpenZiti overlay to MCP servers and LLM providers]

Two open source gateways built on OpenZiti. Route AI clients to tools and models through a zero-trust overlay with cryptographic identity and end-to-end encryption: no shared API keys, no open ports, no VPN.

One identity, one security model

Both gateways are built on OpenZiti and share the same zero-trust foundation. They work independently, but they're designed to work together.

Unified identity

A single OpenZiti identity gives an agent access to specific LLM models and specific MCP tools. No separate credentials for each system.

Correlated observability

Trace a request from agent through LLM call to tool invocation and back. See the full picture of what your AI workflows are doing.

Coordinated governance

Consistent policies across model access and tool access. Same identity model, same enforcement approach, same audit trail.

MCP Gateway

Zero-trust access to MCP tool servers from Claude Desktop, Cursor, VS Code, and any MCP-compatible client.

One-command setup

Wrap any MCP server with a single mcp-bridge command. No code changes to your server.

Multi-backend aggregation

Combine local stdio servers and remote zrok shares into a single connection for your client.

Tool namespacing

Your clients see a clean, unified toolset regardless of how many backends you run. Tools are namespaced automatically - no collisions, no manual prefixing.
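The collision-free merge can be illustrated with a short sketch. The `backend_id.tool_name` convention used here is an assumption for illustration; the gateway's actual separator and registry shape may differ:

```python
def namespace_tools(backends: dict) -> dict:
    """Merge tool lists from several backends into one registry,
    prefixing each tool with its backend id so identical tool
    names on different backends never collide."""
    registry = {}
    for backend_id, tools in backends.items():
        for tool in tools:
            registry[f"{backend_id}.{tool}"] = tool  # namespaced -> original
    return registry

merged = namespace_tools({
    "files": ["read_file", "list_dir"],
    "github": ["read_file", "create_issue"],  # same tool name, no collision
})
```

With the prefix applied automatically, a client sees one flat toolset and never needs to know which backend serves which tool.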

Security by construction

Permission filtering removes tools from the registry entirely. Not checked at runtime - gone from the schema.
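Allow/deny filtering of the kind configured below can be sketched with glob matching; this is a simplified illustration, not the gateway's actual implementation:

```python
from fnmatch import fnmatch

def filter_tools(tools, mode, patterns):
    """Return only the tools that survive an allow/deny pattern list.
    Filtered tools are dropped from the registry entirely, so clients
    never see them in the advertised schema."""
    def matches(name):
        return any(fnmatch(name, p) for p in patterns)
    if mode == "allow":
        return [t for t in tools if matches(t)]
    return [t for t in tools if not matches(t)]

visible = filter_tools(
    ["read_file", "list_dir", "delete_repo"],
    mode="allow", patterns=["read_*", "list_*"],
)
```

Because the filtered registry is what gets advertised, a denied tool is not merely blocked on invocation; it is invisible to the client.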

Session isolation

Each client gets dedicated backend connections. One client's crash or misbehavior never affects another.

Dark by default

No listening ports. Nothing to scan, nothing to probe. If you're not authorized, the service doesn't exist.

# Aggregate multiple backends with filtering
backends:
  - id: "files"
    transport:
      type: "stdio"
      command: "mcp-filesystem-server"
    tools:
      mode: "allow"
      list: ["read_*", "list_*"]
  - id: "github"
    transport:
      type: "zrok"
      share_token: "abc123def"
    tools:
      mode: "deny"
      list: ["delete_*", "drop_*"]

# Wrap any MCP server in one command
mcp-bridge run /path/to/mcp-server

# Connect from Claude Desktop
mcp-tools run <share-token>
View on GitHub

LLM Gateway

OpenAI-compatible proxy with semantic routing and zero-trust networking. Change your base_url and everything else works.
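Because the proxy speaks the OpenAI wire format, a client swaps only its endpoint; the request body is untouched. A sketch of that payload (the gateway address is a placeholder assumption):

```python
import json

# The same chat-completions payload a client would send to api.openai.com;
# only the endpoint changes when pointing at the gateway.
base_url = "http://localhost:8080/v1"  # assumed local gateway address
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
body = json.dumps(payload)
```

Any SDK that accepts a custom base URL can point here without code changes beyond that one setting.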

Multi-provider routing

Route across OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Google Vertex AI, Ollama, and any OpenAI-compatible endpoint without changing client code.

Semantic routing

Picks the best model per request. Three-layer cascade: heuristics, embeddings, optional LLM classifier.
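The cascade can be sketched as a chain of classifiers, each consulted only when the previous one abstains. The rules and model names here are illustrative assumptions, not the gateway's actual routing table:

```python
def heuristic_route(prompt):
    # Layer 1: cheap keyword rules; return None to fall through.
    if "```" in prompt or "def " in prompt:
        return "code-model"
    if len(prompt) < 40:
        return "small-model"
    return None

def embedding_route(prompt):
    # Layer 2: nearest-centroid lookup over embeddings (stubbed here).
    return None

def llm_classifier_route(prompt):
    # Layer 3: optional LLM classifier; final fallback.
    return "general-model"

def route(prompt):
    """Try each layer in order; the first non-None answer wins."""
    for layer in (heuristic_route, embedding_route, llm_classifier_route):
        choice = layer(prompt)
        if choice is not None:
            return choice

model = route("def add(a, b): return a + b")
```

The ordering keeps cost proportional to ambiguity: most requests resolve at the cheap heuristic layer, and only hard cases pay for an extra model call.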

Ollama load balancing

Distribute requests across multiple Ollama instances with health checks and automatic failover.
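Round-robin over healthy instances can be sketched like this (illustrative only; the gateway's actual balancing and health-check policy may differ):

```python
from itertools import cycle

class Balancer:
    """Round-robin across instances, skipping any marked unhealthy."""
    def __init__(self, instances):
        self.instances = instances
        self.healthy = set(instances)
        self._ring = cycle(instances)

    def mark_down(self, url):
        self.healthy.discard(url)

    def mark_up(self, url):
        self.healthy.add(url)

    def pick(self):
        # Walk the ring at most once; fail loudly if nothing is healthy.
        for _ in range(len(self.instances)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy Ollama instances")

lb = Balancer(["http://ollama-a:11434", "http://ollama-b:11434"])
lb.mark_down("http://ollama-a:11434")  # simulate a failed health check
```

When a downed instance passes its health check again, `mark_up` returns it to the rotation automatically.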

Private model mesh

Connect to models on other machines via zrok. No open ports, no VPN, no firewall rules.

Guardrails

PII detection, content safety filtering, topic allow/deny lists, and prompt injection detection.
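As a sketch of what PII detection involves, here is a minimal regex-based redactor. Real guardrails use richer detectors and more patterns; the two shown are illustrative:

```python
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

clean = redact("Contact jane@example.com, SSN 123-45-6789.")
```

Running redaction in the gateway means every provider behind it gets the same protection, with no per-client configuration.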

Full streaming

Consistent streaming behavior whether you're hitting OpenAI, Ollama, or anything in between. The gateway runs in three deployment modes: public, private, and reserved zrok shares.

# Point it at your providers
providers:
  open_ai:
    api_key: "${OPENAI_API_KEY}"
  anthropic:
    api_key: "${ANTHROPIC_API_KEY}"
  bedrock:
    region: "us-east-1"
    profile: "default"
  ollama:
    base_url: "http://localhost:11434"

# Run the gateway with your config
llm-gateway run config.yaml
View on GitHub

Both projects are Apache 2.0, written in Go, and ship as single binaries with no runtime dependencies. They work with the tools you already use - no code changes, no new SDKs, no workflow disruption.

Get started

The fastest paths to hands-on experience:

Try the LLM Gateway with a local Ollama

go install github.com/openziti/llm-gateway/cmd/llm-gateway@latest

Create a config pointing at your local Ollama and run it. Any OpenAI-compatible client can talk to it. Takes about two minutes.
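A minimal config might look like the fragment below, mirroring the providers block shown earlier. The exact schema keys are assumptions; check the getting started guide for the authoritative format:

```
providers:
  ollama:
    base_url: "http://localhost:11434"
```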

Getting started guide

Try the MCP Gateway with Claude Desktop

Install mcp-bridge and mcp-tools, wrap an MCP server, connect from Claude Desktop.

Getting started guide

Easy setup with zrok

zrok provides a user experience layer for OpenZiti. It handles network configuration, identity provisioning, and share management automatically - so both gateways can offer encrypted, identity-based connectivity without sacrificing velocity.

Use zrok.io

Free hosted zrok service from NetFoundry. Create an account and start sharing in minutes.

Get started on zrok.io

Self-host zrok

Run your own zrok instance for full control over the overlay network.

Self-hosting guide