AI Agent Proxy
Overview
AI Agent Proxy is a security layer that protects AI agent applications by intercepting, analyzing, and securing communications between end users and AI agents. It provides real-time threat detection, guardrails enforcement, and response filtering for AI agent deployments running in customer environments.
Key Features
Threat Detection: Real-time scanning and blocking of malicious requests before they reach your AI agent
Request Guardrails: Enforce security policies on incoming requests to prevent attacks and policy violations
Response Guardrails: Scan and filter AI agent responses for sensitive data, policy violations, and security issues
Response Redaction: Automatically redact sensitive information from AI agent responses
Complete Visibility: Monitor all AI agent communications with comprehensive logging
Container-Based Deployment: Deploy as Docker containers alongside your AI agent infrastructure
Architecture
The AI Agent Proxy can be deployed as a Docker container or as a Kubernetes sidecar alongside your AI agent, providing a secure gateway for all AI agent traffic.
Traffic Flow:
End user sends a request to the AI Agent Proxy endpoint
The proxy performs threat detection and applies request guardrails
Valid requests are forwarded to the AI agent container
The AI agent processes the request and returns its response to the proxy
The proxy applies response guardrails and redaction rules to the response
The end user receives the final response (original, blocked, or redacted)
Deployment
Prerequisites
Docker and Docker Compose installed on your VM
An AI agent application running as a Docker container
Network connectivity between proxy and AI agent containers
Docker Compose Setup
Locally running AI Agent
Create a docker-compose.yml file to run both the AI agent and proxy containers:
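A minimal sketch of such a file is shown below; the image references, service names, ports, and credential values are placeholders (see Environment Variables below for what each setting means):

```yaml
# Illustrative only: replace image names, ports, and credentials with your own values
services:
  your-agent:
    image: your-agent-image:latest            # your AI agent application
    ports:
      - "3001:3001"

  akto-proxy:
    image: <akto-ai-agent-proxy-image>        # proxy image reference from the Akto dashboard
    environment:
      AKTO_API_TOKEN: "<your-akto-api-token>"
      AKTO_API_BASE_URL: "<akto-ingestion-url>"
      APP_URL: "http://your-agent:3001"       # docker-compose service name of the agent
      PROJECT_NAME: "my-ai-agent"
      APP_TYPE: "agent"
      AKTO_PROXY_PORT: "8080"
    ports:
      - "8080:8080"
    depends_on:
      - your-agent
```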
Hosted AI Agent
Alternatively, if your agent is hosted elsewhere and you want to access it through a locally running proxy, use this docker-compose.yml:
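A corresponding sketch, assuming the hosted agent is reachable at a public URL (all values are placeholders):

```yaml
# Illustrative only: the proxy runs locally and forwards to the hosted agent
services:
  akto-proxy:
    image: <akto-ai-agent-proxy-image>        # proxy image reference from the Akto dashboard
    environment:
      AKTO_API_TOKEN: "<your-akto-api-token>"
      AKTO_API_BASE_URL: "<akto-ingestion-url>"
      APP_URL: "https://your-hosted-agent.example.com"   # externally hosted AI agent
      PROJECT_NAME: "my-ai-agent"
      APP_TYPE: "agent"
      AKTO_PROXY_PORT: "8080"
    ports:
      - "8080:8080"
```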
Environment Variables
Configure the AI Agent Proxy with the following environment variables:
| Variable | Description | Required | Default |
| --- | --- | --- | --- |
| AKTO_API_TOKEN | Authentication token from Akto dashboard | Yes | - |
| AKTO_API_BASE_URL | URL for Akto data ingestion service (obtained from Akto dashboard) | Yes | - |
| APP_URL | Base URL where your AI agent is running. For docker-compose use the service name (e.g., http://your-agent:3001); for local testing use localhost | Yes | - |
| PROJECT_NAME | Unique identifier for this AI agent deployment | Yes | - |
| APP_TYPE | Type of application being proxied: agent or mcp-server | Yes | agent |
| APP_SERVER_NAME | Name to identify this agent server for policy filtering. If not set, it will be automatically extracted from the APP_URL hostname | No | (extracted from APP_URL) |
| AKTO_PROXY_PORT | Port where AI Agent Shield will listen | No | 8080 |
| SKIP_THREAT | Set to true to skip sending threat reports to Akto (useful for testing) | No | false |
| REQUEST_TIMEOUT | Timeout for forwarding requests to the AI agent (in seconds) | No | 120 |
| MAX_REQUEST_SIZE | Maximum request body size in bytes (0 = unlimited) | No | 0 |
| MAX_RESPONSE_SIZE | Maximum response body size in bytes (0 = unlimited) | No | 0 |
| ALLOWED_HTTP_METHODS | Comma-separated list of allowed HTTP methods (empty = all allowed) | No | (all methods) |
| APPLY_GUARDRAILS_TO_SSE | Apply guardrails to SSE (Server-Sent Events / text/event-stream) requests | No | true |
| GUARDRAIL_ENDPOINTS | Specific endpoints to apply guardrails. Format: METHOD:PATH or just PATH (defaults to POST). Comma-separated. If set, only these endpoints will have guardrails applied. Example: POST:/v1/workspace/slug/chat,GET:/v1/query | No | (apply to all SSE) |
Start the Services
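Assuming the docker-compose.yml sketched above:

```bash
# Start the proxy (and agent, if defined) in the background
docker compose up -d

# Confirm the containers are running
docker compose ps
```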
Configure Your Application
Update your application to route AI agent requests through the proxy:
Before:
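A hypothetical example, assuming your application reads the agent endpoint from an AGENT_URL setting and currently calls the agent directly:

```bash
AGENT_URL=http://your-agent:3001   # requests go straight to the AI agent
```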
After:
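Point the same setting at the AI Agent Proxy (port 8080 by default); the proxy forwards valid requests to the agent:

```bash
AGENT_URL=http://akto-proxy:8080   # requests pass through the proxy first
```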
Kubernetes Setup
Prerequisites
Kubernetes cluster (v1.19+)
kubectl configured to access your cluster
An AI agent application deployed in Kubernetes
Akto API token from app.akto.io
Architecture: Sidecar Pattern
The AI Agent Shield runs as a sidecar container in the same pod as your AI agent, securing all traffic over localhost without adding a network hop.
Benefits:
No added network latency (localhost communication)
Automatic scaling with main container
Per-pod isolation
Simplified service routing
Traffic Flow:
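Within the pod, traffic follows the same pattern as the Docker deployment, but over localhost:

End user → Kubernetes Service → AI Agent Shield sidecar (port 8080) → AI agent container (localhost) → response guardrails and redaction applied by the Shield → end user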
Step 1: Create ConfigMap
Create a ConfigMap with common AI Agent Shield configuration:
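A sketch of such a ConfigMap; the name akto-shield-config and all values are illustrative, and the agent is assumed to listen on localhost:3001 inside the pod:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: akto-shield-config
data:
  AKTO_API_BASE_URL: "https://<akto-ingestion-url>"
  APP_URL: "http://localhost:3001"   # the agent container in the same pod
  PROJECT_NAME: "my-ai-agent"
  APP_TYPE: "agent"
  AKTO_PROXY_PORT: "8080"
```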
Step 2: Add Secret
Add your Akto API token to a Kubernetes secret:
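For example (the secret name akto-api-token is illustrative):

```bash
kubectl create secret generic akto-api-token \
  --from-literal=AKTO_API_TOKEN=<your-akto-api-token>
```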
Step 3: Update Deployment with Sidecar
Add the AI Agent Shield sidecar container to your existing deployment:
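A sketch of the sidecar addition, assuming the ConfigMap and Secret above and placeholder names and images:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ai-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-ai-agent
  template:
    metadata:
      labels:
        app: my-ai-agent
    spec:
      containers:
        - name: ai-agent                       # your existing AI agent container
          image: your-agent-image:latest
          ports:
            - containerPort: 3001
        - name: akto-shield                    # AI Agent Shield sidecar
          image: <akto-ai-agent-proxy-image>   # image reference from the Akto dashboard
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: akto-shield-config
            - secretRef:
                name: akto-api-token
```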
Apply the updated deployment:
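Assuming the manifest above is saved as deployment.yaml:

```bash
kubectl apply -f deployment.yaml
```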
Step 4: Update Service
Update your service to route traffic to the AI Agent Shield sidecar port (8080):
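A sketch of the Service change; the selector and port values follow the placeholder deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ai-agent
spec:
  selector:
    app: my-ai-agent
  ports:
    - name: http
      port: 80           # port exposed by the Service
      targetPort: 8080   # AI Agent Shield sidecar port
```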
Apply the service update:
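Assuming the manifest above is saved as service.yaml:

```bash
kubectl apply -f service.yaml
```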
Traffic Flow with Shield:
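With the Service targeting port 8080, all external traffic now passes through the Shield before reaching the agent:

End user → Service (targetPort 8080) → AI Agent Shield sidecar → AI agent (localhost) → response guardrails and redaction → end user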
Step 5: Verify Deployment
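For example, check that both containers in each pod are running and inspect the sidecar's logs (names follow the sketches above):

```bash
# Both containers (agent + Shield) should show READY 2/2
kubectl get pods -l app=my-ai-agent

# Tail the Shield sidecar's logs; replace <pod-name> with a pod from the previous command
kubectl logs <pod-name> -c akto-shield --tail=50
```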
Configuration
All guardrails and security policies are configured through the Akto dashboard at app.akto.io. You can define:
Request guardrails (rate limiting, pattern matching, PII detection)
Response guardrails (PII redaction, sensitive data blocking, content filtering)
Threat detection rules (prompt injection, SQL injection, command injection, etc.)
Custom security policies specific to your organization
Navigate to Akto Argus Dashboard → Settings → Guardrails to configure your security policies.
Monitoring & Logging
Container Logs
View real-time logs from the proxy:
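For the Docker Compose deployment sketched above:

```bash
docker compose logs -f akto-proxy
```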
Dashboard Integration
Connect to Akto dashboard for centralized monitoring:
Log in to app.akto.io
Navigate to Akto Argus Dashboard → Connectors → AI Agent Proxy
View real-time metrics:
Request volume and trends
Threat detection statistics
Blocked request analysis
Top guardrails triggered
Response redaction statistics
Get Support
There are multiple ways to request support from Akto. We are available 24x7 on the following channels:
In-app
Message us with your query on Intercom in the Akto dashboard and someone will reply. You can also join our Discord channel for community support.
Contact
Email [email protected] for email support, or contact us here.