Nginx
Architecture 1: Nginx in Front of Trident
Nginx handles TLS and forwards all requests to Trident. Trident caches responses from your backend.
Client --HTTPS--> Nginx (:443) --HTTP--> Trident (:8120) --HTTP--> Backend (:8080)

Nginx Configuration
upstream trident {
    server 127.0.0.1:8120;
    keepalive 64;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://trident;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

Trident Configuration
{
    "server": {
        "listeners": [
            { "address": "127.0.0.1:8120" }
        ]
    },
    "backends": [
        {
            "name": "origin",
            "address": "127.0.0.1:8080"
        }
    ]
}

When Nginx sits in front, bind Trident to 127.0.0.1:8120 instead of 0.0.0.0:8120 so it only accepts connections from processes on the same host, such as Nginx.
The keepalive 64 directive keeps up to 64 idle connections to Trident open per Nginx worker, and proxy_http_version 1.1 together with the cleared Connection "" header lets those connections be reused across requests, reducing connection-setup overhead.
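A minimal sketch of what loopback-only binding buys you: the listener below stands in for Trident, bound to 127.0.0.1 on an ephemeral port (a real deployment would use 8120), so only clients on the same host, such as Nginx, can reach it.

```python
import socket

# Stand-in for Trident: a TCP listener bound to the loopback
# interface only. Port 0 asks the OS for a free port so the
# sketch runs anywhere.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

# A client on the same host (e.g. Nginx proxying upstream)
# can connect over loopback; remote hosts cannot, because the
# socket is not bound on any external interface.
conn = socket.create_connection(("127.0.0.1", port), timeout=1)
conn.close()
print(f"loopback listener reachable on 127.0.0.1:{port}")
```

On a live host you can confirm the same thing with ss -tlnp, checking that the local address shown for port 8120 is 127.0.0.1 rather than 0.0.0.0.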
Architecture 2: Trident in Front of Nginx
Trident is the entry point. Cache misses are forwarded to Nginx, which serves the application.
Client --HTTP--> Trident (:8120) --HTTP--> Nginx (:8080) --> App

Trident Configuration
{
    "server": {
        "listeners": [
            { "address": "0.0.0.0:8120" }
        ]
    },
    "backends": [
        {
            "name": "origin",
            "address": "127.0.0.1:8080"
        }
    ]
}

Nginx Configuration
Configure Nginx to serve the application on port 8080:
server {
    listen 8080;
    server_name example.com;

    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Health Checks
If you use Nginx in front of Trident, you can configure a health check endpoint that verifies Trident is responding:
location /health {
    proxy_pass http://trident/health;
    access_log off;
}

Trident's admin API (default 127.0.0.1:6085) exposes a health endpoint you can use for monitoring and load balancer health checks separately from the cache port.
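To illustrate what such a probe does, here is a minimal health-check sketch in Python, with a stub HTTP server standing in for Trident's admin API (the /health path and plain 200 response are assumptions for the sketch, not Trident's documented behavior):

```python
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_health(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Stub server standing in for the admin API's health endpoint.
class StubHealth(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch's output quiet

server = HTTPServer(("127.0.0.1", 0), StubHealth)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

print(check_health(f"{base}/health"))   # True
print(check_health(f"{base}/missing"))  # False
```

A monitoring system or load balancer does essentially this on a schedule, marking the backend unhealthy after some number of consecutive failures.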
Cache Purge Forwarding
To forward purge requests from Nginx to Trident's admin API:
location /purge {
    allow 127.0.0.1;
    deny all;
    proxy_pass http://127.0.0.1:6085;
    proxy_set_header Host $host;
}

Always restrict access to purge endpoints using allow/deny rules or firewall rules. Exposing purge endpoints publicly allows anyone to invalidate your cache.
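The allow/deny check above amounts to a small allow-list test; the sketch below mirrors it in Python (the ALLOWED networks are an assumption for the sketch — adjust them to whichever hosts should be able to purge):

```python
import ipaddress

# Mirror of the Nginx rules above: only loopback clients may
# reach the purge endpoint. Extend this list for trusted hosts.
ALLOWED = [ipaddress.ip_network("127.0.0.1/32")]

def may_purge(client_ip: str) -> bool:
    """Return True if client_ip falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED)

print(may_purge("127.0.0.1"))    # True
print(may_purge("203.0.113.9"))  # False
```

Whether you enforce this in Nginx, in a firewall, or both, the decision is the same membership test; defense in depth means applying it at more than one layer.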