
AWS Applications Deployment

This guide covers deploying the core Kindo application stack on AWS using the kindo-applications Terraform module.

Overview

The applications module deploys these Kindo services:

| Service | Purpose |
| --- | --- |
| API | Node.js backend REST/tRPC API |
| Next.js | Server-rendered React frontend |
| LiteLLM | AI model routing proxy |
| Llama Indexer | Document indexing service |
| SSOReady | SSO authentication services |
| Cerbos | Authorization policy engine |
| Task Worker | Background job processing |
| External Poller | Integration data polling |
| External Sync | Integration synchronization |
| Credits | Usage tracking service |
| Audit Log Exporter | Compliance log export |

Quick Start

  1. Set up the applications stack:

     ```sh
     cd kindo-modules/stacks/applications
     cp terraform.tfvars.example terraform.tfvars
     ```

  2. Configure terraform.tfvars with outputs from the infrastructure, secrets, and peripheries modules.

  3. Deploy:

     ```sh
     terraform init
     terraform plan
     terraform apply
     ```

Configuration

Core Settings

```hcl
project_name = "mycompany"
environment  = "production"
aws_region   = "us-west-2"

# Domain
domain_name = "kindo.mycompany.com"

# Kindo registry
kindo_registry_username = "..."
kindo_registry_password = "..."

# Application version
app_version = "2025.08.2"
```

From Previous Module Outputs

The applications module requires outputs from all three previous modules (infrastructure, secrets, peripheries), including:

  • EKS cluster details
  • Database endpoints and credentials
  • Redis and RabbitMQ endpoints
  • S3 bucket names
  • Unleash endpoint
  • Secret references
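
As an illustration, the wiring from the earlier stacks might look like the terraform.tfvars fragment below. The variable names here are hypothetical; use the names from your module's terraform.tfvars.example.

```hcl
# Hypothetical variable names -- consult terraform.tfvars.example for the real ones.

# From the infrastructure stack
eks_cluster_name = "mycompany-production-eks"
vpc_id           = "vpc-0123456789abcdef0"

# From the peripheries stack
postgres_endpoint = "mycompany-production.cluster-abc.us-west-2.rds.amazonaws.com"
redis_endpoint    = "master.mycompany-production.abc.usw2.cache.amazonaws.com"
rabbitmq_endpoint = "b-1234.mq.us-west-2.amazonaws.com"
unleash_endpoint  = "https://unleash.kindo.mycompany.com"
```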

Post-Deployment

Verify All Services

```sh
for ns in api next litellm llama-indexer ssoready cerbos task-worker-ts \
  external-poller external-sync credits audit-log-exporter; do
  echo "=== $ns ==="
  kubectl get pods -n "$ns"
done
```
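
To turn the loop above into a pass/fail check, the pod listing can be filtered for anything not yet healthy. This is a sketch, assuming bash and the default kubectl column layout (STATUS in column 3):

```shell
#!/usr/bin/env bash
# count_not_ready: reads `kubectl get pods --no-headers` output on stdin and
# prints the number of pods whose STATUS is neither Running nor Completed.
count_not_ready() {
  awk '$3 != "Running" && $3 != "Completed" { n++ } END { print n+0 }'
}

# Example (hypothetical): fail the deploy check if any API pod is unhealthy.
# [ "$(kubectl get pods -n api --no-headers | count_not_ready)" -eq 0 ]
```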

Configure DNS

Create DNS records pointing to the ALB endpoints:

```sh
# Get ALB addresses
kubectl get ingress -A -o wide
```

Create A/CNAME records for:

  • app.kindo.mycompany.com — Next.js ingress
  • api.kindo.mycompany.com — API ingress
  • sso.kindo.mycompany.com — SSOReady ingress
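
If you manage DNS in Route 53, the records can also be created with Terraform. A sketch, where the hosted zone ID and ALB address are placeholders (take the real address from kubectl get ingress):

```hcl
resource "aws_route53_record" "app" {
  zone_id = "Z0123456789EXAMPLE" # placeholder hosted zone ID
  name    = "app.kindo.mycompany.com"
  type    = "CNAME"
  ttl     = 300

  # Placeholder ALB address; copy the real one from `kubectl get ingress -A -o wide`.
  records = ["k8s-kindoshared-abc123.us-west-2.elb.amazonaws.com"]
}
```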

Initial Setup

  1. Create the first organization and admin user using the setup SQL (see SSO Setup)
  2. Configure AI models in Unleash (see Model Configuration)
  3. Set up integrations via Nango (see Integrations)

Test End-to-End

```sh
# Health check
curl https://api.kindo.mycompany.com/health

# Access the UI
open https://app.kindo.mycompany.com
```
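
Immediately after a deploy, DNS propagation or pod startup can make the first health check fail even though the stack is fine. A small retry helper (a sketch, assuming bash) keeps the check from flaking:

```shell
#!/usr/bin/env bash
# retry <attempts> <delay_seconds> <command...>: rerun a command until it
# succeeds or the attempt budget is exhausted.
retry() {
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/$attempts failed; retrying in ${delay}s" >&2
    sleep "$delay"
  done
  return 1
}

# Example: poll the API health endpoint for up to 5 minutes.
# retry 30 10 curl -fsS https://api.kindo.mycompany.com/health
```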

Ingress and DNS

The applications stack creates a shared Application Load Balancer (ALB) ingress group that fronts every Kindo service. Use the tables below to plan DNS, firewall rules, and health probes for self-managed installs. The values mirror the installer reference in infra/released-modules/kindo-helm-e2e-dist/docs/INGRESS.md.

Route Table

| Hostname | Namespace | Service | Port | Health Check | Paths | Access |
| --- | --- | --- | --- | --- | --- | --- |
| app.&lt;domain&gt; | next | next | 80 | /api/health | / | Whitelisted |
| api.&lt;domain&gt; | api | api | 80 | /healthcheck | / | Whitelisted |
| api.&lt;domain&gt; (webhooks) | api | api | 80 | /healthcheck | /merge-integration-changes, /merge-common-model-synced-webhook | Public/Webhook Provider |
| integrations-api.&lt;domain&gt; | nango | nango | 80 | /health | / | Provider callbacks (see Nango docs) |
| integrations-connect.&lt;domain&gt; | nango | nango-connect-ui | 3009 | / | / | Whitelisted |
| sso-auth.&lt;domain&gt; | ssoready | ssoready-auth | 80 | /health | / | Whitelisted |
| sso-api.&lt;domain&gt; | ssoready | ssoready-api | 80 | /health | / | Whitelisted |
| sso.&lt;domain&gt; | ssoready | ssoready-admin | 80 | /health | / | Whitelisted |
| sso-app.&lt;domain&gt; | ssoready | ssoready-app | 80 | /health | / | Whitelisted |
| hatchet.&lt;domain&gt; | hatchet | hatchet-api | 8080 | /api/ready | /api | Whitelisted |
| hatchet.&lt;domain&gt; (fallback) | hatchet | hatchet-frontend | 8080 | /api/ready | / | Whitelisted |
| litellm.&lt;domain&gt; | litellm | litellm | 4000 | /health/liveliness | / | Whitelisted |
| unleash.&lt;domain&gt; | unleash | unleash | 4242 | /health | / | Whitelisted |
| unleash-edge.&lt;domain&gt; | unleash | unleash-edge | 3063 | /health | / | Whitelisted |

Webhooks and Streaming Requirements

  • Merge.dev webhooks: /merge-integration-changes and /merge-common-model-synced-webhook must be reachable by Merge. In private (internal) ALB deployments these paths route over your corporate network; in internet-facing setups, allow-list Merge’s IP ranges or route the two paths via a dedicated ingress rule.
  • SSE streaming (/express/chat): The API ingress must disable response buffering, gzip, and short timeouts. Configure your ingress/WAF with:
    • Response buffering: disabled (proxy_buffering off in NGINX)
    • Read/send timeouts: ≥ 300s
    • Response body limits: none
    • No content transformation on SSE responses
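
With the default ALB ingress, the relevant timeout lives on the load balancer itself. A sketch of the annotation (the AWS Load Balancer Controller supports load-balancer-attributes; verify against your controller version):

```yaml
metadata:
  annotations:
    # Raise the ALB idle timeout so long-lived SSE responses are not cut off.
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=300
```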

Hatchet Path-Based Routing

hatchet.<domain> serves two backends: /api goes to hatchet-api (REST/gRPC) and the catch-all / route serves the admin UI. Ensure the /api rule has a higher priority than / so task-worker traffic doesn’t hit the UI service.
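
One way to express that split with the AWS Load Balancer Controller is a single Ingress that lists the more specific /api prefix before the catch-all. This is a sketch, with the service names and ports taken from the route table above and a placeholder domain:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hatchet
  namespace: hatchet
  annotations:
    alb.ingress.kubernetes.io/group.name: kindo-shared
spec:
  rules:
    - host: hatchet.kindo.mycompany.com
      http:
        paths:
          - path: /api            # more specific prefix: REST/gRPC backend
            pathType: Prefix
            backend:
              service:
                name: hatchet-api
                port:
                  number: 8080
          - path: /               # catch-all: admin UI
            pathType: Prefix
            backend:
              service:
                name: hatchet-frontend
                port:
                  number: 8080
```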

DNS Strategy

  • Wildcard: Point *.<domain> at the ALB for the quickest rollout.
  • Explicit records: Create individual A/CNAME entries for each hostname when you need per-host routing or firewall control.
  • Regardless of strategy, keep TLS certs aligned with the hostnames above. With cert-manager, add the full host list to your Certificate resources.
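
With cert-manager, a single Certificate resource can cover the full host list. A sketch, where the issuer name is a placeholder and the domain is the example domain used throughout this guide:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kindo-hosts
spec:
  secretName: kindo-hosts-tls
  issuerRef:
    name: letsencrypt-prod   # placeholder issuer
    kind: ClusterIssuer
  dnsNames:
    - app.kindo.mycompany.com
    - api.kindo.mycompany.com
    - integrations-api.kindo.mycompany.com
    - integrations-connect.kindo.mycompany.com
    - sso.kindo.mycompany.com
    - sso-auth.kindo.mycompany.com
    - sso-api.kindo.mycompany.com
    - sso-app.kindo.mycompany.com
    - hatchet.kindo.mycompany.com
    - litellm.kindo.mycompany.com
    - unleash.kindo.mycompany.com
    - unleash-edge.kindo.mycompany.com
```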

Bring Your Own Ingress

The shipped kindo-ingress chart is optional. If you remove it (make ingress-destroy) and supply your own ingress controller:

  • AWS ALB Controller: Use the kindo-shared IngressGroup order from the table above. Attach the AWSLoadBalancerControllerIAMPolicy to the controller role.
  • Non-VPC CNIs (Calico/Cilium): ALB target-type=ip cannot reach pod IPs on those CNIs. Either switch services to NodePort and use target-type=instance, or adopt an in-cluster ingress (NGINX, Traefik, Istio, etc.).
  • NGINX example: set proxy-buffering: "off", proxy-read-timeout: 300, proxy-send-timeout: 300, and disable gzip on the API ingress. Similar settings exist for Traefik (respondingTimeouts) and Envoy (stream_idle_timeout).

These requirements also apply to downstream proxies (CloudFront, F5, HAProxy, etc.). Always replicate the streaming and health-check behavior documented above.
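
For ingress-nginx, the streaming requirements map to per-Ingress annotations. A sketch for the API ingress (annotation names are from the ingress-nginx controller; confirm they exist in your installed version):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "off"   # do not buffer SSE responses
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"     # no body size limit
```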

Troubleshooting

Pods in CrashLoopBackOff

```sh
kubectl logs <pod-name> -n <namespace> --previous
kubectl describe pod <pod-name> -n <namespace>
```

Common causes: missing secrets, database connection failures, incorrect environment variables.

Ingress Not Working

```sh
# Check ALB controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

# Verify target group health
aws elbv2 describe-target-health --target-group-arn <arn>
```