Deploying Shoehorn with Helm

This guide covers deploying Shoehorn to a Kubernetes cluster using the official Helm chart.

Prerequisites:

  • Kubernetes cluster (v1.28+)
  • Helm (v3.12+)
  • kubectl configured for your cluster
  • A storage class for persistent volumes
  • DNS records pointing to your cluster’s ingress

A production Shoehorn deployment consists of:

  • API (2 replicas) - REST API gateway
  • Web (2 replicas) - Svelte frontend
  • Worker (3 replicas) - Background job processor
  • Crawler (2 replicas) - GitHub repository discovery
  • Forge (2 replicas) - Workflow engine
  • EventBus (1 replica) - Event streaming manager
  • PostgreSQL (1 replica or external managed)
  • Meilisearch (1 replica)
  • Valkey (1 replica or external managed)
  • Redpanda (1 replica or external managed)
  • Cerbos (1 replica) - Authorization engine

On 2-node clusters, or when nodes are already above 80% of their CPU in requests, set every replicaCount.* value to 1. The default 2-replica services need a surge replica during rolling upgrades and can wedge with Insufficient cpu scheduling errors.
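For example, a small-cluster override file could look like this (a sketch; the keys mirror the replicaCount block in the values.yaml shown later, and the filename is an arbitrary choice to pass with an extra -f flag at install time):

```yaml
# small-cluster-values.yaml
replicaCount:
  api: 1
  web: 1
  worker: 1
  crawler: 1
  forge: 1
  eventbus: 1
```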

Create the namespace:

kubectl create namespace shoehorn

Create the database credentials secret:

kubectl create secret generic database-credentials -n shoehorn \
  --from-literal=postgres_password='<admin-password>' \
  --from-literal=db_password='<app-user-password>'

Create the authentication credentials secret:

kubectl create secret generic auth-credentials -n shoehorn \
  --from-literal=session-encryption-key="$(openssl rand -hex 32)" \
  --from-literal=service-user-pat='<zitadel-service-user-pat>'

Create the service credentials secret:

kubectl create secret generic service-credentials -n shoehorn \
  --from-literal=meilisearchMasterKey="$(openssl rand -base64 32)" \
  --from-literal=valkey_password='<valkey-password>'

Create the GitHub integration secret:

kubectl create secret generic integration-credentials -n shoehorn \
  --from-literal=github_app_id='<app-id>' \
  --from-literal=github_app_installation_id='<installation-id>' \
  --from-file=github_app_private_key=path/to/private-key.pem

Create the SMTP credentials secret:

kubectl create secret generic smtp-credentials -n shoehorn \
  --from-literal=smtp_password='<smtp-password>'
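The `openssl rand -hex 32` above generates 32 random bytes hex-encoded, which is exactly 64 characters; a quick local sanity check before creating the secret:

```shell
# Generate a session encryption key the same way as the secret command above
key="$(openssl rand -hex 32)"

# 32 random bytes, hex-encoded, is exactly 64 characters
echo "${#key}"    # prints 64
```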

Create a values.yaml file for your deployment:

global:
  domain: shoehorn.example.com
  storageClass: "standard" # your cluster's storage class
  organization:
    name: "My Organization"
    slug: "my-org"

image:
  tag: "v0.7.0" # always pin; never use "latest" in production
  pullPolicy: IfNotPresent

# Service replicas
replicaCount:
  api: 2
  web: 2
  worker: 3
  crawler: 2
  forge: 2
  eventbus: 1

# Authentication
auth:
  provider: zitadel # zitadel, okta, or entra-id
  zitadel:
    externalUrl: https://auth.example.com
    projectId: "<zitadel-project-id>"
    clientId: "<zitadel-client-id>"

# RBAC
rbac:
  enabled: true

# GitHub Integration
github:
  organizations: "my-org"
  manifestPatterns: ".shoehorn/**/*.yml,.shoehorn/**/*.yaml,catalog-info.yaml"

# Database (built-in)
postgresql:
  enabled: true
  image:
    repository: shoehorned/shoehorn-postgres
    tag: "v18.3-pgaudit-1.0" # pinned, not tied to platform release
  persistence:
    size: 10Gi

# Search
meilisearch:
  enabled: true
  persistence:
    size: 10Gi

# Cache
valkey:
  enabled: true

# Event Streaming
redpanda:
  enabled: true

# Authorization
cerbos:
  enabled: true

# Ingress
ingressRoute:
  enabled: true
  tls:
    enabled: true
    certResolver: letsencrypt

# Monitoring (optional)
monitoring:
  enabled: false

Shoehorn uses Traefik as the ingress controller:

helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik \
  --namespace traefik --create-namespace \
  -f config/helm/prod-traefik-values.yaml
Install the Shoehorn chart:

helm install shoehorn oci://ghcr.io/shoehorn-dev/helm-charts/shoehorn \
  --namespace shoehorn \
  -f values.yaml
Verify the deployment:

# Check all pods are running
kubectl get pods -n shoehorn

# Check API health
kubectl port-forward -n shoehorn svc/api 8080:8080
curl http://localhost:8080/health

Point your domain to the Traefik load balancer IP:

kubectl get svc -n traefik traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Create DNS records:

Record                Type  Value
shoehorn.example.com  A     <load-balancer-ip>
auth.example.com      A     <load-balancer-ip>

Shoehorn uses PostgreSQL Row-Level Security (RLS) for tenant isolation. All database tables have RLS policies that filter data by tenant_id. This is always enabled — there is no toggle.

The Helm chart automatically:

  1. Runs a migration init container on the API deployment using shoehorn_user (BYPASSRLS) to apply schema changes and create the app_user
  2. Configures all runtime services with app_user (NOBYPASSRLS) in their DATABASE_URL

User           RLS          Purpose
shoehorn_user  BYPASSRLS    Schema migrations, creates app_user, admin operations
app_user       NOBYPASSRLS  All runtime queries; RLS policies enforced by PostgreSQL
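The policies themselves live in the chart's migrations, but a PostgreSQL RLS policy keyed on tenant_id generally takes this shape (an illustrative sketch; the table name, policy name, and the app.tenant_id session setting are assumptions, not Shoehorn's actual schema):

```sql
-- Illustrative only: enable RLS and filter rows by a per-session tenant setting
ALTER TABLE services ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON services
  USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- A role without BYPASSRLS (like app_user) sets the tenant for its session;
-- every subsequent query on this table is then restricted to that tenant
SET app.tenant_id = '<tenant-uuid>';
```

Because shoehorn_user carries BYPASSRLS, such policies never block migrations, while app_user cannot read or write rows outside its tenant.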

Your secret must contain two database passwords:

kubectl create secret generic database-credentials -n shoehorn \
  --from-literal=postgres_password="$(openssl rand -base64 24)" \
  --from-literal=db_password="$(openssl rand -base64 24)"

  • postgres_password: used by shoehorn_user for migrations
  • db_password: used by app_user for runtime queries

For single-tenant deployments, RLS remains active; the middleware auto-injects a fixed tenant ID derived from global.organization.slug.

See Connection Pool Tuning for sizing guidance.

Baseline resource settings:

resources:
  api:
    requests: { cpu: 100m, memory: 256Mi }
    limits: { cpu: 500m, memory: 512Mi }
  web:
    requests: { cpu: 50m, memory: 128Mi }
    limits: { cpu: 200m, memory: 256Mi }
  worker:
    requests: { cpu: 50m, memory: 128Mi }
    limits: { cpu: 200m, memory: 256Mi }

For larger deployments, increase the requests and limits:

resources:
  api:
    requests: { cpu: 250m, memory: 512Mi }
    limits: { cpu: 1000m, memory: 1Gi }
  worker:
    requests: { cpu: 100m, memory: 256Mi }
    limits: { cpu: 500m, memory: 512Mi }

Enable autoscaling:

autoscaling:
  api:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
    targetCPUUtilization: 70
  worker:
    enabled: true
    minReplicas: 3
    maxReplicas: 10

Persistent volume defaults:

Component    Default Size  Purpose
PostgreSQL   10Gi          Primary database
Meilisearch  10Gi          Search indexes
Valkey       -             In-memory cache (no persistence)
Redpanda     -             Event streaming (optional persistence)
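To size a volume differently, override the component's persistence block in values.yaml (the sizes below are examples, not recommendations):

```yaml
postgresql:
  persistence:
    size: 50Gi

meilisearch:
  persistence:
    size: 20Gi
```

Note that growing an already-provisioned PVC also requires a storage class with allowVolumeExpansion enabled.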
To uninstall the release:

helm uninstall shoehorn -n shoehorn

PVCs and the postgres StatefulSet survive the uninstall (helm.sh/resource-policy: keep). Delete them when you want the data gone:

kubectl delete sts -n shoehorn shoehorn-postgresql
kubectl delete pvc -n shoehorn --all
kubectl delete namespace shoehorn