You’ll need cluster-admin access, kubectl, and optionally Helm. Make sure the cluster has a GPU node pool with the NVIDIA drivers and device plugin installed (1x GPU with ≥20GB VRAM for the AI models service).
Prerequisites
- Kubernetes 1.24+ with a default StorageClass
- kubectl (and optional: Helm 3)
- Ingress controller (e.g., Nginx Ingress or cloud LB)
- GPU node pool for AI service: 1x NVIDIA GPU with ≥20GB VRAM; NVIDIA device plugin DaemonSet installed
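A quick way to confirm these prerequisites before you start (a minimal sketch; it assumes the NVIDIA device plugin DaemonSet is already running, that GPU nodes carry the nvidia.com/gpu.present label, and <gpu-node-name> is a placeholder for one of your GPU nodes):
# sanity checks for storage and GPU prerequisites
kubectl get storageclass                                  # a default StorageClass should be marked (default)
kubectl get nodes -L nvidia.com/gpu.present               # GPU nodes should show the label used for scheduling below
kubectl describe node <gpu-node-name> | grep -i "nvidia.com/gpu"   # allocatable GPUs should be >= 1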
1. Create namespace and secrets
# namespace
apiVersion: v1
kind: Namespace
metadata:
  name: tietai
---
# example secret (DB credentials)
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
  namespace: tietai
stringData:
  POSTGRES_USER: tietai
  POSTGRES_PASSWORD: change-me
  POSTGRES_DB: platform
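Save the manifests above to files (namespace.yaml and db-secret.yaml are assumed names here) and apply them:
# create the namespace and the DB secret
kubectl apply -f namespace.yaml
kubectl apply -f db-secret.yaml
kubectl -n tietai get secret db-secret    # confirm the secret was created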
2. Datastores: PostgreSQL, Redis, NATS
You can use managed services (RDS, Cloud SQL, Memorystore) or deploy in-cluster. The examples below use simple in-cluster Helm charts.
# PostgreSQL (Bitnami chart)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install pg bitnami/postgresql \
--namespace tietai \
--set auth.username=tietai,auth.password=change-me,auth.database=platform \
--set primary.persistence.size=20Gi
# Redis (Bitnami chart)
helm install redis bitnami/redis \
--namespace tietai \
--set architecture=replication \
--set master.persistence.size=5Gi \
--set replica.persistence.size=5Gi
# NATS (official chart)
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install nats nats/nats --namespace tietai \
--set nats.jetstream.enabled=true
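Before moving on, confirm the datastore pods come up and note the service names; the connection URLs in the next step assume the release names used above (pg, redis, nats):
# wait for datastore pods and list their services
kubectl -n tietai get pods     # pg-postgresql-*, redis-master-*, redis-replicas-*, nats-* should become Running
kubectl -n tietai get svc      # these service names are used in DATABASE_URL / REDIS_URL / NATS_URL below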
3. Core microservices
Deploy each microservice as its own Deployment and Service. Configure environment variables to point to PostgreSQL, Redis, and NATS.
# Example microservice: integration-engine
apiVersion: apps/v1
kind: Deployment
metadata:
  name: integration-engine
  namespace: tietai
spec:
  replicas: 2
  selector:
    matchLabels: { app: integration-engine }
  template:
    metadata:
      labels: { app: integration-engine }
    spec:
      containers:
        - name: app
          image: ghcr.io/your-org/integration-engine:latest
          env:
            - name: DATABASE_URL
              value: postgresql://tietai:change-me@pg-postgresql.tietai.svc.cluster.local:5432/platform
            - name: REDIS_URL
              value: redis://redis-master.tietai.svc.cluster.local:6379
            - name: NATS_URL
              value: nats://nats.tietai.svc.cluster.local:4222
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: integration-engine
  namespace: tietai
spec:
  selector: { app: integration-engine }
  ports:
    - port: 80
      targetPort: 8080
Repeat this pattern for the remaining services (API gateway, auth, workflow, mapping, admin UI, etc.). Use separate Deployments so each service scales independently.
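To keep credentials out of the manifests, you can pull them from the db-secret created in step 1 instead of embedding them in DATABASE_URL. A sketch of the container env block (the individual variable names are assumptions; adapt them to whatever your services actually read):
# reference db-secret rather than hardcoding credentials
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef: { name: db-secret, key: POSTGRES_USER }
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef: { name: db-secret, key: POSTGRES_PASSWORD }
  - name: POSTGRES_DB
    valueFrom:
      secretKeyRef: { name: db-secret, key: POSTGRES_DB }
If a service only accepts a single connection string, assemble DATABASE_URL at startup from these variables.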
4. AI models service (GPU)
Schedule the AI models service on a GPU node. Request 1x NVIDIA GPU and ensure the device plugin is installed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-models
  namespace: tietai
spec:
  replicas: 1
  selector:
    matchLabels: { app: ai-models }
  template:
    metadata:
      labels: { app: ai-models }
    spec:
      nodeSelector:
        nvidia.com/gpu.present: "true"
      tolerations:
        - key: "nvidia.com/gpu"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: models
          image: ghcr.io/your-org/ai-models:latest
          resources:
            limits:
              nvidia.com/gpu: 1
              cpu: "4"
              memory: 16Gi
            requests:
              cpu: "2"
              memory: 8Gi
          env:
            - name: MODEL_STORE
              value: /models
          volumeMounts:
            - name: models
              mountPath: /models
      volumes:
        - name: models
          emptyDir: {}
Notes:
- Use a GPU with ≥20GB VRAM (e.g., L4/A10-class). Kubernetes allocates GPUs as whole devices.
- Install the NVIDIA device plugin DaemonSet on the cluster so pods can request GPUs.
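Once the Deployment is applied, confirm the pod was scheduled onto a GPU node and received a device (the last check assumes the image ships nvidia-smi):
# verify GPU scheduling and allocation
kubectl -n tietai get pods -l app=ai-models -o wide          # NODE should be a GPU node
kubectl -n tietai describe pod -l app=ai-models | grep -i "nvidia.com/gpu"
kubectl -n tietai exec deploy/ai-models -- nvidia-smi        # only if nvidia-smi exists in the image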
5. Ingress and TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: platform
  namespace: tietai
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts: [ "platform.example.com" ]
      secretName: platform-tls
  rules:
    - host: platform.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: integration-engine
                port:
                  number: 80
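After applying the Ingress, check that it receives an address and that the certificate is issued (this assumes cert-manager is installed with a letsencrypt ClusterIssuer, as the annotation above implies; point DNS for platform.example.com at the ingress address first):
# verify ingress address and TLS
kubectl -n tietai get ingress platform
kubectl -n tietai get certificate platform-tls    # cert-manager resource; READY should become True
curl -I https://platform.example.com/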
6. Verify
- All pods are Running and Ready
- Services have ClusterIP (or LoadBalancer where needed)
- Ingress has an address and TLS is valid
- Application connects to PostgreSQL/Redis/NATS
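A minimal set of commands for these checks (the service DNS name assumes the pg release from step 2):
# end-to-end smoke checks
kubectl -n tietai get pods
kubectl -n tietai get svc,ingress
kubectl -n tietai run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup pg-postgresql.tietai.svc.cluster.local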
Prefer managed data services? Use RDS, Cloud SQL, or Memorystore and point the connection env vars at them. For cloud-specific setup, see the AWS/GCP quickstarts.