
Overview

Bring Your Own Kubernetes (BYOK) allows you to connect your existing GKE cluster to Qovery. You maintain full control over your cluster while Qovery manages application deployments.

Prerequisites

Existing GKE cluster (Kubernetes 1.24+)
kubectl access with cluster-admin permissions
GCP service account for Qovery
Compute Engine persistent disk CSI driver installed
An ingress controller (GKE's built-in GCE Ingress or the NGINX Ingress Controller)

Setup

Step 1: Get Qovery Agent Manifests

In Qovery Console:
  1. Settings → Clusters → Add Cluster
  2. Select “Bring Your Own Kubernetes”
  3. Choose “Google GKE”
  4. Download Helm values or kubectl manifests
Step 2: Install Qovery Agent

Using Helm (recommended):
helm repo add qovery https://helm.qovery.com
helm repo update

helm install qovery-agent qovery/qovery-agent \
  --namespace qovery \
  --create-namespace \
  --values qovery-values.yaml
Or using kubectl:
kubectl apply -f qovery-agent.yaml
Step 3: Verify Connection

Check agent status:
kubectl get pods -n qovery
# qovery-agent-* should be Running
In the Qovery Console, the cluster should show as “Connected”.
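If the agent pod is not yet Running, the rollout status and recent logs usually explain why. A quick check, assuming the agent ships as a Deployment named qovery-agent with the label app=qovery-agent (adjust both to match your manifests):

```shell
# Wait for the agent rollout to complete (assumes a Deployment named "qovery-agent")
kubectl rollout status deployment/qovery-agent -n qovery --timeout=120s

# Tail recent agent logs if the rollout stalls
kubectl logs -n qovery -l app=qovery-agent --tail=50
```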
Step 4: Deploy Applications

Start deploying applications to your BYOK cluster from the Qovery Console.

What Qovery Installs

Qovery Agent:
  • Manages application deployments
  • Communicates with Qovery Control Plane
  • Handles secrets and configuration
Optional Components (if not present):
  • Nginx Ingress Controller
  • Cert-Manager (for SSL certificates)
  • External-DNS (for domain management)
  • Metrics Server

Requirements

Kubernetes Version

  • Minimum: 1.24
  • Recommended: 1.27+
  • Maximum: 1.29

Required Addons

  • Storage
  • Load Balancer
  • Metrics
Compute Engine Persistent Disk CSI Driver: GKE clusters have this enabled by default. Verify:
kubectl get csidriver pd.csi.storage.gke.io
Storage Class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer  # recommended for regional PDs so zones follow pod scheduling
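As a quick smoke test of the storage class, a PersistentVolumeClaim referencing it should bind once a pod consumes it. The names and size below are illustrative:

```yaml
# Illustrative PVC using the pd-ssd StorageClass defined above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: pd-ssd
  resources:
    requests:
      storage: 10Gi
```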

IAM Permissions

Qovery needs GCP IAM permissions for:
  • Creating/managing Load Balancers
  • Managing Cloud DNS records (if using Cloud DNS)
  • Google Container Registry access (if using GCR)
Example service account roles:
  • roles/compute.loadBalancerAdmin
  • roles/dns.admin
  • roles/storage.objectViewer (for GCR)
# Create service account for Qovery
gcloud iam service-accounts create qovery-agent \
  --display-name="Qovery Agent"

# Grant necessary roles
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:qovery-agent@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.loadBalancerAdmin"

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:qovery-agent@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/dns.admin"
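If the cluster has Workload Identity enabled, the agent pod can impersonate this service account without node-level keys. A sketch of the GCP-side binding — the Kubernetes namespace and service account names (qovery/qovery-agent) are assumptions, adjust them to match your install:

```shell
# Allow the Kubernetes service account qovery/qovery-agent to impersonate
# the GCP service account (assumes Workload Identity is enabled on the cluster)
gcloud iam service-accounts add-iam-policy-binding \
  qovery-agent@PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[qovery/qovery-agent]"
```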

Cluster Configuration

Resource Requirements

Minimum:
  • 2 nodes (e2-medium or larger)
  • 4 vCPUs total
  • 8 GB RAM total
Recommended:
  • 3+ nodes across multiple zones
  • Auto-scaling enabled
  • Mix of standard and Spot (formerly preemptible) VMs

Networking

VPC Requirements:
  • VPC-native cluster (alias IPs)
  • Private nodes (recommended)
  • Cloud NAT or Proxy for outbound traffic
  • Firewall rules for ingress traffic
Node Access:
  • Private nodes with Cloud NAT (recommended)
  • Public nodes (for dev/test only)
  • Authorized networks for control plane access
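For private nodes, outbound access to the Qovery Control Plane typically goes through Cloud NAT. A minimal sketch — the VPC name (my-vpc), region, and router/NAT names are placeholders:

```shell
# Create a Cloud Router and a NAT gateway so private nodes get outbound internet
gcloud compute routers create qovery-router \
  --network=my-vpc \
  --region=us-central1

gcloud compute routers nats create qovery-nat \
  --router=qovery-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```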

DNS Configuration

Option 1: External-DNS (automated)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install external-dns bitnami/external-dns \
  --set provider=google \
  --set google.project=YOUR_PROJECT_ID \
  --set txtOwnerId=my-cluster
Option 2: Manual DNS management
  • Create Cloud DNS records manually for each application
  • Point each record at the ingress load balancer IP address
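With External-DNS installed, records are created from annotations on Services or Ingresses; with manual management you create the equivalent record yourself. An illustrative Service (the app name and hostname are placeholders):

```yaml
# Illustrative: External-DNS creates an A record for the annotated hostname
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```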

Best Practices

Separate Namespaces

  • Use dedicated namespace for Qovery (qovery)
  • Separate namespaces per environment
  • Apply resource quotas
  • Network policies for isolation
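The quota and isolation points above can be sketched as manifests; the namespace and limits below are illustrative:

```yaml
# Illustrative per-environment quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Allow only same-namespace traffic by default (blocks cross-namespace ingress)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
```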

Access Control

  • Create dedicated service account for Qovery
  • Use RBAC for least privilege
  • Workload Identity for pod authentication
  • Rotate credentials regularly
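For Workload Identity, the pod's Kubernetes service account is annotated with the GCP service account it should impersonate. A sketch assuming the service account created in the IAM section (names are assumptions):

```yaml
# Kubernetes service account mapped to the GCP service account via Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
  name: qovery-agent
  namespace: qovery
  annotations:
    iam.gke.io/gcp-service-account: qovery-agent@PROJECT_ID.iam.gserviceaccount.com
```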

High Availability

  • Multi-zone node distribution
  • Regional persistent disks
  • Pod disruption budgets
  • Regular backups
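A PodDisruptionBudget keeps a minimum number of replicas up during node upgrades and other voluntary evictions; the selector and threshold here are illustrative:

```yaml
# Illustrative PDB: keep at least one replica during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: production
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```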

Monitoring

  • Enable GKE monitoring and logging
  • Set up alerts for Qovery agent
  • Monitor cluster resource usage
  • Track application health

Troubleshooting

Agent Not Connecting

Solutions:
  • Verify agent pods are running: kubectl get pods -n qovery
  • Check agent logs: kubectl logs -n qovery -l app=qovery-agent
  • Ensure outbound internet access (Cloud NAT configured)
  • Verify API token is correct
  • Check firewall rules
Deployments Failing

Solutions:
  • Check node capacity and resources
  • Verify storage class exists and works
  • Ensure ingress controller is working
  • Check for network policy blocking traffic
  • Review GKE logs in Cloud Logging
Load Balancer Not Provisioning

Solutions:
  • Verify IAM permissions for load balancer creation
  • Check firewall rules allow health checks
  • Ensure service account has proper bindings
  • Review GCE Ingress controller logs
  • Check VPC firewall rules

Next Steps