Overview
Qovery-managed EKS provides production-ready Kubernetes clusters on AWS with zero configuration. Qovery handles everything: cluster creation, networking, scaling, monitoring, and ongoing maintenance. Perfect for teams who want AWS but don't want to manage Kubernetes infrastructure.
Karpenter-Powered Auto-Scaling
All Qovery-managed EKS clusters use Karpenter for intelligent node provisioning:
- 60-90% cost savings through spot instances and consolidation
- Fast scaling in seconds (not minutes)
- Smart instance selection from your chosen instance types
- Automatic workload optimization to minimize costs
You select multiple instance types during setup (e.g., t3.medium, t3.large, m5.xlarge). Karpenter then automatically picks the best option based on your workload requirements, spot availability, and cost.
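For intuition, here is a minimal Python sketch of that idea (not Karpenter's actual algorithm): given a workload's CPU and memory needs, pick the cheapest candidate instance type that fits. The hourly prices are rough on-demand figures and should be treated as assumptions.

```python
# Illustrative only: pick the cheapest candidate instance that fits a workload.
# Karpenter's real bin-packing also weighs spot availability and consolidation.
CANDIDATES = {
    # name: (vCPU, memory_gb, approx_on_demand_usd_per_hour -- assumption)
    "t3.medium": (2, 4, 0.0416),
    "t3.large": (2, 8, 0.0832),
    "m5.xlarge": (4, 16, 0.192),
}

def pick_instance(cpu_needed: float, mem_needed_gb: float) -> str:
    fitting = [
        (price, name)
        for name, (vcpu, mem, price) in CANDIDATES.items()
        if vcpu >= cpu_needed and mem >= mem_needed_gb
    ]
    if not fitting:
        raise ValueError("no candidate instance type fits this workload")
    return min(fitting)[1]  # cheapest instance that satisfies the request

print(pick_instance(cpu_needed=2, mem_needed_gb=6))  # -> t3.large
```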
Create Your First Cluster
1. Open Qovery Console
- Go to Organization Settings → Clusters
- Click Create Cluster
- Select AWS
2. Configure Cluster
- Name: e.g., production-eks
- Region: Choose the region closest to your users (e.g., us-east-1)
3. Connect AWS Account
Choose how to connect your AWS account. For detailed instructions with policy information, see the AWS Installation Guide.
- STS Assume Role (Recommended)
- Static Credentials
Most secure: temporary credentials that rotate automatically.
- Click the CloudFormation link in Qovery
- In AWS CloudFormation Console:
- Click Next → Next → Next
- ✅ Check “I acknowledge that AWS CloudFormation might create IAM resources”
- Click Create stack
- Wait about 1 minute for CREATE_COMPLETE status
- Go to the Outputs tab
- Copy the RoleArn value (starts with arn:aws:iam::)
- Paste it back in Qovery and click Save
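If you want to verify the stack from your own terminal before pasting, a small boto3 sketch can confirm the CREATE_COMPLETE status and print the RoleArn output. The stack name and region below are assumptions; use whatever the Qovery CloudFormation link created.

```python
# Sanity check: confirm the CloudFormation stack finished and read the RoleArn output.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
stack = cfn.describe_stacks(StackName="qovery-role-stack")["Stacks"][0]  # hypothetical stack name
print("Status:", stack["StackStatus"])  # expect CREATE_COMPLETE

for output in stack.get("Outputs", []):
    if output["OutputKey"] == "RoleArn":
        print("RoleArn:", output["OutputValue"])  # starts with arn:aws:iam::
```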
4. Select Instance Types
Select ALL instance types you want Karpenter to choose from.
Recommended selections:
- ✅ t3.medium (2 vCPU, 4GB)
- ✅ t3.large (2 vCPU, 8GB)
- ✅ t3.xlarge (4 vCPU, 16GB)
- ✅ t3.2xlarge (8 vCPU, 32GB)
- ✅ m5.large (2 vCPU, 8GB)
- ✅ m5.xlarge (4 vCPU, 16GB)
- ✅ m6i.large (2 vCPU, 8GB)
- ✅ m6i.xlarge (4 vCPU, 16GB)
Karpenter will automatically select the best instance type from your list based on:
- Current workload requirements
- Spot instance availability
- Cost optimization
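If you want to double-check the vCPU and memory figures above before selecting, a short boto3 sketch against the EC2 API can list them; the region and the instance-type list are assumptions.

```python
# Look up vCPU and memory for candidate instance types.
# Requires AWS credentials with ec2:DescribeInstanceTypes permission.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(
    InstanceTypes=["t3.medium", "t3.large", "m5.xlarge", "m6i.large"]
)
for it in resp["InstanceTypes"]:
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gb = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPU, {mem_gb:.0f} GB')
```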
5. Create
Click Create and Deploy - your cluster will be ready in 20-30 minutes!
Need detailed instructions?
See the complete AWS installation guide with screenshots and troubleshooting
What Qovery Creates
- ✅ EKS Cluster - Latest stable Kubernetes
- ✅ VPC & Networking - Public/private subnets across 3 AZs
- ✅ NAT Gateways - Secure internet access
- ✅ Security Groups & IAM Roles - Pre-configured best practices
- ✅ Karpenter - Intelligent auto-scaling (save up to 60% on costs)
- ✅ AWS Load Balancer Controller - Automatic ingress management
- ✅ EBS CSI Driver - Persistent volume support
- ✅ Metrics Server - Resource monitoring
- ✅ Qovery Agent - Observability and management
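Once the cluster is up, you can inspect what was created directly through the EKS API. A minimal boto3 sketch, assuming the cluster name below is hypothetical (Qovery assigns its own names):

```python
# Inspect the provisioned EKS cluster: version, status, and subnets.
import boto3

eks = boto3.client("eks", region_name="us-east-1")
print(eks.list_clusters()["clusters"])  # find the Qovery-managed cluster name

cluster = eks.describe_cluster(name="qovery-production-eks")["cluster"]  # hypothetical name
print("Kubernetes version:", cluster["version"])
print("Status:", cluster["status"])
print("Subnets:", cluster["resourcesVpcConfig"]["subnetIds"])
```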
Karpenter Auto-Scaling
Qovery uses Karpenter to automatically provision optimal EC2 instances:
- Scales nodes within seconds based on workload demands
- Consolidates workloads to reduce costs
- Handles spot instance interruptions gracefully
- Supports a wide range of instance types (t3, m5, m6i, c5, r5, GPU)
- Mixes on-demand and spot instances for reliability
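To see what Karpenter has actually provisioned, you can list nodes with their instance type and capacity type. A minimal sketch with the official Kubernetes Python client, assuming your kubeconfig points at the cluster; it reads the standard node.kubernetes.io/instance-type label and Karpenter's karpenter.sh/capacity-type label (which marks spot vs on-demand nodes).

```python
# List nodes with the instance type and capacity type Karpenter chose for them.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster
for node in client.CoreV1Api().list_node().items:
    labels = node.metadata.labels or {}
    print(
        node.metadata.name,
        labels.get("node.kubernetes.io/instance-type", "?"),
        labels.get("karpenter.sh/capacity-type", "?"),  # "spot" or "on-demand"
    )
```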
Configuration Options
Cluster Settings
Instance Types:
- General Purpose: t3, m5, m6i, m6g (Graviton)
- Compute Optimized: c5, c6i, c6g (Graviton)
- Memory Optimized: r5, r6i, r6g (Graviton)
- GPU Instances: g4dn, g5, p3, p4 (for AI/ML workloads)
Node Scaling:
- Min nodes: 1
- Max nodes: 100
- Karpenter automatically provisions optimal instances
Spot Instances:
- Enable Spot instances for cost savings (60-90% off); see the spot-price sketch below
- Qovery handles interruptions gracefully
- Mix of Spot and On-Demand for reliability
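To get a feel for the current spot discount on your candidate instance types, a small boto3 sketch against the spot price history API can help; the region and instance types here are assumptions.

```python
# Check recent spot prices for candidate instance types (compare to on-demand rates).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_spot_price_history(
    InstanceTypes=["t3.large", "m5.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=10,
)
for entry in resp["SpotPriceHistory"]:
    print(entry["InstanceType"], entry["AvailabilityZone"], entry["SpotPrice"])
```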
GPU Support
Karpenter clusters support GPU-enabled instances for AI/ML workloads, scientific computing, and graphics-intensive applications.
Available GPU Instance Types:
| Instance Type | GPU | vCPU | Memory | Use Case |
|---|---|---|---|---|
| g4dn.xlarge | 1x NVIDIA T4 | 4 | 16 GB | ML inference, video processing |
| g4dn.2xlarge | 1x NVIDIA T4 | 8 | 32 GB | ML training, rendering |
| g5.xlarge | 1x NVIDIA A10G | 4 | 16 GB | High-performance ML inference |
| g5.2xlarge | 1x NVIDIA A10G | 8 | 32 GB | ML training, graphics workloads |
| p3.2xlarge | 1x NVIDIA V100 | 8 | 61 GB | Deep learning training |
| p4d.24xlarge | 8x NVIDIA A100 | 96 | 1152 GB | Large-scale ML training |
1. Include GPU Instance Types
When creating your cluster, select GPU instance types (g4dn, g5, p3, p4) in addition to your standard instances.
2. Configure Application
In your application settings, configure the GPU resource requirement (a minimal sketch follows these steps). See Application GPU Configuration for details.
3. Karpenter Auto-Provisioning
Karpenter will automatically provision GPU instances when your application requests GPU resources, and deprovision them when not needed.
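At the Kubernetes level, "requesting a GPU" comes down to the container asking for nvidia.com/gpu in its resource limits, which is the signal that a GPU-capable node is needed. The sketch below shows that shape with the Kubernetes Python client; in practice you set this through the Qovery application settings linked above rather than building pods by hand, and the image name is hypothetical.

```python
# Sketch of a container spec that requests one GPU via resource limits.
from kubernetes import client

gpu_container = client.V1Container(
    name="inference",
    image="my-ml-image:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1", "cpu": "3", "memory": "12Gi"},
    ),
)
pod_spec = client.V1PodSpec(containers=[gpu_container])
```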
GPU Instance Costs: GPU instances are significantly more expensive than standard instances. Ensure your workload actually utilizes GPU acceleration before enabling them.
Example Pricing (us-east-1):
- g4dn.xlarge: ~$0.526/hour
- g5.xlarge: ~$1.006/hour
- p3.2xlarge: ~$3.06/hour
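A quick back-of-the-envelope check using the example rates above, assuming an always-on node and roughly 730 hours per month:

```python
# Rough monthly cost for an always-on GPU node at the example hourly rates.
HOURS_PER_MONTH = 730
for name, hourly in {"g4dn.xlarge": 0.526, "g5.xlarge": 1.006, "p3.2xlarge": 3.06}.items():
    print(f"{name}: ~${hourly * HOURS_PER_MONTH:,.0f}/month")
# g4dn.xlarge: ~$384/month, g5.xlarge: ~$734/month, p3.2xlarge: ~$2,234/month
```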
Network Settings
VPC Configuration:
- CIDR: /16 (65,536 IPs)
- 3 public subnets (load balancers)
- 3 private subnets (pods)
- NAT Gateways per AZ
Cluster Endpoint Access:
- Public endpoint (default)
- Private endpoint (enterprise feature)
- VPN access for private endpoints
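If you want to inspect the VPC and subnets Qovery provisioned, a short boto3 sketch can list them; the VPC ID below is a placeholder you would look up in the AWS console or via describe_vpcs.

```python
# List the subnets in the cluster's VPC with their AZ, CIDR, and public/private flag.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]  # hypothetical VPC ID
)["Subnets"]
for s in subnets:
    print(s["SubnetId"], s["AvailabilityZone"], s["CidrBlock"], s["MapPublicIpOnLaunch"])
```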