Managed Kubernetes services from AWS, Azure, and Google Cloud appear simple at first glance — spin up a cluster, deploy pods, pay as you go. The reality is different. EKS charges $73/month just for the control plane, but the real costs hide in NAT gateways ($32+/month per AZ), cross-AZ data transfer ($0.01/GB), load balancers ($16+/month each), and EBS volumes. For steady-state workloads — applications that run 24/7 with predictable resource needs — dedicated Kubernetes on bare metal infrastructure delivers 40-60% lower total cost with better and more predictable performance.
What does EKS actually cost beyond the $73/month control plane?
Amazon EKS advertises a $0.10/hour ($73/month) control plane fee. This is the most visible cost and, paradoxically, the smallest part of your bill. The real costs accumulate in supporting services that are effectively mandatory for any production cluster.
A breakdown of a typical production EKS cluster running 10 worker nodes (m5.xlarge) across 3 availability zones:
| Cost component | Monthly cost | Annual cost |
|---|---|---|
| EKS control plane | $73 | $876 |
| EC2 worker nodes (10x m5.xlarge, on-demand) | $1,402 | $16,819 |
| NAT Gateway (3 AZs, moderate traffic) | $96+ | $1,152+ |
| NAT Gateway data processing (500 GB/month) | $22 | $270 |
| ALB (Application Load Balancer, 2 instances) | $33 | $396 |
| ALB LCU charges (moderate traffic) | $40+ | $480+ |
| EBS gp3 volumes (10x 100 GB) | $80 | $960 |
| Cross-AZ data transfer (200 GB/month) | $2 | $24 |
| Data transfer out (500 GB/month) | $43 | $518 |
| CloudWatch logging & monitoring | $50+ | $600+ |
| ECR (container registry, 50 GB) | $5 | $60 |
| Total | ~$1,846 | ~$22,155 |
That $73/month EKS cluster actually costs ~$1,846/month for a modest production setup. And this does not include Reserved Instance or Savings Plan discounts, which require 1-3 year commitments (and, for the best rates, upfront payment).
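As a sanity check, the recurring components can be approximated in a few lines. The rates below are the published list prices used in the table (assuming 730 billing hours per month); the load balancer, logging, and cross-AZ line items are left out because they are traffic-dependent, so this sketch lands somewhat below the table's total:

```python
HOURS_PER_MONTH = 730  # AWS bills hourly; ~730 hours in an average month

def eks_monthly_cost(nodes=10, node_hourly=0.192, azs=3,
                     nat_gb=500, egress_gb=500, ebs_gb=1000):
    """Rough monthly cost model for the EKS setup above (list prices,
    not live AWS quotes)."""
    costs = {
        "control_plane": 0.10 * HOURS_PER_MONTH,         # $73/month
        "nodes": nodes * node_hourly * HOURS_PER_MONTH,  # m5.xlarge on-demand
        "nat_gateways": azs * 0.045 * HOURS_PER_MONTH,   # hourly fee per AZ
        "nat_processing": nat_gb * 0.045,                # per-GB processing
        "egress": egress_gb * 0.09,                      # first 10 TB tier
        "ebs_gp3": ebs_gb * 0.08,                        # $/GB-month
    }
    return costs, sum(costs.values())

costs, total = eks_monthly_cost()
print(f"total (excl. LBs, logging, cross-AZ): ${total:,.0f}/month")
```

Even with the traffic-dependent items excluded, the supporting services already add roughly $250/month on top of the control plane and nodes.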
Where do the hidden costs come from?
The three largest hidden cost drivers in managed Kubernetes:
1. NAT Gateway pricing. Every private subnet needs a NAT Gateway for outbound internet access (pulling container images, accessing external APIs). AWS charges $0.045/hour ($32.40/month) per NAT Gateway plus $0.045/GB for data processed. With 3 AZs for high availability, that is $97/month before any significant data transfer.
2. Data transfer fees. AWS charges $0.01/GB for cross-AZ traffic. In a Kubernetes cluster, pods communicate across AZs constantly — service mesh traffic, database connections, distributed caching. A moderately busy cluster easily generates 200+ GB/month of cross-AZ traffic. Data transfer out to the internet costs $0.09/GB for the first 10 TB.
3. Load Balancer accumulation. Each Kubernetes Service of type LoadBalancer provisions a separate NLB (and each Ingress managed by the AWS Load Balancer Controller provisions an ALB). At $16.20/month base fee plus LCU charges per load balancer, a cluster with 5 externally-exposed services costs $81+/month in load balancers alone. An ingress controller consolidates these behind a single load balancer, but adds its own operational complexity.
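The load balancer math is easy to sketch. Here is a rough comparison of per-Service load balancers versus a single shared ingress; the $8/month LCU figure is an assumption standing in for "moderate traffic", not an AWS quote:

```python
ALB_BASE = 16.20  # $/month base fee per ALB
LCU_EST = 8.00    # assumed LCU charges per ALB at moderate traffic

def per_service_albs(services):
    """Monthly cost if every exposed Service gets its own ALB."""
    return services * (ALB_BASE + LCU_EST)

def shared_ingress():
    """Monthly cost of one ALB fronting an ingress controller
    that routes to all services."""
    return ALB_BASE + LCU_EST

print(per_service_albs(5))  # five dedicated ALBs
print(shared_ingress())     # one shared ALB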
How do AKS and GKE compare on hidden costs?
The pricing structures differ across providers, but the pattern is consistent: the headline price understates the real cost.
Cost comparison: Managed Kubernetes services (10 worker nodes, production setup)
| Cost component | AWS EKS | Azure AKS | Google GKE | PROZETA Kubernetes |
|---|---|---|---|---|
| Control plane | $73/mo | Free (Standard) / $73 (Uptime SLA) | $73/mo (Standard) | Included |
| Worker nodes (10x 4 vCPU, 16 GB) | $1,402/mo | $1,241/mo | $1,192/mo | Fixed monthly |
| NAT / egress gateway | $96+/mo | $32+/mo | $44+/mo | Included |
| Data transfer (cross-zone) | $0.01/GB | $0.01/GB (cross-zone) | $0.01/GB | Included |
| Data transfer out (500 GB) | $43/mo | $43/mo | $60/mo | Included (CZ peering) |
| Load balancers (2x) | $33+/mo | $18+/mo | $18+/mo | Included |
| Block storage (1 TB gp3/equivalent) | $80/mo | $77/mo | $80/mo | BlackStor NVMe included |
| Monitoring & logging | $50+/mo | $40+/mo | $50+/mo | Included |
| Estimated monthly total | ~$1,846 | ~$1,530 | ~$1,580 | Contact for quote |
| Estimated annual total | ~$22,155 | ~$18,360 | ~$18,960 | Fixed annual contract |
| Noisy neighbor risk | Yes | Yes | Yes | No — dedicated hardware |
| Data sovereignty (EU) | Region-selectable | Region-selectable | Region-selectable | Prague DC, Czech law only |
| CLOUD Act exposure | Yes | Yes | Yes | No |
Key insight: AKS offers a free control plane tier (without uptime SLA), making it appear cheapest. But Azure's NAT Gateway, load balancer, and storage costs bring the total to a comparable range. GKE's Autopilot mode simplifies operations but charges a 23% markup on compute resources.
With PROZETA, all infrastructure components — compute, storage, networking, load balancing, monitoring — are included in a single predictable monthly fee. No per-GB data transfer charges. No NAT Gateway fees. No load balancer accumulation.
Learn more about PROZETA Kubernetes and BlackStor storage.
Why does dedicated Kubernetes perform better for steady-state workloads?
Steady-state workloads — databases, application servers, message queues, CI/CD runners — run continuously with predictable resource requirements. These workloads represent 60-80% of enterprise compute in typical organizations. For these workloads, dedicated infrastructure provides measurable performance advantages over shared cloud infrastructure.
What is the noisy neighbor problem in managed Kubernetes?
In hyperscaler environments, your worker nodes are virtual machines sharing physical hardware with other tenants. This creates the "noisy neighbor" effect:
- CPU steal time: When other tenants on the same physical host consume CPU bursts, your workloads experience increased latency. AWS m5.xlarge instances regularly show 2-5% CPU steal time, spiking to 10%+ during neighbor burst activity.
- Network I/O variability: Shared network interfaces mean bandwidth fluctuates. Benchmark tests consistently show 15-30% variance in network throughput on shared cloud instances.
- Storage I/O unpredictability: Even with provisioned IOPS on EBS, actual performance varies because the underlying storage infrastructure is shared. AWS documents that gp3 volumes deliver their baseline performance "on average over time" — meaning performance at any given moment can fall short of the provisioned rate.
- NUMA effects: VMs placed across NUMA nodes on the physical host experience higher memory access latency for cross-NUMA operations.
On dedicated infrastructure, these problems do not exist. Your Kubernetes nodes run on physical servers dedicated to your workloads. CPU cycles, memory bandwidth, network throughput, and storage IOPS are 100% available to your applications. There is no steal time, no bandwidth contention, no IOPS variability.
What performance numbers can you expect on dedicated hardware?
PROZETA Kubernetes runs on HPE ProLiant servers with:
- CPU: Latest-generation Intel Xeon or AMD EPYC processors, full cores dedicated to your workloads. Zero CPU steal time.
- Memory: DDR5 ECC RAM with consistent access latency — no NUMA surprises from VM placement.
- Storage: BlackStor NVMe storage delivering consistent sub-millisecond latency. Unlike cloud block storage, BlackStor is not Ceph-based — it uses a proprietary architecture optimized for low-latency workloads.
- Network: 25 Gbps dedicated interfaces with consistent throughput. No shared bandwidth, no "up to" performance claims.
For latency-sensitive workloads (databases, real-time processing, financial applications), the difference between dedicated and shared infrastructure is not marginal — it is typically a 2-5x improvement in P99 latency consistency.
When does dedicated Kubernetes make more financial sense than EKS?
The crossover point depends on workload characteristics, but the pattern is consistent: steady-state workloads hit the breakeven point quickly.
Workload profiles where dedicated wins
1. Always-on production workloads. If your cluster runs 24/7/365 (and most production clusters do), you are paying full on-demand rates or committing to 1-3 year Reserved Instances. Dedicated infrastructure offers equivalent or better pricing without the lock-in of RI commitments.
2. Data-intensive applications. Every GB of data transfer on cloud Kubernetes has a cost. If your applications process significant data volumes — log aggregation, analytics pipelines, media processing — data transfer fees can exceed compute costs. On dedicated infrastructure, internal data transfer is free and external bandwidth is included.
3. Storage-heavy workloads. EBS gp3 costs $0.08/GB/month. For a database cluster requiring 5 TB of storage, that is $400/month for storage alone — before IOPS charges. BlackStor NVMe storage on PROZETA infrastructure is included in the compute package with significantly better performance.
4. Multi-cluster setups. Organizations often run separate clusters for development, staging, and production. Each EKS cluster adds $73/month in control plane fees plus duplicated NAT Gateways and load balancers. On dedicated infrastructure, running multiple Kubernetes clusters on the same hardware has no additional per-cluster cost.
5. Compliance-sensitive workloads. If your workloads must comply with GDPR data sovereignty requirements, NIS2 supply chain security, or DORA ICT risk management, a Czech cloud with own datacenter eliminates an entire class of compliance risk — and the associated audit and legal costs.
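To put the storage arithmetic from point 3 in one place, here is a sketch of gp3 pricing using us-east-1 list rates ($0.08/GB-month, 3,000 IOPS included, $0.005 per additional provisioned IOPS-month). Note that the included IOPS are per volume, so treating a 5 TB pool as a single volume is a simplification:

```python
GP3_GB_MONTH = 0.08     # $/GB-month, us-east-1 list price
GP3_FREE_IOPS = 3000    # IOPS included per volume
GP3_IOPS_MONTH = 0.005  # $ per provisioned IOPS-month above the free 3,000

def gp3_monthly(gb, provisioned_iops=3000):
    """Monthly gp3 cost for capacity plus any IOPS provisioned
    above the free baseline."""
    extra = max(0, provisioned_iops - GP3_FREE_IOPS)
    return gb * GP3_GB_MONTH + extra * GP3_IOPS_MONTH

print(gp3_monthly(5_000))          # 5 TB at baseline IOPS
print(gp3_monthly(5_000, 16_000))  # 5 TB with 16k provisioned IOPS
```

At baseline IOPS the 5 TB example costs $400/month; provisioning 16,000 IOPS for a busy database adds another $65/month.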
Break-even calculation example
Consider a mid-size deployment: 20 worker nodes, 2 TB storage, 1 TB/month data transfer:
| Component | EKS (annual) | PROZETA Dedicated (annual) |
|---|---|---|
| Control plane | $876 | Included |
| Compute (20x m5.xlarge on-demand) | $33,638 | Fixed |
| NAT Gateways (3 AZ) | $1,166 | Included |
| Data transfer (1 TB/month out) | $1,044 | Included |
| Cross-AZ data transfer | $480+ | Included |
| Load balancers (5x ALB) | $972+ | Included |
| EBS storage (2 TB gp3) | $1,920 | BlackStor NVMe included |
| Monitoring (CloudWatch) | $1,200+ | Included |
| Annual total | ~$41,296 | Contact for fixed quote |
Even with 1-year Reserved Instances (which require upfront payment and commitment, and discount only the EC2 portion of the bill), total EKS cost drops by approximately 30-40% — still resulting in $24,000-29,000/year for this configuration. Dedicated infrastructure from PROZETA typically comes in 40-60% below on-demand cloud pricing for equivalent or superior hardware.
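The Reserved Instance sensitivity is easy to check, since the discount applies only to the EC2 line of the table above. The 40%/60% figures are typical published 1-year/3-year RI discounts, not exact quotes for any particular region:

```python
# Annual figures from the break-even table above
compute_on_demand = 33_638                 # 20x m5.xlarge, on-demand
other_costs = 41_296 - compute_on_demand   # NAT, egress, LBs, storage, monitoring

scenarios = {
    "on-demand": 0.00,
    "1-year RI (~40% off compute)": 0.40,
    "3-year RI (~60% off compute)": 0.60,
}

for label, discount in scenarios.items():
    # RI discounts never touch NAT, data transfer, LB, storage, or logging fees
    total = compute_on_demand * (1 - discount) + other_costs
    print(f"{label}: ${total:,.0f}/year")
```

The fixed ~$7,700/year of supporting services is untouched by any RI commitment, which is why the total never approaches the compute discount percentage.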
What about scaling flexibility — doesn't cloud Kubernetes scale better?
This is the most common objection to dedicated infrastructure, and it deserves a nuanced answer.
When cloud auto-scaling genuinely helps
Cloud auto-scaling provides value for:
- Unpredictable burst traffic: E-commerce flash sales, viral content events, seasonal peaks. If your traffic pattern is genuinely unpredictable and bursty, cloud elasticity has real value.
- Short-lived batch workloads: Data processing jobs, ML training runs, CI/CD pipelines that run for hours, not days. Pay-per-use makes sense here.
- New products with unknown demand: Startups and new product launches where demand patterns are not yet established.
When cloud auto-scaling is not what you need
For the majority of enterprise workloads, "auto-scaling" is solving a problem that does not exist:
- Most production workloads are steady-state. Database servers, application backends, API gateways, and message brokers run at relatively predictable load levels. A 2024 Datadog survey found that the average Kubernetes cluster runs at 30-50% CPU utilization — meaning most organizations are already over-provisioned.
- Auto-scaling latency is significant. Cluster Autoscaler in EKS takes 3-10 minutes to provision new nodes. Karpenter reduces this to 1-2 minutes. For most applications, if you cannot handle a traffic spike within your existing capacity for 2+ minutes, you have an architecture problem, not an infrastructure problem.
- Over-provisioning on cloud is expensive. Because cloud scaling is not instant, organizations over-provision by 50-100% as buffer. On dedicated hardware, this over-provisioning costs far less.
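A back-of-the-envelope way to size that buffer: the headroom you must keep warm is roughly the load that arrives while new nodes are still provisioning. All numbers below are hypothetical illustrations, not benchmarks:

```python
def warm_headroom_nodes(ramp_rps_per_min, provision_minutes, node_capacity_rps):
    """Spare nodes to keep running so a traffic ramp is absorbed
    while the autoscaler is still provisioning new capacity."""
    burst = ramp_rps_per_min * provision_minutes  # load arriving before nodes are ready
    return burst / node_capacity_rps

# Hypothetical workload: traffic grows 200 req/s per minute,
# one node serves 500 req/s.
print(warm_headroom_nodes(200, 5.0, 500))  # Cluster Autoscaler (~5 min to provision)
print(warm_headroom_nodes(200, 1.5, 500))  # Karpenter (~1.5 min to provision)
```

With a 5-minute provisioning delay you need 2 spare nodes warm at all times; on dedicated hardware, where that spare capacity is part of a fixed monthly fee rather than metered on-demand compute, the same buffer costs far less.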
The hybrid approach: PROZETA supports hybrid architectures where steady-state workloads run on dedicated Kubernetes and burst capacity is available on the Tier5 OpenStack cloud. This gives you the cost benefits of dedicated infrastructure with cloud elasticity for genuine scaling needs — without the hyperscaler pricing model.
How does dedicated Kubernetes simplify compliance?
Running Kubernetes on dedicated infrastructure in a Czech datacenter addresses multiple compliance requirements simultaneously:
- GDPR data sovereignty: Container images, persistent volumes, logs, metrics, and secrets all remain within the Prague datacenter. No data flows to US-controlled infrastructure.
- NIS2 supply chain security: Your Kubernetes infrastructure provider is a Czech company (PRO-ZETA a.s., ISO 27001 certified) with no foreign parent entity. Supply chain risk assessment is straightforward.
- DORA ICT risk management: For financial institutions, dedicated infrastructure reduces concentration risk compared to running everything on a single hyperscaler. Exit strategies are simpler when you control the infrastructure.
- Audit simplicity: Physical and logical access audits involve a single jurisdiction, a single legal framework, and direct access to the datacenter in Prague.
The compliance advantage is not just theoretical. Organizations that move to dedicated infrastructure report 30-50% reduction in compliance audit preparation time because the infrastructure architecture eliminates entire categories of compliance questions.
What does a migration from EKS to dedicated Kubernetes look like?
Migration from managed Kubernetes to dedicated Kubernetes is straightforward because Kubernetes is Kubernetes — the API is the same regardless of the underlying infrastructure.
Typical migration steps:
- Infrastructure provisioning (1-2 weeks): PROZETA provisions dedicated servers, installs Kubernetes (using kubeadm or RKE2), configures networking (Calico/Cilium), and sets up BlackStor storage classes.
- Configuration replication (1 week): Export Kubernetes manifests (Deployments, Services, ConfigMaps, Secrets, Ingress) from EKS. Adapt cloud-specific annotations (ALB Ingress → Nginx/Traefik Ingress, EBS StorageClass → BlackStor StorageClass).
- Data migration (1-2 weeks): Migrate persistent volumes using Velero backup/restore or direct data transfer. Database migrations may use logical replication for zero-downtime cutover.
- DNS cutover and validation (1 week): Redirect traffic to the new cluster using DNS-based blue/green deployment. Validate application functionality, performance, and monitoring.
Total timeline: 4-6 weeks for a typical production cluster. PROZETA's engineering team handles infrastructure setup and provides migration support throughout the process.
The main adaptation points are cloud-specific integrations:
- AWS Load Balancer Controller (ALB Ingress) → Nginx Ingress Controller or Traefik (both open source, no licensing cost)
- EBS CSI driver → BlackStor CSI driver
- IAM Roles for Service Accounts (IRSA) → standard Kubernetes RBAC + external secret management
- CloudWatch Container Insights → Prometheus + Grafana (included with PROZETA Kubernetes)
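In practice, much of the storage adaptation is a find-and-replace in StorageClass manifests. A sketch of what that looks like — the EBS class is the standard AWS CSI form, while the BlackStor provisioner name below is an illustrative placeholder, not PROZETA's actual driver identifier:

```yaml
# Original EKS StorageClass (standard EBS CSI driver)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
---
# Equivalent class on the dedicated cluster. The provisioner name is
# hypothetical; keeping the same class name means PVC manifests and
# Helm values need no changes at all.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.blackstor.example
```

Because PersistentVolumeClaims reference the StorageClass by name, workloads migrated with Velero bind to the new backend without manifest edits.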
Frequently asked questions
Is dedicated Kubernetes more difficult to operate than EKS?
PROZETA manages the Kubernetes control plane, OS patching, and infrastructure monitoring. Your team interacts with the same Kubernetes API as EKS. The operational difference is minimal — you deploy the same Helm charts, the same kubectl commands, the same CI/CD pipelines.
Can I still use Kubernetes auto-scaling on dedicated infrastructure?
Yes. Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) work identically on dedicated Kubernetes. Cluster auto-scaling (adding new nodes) can be configured with PROZETA's Tier5 cloud for burst capacity, providing cloud elasticity for genuine scaling needs.
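Because HPA is part of the core Kubernetes API (`autoscaling/v2`), the same manifest works unchanged on EKS or a dedicated cluster. The deployment name here is hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-backend        # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-backend
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

The only infrastructure dependency is a metrics source (metrics-server or Prometheus Adapter), which dedicated clusters run just as EKS does.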
What about managed add-ons like AWS RDS, ElastiCache, or SQS?
Dedicated infrastructure does not preclude managed databases or message queues. PROZETA offers managed PostgreSQL, Redis, and other services on dedicated infrastructure. Alternatively, running these services on your own Kubernetes cluster (using operators like CloudNativePG or Redis Operator) is straightforward on dedicated hardware with NVMe storage.
How does BlackStor compare to EBS or Azure Disk?
BlackStor delivers consistent sub-millisecond latency on NVMe storage without the variability of shared cloud block storage. Unlike Ceph-based solutions common in OpenStack deployments, BlackStor uses a proprietary architecture that avoids Ceph's known latency spikes under heavy write loads. For database workloads, the consistent I/O performance is the most significant advantage.
What if my workload grows beyond the dedicated cluster capacity?
PROZETA supports capacity expansion with typical lead times of 1-2 weeks for additional physical servers. For immediate burst capacity, workloads can overflow to the Tier5 OpenStack cloud. This hybrid model gives you the cost predictability of dedicated infrastructure with the flexibility to handle unexpected demand.