What is AWS EKS?
Amazon Elastic Kubernetes Service (EKS) is a managed cloud service that integrates natively with Kubernetes. It lets you deploy applications to a Kubernetes cluster, gaining all the automation capabilities of Kubernetes, the world’s most popular container orchestration platform.
Kubernetes is a powerful tool, making it possible to centrally orchestrate a large number of containers. However, it can be difficult to set up and manage. AWS EKS automatically manages and scales clusters using Amazon infrastructure resources, helping organizations use Kubernetes without installing, operating, or maintaining the container orchestration platform themselves.
This is part of a series of articles about managing and minimizing Kubernetes costs.
In this article:
- How Does Amazon EKS Work?
- AWS EKS Pricing
- Optimizing AWS EKS Cost
How Does Amazon EKS Work?
In Kubernetes, a cluster of worker nodes is responsible for running containers, while the control plane manages where and when containers are started on a cluster and monitors their status. Amazon EKS eliminates the need to manually provision and manage the Kubernetes control plane and worker nodes, automatically performing these tasks for you.
Amazon EKS lets you use a single command to provision worker nodes via the EKS console, API, or command-line interface (CLI). AWS handles the next steps by provisioning, scaling, and managing the Kubernetes control plane using a secure and highly available configuration. This automation eliminates a heavy operational burden.
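As a sketch of what that single command looks like, here is a hypothetical example using the open-source eksctl CLI (the cluster name, region, and node count are placeholder values, not recommendations):

```shell
# Create a small EKS cluster with a managed worker node group in one
# command; AWS provisions and manages the control plane behind the scenes.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodegroup-name workers \
  --nodes 2
```

Behind this one command, eksctl creates the VPC networking, the EKS control plane, and the EC2-backed node group, which is exactly the operational burden the managed service removes.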
AWS EKS Pricing
EKS Pricing in Amazon EC2
The base price of EKS is $0.10 per hour per Kubernetes cluster. In addition, you pay for the other resources the cluster uses, such as compute and storage.
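At that rate, the fixed control-plane portion of the bill is easy to estimate. A back-of-envelope calculation, assuming a 730-hour month (compute and storage are billed separately on top of this):

```shell
# Monthly control-plane cost for one EKS cluster at $0.10/hour.
awk 'BEGIN { printf "%.2f\n", 0.10 * 730 }'   # → 73.00
```

So each cluster adds roughly $73 per month before any worker-node costs, which is why consolidating many small clusters can itself be a cost optimization.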
This is in contrast to its sister service, Elastic Container Service (ECS), which does not have a special charge per cluster (making EKS more expensive).
In both EKS and ECS, you pay for the resources consumed by the workload—EC2 instances running ECS tasks or EKS Kubernetes pods.
EKS Fargate Pricing
You can also run EKS via Amazon Fargate, which is billed in a serverless model, according to the actual time your containers run and the exact resources they consume. Fargate lets you deploy your workloads on EKS without needing to set up EC2 instances—the service automatically configures node infrastructure.
With Fargate, you don’t pay separately for underlying resources like EC2 instances and EBS volumes. Instead, you pay fixed rates for the compute and storage resources your workload consumes. Fargate pricing in the US East (N. Virginia) Region with Linux/x86 is currently as follows:
- vCPU cost – $0.000011244 per vCPU per second
- Memory cost – $0.000001235 per GB per second
- Ephemeral storage cost – $0.00000000308 per GB per second
This uniform usage-based pricing model can save money for workloads with variable resource requirements or unpredictable traffic, because you avoid overprovisioning and paying for larger compute instances than you actually need.
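To make the per-second rates concrete, here is a rough monthly estimate for a single always-on pod, using the vCPU and memory rates from the list above (the 0.5 vCPU / 1 GB pod size and the 730-hour month are example assumptions):

```shell
# Rough monthly Fargate cost for one pod: 0.5 vCPU and 1 GB of memory,
# running continuously for a 730-hour month (2,628,000 seconds).
awk 'BEGIN {
  seconds = 730 * 3600
  vcpu = 0.5 * 0.000011244 * seconds   # vCPU-seconds cost
  mem  = 1.0 * 0.000001235 * seconds   # GB-seconds cost
  printf "%.2f\n", vcpu + mem
}'   # → 18.02
```

Around $18 per month for this pod size; because billing stops the moment a pod stops, bursty workloads pay only for the seconds they actually run.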
AWS Outposts and EKS Anywhere Pricing
If you have a hybrid environment and you use EKS to manage your on-premises servers, you can use AWS Outposts or EKS Anywhere.
- With Outposts, you pay the standard cluster management fee of $0.10 per hour in addition to the Outposts infrastructure cost. AWS provides the hardware for Outposts, so this fee is paid to AWS; there is no additional charge for EKS itself.
- EKS Anywhere does not have a cluster management fee. Because you deploy EKS Anywhere on your own infrastructure, the infrastructure cost is the cost of the servers you deploy on-premises. EKS Anywhere includes basic support at no additional cost, and you can buy an extended EKS Anywhere Support Subscription for $24,000 per year.
Optimizing AWS EKS Cost
Terminate Pods When No Longer Needed
A workflow can include various instances and tools used for development, testing, and staging. However, you might not need them to be available at all times. Instead of wasting these resources, you can ensure they are available only during business hours.
You can temporarily reduce the number of pods available to these applications and instances by using the Kubernetes downscaler. The kube-downscaler includes settings that scale workloads down and back up at predefined times. It also provides options like forced uptime for exceptional periods and downscaling over weekends.
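A minimal sketch of how kube-downscaler is driven: it reads annotations on workloads. Assuming kube-downscaler is installed in the cluster and a deployment named `webshop` exists (both are assumptions for illustration), this keeps the deployment up only during business hours:

```shell
# kube-downscaler scales this deployment to zero outside the annotated
# uptime window and restores it when the window starts again.
kubectl annotate deployment webshop \
  'downscaler/uptime=Mon-Fri 08:00-18:00 UTC'
```

Development and staging workloads annotated this way stop consuming node capacity overnight and on weekends without any manual intervention.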
Cloud services and Kubernetes facilitate a high level of agility and a rapid deployment speed. However, rapidly-changing conditions can result in unclaimed environments (those previously deployed for tests or previews, for example). You can use Kubernetes Janitor to clean up clusters automatically.
Kubernetes Janitor lets you set time-to-live for all temporary deployments or separate resources. It allows you to specify a period after which the resources are automatically deleted. It also enables you to remove unused EBS volumes. These volumes can be otherwise overlooked, increasing your monthly Kubernetes costs by hundreds of dollars.
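The time-to-live mechanism is also annotation-driven. Assuming kube-janitor is running in the cluster, and using a hypothetical preview deployment name, a temporary environment can be made self-deleting like this:

```shell
# kube-janitor deletes this deployment once 24 hours have elapsed
# since the annotation was applied.
kubectl annotate deployment preview-app 'janitor/ttl=24h'
```

This is a good fit for per-branch preview environments, which are otherwise easy to forget and leave running indefinitely.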
Use Cluster Autoscaling
Auto-scaling is a cost optimization feature that enables you to reduce your cloud costs significantly. You can take advantage of it by running the Cluster Autoscaler, which detects pods that cannot be scheduled due to insufficient resources and adds nodes to run them. It also identifies underutilized nodes, reschedules their pods onto other nodes, and removes the idle nodes.
Control Resource Requests
Kubernetes uses resource requests to reserve CPU and memory for workloads. However, there is often a gap between the requested and actually used resources. This excess reservation is referred to as slack. Higher slack means more reserved-but-idle resources driving up your spending. The Kubernetes Resource Report tool helps you locate this excess and find specific areas where resource requests can be lowered.
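The slack calculation itself is simple arithmetic. With example figures (a pod requesting 1000m of CPU but using only 300m), the share of the reservation that sits idle is:

```shell
# Slack = (requested - used) / requested; here 70% of the CPU
# reservation is paid for but never used.
awk 'BEGIN { requested = 1000; used = 300
  printf "%.0f%%\n", (requested - used) / requested * 100 }'   # → 70%
```

Lowering the request toward actual usage (while keeping headroom for spikes) frees that capacity for other pods or lets the autoscaler remove nodes.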
Use Spot Instances for Kubernetes Workloads
Spot instances enable you to reduce your costs. If utilized successfully, spot instances can offer deeper discounts than reserved instances, up to 90%. However, since AWS can reclaim spot instances at any time, many teams use this discount mainly for non-critical or fault-tolerant workloads. You can also use spot instances for mission-critical workloads by leveraging automated workload management and cost optimization tools.
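As a hedged sketch, eksctl can create a Spot-backed managed node group directly; spreading it across several instance types (the types and sizes below are example choices) improves the odds that capacity is available in some pool:

```shell
# Add a Spot-only managed node group with multiple instance types,
# so interruptions in one capacity pool can be absorbed by another.
eksctl create nodegroup \
  --cluster demo-cluster \
  --name spot-workers \
  --spot \
  --instance-types m5.large,m5a.large,m4.large \
  --nodes-min 2 --nodes-max 10
```

Pairing a node group like this with the Cluster Autoscaler lets interrupted Spot capacity be replaced automatically.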
Use AWS Cost Allocation Tags
A cost allocation tag is a type of metadata you can assign to AWS resources for tracking purposes. These tags enable you to track AWS costs in detail, helping you identify, manage, organize, filter, and search for resources in a customized way.
This feature lets you create tags to categorize resources by various criteria, such as owner, purpose, and environment. You can create a tagging strategy for your project to identify the main sources of spending and detect those you can eliminate.
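Tags can be applied to an EKS cluster from the AWS CLI; in this illustrative example the cluster ARN and the tag keys and values are placeholders for whatever your tagging strategy defines:

```shell
# Tag a cluster so its costs can be grouped and filtered once the
# tags are activated as cost allocation tags in the Billing console.
aws eks tag-resource \
  --resource-arn arn:aws:eks:us-east-1:111122223333:cluster/demo-cluster \
  --tags team=payments,environment=staging
```

Note that tags only appear in cost reports after they are activated as cost allocation tags in the AWS Billing console.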
Related content: Read our guide to Kubernetes cost optimization (coming soon)
Use EKS Cost Optimization Tools
Cost optimization tools can help optimize Kubernetes workloads even further. With tools like Granulate’s capacity optimization, companies can eliminate over-provisioning and scale from a single deployment to a multi-cluster solution, with minimal manual R&D efforts.
The continuous and autonomous nature of the solution ensures that applications remain stable and resources are optimized no matter what happens in the environment, whether it is a peak in usage, a software update, or an infrastructure upgrade. By constantly targeting the cost-performance sweet spot, both application engineers and DevOps/FinOps teams can rest assured that they’re reaching their KPIs.
Learn more about capacity optimization and access the tool for free here.