Understanding AWS EKS: Features, Deployment, Pricing, and Pro Tips

Itay Gershon

Product Manager, Intel Granulate

What Is Amazon Elastic Kubernetes Service (AWS EKS)?

Amazon Elastic Kubernetes Service (AWS EKS) is a managed container service that allows users to run Kubernetes on AWS without needing to install, operate, and maintain their own Kubernetes control plane or nodes. It simplifies the process of deploying, managing, and scaling containerized applications using Kubernetes on AWS. 

With EKS, developers can focus more on their applications rather than the underlying infrastructure. AWS EKS is fully compatible with Kubernetes applications and tools, ensuring that existing workloads can be migrated seamlessly. 

Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes across multiple Availability Zones to ensure high availability. This makes it suitable for enterprises looking to deploy scalable and resilient applications in the cloud with minimal operational overhead.

Amazon EKS Features 

EKS offers the following key features.

Managed Kubernetes Clusters

Handles the complexity of the control plane setup and maintenance. The managed Kubernetes service ensures that the underlying infrastructure is automatically adjusted according to the application’s needs, providing scalability and reliability without requiring manual intervention. 

Hybrid Deployments

Allows organizations to run their Kubernetes applications in AWS cloud environments and on-premises. This flexibility supports a range of use cases, from applications that require low-latency access to on-premises systems to those needing to comply with data residency requirements. EKS offers three options for on-premises deployment:

  • AWS Outposts: Allows organizations to run AWS EKS and other Amazon services on dedicated AWS hardware deployed within their premises.
  • EKS Distro: Offers the same reliable and secure Kubernetes distribution used by Amazon EKS for on-premises deployments, ensuring consistency across environments. 
  • EKS Anywhere: Simplifies the creation and operation of Kubernetes clusters on in-house infrastructure, including virtual machines and bare metal servers.

Networking and Security

Provides features to help ensure that your Kubernetes clusters are secure, including:

  • Integration with Amazon VPC, allowing users to isolate their cluster within a virtual network and leverage VPC security groups and network ACLs for security. 
  • Support for native VPC networking with the Amazon VPC CNI plugin for Kubernetes, offering high-performance and private IP addressing for your pods.
  • Integration with AWS Identity and Access Management (IAM), which offers fine-grained access control over resources in your clusters. 
  • Compliance with a variety of standards and regulations, making it suitable for running workloads in regulated industries.

Serverless Compute

EKS integrates with AWS Fargate, which can run containers without users having to manage servers or clusters. Fargate automatically allocates the compute resources applications need and scales those resources to meet demand.  
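As an illustration, running pods on Fargate can be configured declaratively with a Fargate profile in an eksctl cluster config. This is a minimal sketch; the cluster name, region, and namespaces are placeholders to adapt to your environment.

```yaml
# Hypothetical eksctl config sketch: pods created in the selected
# namespaces are scheduled onto Fargate instead of EC2 nodes.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster      # placeholder cluster name
  region: us-east-1       # placeholder region

fargateProfiles:
  - name: fp-default
    selectors:
      # Pods in these namespaces run on Fargate
      - namespace: default
      - namespace: kube-system
```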

Amazon EKS Architecture

Let’s look at the main components of the EKS architecture:

Control Plane

Each Kubernetes cluster has a dedicated control plane, preventing overlap in infrastructure across clusters or AWS accounts. The control plane is made highly available through the distribution of at least two API server instances and three etcd instances across multiple Availability Zones within an AWS Region. EKS monitors these components to ensure optimal performance.

Worker Nodes / Compute

There are several ways to run and manage EKS worker nodes:

  • Amazon EC2 lets users run EKS worker nodes while retaining full control of server configuration; this approach is known as ‘self-managed nodes’.
  • AWS Fargate is a serverless option that abstracts the underlying servers, focusing on simplicity and ease of use. It automatically provisions and scales the compute resources needed by applications, billed based on usage. 
  • Karpenter offers dynamic provisioning of EC2 instances for Kubernetes worker nodes to meet workload demands. 
  • Managed node groups provide an intermediate level of control with automated management features while allowing customization. 
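To make the managed node group option concrete, here is a minimal eksctl config sketch. The names, instance type, and sizing are assumptions for illustration, not recommendations.

```yaml
# Hypothetical eksctl config sketch: a managed node group where AWS
# handles node provisioning and lifecycle, with sizing left to the user.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster      # placeholder cluster name
  region: us-east-1       # placeholder region

managedNodeGroups:
  - name: general-purpose
    instanceType: m5.large  # example instance type
    minSize: 2              # lower bound for autoscaling
    maxSize: 5              # upper bound for autoscaling
    desiredCapacity: 3      # initial node count
```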

Amazon EKS vs. Amazon ECS 

EKS differs from Amazon Elastic Container Service (ECS) in two primary ways:

Container Orchestrator

EKS is fully compatible with Kubernetes, providing a managed environment to run Kubernetes clusters without the hassle of managing the control plane or nodes. This compatibility is suitable for teams already using Kubernetes or looking to leverage its extensive ecosystem. 

ECS is a proprietary AWS container management service that integrates deeply with other AWS services. It offers a simpler approach to container orchestration, especially for teams prioritizing ease of use over broader Kubernetes ecosystem compatibility.

Service Focus

EKS focuses on providing a standardized Kubernetes experience, ensuring portability of applications across different environments that support Kubernetes. For organizations looking for the flexibility to run applications on-premises or in multiple cloud environments with the same toolset, EKS offers the broader ecosystem support and flexibility inherent in Kubernetes. 

ECS emphasizes simplicity and native AWS integration. ECS may be more suitable for projects that require tight integration with AWS services like IAM for granular access control and auto scaling for efficient resource management.  

Learn more in our detailed guide to EKS vs ECS (coming soon)

Amazon EKS Deployment Options

EKS can be deployed in the cloud, via Distro, on Outposts, or with EKS Anywhere.

Amazon EKS in the Cloud 

This fully managed Kubernetes option eliminates the need to install, operate, and maintain a Kubernetes control plane or nodes. It simplifies running Kubernetes applications in the AWS cloud by automating tasks such as version upgrades, patching, and scaling of the control plane components. 

Users can leverage AWS’s infrastructure to deploy their containerized applications quickly and efficiently without worrying about the underlying hardware or cluster management. This integration allows for advanced monitoring, security, and networking features that enhance application performance and resilience. It also offers seamless scalability.  

Amazon EKS Distro

EKS Distro offers users a way to run Kubernetes in environments outside of AWS cloud, providing the same reliable and secure Kubernetes distribution used by Amazon EKS. This option is designed for those who prefer or require running their applications on-premises or in other cloud environments but still want the consistency and compatibility with AWS EKS. 

It follows the same version release cycle as Amazon EKS, ensuring that users can benefit from the latest features and security updates. This deployment option supports scenarios where connectivity to AWS services is limited or where data residency and sovereignty concerns dictate on-premises solutions. 

Amazon EKS Anywhere 

EKS Anywhere enables organizations to deploy, manage, and operate Kubernetes clusters on-premises, including on their own virtual machines and physical servers. This offering is built on the Amazon EKS Distro, ensuring consistency with Amazon EKS in the cloud. Users can access familiar Amazon EKS tooling and workflows for cluster management.

This deployment option is particularly beneficial for scenarios requiring data sovereignty or where connectivity to the public cloud is limited.  

Amazon EKS on AWS Outposts 

By deploying Amazon EKS on Outposts, enterprises can run Kubernetes applications in their data centers or on-premises facilities with the same user experience as in the AWS cloud. AWS Outposts is dedicated hardware designed by AWS, which can be deployed directly on customer premises. This option caters to workloads that require low-latency access to on-premises systems, have data residency requirements, or must integrate with internal resources not available in the cloud.

With Outposts, extended clusters allow for a control plane running in an AWS Region while nodes are hosted on Outposts, suitable for leveraging regional services alongside local workloads. Local clusters operate entirely on Outposts, offering a solution when applications must run in closer proximity to local data or specific hardware.  

AWS EKS Pricing 

Amazon EKS simplifies Kubernetes operations by managing the control plane. Users pay $0.10 per hour for each Amazon EKS cluster they create. Additional charges apply based on the compute resources used by worker nodes, either through EC2 instances or AWS Fargate.

For example, if an organization runs three Amazon EKS clusters continuously for a month (30 days), the control plane cost calculation is straightforward: 3 clusters × $0.10 per hour × 24 hours/day × 30 days = $216, plus the charges for the EC2 instances or AWS Fargate resources used.
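The arithmetic above can be sketched as a small helper; note it covers only the control plane fee, with worker node compute billed separately.

```python
# Sketch of the EKS control-plane cost arithmetic from the example above.
# Worker node compute (EC2 or Fargate) is billed separately on top of this.
HOURLY_CLUSTER_RATE = 0.10  # USD per cluster per hour

def eks_control_plane_cost(clusters: int, days: int) -> float:
    """Monthly control-plane cost for a number of always-on clusters, in USD."""
    return round(clusters * HOURLY_CLUSTER_RATE * 24 * days, 2)

print(eks_control_plane_cost(3, 30))  # 216.0
```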

EKS Anywhere and Outposts have a separate pricing model, which is beyond the scope of this article. Pricing may vary based on customer environment sizing, enterprise discount programs (EDPs), or other discounts.

Learn more in our detailed guide to EKS pricing

AWS EKS: Pro Tips for Success

Here are some of the ways that organizations can make the most of EKS.

1. Right-Size Workloads 

Right-sizing workloads is critical for optimizing resource utilization and controlling costs in Amazon EKS environments, particularly when running self-managed nodes in Amazon EC2. This involves selecting the appropriate instance types and sizes for each workload based on their specific requirements. 

By carefully analyzing the CPU, memory, and I/O operations needed by applications, organizations can choose the optimal resources. Implementing autoscaling is another key aspect of right-sizing. Another option is to run workloads on Amazon Fargate, which automatically provisions the required resources in a serverless model.
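A simple way to reason about right-sizing is to derive resource requests from observed peak usage plus a safety margin. The helper below is a hypothetical sketch; the 20% headroom and the metric names are assumptions, not an EKS API.

```python
# Hypothetical right-sizing helper: suggests pod resource requests from
# observed peak usage plus a safety headroom (assumed at 20%).
def right_size(peak_cpu_millicores: float, peak_memory_mib: float,
               headroom: float = 0.20) -> dict:
    """Return suggested CPU/memory requests with headroom applied."""
    return {
        "cpu_millicores": round(peak_cpu_millicores * (1 + headroom)),
        "memory_mib": round(peak_memory_mib * (1 + headroom)),
    }

# A pod peaking at 400m CPU and 512 MiB would be sized to:
print(right_size(400, 512))  # {'cpu_millicores': 480, 'memory_mib': 614}
```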

2. Restrict Network Access to the EKS Cluster 

Configure VPC security groups and network access control lists (ACLs) to tightly control inbound and outbound traffic at the subnet level. Security groups act as virtual firewalls for EKS nodes, allowing admins to specify which traffic is permitted to reach each node and the services they host.

Additionally, Kubernetes network policies enable finer-grained control over the communication between pods within a cluster. By defining specific rules, administrators can restrict how pods communicate with each other and with external endpoints. This ensures that only authorized traffic can access the cluster’s resources. 

3. Implement Health Checks and Self-Healing

EKS uses Kubernetes liveness and readiness probes to monitor the health of containers. Liveness probes determine if an application is running correctly, restarting containers that fail their checks. Readiness probes assess if a container is ready to handle requests, preventing traffic from being forwarded to containers that aren’t fully prepared. 
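The two probe types can be declared directly on a container spec. This is a minimal sketch; the image, port, and endpoint paths are placeholders for illustration.

```yaml
# Sketch of liveness and readiness probes on a pod. The image and the
# /healthz and /ready paths are placeholders; use your app's endpoints.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: web
      image: nginx:1.25          # placeholder image
      ports:
        - containerPort: 80
      livenessProbe:             # restart the container if this fails
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:            # withhold traffic until this succeeds
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5
```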

Self-healing capabilities within Amazon EKS automatically address issues such as failed nodes or degraded application performance. When a node becomes unhealthy, the Kubernetes control plane detects this state and schedules workloads on other available nodes. This process minimizes downtime and service disruption, allowing applications to maintain high availability. 

4. Restrict In-Cluster Network Communication 

Kubernetes provides network policies that specify how groups of pods can communicate with each other and with other network endpoints. Network policies are enforced by the Kubernetes network plugin to create a secure and tailored networking environment. By default, pods are non-isolated; they accept traffic from any source. 

Network policies allow developers to enforce rules that isolate specific pods to only receive traffic from other pods that are explicitly allowed, creating an allowlist of accessible services.

Applying these policies helps in segmenting the network within the cluster and enforcing the principle of least privilege for pod communication. For example, a web application’s front-end might only be allowed to communicate with the back-end API service. 
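The front-end/back-end example can be expressed as a NetworkPolicy along these lines; the labels, namespace, and port are placeholder assumptions.

```yaml
# Sketch of a NetworkPolicy allowing only front-end pods to reach the
# back-end. Label names, namespace, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend            # policy applies to back-end pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only front-end pods may connect
      ports:
        - protocol: TCP
          port: 8080
```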

5. Use EKS Blueprints 

Amazon EKS Blueprints provide a declarative way to define and manage Kubernetes clusters in AWS, simplifying the setup of complex environments. These blueprints use Infrastructure as Code (IaC) practices, using AWS CloudFormation or HashiCorp Terraform to automate the deployment and configuration of EKS clusters. 

By encapsulating best practices and necessary configurations into reusable templates, EKS Blueprints simplify the process of creating well-architected Kubernetes clusters. This approach ensures consistency across deployments, reduces manual errors, and accelerates the provisioning of new environments. Teams can incorporate custom configurations, such as logging, monitoring setups, and network policies, directly into their cluster creation process.

6. Use HPA and VPA 

Horizontal Pod Autoscaler (HPA) adjusts the number of pod replicas in a Deployment or ReplicaSet based on observed CPU utilization or custom metrics. This allows applications to handle increased load without manual intervention, optimizing resource usage and maintaining performance.  

Vertical Pod Autoscaler (VPA) adjusts container CPU and memory limits within pods, helping to optimize the allocation of resources to match demand closely. Combining HPA and VPA can offer a comprehensive scaling solution that adjusts both the size and quantity of pods based on workload requirements. 

However, VPA may recommend changes that conflict with HPA actions. To mitigate this, it’s advisable to use VPA for applications with stable traffic but varying resource demands per request and HPA for workloads with fluctuating traffic levels. 
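HPA's core scaling rule, as documented by Kubernetes, is desired = ceil(currentReplicas × currentMetric / targetMetric). The sketch below implements that rule in isolation; real HPA behavior also includes a tolerance band and stabilization windows, omitted here for brevity.

```python
# The core HPA scaling rule from the Kubernetes documentation:
# desired = ceil(current_replicas * current_metric / target_metric).
# Tolerance and stabilization-window handling are omitted for brevity.
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    """Number of replicas HPA would aim for given an observed metric."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 replicas averaging 80% CPU against a 50% target scale out to:
print(hpa_desired_replicas(4, 80, 50))  # 7
```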
