
EKS on Fargate: Basics, Pricing, and How to Get Started

Jacob Simkovich

Brand and Content Manager, Intel Granulate

What Is AWS EKS? 

Amazon Elastic Kubernetes Service (EKS) is a managed service that simplifies running Kubernetes on AWS without the need to install and operate your own Kubernetes control plane or nodes. EKS automates key tasks such as patching, node provisioning, and updates. It supports the most popular Kubernetes applications and tools out of the box.

EKS integrates deeply with AWS services, enabling the use of AWS services such as Elastic Load Balancing (ELB), Amazon EC2, and AWS Identity and Access Management (IAM) when building Kubernetes environments. This integration improves the security, reliability, and scalability of applications.

What Is AWS Fargate? 

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). It eliminates the need to manage servers or clusters of Amazon EC2 instances, making it easier to focus on building applications.

Fargate allows users to work with containers without being exposed to the underlying server or cluster infrastructure. Users only need to specify the CPU and memory resources the application needs, and Fargate handles the allocation and maintenance of computing power. This offers a simpler model for resource allocation, reducing the operational overhead traditionally involved in running containerized applications.


How Does AWS Fargate Work with Amazon EKS?

AWS Fargate simplifies running Kubernetes applications on Amazon EKS by handling the infrastructure management for you. When you use Fargate with EKS, you don’t need to worry about managing servers or clusters. You only need to specify the requirements for your applications, such as CPU and memory, and Fargate automatically provisions and scales the necessary compute resources.

This integration allows you to focus on deploying and managing your applications, while Fargate takes care of the underlying infrastructure. It ensures that Kubernetes pods run efficiently, with resources allocated dynamically based on your needs.
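To make this concrete, the pod spec below (names are hypothetical) shows the resource requests that Fargate uses to size the compute it provisions. Fargate rounds the requests up to the nearest supported vCPU/memory combination, so the requests directly determine what you are billed for:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                          # hypothetical name
  namespace: example-kubernetes-namespace    # must match a Fargate profile selector
spec:
  containers:
    - name: web
      image: public.ecr.aws/nginx/nginx:latest
      resources:
        requests:
          cpu: "0.25"     # Fargate rounds requests up to a supported combination
          memory: 512Mi   # e.g. 0.25 vCPU / 0.5 GB
        limits:
          cpu: "0.5"
          memory: 1Gi
```

Note that Fargate also adds a small amount of memory on top of each pod's request for Kubernetes components, so observed pod size may be slightly larger than requested.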

Tips from the expert

In my experience, here are tips that can help you better optimize your use of EKS on Fargate:

1. Integrate with AWS Systems Manager for enhanced management: Use AWS Systems Manager to manage and automate operational tasks across your Fargate and EKS environments. This includes patch management, compliance, and operational insights.
2. Automate cost management with budgets and alerts: Set up AWS Budgets and Cost Anomaly Detection to monitor your spending on Fargate. This helps you catch unexpected cost spikes early and take corrective actions.
3. Reduce cold-start latency for sensitive workloads: Fargate pods take longer to start than pods on pre-provisioned nodes, because compute is provisioned on demand. For latency-sensitive applications, keep a warm pool of replicas (for example, a minimum replica count) so traffic never waits on pod provisioning. This can significantly improve user experience by decreasing initial response times.
4. Leverage Spot capacity for fault-tolerant workloads: Fargate Spot offers significant cost savings for workloads that can handle interruptions, such as batch jobs and CI/CD pipelines. Note that Fargate Spot is currently supported with Amazon ECS; for EKS, consider EC2 Spot Instances in managed node groups to get similar savings for interruption-tolerant Kubernetes workloads.
5. Utilize EKS Managed Node Groups for mixed workloads: While Fargate is great for serverless workloads, using EKS managed node groups for mixed workloads can provide flexibility and cost optimization, especially for steady-state or highly customizable workloads.

Fargate on EKS Pricing 

When using AWS Fargate with Amazon EKS, pricing is based on the vCPU and memory resources your pods consume, metered from the time the pod's container images start downloading until the pod terminates, billed per second with a one-minute minimum. There are no upfront costs.

The pricing structure for Fargate on EKS includes:

  • vCPU pricing: You are charged for the vCPU allocated to your pods, billed per vCPU-hour and prorated per second.
  • Memory pricing: Similarly, memory is billed per GB-hour, based on the amount allocated to your pods.

By optimizing pod sizing and resource requests, you can effectively manage costs: Fargate rounds each pod's requests up to the nearest supported vCPU/memory combination, so right-sized requests avoid paying for unused headroom. Fargate profiles themselves are free; the standard Amazon EKS cluster fee for the control plane is billed separately as part of EKS pricing.
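As a rough worked example of the cost model (the rates below are illustrative, not current AWS prices; check the Fargate pricing page for your region):

```shell
# Illustrative rates only -- look up current regional pricing before budgeting.
vcpu_rate=0.04048    # USD per vCPU-hour (hypothetical)
mem_rate=0.004445    # USD per GB-hour (hypothetical)

# Monthly cost of one pod sized at 0.5 vCPU / 1 GB running 730 hours:
cost=$(awk "BEGIN { printf \"%.2f\", (0.5 * $vcpu_rate + 1 * $mem_rate) * 730 }")
echo "$cost"    # -> 18.02
```

Because vCPU and memory are billed independently, sizing each dimension to actual usage (rather than rounding both up generously) compounds into real savings across many pods.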

Learn more in our detailed guide to EKS pricing 

Getting Started with AWS Fargate Using Amazon EKS 

Here’s an overview of how to use Fargate with EKS. The code and instructions are adapted from the official AWS documentation.

Step 1: Ensure Existing Nodes Can Communicate with Fargate 

For clusters with existing nodes, it’s crucial that these nodes can communicate with the Fargate pods. Fargate pods use the cluster security group by default, so existing nodes must be able to send and receive traffic from this group.

You can check and configure your cluster security group using the AWS Management Console or the AWS CLI. The following command retrieves the cluster security group ID:

aws eks describe-cluster --name example-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text

Ensure that this security group is attached to your existing nodes. For node groups created with eksctl or Amazon EKS managed CloudFormation templates, manually add the cluster security group to the nodes or modify the Auto Scaling group launch template.

Step 2: Create a Fargate Profile for the Cluster

Fargate pods require permissions to make calls to AWS APIs. Create an Amazon EKS pod execution role to provide the necessary IAM permissions. This role is essential for the proper functioning of Fargate pods.

Before scheduling pods on Fargate, you need to create a Fargate profile. This profile defines which pods run on Fargate. You can create a Fargate profile using eksctl or the AWS Management Console. Ensure you have eksctl version 0.183.0 or later. You can check your version with the following command:

eksctl version

To create a Fargate profile using eksctl, run the following command, replacing the placeholders with your actual values:

eksctl create fargateprofile \
    --cluster example-cluster \
    --name example-fargate-profile \
    --namespace example-kubernetes-namespace \
    --labels key=value
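The same profile can also be declared in an eksctl config file, which is easier to keep in version control. A minimal sketch, mirroring the placeholders above (the region is an assumption; use your own):

```yaml
# fargate-profile.yaml -- apply with: eksctl create fargateprofile -f fargate-profile.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1    # assumed region
fargateProfiles:
  - name: example-fargate-profile
    selectors:
      - namespace: example-kubernetes-namespace
        labels:
          key: value
```

Only pods whose namespace (and labels, if specified) match a selector in some Fargate profile are scheduled onto Fargate; everything else lands on your EC2 nodes.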

Step 3: Update CoreDNS

If you plan to run all your pods on Fargate, you need to update CoreDNS to run on Fargate as well. If you used eksctl with the --fargate option, you can skip this step.

Create a Fargate profile for CoreDNS using the following command, replacing the placeholders with your actual details:

aws eks create-fargate-profile \
    --fargate-profile-name coredns \
    --cluster-name example-cluster \
    --pod-execution-role-arn arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole \
    --selectors namespace=kube-system,labels={k8s-app=kube-dns} \
    --subnets subnet-0000000000000001 subnet-0000000000000002 subnet-0000000000000003

Finally, remove the eks.amazonaws.com/compute-type: ec2 annotation from the CoreDNS pods with this command:

kubectl patch deployment coredns \
    -n kube-system \
    --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

Best Practices for Running Fargate on EKS Effectively 

Here are some of the ways that organizations can ensure the effective deployment of Fargate on EKS.

1. Optimize Pod Sizing

Proper pod sizing helps ensure efficient resource utilization and cost management when using AWS Fargate with EKS. Start by understanding the resource requirements of your application: monitor CPU and memory usage over time and adjust pod resource requests and limits accordingly. This helps avoid over-provisioning, which can lead to unnecessary costs, and under-provisioning, which can cause performance issues. Tools like Kubernetes Metrics Server and Prometheus can provide insights into resource usage, aiding in optimal pod sizing.
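For example, after observing steady-state usage of roughly 200m CPU and 400Mi memory, requests can be set just above that level (the values below are illustrative). On Fargate, the pod's provisioned size, and therefore its cost, is derived from the requests, so oversized requests directly raise the bill:

```yaml
# Container-level resources block (values are illustrative).
resources:
  requests:
    cpu: 250m        # slightly above observed steady-state usage
    memory: 512Mi
  limits:
    cpu: 500m        # headroom for bursts
    memory: 1Gi
```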

2. Use EKS Fargate Profiles

EKS Fargate profiles define which pods should run on Fargate. Use these profiles to segment workloads based on specific needs. For example, separate development and production workloads to ensure isolation and better security. 

Additionally, specify namespaces and label selectors within your Fargate profiles to precisely control which pods use Fargate. This helps in managing costs and maintaining organizational policies.
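A sketch of this kind of segmentation in an eksctl config, using hypothetical namespace and label names:

```yaml
# Separate profiles isolate development and production workloads.
fargateProfiles:
  - name: dev-profile
    selectors:
      - namespace: dev
        labels:
          environment: development
  - name: prod-profile
    selectors:
      - namespace: prod
        labels:
          environment: production
```

Pods that match neither selector are not scheduled on Fargate, which makes the profiles themselves a policy boundary.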

3. Implement Logging and Monitoring

Effective logging and monitoring are essential for maintaining the health and performance of your applications on EKS Fargate. Use AWS CloudWatch to collect, monitor, and analyze logs from your applications. 

Configure Kubernetes logging to send logs to CloudWatch, and use CloudWatch Alarms to get notified of any anomalies. Additionally, implement monitoring with AWS X-Ray to trace requests through your application and diagnose performance issues.
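On Fargate, Amazon EKS provides a built-in Fluent Bit log router, configured through a ConfigMap named aws-logging in the aws-observability namespace. A minimal sketch that routes pod logs to CloudWatch Logs (the region and log group name are assumptions):

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true
```

The pod execution role also needs permission to write to CloudWatch Logs for this to take effect.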

4. Ensure Effective Lifecycle Management

Managing the lifecycle of your applications on EKS Fargate involves automating updates and rollbacks. Use Kubernetes Deployments to handle updates to your applications, ensuring that you can perform rolling updates with minimal downtime. 

Implement health checks and readiness probes to ensure that new pods are functioning correctly before directing traffic to them. Also, maintain backup and disaster recovery plans to handle unexpected failures, using AWS Backup for regular data backups.
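A minimal Deployment sketch combining these ideas, with a rolling-update strategy and a readiness probe (names, image, and probe path are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app    # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # keep most replicas serving during an update
      maxSurge: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: public.ecr.aws/nginx/nginx:latest
          readinessProbe:    # traffic is withheld until this passes
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

With this setup, a bad release stalls the rollout instead of taking down serving capacity, and `kubectl rollout undo` reverts it.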

Optimizing EKS with Intel® Tiber™ App-Level Optimization

Intel Tiber App-Level Optimization helps you keep Kubernetes SLAs competitive through autonomous pod resource rightsizing, optimizing containerized environments without compromising performance. It offers a real-time, continuous optimization solution for Kubernetes and container environments, enhancing performance while reducing costs.

The autonomous solution optimizes application workloads by dynamically adjusting resource allocations, eliminating over-provisioning, and reducing response times. App-Level Optimization requires no code changes and can be installed quickly, with full deployment achievable in under a week. It supports various provisioning methods, integrates seamlessly with CI/CD processes, and is capable of continuous learning and real-time optimization across diverse cloud environments.

For more details, visit Kubernetes & Containers Optimization.
