AWS Compute Services

AWS Fargate

Updated June 21, 2025

The Core Value Proposition: Abstracting the Server

Traditionally, running containers involved creating a cluster of virtual machines (like EC2 instances), installing container daemons and agents, and managing the health and capacity of that cluster.

Fargate eliminates this operational overhead. It integrates with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) to provide a "serverless" experience for running your containerized workloads. You simply define your application, specify the CPU and memory it requires, and Fargate launches and manages the containers for you in a secure, isolated environment.


How Fargate Works with ECS and EKS

Fargate is not a standalone orchestrator; it is a launch type or compute option within ECS and EKS.

Fargate with Amazon ECS

  • Integration: This is the most seamless and common use of Fargate. In your ECS Task Definition, you simply select "Fargate" as the launch type.
  • Workflow: You package your application in a container, define its resource needs in an ECS Task Definition, and ECS launches it on Fargate. You don't create or manage any EC2 instances in your cluster.
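The ECS workflow above can be sketched as a task definition in code. This is a minimal illustration using boto3-style parameters; the family name, image URI, and CPU/memory sizes are placeholder assumptions, not values from this article.

```python
# Hypothetical ECS task definition targeting the Fargate launch type.
# All names and values are illustrative assumptions.
task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],   # run on Fargate, not EC2
    "networkMode": "awsvpc",                  # required for Fargate tasks
    "cpu": "256",                             # 0.25 vCPU, in CPU units
    "memory": "512",                          # MiB; must pair with a valid CPU size
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

# With boto3 this would be registered against a real account as:
#   boto3.client("ecs").register_task_definition(**task_definition)
```

Note that no EC2 instance type or AMI appears anywhere in the definition; CPU and memory are requested directly, which is the essence of the serverless model.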

Fargate with Amazon EKS

  • Integration: Fargate allows you to run Kubernetes Pods without provisioning worker nodes.
  • Workflow: You create an EKS cluster and define a "Fargate Profile." This profile specifies which Pods (based on namespaces and labels) should run on Fargate. When a Pod matching the profile is launched, EKS automatically provisions it on Fargate. This allows for a "mixed-mode" cluster where some pods run on self-managed EC2 nodes (for workloads requiring more control) and others run on Fargate (for general-purpose workloads).
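A Fargate Profile is just a named set of Pod selectors attached to a cluster. The sketch below shows the shape of the parameters for the boto3 `create_fargate_profile` call; the cluster name, IAM role ARN, subnet IDs, namespace, and labels are all placeholder assumptions.

```python
# Hypothetical EKS Fargate Profile: Pods in the "serverless" namespace
# carrying the label compute=fargate are scheduled onto Fargate; all
# other Pods land on the cluster's EC2 worker nodes (mixed-mode).
fargate_profile = {
    "fargateProfileName": "default-profile",
    "clusterName": "demo-cluster",
    "podExecutionRoleArn": "arn:aws:iam::123456789012:role/eks-fargate-pod-exec",
    "subnets": ["subnet-0abc1234", "subnet-0def5678"],  # private subnets only
    "selectors": [
        {"namespace": "serverless", "labels": {"compute": "fargate"}},
    ],
}

# Against a real cluster this would be:
#   boto3.client("eks").create_fargate_profile(**fargate_profile)
```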

Key Operational Concepts

The Fargate Responsibility Model

With Fargate, AWS manages:

  • The underlying host machines: provisioning, patching, and security.
  • The container runtime and ECS/EKS agent.
  • Scaling the underlying compute capacity.

You are responsible for:

  • Building and securing your container images.
  • Defining your application's resource requirements (CPU/memory).
  • Configuring networking and IAM permissions.

Networking with awsvpc Mode

A fundamental requirement of Fargate is the use of the awsvpc network mode.

  • How it Works: In this mode, every Fargate task (or pod) is provisioned with its own Elastic Network Interface (ENI). This ENI gets a private IP address directly from your VPC.
  • Benefits:
    • Enhanced Security: You can apply Security Groups directly to your tasks, providing fine-grained, firewall-like control.
    • Simplified Networking: No need for complex port mappings. Each task is a first-class citizen in your VPC.
    • Full VPC Integration: Tasks can seamlessly and privately communicate with other AWS services like RDS databases or ElastiCache clusters.
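In practice, awsvpc mode surfaces as a `networkConfiguration` block when a task is launched. The following is a minimal sketch of `run_task` parameters; the cluster, task definition family, subnet, and security group IDs are assumed placeholders.

```python
# Illustrative run_task parameters: because each Fargate task gets its
# own ENI, subnets and Security Groups attach directly to the task.
run_task_params = {
    "cluster": "demo-cluster",
    "launchType": "FARGATE",
    "taskDefinition": "web-app",              # assumed registered family
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0123456789abcdef0"],  # per-task firewall rules
            "assignPublicIp": "DISABLED",     # private VPC IP only
        }
    },
}

# Against a real account:
#   boto3.client("ecs").run_task(**run_task_params)
```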

Persistent Storage with Amazon EFS

Since Fargate tasks are ephemeral, you need a solution for persistent data. AWS Fargate integrates with Amazon Elastic File System (EFS) to provide persistent file storage for your containerized applications.

  • How it Works: You can configure your Fargate tasks to mount an EFS file system. This allows data to persist beyond the lifecycle of a single task.
  • Use Case: Ideal for content management systems, shared developer tools, and stateful applications where data needs to be accessed and shared across multiple running containers.
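An EFS mount is declared in two parts of the task definition: a volume backed by the file system, and a mount point inside the container. The sketch below assumes hypothetical file system, access point, and path values.

```python
# Hypothetical EFS-backed volume for a Fargate task definition.
volumes = [
    {
        "name": "shared-content",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",      # assumed EFS ID
            "transitEncryption": "ENABLED",              # encrypt NFS traffic
            "authorizationConfig": {
                "accessPointId": "fsap-0123456789abcdef0"  # assumed access point
            },
        },
    }
]

# Mount point inside the container definition: data written under this
# path persists beyond the task and is shared across tasks.
mount_points = [
    {"sourceVolume": "shared-content", "containerPath": "/var/www/uploads"}
]
```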

Fargate Pricing: On-Demand vs. Spot

You pay only for the vCPU and memory resources your application requests, billed per second from the time your container image is pulled until the task terminates.
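The per-second billing model can be made concrete with a small calculation. The rates below are example us-east-1 On-Demand figures used purely for illustration; check the current Fargate pricing page for real numbers.

```python
# Assumed example rates (USD, us-east-1 On-Demand) -- illustrative only.
VCPU_PER_HOUR = 0.04048     # per vCPU-hour
GB_PER_HOUR = 0.004445      # per GB-hour of memory

def fargate_cost(vcpu: float, memory_gb: float, seconds: int) -> float:
    """Cost of a task billed per second on requested vCPU and memory."""
    hours = seconds / 3600
    return vcpu * VCPU_PER_HOUR * hours + memory_gb * GB_PER_HOUR * hours

# A 0.5 vCPU / 1 GB task that runs for 10 minutes (600 seconds):
cost = fargate_cost(0.5, 1.0, 600)
```

Because billing stops the moment the task terminates, short-lived or bursty workloads pay only for the seconds they actually consume.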

Fargate On-Demand

  • Model: Standard pricing with no long-term commitments.
  • Use Case: Best for critical, steady-state applications that cannot tolerate interruption, such as user-facing web applications or APIs.

Fargate Spot

  • Model: Offers a significant discount (up to 70%) on On-Demand prices by using spare compute capacity in the AWS cloud.
  • Use Case: Perfect for fault-tolerant or interruptible workloads, such as batch processing jobs, image rendering, CI/CD pipelines, and development/testing environments. Fargate Spot tasks can be interrupted with a two-minute warning when AWS needs the capacity back.
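One common pattern is to blend the two models with a capacity provider strategy: keep a baseline on On-Demand and place the overflow on Spot. The sketch below shows the shape of such a strategy on `run_task`; the cluster, task family, base, and weights are illustrative assumptions.

```python
# Hypothetical mix of Fargate On-Demand and Fargate Spot capacity.
spot_run_task_params = {
    "cluster": "demo-cluster",
    "taskDefinition": "batch-job",
    "capacityProviderStrategy": [
        # Guarantee at least one task on uninterruptible On-Demand capacity...
        {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
        # ...and place roughly three quarters of the rest on cheaper Spot.
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
}

# Note: capacityProviderStrategy and launchType are mutually exclusive
# on run_task, so launchType is omitted here.
#   boto3.client("ecs").run_task(**spot_run_task_params)
```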

When to Choose Fargate vs. EC2 Launch Type

| Feature | AWS Fargate | EC2 Launch Type |
| --- | --- | --- |
| Management Model | Serverless: no servers to manage. | IaaS: you manage the EC2 instances. |
| Responsibility | Focus on application code and configuration. | Manage OS patching, security, and scaling of instances. |
| Control | Less control over the underlying host. | Full control over instance type, AMI, and OS. |
| Speed to Deploy | Very fast: no instances to boot. | Slower: EC2 instances must be provisioned and booted first. |
| Use Case | General-purpose workloads, microservices, rapid prototyping. | Workloads requiring specific instance types (e.g., GPU), deep OS customization, or strict compliance. |
| Cost Model | Potentially higher for steady, high-utilization workloads. | Can be more cost-effective for long-running, predictable workloads using Reserved Instances or Savings Plans. |

Choose Fargate when your priority is operational simplicity, speed of deployment, and removing infrastructure management overhead.
Choose the EC2 Launch Type when you need granular control over your environment, require specialized hardware, or have existing licensing or compliance constraints tied to the host.