AWS Fargate is a serverless compute engine for containers that works with Amazon ECS and EKS, eliminating the need to provision, configure, or scale virtual machines to run your containers. You define your application’s CPU and memory requirements, and Fargate handles all the infrastructure management, allowing you to focus entirely on building and running your applications.
Key Takeaways
Fargate removes server management from container deployments—you don’t provision or maintain EC2 instances. You pay only for the vCPU and memory resources your containers use, calculated per second with a one-minute minimum. Fargate automatically scales infrastructure based on your task requirements and provides task-level isolation with dedicated compute resources for each task. It works with both ECS and EKS, integrates with VPC networking, and supports AWS monitoring and security services.
What is AWS Fargate
Fargate is AWS’s serverless container platform. When you run containers on traditional ECS with EC2, you manage a fleet of servers. With Fargate, AWS abstracts away the entire server layer. You never see or manage the underlying hosts.
Think of Fargate as “containers as a service.” You submit a container image and resource requirements, and AWS runs it. No capacity planning, no server patching, no cluster optimization.
How Fargate Works
When you launch a task on Fargate, you specify CPU and memory in your task definition. Fargate provisions the exact compute resources needed and launches your containers in an isolated environment.
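To make this concrete, here is a minimal Fargate task definition sketched as the Python dict you would pass to boto3's `ecs.register_task_definition`. The family name, image, and role ARN are illustrative placeholders, not values from this article:

```python
# Minimal ECS task definition targeting the Fargate launch type.
# All names, images, and ARNs below are hypothetical examples.
task_definition = {
    "family": "web-app",                     # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2
    "networkMode": "awsvpc",                 # required for Fargate tasks
    "cpu": "512",                            # task-level CPU: 0.5 vCPU
    "memory": "1024",                        # task-level memory: 1 GB
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}
```

With a definition like this registered, launching the task is a matter of calling `run_task` with `launchType="FARGATE"`; Fargate provisions compute matching the `cpu` and `memory` values and starts the containers.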
Each task runs in its own kernel runtime environment. Tasks don’t share CPU, memory, storage, or network resources with other tasks. This isolation improves security compared to running multiple containers on the same EC2 instance.
Fargate supports both ECS and EKS. For ECS, you create task definitions with the Fargate launch type. For EKS, you create Fargate profiles that define which pods run on Fargate based on namespace and labels.
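The EKS side of that selection can be sketched as a Fargate profile, shaped like the input to boto3's `eks.create_fargate_profile`. The cluster, namespace, and label values are hypothetical; EKS schedules a pod onto Fargate only when it matches a profile's namespace (and labels, when given):

```python
# Sketch of an EKS Fargate profile. All names and IDs are placeholders.
fargate_profile = {
    "fargateProfileName": "backend-profile",  # hypothetical profile name
    "clusterName": "demo-cluster",            # hypothetical EKS cluster
    "podExecutionRoleArn": "arn:aws:iam::123456789012:role/eksFargatePodRole",
    "subnets": ["subnet-0abc1234"],           # Fargate pods need private subnets
    "selectors": [
        {
            "namespace": "backend",           # match pods in this namespace...
            "labels": {"compute": "fargate"}, # ...that also carry this label
        }
    ],
}
```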
Resource Configuration
Fargate offers predefined CPU and memory combinations, ranging from 0.25 vCPU with 512 MB of memory up to 16 vCPU with 120 GB. Not every CPU-memory pairing is valid; each vCPU size supports a specific range of memory values.
You can allocate resources at the task level or container level. Task-level resources define the total available to all containers in a task. Container-level resources set limits for individual containers within that task.
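A small helper makes the pairing rule concrete. The table below follows the Linux/x86 combinations documented by AWS; treat it as a sketch and confirm current values against the Fargate documentation:

```python
# Valid task-level CPU/memory pairings for Fargate (Linux/x86), keyed by vCPU.
# Memory values are in GB. Treat this table as illustrative; verify against
# the current AWS documentation before relying on it.
VALID_COMBINATIONS = {
    0.25: [0.5, 1, 2],
    0.5: [1, 2, 3, 4],
    1: [2, 3, 4, 5, 6, 7, 8],
    2: list(range(4, 17)),        # 4-16 GB in 1 GB steps
    4: list(range(8, 31)),        # 8-30 GB in 1 GB steps
    8: list(range(16, 61, 4)),    # 16-60 GB in 4 GB steps
    16: list(range(32, 121, 8)),  # 32-120 GB in 8 GB steps
}

def is_valid_fargate_size(vcpu: float, memory_gb: float) -> bool:
    """Return True if Fargate offers this task-level CPU/memory pairing."""
    return memory_gb in VALID_COMBINATIONS.get(vcpu, [])
```

For example, 0.25 vCPU with 512 MB is a valid pairing, while 0.25 vCPU with 4 GB is not; a task definition requesting an invalid pairing is rejected at registration time.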
Networking
Fargate requires the awsvpc networking mode. Each task gets its own elastic network interface (ENI) with a private IP address from your VPC. You control network access using security groups attached directly to tasks.
Tasks can run in public or private subnets. In a private subnet without internet access, outbound connections need a NAT gateway, and access to AWS services needs VPC endpoints. In a public subnet, a task needs a public IP address assigned to reach the internet.
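The awsvpc setup above corresponds to the network configuration passed to `ecs.run_task`. The subnet and security group IDs here are hypothetical placeholders; in a private subnet you leave `assignPublicIp` disabled and rely on a NAT gateway or VPC endpoints, while in a public subnet you would set it to `ENABLED`:

```python
# awsvpc network configuration for run_task. IDs are placeholders.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0abc1234"],     # private subnet (hypothetical ID)
        "securityGroups": ["sg-0def5678"],  # firewall rules applied per task
        "assignPublicIp": "DISABLED",       # use ENABLED in a public subnet
    }
}
```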
Storage
Fargate provides 20 GiB of ephemeral storage by default for each task, configurable up to 200 GiB. This storage is temporary; data disappears when the task stops.
For persistent data, mount EFS file systems to your Fargate tasks. This allows multiple tasks to share data and preserves information across task restarts.
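Both storage options live in the task definition. The fragment below sketches an `ephemeralStorage` override alongside an EFS volume; the file system ID and size are illustrative placeholders:

```python
# Task definition fragments for Fargate storage. IDs and sizes are examples.
storage_settings = {
    "ephemeralStorage": {"sizeInGiB": 100},    # raise scratch disk above the default
    "volumes": [
        {
            "name": "shared-data",             # referenced by container mount points
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123abcd", # hypothetical EFS file system ID
                "transitEncryption": "ENABLED",# encrypt NFS traffic in transit
            },
        }
    ],
}
```

Containers then reference the volume by name in their `mountPoints`, and the EFS data survives task restarts while the ephemeral disk does not.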
Pricing Model
You pay for the vCPU and memory resources your tasks use, calculated from when container images are pulled until the task terminates. Billing is per second with a one-minute minimum.
There’s no charge for stopped tasks or idle capacity. If your task uses 2 vCPU and 4 GB memory for 10 minutes, you pay only for those resources during that time. Pricing varies by region and operating system (Linux or Windows).
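The billing model is simple enough to express directly. The rates below are illustrative figures close to published Linux/x86 pricing in us-east-1; check the current Fargate price list before relying on them:

```python
# Per-second Fargate cost estimate. Rates are illustrative, not authoritative.
VCPU_PER_HOUR = 0.04048  # USD per vCPU-hour (illustrative us-east-1 Linux rate)
GB_PER_HOUR = 0.004445   # USD per GB-hour (illustrative us-east-1 Linux rate)

def fargate_task_cost(vcpu: float, memory_gb: float, seconds: float) -> float:
    """Estimate one task's cost from image pull to termination."""
    billed = max(seconds, 60)  # per-second billing with a one-minute minimum
    hours = billed / 3600
    return vcpu * VCPU_PER_HOUR * hours + memory_gb * GB_PER_HOUR * hours

# The article's example: 2 vCPU and 4 GB of memory for 10 minutes.
cost = fargate_task_cost(2, 4, 600)
```

Note that a task running for 30 seconds is billed the same as one running for 60 seconds, because of the one-minute minimum.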
Security
Fargate isolates tasks at the kernel and network level. Each task runs in its own dedicated environment without sharing resources with other customers’ workloads.
IAM roles attach to tasks, granting specific permissions to access AWS services. You don’t need to manage credentials inside containers. Security groups control network traffic at the task level. Integration with AWS Secrets Manager and Systems Manager Parameter Store keeps sensitive data out of container images.
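Secret injection is declared in the container definition. The sketch below references a Secrets Manager ARN (a placeholder here); the task execution role must be allowed to read it, and Fargate resolves the value at launch and exposes it to the container as an environment variable:

```python
# Container definition fragment injecting a secret at launch time.
# The image and ARN are hypothetical placeholders.
container_definition = {
    "name": "api",
    "image": "public.ecr.aws/docker/library/python:3.12-slim",
    "secrets": [
        {
            "name": "DB_PASSWORD",  # environment variable the app reads
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-pass",
        }
    ],
}
```

The same `secrets` shape works with Systems Manager Parameter Store ARNs, so the credential never appears in the image or the task definition itself.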
When to Use Fargate
Fargate suits workloads where you want to eliminate infrastructure management. It’s ideal for applications with variable traffic patterns since you don’t pay for idle servers. Microservices architectures benefit from task-level isolation and independent scaling.
Use Fargate when you want predictable per-task costs, need to reduce operational overhead, or lack dedicated DevOps resources for cluster management. It works well for batch jobs, CI/CD pipelines, and event-driven architectures.
Fargate vs EC2 Launch Type
EC2 launch type gives you more control and can be more cost-effective for steady-state workloads with high utilization. You can use Reserved Instances or Savings Plans for discounts. You have access to instance storage and can run specialized instance types with GPUs.
Fargate eliminates infrastructure management but costs more per vCPU-hour at full utilization. You can’t access the underlying host or use instance-specific features. The choice depends on your workload characteristics and operational preferences.
Conclusion
AWS Fargate removes the complexity of managing container infrastructure by providing serverless compute for ECS and EKS. You define resource requirements and networking, and Fargate handles provisioning, scaling, and isolation. While it costs more than optimized EC2 deployments, Fargate trades cost for simplicity, making it valuable when operational efficiency matters more than infrastructure optimization. It’s a practical choice for teams that want to run containers without becoming experts in cluster management.