Lambda, containers, and EC2 represent three compute models on AWS with different trade-offs. Lambda auto-scales and bills per request and per millisecond of execution, but caps runtime at 15 minutes. Containers offer portability and consistent environments across any infrastructure. EC2 gives you full control over virtual machines with no execution time limits. Your choice depends on your workload pattern, required control level, and cost structure preferences.
Key Takeaways
Use Lambda for event-driven workloads under 15 minutes that need automatic scaling without server management. Choose containers (ECS/EKS) when you need portability, consistent environments, and workloads of any duration, and can accept some management overhead. Pick EC2 when you need full OS control, must run legacy applications, require specific hardware, or have steady-state workloads where reserved instances make sense financially.
When Lambda Makes Sense
Lambda works best for sporadic workloads. You write code, upload it, and AWS handles everything else. No servers to patch, no capacity planning. You pay only when your code runs, calculated per millisecond.
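To make "you write code, upload it" concrete, here's a minimal sketch of what a Lambda function looks like. The function name and event fields are illustrative, but the shape (a handler taking an event and a context, returning a response) is how Lambda invokes your code:

```python
import json

# Minimal Lambda handler sketch. AWS calls handler(event, context) per
# invocation and bills only for the milliseconds it actually runs.
# The "name" field is an assumed example payload, not a Lambda convention.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally, you can exercise the handler by calling it directly with a dict standing in for the event, which keeps the feedback loop fast before you deploy.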
I’ve seen Lambda shine for API backends that get uneven traffic, image processing triggers, and scheduled tasks. A client saved 70% on costs by moving their nightly report generation from an EC2 instance (running 24/7) to Lambda (running 20 minutes per day).
Gotcha: Cold starts hurt. When Lambda hasn’t run recently, it takes extra time to initialize—sometimes seconds. This kills user experience for latency-sensitive applications. Provisioned concurrency solves this but adds cost.
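Short of paying for provisioned concurrency, the standard mitigation is structural: do expensive setup at module load, outside the handler, so it runs once per cold start and warm invocations reuse it. A sketch of the pattern (the init function is a stand-in for creating SDK clients, opening connections, or loading models):

```python
import time

_INIT_COUNT = 0  # tracks how many times setup ran, for illustration

def _expensive_init():
    """Stand-in for slow setup work (SDK clients, config, models)."""
    global _INIT_COUNT
    _INIT_COUNT += 1
    time.sleep(0.01)  # simulate slow initialization
    return {"ready": True}

# Module-level: runs during the cold start only, not on every invocation.
_CLIENT = _expensive_init()

def handler(event, context):
    # Warm invocations reuse _CLIENT with no re-initialization cost.
    return {"initialized": _CLIENT["ready"], "init_count": _INIT_COUNT}
```

Calling the handler repeatedly shows the setup cost was paid once: the init count stays at 1 no matter how many warm invocations follow.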
The 15-minute execution limit is a hard limit. No extensions, no exceptions. Your video transcoding job that takes 20 minutes? Lambda won’t work. You’ll also hit the 10GB memory ceiling eventually, and the default 512MB of temporary storage fills up faster than you’d expect.
When Containers Are Your Best Bet
Containers package your application with its dependencies. Build once, run anywhere—your laptop, a colleague’s machine, or production. This consistency eliminates “works on my machine” problems.
ECS (Elastic Container Service) offers AWS-native orchestration. It’s simpler but locks you into AWS. EKS (Elastic Kubernetes Service) runs Kubernetes, giving you portability across clouds and on-premises infrastructure.
We use containers for microservices architectures where different teams own different services. Each team picks their language and dependencies without conflicts. Containers also work well for batch processing jobs that exceed Lambda’s limits but don’t need a full EC2 instance running continuously.
Warning: Container orchestration has a learning curve. Kubernetes especially. I’ve watched teams spend months just getting comfortable with pods, services, and ingress controllers. Start with ECS if you’re new to containers—you can always migrate to EKS later.
Resource allocation matters more than you think. Set your CPU and memory limits carefully. Too low and your containers crash under load. Too high and you waste money. Finding the sweet spot takes monitoring and iteration.
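One way to turn that iteration into a repeatable rule is to size limits from observed peak usage plus a margin. The helper below is a hypothetical sketch: the 25% headroom and 128 MiB rounding increment are my assumptions, not an AWS recommendation, so tune them to your own monitoring data:

```python
import math

def suggest_memory_limit(peak_mib: float, headroom: float = 0.25,
                         increment_mib: int = 128) -> int:
    """Suggest a container memory limit in MiB.

    Takes the observed peak usage, adds a safety margin (assumed 25%),
    and rounds up to a scheduler-friendly increment (assumed 128 MiB).
    """
    target = peak_mib * (1 + headroom)
    return math.ceil(target / increment_mib) * increment_mib
```

For example, a service peaking at 900 MiB gets a 1152 MiB limit: enough headroom to survive spikes without paying for memory it never touches.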
When EC2 Is Still King
EC2 gives you a virtual machine. You control everything: the operating system, installed software, network configuration, storage. This flexibility comes with responsibility—you patch the OS, you monitor resources, you handle scaling.
Legacy applications often need EC2. That decade-old monolith with hard-coded file paths and specific library versions? EC2 lets you recreate its exact environment. You also need EC2 for applications requiring specific hardware like GPUs for machine learning or high-memory instances for in-memory databases.
Steady-state workloads favor EC2 financially. If you’re running something 24/7, reserved instances or savings plans cut costs by 30-70%. Lambda’s pay-per-execution model becomes expensive when you’re executing constantly.
Real-world anecdote: A company ran their database queries through Lambda because it seemed cheaper. Their queries ran every few seconds. The bill shocked them. Moving to a single t3.medium EC2 instance reduced costs by 85%.
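The arithmetic behind that kind of surprise is easy to sketch. Lambda bills per request plus per GB-second of execution; an EC2 instance bills per hour regardless of utilization. The prices below are illustrative assumptions (not current AWS list prices), and the workload numbers are made up to show how constant execution erodes Lambda's advantage:

```python
def lambda_monthly_cost(invocations: int, avg_ms: int, mem_gb: float,
                        gb_second_price: float = 0.0000166667,
                        request_price: float = 0.20 / 1_000_000) -> float:
    """Lambda bill: GB-seconds of execution plus a per-request charge.
    Both unit prices are assumed for illustration."""
    gb_seconds = invocations * (avg_ms / 1000) * mem_gb
    return gb_seconds * gb_second_price + invocations * request_price

def ec2_monthly_cost(hourly_price: float = 0.0416, hours: int = 730) -> float:
    """Always-on instance bill at an assumed small-instance hourly rate."""
    return hourly_price * hours

# A query every ~2 seconds, all month: about 1.3M invocations.
invocations = 30 * 24 * 60 * 30
heavy_lambda = lambda_monthly_cost(invocations, avg_ms=1500, mem_gb=4.0)
steady_ec2 = ec2_monthly_cost()
```

With these assumed numbers the constantly-running Lambda workload costs several times the always-on instance; the crossover point depends entirely on invocation rate, duration, and memory, which is why mapping your execution pattern comes first.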
You manage more with EC2. Auto Scaling Groups, Load Balancers, security patches, monitoring—all your responsibility. This operational overhead is real. Budget time for it.
Making the Decision
Start by mapping your execution pattern. Sporadic and event-driven? Lambda. Continuous with variable load? Containers. Continuous with predictable load? EC2.
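The mapping above is simple enough to write down as a lookup. This is a toy encoding of the rule of thumb, with category names of my own choosing:

```python
def pick_compute(pattern: str) -> str:
    """Map an execution pattern to a first-choice compute model.

    Pattern names are illustrative labels, not AWS terminology.
    """
    rules = {
        "sporadic": "lambda",                 # event-driven, bursty
        "continuous-variable": "containers",  # always on, load varies
        "continuous-steady": "ec2",           # always on, predictable
    }
    if pattern not in rules:
        raise ValueError(f"unknown pattern: {pattern}")
    return rules[pattern]
```

Treat the output as a starting point, not a verdict: team skills and cost structure (below) can override it.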
Consider your team’s skills. Lambda requires less operational knowledge but you’re constrained by AWS’s runtime options. Containers need orchestration expertise. EC2 demands traditional systems administration.
Don’t lock yourself into one option. Mix them. We run our API on Lambda, background jobs in containers, and our database on EC2. Each workload gets the compute model that fits it best.
Gotcha: The cheapest option on paper often isn’t cheapest in reality. Lambda’s zero operational overhead might save more money than EC2’s lower compute costs when you factor in the engineering time spent managing servers.
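A quick total-cost-of-ownership calculation makes this concrete: add the engineering time spent on operations to the raw compute bill. The hours and hourly rate below are assumptions for illustration:

```python
def monthly_tco(compute_cost: float, ops_hours: float,
                hourly_rate: float = 75.0) -> float:
    """Total cost of ownership: compute bill plus engineering time.
    The default hourly rate is an assumed fully-loaded figure."""
    return compute_cost + ops_hours * hourly_rate

# EC2 looks cheaper on compute alone ($30 vs $120 in this made-up case),
# but 10 hours/month of patching and monitoring flips the comparison.
lambda_tco = monthly_tco(compute_cost=120.0, ops_hours=0.5)
ec2_tco = monthly_tco(compute_cost=30.0, ops_hours=10.0)
```

In this made-up scenario the "expensive" Lambda option ends up several times cheaper once operational time is priced in, which is exactly the gotcha above.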
Conclusion
Lambda excels at event-driven, short-duration tasks with automatic scaling and minimal management. Containers provide portability and consistency for longer-running services and microservices architectures. EC2 delivers full control for legacy applications, specialized hardware needs, and predictable always-on workloads. Your workload characteristics, team capabilities, and cost structure determine the right choice—and you’ll likely use all three for different parts of your infrastructure.