EBS (Elastic Block Store) is block storage that attaches to a single EC2 instance like a hard drive, EFS (Elastic File System) is managed NFS storage that multiple instances can access simultaneously, and S3 (Simple Storage Service) is object storage for files accessed via API rather than a file system—choose based on whether you need single-instance performance (EBS), shared file access (EFS), or scalable object storage (S3).
Key Takeaways
EBS provides high-performance block storage for single EC2 instances, perfect for databases and boot volumes. EFS offers shared file storage that scales automatically and works across multiple instances and availability zones, ideal for content management and shared application data. S3 delivers unlimited object storage accessed through APIs, best for backups, static assets, and data lakes. Performance, access patterns, and cost structures differ dramatically—EBS charges for provisioned capacity, EFS for actual usage, and S3 for storage plus requests.
EBS: Your Instance’s Hard Drive
EBS volumes behave like physical hard drives attached to your EC2 instance. You format them with a file system (ext4, xfs, NTFS), mount them, and access them through standard file operations. The key limitation: one EBS volume attaches to one instance at a time (except for io2 Multi-Attach in specific scenarios).
You choose from several volume types. gp3 (General Purpose SSD) handles most workloads and lets you configure IOPS and throughput independently—I use this for 90% of my deployments. io2 (Provisioned IOPS SSD) delivers consistent low-latency performance for demanding databases. st1 (Throughput Optimized HDD) works for big data and log processing where sequential reads matter more than random access. sc1 (Cold HDD) provides the cheapest option for infrequently accessed data.
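As a rough decision aid, the volume-type tradeoffs above can be encoded as a small function. This is my own heuristic summary of the guidance in this section, not an AWS API:

```python
def pick_ebs_volume_type(needs_low_latency: bool,
                         iops_critical: bool,
                         sequential_bulk: bool,
                         rarely_accessed: bool) -> str:
    """Heuristic mapping of workload traits to an EBS volume type.

    Mirrors the guidance above: io2 for latency-critical databases,
    sc1 for cold data, st1 for sequential big-data scans, gp3 otherwise.
    """
    if iops_critical and needs_low_latency:
        return "io2"   # provisioned IOPS, consistent low latency
    if rarely_accessed:
        return "sc1"   # cheapest HDD tier for cold data
    if sequential_bulk:
        return "st1"   # throughput-optimized HDD for sequential reads
    return "gp3"       # sensible default for most workloads

print(pick_ebs_volume_type(True, True, False, False))    # io2
print(pick_ebs_volume_type(False, False, False, False))  # gp3
```

The fall-through to gp3 reflects the "90% of deployments" point: start there and move off it only when a specific trait demands it.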
Gotcha: EBS volumes exist in a single availability zone. If that AZ goes down, you can’t access your volume from an instance in another AZ. You need to snapshot and restore to move data between zones. I’ve seen production outages because teams didn’t realize their instance and volume had to be in the same AZ.
Performance scales with volume size on some types. A 100 GB gp3 volume gives you the same 3,000 IOPS baseline as a 1 TB gp3, but on the older gp2 type, baseline IOPS scale with volume size, so larger volumes got better performance. Always check the current specs because AWS changes these details.
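To make the gp2-versus-gp3 difference concrete, here is a sketch of the baseline-IOPS rules as published at the time of writing (verify against current AWS documentation before relying on them): gp2 earns 3 IOPS per provisioned GB with a 100 IOPS floor and a 16,000 IOPS cap, while gp3 starts at a flat 3,000 IOPS regardless of size:

```python
def gp2_baseline_iops(size_gb: int) -> int:
    # gp2 scales at 3 IOPS per GB, floored at 100 and capped at 16,000
    return max(100, min(16_000, 3 * size_gb))

def gp3_baseline_iops(size_gb: int) -> int:
    # gp3 starts at a flat 3,000 IOPS regardless of volume size
    return 3_000

print(gp2_baseline_iops(100), gp3_baseline_iops(100))      # 300 3000
print(gp2_baseline_iops(1_000), gp3_baseline_iops(1_000))  # 3000 3000
```

A 100 GB gp2 volume gets only 300 baseline IOPS; the same-size gp3 gets 3,000. This is why teams on gp2 often over-provisioned capacity purely to buy IOPS.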
Snapshots save you here. You can snapshot EBS volumes to S3 for backups. Snapshots are incremental—only changed blocks get stored after the first one. You can restore snapshots to new volumes, copy them across regions, or share them between accounts.
Use EBS for database storage (MySQL, PostgreSQL, MongoDB), boot volumes for EC2 instances, application servers that need low-latency storage, and any workload where one instance needs dedicated, high-performance block storage. It’s also perfect when you need to control IOPS precisely.
EFS: Shared Network File System
EFS provides NFS v4 storage that multiple EC2 instances can mount simultaneously. It’s fully managed, scales automatically from gigabytes to petabytes, and works across multiple availability zones in a region. You don’t provision capacity—it grows and shrinks as you add or remove files.
You access EFS by mounting it on Linux instances using standard NFS mount commands. Multiple instances across different AZs can read and write to the same file system concurrently. This makes it perfect for shared application data, content management systems, and development environments where teams need access to the same files.
EFS offers two performance modes: General Purpose (lower latency, most use cases) and Max I/O (higher aggregate throughput but slightly higher latency per operation). You can’t change performance mode after creation, so choose carefully. I’ve never needed Max I/O except for one massive parallel processing workload with hundreds of instances.
Storage classes reduce costs. Standard stores files you access frequently. Infrequent Access (IA) costs much less for files you don’t touch often. Lifecycle management automatically moves files to IA based on access patterns. I’ve seen storage costs drop 85% just by enabling lifecycle policies on log archives.
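The savings from lifecycle tiering are easy to estimate. The prices below are my assumptions based on us-east-1 list prices at the time of writing (roughly $0.30/GB-month for EFS Standard, $0.025/GB-month for IA); verify current pricing before budgeting:

```python
STANDARD = 0.30  # assumed EFS Standard $/GB-month (us-east-1; verify)
IA = 0.025       # assumed EFS Infrequent Access $/GB-month (verify)

def efs_monthly_cost(total_gb: float, ia_fraction: float) -> float:
    """Blended monthly cost once a lifecycle policy has tiered
    ia_fraction of the data down to Infrequent Access."""
    return total_gb * ((1 - ia_fraction) * STANDARD + ia_fraction * IA)

before = efs_monthly_cost(1_000, 0.0)   # everything on Standard
after = efs_monthly_cost(1_000, 0.95)   # 95% tiered to IA (e.g. old log archives)
print(f"${before:.2f} -> ${after:.2f}")  # $300.00 -> $38.75
```

With 95% of a terabyte tiered to IA, the bill drops about 87%, which lines up with the 85% reduction I saw on log archives.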
Warning: EFS costs significantly more than EBS per GB. Standard EFS runs about $0.30/GB/month versus $0.08/GB/month for gp3 EBS. You pay for convenience and shared access. Don’t use EFS when EBS works—I’ve audited environments wasting thousands monthly on EFS for single-instance workloads.
Throughput modes matter too. Bursting mode gives you throughput that scales with file system size. Provisioned mode lets you specify throughput independently of size, useful when you have small files but need high throughput. Elastic mode (newest) automatically scales throughput up and down—it’s more expensive but handles unpredictable workloads better.
Use EFS for content management systems, web serving environments that need shared storage, containerized applications requiring persistent shared storage, development and test environments, and big data analytics that need shared access to datasets. WordPress on multiple instances? EFS for the wp-content directory.
S3: Object Storage at Scale
S3 isn’t a file system. You can’t mount it and navigate directories. It stores objects (files) in buckets, and you access them through API calls or URLs. Each object has a key (like a file path) and metadata. This fundamental difference trips up newcomers who expect it to work like traditional storage.
S3 scales effectively without limit. You don't provision capacity—just upload objects. Standard-class data is stored redundantly across multiple availability zones automatically, giving you 99.999999999% (11 nines) durability. Statistically, that means if you store 10 million objects, you can expect to lose one about every 10,000 years.
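The 10,000-year figure follows directly from the durability number if you read 11 nines as an annual per-object loss probability of 1e-11 (a common simplification of how AWS states durability):

```python
ANNUAL_LOSS_PROB = 1e-11  # 11 nines of durability, read as yearly loss odds per object
objects = 10_000_000

expected_losses_per_year = objects * ANNUAL_LOSS_PROB  # ~0.0001 objects/year
years_per_lost_object = 1 / expected_losses_per_year   # ~10,000 years
print(expected_losses_per_year, years_per_lost_object)
```

Ten million objects times 1e-11 gives an expected loss of one ten-thousandth of an object per year, or one object per 10,000 years on average.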
Storage classes optimize costs based on access patterns. S3 Standard for frequently accessed data. S3 Standard-IA (Infrequent Access) costs less for monthly-access patterns. S3 One Zone-IA sacrifices multi-AZ redundancy for lower cost. S3 Glacier Instant Retrieval for archive data you need immediately when accessed. S3 Glacier Flexible Retrieval for archives you can wait minutes to hours to access. S3 Glacier Deep Archive for long-term archives with 12-hour retrieval, the cheapest at about $1/TB/month.
Intelligent-Tiering moves objects between access tiers automatically based on usage patterns. It costs a small monitoring fee per object but can save significant money if you’re not sure about access patterns. I enable it by default for new buckets unless I know exactly how the data will be accessed.
Gotcha: S3 charges for requests, not just storage. PUT, GET, LIST operations all cost money. A misconfigured application making millions of unnecessary requests can rack up surprising bills. I’ve debugged applications with infinite retry loops hitting S3 that generated $10k+ monthly bills.
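To see how a retry loop turns into a real bill, here is a back-of-the-envelope calculation. The request prices are my assumptions from us-east-1 S3 Standard pricing at the time of writing (roughly $0.005 per 1,000 PUTs, $0.0004 per 1,000 GETs); check current pricing for your region:

```python
# Assumed S3 Standard request prices (us-east-1 at time of writing; verify):
PUT_PER_1000 = 0.005
GET_PER_1000 = 0.0004

def monthly_request_cost(puts: int, gets: int) -> float:
    """Request charges only; storage and transfer billed separately."""
    return puts / 1000 * PUT_PER_1000 + gets / 1000 * GET_PER_1000

# A runaway retry loop issuing 50 million PUTs/day over a 30-day month:
runaway = monthly_request_cost(puts=50_000_000 * 30, gets=0)
print(f"${runaway:,.2f}")  # $7,500.00
```

One misbehaving worker hammering PUTs is all it takes; a fleet of them is how those $10k+ monthly bills happen.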
S3 integrates with everything in AWS. CloudFront for content delivery, Lambda for event-driven processing, Athena for querying data, EMR for analytics. You can host static websites directly from S3, serve as a data lake for analytics platforms, or store application backups and logs.
Versioning protects against accidental deletions and overwrites. When enabled, S3 keeps all versions of objects. Delete a file? The delete marker becomes the current version, but previous versions remain. This saves you during ransomware attacks or accidental bulk deletions, but watch costs—you pay for all versions stored.
Use S3 for static website assets, application backups, log aggregation, data lakes and analytics, media storage and distribution, disaster recovery, and archival storage. Anything you access via API rather than traditional file operations fits S3’s model perfectly.
How to Choose
Ask yourself these questions: Does a single EC2 instance need this storage? Use EBS. Do multiple instances need simultaneous file system access? Use EFS. Are you storing files accessed via application code rather than mounted file systems? Use S3.
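Those three questions collapse into a tiny decision tree. This is a heuristic summary of this article, not an AWS tool:

```python
def choose_storage(shared_across_instances: bool,
                   mounted_file_system: bool) -> str:
    """Encode the decision questions above as a two-branch tree."""
    if not mounted_file_system:
        return "S3"   # accessed via API/application code, not a mount
    if shared_across_instances:
        return "EFS"  # many instances mounting the same file system
    return "EBS"      # dedicated block storage for one instance

print(choose_storage(shared_across_instances=False, mounted_file_system=True))   # EBS
print(choose_storage(shared_across_instances=True, mounted_file_system=True))    # EFS
print(choose_storage(shared_across_instances=False, mounted_file_system=False))  # S3
```

Note the ordering: "is it a mounted file system at all?" comes first, because if the answer is no, the shared/dedicated question never arises.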
Performance requirements matter. Need sub-millisecond latency and thousands of IOPS? EBS, specifically io2. Need shared access with decent performance? EFS. Latency-tolerant bulk storage? S3.
Consider cost structure. EBS charges for provisioned size regardless of usage—you pay for a 1 TB volume even if you use 100 GB. EFS and S3 charge for actual usage. But EFS costs more per GB than EBS, so don’t use it for single-instance workloads just because it auto-scales.
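The provisioned-versus-usage difference shows up clearly in effective cost per GB actually stored. Using the same assumed us-east-1 prices as earlier ($0.08/GB-month gp3, $0.30/GB-month EFS Standard; verify current pricing):

```python
GP3_PER_GB = 0.08  # assumed gp3 $/GB-month (verify current pricing)
EFS_PER_GB = 0.30  # assumed EFS Standard $/GB-month (verify)

def ebs_monthly(provisioned_gb: float) -> float:
    # EBS bills the provisioned size whether or not you fill it
    return provisioned_gb * GP3_PER_GB

def efs_monthly(used_gb: float) -> float:
    # EFS bills only the bytes you actually store
    return used_gb * EFS_PER_GB

# 1 TB provisioned but only 100 GB actually used:
print(ebs_monthly(1_000))        # $80.00 for the full terabyte
print(ebs_monthly(1_000) / 100)  # effective $0.80 per GB actually used
print(efs_monthly(100))          # $30.00, usage-billed
```

A badly over-provisioned EBS volume can cost more per stored GB than EFS, but that is an argument for right-sizing the volume, not for switching a single-instance workload to EFS.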
Access patterns reveal the right choice. Database files with random access? EBS. Shared configuration files? EFS. Millions of images served to users? S3 with CloudFront. Log files for long-term retention? S3 with lifecycle policies moving to Glacier.
You’ll often use combinations. I commonly see EC2 instances with EBS for the OS and application, EFS for shared uploads or session data, and S3 for backups, static assets, and logs. Each storage type solves specific problems—use the right tool for each job.
Real-world example: A WordPress deployment might use EBS for the database and OS, EFS for wp-content (shared across web servers), and S3 with CloudFront for serving uploaded images. This combination optimizes performance, enables scaling, and minimizes costs.
Conclusion
EBS delivers high-performance block storage for single EC2 instances, ideal for databases and applications needing low-latency dedicated storage. EFS provides managed NFS storage for scenarios requiring shared file system access across multiple instances and availability zones. S3 offers infinitely scalable object storage accessed through APIs, perfect for backups, static content, and data lakes. Choose based on your access pattern—single instance versus shared versus API access—and balance performance requirements against cost. Most production architectures use all three, leveraging each service’s strengths for different parts of the application stack.