AWS S3 for Developers: A Practical Guide to Cloud Storage
Amazon S3 (Simple Storage Service) is one of the most widely used AWS services, and one of the most versatile. Whether you are storing user uploads, serving static assets, hosting a website, or archiving data, S3 is the go-to solution. This guide covers everything a developer needs to know to use S3 effectively.
What Is S3?
S3 is an object storage service. Unlike a file system (which organizes files in a hierarchy of folders), S3 stores objects in flat buckets. Each object has:
- A key: the object name, which can look like a path (`images/user-123/avatar.jpg`)
- A value: the actual data (any file up to 5TB)
- Metadata: key-value pairs describing the object
- A version ID: assigned when versioning is enabled
S3 is designed for 99.999999999% (11 nines) durability: objects are automatically replicated across multiple Availability Zones.
Core Concepts
Buckets
A bucket is a container for objects. Bucket names must be globally unique across all AWS accounts.
```
Bucket: my-app-user-uploads
Region: eu-west-1
Objects:
  avatars/user-123.jpg
  avatars/user-456.png
  documents/report-2025.pdf
```
Naming rules: lowercase letters, numbers, and hyphens only; 3-63 characters; must be globally unique.
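As a quick illustration, the rules above can be checked with a small validator. This is only a sketch of the rules listed here; the full AWS specification has additional constraints (for example, names must not look like IP addresses):

```javascript
// Sketch of a bucket-name validator for the rules listed above:
// lowercase letters, digits, and hyphens; 3-63 characters;
// must start and end with a letter or digit.
// The full AWS spec has additional rules not covered here.
function isValidBucketName(name) {
  if (name.length < 3 || name.length > 63) return false;
  return /^[a-z0-9][a-z0-9-]*[a-z0-9]$/.test(name);
}
```

Global uniqueness, of course, can only be checked against AWS itself at creation time.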
Storage Classes
| Class | Use case | Cost |
|---|---|---|
| S3 Standard | Frequently accessed data | Highest |
| S3 Standard-IA | Infrequent access, rapid retrieval | Lower |
| S3 Glacier Instant | Archives, millisecond retrieval | Low |
| S3 Glacier Flexible | Archives, 1-12 hour retrieval | Very low |
| S3 Deep Archive | Long-term, 12-48 hour retrieval | Lowest |
| S3 Intelligent-Tiering | Automatically moves between tiers | Variable |
Use lifecycle rules to automatically transition objects between storage classes as they age.
Permissions and Access Control
Bucket Policies
Bucket policies are JSON documents that control who can access your bucket and what they can do:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-public-bucket/*"
    }
  ]
}
```
This makes all objects in the bucket publicly readable, which is suitable for a static website or public CDN assets.
IAM Policies
For application access, attach an IAM policy to your EC2 instance role, Lambda function, or IAM user:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-app-bucket"
    }
  ]
}
```
Security best practice: use IAM roles for applications running on AWS (EC2, Lambda, ECS); never hardcode access keys in your code.
Block Public Access
S3 has a "Block Public Access" setting that overrides all bucket policies and ACLs. For buckets storing private user data, enable all four settings. Only disable when you intentionally need public access (static website hosting, public assets).
Presigned URLs
Presigned URLs are temporary URLs that grant access to a specific S3 object without making it public. Perfect for user uploads and private file downloads.
Upload with presigned URL (Node.js SDK v3)
```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-west-1" });

async function getUploadUrl(key, contentType) {
  const command = new PutObjectCommand({
    Bucket: "my-app-uploads",
    Key: key,
    ContentType: contentType,
  });
  const url = await getSignedUrl(s3, command, { expiresIn: 300 });
  return url; // Valid for 5 minutes
}

// Client uploads directly to S3 using this URL (PUT request)
// Your server never handles the file data
```
Download with presigned URL
```javascript
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

async function getDownloadUrl(key) {
  const command = new GetObjectCommand({
    Bucket: "my-app-uploads",
    Key: key,
  });
  return getSignedUrl(s3, command, { expiresIn: 3600 }); // 1 hour
}
```
This pattern, where the client uploads and downloads directly with presigned URLs, avoids routing large files through your server, saving bandwidth and compute costs.
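On the client side, the upload step is just a plain HTTP PUT. A minimal sketch (the presigned URL would come from your server; the function and parameter names here are illustrative):

```javascript
// Minimal client-side upload using a presigned URL.
// `presignedUrl` comes from your server; `file` is a Blob or Buffer.
// Works in browsers and Node 18+, where fetch is a global.
async function uploadWithPresignedUrl(presignedUrl, file, contentType) {
  const response = await fetch(presignedUrl, {
    method: "PUT",
    // Content-Type must match the ContentType the URL was signed with
    headers: { "Content-Type": contentType },
    body: file,
  });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
}
```

Note that the `Content-Type` header must match the value used when signing, or S3 rejects the request with a signature mismatch.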
Using the AWS SDK (Node.js)
Upload an object
```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFileSync } from "fs";

const s3 = new S3Client({ region: "eu-west-1" });

async function uploadFile(localPath, s3Key) {
  const fileContent = readFileSync(localPath);
  await s3.send(new PutObjectCommand({
    Bucket: "my-app-bucket",
    Key: s3Key,
    Body: fileContent,
    ContentType: "image/jpeg",
    Metadata: {
      "uploaded-by": "api-server",
    },
  }));
  console.log(`Uploaded to s3://my-app-bucket/${s3Key}`);
}
```
Download an object
```javascript
import { GetObjectCommand } from "@aws-sdk/client-s3";

async function downloadFile(s3Key) {
  const response = await s3.send(new GetObjectCommand({
    Bucket: "my-app-bucket",
    Key: s3Key,
  }));
  // response.Body is a ReadableStream
  const chunks = [];
  for await (const chunk of response.Body) {
    chunks.push(chunk);
  }
  return Buffer.concat(chunks);
}
```
List objects
```javascript
import { ListObjectsV2Command } from "@aws-sdk/client-s3";

async function listUserFiles(userId) {
  const response = await s3.send(new ListObjectsV2Command({
    Bucket: "my-app-bucket",
    Prefix: `uploads/${userId}/`,
  }));
  // Contents is undefined when no objects match the prefix
  return (response.Contents ?? []).map(obj => ({
    key: obj.Key,
    size: obj.Size,
    lastModified: obj.LastModified,
  }));
}
```
Delete an object
```javascript
import { DeleteObjectCommand } from "@aws-sdk/client-s3";

async function deleteFile(s3Key) {
  await s3.send(new DeleteObjectCommand({
    Bucket: "my-app-bucket",
    Key: s3Key,
  }));
}
```
Static Website Hosting
S3 can serve static websites directly:
- Enable static website hosting in the bucket settings
- Set `index.html` as the index document
- Set a bucket policy to allow public `GetObject`
- Your site is live at `http://bucket-name.s3-website-region.amazonaws.com`
For production, put CloudFront in front for HTTPS, custom domain, and global CDN caching.
Lifecycle Rules
Automatically manage object storage costs:
```json
{
  "Rules": [
    {
      "ID": "move-old-uploads-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "uploads/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```
This moves uploads to cheaper storage after 30 days, to Glacier after 90, and deletes them after a year.
Common Interview Questions
Q: What is the difference between S3 Standard and S3 Standard-IA?
Both offer the same durability and availability SLA, but Standard-IA has lower storage costs with a minimum storage duration of 30 days and a retrieval fee. Use it for data you access less than once a month.
Q: How do you make an object in a private bucket accessible temporarily?
Use a presigned URL β a time-limited URL that grants access to a specific object using the permissions of the AWS principal that generated it.
Q: What is S3 versioning?
When enabled, S3 keeps all versions of an object rather than overwriting. Deleting an object adds a delete marker instead of removing it. You can restore previous versions. Useful for user files and backups.
Q: What is the maximum size of an S3 object?
5TB. Objects over 5GB must use multipart upload (recommended for anything over 100MB).
Practice AWS on Froquiz
AWS is one of the most in-demand skills across backend and DevOps roles. Test your cloud and infrastructure knowledge on Froquiz across multiple AWS topics.
Summary
- S3 stores objects (files) in buckets: a flat structure where keys can look like paths
- Choose the right storage class for your access pattern to control costs
- Bucket policies for resource-based access; IAM policies for identity-based access
- Presigned URLs grant temporary, secure access without making objects public
- Direct upload pattern: the client uploads to S3 with a presigned URL, so your server never handles file data
- Lifecycle rules automatically move objects to cheaper storage tiers as they age
- Always use IAM roles for applications on AWS; never hardcode credentials