AWS with Terraform (Day 18)
Day 18: Image Processing Serverless Project using AWS Lambda and Terraform
As part of my ongoing hands-on AWS Terraform learning, I completed a serverless image processing project that demonstrates how to build an event-driven, scalable workflow using AWS Lambda. The entire infrastructure is provisioned using Terraform, keeping the setup reproducible, version-controlled, and production-aligned.
The goal of this project was simple but practical: upload a single image and automatically generate multiple optimized variants without managing any servers.
Project Overview
This project implements an automated image processing pipeline using AWS serverless services.
When an image is uploaded to a source S3 bucket, an S3 event triggers a Lambda function. The Lambda function processes the image using the Pillow library and stores multiple optimized versions in a destination S3 bucket. Logging and execution metrics are captured in CloudWatch.
The entire workflow is deployed using Terraform.
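To keep the later snippets concrete, here is a minimal Terraform skeleton of the kind this project builds on. The region, provider version, and variable names are illustrative assumptions rather than the exact values used in the project.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# Illustrative inputs; names and defaults are assumptions
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "project_name" {
  description = "Prefix used for bucket, role, and function names"
  type        = string
  default     = "image-pipeline"
}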
Architecture Summary
The solution consists of the following components:
A source S3 bucket where original images are uploaded
A destination S3 bucket where processed images are stored
An AWS Lambda function written in Python for image processing
A Lambda layer containing the Pillow dependency
An S3 event notification to invoke Lambda on object creation
An IAM role and policy following least-privilege access
CloudWatch logs for monitoring and debugging
Each uploaded image is converted into five variants:
JPEG with 85 percent quality
JPEG with 60 percent quality
WebP format
PNG format
A 200x200 thumbnail
Terraform Implementation Highlights
All resources are provisioned using Terraform with a focus on security and maintainability.
S3 buckets are created with unique names, versioning enabled, server-side encryption, and public access fully blocked.
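A minimal sketch of how the source bucket can be declared with those settings is shown below; the destination bucket follows the same pattern. Resource names and the random suffix are illustrative assumptions.

# Random suffix keeps the bucket name globally unique
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "source" {
  bucket = "${var.project_name}-source-${random_id.suffix.hex}"
}

# Versioning, default encryption, and a full public access block
resource "aws_s3_bucket_versioning" "source" {
  bucket = aws_s3_bucket.source.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "source" {
  bucket = aws_s3_bucket.source.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "source" {
  bucket                  = aws_s3_bucket.source.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}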
The Lambda execution role grants only the required permissions, as sketched after this list:
read access to the source bucket
write access to the destination bucket
CloudWatch logging permissions
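A hedged sketch of what that role and inline policy can look like (resource names are placeholders, and aws_s3_bucket.destination is assumed to be declared the same way as the source bucket shown earlier):

resource "aws_iam_role" "lambda_exec" {
  name = "${var.project_name}-lambda-role"

  # Allow the Lambda service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "lambda_permissions" {
  name = "${var.project_name}-lambda-policy"
  role = aws_iam_role.lambda_exec.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject"]
        Resource = "${aws_s3_bucket.source.arn}/*"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:PutObject"]
        Resource = "${aws_s3_bucket.destination.arn}/*"
      },
      {
        Effect   = "Allow"
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "*"
      }
    ]
  })
}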
The Pillow library is packaged as a Lambda layer to keep the function lightweight and reusable. The Lambda function configuration includes runtime, handler, timeout, memory settings, and environment variables such as the destination bucket name.
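A minimal sketch of the layer and function resources follows; the runtime, handler, file paths, and sizing values shown here are assumptions for illustration.

resource "aws_lambda_layer_version" "pillow" {
  layer_name          = "${var.project_name}-pillow"
  filename            = "${path.module}/layer/pillow_layer.zip"
  compatible_runtimes = ["python3.12"]
}

resource "aws_lambda_function" "image_processor" {
  function_name = "${var.project_name}-processor"
  role          = aws_iam_role.lambda_exec.arn
  runtime       = "python3.12"
  handler       = "handler.lambda_handler"
  filename      = "${path.module}/build/function.zip"
  timeout       = 60
  memory_size   = 512
  layers        = [aws_lambda_layer_version.pillow.arn]

  environment {
    variables = {
      # Destination bucket assumed to be declared like the source bucket
      DESTINATION_BUCKET = aws_s3_bucket.destination.id
    }
  }
}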
An S3 bucket notification is configured to trigger the Lambda function on object creation events, along with an explicit Lambda permission allowing invocation from S3.
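In Terraform, that wiring can be sketched roughly as follows (resource names are placeholders):

# Let S3 invoke the function...
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.source.arn
}

# ...and fire it on every object created in the source bucket
resource "aws_s3_bucket_notification" "on_upload" {
  bucket = aws_s3_bucket.source.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}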
Dependency Packaging and Reliability
One important lesson from this project was how to avoid dependency mismatches between the build environment and the Lambda runtime.
Native libraries like Pillow must be compiled for the same environment that Lambda runs on. To prevent runtime import errors, the Lambda layer is built inside a Docker container that matches AWS Lambda’s Amazon Linux runtime. This ensures the layer works reliably in production rather than only on the local machine.
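One way to keep that build step inside Terraform is a local-exec provisioner that runs pip in a Lambda-compatible container. This is only a sketch under assumptions: the container image tag, the directory layout, and the use of the null provider are illustrative, and the same command can just as well be run by hand or in CI.

# Builds layer/pillow_layer.zip inside a container matching the Lambda runtime.
# The image tag is an assumption; pick one that matches your deployed runtime.
resource "null_resource" "pillow_layer_build" {
  triggers = {
    requirements = filemd5("${path.module}/layer/requirements.txt")
  }

  provisioner "local-exec" {
    command = <<-EOT
      docker run --rm --entrypoint /bin/sh \
        -v "${abspath(path.module)}/layer:/work" \
        public.ecr.aws/sam/build-python3.12 \
        -c "pip install -r /work/requirements.txt -t /work/python"
      cd "${abspath(path.module)}/layer" && zip -r pillow_layer.zip python
    EOT
  }
}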
Testing and Verification
After deployment, testing was straightforward.
Uploading an image to the source S3 bucket immediately triggered the Lambda function. CloudWatch logs confirmed successful execution, and the destination bucket contained all five processed image variants.
This validated the end-to-end flow from event trigger to processing and storage.
Key Learnings
Built a fully serverless, event-driven image processing pipeline
Used Terraform to provision and wire AWS services cleanly
Packaged native Python dependencies using Lambda layers
Avoided runtime issues by building layers with Docker
Applied least-privilege IAM policies for security
Understood Lambda execution behavior, cold starts, and logging
Designed a scalable solution without managing servers
Final Thoughts
This project reinforced how powerful serverless architectures can be when combined with Infrastructure as Code. With Terraform and AWS Lambda, it is possible to build scalable, cost-efficient systems that respond automatically to events while remaining easy to manage and extend.
Next improvements could include exposing processed images through API Gateway, integrating CloudFront for CDN delivery, or adding metadata extraction and tagging.
Day 18 focused on building something practical, production-oriented, and fully automated using AWS and Terraform.