Deploying a small web service with AWS Lambda (Serverless Framework)

Soo Min Jeong
4 min read · Feb 23, 2024

Let’s say you built a side project with Django, Flask, Fastify, or Express. How much traffic do you actually expect? To be brutally honest, it is unlikely to spike to billions of users overnight. As much as scalability matters, the initial design should fit the small scale you actually have.

If your service expects fewer than 1M requests per month and fits within Lambda’s 50 MB (zipped) deployment package limit, AWS Lambda may serve your needs better than EC2. Using AWS Lambda as a server may not feel familiar; it is often used for something simple like a single feature or a small API. However, web applications are one of the use cases officially described by AWS.

Within that range, AWS Lambda is a better option than EC2 in terms of scalability, cost efficiency, and convenience of deployment. It can be cheaper because you pay per use, and it automatically scales up and down with traffic.

Serverless Framework

If you have never deployed an application on AWS Lambda, Serverless Framework offers a simple but versatile way to do it. If your application code is ready, setting up an automated deployment to Lambda takes less than 10 minutes. Once you connect your AWS account and point serverless.yml at the application’s handler, as in the following example, you are done.

# serverless.yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x
  runtimeManagement: auto # optional, how Lambda manages the runtime for all functions. AWS default is auto; can be 'auto' or 'onFunctionUpdate'. For 'manual', see the hello function below (syntax is identical)
  memorySize: 512 # optional, in MB, default is 1024
  timeout: 10 # optional, in seconds, default is 6
  versionFunctions: false # optional, default is true
  tracing:
    lambda: true # optional, enables tracing for all functions (true equals 'Active'; can be 'Active' or 'PassThrough')

functions:
  hello:
    handler: handler.hello # required, handler set in AWS Lambda
    name: ${sls:stage}-lambdaName # optional, deployed Lambda name
    description: Description of what the lambda function does # optional, description to publish to AWS
    runtime: python3.11 # optional overwrite, default is provider runtime
    runtimeManagement:
      mode: manual # syntax required for manual; mode also supports 'auto' or 'onFunctionUpdate' (see provider.runtimeManagement)
      arn: <aws runtime arn> # required when mode is manual
    memorySize: 512 # optional, in MB, default is 1024
    timeout: 10 # optional, in seconds, default is 6
    provisionedConcurrency: 3 # optional, count of provisioned Lambda instances
    reservedConcurrency: 5 # optional, reserved concurrency limit for this function; by default AWS uses the account concurrency limit
    tracing: PassThrough # optional overwrite, can be 'Active' or 'PassThrough'
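
With the configuration in place, deployment is a single command. A minimal sketch, assuming the standard Serverless CLI (the sls shorthand works as well):

# install the CLI once, then deploy from the project directory
npm install -g serverless
serverless deploy                # packages the code and provisions the stack via CloudFormation
serverless deploy --stage prod   # optionally target a named stage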

Wait, doesn’t AWS Lambda shut down an idle execution environment after a while? Will users have to suffer the boot time (a cold start) whenever that happens?

This is the most common concern when it comes to using AWS Lambda for a service that is expected to be available 100% of the time. Serverless Framework has a plugin called WarmUp, which pings the Lambda function on a schedule so that it stays warm around the clock; a sketch of its configuration follows.
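
A minimal sketch, assuming the serverless-plugin-warmup package (the warmer name and the five-minute schedule are illustrative; check the plugin’s documentation for the options your version supports):

# serverless.yml (warm-up sketch)
plugins:
  - serverless-plugin-warmup

custom:
  warmup:
    default:
      enabled: true
      events:
        - schedule: rate(5 minutes) # ping the function regularly so it stays warm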

Keeping it warm 24/7? Isn’t that too expensive?

Thanks to pay-per-use pricing, it can still be cheaper than an EC2 instance. If your service runs light computing tasks and handles fewer than roughly 50,000 requests, Lambda usually comes out ahead.

AWS Lambda pricing is based on the number of requests and their duration. It is free up to 1 million requests per month and 400,000 GB-seconds of compute time, which corresponds to up to 3.2 million seconds at the minimum memory size (source). For example, if your task requires 512 MB of memory and each request takes 1 second, it costs you nothing up to 800,000 requests.
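
To double-check that figure against the free tier quoted above:

0.5 GB (512 MB) x 1 s = 0.5 GB-s per request
400,000 GB-s / 0.5 GB-s per request = 800,000 requests per month at no charge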

… but a web application is more than an API. What about other features like logging, endpoint management, and storage?

AWS offers all of these, and you can configure services like CloudWatch, S3, and API Gateway directly in serverless.yml; Serverless Framework provisions them through CloudFormation, as in the sketch below.
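
As a rough illustration (the route, bucket name, and stage variable are placeholders, not part of the original setup), an API Gateway endpoint and an S3 bucket can be declared next to the function; function logs land in CloudWatch automatically:

# serverless.yml (sketch — resource names are hypothetical)
functions:
  hello:
    handler: handler.hello
    events:
      - httpApi: # an API Gateway HTTP API route in front of the function
          path: /hello
          method: get

resources:
  Resources:
    UploadsBucket: # plain CloudFormation, deployed together with the function
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-service-uploads-${sls:stage}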

Architecture of the server deployed into AWS Lambda with Serverless Framework

The Lambda function is all set. Now how do I expose the endpoint to public users?

There is one prerequisite for this step: a TLS/SSL certificate registered in AWS Certificate Manager. After that, you can either use a Serverless Framework plugin or configure the mapping in the AWS console.
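
If you do not have a certificate yet, one way to request it is with the AWS CLI (the domain is a placeholder; note that certificates for edge-optimized API Gateway domains must live in us-east-1, while regional domains use the API’s own region):

# request a DNS-validated certificate in ACM (placeholder domain)
aws acm request-certificate --domain-name api.example.com --validation-method DNS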

You can create a custom domain name and map it to the Lambda function with serverless-domain-manager, or you can set it up manually in Route 53. Once your custom domain in Route 53 is mapped to the API in API Gateway, requests to the domain reach the Lambda function through API Gateway (please refer to the screenshot below). Since this configuration is usually done only once, the manual process can be more straightforward; if you prefer the plugin, a sketch of its configuration follows the screenshot.

You can map your custom domain name to a Lambda function under Custom domain names > API mappings.
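
For the plugin route, a minimal serverless-domain-manager sketch might look like this (the domain and certificate names are placeholders; the plugin also provides a serverless create_domain command to register the domain before the first deploy):

# serverless.yml (custom domain sketch — domain and certificate are hypothetical)
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.example.com # placeholder domain
    certificateName: '*.example.com' # must already exist in ACM
    basePath: ''
    stage: ${sls:stage}
    createRoute53Record: true # let the plugin create the Route 53 record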

How can I decouple from the Serverless Framework later?

You can simply remove serverless.yml, adjust the code to decouple it from the Lambda-specific requirements (the handler signature with parameters like event and context), and set up your own CI/CD pipeline. The template that Serverless Framework generated in CloudFormation provides all the information about the infrastructure. For reference, the sketch below shows the shape of the handler you would be replacing.
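
A minimal sketch, assuming the Python runtime from the example configuration (the function body and response are illustrative):

# handler.py (matches the handler.hello entry in the example serverless.yml)
import json

def hello(event, context):
    # 'event' carries the incoming API Gateway request, 'context' carries runtime metadata.
    # Decoupling means moving your logic out of this Lambda-specific signature and into
    # regular framework routes (Flask, Django, Express, ...) that any server can run.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda"}),
    }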

If it sounds convincing to you, you can start with this official tutorial. Thank you!
