Serverless computing sounds counterintuitive when you first hear of it, but it has real benefits, like automatic scaling and paying only for what you use. Serverless computing is, in fact, not serverless, though. All that the word “serverless” means is that your cloud provider manages the underlying infrastructure for you when your code needs to run, so you only need to worry about the code itself.
LET'S SEE HOW IT WORKS
In this post, we’ll build a basic ExpressJS API and deploy it to AWS Lambda. Just for fun, we’ll also automate the deployment with Github Actions.
Don’t worry if you’re not a NodeJS or ExpressJS expert – the code is here.
PREREQUISITES
In order to follow this guide, you’ll need to:
- Have NodeJS installed on your development machine.
- Have an AWS account. The free tier should be more than enough for this guide.
- Have a Github account.
You don’t need super advanced knowledge of these services, but understanding the basics of each will help you follow what’s happening under the hood.
CREATING OUR EXPRESSJS API
The API we’ll create is very basic. It’ll have a single endpoint that accepts a name as a query parameter and spits out a greeting. If a name is omitted from the query parameters, the word World will be used as a default. First, we’ll initialize our Node app.
npm init
After following the prompts from npm init, install ExpressJS:
npm install express --save
Once that’s done, we’re ready to create our ExpressJS API. First, let’s create our greeting route. In routes/greet.js, add the following:
const express = require('express');
const router = express.Router();

router.get('/', (req, res) => {
  const name = req.query.name || 'World';
  res.send({message: `Hello, ${name}!`});
});

module.exports = router;
This endpoint just accepts a name in the query parameters and outputs a greeting. If no name is given, it’ll default to World.
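The defaulting behavior is easy to see in isolation. Here’s a hypothetical standalone sketch of the same logic the route uses (the `greet` function is illustrative, not part of the project):

```javascript
// Same pattern as the route handler: `||` falls back to 'World'
// whenever the name is undefined, null, or an empty string.
function greet(name) {
  return { message: `Hello, ${name || 'World'}!` };
}

console.log(greet('JP')); // { message: 'Hello, JP!' }
console.log(greet());     // { message: 'Hello, World!' }
```

Note that `||` also treats an empty string (`?name=`) as missing, which is usually what you want for a greeting.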
Next, in index.js, add this code to run the application:
const express = require('express');
const greetRoutes = require('./routes/greet');

const app = express();
const port = 3000;

app.use('/greet', greetRoutes);

app.listen(port, () => {
  console.log(`The app is listening on port ${port}, request a greeting!`);
});
Finally, add the start script to your package.json file:
"scripts": {
  "start": "node index.js"
},
Now you can run npm start and make a request to http://localhost:3000/greet?name=JP. This will give the response:
{
  "message": "Hello, JP!"
}
Right, so now we have an ExpressJS app that greets a user. This totally isn’t a classical Hello, world! application. Let’s make it serverless!
MAKE IT SERVERLESS COMPATIBLE
In its current state, the application won’t do much if we upload it to AWS.
We’ll use the serverless-http package to make our application work with AWS Lambda. The serverless package lets us deploy our application to various cloud providers, and the serverless-offline package lets us run the application locally.
npm i -g serverless
npm i -D serverless-offline
npm i -S serverless-http
Then, in index.js, require serverless-http alongside the other require calls at the top of the file, and change the code that starts the application up to this:

const serverless = require('serverless-http');

// app.listen(port, () => {
//   console.log(`The app is listening on port ${port}, request a greeting!`);
// });

module.exports.handler = serverless(app);
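To build intuition for what that wrapper does, here’s a toy sketch – not the real serverless-http implementation, all names are illustrative. Lambda hands your function an API Gateway event rather than an HTTP request, so the wrapper’s job is to translate in both directions:

```javascript
// Toy version of the serverless-http idea: adapt a request-handling
// function to Lambda's (event) => result calling convention.
function toyServerless(handleRequest) {
  return async (event) => {
    // 1. Map the API Gateway event onto a minimal request object.
    const req = { path: event.path, query: event.queryStringParameters || {} };
    // 2. Let the app produce a response body.
    const body = handleRequest(req);
    // 3. Translate it back into the shape Lambda expects.
    return { statusCode: 200, body: JSON.stringify(body) };
  };
}

// A stand-in for our Express app's greet handler.
const handler = toyServerless((req) => ({
  message: `Hello, ${req.query.name || 'World'}!`,
}));

handler({ path: '/greet', queryStringParameters: { name: 'JP' } })
  .then((res) => console.log(res.body)); // {"message":"Hello, JP!"}
```

The real serverless-http does much more (headers, bodies, binary content, Express middleware), but the shape is the same: event in, `{ statusCode, body }` out.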
Next, we need to create a configuration file that tells our application what kind of serverless environment it’s going to run in. For more information, you can check out the serverless framework documentation.
Create a new file called serverless.yml and add the following content to it:
service: expressjs-api

plugins:
  - serverless-offline

provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1
  memorySize: 128
  timeout: 30

functions:
  app:
    handler: index.handler
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
Finally, we want to run our app locally as if it were in a serverless environment, to achieve dev parity with the environment the application will run in. Change the scripts section of package.json to this:
"scripts": {
  "start": "serverless offline start --httpPort 3000"
},
When you run npm start again, the app will be running at http://localhost:3000/dev. That’s the base URL that you can now use for calling the API. From there, you can open up http://localhost:3000/dev/greet?name=John to get a greeting.
Great! Our app is now serverless compatible. Specifically, it’s AWS Lambda compatible thanks to the AWS configuration in serverless.yml.
DEPLOY TO AWS
To deploy our app to AWS, we’ll need a user with sufficient permissions to create the underlying resources.
This deployment makes use of things like:
- Cloudformation: To manage the stack for your application
- S3: To store artifacts
- Cloudwatch: For Lambda logs (stdout and execution logs will go here)
- IAM: To manage policies
- API Gateway: To create and manage API endpoints
- Lambda: To manage lambda functions
- EC2: Used to execute the Lambda in VPC
- Cloudwatch Events: To manage Cloudwatch event triggers
For more information about the permissions, check out the serverless stack documentation.
For now, you can just copy the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "s3:*",
        "logs:*",
        "iam:*",
        "apigateway:*",
        "lambda:*",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "events:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
The steps for creating a user for your deployment are:
- In your AWS console, go to IAM, then visit Policies. Click on Create Policy. Select the JSON editor and then paste the JSON above into the editor. Once that’s done, just follow the policy wizard steps and finish creating the policy.
- In IAM, go to Groups and create a new group with your new policy attached. You can also attach the policy directly to a new user if you’d like, although that’s not a great practice.
- In IAM, go to Users and create a new user. When adding the user, select the new group that you’ve created. When you are creating the user, you will get an option for Programmatic access. Select that option!
- Once the user is created, you’ll get an Access Key ID and Secret access key. Be sure to save those; you’ll need them soon.
Cool, so now you’ve got a user that can deploy your serverless app. Let’s deploy it.
In your console, run serverless config credentials --provider aws --key ACCESS_KEY --secret SECRET_KEY.
Replace ACCESS_KEY and SECRET_KEY with your access key ID and secret access key respectively.
Note: If you have previously run aws configure, you might need to add the --overwrite flag, or create a new AWS profile on your machine. For more information on that, check out the AWS docs.
Once you’ve configured your AWS credentials, run the following:
serverless deploy
The first deployment will take some time as new resources are created within AWS.
Once the deployment is done, the output will give you a URL like this:
✔ Service deployed to stack expressjs-api-dev (162s)

endpoints:
  ANY - https://r51uadje24.execute-api.us-east-1.amazonaws.com/dev/
  ANY - https://r51uadje24.execute-api.us-east-1.amazonaws.com/dev/{proxy+}
functions:
  app: expressjs-api-dev-app (949 kB)
You can then visit the first URL and add greet?name=John onto it. In this case, that would be https://r51uadje24.execute-api.us-east-1.amazonaws.com/dev/greet?name=John.
Your API is now deployed!
AUTOMATION
It’s great that we have our API deploying to AWS with the serverless deploy command. It’d be even better if we didn’t have to do the deployment manually – that’s not scalable when you’re working in a team.
Let’s automate the deployment.
GITHUB ACTIONS
Github Actions is a CI/CD tool, similar to most others. You can create pipelines that deploy your application when you push to the repository. Github Actions can do a lot more than that, but that’s all we care about for now.
For more information on Github Actions, check the documentation.
The first thing to do is to make sure that your application folder is a git repo. If it isn’t yet, create a repo on Github, copy the remote URL, and run the following:
git init
git remote add origin git@github.com:username/repo-name.git
git add .
git commit -m "Initial commit"
git push origin master
Remember to replace your repository URL in line 2.
Once you have your folder initialised as a Github repo, you can get started with Github Actions.
- Create a folder called .github/workflows
- In that folder, create a file named deploy.yml
Populate the deploy.yml file with the following, but don’t push anything to Github yet:
name: Deploy API to AWS

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm i -g serverless
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_S3_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_S3_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy Lambda
        run: serverless deploy
More information on the aws-actions/configure-aws-credentials@v1 action can be found here.
Last bit! In your Github repo, go to Settings. From the context menu, select Secrets, and then Actions. Click on **New repository secret**.
- Enter AWS_S3_ACCESS_KEY_ID in the Name field.
- Enter your AWS access key ID.
- Click Add Secret.
Repeat the process for your secret access key, with the name AWS_S3_SECRET_ACCESS_KEY.
Once that’s done, you can commit and push.
In Github, go to the Actions tab of your repo and select the currently running action. The output should look something like the image below once it’s done. If all the steps passed, then your API was successfully deployed. You can try changing the greeting, committing and pushing the changes to see the difference in your API when it is redeployed.

CONCLUSION
In this post, we learned how to deploy an ExpressJS API to AWS Lambda and how to automate that deployment with Github Actions.
There are more things that you could do to make this even better, like using Route53 to expose your API with a custom domain name.
The biggest benefit of this approach is that you only pay for the time it takes to execute a request – which can save you huge amounts of money compared to running a VM or container permanently. You also don’t need to worry about the underlying infrastructure of the Lambda.