Hello World with AWS Lambda & Docker

I have been using Lambda functions to run some internal processes for quite a while now. It’s usually things like shutting down non-prod databases out of working hours, a couple of Slack bots…

One of the ways to deploy Lambdas is zipping them and pushing them to S3. For some more advanced Node applications I’ve used the Serverless Framework.

I was having a chat with one of my colleagues the other day. It turns out he’s working on a project that could benefit from using Lambdas, and he had read in the AWS documentation that it’s now possible to deploy Lambdas as container images. I was immediately interested, and the results of my Hello World experiment are below in this post.

Writing your Docker Image

AWS provides base images for all the supported Lambda runtimes (Python, Node.js, Java, .NET, Go, and Ruby). This makes adding our code and dependencies very easy.

They also provide base images for custom runtimes based on Amazon Linux, or you could deploy other images based on Alpine or Debian. These images need to implement the Lambda Runtime API. However, I’m not covering this here, as I just wanted to try a simple Python application.

The code

The idea is very simple. You send a POST request to your app that includes your name. The application will randomly pick a language from a list and greet you.

I wanted to demonstrate how simple it is to add your code and any files and dependencies you might need to run it. Just like a regular Docker image.

The Python code for my application is the following:


import random
import yaml

def lambda_handler(event, context):

    with open('hello.yml', 'r') as file:
        translations = yaml.safe_load(file)

    # Pick one of the translations at random
    hello = random.choice(translations)

    message = f"{hello.get('translation')} {event.get('name', 'Foo Bar')}! " \
              f"Now you know how to say 'Hello' in {hello.get('language')}"

    # This shows up in the function's logs
    print(message)

    return {
        'statusCode': 200,
        'body': message,
    }
And this is the file containing the different translations of “Hello”:


- language: English
  translation: Hello
- language: Finnish
  translation: Hei
- language: Galician
  translation: Ola
- language: Portuguese
  translation: Olá
- language: Spanish
  translation: Hola

As you can see, I’m parsing YAML, so I need an external library for it. My requirements file is as simple as it can get:


PyYAML

Directory structure

Just before we go on to write our Dockerfile, this is the directory structure of my hello world project.

├── Dockerfile
├── README.md
└── sample
    ├── hello.yml
    ├── lambda_function.py
    └── requirements.txt


Let’s start with our Docker file. At the time of writing, AWS provide the following base images:

Tags      Runtime       Operating System
3, 3.8    Python 3.8    Amazon Linux 2
3.7       Python 3.7    Amazon Linux 2018.03
3.6       Python 3.6    Amazon Linux 2018.03
2, 2.7    Python 2.7    Amazon Linux 2018.03

And you can get them from either the Docker Hub or ECR. I used ECR and my Dockerfile looks like this.

FROM public.ecr.aws/lambda/python:3.8

COPY sample/* /var/task/

RUN pip install -r /var/task/requirements.txt

CMD ["lambda_function.lambda_handler"]

  1. I’m basing my image on AWS’ Python 3.8 image.
  2. I’m copying my Python code, the yaml file with the translations and the requirements to /var/task/ as that’s Lambda’s default working directory.
  3. I’m installing the dependencies with pip
  4. I’m telling Lambda what the handler function is. When our Lambda is invoked it runs the handler function.

Building the image and testing it locally

Let’s build the image. We’ll give it a very simple name. Bear in mind we will need to push it to ECR later, so we’ll have to re-tag it to match the repo at that point.

Run this command in the directory where your Dockerfile lives.

docker build -t lambda-test .

And let’s run it now. We need to publish a port so we can curl our application. By default, the runtime interface emulator inside the container listens on port 8080, and I’m mapping it to port 9000 on my local machine.

 docker run --rm=true -p 9000:8080 lambda-test

Now we can see the logs of the lambda function. Open another terminal and run the following curl command.

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"name": "David"}'
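If you prefer to drive the local invocation from Python instead of curl, a small sketch with the standard library looks like this. The URL and port match the docker run command above; the invoke_local helper name is my own.

```python
import json
import urllib.request

# URL of the runtime interface emulator exposed by `docker run -p 9000:8080 ...`
INVOKE_URL = "http://localhost:9000/2015-03-31/functions/function/invocations"

def build_request(name):
    """Build the POST request that mirrors the curl command above."""
    data = json.dumps({"name": name}).encode("utf-8")
    return urllib.request.Request(INVOKE_URL, data=data, method="POST")

def invoke_local(name):
    """Send the event to the locally running container and return its response."""
    with urllib.request.urlopen(build_request(name)) as resp:
        return json.loads(resp.read())

# Example (requires the container to be running):
# print(invoke_local("David"))
```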

and you should get this in return (in whatever language the randomizer picks):

{
  "statusCode": 200,
  "body": "Hola David! Now you know how to say 'Hello' in Spanish"
}

and looking at the logs:

START RequestId: 5aa06e1e-a8a6-4bec-9abe-ef44d63e6d8a Version: $LATEST
Hola David! Now you know how to say 'Hello' in Spanish
END RequestId: 5aa06e1e-a8a6-4bec-9abe-ef44d63e6d8a
REPORT RequestId: 5aa06e1e-a8a6-4bec-9abe-ef44d63e6d8a  Init Duration: 0.24 ms  Duration: 65.57 ms      Billed Duration: 100 ms Memory Size: 3008 MB    Max Memory Used: 3008 MB      

Which will remind you of the logs you see on AWS when you run a Lambda function.

ECR Repo

To deploy our newly built Lambda function we need to host the resulting image on Amazon ECR. Note that the Lambda function and the container registry must be in the same account and region.

Configuring the repo

So I went to ECR and created a new private repo devops/lambda-test.

Tag and push your Docker image

Let’s tag our image correctly so it matches the repo we’ve just created.

docker tag lambda-test:latest your-account.dkr.ecr.your-region.amazonaws.com/devops/lambda-test:latest

Before you can push to your repo you’ll need to log in. I’m assuming you have permissions to do this.

aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin your-account.dkr.ecr.your-region.amazonaws.com 

And we push it by running

docker push your-account.dkr.ecr.your-region.amazonaws.com/devops/lambda-test:latest

Making it work on AWS

Creating the function

It’s time to create our Lambda function. Let’s go to the Lambda section of the AWS Console.

Go to Create function and select Container image

Add the function name (e.g. docker-hello-world)

And select the image you want to use. You can add the URI or browse your ECR registry to find it.

Here you could also override the entrypoint, cmd, or workdir that are set up in the image by default. We don’t need to for our example.

We also don’t need to change the permissions. If we leave the default option, AWS will create a new role with basic Lambda permissions, including the ability to write logs to CloudWatch.

Let’s hit Create function on the bottom right. We now need to wait until the creation process finishes.

Once it’s done, we will see a pop-up similar to this one.

Testing on AWS

We will now follow the pop-up’s advice and test our Lambda function. I’ve created a test event called HelloDave. It contains a JSON document with a name, which is what the Python code is expecting to receive.

Save your changes and click the Invoke button. The function will start running now.

If everything went well, you should see an Execution result: succeeded message and accessing that box will let you see the response and the logs. There’s also a link to CloudWatch so you can check the logs there.
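Beyond the console’s test button, you can also invoke the deployed function programmatically. A sketch with boto3 follows, assuming you have AWS credentials configured; the function name docker-hello-world is the one suggested earlier, and make_payload is my own helper.

```python
import json

def make_payload(name):
    """Build the JSON event the handler expects."""
    return json.dumps({"name": name}).encode("utf-8")

def invoke_remote(function_name, name, region="eu-west-1"):
    """Invoke the deployed Lambda with boto3 (assumes AWS credentials are set up)."""
    import boto3  # imported here so make_payload works even without boto3 installed
    client = boto3.client("lambda", region_name=region)
    response = client.invoke(
        FunctionName=function_name,
        Payload=make_payload(name),
    )
    return json.loads(response["Payload"].read())

# Example (requires the function to exist in your account):
# print(invoke_remote("docker-hello-world", "David"))
```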


I like this option. Right now I like it way more than zipping our code and dependencies and pushing them to AWS.

This will allow us to build our own Docker images, tailored to our needs, and since it’s Docker we should be able to add it to our existing CI pipelines without any problems.

I also like the fact that you can run it on your own machine easily. Python has a great package that allows you to run an AWS lambda function locally (check python-lambda-local) but I like being able to test it in the container because that’s exactly where the function will be run when deployed to AWS.

See you next time!
