Summary
Build custom lambda layers using container-based dev environments.
Overview
As you dive deeper into AWS Lambda functions, you will inevitably want to augment the runtime with external libraries. In the case of Python, these could be libraries published on PyPI or even custom-built libraries of your own.
AWS provides the Lambda runtimes as container images, and we can use such an image as a dev environment to faithfully generate layers specific to our needs.
Pre-reqs
While you should be able to (mostly) follow along below even if some of these items are not your strength, this post will likely make much more sense if you have some familiarity with the following:
- Installing Python packages via `pip`
- A basic understanding of Docker (building images, containers, and how to attach to a shell)
- Bash/command line
- Comfort with VS Code and the Docker extension
Cookbook
The steps below walk through building your own custom layer.
1. AWS supported container image
Below is a modified version of the code that you would use to build a custom image for your layer’s runtime. In this case, I am riffing on top of the necessary bits to build out a dev environment that we will connect to via VS Code.
FROM public.ecr.aws/lambda/python:3.9
WORKDIR /brock
# zip is needed later to package the layer
RUN yum -y install zip && yum -y clean all && rm -rf /var/cache
# Copy the function's dependency list (requirements.txt) from your
# project folder; we will install from it inside the container later.
COPY requirements.txt .
# Override the image's default entrypoint so the container drops us into a shell
# (could also be done as a parameter override outside of the Dockerfile)
ENTRYPOINT [ "bash" ]
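Note that the build assumes a requirements.txt sits next to the Dockerfile. Since the layer we build later includes pandas, a minimal version could be as simple as:
pandas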
2. Build the image
With the above as our `Dockerfile`, let's build the image:
docker build -t pydev .
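One assumption worth flagging: later steps target an ARM (arm64) Lambda, so the image should match that architecture. If your Docker defaults differ from the target, recent Docker versions (with BuildKit) let you be explicit:
# build for arm64 so the layer matches the Lambda architecture
docker build --platform linux/arm64 -t pydev .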
3. Compose the image
This part is unnecessary, but I find it much easier to reason about my tech stack via Compose as opposed to a single docker run command.
version: "3.9"
services:
  python:
    # image was previously built with docker build -t pydev .
    # NOTE: it is also possible to build the image with `build: .` instead of `image:` like below
    image: "pydev"
    # keep the shell open (e.g. -it in docker run)
    # https://stackoverflow.com/a/39150040
    stdin_open: true
    tty: true
    # map the project repo to the working directory created by the image
    volumes:
      - .:/brock/
Above will map the current directory to the working directory created in the `Dockerfile`. As noted earlier, the aim of this post is to use the container as our dev environment; by mapping the current directory to our “main” folder in the image, we can write code against a consistent environment (the container) while having our work available on our laptop (the host machine) via the volume mapping.
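If you are curious what the single docker run command would look like, a rough equivalent of the Compose file above is sketched below (the flags mirror stdin_open, tty, and the volume mapping):
docker run -it -v "$(pwd)":/brock pydev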
4. Run the dev-environment
docker-compose up -d
Above will use the `docker-compose.yml` file from the previous step and run the container in detached mode.
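To confirm the container is up before attaching to it, you can check with:
docker-compose ps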
5. Hop into the container
With the container running, we can attach a shell in VS Code.
SCREENSHOT HERE
- Right-click the running container and select Attach Shell
This assumes VS Code with the Docker extension installed (and perhaps others, like Dev Containers).
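If you would rather stay on the command line than use the VS Code extension, docker exec works just as well; the container name is whatever Compose assigned (docker ps will show it):
# find the container name, then attach a shell
docker ps
docker exec -it <container-name> bash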
6. Install our Python libraries with pip
mkdir layer1
cd layer1
mkdir python
cp ../requirements.txt .
pip install -r requirements.txt -t python/
cd python
rm -rf *.dist-info __pycache__
cd ..
zip -r layer-poc.zip python/
Within the terminal, the above did the following:
- create a folder to house the layer code
- create a subfolder called /python, as AWS expects this folder structure for Python layers
- install our requirements into /python as the local target
- remove unnecessary files (*.dist-info and __pycache__)
- zip up /python and use this zip file for our custom layer to use with Lambda
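Before uploading the archive in the later steps, you can sanity-check its contents with zip's show-files flag (handy since zip is the only archive tool we installed in the image):
zip -sf layer-poc.zip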
On our laptop, the current working directory now should look similar to below.
.
├── Dockerfile
├── README.md
├── docker-compose.yml
├── layer1
│ ├── lambda.py
│ ├── layer-poc.zip
│ ├── python
│ └── requirements.txt
└── requirements.txt
It is worth noting that `layer1/python` is the target of our pip install and contains a number of Python libraries; those are omitted from the output.
If you are interested in how I created the tree above, I installed `tree` via `brew install tree` and generated the output with `tree -L 2`.
7. Create a python 3.9 lambda
It doesn’t really matter what you select for the other options, but you will need to select a Python 3.9 ARM runtime. As I noted earlier, I am on an M1 chip, and this keeps our architectures consistent.
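If you prefer the AWS CLI to the console, a rough equivalent is sketched below; the function name, role ARN, and code zip are placeholders, and this assumes an execution role already exists:
aws lambda create-function \
  --function-name layer-poc-fn \
  --runtime python3.9 \
  --architectures arm64 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::<account-id>:role/<execution-role> \
  --zip-file fileb://function.zip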
8. Create your Lambda layer
Create a new layer for Lambda via the cloud console and upload the zip file, which we built inside the container but which is available on your laptop thanks to the volume mapping.
We previously created an ARM-based Lambda. When we create our layer, we also need to select Python 3.9 and ARM.
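The same can be done with the AWS CLI; below is a sketch with placeholder names that publishes the layer and attaches it to the function (the layer ARN comes from the publish-layer-version output):
aws lambda publish-layer-version \
  --layer-name layer-poc \
  --zip-file fileb://layer1/layer-poc.zip \
  --compatible-runtimes python3.9 \
  --compatible-architectures arm64
# attach the layer; the version-qualified ARN below is a placeholder
aws lambda update-function-configuration \
  --function-name layer-poc-fn \
  --layers arn:aws:lambda:<region>:<account-id>:layer:layer-poc:1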
9. Update your lambda
As part of our custom layer, we included pandas. Let’s update the boilerplate Python 3.9 Lambda handler to import pandas.
import json
import pandas as pd

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
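The import alone proves the layer is wired up, but if you want the test output to show pandas actually doing something, a small variant of the handler (my sketch, not the AWS boilerplate) could report the pandas version:
import json
import pandas as pd

def lambda_handler(event, context):
    # build a trivial DataFrame from the test event to exercise pandas
    df = pd.DataFrame([event])
    return {
        'statusCode': 200,
        'body': json.dumps({'pandas_version': pd.__version__, 'columns': len(df.columns)})
    }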
10. Finally, test your lambda
With everything now in place, confirm that we successfully used a container as our development environment to drastically simplify the development of our custom Lambda layers.
I am using the default test event, which is shown below:
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}
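If you would rather invoke from the CLI than the console's Test tab, something like below should work (the function name matches the earlier placeholder; the binary-format flag keeps AWS CLI v2 from expecting a base64 payload):
aws lambda invoke \
  --function-name layer-poc-fn \
  --cli-binary-format raw-in-base64-out \
  --payload '{"key1": "value1", "key2": "value2", "key3": "value3"}' \
  response.json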
After running the test, you should see output similar to the success message below.
Test Event Name
test1
Response
{
"statusCode": 200,
"body": "\"Hello from Lambda!\""
}
Function Logs
START RequestId: 2css6be0-bxxxxxxx Version: $LATEST
END RequestId: 2asdfss0-b0xxxxx
REPORT RequestId: 2c0sse0-bxxxxxx
Duration: 1.11 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 113 MB Init Duration: 824.74 ms
Request ID
2c026dd-b061-4xxxxxxx
That’s it!
Conclusion
While the above may seem like many steps, by using the AWS-supported image we can reproducibly build custom layers tailored to our needs.