Version: 2.x

Introduction to ZIO Lambda

A ZIO-based AWS Custom Runtime compatible with GraalVM Native Image.


Installation​

libraryDependencies += "dev.zio" %% "zio-json" % "0.6.2"
libraryDependencies += "dev.zio" %% "zio-lambda" % "1.0.5"

// Optional dependencies
libraryDependencies += "dev.zio" %% "zio-lambda-event" % "1.0.5"
libraryDependencies += "dev.zio" %% "zio-lambda-response" % "1.0.5"

Usage​

Create your Lambda function and pass it to the ZLambdaRunner.serve(...) method.

import zio.Console._
import zio._
import zio.lambda._

object SimpleHandler extends ZIOAppDefault {

  def app(event: KinesisEvent, context: Context) = for {
    _ <- printLine(event.message)
  } yield "Handler ran successfully"

  override val run =
    ZLambdaRunner.serve(app)
}

zio-lambda depends on zio-json for decoding any event you send to it and encoding any response you send back to the Lambda service. You can either create your own data types or use the ones included in zio-lambda-event and zio-lambda-response.
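As a sketch of the custom data type route, the hypothetical Order event below derives its JsonDecoder with zio-json; the field names, handler object, and success message are all illustrative, not part of zio-lambda itself:

```scala
import zio._
import zio.json._
import zio.lambda._

// Hypothetical event payload; zio-lambda only needs a JsonDecoder in scope
// to turn the incoming Lambda event body into this type.
final case class Order(orderId: String, amount: BigDecimal)

object Order {
  implicit val decoder: JsonDecoder[Order] = DeriveJsonDecoder.gen[Order]
}

object OrderHandler extends ZIOAppDefault {

  // The handler receives the decoded event plus the Lambda Context.
  def app(event: Order, context: Context) =
    ZIO.succeed(s"Processed order ${event.orderId}")

  override val run =
    ZLambdaRunner.serve(app)
}
```

The returned String is encoded back to JSON by zio-json before it is handed to the Lambda service; returning one of the zio-lambda-response types works the same way.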

The last step is to define the way your function will be invoked. There are three ways, detailed below:

Lambda layer​

Upload zio-lambda as a Lambda Layer (each release contains a zip file ready to be used as a Lambda layer) together with your function. Instructions coming soon!

Direct deployment of native image binary​

  1. Create an AWS Lambda function and choose the runtime where you provide your own bootstrap on Amazon Linux 2

    (Screenshot: creating the Lambda function)

  2. Run sbt GraalVMNativeImage/packageBin; the resulting binary can be found under the target/graalvm-native-image folder:

    (Screenshot: binary location)

  3. Create the following bootstrap file (which calls out to the binary) and place it alongside the binary:

    #!/usr/bin/env bash

    set -euo pipefail

    ./zio-lambda-example

    (Screenshot: bootstrap alongside the native binary)

  4. Now we can zip both these files up:

    > pwd
    /home/cal/IdeaProjects/zio-lambda/lambda-example/target/graalvm-native-image
    > zip upload.zip bootstrap zio-lambda-example
  5. Upload upload.zip to your AWS Lambda function:

    (Screenshot: Lambda upload UI)

  6. Test the function to make sure everything works:

    (Screenshot: test UI)
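The upload and test steps above can also be scripted with the AWS CLI; a sketch, where the function name my-zio-lambda, the role ARN, and the test payload are placeholders you would substitute for your own:

```shell
# Create the function from upload.zip, using the custom runtime
# (provided.al2) so our bootstrap file is the entry point.
aws lambda create-function \
  --function-name my-zio-lambda \
  --runtime provided.al2 \
  --handler bootstrap \
  --zip-file fileb://upload.zip \
  --role arn:aws:iam::123456789012:role/my-lambda-role

# Invoke it with a test payload and print the response.
aws lambda invoke \
  --function-name my-zio-lambda \
  --cli-binary-format raw-in-base64-out \
  --payload '{"message": "hello"}' \
  response.json
cat response.json
```

The --cli-binary-format flag is needed with AWS CLI v2, which otherwise expects the payload to be base64-encoded.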

Deployment of native image binary in a Docker container​

Following the steps from Direct deployment of native image binary to produce the native binary, we can package it into a Docker image and deploy that image to AWS Lambda.

FROM gcr.io/distroless/base-debian12
COPY lambda-example/target/graalvm-native-image/zio-lambda-example /app/zio-lambda-example
CMD ["/app/zio-lambda-example"]

NOTE: This Dockerfile is meant to build the lambda-example located in the zio-lambda project and the Dockerfile is placed in the zio-lambda-repository. You will need to adjust this Dockerfile to match your project needs.

Now we can build and tag the Docker image:

docker build -t native-image-binary .

Take this image and push it to AWS ECR:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <your_AWS_ECR_REPO>
docker tag native-image-binary <your-particular-ecr-image-repository>:<your-tag>
docker push <your-particular-ecr-image-repository>:<your-tag>

Here is an example:

(Screenshot: image uploaded to ECR)

Create a Lambda function and choose container image:

(Screenshot: creating the container-image Lambda)

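The container-based function can also be created from the CLI; a sketch, reusing the image reference pushed above (the function name and role ARN are placeholders):

```shell
# Create the function from the container image in ECR.
aws lambda create-function \
  --function-name my-zio-lambda-container \
  --package-type Image \
  --code ImageUri=<your-particular-ecr-image-repository>:<your-tag> \
  --role arn:aws:iam::123456789012:role/my-lambda-role
```

With --package-type Image, no runtime or handler is specified; the container's CMD (here, the native binary) is the entry point.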

Please note that running the native binary inside a Docker container incurs more overhead than the other approach of deploying the binary directly to AWS Lambda.