Create a flexible Serverless worker that supports a Pod-first development workflow.
Developing machine learning and AI applications often requires powerful GPUs, making local development of API endpoints challenging. A typical development workflow for Serverless would be to write your handler code, deploy it directly to a Serverless endpoint, send endpoint requests to test, debug using worker logs, and repeat.
This can have significant drawbacks, such as:

- Slow iteration: every code change requires rebuilding the image and redeploying the endpoint before you can test it.
- Limited visibility: debugging is restricted to whatever appears in the worker logs.
- Added time and cost: each test request may incur an endpoint cold start.
This tutorial shows how to build a “Pod-first” development environment: creating a flexible, dual-mode Docker image that can be deployed as either a Pod or a Serverless worker.
Using this method, you’ll leverage a Pod—a GPU instance ideal for interactive development, with tools like Jupyter Notebooks and direct IDE integration—as your cloud-based development machine. The Pod will be deployed with a flexible Docker base, allowing the same container image to be seamlessly deployed to a Serverless endpoint.
This workflow lets you develop and thoroughly test your application using a containerized Pod environment, ensuring it works correctly. Then, when you’re ready to deploy to production, you can deploy it instantly to Serverless.
Follow the steps below to create a worker image that leverages this flexibility, allowing for faster iteration and more robust deployments.
To get a basic dual-mode worker up and running immediately, you can clone this repository and use it as a base.
In this tutorial you'll learn how to:

- Create a handler file (`handler.py`) that adapts its behavior based on a user-specified environment variable.
- Create a container start script (`start.sh`) to manage different operational modes.

First, create a directory for your project and the necessary files.
Open your terminal and run the following commands:
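For example (the project directory name here is just a suggestion):

```sh
# Create a project directory and the four files used in this tutorial.
mkdir dual-mode-worker && cd dual-mode-worker
touch handler.py start.sh Dockerfile requirements.txt
```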
This creates:

- `handler.py`: Your Python script with the Runpod handler logic.
- `start.sh`: A shell script that will be the entrypoint for your Docker container.
- `Dockerfile`: Instructions to build your Docker image.
- `requirements.txt`: A file to list Python dependencies.

## Create the `handler.py` file

This Python script will contain your core logic. It will check for a user-specified environment variable `MODE_TO_RUN` to determine whether to run in Pod or Serverless mode.
Add the following code to `handler.py`:
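Here's a minimal sketch; the echo logic and the `prompt` input field are placeholders for your own processing:

```python
import asyncio
import os

# Select the operational mode; defaults to "pod" for interactive development.
MODE_TO_RUN = os.getenv("MODE_TO_RUN", "pod")


async def handler(event):
    """Core request logic. Replace this echo with your own processing."""
    prompt = event["input"].get("prompt", "")
    return {"output": f"Processed: {prompt}"}


if __name__ == "__main__":
    if MODE_TO_RUN == "pod":
        # Pod mode: run a sample request through the handler for quick iteration.
        test_event = {"input": {"prompt": "Hello from Pod mode"}}
        print(asyncio.run(handler(test_event)))
    else:
        # Serverless mode: hand control to the Runpod worker.
        import runpod

        runpod.serverless.start({"handler": handler})
```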
Key features:

- `MODE_TO_RUN = os.getenv("MODE_TO_RUN", "pod")`: Reads the mode from an environment variable, defaulting to `pod`.
- `async def handler(event)`: Your core logic.
- `if MODE_TO_RUN == "pod" ... else`: This conditional controls what happens when the script is executed directly.
  - In `pod` mode, it runs a sample test call to your `handler` function, allowing for quick iteration.
  - In `serverless` mode, it starts the Runpod Serverless worker.

## Create the `start.sh` script

This script will be the entrypoint for your Docker container. It reads the `MODE_TO_RUN` environment variable and configures the container accordingly.
Add the following code to `start.sh`:
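A minimal sketch; the JupyterLab flags and the `/app` path are assumptions, so adjust them for your setup:

```sh
#!/bin/bash
# Entrypoint: route container startup based on the MODE_TO_RUN environment variable.

case "$MODE_TO_RUN" in
  serverless)
    # exec replaces the shell process with the Python worker process.
    exec python /app/handler.py
    ;;
  pod)
    # Start JupyterLab in the background for interactive development.
    jupyter lab --ip=0.0.0.0 --port=8888 --allow-root --no-browser &
    # Keep the container alive so you can connect via SSH or docker exec.
    sleep infinity
    ;;
  *)
    echo "Unknown MODE_TO_RUN: '$MODE_TO_RUN'" >&2
    exit 1
    ;;
esac
```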
Key features:

- `case $MODE_TO_RUN in ... esac`: This structure directs the startup based on the mode.
- `serverless` mode: Executes `handler.py`, which then starts the Runpod Serverless worker. `exec` replaces the shell process with the Python process.
- `pod` mode: Starts the JupyterLab server for Pod development, then runs `sleep infinity` to keep the container alive so you can connect to it (e.g., via SSH or `docker exec`). You can then manually run `python /app/handler.py` inside the Pod to test your handler logic.

## Create the Dockerfile

This file defines how to build your Docker image.
Add the following content to your `Dockerfile`:
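A sketch of the image definition; the `chmod` step is an assumption to ensure the entrypoint is executable:

```dockerfile
FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04

# Allow the workspace directory to be overridden at build time.
ARG WORKSPACE_DIR=/workspace
ENV WORKSPACE_DIR=${WORKSPACE_DIR}

WORKDIR $WORKSPACE_DIR

# Install Python dependencies first to take advantage of layer caching.
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy all application files into the workspace directory.
COPY . .
RUN chmod +x start.sh

# Default to Pod mode; override with MODE_TO_RUN=serverless at runtime.
ENV MODE_TO_RUN="pod"

# Shell form so $WORKSPACE_DIR is expanded when the container starts.
CMD $WORKSPACE_DIR/start.sh
```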
Key features:

- `FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04`: Starts with a Runpod base image that comes with nginx, runpodctl, and other helpful base packages.
- `ARG WORKSPACE_DIR=/workspace` and `ENV WORKSPACE_DIR=${WORKSPACE_DIR}`: Allow the workspace directory to be set at build time.
- `WORKDIR $WORKSPACE_DIR`: Sets the working directory to the value of `WORKSPACE_DIR`.
- `COPY requirements.txt ./requirements.txt` and `RUN pip install ...`: Installs Python dependencies.
- `COPY . .`: Copies all application files into the workspace directory.
- `ENV MODE_TO_RUN="pod"`: Sets the default operational mode to `pod`. This can be overridden at runtime.
- `CMD $WORKSPACE_DIR/start.sh`: Specifies `start.sh` as the command to run when the container starts. The shell form is used so that `$WORKSPACE_DIR` is expanded at runtime; the exec form (`CMD ["$WORKSPACE_DIR/start.sh"]`) would not expand the variable.

Instead of building and pushing your image via Docker Hub, you can also deploy your worker from a GitHub repository.
Now, build your Docker image and push it to a container registry like Docker Hub.
### Build your Docker image

Build your Docker image, replacing `[YOUR_USERNAME]` with your Docker Hub username and choosing a suitable image name:
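For example, using `dual-mode-worker` as the image name:

```sh
docker build --platform linux/amd64 -t [YOUR_USERNAME]/dual-mode-worker:latest .
```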
The `--platform linux/amd64` flag is important for compatibility with Runpod's infrastructure.
### Push the image to your container registry

You might need to run `docker login` first.
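Then push the image:

```sh
docker push [YOUR_USERNAME]/dual-mode-worker:latest
```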
Now that you’ve finished building our Docker image, let’s explore how you would use the Pod-first development workflow in practice.
Deploy the image to a Pod by following these steps:

1. In the Runpod console, deploy a new Pod, setting the container image to `[YOUR_USERNAME]/dual-mode-worker:latest`.
2. Add a Pod environment variable, setting the key to `MODE_TO_RUN` and the value to `pod`.

After connecting to the Pod, navigate to `/app` and run your handler directly:
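For example (assuming your application files live in `/app`, as referenced above):

```sh
cd /app
python handler.py
```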
This will execute the Pod-specific test harness in your `handler.py`, giving you immediate feedback. You can edit `handler.py` within the Pod and re-run it for rapid iteration.
Once you're confident in your `handler.py` logic after testing it in Pod mode, you're ready to deploy your dual-mode worker to a Serverless endpoint:

1. In the Runpod console, create a new Serverless endpoint, setting the container image to `docker.io/[YOUR_USERNAME]/dual-mode-worker:latest`.
2. Set the endpoint environment variable `MODE_TO_RUN` to `serverless`.

The same image is used, but `start.sh` will now direct it to run in Serverless mode, starting the `runpod.serverless.start` worker.
After deploying your endpoint in Serverless mode, you can send it a test request to verify that it works.
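For example, you can call the endpoint's `runsync` API from your terminal (a sketch; replace `[ENDPOINT_ID]` and `[API_KEY]` with your own values, and adjust the input to match your handler):

```sh
curl -X POST https://api.runpod.ai/v2/[ENDPOINT_ID]/runsync \
  -H "Authorization: Bearer [API_KEY]" \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "Hello from Serverless"}}'
```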
After a few moments for initialization and processing, you should see output similar to this:
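For instance, assuming a handler that simply echoes its prompt input, the response might look like this (illustrative only; your `id` and timing fields will differ):

```json
{
  "delayTime": 1234,
  "executionTime": 45,
  "id": "sync-abc123-def456",
  "output": {
    "output": "Processed: Hello from Serverless"
  },
  "status": "COMPLETED"
}
```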
Congratulations! You've successfully built, deployed, and tested a dual-mode Serverless worker. Now, let's explore the recommended iteration process for a Pod-first development workflow.

### Develop using Pod mode

1. Deploy your Pod, ensuring `MODE_TO_RUN` is set to `pod` (or rely on the Dockerfile default).
2. Connect to the Pod and work in the `/app` directory.
3. Install any new Python packages (`pip install [PACKAGE_NAME]`) or system dependencies (`apt-get install [PACKAGE_NAME]`).
4. Edit your `handler.py` script. Test your changes frequently by running `python handler.py` directly in the Pod's terminal. This will execute the test harness you defined in the `pod` branch of your handler, giving you immediate feedback.

### Update your Docker image
Once you're satisfied with a set of changes and have new dependencies:

1. Add any new Python packages to your `requirements.txt` file.
2. Add any new system dependencies (`RUN apt-get update && apt-get install -y [PACKAGE_NAME]`) to your `Dockerfile`.
3. Make sure your updated `handler.py` is saved.
4. Rebuild your Docker image and push it to your container registry.

### Deploy and test in Serverless mode

Deploy your updated image to your Serverless endpoint, ensuring the `MODE_TO_RUN` environment variable for the endpoint is set to `serverless`.

For instructions on how to set environment variables during deployment, see Manage endpoints.
This iterative loop—write your handler, update the Docker image, test in Pod mode, then deploy to Serverless—allows for rapid development and debugging of your Serverless workers.
Now that you’ve mastered the dual-mode development workflow, you can: