- a Docker work pool: stores the infrastructure configuration for your deployment
- a Docker worker: process that polls the Prefect API for flow runs to execute as Docker containers
- a deployment: a flow that should run according to the configuration on your Docker work pool
Executing flows in a long-lived container

This guide shows how to run a flow in an ephemeral container that is removed after the flow run completes. To instead learn how to run flows in a static, long-lived container, see this guide.
Create a work pool
A work pool provides default infrastructure configurations that all jobs inherit and can override. You can adjust many defaults, such as the base Docker image, container cleanup behavior, and resource limits. To set up a Docker-type work pool with the default values, run:
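```bash
prefect work-pool create --type docker my-docker-pool
```

You should see my-docker-pool listed in the output.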
Next, check that you can see this work pool in your Prefect UI.
Navigate to the Work Pools tab and verify that you see my-docker-pool listed.
When you click into my-docker-pool, you should see a red status icon signifying that this work pool is not ready.
To make the work pool ready, you’ll need to start a worker.
We’ll show how to do this next.
Start a worker
Workers are lightweight polling processes that kick off scheduled flow runs on a specific type of infrastructure (such as Docker). To start a worker on your local machine, open a new terminal and confirm that your virtual environment has prefect installed.
Run the following command in this new terminal to start the worker:
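```bash
prefect worker start --pool my-docker-pool
```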
The worker is now polling the Prefect API for scheduled flow runs to execute. Back on the work pool page in the UI, you should now see a Ready status indicator.
Pro Tip: If my-docker-pool does not already exist, the prefect worker start command above will create it for you automatically with the default settings for that work pool type, in this case docker.

Create the deployment
From the previous steps, you now have:

- A work pool
- A worker

Now, create a deployment from your flow code.
Automatically bake your code into a Docker image
Create a deployment from Python code by calling the .deploy method on a flow:
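A minimal sketch; the flow body, registry, and image name are illustrative:

```python deploy_buy.py
from prefect import flow

@flow(log_prints=True)
def buy():
    print("Buying securities")

if __name__ == "__main__":
    buy.deploy(
        name="my-deployment",
        work_pool_name="my-docker-pool",
        # illustrative registry and image; use your own registry path
        image="my_registry/my_image:my_image_tag",
    )
```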
After running the script, you should see your new deployment on the Deployments page in the UI.
By default, .deploy builds a Docker image with your flow code baked into it and pushes the image to the
Docker Hub registry implied by the image argument to .deploy.
Authentication to Docker Hub

Your environment must be authenticated to your Docker registry to push an image to it. You can specify a registry other than Docker Hub by providing the full registry path in the image argument.
If you are building a Docker image, the environment in which you create your deployment must have Docker installed and running.
To avoid pushing the image to a registry, set push=False in the .deploy method:
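A sketch, reusing the illustrative buy flow from above:

```python
from prefect import flow

@flow(log_prints=True)
def buy():
    print("Buying securities")

if __name__ == "__main__":
    buy.deploy(
        name="my-deployment",
        work_pool_name="my-docker-pool",
        image="my_registry/my_image:my_image_tag",
        push=False,  # build the image but skip pushing it to a registry
    )
```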
To avoid building an image at all, set build=False in the .deploy method:
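A sketch, assuming an already-built image (the image name is illustrative):

```python
from prefect import flow

@flow(log_prints=True)
def buy():
    print("Buying securities")

if __name__ == "__main__":
    buy.deploy(
        name="my-deployment",
        work_pool_name="my-docker-pool",
        image="my_registry/already-built-image:1.0",  # an existing image
        build=False,  # use the existing image instead of building one
    )
```

The specified image must be available in your deployment's execution environment.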
If you don't provide a custom Dockerfile, Prefect generates one for you that builds an image based on one of Prefect's published images. The generated Dockerfile copies the current directory into the Docker image and installs any dependencies listed in a requirements.txt file.
Automatically build a custom Docker image with a local Dockerfile
If you want to use a custom image, specify the path to your Dockerfile with the DockerImage class:
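A sketch, assuming Prefect 3's DockerImage class (from prefect.docker) and a Dockerfile next to your flow file; names are illustrative:

```python my_flow.py
from prefect import flow
from prefect.docker import DockerImage

@flow(log_prints=True)
def my_flow():
    print("Hello from my custom Docker image!")

if __name__ == "__main__":
    my_flow.deploy(
        name="my-custom-dockerfile-deployment",
        work_pool_name="my-docker-pool",
        image=DockerImage(
            name="my_image",
            tag="my_tag",
            dockerfile="Dockerfile",  # path to your custom Dockerfile
        ),
        push=False,
    )
```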
The DockerImage object enables image customization.
For example, you can install a private Python package from GCP’s artifact registry like this:
- Create a custom base Dockerfile.
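A sketch of what such a Dockerfile might look like, assuming the authenticated artifact registry URL is passed in as a build argument:

```dockerfile sample.Dockerfile
FROM python:3.12

ARG AUTHED_ARTIFACT_REG_URL
COPY ./requirements.txt /requirements.txt

# install dependencies, including the private package, from the authenticated index
RUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt
```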
- Create your deployment with the DockerImage class:
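A sketch; it assumes a Prefect Secret block named artifact-reg-url holding the authenticated registry URL, and passes buildargs through to the Docker build:

```python deploy_using_private_package.py
from prefect import flow
from prefect.blocks.system import Secret
from prefect.docker import DockerImage

@flow(log_prints=True)
def my_flow():
    print("Hello from my private package flow!")

if __name__ == "__main__":
    # load the authenticated artifact registry URL from a Secret block
    artifact_reg_url = Secret.load("artifact-reg-url").get()
    my_flow.deploy(
        name="private-package-deployment",
        work_pool_name="my-docker-pool",
        image=DockerImage(
            name="my_image",
            tag="test",
            dockerfile="sample.Dockerfile",
            buildargs={"AUTHED_ARTIFACT_REG_URL": artifact_reg_url},
        ),
    )
```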
See all the optional keyword arguments for the DockerImage class.
Default Docker namespace

You can set the PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE setting to append a default Docker namespace to all images you build with .deploy. This is helpful if you use a private registry to store your images.

To set a default Docker namespace for your current profile, run:
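```bash
prefect config set PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE=<docker-registry-url>/<organization-or-username>
```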
Once set, you can omit the namespace from your image name when creating a deployment:
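A minimal sketch (the flow and image names are illustrative):

```python with_default_docker_namespace.py
from prefect import flow

@flow(log_prints=True)
def my_flow():
    print("Hello!")

if __name__ == "__main__":
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="my-docker-pool",
        # the default namespace is prepended to this image name at build time
        image="my_image:my_image_tag",
    )
```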
The above code builds an image with the format <docker-registry-url>/<organization-or-username>/my_image:my_image_tag when PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE is set.

Store your code in git-based cloud storage
While baking code into Docker images is a popular deployment option, many teams store their workflow code in git-based storage, such as GitHub, Bitbucket, or GitLab. If you don’t specify an image argument for .deploy, you must specify where to pull the flow code from at runtime
with the from_source method.
Here’s how to pull your flow code from a GitHub repository:
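A sketch; the repository URL and entrypoint are placeholders:

```python git_storage.py
from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/org/my-repo.git",  # placeholder repository
        entrypoint="flows/buy.py:buy",  # file path and flow function name
    ).deploy(
        name="my-git-deployment",
        work_pool_name="my-docker-pool",
        image="my_registry/my_image:my_image_tag",
    )
```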
The entrypoint is the path to the file the flow is located in and the name of the flow function, separated by a colon.
See the Store flow code guide for more flow code storage options.
Additional configuration with .deploy
Next, explore a few common deployment configuration options.
To pass parameters to your flow, you can use the parameters argument in the .deploy method. Just pass in a dictionary of
key-value pairs.
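A sketch with a hypothetical name parameter:

```python pass_params.py
from prefect import flow

@flow
def hello(name: str = "world"):
    print(f"Hello, {name}!")

if __name__ == "__main__":
    hello.deploy(
        name="pass-params-deployment",
        work_pool_name="my-docker-pool",
        parameters={"name": "Marvin"},  # passed to the flow at runtime
        image="my_registry/my_image:my_image_tag",
    )
```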
The job_variables parameter allows you to fine-tune the infrastructure settings for a deployment.
The values passed in override default values in the specified work pool’s
base job template.
You can override job variables, such as image_pull_policy and image, for a specific deployment with the job_variables argument.
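A sketch overriding image_pull_policy (the deployment and image names are illustrative):

```python job_var_image_pull.py
from prefect import flow

@flow(log_prints=True)
def buy():
    print("Buying securities")

if __name__ == "__main__":
    buy.deploy(
        name="image-pull-policy-deployment",
        work_pool_name="my-docker-pool",
        image="my_registry/my_image:my_image_tag",
        job_variables={"image_pull_policy": "Never"},  # override the work pool default
    )
```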
You can also override environment variables specified in a work pool through the job_variables parameter:
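A sketch that sets an env job variable; the package choice is illustrative:

```python job_var_env_vars.py
from prefect import flow

@flow(log_prints=True)
def buy():
    print("Buying securities")

if __name__ == "__main__":
    buy.deploy(
        name="env-vars-deployment",
        work_pool_name="my-docker-pool",
        image="my_registry/my_image:my_image_tag",
        job_variables={"env": {"EXTRA_PIP_PACKAGES": "boto3"}},
    )
```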
The dictionary key EXTRA_PIP_PACKAGES denotes a special environment variable that Prefect uses to install additional Python packages at runtime. This approach is an alternative to building an image with a requirements.txt copied into it.
See Override work pool job variables for more information about how to customize these variables.
Work with multiple deployments with deploy
Create multiple deployments from one or more Python files that use .deploy.
You can manage these deployments independently of one another to deploy the same flow with different configurations
in the same codebase.
To create multiple deployments at once, use the deploy function, which is analogous to the serve function:
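A sketch creating two deployments of the same flow with different configurations (names and interval are illustrative):

```python
from prefect import deploy, flow

@flow(log_prints=True)
def buy():
    print("Buying securities")

if __name__ == "__main__":
    deploy(
        # two deployments of the same flow with different configurations
        buy.to_deployment(name="buy-deployment-1", work_pool_name="my-docker-pool"),
        buy.to_deployment(name="buy-deployment-2", work_pool_name="my-docker-pool", interval=3600),
        image="my_registry/my_image:my_image_tag",
        push=False,
    )
```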
You can also pass a flow loaded from a remote repository to the deploy function with the from_source method.
Here’s an example of deploying two flows, one defined locally and one defined in a remote repository:
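A sketch; the repository URL and entrypoint are placeholders:

```python
from prefect import deploy, flow

@flow(log_prints=True)
def local_flow():
    print("I'm a flow defined in this file!")

if __name__ == "__main__":
    deploy(
        # a flow defined locally in this file
        local_flow.to_deployment(name="local-deploy", work_pool_name="my-docker-pool"),
        # a flow pulled from a remote repository at deploy time
        flow.from_source(
            source="https://github.com/org/repo.git",  # placeholder repository
            entrypoint="flows.py:remote_flow",  # placeholder entrypoint
        ).to_deployment(name="remote-deploy", work_pool_name="my-docker-pool"),
        image="my_registry/my_image:my_image_tag",
        push=False,
    )
```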
You can pass any number of flows to the deploy function. This is useful if you use a monorepo approach to your workflows.