Learn how to background heavy tasks from a web application to dedicated infrastructure.
This example demonstrates how to use background tasks in the context of a web application using Prefect for task submission, execution, monitoring, and result storage. We’ll build out an application using FastAPI to offer API endpoints to our clients, and task workers to execute the background tasks these endpoints defer.
Refer to the examples repository for the complete example’s source code.
This pattern is useful when you need to perform operations that are too long for a standard web request-response cycle, such as data processing, sending emails, or interacting with external APIs that might be slow.
This example will build out:

- `@prefect.task` definitions representing the work you want to run in the background
- a `fastapi` application providing API endpoints to:
  - accept a `POST` request and submit the task to Prefect with `.delay()`
  - check the status of a task run via a `GET` request using its `task_run_id`
- a `Dockerfile` to build a multi-stage image for the web app, Prefect server, and task worker(s)
- a `compose.yaml` to manage lifecycles of the web app, Prefect server, and task worker(s)

You can follow along by cloning the examples repository, or instead use `uv` to bootstrap your own new project:
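For instance, a hypothetical bootstrap (the package name `foo` mirrors the example's `src/foo` layout; the dependency list is an assumption based on what this example uses):

```shell
# --lib gives the src/ layout this example assumes
uv init --lib foo
cd foo
uv add prefect fastapi marvin uvicorn
```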
This example application is structured as a library with a `src/foo` directory for portability and organization.
This example does not require:

Prefect persists the `return` value of your task definitions to your result storage (e.g. a local directory, S3, GCS, etc.), enabling caching and idempotency.

The core of the background processing is a Python function decorated with `@prefect.task`. This marks the function as a unit of work that Prefect can manage (e.g. observe, cache, retry, etc.).
Key details:
@task
: Decorator to define our task we want to run in the background.cache_policy
: Caching based on INPUTS
and TASK_SOURCE
.serve(create_structured_output)
: This function starts a task worker subscribed to newly delay()
ed task runs.The FastAPI application provides API endpoints to trigger the background task and check its status.
Checking Task Status with the Prefect Client
The `get_task_result` helper function (in `src/foo/_internal/_prefect.py`) uses the Prefect Python client to interact with the Prefect API:
This function fetches the `TaskRun` object from the API and checks its `state` to determine whether it is `Completed`, `Failed`, or still `Pending`/`Running`. If completed, it attempts to retrieve the result using `task_run.state.result()`. If failed, it tries to get the error message.
A multi-stage `Dockerfile` is used to create optimized images for each service (Prefect server, task worker, and web API). This approach keeps image sizes small and separates build dependencies from runtime dependencies.
Dockerfile Key Details

- Base stage (`base`): sets up Python and `uv`, installs all dependencies from `pyproject.toml` into a base layer to make use of Docker caching, and copies in the source code.
- Server stage (`server`): builds upon the `base` stage and sets the default command (`CMD`) to start the Prefect server.
- Task worker stage (`task`): builds upon the `base` stage and sets the `CMD` to run the `src/foo/task.py` script, which is expected to contain the `serve()` call for the task(s).
- API stage (`api`): builds upon the `base` stage and sets the `CMD` to start the FastAPI application using `uvicorn`.

The `compose.yaml` file then uses the `target` build argument to specify which of these final stages (`server`, `task`, `api`) to use for each service container.
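A condensed sketch of what such a multi-stage `Dockerfile` could look like, assuming the `uv` base-image pattern; details like the `foo.api:app` module path are assumptions, not from the text:

```dockerfile
# base: Python, uv, dependencies (cached layer), and source code
FROM python:3.12-slim AS base
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project
COPY src/ src/
RUN uv sync --frozen

# server: Prefect API server and UI
FROM base AS server
CMD ["uv", "run", "prefect", "server", "start"]

# task: task worker running the serve() script
FROM base AS task
CMD ["uv", "run", "python", "src/foo/task.py"]

# api: FastAPI app via uvicorn
FROM base AS api
CMD ["uv", "run", "uvicorn", "foo.api:app", "--host", "0.0.0.0"]
```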
We use `compose.yaml` to define and run the multi-container application, managing the lifecycles of the FastAPI web server, the Prefect API server, the database, and task worker(s).
In a production use-case, you'd likely want to:

- use a separate `Dockerfile` for each service
- add a `postgres` service and configure it as the Prefect database
- remove the `develop` section

Key Service Configurations
`prefect-server`: Runs the Prefect API server and UI.

- `build`: uses the multi-stage `Dockerfile` (not shown here, but present in the example repo), targeting the `server` stage.
- `ports`: exposes the Prefect API/UI on port `4200`.
- `volumes`: uses a named volume `prefect-data` to persist the Prefect SQLite database (`/root/.prefect/prefect.db`) across container restarts.
- `PREFECT_SERVER_API_HOST=0.0.0.0`: makes the API server listen on all interfaces within the Docker network, allowing the `task` and `api` services to connect.

`task`: Runs the Prefect task worker process (executing `python src/foo/task.py`, which calls `serve`).

- `build`: uses the `task` stage from the `Dockerfile`.
- `depends_on`: ensures the `prefect-server` service is started before this service attempts to connect.
- `PREFECT_API_URL`: crucial setting that tells the worker where to find the Prefect API to poll for submitted task runs.
- `PREFECT_LOCAL_STORAGE_PATH=/task-storage`: configures the worker to store task run results in the `/task-storage` directory inside the container. This path is mounted using the `task-storage` volume via `volumes: - ./task-storage:/task-storage` (or just `task-storage:` if using a named volume without a host path binding).
- `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true`: tells Prefect tasks to automatically save their results using the configured storage (defined by `PREFECT_LOCAL_STORAGE_PATH` in this case).
- `PREFECT_LOGGING_LOG_PRINTS=true`: configures the Prefect logger to capture output from `print()` statements within tasks.
- `OPENAI_API_KEY=${OPENAI_API_KEY}`: passes secrets needed by the task code from the host environment (via a `.env` file loaded by Docker Compose) into the container's environment.

`api`: Runs the FastAPI web application.

- `build`: uses the `api` stage from the `Dockerfile`.
- `depends_on`: waits for the `prefect-server` (required for submitting tasks and checking status) and optionally the `task` worker.
- `PREFECT_API_URL`: tells the FastAPI application where to send `.delay()` calls and status check requests.
- `PREFECT_LOCAL_STORAGE_PATH`: may be needed if the API itself has to read result files directly (though fetching results via `task_run.state.result()` is typically preferred).

`volumes`: Defines named volumes (`prefect-data`, `task-storage`) to persist data generated by the containers.
Assuming you have obtained the code (either by cloning the repository or using `uv init` as described previously) and are in the project directory:

Prerequisites: Ensure Docker Desktop (or equivalent) with `docker compose` support is running.

Build and Run Services: This example's task uses `marvin`, which (by default) requires an OpenAI API key. Provide it as an environment variable when starting the services:
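For example (assuming the key is supplied inline; it can also come from a `.env` file that Docker Compose loads automatically):

```shell
OPENAI_API_KEY=<your-openai-api-key> docker compose up --build --watch
```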
This command will:

- `--build`: build the container images if they don't exist or if the Dockerfile/context has changed.
- `--watch`: watch for changes in the project source code and automatically sync/rebuild services (useful for development).
- Add `--detach` or `-d` to run the containers in the background.

Access Services: the Prefect API and UI are served on port `4200` (as mapped above), and the FastAPI application is available on whichever host port `compose.yaml` maps for the `api` service.
This example provides a repeatable pattern for integrating Prefect-managed background tasks with any Python web application. You can:

- adapt the `src/**/*.py` files to define and submit your specific web app and background tasks
- customize the infrastructure (`compose.yaml`) further, for example, using different result storage or logging levels