prefect.flows
Module containing the base workflow class and decorator - for most use cases, using the @flow
decorator is preferred.
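A minimal sketch of decorator-based usage (the flow and task names here are illustrative, not part of this module's API):

```python
from prefect import flow, task


@task
def add_one(x: int) -> int:
    return x + 1


@flow
def my_pipeline(n: int = 3) -> int:
    # Tasks called inside a flow are tracked and executed by the Prefect engine.
    return add_one(n)


if __name__ == "__main__":
    # Calling the decorated function runs the flow locally and returns its result.
    print(my_pipeline(5))
```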
Functions
bind_flow_to_infrastructure
select_flow
Select the only flow in an iterable, or the flow specified by name.
MissingFlowError
: If no flows exist in the iterable
MissingFlowError
: If a flow name is provided and that flow does not exist
UnspecifiedFlowError
: If multiple flows exist but no flow name was provided
load_flow_from_entrypoint
Extract a flow object from a script or module at the given entrypoint.
entrypoint
: a string in the format <path_to_script>:<flow_func_name>
or a string in the format <path_to_script>:<class_name>.<flow_method_name>
or a module path to a flow function
use_placeholder_flow
: if True, use a placeholder Flow object if the actual flow object cannot be loaded from the entrypoint (e.g. dependencies are missing)
- The flow object from the script
ScriptError
: If an exception is encountered while running the script
MissingFlowError
: If the flow function specified in the entrypoint does not exist
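A minimal sketch of loading and running a flow by entrypoint; the script path and flow name below are placeholders:

```python
from prefect.flows import load_flow_from_entrypoint

# Load the flow object defined as `etl` in flows/etl.py (hypothetical names).
etl = load_flow_from_entrypoint("flows/etl.py:etl")

# The returned object is an ordinary Flow and can be called directly.
etl()
```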
load_function_and_convert_to_flow
serve
Serve the provided list of deployments.
*args
: A list of deployments to serve.
pause_on_shutdown
: A boolean for whether or not to automatically pause deployment schedules on shutdown.
print_starting_message
: Whether or not to print a message to the console on startup.
limit
: The maximum number of runs that can be executed concurrently.
**kwargs
: Additional keyword arguments to pass to the runner.
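A hedged sketch of serving several deployments at once; the flows, deployment names, and schedules are placeholders:

```python
from prefect import flow, serve


@flow
def extract():
    ...


@flow
def transform():
    ...


if __name__ == "__main__":
    # serve() blocks and executes scheduled runs for both deployments in this process.
    serve(
        extract.to_deployment(name="extract-hourly", interval=3600),
        transform.to_deployment(name="transform-nightly", cron="0 2 * * *"),
        limit=10,
    )
```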
aserve
Asynchronously serve the provided deployments. Use serve instead if calling from a synchronous context.
Args:
*args
: A list of deployments to serve.
pause_on_shutdown
: A boolean for whether or not to automatically pause deployment schedules on shutdown.
print_starting_message
: Whether or not to print a message to the console on startup.
limit
: The maximum number of runs that can be executed concurrently.
**kwargs
: Additional keyword arguments to pass to the runner.
load_flow_from_flow_run
If ignore_storage=True is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.
load_placeholder_flow
Load a placeholder flow that raises the provided raises exception when called.
This is useful when a flow can’t be loaded due to missing dependencies or
other issues but the base metadata defining the flow is still needed.
Args:
entrypoint
: a string in the format <path_to_script>:<flow_func_name>
or a module path to a flow function
raises
: an exception to raise when the flow is called
safe_load_flow_from_entrypoint
Load a flow from an entrypoint, returning None instead of raising if the flow cannot be loaded.
entrypoint
: A string identifying the flow to load. Can be in one of the following formats: <path_to_script>:<flow_func_name>,
<path_to_script>:<class_name>.<flow_method_name>, or
<module_path>.<flow_func_name>
- Optional[Flow]: The loaded Prefect flow object, or None if loading fails due to errors (e.g. unresolved dependencies, syntax errors, or missing objects).
load_flow_arguments_from_entrypoint
Extract the arguments passed to the flow decorator from an entrypoint string.
Args:
entrypoint
: a string in the format <path_to_script>:<flow_func_name>
or a module path to a flow function
is_entrypoint_async
Determine whether the function specified by the given entrypoint is asynchronous.
entrypoint
: A string in the format <path_to_script>:<func_name>
or a module path to a function.
- True if the function is asynchronous, False otherwise.
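An illustrative one-liner; the entrypoint string is a placeholder:

```python
from prefect.flows import is_entrypoint_async

# Returns True only if the referenced function is defined with `async def`.
print(is_entrypoint_async("flows/etl.py:etl"))
```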
Classes
FlowStateHook
A callable that is invoked when a flow enters a given state.
Flow
A Prefect workflow definition.
Wraps a function with an entrypoint to the Prefect engine. To preserve the input
and output types, we use the generic type variables P
and R
for “Parameters” and
“Returns” respectively.
Args:
fn
: The function defining the workflow.
name
: An optional name for the flow; if not provided, the name will be inferred from the given function.
version
: An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
flow_run_name
: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow’s parameters as variables, or a function that returns a string.
task_runner
: An optional task runner to use for task execution within the flow; if not provided, a ThreadPoolTaskRunner will be used.
description
: An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
timeout_seconds
: An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
validate_parameters
: By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and “5” is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
retries
: An optional number of times to retry on flow run failure.
retry_delay_seconds
: An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
persist_result
: An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
result_storage
: An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
result_serializer
: An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
on_failure
: An optional list of callables to run when the flow enters a failed state.
on_completion
: An optional list of callables to run when the flow enters a completed state.
on_cancellation
: An optional list of callables to run when the flow enters a cancelling state.
on_crashed
: An optional list of callables to run when the flow enters a crashed state.
on_running
: An optional list of callables to run when the flow enters a running state.
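A sketch of configuring these options through the decorator; the flow body, hook, and parameter values are illustrative:

```python
from prefect import flow
from prefect.task_runners import ThreadPoolTaskRunner


def notify_on_failure(flow, flow_run, state):
    # State hooks receive the flow, the flow run, and the terminal state.
    print(f"{flow_run.name} entered state {state.name}")


@flow(
    name="ingest",
    retries=2,
    retry_delay_seconds=30,
    timeout_seconds=600,
    flow_run_name="ingest-{source}",
    task_runner=ThreadPoolTaskRunner(max_workers=4),
    on_failure=[notify_on_failure],
)
def ingest(source: str = "s3"):
    ...
```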
afrom_source
Asynchronously load a flow from a remote source.
source
: Either a URL to a git repository or a storage object.
entrypoint
: The path to a file containing a flow and the name of the flow function, in the format ./path/to/file.py:flow_func_name.
- A new Flow instance.
For private repositories, credentials such as an access token stored in a Secret block can be supplied via a storage object passed as the source.
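A minimal sketch of loading a flow from a remote source in asynchronous code, assuming afrom_source is awaited like its synchronous counterpart; the repository URL and entrypoint are placeholders:

```python
import asyncio

from prefect import Flow


async def main() -> None:
    # Load the flow definition from a git repository without running it.
    my_flow = await Flow.afrom_source(
        source="https://github.com/org/repo.git",
        entrypoint="flows/my_flow.py:my_flow",
    )
    print(my_flow.name)


if __name__ == "__main__":
    asyncio.run(main())
```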
ato_deployment
name
: The name to give the created deployment.
interval
: An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
cron
: A cron schedule of when to execute runs of this deployment.
rrule
: An rrule schedule of when to execute runs of this deployment.
paused
: Whether or not to set this deployment as paused.
schedule
: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like timezone or parameters.
schedules
: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as timezone.
concurrency_limit
: The maximum number of runs of this deployment that can run at the same time.
parameters
: A dictionary of default parameter values to pass to runs of this deployment.
triggers
: A list of triggers that will kick off runs of this deployment.
description
: A description for the created deployment. Defaults to the flow’s description if not provided.
tags
: A list of tags to associate with the created deployment for organizational purposes.
version
: A version for the created deployment. Defaults to the flow’s version.
version_type
: The type of version to use for the created deployment. The version type will be inferred if not provided.
enforce_parameter_schema
: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
work_pool_name
: The name of the work pool to use for this deployment.
work_queue_name
: The name of the work queue to use for this deployment’s scheduled runs. If not provided the default work queue for the work pool will be used.
job_variables
: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
entrypoint_type
: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
_sla
: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
deploy
Deploy the flow to run via a work pool. Pass build=False to skip building and pushing an image.
Args:
name
: The name to give the created deployment.
work_pool_name
: The name of the work pool to use for this deployment. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.
image
: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments.
build
: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
push
: Whether or not to skip pushing the built image to a registry.
work_queue_name
: The name of the work queue to use for this deployment’s scheduled runs. If not provided the default work queue for the work pool will be used.
job_variables
: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
interval
: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
cron
: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
rrule
: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
triggers
: A list of triggers that will kick off runs of this deployment.
paused
: Whether or not to set this deployment as paused.
schedule
: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like timezone or parameters.
schedules
: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
concurrency_limit
: The maximum number of runs that can be executed concurrently.
parameters
: A dictionary of default parameter values to pass to runs of this deployment.
description
: A description for the created deployment. Defaults to the flow’s description if not provided.
tags
: A list of tags to associate with the created deployment for organizational purposes.
version
: A version for the created deployment. Defaults to the flow’s version.
version_type
: The type of version to use for the created deployment. The version type will be inferred if not provided.
enforce_parameter_schema
: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
entrypoint_type
: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
print_next_steps_message
: Whether or not to print a message with next steps after deploying the deployments.
ignore_warnings
: Whether or not to ignore warnings about the work pool type.
_sla
: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
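A hedged sketch of deploying a locally defined flow with a built image; the flow, image name, schedule, and work pool (assumed to already exist) are placeholders:

```python
from prefect import flow


@flow
def my_etl():
    ...


if __name__ == "__main__":
    my_etl.deploy(
        name="etl-docker",
        work_pool_name="my-docker-pool",  # assumed to exist
        image="registry.example.com/prefect/etl:latest",
        cron="0 6 * * *",
        build=True,  # build the image locally
        push=True,   # push it to the registry
    )
```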
from_source
Load a flow from a remote source.
source
: Either a URL to a git repository or a storage object.
entrypoint
: The path to a file containing a flow and the name of the flow function, in the format ./path/to/file.py:flow_func_name.
- A new Flow instance.
For private repositories, an access token stored in a Secret block can be supplied via a storage object, as in the sketch below.
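A hedged sketch of loading a flow from a private repository and deploying it. The GitRepository storage object, repository URL, Secret block name, entrypoint, and work pool name are all illustrative and assumed to already exist:

```python
from prefect import Flow
from prefect.blocks.system import Secret
from prefect.runner.storage import GitRepository

remote_flow = Flow.from_source(
    source=GitRepository(
        url="https://github.com/org/private-repo.git",
        # "github-token" is a hypothetical, pre-saved Secret block.
        credentials={"access_token": Secret.load("github-token")},
    ),
    entrypoint="flows/my_flow.py:my_flow",
)

# Create a deployment that pulls the flow from the repository at run time.
remote_flow.deploy(name="remote-deployment", work_pool_name="my-work-pool")
```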
isclassmethod
ismethod
isstaticmethod
on_cancellation
on_completion
on_crashed
on_failure
on_running
serialize_parameters
Convert parameters to a serializable form. Uses jsonable_encoder to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.
serve
Create a deployment for this flow and start a runner to monitor for scheduled work.
name
: The name to give the created deployment. Defaults to the name of the flow.
interval
: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
cron
: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
rrule
: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
triggers
: A list of triggers that will kick off runs of this deployment.
paused
: Whether or not to set this deployment as paused.
schedule
: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like timezone or parameters.
schedules
: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
global_limit
: The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment.
parameters
: A dictionary of default parameter values to pass to runs of this deployment.
description
: A description for the created deployment. Defaults to the flow’s description if not provided.
tags
: A list of tags to associate with the created deployment for organizational purposes.
version
: A version for the created deployment. Defaults to the flow’s version.
enforce_parameter_schema
: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
pause_on_shutdown
: If True, the provided schedule will be paused when the serve function is stopped. If False, the schedules will continue running.
print_starting_message
: Whether or not to print the starting message when the flow is served.
limit
: The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use global_limit.
webserver
: Whether or not to start a monitoring webserver for this flow.
entrypoint_type
: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
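A minimal sketch of serving a single flow on a schedule; the flow and deployment names are placeholders:

```python
from prefect import flow


@flow
def heartbeat():
    print("still alive")


if __name__ == "__main__":
    # Creates the deployment "heartbeat/every-minute" and blocks,
    # executing scheduled runs in this process.
    heartbeat.serve(name="every-minute", interval=60)
```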
to_deployment
Create a runner deployment object for this flow.
name
: The name to give the created deployment.
interval
: An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
cron
: A cron schedule of when to execute runs of this deployment.
rrule
: An rrule schedule of when to execute runs of this deployment.
paused
: Whether or not to set this deployment as paused.
schedule
: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like timezone or parameters.
schedules
: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as timezone.
concurrency_limit
: The maximum number of runs of this deployment that can run at the same time.
parameters
: A dictionary of default parameter values to pass to runs of this deployment.
triggers
: A list of triggers that will kick off runs of this deployment.
description
: A description for the created deployment. Defaults to the flow’s description if not provided.
tags
: A list of tags to associate with the created deployment for organizational purposes.
version
: A version for the created deployment. Defaults to the flow’s version.
version_type
: The type of version to use for the created deployment. The version type will be inferred if not provided.
enforce_parameter_schema
: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
work_pool_name
: The name of the work pool to use for this deployment.
work_queue_name
: The name of the work queue to use for this deployment’s scheduled runs. If not provided the default work queue for the work pool will be used.
job_variables
: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
entrypoint_type
: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
_sla
: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
validate_parameters
Validate parameters for compatibility with the flow by attempting to cast the inputs to the types specified by the function’s annotations.
- A new dict of parameters that have been cast to the appropriate types
ParameterTypeError
: if the provided parameters are not valid
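An illustrative sketch of the coercion behavior described above; the flow is a placeholder:

```python
from prefect import flow


@flow
def add(x: int, y: int = 1) -> int:
    return x + y


# The string "5" is cast to the annotated int type; incompatible values
# raise ParameterTypeError instead.
validated = add.validate_parameters({"x": "5"})
print(validated["x"], type(validated["x"]))  # 5 <class 'int'>
```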
visualize
- ImportError
: If graphviz isn’t installed.
- GraphvizExecutableNotFoundError
: If the dot executable isn’t found.
- FlowVisualizationError
: If the flow can’t be visualized for any other reason.
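A sketch of rendering a flow’s task graph without executing it, assuming the graphviz package and the dot executable are installed; the flow and tasks are placeholders:

```python
from prefect import flow, task


@task
def fetch():
    ...


@task
def store(data):
    ...


@flow
def pipeline():
    store(fetch())


if __name__ == "__main__":
    # Renders the graph inline in notebooks, or opens it as an image otherwise.
    pipeline.visualize()
```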
with_options
Create a new flow from the current object, updating provided options.
name
: A new name for the flow.
version
: A new version for the flow.
description
: A new description for the flow.
flow_run_name
: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow’s parameters as variables, or a function that returns a string.
task_runner
: A new task runner for the flow.
timeout_seconds
: A new number of seconds to fail the flow after if still running.
validate_parameters
: A new value indicating if flow calls should validate given parameters.
retries
: A new number of times to retry on flow run failure.
retry_delay_seconds
: A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
persist_result
: A new option for enabling or disabling result persistence.
result_storage
: A new storage type to use for results.
result_serializer
: A new serializer to use for results.
cache_result_in_memory
: A new value indicating if the flow’s result should be cached in memory.
on_failure
: A new list of callables to run when the flow enters a failed state.
on_completion
: A new list of callables to run when the flow enters a completed state.
on_cancellation
: A new list of callables to run when the flow enters a cancelling state.
on_crashed
: A new list of callables to run when the flow enters a crashed state.
on_running
: A new list of callables to run when the flow enters a running state.
- A new Flow instance.
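A sketch of deriving a reconfigured copy of an existing flow; names and values are illustrative:

```python
from prefect import flow


@flow
def sync_data():
    ...


# Returns a new Flow instance with the updated options; the original is unchanged.
resilient_sync = sync_data.with_options(
    name="sync-data-resilient",
    retries=3,
    retry_delay_seconds=60,
    timeout_seconds=900,
)

resilient_sync()
```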
FlowDecorator
InfrastructureBoundFlow
A flow that is bound to running on a specific infrastructure.
Methods:
submit
submit_to_work_pool
instead.
Args:
*args
: Positional arguments to pass to the flow.
**kwargs
: Keyword arguments to pass to the flow.
- A PrefectFlowRunFuture that can be used to retrieve the result of the flow run.
submit_to_work_pool
submit
instead.
Args:
*args
: Positional arguments to pass to the flow.
**kwargs
: Keyword arguments to pass to the flow.
- A PrefectFlowRunFuture that can be used to retrieve the result of the flow run.