# LiteFlow
LiteFlow is a Python library for running workflows. Think: long running processes with multiple tasks that need to track state. It supports pluggable persistence and concurrency providers to allow for multi-node clusters.
Install the `liteflow.core` package:

```
> pip install liteflow.core
```

A workflow consists of a series of connected steps. Each step produces an outcome value, and subsequent steps are triggered by subscribing to a particular outcome of a preceding step. Steps are usually defined by inheriting from the `StepBody` abstract class and implementing the `run` method.
First we define some steps
```python
from liteflow.core import *


class Hello(StepBody):
    def run(self, context: StepExecutionContext) -> ExecutionResult:
        print("Hello world")
        return ExecutionResult.next()


class Goodbye(StepBody):
    def run(self, context: StepExecutionContext) -> ExecutionResult:
        print("Goodbye")
        return ExecutionResult.next()
```
Then we define the workflow structure by composing a chain of steps.
```python
class MyWorkflow(Workflow):
    def id(self):
        return "MyWorkflow"

    def version(self):
        return 1

    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .then(Goodbye)
```
The `id` and `version` properties are used by the workflow host to identify a workflow definition.
Each running workflow is persisted to the chosen persistence provider between each step, where it can be picked up at a later point in time to continue execution. The outcome result of your step can instruct the workflow host to defer further execution of the workflow until a future point in time or in response to an external event.
The first time a particular step within the workflow is called, the `persistenceData` property on the context object is `None`. The `ExecutionResult` produced by the `run` method can cause the workflow to proceed to the next step by providing an outcome value, instruct the workflow to sleep for a defined period, or simply not move the workflow forward. If no outcome value is produced, then the step becomes re-entrant by setting `persistenceData`, so the workflow host will call this step again in the future but will populate `persistenceData` with its previous value.
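To make the re-entrant pattern concrete, here is a minimal sketch of a step that declines to produce an outcome until it has run three times. The `persistence_data` attribute spelling and the `ExecutionResult.persist()` factory (assumed by analogy with `ExecutionResult.next()` and the description above) are assumptions, not confirmed API, so verify them against the library source:

```python
from liteflow.core import *


class CountToThree(StepBody):
    # A re-entrant step: it produces no outcome until it has been
    # called three times, persisting its counter between calls.
    def run(self, context: StepExecutionContext) -> ExecutionResult:
        count = context.persistence_data or 0  # None on the first call
        if count < 3:
            print(f"call number {count + 1}")
            # Assumed factory: no outcome value, just stash the counter so
            # the host re-runs this step later with it populated.
            return ExecutionResult.persist(count + 1)
        return ExecutionResult.next()  # done - proceed to the next step
```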
Each step is intended to be a black-box, therefore they support inputs and outputs. Each workflow instance carries a data property for holding 'workflow wide' data that the steps can use to communicate.
The following sample shows how to define inputs and outputs on a step, and how to map those inputs and outputs to properties on the workflow data object.
```python
# Our workflow step with inputs and outputs
class AddNumbers(StepBody):
    def __init__(self):
        self.input1 = 0
        self.input2 = 0
        self.output = 0

    def run(self, context: StepExecutionContext) -> ExecutionResult:
        self.output = self.input1 + self.input2
        return ExecutionResult.next()


# A class to hold workflow wide data
class MyData:
    def __init__(self):
        self.value1 = 0
        self.value2 = 0
        self.value3 = 0


# Our workflow definition with mapped inputs & outputs
class MyWorkflow(Workflow):
    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .then(AddNumbers)\
                .input('input1', lambda data, context: data.value1)\
                .input('input2', lambda data, context: data.value2)\
                .output('value3', lambda step: step.output)\
            .then(Goodbye)
```
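To run this workflow with actual numbers, pass a data object when starting it. Judging from the hosting example further down (`host.start_workflow("MyWorkflow", 1, None)`), the third argument carries the initial workflow data; that reading is an assumption, as is this sketch:

```python
data = MyData()
data.value1 = 2
data.value2 = 3

# host setup is shown in the hosting section below
wid = host.start_workflow("MyWorkflow", 1, data)
# after AddNumbers runs, data.value3 should hold 5 via the 'value3' output mapping
```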
A workflow can also wait for an external event before proceeding. In the following example, the workflow will wait for an event called "event1" with a key of "key1". Once an external source has fired this event, the workflow will wake up and continue processing.
```python
class MyWorkflow(Workflow):
    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .wait_for('event1', lambda data, context: 'key1')\
            .then(Goodbye)
```

External events are published via the host. All workflows that have subscribed to `event1`, `key1`, will be passed `"hello"`:

```python
host.publish_event('event1', 'key1', 'hello')
```

Data from the published event can be captured and mapped to the workflow data object with an output on the `wait_for` step:

```python
class MyWorkflow(Workflow):
    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .wait_for('event1', lambda data, context: 'key1')\
                .output('captured_value', lambda step: step.event_data)\
            .then(Goodbye)
```
A `for_each` step runs a child sequence once for each item in a collection; the current item is available on `context.execution_pointer.context_item`:

```python
class DoStuff(StepBody):
    def run(self, context: StepExecutionContext) -> ExecutionResult:
        print(f"doing stuff...{context.execution_pointer.context_item}")
        return ExecutionResult.next()


class MyWorkflow(Workflow):
    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .for_each(lambda data, context: ["abc", "def", "xyz"])\
                .do(lambda x:\
                    x.start_with(DoStuff))\
            .then(Goodbye)
```
A `while_` step repeats a child sequence while a condition holds:

```python
class MyWorkflow(Workflow):
    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .while_(lambda data, context: data.value1 < 3)\
                .do(lambda do:\
                    do.start_with(DoStuff)\
                        .input('my_value', lambda data, context: data.value1)\
                        .output('value1', lambda step: step.your_value))\
            .then(Goodbye)
```
An `if_` step runs a child sequence only when a condition is met:

```python
class MyWorkflow(Workflow):
    def build(self, builder: WorkflowBuilder):
        builder\
            .start_with(Hello)\
            .if_(lambda data, context: data.value1 > 3)\
                .do(lambda x:\
                    x.start_with(DoStuff))\
            .then(Goodbye)
```
The workflow host is the service responsible for executing workflows. It does this by polling the persistence provider for workflow instances that are ready to run, executing them, and then passing them back to the persistence provider to be stored for the next time they are run. It is also responsible for publishing events to any workflows that may be waiting on one.
When your application starts, create a WorkflowHost service using `configure_workflow_host`, call `register_workflow` so that the workflow host knows about all your workflows, and then call `start` to fire up the event loop that executes workflows. Use the `start_workflow` method to initiate a new instance of a particular workflow.
```python
from liteflow.core import *

host = configure_workflow_host()
host.register_workflow(MyWorkflow())
host.start()

wid = host.start_workflow("MyWorkflow", 1, None)
```
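In a standalone script you will usually want to keep the process alive after `start()`. Assuming `start()` returns after spinning up the host's polling loop in the background (an assumption based on the polling description above, not confirmed behavior), something like this sketch works:

```python
import time

# Keep the main thread alive so the host can continue polling
# for runnable workflow instances and published events.
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    host.stop()  # assumption: a stop() counterpart to start() exists
```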
Since workflows are typically long-running processes, they will need to be persisted to storage between steps. There are several persistence providers available as separate packages (a configuration sketch follows the list below):
- Memory Persistence Provider (default provider, for demo and testing purposes)
- MongoDB
- (more to come soon...)
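As a purely hypothetical sketch of wiring in a non-default provider: the import path, class name, constructor arguments, and configuration hook below are all assumptions for illustration (the list above only names MongoDB as an available provider), so consult the provider package itself for the real API:

```python
# Hypothetical names throughout - check the MongoDB provider package
# for the actual import path, class name, and configuration hook.
from liteflow.core import *
from liteflow.mongo import MongoPersistenceProvider  # assumed import

host = configure_workflow_host()
host.persistence_store = MongoPersistenceProvider(  # assumed hook
    "mongodb://localhost:27017", "liteflow")
host.register_workflow(MyWorkflow())
host.start()
```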
By default, the WorkflowHost service will run as a single node using the built-in queue and locking providers. Should you wish to run a multi-node cluster, you will need to configure an external queueing mechanism and a distributed lock manager to coordinate the cluster. These are the providers that are currently available:
Queue providers:

- SingleNodeQueueProvider (default built-in provider)
- Azure
- RabbitMQ (coming soon...)

Distributed lock managers:

- LocalLockProvider (default built-in provider)
- Azure
- Redis Redlock (coming soon...)
- Daniel Gerlag - initial work
This project is licensed under the MIT License - see the LICENSE.md file for details.