Durable Functions - Durable Task Framework basics

In the previous post I presented some basic concepts behind Durable Functions, the reasoning behind their introduction, and what we can achieve with them. This time I'll focus on the very foundation of Durable Functions - the Durable Task Framework - and its features. We'll try to understand its mechanics and build a very simple workflow to get an idea of how it works. Let's go!

Mechanics

The basic concept of the Durable Task Framework is to use Service Bus to orchestrate work and to use it as temporary storage for state. When the framework is initialized, it creates a Service Bus queue under the hood, which will be used as the main channel for passing messages. Note that the queue size is not unrestricted - you have to choose one of the following values:

  • 1024MB
  • 2048MB
  • 3072MB
  • 4096MB
  • 5120MB

Any other size will be treated as an error and will result in an exception.
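
To make this a bit more concrete, here's a minimal sketch of how the framework could be wired up against Service Bus, assuming the DurableTask.Core and DurableTask.ServiceBus NuGet packages. The connection string and hub name are placeholders, and the exact constructor parameters may differ between package versions:

```csharp
using DurableTask.Core;
using DurableTask.ServiceBus;

// Placeholders - swap in your own connection string and hub name.
var serviceBusConnectionString = "<service-bus-connection-string>";
var taskHubName = "SampleHub";

// The Service Bus-backed orchestration service; creating the hub
// provisions the underlying queues (with one of the allowed sizes).
// Exact parameter list is a sketch and may vary by package version.
var orchestrationService = new ServiceBusOrchestrationService(
    serviceBusConnectionString,
    taskHubName,
    null,   // optional instance (tracking) store
    null,   // optional blob store for large messages
    null);  // settings - defaults are used when null

await orchestrationService.CreateIfNotExistsAsync();
```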

If you go to the Core Concepts section of the Durable Task Framework wiki, you can find the following diagram:

https://github.com/Azure/durabletask/wiki/images/concepts.png

It shows the underlying structure of the Durable Task Framework and all the main elements of the architecture. It may be a little confusing now, but once we start creating orchestrations, everything will become easier to understand.

The most important thing here is the Task Hub Worker, which allows you to add Task Orchestrations and Task Activities and dispatches work to them - to make a long story short, it acts as the foundation of your solution.
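
A minimal sketch of that wiring, reusing the orchestrationService created earlier, could look like the snippet below. The GreetingOrchestration and SayHelloActivity types are hypothetical and are defined in the next sketch:

```csharp
using DurableTask.Core;

// Register the orchestration and activity types with the worker
// and start processing messages from the task hub.
var worker = new TaskHubWorker(orchestrationService);
worker.AddTaskOrchestrations(typeof(GreetingOrchestration));
worker.AddTaskActivities(typeof(SayHelloActivity));
await worker.StartAsync();

// A client is used to kick off new orchestration instances.
var client = new TaskHubClient(orchestrationService);
await client.CreateOrchestrationInstanceAsync(typeof(GreetingOrchestration), "World");
```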

The difference between an Orchestration and an Activity is fairly simple - an Activity is the actual action to be performed, a simple and atomic task that will be executed, while an Orchestration is the thing that aggregates Activities and orchestrates them. You can think of it as a conductor responsible for taking the right path.
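
Sticking with the hypothetical types from the previous sketch, a bare-bones Activity and Orchestration could look roughly like this - the base classes and method names follow the framework, but treat the details as a sketch rather than a reference:

```csharp
using System.Threading.Tasks;
using DurableTask.Core;

// An Activity (hypothetical example): a single, atomic piece of work.
public class SayHelloActivity : TaskActivity<string, string>
{
    protected override string Execute(TaskContext context, string name)
    {
        return $"Hello, {name}!";
    }
}

// An Orchestration (hypothetical example): aggregates Activities
// and decides what runs and in which order.
public class GreetingOrchestration : TaskOrchestration<string, string>
{
    public override async Task<string> RunTask(OrchestrationContext context, string input)
    {
        // Schedule the activity and wait for its result.
        var greeting = await context.ScheduleTask<string>(typeof(SayHelloActivity), input);
        return greeting;
    }
}
```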

From Durable Tasks to Durable Functions

You may ask "how do Durable Tasks connect to Durable Functions?" - in fact initially there's no explicit connection. We have to consider what would be the best way to achieve orchestration in the world of serverless. In the previous post I mentioned, that current solution includes using Azure Storage queues, what for sure lets you achieve the goal, but is far from ideal solution. Natural evolution of this idea is to utilize something what is called event sourcing and instead of pushing and fetching messages from queues, just raise an event and wait for an eventual response:

  • Function1 started executing
  • Function1 called Function2
  • Function2 started executing
  • Function2 finished executing
  • Function1 called Function3
  • Function1 finished executing

This is a trivial concept, yet a really powerful one. By storing state in such a manner (using an append-only log) you gain many benefits, as the short sketch after this list illustrates:

  • appending another event never mutates the existing state
  • immutable state - difficult to corrupt
  • no locking
  • it's easy to recreate the state if needed
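
To illustrate the idea without any framework at all, here's a tiny sketch that replays the (hypothetical) event log from above to work out which functions have already finished:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A framework-free sketch: state is never mutated in place,
// it is recomputed by replaying an append-only list of events.
var history = new List<string>
{
    "Function1 started executing",
    "Function1 called Function2",
    "Function2 started executing",
    "Function2 finished executing",
    "Function1 called Function3",
    "Function1 finished executing"
};

// Replaying the log tells us which functions have already completed,
// so a restarted orchestration can skip them instead of re-running them.
var completed = new HashSet<string>(
    history.Where(e => e.EndsWith("finished executing"))
           .Select(e => e.Split(' ')[0]));

Console.WriteLine(string.Join(", ", completed)); // e.g. Function2, Function1
```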

Now, if you consider that Activities can be treated as events, there's an easy path to Durable Functions, where each Activity is another function and the state is stored in Azure Storage and maintained by the runtime.
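
As a teaser of what that looks like in practice, here's a rough sketch using the Durable Functions 1.x C# syntax - the function names are purely illustrative and a proper example will follow in the next post:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class GreetingFunctions
{
    // The orchestrator plays the role of a Task Orchestration; every awaited
    // call is recorded as an event in the history kept in Azure Storage.
    [FunctionName("GreetingOrchestrator")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        return await context.CallActivityAsync<string>("SayHello", "World");
    }

    // The activity plays the role of a Task Activity - a plain function.
    [FunctionName("SayHello")]
    public static string SayHello([ActivityTrigger] string name)
    {
        return $"Hello, {name}!";
    }
}
```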

Summary

Today we went a bit deeper into the Durable Task Framework and considered the connection between this library and Durable Functions. In the next post I'll try to present a basic example of a Durable Function and what changes in that approach when creating a serverless application.
