Durable Functions - basic concepts and justification

Recently the team responsible for Azure Functions announced that a new feature is entering its alpha preview stage - Durable Functions. Because it introduces a completely new way of thinking about serverless in Azure, I decided to go in-depth and prepare a few blog posts covering both the new capabilities and the foundations of Durable Functions.

Concepts

Conceptually, Durable Functions force you to rethink what you've already learnt about serverless in Azure. When writing common functions, like inserting a record into a database or passing a message to a queue, you've always tried to avoid storing state and to perform all actions as quickly as possible. This approach has many advantages:

  • it is easy to write a simple function that performs a basic operation without preparing boilerplate code
  • dividing a module into small services really helps when maintaining your solution
  • scaling is simple and predictable

All right - it seems we already had everything we needed, so why introduce a completely different concept that raises the learning curve of Functions?

Communication

Normally, if you want to communicate between functions, you have to use queues. It's a perfectly valid approach, and in simple scenarios it won't be cumbersome. However, if you're building a bigger system with several functions orchestrating work between each other, sooner or later you'll hit a wall - communication through queues will become a bottleneck and maintenance will turn into a nightmare.

Additionally, more advanced scenarios (like fan-in/fan-out) are ridiculously hard to achieve with plain queues.
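For comparison, this is roughly how fan-out/fan-in looks with Durable Functions - a minimal C# sketch against the Durable Functions orchestration API, where the "ProcessItem" activity and the input shape are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class FanOutFanInOrchestrator
{
    [FunctionName("FanOutFanIn")]
    public static async Task<int> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var workItems = context.GetInput<List<string>>();

        // Fan-out: schedule one activity call per work item; they run in parallel.
        // "ProcessItem" is a hypothetical activity function.
        var tasks = workItems
            .Select(item => context.CallActivityAsync<int>("ProcessItem", item))
            .ToList();

        // Fan-in: wait for all parallel activities and aggregate the results.
        var results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}
```

The orchestrator code stays in one place, so there is no need to wire up intermediate queues just to collect the partial results.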

What about stateless?

For some people the concept of introducing state into functions destroys what is best about them - the possibility to scale out seamlessly and to treat them as independent fragments of your system. In fact, you have to distinguish between "traditional" functions and durable ones - they have different use cases and the reasoning behind them differs a lot. The former should be used as a reaction to an event; they're ideal when you have to take an action in response to a message. The latter take a bit more thought to adopt - you'll use them for orchestrating workflows and pipelines, which let you easily perform end-to-end processing of a message.
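To give an idea of what such an orchestration looks like, here is a minimal sketch of a chained pipeline written against the C# Durable Functions API; the "ValidateOrder", "ChargePayment" and "ShipOrder" activities are made up for illustration:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class OrderPipelineOrchestrator
{
    // Hypothetical end-to-end pipeline: each step is a separate activity function
    // and the orchestrator passes the output of one step as the input of the next.
    [FunctionName("OrderPipeline")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var order = context.GetInput<string>();

        var validated = await context.CallActivityAsync<string>("ValidateOrder", order);
        var charged   = await context.CallActivityAsync<string>("ChargePayment", validated);
        var shipped   = await context.CallActivityAsync<string>("ShipOrder", charged);

        return shipped;
    }
}
```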

Pricing

One more thing to consider with Durable Functions is pricing, mostly because pricing is what makes serverless so interesting. With Durable Functions this doesn't change - you still pay only for the time when a function executes; when a function awaits the results of other functions, no cost is allocated for the waiting. This is thanks to the fact that, once a task is scheduled, execution of the function returns to the Durable Task Framework layer, which waits for further actions there.
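To make this concrete, the sketch below waits a whole hour on a durable timer before calling a hypothetical "SendReminder" activity; the waiting is handled by the Durable Task Framework, not by a running function:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class DelayedReminderOrchestrator
{
    [FunctionName("DelayedReminder")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // While the durable timer is pending, the orchestrator is not running -
        // it is rehydrated when the timer fires, so the waiting time is not billed.
        var fireAt = context.CurrentUtcDateTime.AddHours(1);
        await context.CreateTimer(fireAt, CancellationToken.None);

        // "SendReminder" is a hypothetical activity function.
        await context.CallActivityAsync("SendReminder", null);
    }
}
```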

I strongly recommend that you take a look and try something with Durable Functions. The feature is still in an early preview, so it might be unstable in some ways, but it already opens up so many possibilities that it's really worth a try. You can find more info here: Alpha Preview for Durable Functions.

Monitoring your Function App with ease

This post was created thanks to great support from people directly involved in Azure Functions at Microsoft - Donna Malayeri and Paul Batum.

Introduction

Azure Functions make developing small services easier, especially with all those triggers and bindings, which allow you to skip writing boilerplate code and focus directly on your business needs. They also come with a "pay-as-you-go" option in the form of the Consumption plan. Currently many people take advantage of it and try to make the most of the monthly free grant of executions and execution time (1 million executions and 400,000 GB-s at the time of writing). What if you'd like to monitor how many times your function has been executed? Fortunately, there are two ways of doing it and both are a piece of cake.

A graph presenting executions of a function triggered once per 5 minutes

Using the Azure Portal

The easiest way to monitor your Function App (unfortunately you cannot monitor each function separately, at least not now) is to go to the portal and check the metrics of the App Service plan that is used to host it. To describe the steps, I'll quote Paul directly here:

> There appears to be a bug that is making this harder than it should be. Try the following steps...
> Open Function App. Platform Features -> All Settings -> Click on Function Execution Count graph -> uncheck count, check units.

After following these steps, you should see a chart with the selected metric:

Weekly metrics of my function

There's one gotcha here, however:

> Function Execution Units are in MB-milliseconds, you'll need to divide by 1024000 to get GB-sec.

One more thing - there's no way to see an aggregated value, e.g. for a whole month - for now you have to calculate it on your own.
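To put the numbers in perspective: the sample value of 140544 MB-milliseconds that appears in the API response below works out to 140544 / 1024000 ≈ 0.137 GB-seconds of billable execution.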

Using the API

There's one more way to check the metrics - using the Azure Monitor REST API. This method is also described on Stack Overflow by Paul, with a direct reference to the Azure Monitor REST API walkthrough. The main idea is to call the API, which will return something similar to the following:

```json
{
  "value": [
    {
      "data": [
        {
          "timeStamp": "2016-12-10T00:00:00Z",
          "total": 0
        },
        {
          "timeStamp": "2016-12-10T00:01:00Z",
          "total": 140544
        },
        {
          "timeStamp": "2016-12-10T00:02:00Z",
          "total": 0
        },
        {
          "timeStamp": "2016-12-10T00:03:00Z",
          "total": 0
        },
        {
          "timeStamp": "2016-12-10T00:04:00Z",
          "total": 0
        }
      ],      
      "name": {
        "value": "FunctionExecutionUnits",
        "localizedValue": "Function Execution Units"
      },
      "type": "Microsoft.Insights/metrics",
      "unit": "0"
    }
  ]
}
```

Once you have the result, it's easy to write a custom tool which will aggregate the metrics and give you an insight into your functions' usage and performance.
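As a starting point, here is a minimal sketch of such a tool, assuming the response has already been fetched (e.g. with HttpClient and a valid Azure AD token) and has the shape shown above; it uses Json.NET for parsing:

```csharp
using System.Linq;
using Newtonsoft.Json.Linq;

public static class ExecutionUnitsReport
{
    // Sums the FunctionExecutionUnits data points from a metrics response
    // shaped like the JSON above and converts them to GB-seconds
    // (execution units are reported in MB-milliseconds).
    public static double TotalGbSeconds(string metricsJson)
    {
        var response = JObject.Parse(metricsJson);

        var totalMbMilliseconds = response["value"]
            .SelectMany(metric => metric["data"])
            .Sum(point => (double)point["total"]);

        return totalMbMilliseconds / 1024000d;
    }
}
```

From there you can compare the aggregated value with the monthly free grant, or push it to a dashboard of your choice.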