How introducing state in Azure Functions punches you in the face

This post is a longer explanation of an issue I reported a few days ago. To make a long story short - I was trying to fix a bug in my function, where almost every call resulted in an error logged to Application Insights. The exception was a simple Collection was modified; enumeration operation may not execute - I knew what was failing and where, but I couldn't find an answer to why. I'll try to explain this a little and - on this occasion - tell you why state in Azure Functions is a bad, bad thing.

As you can see, each call of my function was polluted by an exception

The background

As you probably know (and if not - you'll get it in a second), before an Azure Function can actually be triggered, it has to be initialized. There's a ScriptHostManager class, which is created at the very beginning of the first call to your function - it's responsible for handling the whole host. Once initialized, it waits and listens for one of three things:

  • incoming calls (triggers)
  • a signal for a restart
  • an exception (which, by the way, will cause a restart)

This host will be kept alive for a fixed amount of time and, if it's not triggered, it will be closed. Of course, it can be scaled if needed, but that's not the case here.

A host is also responsible for digesting all the metadata and descriptions related to your functions. It will read the function.json file to get information about bindings, find an entry point and validate it against a configuration file. All these operations happen only one time, when the host is created, and once it knows everything it needs, it can invoke your function again and again. However, this is where a problem can occur.
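For reference - this is just an illustration, not taken from my app - a queue-triggered function with a table output binding could be described by a function.json similar to the one below (the queue, table and connection names are placeholders):

{
  "disabled": false,
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "foo-queue",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "table",
      "direction": "out",
      "name": "fooTable",
      "tableName": "FooTable",
      "connection": "AzureWebJobsStorage"
    }
  ]
}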

The habits

Let's consider the following function:

public class FooFunction 
{	
	public static void Run(string myQueueItem,
	ICollector<FooTable> fooTable,
		TraceWriter log)
	{
		// Do some processing...	
		fooTable.Add(new FooTable());
	}
}

So far so good - nothing unexpected happens. In fact, this example will work just fine. But let's say we'd like to refactor it a little and extract the logic responsible for processing an item:

public class FooFunction 
{
	private static ICollector<FooTable> _fooTable;
	
	public static void Run(string myQueueItem,
		ICollector<FooTable> fooTable,
		TraceWriter log)
	{
		_fooTable = fooTable;

		Proc(myQueueItem);
	}
	
	private static void Proc(string myQueueItem) {
		// Do some processing...
		
		_fooTable.Add(new FooTable());
	}
}

Initially there's nothing wrong with this design. On the other hand, we've just introduced some kind of state, which will last as long as our function host is alive. It could still work, though, and we can live unaware of this flaw. This is how I introduced this bug into my system - I was refactoring my functions, as always with a little help from ReSharper, and at some point just moved the ICollector<T> parameter to a static field.

The problem

As I mentioned, initially you could live with state in your function and not see any problems. If you're using e.g. a TimerTrigger, it will surely work - you need just one instance of a function called at a specific interval. However, what about triggering a function for each queue item? For each HTTP request? Or for each event from Service Bus? In those scenarios, your function will be called concurrently, and each call will simultaneously access your static field, overwriting the collector that other, still running calls are using. Sooner or later you'll end up with a pretty Collection was modified; enumeration operation may not execute everywhere in your logs. This is also why the problem won't affect you initially - if traffic is low enough, the function won't be triggered fast enough to actually make this exception happen.
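To see the race in action, here's a minimal, standalone sketch - not the actual function, just an illustration using a plain List<int> in place of ICollector<T> and Parallel.For in place of concurrent triggers:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class SharedStateDemo
{
	// Shared mutable state - the moral equivalent of the static ICollector<T> field.
	private static List<int> _items = new List<int>();

	public static void Main()
	{
		try
		{
			// Simulate concurrent invocations: each "call" overwrites the shared
			// field and then enumerates it while other calls keep mutating it.
			Parallel.For(0, 100, i =>
			{
				_items = new List<int>();

				for (var j = 0; j < 1000; j++)
				{
					_items.Add(j);
				}

				// Another call may Add() to the very same list while we enumerate.
				foreach (var item in _items)
				{
				}
			});
		}
		catch (AggregateException ex)
		{
			// Typically: "Collection was modified; enumeration operation may not execute."
			Console.WriteLine(ex.InnerException?.Message);
		}
	}
}

Run it a few times and you'll most likely see the very same exception that flooded my logs.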

Please bear in mind that the documentation of Azure Functions makes it rather clear that state should be avoided, so if you introduce it anyway, well... you deserve to be punished :)

The solution

The solution to this problem is pretty simple - just pass the input of your entry point as parameters to the other methods in your function. This will help keep your logs clean (an interesting fact is that the Monitor tab of your function won't show this problem - each call will be marked as successful while the error count keeps growing!) and save you from other potential problems related to sharing state within a function.
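Applied to the earlier example, the fix boils down to handing the collector over explicitly (same hypothetical FooFunction/FooTable types as before):

public class FooFunction
{
	public static void Run(string myQueueItem,
		ICollector<FooTable> fooTable,
		TraceWriter log)
	{
		// The collector travels with the call instead of living in a static field.
		Proc(myQueueItem, fooTable);
	}

	private static void Proc(string myQueueItem, ICollector<FooTable> fooTable)
	{
		// Do some processing...
		fooTable.Add(new FooTable());
	}
}

No static field means no shared state - each invocation works only on the collector it was given.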

 

Limiting data being logged using Application Insights in Azure Functions

As you may know, Azure Functions have a preview of Application Insights integration available. This is another great addition to our serverless architecture since we don't have to add this dependency on our own - it's just there. However, there are some problems when it comes to handling the amount of data being collected, especially when you're on an MSDN subscription.

Problem

When you enable Application Insights for your Function App, each function will start collecting different metrics (traces, errors, requests) at different scales. When you go to the Azure Portal and access the Data volume management tab in the Application Insights blade, you'll see that there's one metric which really exceeds expectations (at least when it comes to the volume of the data traced):

As you can see, Message data takes 75% of the total amount of data collected

When you click on any bar, you'll access the Data point volume tab - now we can understand what kind of 'message' is really being logged:

Although the chart says Message, the data type related to this particular kind of message is Trace

Configuring AI integration

Logging traces is perfectly fine, however we don't always want to do so (especially if you're on an MSDN subscription and don't want to be blocked). If you go to this page, you'll find detailed information regarding both enabling and working with Application Insights. The most interesting part for us is the configuration section of host.json:

{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Information",
      "categoryLevels": {
        "Host.Results": "Error",
        "Function": "Error",
        "Host.Aggregator": "Information"
      }
    },
    "aggregator": {
      "batchSize": 1000,
      "flushTimeout": "00:00:30"
    }
  },
  "applicationInsights": {
    "sampling": {
      "isEnabled": true,
      "maxTelemetryItemsPerSecond" : 5
    }
  }
}

As you can see, we're able to set a different level for each category of data being logged. According to the comments in this issue on GitHub, the easiest way to actually limit the data being logged is to set your configuration to the following:

{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Error",
      "categoryLevels": {
        "Host.Aggregator": "Information"
      }
    }
  }
}

This way you should be able to avoid logging too much data or hitting your daily cap for Application Insights.

I strongly recommend playing with the AI integration in Azure Functions and providing feedback regarding possible features or enhancements. It's a great way to collaborate with the team responsible for the product and a chance to make it even better.