Considering an appropriate App Service plan for your Azure Functions

As most of us know, Azure Functions can be hosted using either a regular App Service plan or a Consumption plan. Let's quickly summarize the pros and cons of both:

App Service plan:

  • fixed, predictable cost (+)
  • easy to scale (+)
  • ability to run 64-bit applications (+)
  • can reuse other App Service plans (+)
  • fixed cost, even when your functions sit idle (--)
  • some triggers require Always On to be enabled (-)

Consumption plan:

  • pay-as-you-go (++)
  • no need for Always On, e.g. for a TimerTrigger (+)
  • somewhat more difficult to scale (-)
  • if not designed carefully, the cost may seriously exceed your expectations (-)

All right - but what should I really consider when it comes to choosing the correct plan for my application?


Nowadays, scalability is something that should be seriously considered when designing an application and choosing technologies for it. Let's consider the following example - you're designing an e-commerce application which has to handle really big traffic spikes from time to time (imagine Black Friday or any other "black" something). The traffic on your website could be described as:

  • stable, low traffic for most of the week
  • an increase over the weekend
  • occasional huge spikes during special events

Now let's relate this to our service plans. Which is better for us in such a scenario?

Well, the problem with the Consumption plan is that it needs a running start. You can't expect that if your application is hit by a huge traffic spike, it will scale out immediately. What's more, you don't have the possibility to scale up - you're limited to the resources allocated for your instance of the Consumption plan. There's one more thing to consider - function execution is not throttled in any way by default, so you may face a situation where, under heavy load, your functions utilize too much CPU/memory at once. You can control throttling with the following three properties:

  • maxOutstandingRequests
  • maxConcurrentRequests
  • dynamicThrottlesEnabled

which are described here. The one thing you have to remember when using throttling is that your clients may get HTTP 429 Too Many Requests responses. Whether that's a problem, only you can decide.
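As a sketch, these settings live in the `http` section of your function app's host.json file - the property names come from the Functions runtime, but the values below are purely illustrative and should be tuned to your own workload:

```json
{
  "http": {
    "maxOutstandingRequests": 200,
    "maxConcurrentRequests": 100,
    "dynamicThrottlesEnabled": true
  }
}
```

With a configuration like this, at most 100 HTTP-triggered executions run in parallel, up to 200 requests wait in the queue, and anything beyond that is rejected - which is exactly where the 429 responses mentioned above come from.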

When using a regular App Service plan, those traffic spikes are much easier to handle. Since you can define scaling rules, you don't have to care when scaling out happens - it just does, when a CPU/memory threshold (or another metric, if autoscale is enabled) is hit. Additionally, you can pre-provision extra resources if you know when a traffic spike will happen - this way you're prepared.


When designing cloud solutions, cost is one of the most important factors. If you choose components poorly or overdesign them, the bill at the end of the month won't make you happy, for sure. This is the second thing directly related to service plans which affects our choice when it comes to selecting what is best.

The pay-as-you-go model of the Consumption plan is what really makes functions interesting. When designed carefully, you can run them almost for free each month (or pay only a few USD/EUR after the free quota is exceeded). The problem appears when you keep your functions "red" - constantly busy. In such a scenario, it may be easier and cheaper to use a regular App Service plan, which ensures a constant cost for this component and won't surprise you after a busy weekend (how come I have to pay an extra $500 this month?).
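To get a feeling for the "busy weekend" effect, here's a back-of-the-envelope sketch in Python. The rates and free grants used below ($0.000016 per GB-second and $0.20 per million executions, with 400,000 GB-s and 1M executions free each month) are the Consumption plan prices at the time of writing - treat them as assumptions and check the current pricing page, and note this ignores billing details like duration/memory rounding:

```python
# Rough Consumption plan cost estimate.
# Assumed rates (verify against current Azure pricing):
#   $0.000016 per GB-second, $0.20 per million executions,
#   free grant: 400,000 GB-s and 1,000,000 executions per month.

def monthly_cost(executions, avg_duration_s, memory_gb):
    # Billed resource consumption = execution time * memory size
    gb_seconds = executions * avg_duration_s * memory_gb
    billable_gbs = max(0.0, gb_seconds - 400_000)
    billable_execs = max(0, executions - 1_000_000)
    return billable_gbs * 0.000016 + billable_execs / 1_000_000 * 0.20

# A quiet month: everything fits in the free grant
print(monthly_cost(500_000, 0.2, 0.128))   # 0.0

# A "Black Friday" month: 50M executions, 0.5 s each at 512 MB
print(monthly_cost(50_000_000, 0.5, 0.5))  # roughly $203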

Of course, with a regular App Service plan you lose flexibility and have to remember to scale your application down (or at least have something in place to automate this). The right compromise depends on your current needs and what your business model looks like. However, it's still better to discuss it now and be aware of your options, rather than discovering them when your functions start responding with HTTP 503 status.

Application Insights Analytics - digging deeper into your application metrics

The data provided by an instance of Application Insights connected to your application is in most cases more than enough. As long as you're logging a sufficient amount of information, you can easily track all your metrics and diagnose problems with ease. But what if you'd like to get a deeper insight into "what is really going on there"? I guess it would be possible to use AI's REST API and fetch all the data into a custom tool (or some kind of 3rd-party software) to analyze it - but who needs that when you have Application Insights Analytics?

First look

To access Analytics you only need to follow this link. When you do, you'll see a welcome screen which, out of the box, lets you run some common queries.

The welcome screen gives you a rapid start when it comes to analyzing common statistics

I strongly recommend trying out the common queries - they let you quickly get an overview of the capabilities of this tool.

Querying the data

You'll surely notice that the charts and other statistics are the result of a query. This is what makes Analytics a really powerful tool - you can query any kind of metric available to you (like dependency duration, custom events, client OS and many, many more) and combine them to get what you're looking for.
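For example, a query combining two of those metrics might look like the sketch below - it's written in the Analytics query language, using the `requests` table and `duration` column as exposed by Application Insights, and charts hourly request counts with their average duration over the last day:

```
requests
| where timestamp > ago(24h)
| summarize count(), avg(duration) by bin(timestamp, 1h)
| render timechart
```

Swapping `requests` for `dependencies`, `traces` or `customEvents` gives you the same kind of view over the other data your application reports.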

An expanded traces tab on the left - and there are still some missing from the screen...

What's more, when creating or editing a query you can take advantage of the built-in editor, which helps with the syntax and highlights all your errors. It's definitely much more polished than the one in Function Apps :)

You can easily add metrics from the tabs on the left and then use an intuitive editor to combine them

Smart Diagnostics

There's one cool feature which makes Analytics really helpful in finding the root cause of a problem - Smart Diagnostics. It allows you to quickly discover what is "strange" in a particular fragment of your log (maybe one dependency fails to respond, or responds three times slower than usual).

The highlighted dots on the chart let you run smart detection on the parts of the data which don't match the rest.

You have to be aware that this diagnostics feature is not perfect and relies on the data you provide (so if you don't provide enough data, it will tell you that something is wrong, but not what exactly it is). Nonetheless, I encourage you to gather more and more data, so its suggestions become more and more valid and precise.

In the next post we'll try to run more advanced queries and find out where the limits of this tool are.