Azure Table Storage good practices - log tail pattern

Although Azure Table Storage is a fairly simple and straightforward solution, designing your tables so you can get the most out of it is not an easy task. I've seen projects where poor decisions about table structure, wrong partition key values or "who cares" row keys led to steadily degrading performance and rising maintenance costs. Many of those problems could be avoided by introducing smart yet easy-to-adopt patterns, and the one I like the most is the log tail pattern.

The problem

Let's consider the following table:

PK | RK | Timestamp | Event

As you may know, rows in an Azure Storage table are always stored in ascending order by row key within a partition, and row keys are compared as strings. This allows you to scan a partition from start to end quickly and with predictable performance. Assume we have the following rows in our table (with row keys zero-padded so that the string order matches the numeric order):

PK | RK | Timestamp  | Event
foo   00001   01/01/2000  { "Event": "Some_event"}
foo   00002   01/01/2000  { "Event": "Some_event"}
foo   00003   01/01/2000  { "Event": "Some_event"}
foo   09999   01/01/2000  { "Event": "Some_event"}
foo   10000   01/01/2000  { "Event": "Some_event"}

we can fetch a particular row really quickly because we can query it directly using PK and RK, and we know that 00001 will be stored before 00010, and 00100 before 06578. The problem arises when we cannot quickly determine which RK is "bigger" - usually because we used a custom identifier, such as a combination of a GUID and a timestamp. This often forces us to query large portions of a table just to find the most recent records. It'd be possible to use a condition like WHERE RowKey > $some_value in our query, but it still introduces overhead, which could be critical in some scenarios. How can we store our data in Table Storage so that the most recent records can be retrieved quickly and efficiently?
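As a sketch of such a point query (using the classic WindowsAzure.Storage SDK; the table name, connection string and row key here are purely illustrative):

// Point query: PK + RK identify exactly one row, so this is the cheapest
// possible read in Table Storage.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTable table = account.CreateCloudTableClient().GetTableReference("events");

TableOperation retrieve = TableOperation.Retrieve<DynamicTableEntity>("foo", "00003");
TableResult result = table.Execute(retrieve);
DynamicTableEntity row = (DynamicTableEntity)result.Result; // null if the row doesn't exist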

Log tail pattern

Fortunately the solution is really easy here, and if the decision is made early, it doesn't require much effort to introduce. The idea is simple - choose a row key that "reorders" the table so that the newest rows are also the first ones in it. The concept may seem tricky initially, but it will soon be the first thing you think about when you hear "Azure Table Storage" ;)

Let's consider the following solution (taken from here):

// The row key is the number of ticks remaining until DateTime.MaxValue,
// zero-padded to 19 digits so that string order matches numeric order.
string invertedTicks = string.Format("{0:D19}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);

// The original timestamp can always be recovered from the inverted value.
DateTime dt = new DateTime(DateTime.MaxValue.Ticks - Int64.Parse(invertedTicks));

This reverses the order in which rows are stored in a table and allows you to quickly fetch the rows you're interested in the most. It's especially useful for all kinds of append-only logs, where you're usually interested mainly in the most recent records.
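A minimal sketch of both sides of the pattern - appending a log entry with an inverted-ticks row key and reading the newest entries first (classic WindowsAzure.Storage SDK; the entity class, table and partition names are my assumptions, not anything prescribed by the pattern):

public class LogEntity : TableEntity
{
    public LogEntity() { }

    public LogEntity(string source, string message)
    {
        PartitionKey = source;
        // Inverted ticks: the newer the entry, the smaller the row key,
        // so the newest rows come first in the partition.
        RowKey = string.Format("{0:D19}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);
        Message = message;
    }

    public string Message { get; set; }
}

// Append a log entry...
table.Execute(TableOperation.Insert(new LogEntity("worker-1", "Started processing")));

// ...and fetch the 10 most recent entries without scanning the whole partition.
var query = new TableQuery<LogEntity>()
    .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "worker-1"))
    .Take(10);
var newestFirst = table.ExecuteQuery(query).Take(10);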


I rely heavily on this pattern in many projects I've created (both private and commercial) and it really helps in building an efficient table structure, one that can be easily queried and is optimized for a particular use. Of course it cannot be used in all scenarios, but for some it's a must-have.

One thing you have to remember here is that you must pad the inverted tick value with leading zeroes to ensure the string value sorts as expected (this is what the D19 format specifier passed to string.Format() does in the example). Without this fix you can end up with incorrectly ordered rows. Nonetheless, it's a small price to pay for a proper design and good performance.
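To see why the padding matters, remember that row keys are compared as strings (ordinal comparison), not as numbers - a quick illustration:

// Table Storage orders row keys by ordinal string comparison, not numerically:
bool misordered = string.CompareOrdinal("10", "9") < 0;  // true: "10" sorts before "9"
bool padded = string.CompareOrdinal("09", "10") < 0;     // true: zero-padding restores numeric order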

Migrating schema and data in Azure Table Storage

Recently I faced a problem where I had to change and adjust the schema of tables stored in Azure Table Storage. The issue was to automate those changes so I wouldn't have to perform them manually in each environment. This is why I created a simple library called AzureTableStorageMigrator, which helps with such tasks and eases the whole process.

The basics

The basic idea was to create two things:

  • a simple fluent API, which will take care of chaining all tasks
  • a table which will hold all migration metadata

The current version (1.0) gives you the following operations:

  • void Insert<T>(string tableName, T entity, bool createIfNotExists = false)
  • void DeleteTable(string tableName)
  • void CreateTable(string tableName)
  • void RenameTable<T>(string originTable, string destinationTable)
  • void Delete<T>(string tableName, T entity)
  • void Delete(string tableName, string partitionKey)
  • void Delete(string tableName, string partitionKey, string rowKey)
  • void Clear(string tableName)

and when you take a look at the example of usage:

var migrator = new Migrator();
migrator.CreateMigration(_ =>
{
  _.Insert("table1", new DummyEntity { PartitionKey = "pk", RowKey = DateTime.UtcNow.Ticks.ToString(), Name = "foo" });
  _.Insert("table1", new DummyEntity { PartitionKey = "pk", RowKey = DateTime.UtcNow.Ticks.ToString(), Name = "foo2" });
  _.Insert("table2", new DummyEntity { PartitionKey = "pk", RowKey = DateTime.UtcNow.Ticks.ToString(), Name = "foo" });
}, 1, "1.1", "My first migration!");

you'll see that it's pretty straightforward and self-describing.

The way it works is very simple - each migration created with CreateMigration() is described by three values: its id, a version number and a description. Each time the method is called, it adds a new record to the versionData table, making sure that the metadata is saved and the same migration won't be run twice.
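Conceptually, the "run once" guard can be pictured like this (a hypothetical sketch, not the library's actual implementation - the versionData table layout and property names here are my assumptions):

// Hypothetical sketch: skip the migration if its id is already recorded.
var retrieve = TableOperation.Retrieve<DynamicTableEntity>("migrations", migrationId.ToString());
var existing = versionDataTable.Execute(retrieve);

if (existing.Result == null)
{
    migration(migrator); // execute the chained Insert/Delete/CreateTable actions

    var record = new DynamicTableEntity("migrations", migrationId.ToString());
    record.Properties["Version"] = new EntityProperty("1.1");
    record.Properties["Description"] = new EntityProperty("My first migration!");
    versionDataTable.Execute(TableOperation.Insert(record));
}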

Why should I use it?

In fact it's not a matter of what you "should" do but rather what is "good" for your project. Versioning is generally a good idea, especially if you follow CI/CD practices, where the goal is to deploy and roll back with ease. If you perform migrations by hand, you'll eventually face a situation where rollback is either very time-consuming or almost impossible.

It's good to remember that making your database a part of your repository (in terms of storing schema, of course, not data) is considered good practice and is a core part of many modern projects.

What's next?

I published ATSM because I couldn't find a similar tool that would help me version tables in Table Storage easily. New features will surely be added in the future; in the meantime, if you find this project interesting, feel free to post an issue or a feature request - I'll be more than happy to discuss it.