Mesosphere DC/OS on Azure - introduction and installation

The main challenge of managing cloud solutions is the sheer number of resources under your control. Even simple solutions tend to grow rapidly and become difficult to manage once you aim for scalability, high availability and high load. If you're building a big data solution, which requires multiple working clusters and heavy automation, additional tools to control your environment become really helpful.

One dashboard to control them all...

Microsoft Azure has introduced an impressive collection of OSS images and ARM templates, which can be used to deploy and provision a whole environment for tools like MongoDB, Jenkins or WordPress. If you need one, just go and pick it from the Marketplace. After a few minutes you'll get all your resources configured and ready to work. One of those OSS tools is Mesosphere DC/OS - a service & resource manager powered by Apache Mesos that abstracts your datacenter and presents all your resources as one system, accessible from a single place.

But I already have Azure Portal!

Indeed. But what DC/OS gives you is not only a nicer dashboard. It combines all your resources into one giant unit, presents their workloads and helps optimize resource utilization. Running two DBs on two VMs while utilizing only 40% of each? Merge them onto one machine, disable the second one and cut your expenses by 50%. You'll be happy and your boss will be even happier.

How to install it?

Installation of Mesosphere DC/OS is pretty straightforward and well documented. It basically requires two things:

  • running the ARM template available in the Marketplace
  • connecting to the VM using SSH
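
For the record, the portal is not the only option - the same template can also be deployed with the classic AzureRM PowerShell module once you have the template JSON locally. A minimal sketch (file, group and location names are illustrative):

# Create a resource group and deploy the DC/OS template into it
New-AzureRmResourceGroup -Name "dcos-demo" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "dcos-demo" `
	-TemplateFile .\azuredeploy.json `
	-TemplateParameterFile .\azuredeploy.parameters.json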

The tricky part is the SSH connection - you have to connect to the master node via SSH and tunnel port 80 to your local machine. This works flawlessly under Mac/Linux; on Windows you cannot just run the ssh command, because it's not there. What you can do instead is download PuTTY and perform the following steps:

  1. After the ARM deployment has finished, go to the resource group which was selected for DC/OS and select Deployments
  2. Go to the last deployment and in Outputs find the value of the MASTERFQDN key and copy it
  3. Open PuTTY, paste the copied value into the Host Name (or IP address) field and set the port to 2200
  4. Go to Connection/SSH/Tunnels and in the Port forwarding section select both checkboxes
  5. On the same page, find the Add new forwarded port section, enter 8000 as Source port and localhost:80 as Destination, and click Add. The radio buttons should be set to Local and Auto
  6. Click Open and log in as azureuser
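
If you'd rather skip the GUI, the same tunnel can be opened in one line with plink.exe, PuTTY's command-line twin - a sketch assuming the same values as above (replace the host with your MASTERFQDN value):

plink.exe -ssh azureuser@<MASTERFQDN> -P 2200 -L 8000:localhost:80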

Now when you go to http://127.0.0.1:8000/ you should see the DC/OS dashboard.
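
For completeness, on Mac/Linux the whole PuTTY sequence above collapses into a single ssh command (again, the host comes from the MASTERFQDN output):

ssh azureuser@<MASTERFQDN> -p 2200 -L 8000:localhost:80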

What's next?

In the next post I will try to present some basic features of this software. We'll install some packages and find the best way to manage them.

Azure Key Vault - making it right

Recently I spent a couple of hours struggling with one of our Azure Key Vaults - its access policies, to be more specific. Short story - I was unable to create a new access policy from the portal, nor access secrets or keys. After exchanging several emails with Microsoft and a few calls with their technician, we managed to find both a quick fix and the root of all evil - an invalid ARM template.

Key Vaults are somewhat fragile resources when it comes to determining who can access them. You can be a "superadmin hero", but if a Key Vault was created in another tenant or for an object not related to you, you can see it and manage it, yet when it comes to retrieving keys or secrets, you won't be able to do so. That's perfectly fine - it should be as secure as possible - but the way it informs you that something is wrong can be described as... well, lacking.

Consider the following example:

{
	"type": "Microsoft.KeyVault/vaults",
	"name": "[parameters('keyVaultName')]",
	"location": "[resourceGroup().location]",
	"apiVersion": "2015-06-01",
	"tags": {
		"displayName": "my-keyvault"
	},
	"properties": {
		"enabledForDeployment": true,
		"enabledForDiskEncryption": true,
		"enabledForTemplateDeployment": true,
		"tenantId": "[parameters('tenantId')]",
		"accessPolicies": [
			{
				"tenantId": "[parameters('tenantId')]",
				"objectId": "[parameters('objectId')]",
				"permissions": {
					"keys": "[parameters('keysPermissions')]",
					"secrets": "[parameters('secretsPermissions')]"
				}
			}
		],
		"sku": {
			"name": "[parameters('vaultSkuName')]",
			"family": "A"
		}
	}
}
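
Note that keysPermissions and secretsPermissions are arrays of permission names, so the matching parameters file has to pass arrays as well. A minimal sketch (all values are illustrative):

{
	"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
	"contentVersion": "1.0.0.0",
	"parameters": {
		"keyVaultName": { "value": "my-keyvault" },
		"tenantId": { "value": "<GUID of your Azure AD tenant>" },
		"objectId": { "value": "<GUID of the user, group or application to authorize>" },
		"keysPermissions": { "value": [ "get", "list", "create" ] },
		"secretsPermissions": { "value": [ "get", "list", "set" ] },
		"vaultSkuName": { "value": "Standard" }
	}
}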

It is possible with ARM to create a key vault which can be accessed e.g. by a group of admins (by passing the correct tenantId and objectId). However, let's say you have made a mistake in those fields. The key vault will still be created with an access policy, but the following things will also happen:

  • you won't be able to create a new access policy
  • you won't be able to browse keys
  • you won't be able to browse secrets
  • ARM will still be able to retrieve keys and secrets
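
That's why it pays off to double-check both values before deploying - a quick sketch using the classic AzureRM PowerShell module (the UPN is illustrative):

# List your subscriptions together with their tenant ids
Get-AzureRmSubscription

# Object id of the user the access policy is meant for
# (Get-AzureRmADGroup works the same way for groups)
(Get-AzureRmADUser -UserPrincipalName "admin@contoso.com").Id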

Trying to add a new policy will result in an "Invalid parameter name 'accessPolicy'" error, while managing it with Azure PowerShell will give a "401 Unauthorized" response. I've seen more descriptive messages in my life, TBH.

Microsoft guided me to the solution with their article Change a key vault tenant ID after a subscription move, which describes a fix suitable also when a subscription hasn't been moved and a key vault has simply been created for an invalid tenant or object. Performing the steps from the article restored proper access to the key vault and finally led to the proper solution of our issue.
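
For the record, the core of that article boils down to fixing the vault's tenant id, wiping the orphaned access policies and re-granting access with Azure PowerShell - roughly like this (a sketch following the article's approach; variables, vault name and UPN are yours to fill in):

# Point the session at the subscription that owns the vault
Select-AzureRmSubscription -SubscriptionId $subscriptionId

# Load the vault resource together with its properties
$resourceId = (Get-AzureRmKeyVault -VaultName "my-keyvault").ResourceId
$vault = Get-AzureRmResource -ResourceId $resourceId -ExpandProperties

# Fix the tenant and drop the policies pointing at the wrong identities
$vault.Properties.TenantId = $correctTenantId
$vault.Properties.AccessPolicies = @()
Set-AzureRmResource -ResourceId $resourceId -Properties $vault.Properties -Force

# Re-grant access to the right identity
Set-AzureRmKeyVaultAccessPolicy -VaultName "my-keyvault" `
	-UserPrincipalName "admin@contoso.com" `
	-PermissionsToKeys get,list -PermissionsToSecrets get,list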