Create Storage Queue and Table Storage in your deployment

You’re probably familiar with Azure Storage Accounts. They are great and cheap!
It’s also possible to enable the Storage Queues and Table Storage features on those accounts. I use Storage Queues a lot, mostly because I don’t need the enterprise features Azure Service Bus offers.
Table Storage is also great if you want to store data in a cheap NoSQL-style database. While I tend to prefer Cosmos DB over Table Storage most of the time, this ‘old’ service still has value in lots of use cases.

There are, however, a couple of things that annoy me about the Storage Queue & Table Storage features.
One of them, and the subject of this post, is not being able to create them via ARM templates. This means you can’t properly deploy them along with your other Azure resources. When running your application, you can fall back on the CreateIfNotExists(..) method the SDK offers, which creates these entities at runtime if they don’t exist. I dislike using this method because, in my opinion, infrastructure should be deployed via something like an ARM template, not from inside my application.

To solve my ‘problem’, I have created a small PowerShell script to use inside a deployment pipeline. It’s stored in some shared location and gets packaged inside my deployment artifact. This is the first version of the script.

...
$primaryKey = az storage account keys list -g the-resource-group-name -n mystorageaccountname --query '[0].{value:value}' --output tsv
az storage queue create --name $queueName --subscription $subscriptionId --account-key $primaryKey --account-name $accountName
...

This script lists the keys of my storage account and returns the value of the first (primary) key. That value gets stored in a local variable called $primaryKey. Once the key is retrieved, I’m able to create the queue using the parameters that are the input of my script.
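For context, a fuller sketch of that first version might look like the following. The param block and variable names are my assumptions for illustration, not the original script; substitute your own values.

```powershell
# Hypothetical parameters; in my setup these come from the deployment pipeline.
param(
    [string]$resourceGroup,
    [string]$accountName,
    [string]$queueName,
    [string]$subscriptionId
)

# List the account keys and keep only the value of the first (primary) key,
# returned as plain text via --output tsv.
$primaryKey = az storage account keys list -g $resourceGroup -n $accountName --query '[0].{value:value}' --output tsv

# Create the queue, authenticating with the account key.
az storage queue create --name $queueName --subscription $subscriptionId --account-key $primaryKey --account-name $accountName
```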

This script works and you can use it if you want.
However, there’s a ‘small’ security issue with this script: if you turn on debug or verbose logging, you’ll see the value of the $primaryKey variable in the log. That’s NOT something you want!

Because I’m not fluent in PowerShell, it took me quite some time to figure out a better alternative. In the end, I came up with this, rather unreadable, script.

az storage account keys list -g $resourceGroup -n $accountName --query '[0].{value:value}' | ConvertFrom-Json | Select-Object -First 1 | ForEach-Object{ az storage queue create --name $queueName --subscription $subscriptionId --account-key $_.value --account-name $accountName }

# And for creating table storage
az storage account keys list -g $resourceGroup -n $accountName --query '[0].{value:value}' | ConvertFrom-Json | Select-Object -First 1 | ForEach-Object{ az storage table create --name $tableName --subscription $subscriptionId --account-key $_.value --account-name $accountName }

This script uses the piping feature PowerShell offers. The trouble I had was getting the value of the primary key into the $_ variable.
To get it, I had to convert the output of my original query to an object, select the first entry, and iterate over it. The part I found least obvious was the ForEach-Object block: I expected to have a single object, not a collection, after running Select-Object -First 1. I was wrong.
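To see why the ForEach-Object is needed: Select-Object -First 1 still writes its result to the pipeline rather than handing you a plain value, and $_ is only bound inside a script block such as ForEach-Object. A minimal, Azure-free illustration of the same pipeline (the JSON is fake sample data standing in for the az output):

```powershell
# Fake output of: az ... --query '[0].{value:value}'  (a single JSON object)
$json = '{ "value": "fake-primary-key" }'

# ConvertFrom-Json emits the object into the pipeline; Select-Object -First 1
# passes it along; only ForEach-Object binds it to $_ so .value is reachable.
$json | ConvertFrom-Json |
    Select-Object -First 1 |
    ForEach-Object { "Using key: $($_.value)" }
# → Using key: fake-primary-key
```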

From what I can tell, this second version of the script closes the security hole of the original. Because all of the commands are piped together, the key is never stored in a local variable. I’ve turned on verbose logging locally and debug logging inside the pipeline, and couldn’t find the value of my primary key anywhere.

Hopefully, the script above will help you create your complete infrastructure during deployment. If you know of any other security holes in the script, please let me know in the comments so I can try to solve them, or hand me a better solution!

