Azure Functions are great! HTTP triggered Azure Functions are also great, but there’s one downside. All HTTP triggered Azure Functions are publicly available. While this might be useful in a lot of scenarios, it’s also quite possible you don’t want ‘strangers’ hitting your public endpoints all the time.

One way you can solve this is by adding a small bit of authentication on your Azure Functions.

For HTTP Triggered functions you can specify the level of authority one needs to have in order to execute it. There are five levels you can choose from: `Anonymous`, `Function`, `Admin`, `System` and `User`. When using C# you can specify the authorization level in the HttpTrigger-attribute; you can also specify this in the function.json file, of course. If you want a Function to be accessed by anyone, the following piece of code will work because the authorization is set to Anonymous.

[FunctionName("Anonymous")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
    HttpRequestMessage req,
    ILogger logger)
{
    // your code

    return req.CreateResponse(HttpStatusCode.OK);
}

If you want to use any of the other levels, just change the AuthorizationLevel enum value to the level of access you want. I’ve created a sample project on GitHub containing several Azure Functions with different authorization levels, so you can test the difference between the authorization levels yourself. Keep in mind that when running Azure Functions locally, the authorization attribute is ignored and you can call any Function, no matter which level is specified.

Whenever a level other than Anonymous is used, the caller has to specify an additional parameter in the request in order to be authorized to use the Azure Function. Adding this additional parameter can be done in two different ways.
The first one is adding a `code`-parameter to the querystring whose value contains the secret necessary for calling the Azure Function. When using this approach, a request can look like this: https://[myFunctionApp].azurewebsites.net/api/myFunction?code=theSecretCode

Another approach is to add this secret value to the header of your request. If this is your preferred way, you can add a header called `x-functions-key` whose value has to contain the secret code to access the Azure Function.
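To illustrate both options, here’s a minimal sketch of a server-side caller. The URL and key are placeholders, and `HttpClient` is just one way to send these requests.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ProtectedFunctionCaller
{
    // Placeholder values; use your own Function App URL and function key.
    private const string FunctionUrl = "https://myFunctionApp.azurewebsites.net/api/myFunction";
    private const string FunctionKey = "theSecretCode";

    public static async Task CallAsync(HttpClient client)
    {
        // Option 1: pass the key via the `code` querystring parameter.
        var queryResponse = await client.GetAsync($"{FunctionUrl}?code={FunctionKey}");
        Console.WriteLine($"Via querystring: {queryResponse.StatusCode}");

        // Option 2: pass the key via the `x-functions-key` header.
        var request = new HttpRequestMessage(HttpMethod.Get, FunctionUrl);
        request.Headers.Add("x-functions-key", FunctionKey);
        var headerResponse = await client.SendAsync(request);
        Console.WriteLine($"Via header: {headerResponse.StatusCode}");
    }
}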

There are no real pros or cons to either of these approaches; it depends on your solution’s needs and how you want to integrate these Functions.

Which level should I choose?

Well, it depends.

The Anonymous level is pretty straightforward, so I’ll skip this one.

The User level is also fairly simple, as it’s not supported (yet). From what I understand, this level can and will be used in the future in order to support custom authorization mechanisms (EasyAuth). There’s an issue on GitHub which tracks this feature.

The Function level should be used if you want to give some other system (or user) access to this specific Azure Function. You will need to create a Function Key which the end-user/system will have to specify in the request they are making to the Azure Function. If this Function Key is used for a different Azure Function, it won’t get accepted and the caller will receive a 401 Unauthorized.
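In code, this is nothing more than switching the attribute to `AuthorizationLevel.Function`; a minimal variation on the earlier snippet:

[FunctionName("Function")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)]
    HttpRequestMessage req,
    ILogger logger)
{
    // Without a valid function key in the `code` parameter or `x-functions-key` header,
    // the runtime rejects the request with a 401 Unauthorized before this code runs.
    return req.CreateResponse(HttpStatusCode.OK);
}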

The only ones left are the Admin and System levels. Both of them are fairly similar; they work with so-called Host Keys. These Host Keys work on all Azure Functions which are deployed on the same system (Function App). This is different from the earlier mentioned Function Keys, which only work for one specific function.
The main difference between the two is that System only works with the _master host key, while the Admin level should also work with all of the other defined host keys. I’m writing the word ‘should’, because I couldn’t get this level to work with any of the other host keys. Both only appeared to be working with the defined _master key. This might have been a glitch at the time, but it’s good to investigate once you get started with this.

How do I set up those keys?

Sadly, you can’t specify them via an ARM template (yet?). My guess is this will never be possible, as it’s something you want to manage yourself per environment. So how to proceed? Well, head to the Azure Portal and check out the Azure Function you want to view or create keys for.

You can manage the keys for each Azure Function in the portal and even create new ones if you like.

[Screenshot: managing keys for an Azure Function in the Azure Portal]

I probably don’t have to mention this, but just to be on the safe side: you don’t want to distribute these keys to a client-side application. The reason is pretty obvious: the key is sent along with the request, so it’s not a secret anymore. Anyone can inspect the request and see which key is sent to the Azure Function.
You only want to use these keys (both Function Keys and Host Keys) when making requests between server-side applications. This way your keys will never be exposed to the outside world, minimizing the chance of a key getting compromised. If for some reason a key does become compromised, you can Renew or Revoke it.

One gotcha!

There’s one gotcha when creating an HTTP Triggered Function with the Admin authorization level. You can’t prefix these Functions with the term ‘Admin’. For example, when creating a function called ‘AdministratorActionWhichDoesSomethingImportant’ you won’t be able to deploy and run it. You’ll receive an error stating there’s a routing conflict.

[21-8-2018 19:07:41] The following 1 functions are in error:
[21-8-2018 19:07:41] AdministratorActionWhichDoesSomethingImportant : The specified route conflicts with one or more built in routes.

Or, when navigating to the Function in the portal, you’ll see this error message pop up.

[Screenshot: the routing conflict error message in the Azure Portal]

Probably something you want to know in advance, before designing your complete API with Azure Functions.

As it happens, I started implementing some new functionality on a project. For this functionality, I needed an Azure Storage Account with a folder (a container) inside. Because it’s a project not maintained by me, I had to do some searching on how to create such a container in the most automated way, because creating containers in a storage account via an ARM template wasn’t supported… That is, until recently!

In order to create a container inside a storage account, you only have to add a new resource to it. Quite easy, once you know how to do it.

First, let’s start by creating a new storage account.

{
    "name": "[parameters('storageAccountName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2018-02-01",
    "location": "[resourceGroup().location]",
    "kind": "StorageV2",
    "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
    },
    "properties": {
        "accessTier": "Hot"
    }
}

Adding this piece of JSON to your ARM template will make sure a new storage account is created with the specified settings and parameters. Nothing fancy here if you’re familiar with creating ARM templates.

Now for adding a container to this storage account! In order to do so, you need to add a new resource of the type `blobServices/containers` to this template.

{
    "name": "[parameters('storageAccountName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2018-02-01",
    "location": "[resourceGroup().location]",
    "kind": "StorageV2",
    "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
    },
    "properties": {
        "accessTier": "Hot"
    },
    "resources": [{
        "name": "[concat('default/', 'theNameOfMyContainer')]",
        "type": "blobServices/containers",
        "apiVersion": "2018-03-01-preview",
        "dependsOn": [
            "[parameters('storageAccountName')]"
        ],
        "properties": {
            "publicAccess": "Blob"
        }
    }]
}

Deploying this will make sure a container is created with the name `theNameOfMyContainer` inside the storage account. You can even change the public access level of this container. The default is `None`, but this can be changed to `Blob` for anonymous read access to the blobs, or `Container` if you want people to be able to access the container itself and list its contents.
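If you want to double-check the result from code (or still need to create containers programmatically on older setups), a small sketch using the WindowsAzure.Storage NuGet package could look like the following; the connection string and names are assumptions.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ContainerCheck
{
    public static async Task EnsureContainerAsync(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var blobClient = account.CreateCloudBlobClient();

        // Container names must be lowercase; adjust this to match the name in your template.
        var container = blobClient.GetContainerReference("thenameofmycontainer");

        // Creates the container when it doesn't exist yet, with Blob-level public read access.
        await container.CreateIfNotExistsAsync(BlobContainerPublicAccessType.Blob, null, null);
    }
}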

It’s too bad you still can’t deploy a Storage Queue or Table Storage via an ARM template yet. Seeing this latest addition of adding containers, I think it’s only a matter of time before the other two will get implemented/supported.

There’s a relatively new feature available in Azure called Managed Service Identity. What it does is create an identity for a service instance in the Azure AD tenant, which in turn can be used to access other resources within Azure. This is a great feature, because now you don’t have to maintain and create identities for your applications by yourself anymore. All of this management is handled for you when using a System Assigned Identity. There’s also an option to use User Assigned Identities, which work a bit differently.

Because I’m an Azure Function fanboy and want to store my secrets within Azure Key Vault, I was wondering if I was able to configure MSI via an ARM template and access the Key Vault from an Azure Function without specifying an identity by myself.

As with most things, setting this up is rather easy once you know what to do.

The ARM template

The documentation states you can add an `identity` property to your Azure App Service in order to enable MSI.

"identity": {
    "type": "SystemAssigned"
}

This setting is everything you need in order to create a new service principal (identity) within the Azure Active Directory. This new identity has the exact same name as your App Service, so it should be easy to identify.

If you want to check for yourself whether everything worked, you can check the AAD Audit Log. It should have a couple of lines stating a new service principal has been created.

[Screenshot: AAD audit log entries for the newly created service principal]

You can also check out the details of what has happened by clicking on the lines.

[Screenshot: details of an audit log entry]

Not very interesting, until something is broken or needs debugging.

An easier method to check if your service principal has been created is by checking the Enterprise Applications within your AAD tenant. If your deployment has been successful, there’s an application with the same name as your App Service.

[Screenshot: the new service principal listed under Enterprise Applications]

Step two in your ARM template

After having added the identity to the App Service, you now have access to the `tenantId` and `principalId` which belong to this identity. These properties are necessary in order to give your App Service identity access to the Azure Key Vault. If you’re familiar with Key Vault, you probably know there are some Access Policies you have to define in order to get access to specific areas in the Key Vault.

Figuring out how to retrieve the new App Service properties was, for me, the hardest part of this complete post. Eventually I figured out how to access these properties, thanks to an answer on Stack Overflow. What I ended up doing is retrieving a reference to the App Service in the `accessPolicies` block of the Key Vault resource and using the `identity.tenantId` and `identity.principalId`.

"accessPolicies": [
{
  "tenantId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.tenantId]",
  "objectId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.principalId]",
  "permissions": {
    "keys": [],
    "secrets": [
      "get"
    ],
    "certificates": [],
    "storage": []
  }
}],

Easy, right? Well, if you’re an ARM-template guru probably.

Now deploy your template again and you should be able to see your service principal being added to the Key Vault access policies.

[Screenshot: the service principal added to the Key Vault access policies]

Because we’ve specified the identity has access to retrieve (GET) secrets, in theory we are now able to use the Key Vault.

Retrieving data from the Key Vault

This is actually the easiest part. There’s a piece of code you can copy from the documentation pages, because it just works!

// Requires the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault NuGet packages.
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyvaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
            
var secretValue = await keyvaultClient.GetSecretAsync($"https://{myVault}.vault.azure.net/", "MyFunctionSecret");
            
return req.CreateResponse(HttpStatusCode.OK, $"Hello World! This is my secret value: `{secretValue.Value}`.");

The above piece of code retrieves a secret from the Key Vault and shows it in the response of the Azure Function. The result should look something like the following response I saw in Firefox.

[Screenshot: the Azure Function response in Firefox, showing the secret value]

The `KeyVaultTokenCallback` is exclusively meant to be used with Key Vault (hence the name). If you want to use MSI with other Azure services, you will need to use the `GetAccessTokenAsync` method in order to retrieve an access token for that other Azure service.
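As a rough example, retrieving such a token could look like this; the resource URI below is the Azure Resource Manager endpoint, purely for illustration.

// Acquire a token for another Azure service via the same Managed Service Identity.
var tokenProvider = new AzureServiceTokenProvider();
string accessToken = await tokenProvider.GetAccessTokenAsync("https://management.azure.com/");
// Pass this token as a Bearer token when calling the service in question.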

So, that’s all there is to it in order to make your Azure Function or Azure environment a bit safer with these managed identities.
If you want to check out the complete source code, it’s available on GitHub.

I totally recommend using MSI, because it’ll make your code, software and templates much safer and more secure.

I’m in the process of adding an ARM template to an open source project I’m contributing to. All of this was pretty straightforward, until I needed to add some secrets and connection strings to the project.

While it’s totally possible to integrate these secrets in your ARM parameter file or in your continuous deployment pipeline, I wanted to do something a bit more advanced and secure. Of course, Azure Key Vault comes to mind! I’ve already used this in some of my other ASP.NET projects and Azure Functions, so nothing new here.

The thing is, the projects I’ve worked on always retrieved the secrets from Key Vault like in the following example:

"adminPassword": {
    "reference": {
        "keyVault": {
        "id": "/subscriptions/<subscription-id>/resourceGroups/examplegroup/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "examplesecret"
    }
}

While this isn’t a bad thing per se, I don’t like having the `subscription-id` hardcoded in this configuration, especially when doing open source development, mainly because other people can’t access my Key Vault and will run into trouble when deploying this template. Therefore, I started investigating whether this subscription id can be added dynamically.

Introducing the Dynamic Id

Lucky for us the ARM-team has us covered! By changing the earlier mentioned configuration a bit you’re able to use the function `subscription().subscriptionId` in order to get your own subscription id.

"adminPassword": {
    "reference": {
        "keyVault": {
        "id": "[resourceId(subscription().subscriptionId,  parameters('vaultResourceGroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
        },
        "secretName": "[parameters('secretName')]"
    }
},

Downside though, this doesn’t work in your parameter file!

It also doesn’t work in your normal ARM template!

So what’s left? Well, using ARM templates in combination with nested templates! Nested templates are the key to using this dynamic id. Nested templates aren’t something I enjoy using, because it’s easy to get lost in all of those open files.

Well, enough sample configuration for now; let’s see what this looks like in an actual file.

{
    "apiVersion": "2015-01-01",
    "name": "nestedTemplate",
    "type": "Microsoft.Resources/deployments",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('templateBaseUri'), 'my-nested-template.json')]",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "resourcegroup": {
                "value": "[parameters('resourcegroup')]"
            },
            "hostingPlanName": {
                "value": "[parameters('hostingPlanName')]"
            },
            "skuName": {
                "value": "[parameters('skuName')]"
            },
            "skuCapacity": {
                "value": "[parameters('skuCapacity')]"
            },
            "websiteName": {
                "value": "[parameters('websiteName')]"
            },
            "vaultName": {
                "value": "[parameters('vaultName')]"
            },
            "mySuperSecretValueForTheAppService": {
                "reference": {
                    "keyVault": {
                        "id": "[resourceId(subscription().subscriptionId,  parameters('resourcegroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
                    },
                    "secretName": "MySuperSecretValueForTheAppService"
                }
            }
        }
    }
}

In order to use the dynamic id, you have to add it to the `parameters`-section of the nested template resource. Anywhere else in the process is too early or too late to retrieve those values. Ask me how I know…

The observant reader might also notice me using the `templateLink` property with a URI inside.

"templateLink": {
    "uri": "[concat(parameters('templateBaseUri'), 'my-nested-template.json')]",
    "contentVersion": "1.0.0.0"
}

This is because you can only use these functions when the nested template is located at a (publicly accessible) remote location. Another reason why I don’t really like this approach. Linking to a remote location means you can’t use the templates which are located inside the artifact package you are deploying. There is an issue on the feedback portal asking to support local file locations, but it’s not implemented (yet).

For now we just have to copy the template(s) to a remote location during the CI-build process (or do some template-extraction-and-publication-magic in the deployment pipeline). Whenever the CD pipeline runs, it’ll use the templates which have been pushed to this remote location. Sounds like a lot of work, and that’s because it is!
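Just to give an idea of what such a copy step could look like, here’s a sketch using the WindowsAzure.Storage package; your CI tooling probably offers nicer options, and the connection string, container and file names are all made up.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class TemplatePublisher
{
    public static async Task PublishAsync(string connectionString, string localTemplatePath)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("arm-templates");

        // Blob-level public read access, so the deployment can download the nested template.
        await container.CreateIfNotExistsAsync(BlobContainerPublicAccessType.Blob, null, null);

        var blob = container.GetBlockBlobReference("my-nested-template.json");
        await blob.UploadFromFileAsync(localTemplatePath);
    }
}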

You might wonder what this nested template looks like. Well, it looks a lot like a ‘normal’ template:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourcegroup": {
            "type": "string"
        },
        "hostingPlanName": {
            "type": "string",
            "minLength": 1
        },
        "skuName": {
            "type": "string",
            "defaultValue": "F1",
            "allowedValues": [
                "F1",
                "D1",
                "B1",
                "B2",
                "B3",
                "S1",
                "S2",
                "S3",
                "P1",
                "P2",
                "P3",
                "P4"
            ],
            "metadata": {
                "description": "Describes plan's pricing tier and instance size. Check details at https://azure.microsoft.com/en-us/pricing/details/app-service/"
            }
        },
        "skuCapacity": {
            "type": "int",
            "defaultValue": 1,
            "minValue": 1,
            "metadata": {
                "description": "Describes plan's instance count"
            }
        },
        "websiteName": {
            "type": "string"
        },
        "vaultName": {
            "type": "string"
        },
        "mySuperSecretValueForTheAppService": {
            "type": "securestring"
        }
    },
    "variables": {},
    "resources": [{
            "apiVersion": "2015-08-01",
            "name": "[parameters('hostingPlanName')]",
            "type": "Microsoft.Web/serverfarms",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "HostingPlan"
            },
            "sku": {
                "name": "[parameters('skuName')]",
                "capacity": "[parameters('skuCapacity')]"
            },
            "properties": {
                "name": "[parameters('hostingPlanName')]"
            }
        },
        {
            "apiVersion": "2015-08-01",
            "name": "[parameters('webSiteName')]",
            "type": "Microsoft.Web/sites",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverFarms/', parameters('hostingPlanName'))]"
            ],
            "tags": {
                "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "empty",
                "displayName": "Website"
            },
            "properties": {
                "name": "[parameters('webSiteName')]",
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
            },
            "resources": [{
                "name": "appsettings",
                "type": "config",
                "apiVersion": "2015-08-01",
                "dependsOn": [
                    "[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
                ],
                "tags": {
                    "displayName": "appSettings"
                },
                "properties": {
                    "MySuperSecretValueForTheAppService": "[parameters('mySuperSecretValueForTheAppService')]"
                }
            }]
        }
    ],
    "outputs": {}
}

This nested template is responsible for creating an Azure App Service with an Application Setting containing the secret we retrieved from Azure Key Vault in the main template. Pretty straightforward, especially if you’ve worked with ARM templates before.

If you want to see the complete templates & solution, check out my GitHub repository with these sample templates.

The deployment

All of this configuration is fun and games, but does it actually do the job?

There’s one way to find out, and that’s setting up a proper deployment pipeline! I’m most familiar with VSTS, so that’s the tool I’ll be using.

Create a new Release, add a new artifact pointing to the location of your templates and create a new environment.

For testing purposes, this environment only needs to have a single step based on the `Create or Update Resource Group`-task.

In this task you will need to select the ARM Template file, along with the parameters file you want to use. Of course, all of the secrets I don’t want to specify (or want to override) are placed in the `Override template parameters`-section. Most important is the parameter for the `templateBaseUri`. This parameter contains the base URI to the location where the nested template(s) are stored.


[Screenshot: the Create or Update Resource Group task with the overridden template parameters]

It makes sense to override this setting as it’s quite possible you don’t want to use the GitHub location over here, but some location on a public blob container created by your CI-build.

Now save this pipeline and queue a release.

If all goes well, the deployment will fail with a `KeyVaultParameterReferenceNotFound` error.

At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
Details:
BadRequest: {
"error": {
"code": "KeyVaultParameterReferenceNotFound",
"message": "The specified KeyVault '/subscriptions/[subscription-id]/resourceGroups/nested-template-sample/providers/Microsoft.KeyVault/vaults/nested-template-vault' could not be found. Please see https://aka.ms/arm-keyvault for usage details."
}
} undefined
Task failed while creating or updating the template deployment.

Or a bit more visual:

[Screenshot: the failed deployment step showing the KeyVaultParameterReferenceNotFound error]

This makes sense as we’re trying to retrieve a secret from the Azure Key Vault which doesn’t exist yet!

If you head down to the Azure Portal and check out the resource group, you’ll notice both the resource group and the Key Vault have been created.

[Screenshot: the resource group containing the newly created Key Vault]

The only thing we need to do is add the `MySuperSecretValueForTheAppService` secret to the Key Vault.

[Screenshot: adding the MySuperSecretValueForTheAppService secret to the Key Vault]

Once it’s added we can try the release again. All steps should be green now.

[Screenshot: the release with all steps succeeded]

You can verify in the resource group that both the hosting plan and the App Service have now been created.

[Screenshot: the resource group containing the hosting plan and the App Service]

Zooming in on the Application Settings of the App Service you’re also able to see the secret value which has been retrieved from Azure Key Vault!

[Screenshot: the App Service application settings showing the secret value retrieved from Key Vault]

Proof the dynamic id is working when used in combination with a nested template!

Too bad a `securestring` is still rendered in plain text on the portal, but that’s a completely different issue.


It has taken me quite some time to figure out all of the above steps, probably because I’m no CI/CD expert, so I hope the above post will also help others who aren’t experts on the matter.

You might remember me writing about how to warm up your App Service instances when moving between slots. The use of the applicationInitialization-element is implemented on nearly every IIS webserver nowadays and works great, until it doesn’t.

I’ve been working on a project which has been designed as what I’d like to call a distributed monolith. To give you an oversimplified overview, here’s what we have.

[Diagram: a single page application calling an ASP.NET Web API, which calls a backend WCF service, which calls several other services]

First off we have a single page web application which communicates directly with an ASP.NET Web API, which in turn communicates with a backend WCF service, which in turn also communicates with a bunch of other services. You can probably imagine I’m not very happy with this kind of design, but I can’t do much about it currently.

One of the problems with this design is having cold starts whenever a service is being deployed.
Since we’re deploying continuously to test & production, there are a lot of cold starts. Using the applicationInitialization-element really helped with spinning up our App Services, but we were still facing some slowness whenever the WCF service was being deployed to any of our environments. This service is being deployed to an ‘old-fashioned’ Cloud Service, so we figured the applicationInitialization-element should just work as it’s still running on IIS.

After having added some logging to our warmup endpoints in the WCF service we quickly discovered these methods were never hit whenever the service was being deployed or swapped. Strange!

I started looking inside the WCF configuration and discovered HTTP GET requests should just work as expected.

<behaviors>
 <serviceBehaviors>
  <behavior name="MyBehavior">
    <serviceMetadata httpsGetEnabled="true" httpGetEnabled="true" />
  </behavior>
 </serviceBehaviors>
</behaviors>

The thing is, we apparently also have some XML transformation going on for all of our cloud environments, so these attributes were set to `false` for both the Test and Production services. This means we can’t do a warmup request via the applicationInitialization-element as it’s doing a normal GET request to the specified endpoints.

Since we still need this WCF service to be hot right after a deployment and VIP swap, I had to come up with something else. There are multiple things you can do at this point, like creating a PowerShell script which runs right after a deployment, doing a smoke test in your release pipeline which touches all instances, etc. None of them felt quite right.

What I came up with is extending the WebRole.OnStart method! This way I know for sure that every time the WCF service starts, my code is executed. Plus, all of the warmup source code is located in the same project as the service implementation, which makes it easier to track. In order to do a warmup request you need to do a couple of things. First off, we need to figure out the local IP address of the currently running instance. Once we’ve retrieved this IP address we can use it to do an HTTP request to our (local) warmup endpoint. As I mentioned earlier, HTTP GET is disabled via our configuration, so we need to use a different method, like POST.

There’s an excellent blog post by Kevin Williamson on how to implement such a feature. In this post he states the following:

If your website takes several minutes to warmup (either standard IIS/ASP.NET warmup of precompilation and module loading, or warming up a cache or other app specific tasks) then your clients may experience an outage or random timeouts.  After a role instance restarts and your OnStart code completes then your role instance will be put back in the load balancer rotation and will begin receiving incoming requests.  If your website is still warming up then all of those incoming requests will queue up and time out.  If you only have 2 instances of your web role then IN_0, which is still warming up, will be taking 100% of the incoming requests while IN_1 is being restarted for the Guest OS update.  This can lead to a complete outage of your service until your website is finished warming up on both instances.  It is recommended to keep your instance in OnStart, which will keep it in the Busy state where it won't receive incoming requests from the load balancer, until your warmup is complete.

He also posts a small code snippet which can be used as inspiration for implementation, so I went and used this piece of code.
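To sketch how this hooks into the role (a simplified outline, not the literal project code): the Warmup method shown further below is invoked from the OnStart override, so the instance stays Busy until the warmup call has finished.

public override bool OnStart()
{
    // Blocks until the warmup request has completed, keeping this instance out of
    // the load balancer rotation while the service is still cold.
    Warmup();

    return base.OnStart();
}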

One of the problems you will face when implementing this is realizing your IoC-container isn’t spun up yet. This doesn’t have to be a problem per se, but I wanted to add some logging to this code in order to check if everything was working correctly (or not). Well, this isn’t possible!
In order to add some kind of logging, I had to resort to writing entries to the Windows Event Log. This isn’t ideal of course, but it’s the cleanest way I could come up with. By adding entries to the event log you ‘only’ have to enable RDP on your cloud service in order to check what has happened. Needless to say, the RDP settings are reset every time you deploy a new update to the cloud service, so enabling it is quite safe in our scenario.

Adding this logging to my solution really saved the day, because after having added the HTTP request to the OnStart method I still couldn’t see the warmup events being triggered. By checking out the Event Log I quickly discovered this had to do with the installed certificate on the endpoint. The error I was facing told me the following

Could not establish a trust relationship for the SSL/TLS secure channel

This makes sense of course, as the certificate is registered on the hostname and we’re now making a request directly to the internal IP address, which obviously is different (service.customer.com !== 12.34.56.78). Removing the certificate check is rather easy, but you should only do this when you’re 100% sure of what you’re doing. If you remove the certificate check on a global scope, you’re opening yourself up to a massive amount of problems!

For future reference, here’s a piece of code which resembles what I came up with.

// Uses types from the System, System.Diagnostics and System.Net namespaces.
private void Warmup()
{
    string loggingPrefix = $"{nameof(WebRole)} - {nameof(Warmup)} - ";

    using (var eventLog = new EventLog("Application"))
    {
        eventLog.Source = "Application";
        eventLog.WriteEntry($"{loggingPrefix}Starting {nameof(Warmup)}.", EventLogEntryType.Information);

        IPHostEntry ipEntry = Dns.GetHostEntry(Dns.GetHostName());
        string ip = null;
        foreach (IPAddress ipaddress in ipEntry.AddressList)
        {
            if (ipaddress.AddressFamily.ToString() == "InterNetwork")
            {
                ip = ipaddress.ToString();
                eventLog.WriteEntry($"{loggingPrefix}Found IPv4 address is `{ip}`.", EventLogEntryType.Information);
            }
        }
        string urlToPing = $"https://{ip}/V1/MyService.svc/WarmUp";
        HttpWebRequest req = HttpWebRequest.Create(urlToPing) as HttpWebRequest;
        req.Method = "POST";
        req.ContentLength = 0;
        req.ContentType = "application/my-webrole-startup";

        RemoveCertificateValidationToMakeRequestOnInstanceInsteadOfPublicHostname(req);

        try
        {
            eventLog.WriteEntry($"{loggingPrefix}Posting to `{urlToPing}`.", EventLogEntryType.Information);
            var response = req.GetResponse();
        }
        catch (WebException webException)
        {
            // Warmup failed for some reason.
            eventLog.WriteEntry($"{loggingPrefix}Posting to endpoint `{urlToPing}` failed for reason: {webException.Message}.", EventLogEntryType.Error);
        }
        eventLog.WriteEntry($"{loggingPrefix}Finished {nameof(Warmup)}.", EventLogEntryType.Information);
    }
}

private static void RemoveCertificateValidationToMakeRequestOnInstanceInsteadOfPublicHostname(HttpWebRequest req)
{
    req.ServerCertificateValidationCallback = delegate { return true; };
}

This piece of code is based on the sample provided by Kevin Williamson with some Event Log logging added to it, a POST request and removed the certificate check.

Hope it helps you when facing a similar issue!