Warming up your web applications and websites is something we have been doing for quite some time now and will probably keep doing for the next couple of years. This warmup is necessary to ‘spin up’ your services: the just-in-time compiler, your database context, caches, etc.

I’ve worked in several teams where we solved the warming up of a web application in different ways: running smoke tests, pinging some endpoint on a regular basis, setting the IIS application pool recycle timeout to infinite, and some more creative solutions.
Luckily you don’t need to resort to these kinds of solutions anymore, as there is built-in functionality inside IIS and the ASP.NET framework. Just add an `applicationInitialization`-element inside the `system.webServer`-element in your web.config file and you are good to go! This configuration will look very similar to the following block.

    <applicationInitialization>
        <add initializationPage="/Warmup" />
    </applicationInitialization>

What this will do is invoke a call to the /Warmup-endpoint whenever the application is being deployed/spun up. Quite awesome, right? This way you don’t have to resort to those arcane solutions anymore and just use the functionality which is delivered out of the box.

The above works quite well most of the time.
However, we were noticing some strange behavior while using this for our Azure App Services. The App Services weren’t ‘hot’ when a new version was deployed and swapped. This probably isn’t much of a problem if you’re only deploying your application once per day, but it does become a problem when your application is being deployed multiple times per hour.

In order to investigate why the initialization of the web application wasn’t working as expected, I needed to turn on some additional monitoring in the App Service.
The easiest way to do this is to turn on Failed Request Tracing in the App Service and make sure all requests are logged inside these log files. Enabling Failed Request Tracing is rather easy and can be done inside the Azure Portal.


In order to make sure all requests are logged inside this log file, you have to make sure all HTTP status codes are stored, from all possible areas. This requires a bit of configuration in the web.config file. The trace-element will have to be added, along with the traceFailedRequests-element.

    <tracing>
        <traceFailedRequests>
            <add path="*">
                <traceAreas>
                    <add provider="WWW Server" verbosity="Verbose" />
                </traceAreas>
                <failureDefinitions statusCodes="200-600" />
            </add>
        </traceFailedRequests>
    </tracing>

As you can see I’ve configured this to trace all status codes from 200 to 600, which covers all possible HTTP response codes.

Once these settings were configured correctly I was able to do some tests across the various tiers and configurations of an App Service. I had read a post by Ruslan Y stating that using slot settings might help with our warmup problems.
In order to test this I’ve created an App Service for all of the tiers we are using, Free and Standard, in order to see what happens exactly when deploying and swapping the application.
All of the deployments have been executed via TFS Release Management, but I’ve also checked whether a right-click deployment from Visual Studio resulted in different logs. I was glad to see both resulted in the same entries in the log files.


I first tested my application in the Free App Service (F1). After the application was deployed I navigated to the Kudu site to download the trace logs.

Much to my surprise I couldn’t find anything in the logs. There were a lot of log files, but none of them contained anything which closely resembled something like a warmup event. This does validate the earlier linked post, stating we should be using slot settings.

You probably think something like “That’s all fun and games, but deployment slots aren’t available in the Free tier”. That’s a valid thought, but you can still configure slot settings, even if you can’t do anything useful with them.

So I added a slot setting to see what would happen when deploying. After deploying the application I downloaded the log files again and was happy to see a warmup event being triggered.

  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="Headers">
    Host: localhost
    User-Agent: IIS Application Initialization Warmup

This is what you want to see, a request by a user agent called `IIS Application Initialization Warmup`.

Somewhere later in the file you should see a different EventData block with your configured endpoint(s) inside it.

  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="RequestURL">/Warmup</Data>

If you have multiple warmup endpoints you should be able to see each of them in a different EventData-block.

Well, that’s about it for the Free tier, as you can’t do any actual swapping there.


On the Standard App Service I started with a baseline test by just deploying the application without any slots and slot settings.

After deploying the application to the App Service without a slot setting, I did see a warmup event in the logs. This is quite strange to me, as there wasn’t a warmup event in the logs for the Free tier. This means there are some differences between the Free and Standard tiers regarding this warmup functionality.

After having performed the standard deployment, I also tested the other common scenarios.
The second scenario I tried was deploying the application to the Staging slot and pressing the Swap VIP button in the Azure portal. Neither of these environments (staging & production) had a slot setting specified.
So, I checked the log files again and couldn’t find a warmup event or anything which closely resembled this.

This means deploying directly to the Production slot DOES trigger the warmup, but deploying to the Staging slot and executing a swap DOESN’T! Strange, right?

Let’s see what happens when you add a slot setting to the application.
Well, just like Ruslan Y’s post states, if there is a slot setting the warmup is triggered after swapping your environment. This actually makes sense, as your website has to ‘restart’ after swapping environments if there is a slot setting. This restart also triggers the warmup, like you would expect when starting a site in IIS.

How to configure this?

Based on these tests it appears you probably always want to configure a slot setting, even if you are on the Free tier, when using the warmup functionality.

Configuring slot settings is quite easy if you are using ARM templates to deploy your resources. First of all you need to add a setting which will be used as a slot setting. If you don’t have one already, just add something like `Environment` to the `properties` block in your template.

"properties": {
  "Environment": "ProductionSlot"

Now for the trickier part: actually defining a slot setting. Just paste the code block below into your template.

  "apiVersion": "2015-08-01",
  "name": "slotconfignames",
  "type": "config",
  "dependsOn": [
"properties": {
  "appSettingNames": [ "Environment" ]

When I added this to the template I got red squigglies underneath `slotconfignames` in Visual Studio, but you can ignore them as this is a valid setting name.

What the code block above does is tell your App Service that the application setting `Environment` is a slot setting.

After deploying your application with these ARM-template settings you should see this setting inside the Azure Portal with a checked checkbox.


Some considerations

If you want to use the Warmup functionality, do make sure you use it properly. Use the warmup endpoint(s) to ‘start up’ your database connection, fill your caches, etc.
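To make that concrete, here is a minimal, language-independent sketch (in Python, with made-up names; the real implementation would of course be an ASP.NET endpoint) of the kind of work a /Warmup endpoint could do:

```python
import sqlite3

# Hypothetical in-process cache; in ASP.NET this could be e.g. MemoryCache.
cache = {}

def warmup():
    """Sketch of a /Warmup endpoint: touch the database and prime the caches."""
    # Stand-in for spinning up the real database context; forcing a query
    # now means the first real request doesn't pay the connection cost.
    conn = sqlite3.connect(":memory:")
    conn.execute("SELECT 1")
    conn.close()

    # Pre-fill whatever the first real request would otherwise have to compute.
    cache["reference-data"] = ["loaded", "at", "startup"]
    return 200  # the warmup call only needs to return a success status
```

The point is simply that everything expensive happens here, before the instance is reported as ready.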

Another thing to keep in mind is that the swapping (or deploying) of an App Service is only completed after all of the warmup endpoint(s) have finished executing. This means that if you have code which takes multiple seconds to execute, it will ‘delay’ the deployment.

To conclude: please use the warmup functionality provided by IIS (and Azure) instead of those old-fashioned methods, and if you are deploying to an App Service, just add a slot setting to make sure the warmup always triggers.

Hope the above helps if you have experienced similar issues and don’t have the time to investigate the issue.

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies for Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will be able to do the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req,
        string code,
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function will read the ACME challenge file from the disk of the App Service it is running on and return its content. Because Azure Functions run in an App Service (even the functions in a Consumption plan), this is very much possible. The Principal (created in the earlier post) can create these types of files, so everything will work just perfectly.

This isn’t all we have to do, because the url of this function is not the url which the ACME challenge will use to retrieve the appropriate response. In order for you to actually use this site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you have to create will have to redirect the url `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!


This makes my minifier service even more awesome, because now I can finally use HTTPS without getting messages that the certificate isn’t valid.

It has become increasingly important to have your site secured via some kind of certificate. Even your Google ranking is affected by it.

The main problem with SSL/TLS certificates is the fact that most of them cost money. Now, I don’t have any problem with paying some money for something like a certificate, but it would cost quite a lot if I wanted to set this up for all of my sites & domains. In theory it’s possible to create a self-signed certificate and publish your site with it, but that’s not a very good idea, as no one trusts your self-signed certificate besides yourself.

Luckily Let’s Encrypt is helping us, poor content creators, out. Let’s Encrypt is a rather new Certificate Authority offering a free, open and automated service to create certificates. Their Getting Started guide contains some details on how to set this up for your website or hosting provider.

This is all fun and games, but when hosting your site(s) in the Azure App Service ecosystem you can’t do much with the steps defined in the Getting Started guide. At least, I couldn’t make any sense of it.

There’s a developer who has been so kind as to create a so-called site extension for Azure App Services, called Azure Let’s Encrypt. It comes in two flavors, for x86 and x64 systems. Depending on which platform you have deployed your site to, you need to activate the one corresponding to that platform.


In order to access these site extensions you’ll have to log in to the Kudu environment of your site (https://yourAzureSite.scm.azurewebsites.net/SiteExtensions/#gallery).

Once it’s installed you can launch the extension, which will navigate you to its configuration area. This screen shows a number of fields, most of which have to be filled with the correct data.


This form can look quite impressive if you are not familiar with these things. I’m not very familiar with these terms either, but Nik Molnar has a nice post with some details on the matter.

He mentions you should first create two new application settings within the Azure Portal for the App Service for which you want to enable SSL. The names/keys of these settings are AzureWebJobsStorage and AzureWebJobsDashboard. Both should contain a connection string to your Azure Storage account, which will look something like DefaultEndpointsProtocol=https;AccountName=[storage account name];AccountKey=[storage account key]. The storage account is necessary for the WebJobs which will be created by the site extension in order to refresh the SSL certificate.

Next up is the hardest part: creating a Service Principal and retrieving a Client Id and Client Secret from it. In this context, a Service Principal can be seen as a kind of identity for an application, used for authentication.
If you already have a Service Principal, you can skip the step of creating one. I still had to add one to my subscription. The following script will create one for you.

$uri = 'http://mysubdomain.mydomain.nl'
$password = 'SomeStrongPassword'

$app = New-AzureRmADApplication -DisplayName 'MySNP' -HomePage $uri -IdentifierUris $uri -Password $password

Needless to say, your PowerShell context needs to be logged in to your Azure subscription before you are able to run this command.

You are now ready to create the service principal for your application and assign it a role with the following commands.

New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId

These commands will register the application and make sure it has the Contributor role, so it will have enough permissions to install and configure certificates.
Make sure you write down the $app.ApplicationId, which will be used as the Client Id in the site extension later on.

Now that we have all the information to configure the Let’s Encrypt site extension, we are ready to install it on our App Service. In order to make this even easier, please configure the following Application Settings in your App Service.



- Your AAD domain
- Your Azure Subscription Id (this can be found on the Overview page of an App Service)
- The $app.ApplicationId we saved from before (the Client Id)
- The $password value used in the earlier script (the Client Secret)
- The resource group name your App Service is created in

After installing the site extension, navigate to the configuration page. If all is set up correctly, the fields will all be prefilled with the correct information.


Check the settings and adjust them if necessary. When you are sure everything is set up correctly, proceed to the next page.

This next page isn’t very interesting at this time, so we can continue to the final page of the wizard.


Nothing very special over here, just make sure to fill out your e-mail address so you are able to receive notifications from Let’s Encrypt when necessary.

Also good to note: don’t check the Use Staging option. By checking this box the extension will use the test API of Let’s Encrypt and will not create a trusted certificate for you.

Press the big blue button and your site will be available with the certificate after a few moments. The extension uses the challenge-response system of Let’s Encrypt to create a certificate for you. This means it will create a couple of directories in the wwwroot folder of your App Service. This folder structure will look like .well-known\acme-challenge.


If this succeeds, Let’s Encrypt is able to create your certificate.

The challenge-response system folders are hard-coded in the extension. If you run your site in a subfolder, like public, website, build, etc., you have to specify this via a special variable. You can add the letsencrypt:WebRootPath key in the application settings and specify the site folder as the value, for example site\wwwroot\public. This is very important to remember. I had forgotten about this on one of my sites and couldn’t figure out why the creation of the SSL certificate didn’t work.

Now that you know how to secure your sites with a free certificate, go set this up right away!

Including a lot of files in your website can impact the performance of your site. Your browser needs to request all those files from the webserver(s) and download them individually. Luckily this fetching is pretty fast and your browser can do multiple requests at once. However, there is a maximum to the number of requests a browser can make at once, so if you include 100 external files, your page will probably be (relatively) slow.
I’ve tested this by creating a new MVC 3 web application, copying the Site.css file 12 times and including all 13 files in the head-element of the page. Below you can see the FireBug and YSlow reports for this test page.

I’ve pressed the Refresh-button several times and came to the conclusion that each individual file has a loading time between 5ms and 25ms.
Even though 13x25ms still is pretty fast, you probably understand it’s better to minimize the number of requests, because each request has some overhead and some have to wait for others to complete.

To minimize the number of files which need to be included in a website, devigners often create one huge CSS file and one huge JavaScript file which contain everything needed for the website to work. This way the browser only needs to make 3 requests to load the page: the HTML, the CSS and the JavaScript. An additional request will be made for the JavaScript framework you are using (if any) and some more additional requests will be made to fetch the images of the page you are loading.
To test if including 1 big file really is faster, I’ve tested it again with FireBug and YSlow. For this test, I’ve concatenated the contents of the Site.css file 13 times into the SiteFull.css file, so the total styling file size will be the same as with 13 independent files.



As you can see in these results, loading a single CSS file will only take about 2ms to 19ms. Now let’s say my development computer was super fast while testing with the SiteFull.css file: even if fetching the file took twice as much time, it would still be faster than 13x25ms or even 13x5ms.

The conclusion of this test is: bundling all styling in 1 huge file will give much faster loading times than separating it into several smaller files.

This conclusion is widely spread and most devigners already know about it, so it would be redundant for me to point it out again. The thing which bothered me about this approach is the usage of mobile devices and/or low bandwidth.
Putting all styling for your complete website in one file is a waste of bandwidth if the user will only check out 1 or 2 pages. This user probably doesn’t need most of the style sheet, yet he has downloaded it. Loading the full set of styling on your device is probably useful when caching it locally (or on a proxy server), but if you don’t need it, why load it anyway?

Because I wanted to test if loading style sheets (and maybe JavaScript files) can be done smarter, I’ve created a possible solution for this. I’ve introduced a new HttpHandler, called CSSXHandler. This CSSXHandler will handle all requests which have the cssx-extension.
The implementation is fairly simple. You make a request to, let’s say, the homepage.cssx file. The handler will pick up this request, load the necessary CSS files in memory and output the necessary contents for this request.
The initial implementation for this handler looks like this.

public void ProcessRequest(HttpContext context)
{
    var currentStylesheet = DetermineRequestedStylesheet(context);

    switch (currentStylesheet.ToLower())
    {
        case "homepage":
            GenerateHomepageStylesheet(context);
            break;
    }
}

private string DetermineRequestedStylesheet(HttpContext context)
{
    int locationOfLastSlash = context.Request.RawUrl.LastIndexOf("/");
    int locationOfExtension = context.Request.RawUrl.LastIndexOf(".cssx");
    int numberOfCharactersBetweenSlashAndExtension = locationOfExtension - locationOfLastSlash;
    return context.Request.RawUrl.Substring(locationOfLastSlash + 1, numberOfCharactersBetweenSlashAndExtension - 1);
}

private void GenerateHomepageStylesheet(HttpContext context)
{
    var fullCssFileStream = new System.IO.StreamReader(context.Server.MapPath("~/Content/SiteFull.css"));
    string fullCssFileBody = fullCssFileStream.ReadToEnd();

    context.Response.AddHeader("Pragma", "no-cache");
    context.Response.AddHeader("Content-Type", "text/css");
    context.Response.Write(fullCssFileBody);
}

As you can see, I’m loading the SiteFull.css file, reading the contents into a string and writing it out as a whole. This isn’t production-ready code and needs improvement if you want to use it, but it’ll give you an idea of how to set it up.
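The name-extraction logic is just substring arithmetic on the raw URL; the same calculation, sketched in Python:

```python
def stylesheet_name(raw_url):
    """Return the part of the URL between the last '/' and the '.cssx' extension."""
    last_slash = raw_url.rfind("/")
    extension = raw_url.rfind(".cssx")
    return raw_url[last_slash + 1:extension]

print(stylesheet_name("/homepage.cssx"))       # → homepage
print(stylesheet_name("/css/aboutpage.cssx"))  # → aboutpage
```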

Because I wanted to test the performance penalty of this CSSXHandler compared to loading the single SiteFull.css file in the head, I’ve tested loading the page using this handler (with the homepage.cssx in the head).



I was pretty surprised by the results. Loading the homepage.cssx file took between 2ms and 9ms, never slower. Compared to loading the SiteFull.css file, which had a maximum loading time of 19ms, that’s almost 50% faster (max). I didn’t believe this at first, but after pressing the refresh button a couple more times, I couldn’t get it past the 9ms.
Some time ago I read somewhere that, when routing is in place, static files are first handled by the ASP.NET ISAPI filter(s) before being processed ‘correctly’ afterwards. Too bad I can’t find a decent source for it at the moment, but I figured this is probably the reason for the relatively slow loading of the CSS file.

In the above scenario, the full contents of the CSS file are still returned to the browser. The reason for me to write this handler was to minimize the output, so only the necessary styling is returned. To accomplish this, I’ve altered the CSSXHandler a bit. Now it looks like this:

public void ProcessRequest(HttpContext context)
{
    var currentStylesheet = DetermineRequestedStylesheet(context);

    switch (currentStylesheet.ToLower())
    {
        case "homepage":
            GenerateHomepageStylesheetWithOnlyStuffForTheHomePage(context, currentStylesheet);
            break;
    }
}

private string DetermineRequestedStylesheet(HttpContext context)
{
    int locationOfLastSlash = context.Request.RawUrl.LastIndexOf("/");
    int locationOfExtension = context.Request.RawUrl.LastIndexOf(".cssx");
    int numberOfCharactersBetweenSlashAndExtension = locationOfExtension - locationOfLastSlash;
    return context.Request.RawUrl.Substring(locationOfLastSlash + 1, numberOfCharactersBetweenSlashAndExtension - 1);
}

private void GenerateHomepageStylesheetWithOnlyStuffForTheHomePage(HttpContext context, string currentStyleSheet)
{
    var fullCssFileStream = new System.IO.StreamReader(context.Server.MapPath("~/Content/SiteFullCSSX.css"));
    string fullCssFileBody = fullCssFileStream.ReadToEnd();

    context.Response.AddHeader("Pragma", "no-cache");
    context.Response.AddHeader("Content-Type", "text/css");

    // Start of a region in the CSS file: /* REGION homepage */
    // End of a region in the CSS file: /* ENDREGION */
    string patternForMatchingRegionForCurrentStyleSheet = @"\/\* REGION " + currentStyleSheet + @" \*\/(.*?)\/\* ENDREGION \*\/";
    var matchingRegion = new Regex(patternForMatchingRegionForCurrentStyleSheet, RegexOptions.Singleline);

    var matches = matchingRegion.Matches(fullCssFileBody);
    foreach (Match match in matches)
    {
        // Write only the styling inside the matched region to the response.
        context.Response.Write(match.Groups[1].Value);
    }
}

To be able to output only the styling which is necessary for the homepage, there needs to be something in place which tells us what this is. For this I’ve chosen to implement regions in the style sheet, like below.

/* REGION homepage */

/* All necessary styling for the homepage */

/* ENDREGION */
All styling necessary for the homepage needs to be implemented in these blocks (there can be multiple blocks in the CSS file(s)).
The regular expression will search for these blocks and output their contents. By implementing this technique you will only output the styling which is needed for the specific page.
Of course I’ve tested this implementation also, below are the FireBug and YSlow reports.



The performance is fairly similar to outputting the complete SiteFull.css file via the handler, so there isn’t much of a penalty for using the regular expression. However, if you look closely, the file size is much smaller!
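For readers who want to play with the region-matching idea outside of ASP.NET, here is the same technique sketched in Python (the CSS sample is made up):

```python
import re

CSS = """\
/* REGION homepage */
body { color: green; }
/* ENDREGION */
/* REGION about */
h1 { font-size: 2em; }
/* ENDREGION */
"""

def extract_region(css_text, name):
    """Collect the contents of every REGION block with the given name."""
    pattern = r"/\* REGION " + re.escape(name) + r" \*/(.*?)/\* ENDREGION \*/"
    return "".join(m.group(1) for m in re.finditer(pattern, css_text, re.DOTALL))

print(extract_region(CSS, "homepage").strip())  # → body { color: green; }
```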

The above test scenario only contained 1 match for the regular expression. I wanted to see what the performance was when I added a lot more matches in the big file and also tried out a different notation to support multiple style sheets with the same block.

/* REGION homepage, about, portfolio, blog */

Using this and having several other matches in the file resulted in the following statistics.



The result still is fairly similar to the first CSSX test (between 2ms and 9ms). This means you can generate a really dynamic CSS file, load it via the CSSX handler and get a really fast and small style sheet in your page.
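One way to support the comma-separated notation is to capture the whole region header and check whether the requested name appears in the list; a sketch in Python (not the code from the post):

```python
import re

def extract_region_for(css_text, name):
    """Collect region contents where the REGION header's comma-list contains `name`."""
    blocks = []
    for match in re.finditer(r"/\* REGION ([^*]+?) \*/(.*?)/\* ENDREGION \*/",
                             css_text, re.DOTALL):
        names = [n.strip() for n in match.group(1).split(",")]
        if name in names:
            blocks.append(match.group(2))
    return "".join(blocks)

css = "/* REGION homepage, about, portfolio, blog */\nnav { float: left; }\n/* ENDREGION */"
print(extract_region_for(css, "about").strip())  # → nav { float: left; }
```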

All of these tests were performed on my local machine. The code shown isn’t meant for production environments, as it would need some adjustments. Also, I’ve only refreshed the pages about 10 times on my local development machine. If you want to use this in a production environment, consider the consequences before you do. Caching is a lot harder: you don’t have 1 single file in the cache, but multiple. Also, I don’t know whether a proxy or browser would cache a file with the cssx extension. It would be wise to load-test this solution first.

At least I proved it’s possible to generate really dynamic style sheets which load fast and have the smallest possible filesize. If you want to try this handler out yourself, or have improvements, I’ve placed the solution on BitBucket as a public repository.
I would love to hear what you think about this and whether it’s usable in the real world. I hope to try it out soon on a new website of mine.

At work I was once again making some changes to a CSS file, when I suddenly stumbled upon a line with

@media screen {
    /* css stuff */
}

I had never seen anything like this before and wondered what it meant. The developer in question didn’t know exactly what it did anymore either, so I searched around for these kinds of statements.

Apparently you can specify in CSS which type of ‘screen’ should get which style. This way you can use the same CSS file for a projector, monitor, printer and pocket PC, but with different values. This is really cool.
For example, by using this:

p { color: green; }
@media screen, projection, tv {
    #foo { position: absolute; }
}
@media print {
    #navi { display: none; }
}
@media handheld {
    #foo { position: static; }
}

you have defined the styling differently for a handheld, printer, screen, projector and television.
You do have to watch out that you aren’t creating insanely difficult and unreadable code, but this certainly offers potential.
A full explanation can be found on this website:

Besides the fun media types, you can also make use of the *, # and . characters.
The .-character was already familiar to me. As far as I know it refers to the names of classes.
The #-character I have also known since today. It is used to define the style of a specific ID in your HTML. This can be handy as well, when a particular ID needs to stand out a bit more than the rest. This way you can once again save yourself big if-statements in your code.
The *-character is also quite handy. It is used to give all elements of a certain type the same style. So if you specify it for a body-tag, all bodies get the style defined at the *-character, regardless of the class they have.
It can also be used like this, for example:

* {
    /* style applied to every element */
}

A nicer explanation with another example can be found here: http://www.dustindiaz.com/css-asterisk-the-universal-rule/

All in all, another educational day.