The last two posts had me writing about how logging can be implemented in your Azure Functions and how you can reuse class libraries using a different logging library, like log4net. You probably already have some logging and monitoring system in place, but if you’re starting to use Azure Functions (or any other Azure service for that matter), the best tooling to use is Application Insights, in my opinion. You don’t even have to use Azure services in order to use Application Insights. You can also integrate it with any on-premises server or client application.

For those of you who aren’t yet familiar with Application Insights, you should check it out immediately! It’s an awesome tool in Azure which enables you to view logging, metrics, exceptions, performance and more of your applications. It’s also possible to create extensive dashboards, reports and alerts, so it offers everything you need in order to monitor your applications. A real must-have for a professional DevOps team.

Integrate with Azure Functions

Integrating your Azure Functions (Function App) with Application Insights is pretty straightforward.
The easiest way to integrate is by selecting Application Insights when creating a Function App. Just switch it `On`, select the location you want it deployed to and proceed with creating the Function App. This will make sure the newly created Application Insights instance will be used by your Function App.

image

If you have neglected to turn this feature on, or have decided you want to use Application Insights after the Function App has been created, no worries, you can still enable it afterwards. If you need to integrate it manually, navigate to the `Application Settings` of your Function App and add a new entry with the key `APPINSIGHTS_INSTRUMENTATIONKEY`. The value has to be the Instrumentation Key, which can be found on the overview page of your Application Insights instance.
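If you are deploying your resources via ARM templates, you can also wire up this setting over there. The block below is a minimal sketch, assuming the Application Insights resource is defined in the same template and its name is stored in a (hypothetical) variable called `appInsightsName`.

"appSettings": [
  {
    "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
    "value": "[reference(resourceId('Microsoft.Insights/components', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
  }
]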

Having done so, you will immediately see a new notification popping up on the `Monitor` page of your Azure Function stating you can now check out your monitoring over there!

clip_image001

That’s all you need to do in order to integrate a Function App with this awesome monitoring system. Your operations/DevOps people will be grateful for adding this to your solution.

Why add it?

‘Easy to add’ isn’t a very compelling reason to add Application Insights to your Function App. But if you take a moment to check out all of the basic features, I think you will see the power of this tool.

All of the metrics!

Just taking a look at the Overview page is fascinating already. Over here you are able to see how many server instances are running at this time, the response times of your functions and how many requests are being handled within a specific timespan.

clip_image001[5]

Clicking on either one of these graphs will show you even more details!

I like the Live Stream option a lot, because it gives me the feeling I’m an uber-cool-operations-guy by showing a lot of graphs and telemetry data, which give a good first impression of the status of your application.

clip_image001[7]

I know it can be quite overwhelming at first. Just spending a couple of minutes clicking through and investigating all of the options is enough for most developers to understand what’s going on and which information is useful for the job.

One of the other useful pages available is the Performance page. It is especially useful if you’re providing public HTTP-triggered endpoints via your Azure Functions.

clip_image001[9]

You can filter, select timespans and inspect a lot of other metrics related to the performance of your code.

Logging & Exceptions

Even though all of those metrics are useful in a production environment, as a developer I don’t do a lot with all of this information (mostly because I don’t have permission to view this data in Production).

What I do find interesting though is the availability of the logging and being able to query through it quite fast. As I already mentioned, all of the Azure Function logging is stored in Application Insights (if configured) and you are able to search through all of it on the Search page. This is the main reason why I’ve spent some time configuring log4net with a special appender in order to see ALL of my logging in Application Insights.
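To give an idea of what this looks like from the code side, the sketch below is a minimal (version 1) Azure Function which logs entries at different severity levels via the `TraceWriter`. Assuming the Application Insights integration is configured as described above, each entry ends up as trace telemetry you can filter on. The function name and timer schedule are just examples.

[FunctionName("LoggingSample")]
public static void Run(
    [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
    TraceWriter log)
{
    // Each of these entries shows up in Application Insights as a trace
    // with the corresponding severity level.
    log.Verbose("Detailed diagnostic message.");
    log.Info("Function executed.");
    log.Warning("Something looks off.");
    log.Error("Something actually went wrong.");
}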

Just head down to the Search page and you’ll see all of the logging which has occurred in a specific interval.

clip_image001[11]

Obviously, you can change these filters and the interval if you want to investigate something. Most useful, to me, is filtering on the severity level of my logging messages and, of course, being able to track (unhandled) Exceptions, which are a special telemetry type.

clip_image001[1]

While this search page is great for doing some quick research and investigation, there’s an even more awesome page you can check out to see if there have been any exceptions. At this time, this page is labeled Failures. On this page you are able to see when exceptions have occurred and which type of exceptions have been thrown.

clip_image001[3]

Clicking on a specific exception will provide you with some more details.

clip_image001[5]

Zooming in further on such an exception will provide you with a lot of information about the client, the server and even the stacktrace of the exception.

clip_image001[7]

As you probably know, a stacktrace is crucial information for doing proper investigation of problems which have occurred in the past. Along with all of the other logging you can store within Application Insights, it provides you with all of the tooling and analysis information to do proper monitoring and troubleshooting of your service(s).

One of the other features I like very much is the ability to set Alerts for when something fishy is going on, like an increase in exceptions or in logged Errors/Criticals. This way you don’t have to keep track of the dashboards all day, but get notified when there’s something important to look at. I’ve only scratched the surface of Application Insights so far. There are a lot of other things you can check out and configure in order to automate your DevOps workflow as much as possible.

I need more!

Even though I like Application Insights very much, I can imagine a ‘real’ operations person will probably find it a bit too ‘light’, as it might not provide all of the information they want to see. Well, no worries! The team has you covered and has added a button labelled Analytics on the overview page.

image

Pressing this button will navigate you to a new environment which is meant for power users of this system. You can write SQL-like queries over here, create charts, query just about everything and visualize it just the way you want.

image
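As an illustration, a hypothetical query like the one below would chart the number of requests per result code over the last day. The `requests` table is one of the standard Application Insights tables.

requests
| where timestamp > ago(1d)
| summarize count() by resultCode
| render barchart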

I’m not very familiar with this piece of tooling (yet), but it sure looks amazing and I know I’ll put some time into it, as it appears to be even more powerful and useful compared to the ‘default’ Application Insights. My guess is this piece has been built on top of the Kusto platform, which is a great piece of technology I’d like to get my hands on! I’ll be sure to follow up on these pieces of tooling, but for now I’ll leave it at this and hope I’ve triggered you into using Application Insights!

So you might remember me posting about using the Let’s Encrypt site extension for Azure App Services, called Azure Let’s Encrypt, created by SJKP.

This has worked quite well for over a year now and even works for Function Apps.

However, last month I got notified the SSL certificate had expired on one of my sites. Strange, as an automated job should handle this for me. I thought the job probably didn’t execute because of some glitch in the matrix. Therefore I logged in manually, started the site extension wizard again and got stuck on this screen.

image

The reason I was stuck was that the ClientId and ClientSecret didn’t work anymore. As these settings hadn’t changed since I started using this extension, I found this quite strange.

Apparently, the Service Principal, which I had created last year, somehow had changed and I didn’t know how to change it back. Lucky for me, managing the AAD isn’t very hard to do nowadays. With a bit of trial and error I was able to create a new service principal and use its details in the Let’s Encrypt site extension.

Creating a new application in AAD

First thing you need to do is add a new Application to your AAD. Be sure to pick the App registrations option over here and press New application registration.

clip_image001

When creating an application you have to specify a name (I chose `LetsEncrypt`) and which type it is. Just choose the `Web app / API` option over here. The other mandatory field, `Sign-on URL`, isn’t used in our scenario, so you can use any URL you like.

When your application is created you’ll be navigated to the overview page of this application. Be sure to copy the Application ID from over here, as you’ll need it later on. This value has to be used as the ClientId in the site extension.

image

Next thing we need to do is add a Key to this application. You can add new keys via the Settings link, choosing the Keys option. This key will be used as the ClientSecret. Be sure to copy the value after saving, as this is the only time you’ll be able to see it.

image

Also note the Expires option.

The default expiration date is set to 1 year from now. This led me to believe the ClientSecret of my earlier service principal had probably expired. In hindsight I could probably have updated the value on my old service principal and been done with it.

We now have everything we need from our application, so the next thing is to set up the resource group.

Set up your resource group

We need the newly created application to be able to manage resources inside our resource group. Therefore we need to grant it some permissions.

To do so, head down to the resource group which contains your app service(s) and pick the Access control (IAM) option.

clip_image001[5]

From over here you can select your newly created application and grant it the Contributor role.

clip_image001[7]

If everything goes well you’ll see the application is added to the list of contributors of this resource.

image

Running the wizard again

Everything should be set up correctly now so you can head back to the wizard of the site extension. Be sure to fill out the ClientId and ClientSecret with the newly retrieved values from the application.

After doing so and trying to proceed to the next screen I was prompted with the message `The ClientId registered under application settings [guid] does not match the ClientId you entered here [guid]` as you can see in the screenshot below.

image

The first time I ran this wizard (a year ago) it was able to create and update the application settings of the App Service. Apparently this has changed, and I had to change the Application Settings myself in the App Service before I was able to continue in the Let’s Encrypt site extension.

For completeness’ sake: if you’re running a Function App, you can find the settings under All settings, which will navigate you to the App Service settings.

clip_image001[9]

After you’ve changed these settings you should be able to proceed and continue with requesting your SSL certificates.

That’s all there is to it!

Hope it helps whenever you run into problems because your service principal doesn’t work anymore. As I already mentioned, in hindsight it would probably have been much easier to just update the Key of my original service principal, which is something I’ll probably need to do 2 years from now when this new secret expires.

Warming up your web applications and websites is something which we have been doing for quite some time now and will probably be doing for the next couple of years also. This warmup is necessary to ‘spin up’ your services, like the just-in-time compiler, your database context, caches, etc.

I’ve worked in several teams where we had solved the warming up of a web application in different ways: running smoke-tests, pinging some endpoint on a regular basis, making sure the IIS application pool recycle timeout is set to infinite, and some more creative solutions.
Luckily you don’t need to resort to these kinds of solutions anymore. There is built-in functionality inside IIS and the ASP.NET framework. Just add an `applicationInitialization`-element inside the `system.webServer`-element in your web.config file and you are good to go! This configuration will look very similar to the following block.

<system.webServer>
  ...
  <applicationInitialization>
    <add initializationPage="/Warmup" />
  </applicationInitialization>
</system.webServer>

What this will do is invoke a call to the /Warmup-endpoint whenever the application is being deployed/spun up. Quite awesome, right? This way you don’t have to resort to those arcane solutions anymore and can just use the functionality which is delivered out of the box.
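What such a warmup endpoint does is completely up to you. Below is a minimal sketch of what it could look like in an ASP.NET MVC application; the `MyDbContext` class is a hypothetical Entity Framework context standing in for whatever dependencies you want to spin up.

public class WarmupController : Controller
{
    // GET /Warmup
    // Invoked by IIS Application Initialization before real traffic arrives.
    public ActionResult Index()
    {
        // Touch expensive dependencies so the first real request doesn't pay the price.
        using (var context = new MyDbContext())
        {
            // Forces the EF model to be built and the database connection to be opened.
            context.Database.Initialize(force: false);
        }

        return new HttpStatusCodeResult(200);
    }
}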

This warmup functionality works quite well most of the time.
However, we were noticing some strange behavior while using this for our Azure App Services. The App Services weren’t ‘hot’ when a new version was deployed and swapped. This probably isn’t much of a problem if you’re only deploying your application once per day, but it does become a problem when your application is being deployed multiple times per hour.

In order to investigate why the initialization of the web application wasn’t working as expected I needed to turn on some additional monitoring in the App Service.
The easiest way to do this is to turn on Failed Request Tracing in the App Service and make sure all requests are logged inside these log files. Turning on Failed Request Tracing is rather easy; it can be enabled inside the Azure Portal.

image

In order to make sure all requests are logged inside this log file, you have to make sure all HTTP status codes are stored, from all possible areas. This requires a bit of configuration in the web.config file. The `tracing`-element will have to be added, along with the `traceFailedRequests`-element.

<tracing>
  <traceFailedRequests>
    <clear/>
    <add path="*">
      <traceAreas>
        <add provider="WWW Server" 	
        areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,Rewrite,iisnode"
		verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions statusCodes="200-600" />
    </add>
  </traceFailedRequests>
</tracing>

As you can see, I’ve configured this to trace all status codes from 200 to 600, which covers all possible HTTP response codes.

Once these settings were configured correctly I was able to do some tests between the several tiers and configurations in an App Service. I had read a post by Ruslan Y stating the use of slot settings might help in our problems with the warmup functionality.
In order to test this I’ve created an App Service for all of the tiers we are using, Free and Standard, in order to see what happens exactly when deploying and swapping the application.
All of the deployments have been executed via TFS Release Management, but I’ve also checked whether a right-click deployment from Visual Studio resulted in different logs. I was glad to see both resulted in the same entries in the log files.

Free

I first tested my application in the Free App Service (F1). After the application was deployed I navigated to the Kudu site to download the trace logs.

Much to my surprise I couldn’t find anything in the logs. There were a lot of log files, but none of them contained anything which closely resembled a warmup event. This does validate the earlier linked post stating we should be using slot settings.

You probably think something like “That’s all fun and games, but deployment slots aren’t available in the Free tier”. That’s a valid thought, but you can configure slot settings even if you can’t do anything useful with them.

So I added a slot setting to see what would happen when deploying. After deploying the application I downloaded the log files again and was happy to see a warmup event being triggered.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="Headers">
    Host: localhost
    User-Agent: IIS Application Initialization Warmup
  </Data>
</EventData>

This is what you want to see: a request by a user agent called `IIS Application Initialization Warmup`.

Somewhere later in the file you should see a different EventData block with your configured endpoint(s) inside it.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="RequestURL">/Warmup</Data>
</EventData>

If you have multiple warmup endpoints you should be able to see each of them in a different EventData-block.
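Configuring multiple endpoints is just a matter of adding more `add`-elements to the web.config; the second endpoint below is made up for illustration.

<applicationInitialization>
  <add initializationPage="/Warmup" />
  <add initializationPage="/Warmup/Cache" />
</applicationInitialization>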

Well, that’s about it for the Free tier, as you can’t do any actual swapping over there.

Standard

On the Standard App Service I started with a baseline test by just deploying the application without any slots or slot settings.

After deploying the application to the App Service without a slot setting, I did see a warmup event in the logs. This is quite strange to me, as there wasn’t a warmup event in the logs for the Free tier. This means there are some differences between the Free and Standard tiers regarding this warmup functionality.

After having performed the standard deployment, I also tested the other common scenarios.
The second scenario I tried was deploying the application to the Staging slot and pressing the Swap VIP button in the Azure portal. Neither of these environments (staging & production) had a slot setting specified.
So, I checked the log files again and couldn’t find a warmup event or anything which closely resembled this.

This means deploying directly to the Production slot DOES trigger the warmup, but deploying to the Staging slot and executing a swap DOESN’T! Strange, right?

Let’s see what happens when you add a slot setting to the application.
Well, just like Ruslan Y’s post states, if there is a slot setting, the warmup is triggered after swapping your environment. This actually makes sense, as your website has to ‘restart’ after swapping environments if there is a slot setting. This restart also triggers the warmup, just like you would expect when starting a site in IIS.

How to configure this?

Based on these tests it appears you probably always want to configure a slot setting when using the warmup functionality, even if you are on the Free tier.

Configuring slot settings is quite easy if you are using ARM templates to deploy your resources. First of all you need to add a setting which will be used as a slot setting. If you don’t have one already, just add something like `Environment` to the `properties` block in your template.

"properties": {
  ...
  "Environment": "ProductionSlot"
}

Now for the trickier part: actually defining a slot setting. Just paste in the code block below.

{
  "apiVersion": "2015-08-01",
  "name": "slotconfignames",
  "type": "config",
  "dependsOn": [
    "[resourceId('Microsoft.Web/Sites', parameters('mySiteName'))]"
  ],
  "properties": {
    "appSettingNames": [ "Environment" ]
  }
}

When I added this to the template I got red squigglies underneath `slotconfignames` in Visual Studio, but you can ignore them, as this is a valid setting name.

What the code block above does is telling your App Service the application setting `Environment` is a slot setting.

After deploying your application with these ARM-template settings you should see this setting inside the Azure Portal with a checked checkbox.

image

Some considerations

If you want to use the Warmup functionality, do make sure you use it properly. Use the warmup endpoint(s) to ‘start up’ your database connection, fill your caches, etc.

Another thing to keep in mind is that the swapping (or deploying) of an App Service is only completed after all of the Warmup endpoint(s) have finished executing. This means code which takes multiple seconds to execute will ‘delay’ the deployment.

To conclude, please use the warmup functionality provided by IIS (and Azure) instead of those old-fashioned methods, and if deploying to an App Service, just add a slot setting to make sure the warmup always triggers.

Hope the above helps if you have experienced similar issues and don’t have the time to investigate the issue.

Using certificates to secure, sign and validate information has become a common practice in the past couple of years. Therefore, it makes sense to use them in combination with Azure Functions as well.

As Azure Functions are hosted on top of an Azure App Service this is quite possible, but you do have to configure something before you can start using certificates.

Adding your certificate to the Function App

Let’s just start at the beginning, in case you are wondering how to add these certificates to your Function App. Adding certificates is ‘hidden’ on the SSL blade in the Azure portal. Over here you can add SSL certificates, but also regular certificates.

image

Keep in mind though, if you are going to use certificates in your own project, please just add them to Azure Key Vault in order to keep them secure. Using the Key Vault is the preferred way to work with certificates (and secrets).

For the purpose of this post I’ve just pressed the Upload Certificate-link, which will prompt you with a new blade from which you can upload a private or public certificate.

clip_image001[4]

You will be able to see the certificate’s thumbprint, name and expiration date on the SSL blade if it has been added correctly.

image

There was a time where you couldn’t use certificates if your Azure Functions were located on a Consumption plan. Luckily this issue has been resolved, which means we can now use our uploaded certificates in both a Consumption and an App Service plan.

Configure the Function App

As I wrote before, in order to use certificates in your code there is one little configuration matter which has to be addressed. By default the Function App (read: App Service) is locked down quite nicely, which results in not being able to retrieve certificates from the certificate store.

The code I’m using to retrieve a certificate from the store is shown below.

private static X509Certificate2 GetCertificateByThumbprint()
{
    // CertificateThumbprint is a constant/app setting defined elsewhere in the class.
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    var certificateCollection = store.Certificates.Find(X509FindType.FindByThumbprint, CertificateThumbprint, false);

    store.Close();

    foreach (var certificate in certificateCollection)
    {
        if (certificate.Thumbprint == CertificateThumbprint)
        {
            return certificate;
        }
    }

    throw new CryptographicException("No certificate found with thumbprint: " + CertificateThumbprint);
}

Note, if you upload a certificate to your App Service, Azure will place this certificate inside the `CurrentUser/My` store.

Running this code right now will result in an empty `certificateCollection`, and therefore a `CryptographicException` is thrown. In order to get access to the certificate store we need to add an Application Setting called `WEBSITE_LOAD_CERTIFICATES`. The value of this setting can be any number of certificate thumbprints (comma-separated), or just an asterisk (*) to allow any certificate to be loaded.
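In an ARM template this setting could look like the sketch below, using the asterisk. In a production environment you would probably want to list the specific thumbprints instead.

"appSettings": [
  {
    "name": "WEBSITE_LOAD_CERTIFICATES",
    "value": "*"
  }
]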

After having added this single application setting the above code will run just fine and return the certificate matching the thumbprint.

Using the certificate

Using certificates to sign or validate values isn’t rocket science, but strange things can occur! This was also the case when I wanted to use my own self-signed certificate in a function.

I was loading my private key from the store and used it to sign some message, like in the code below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        var signature = csp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}
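For completeness, here is a sketch of what the validating counterpart could look like. Verification only needs the public key, so this also works when you only have the public certificate available.

private static bool VerifySignature(X509Certificate2 certificate, string message, string signature)
{
    // The public key suffices for validating a signature.
    var csp = (RSACryptoServiceProvider)certificate.PublicKey.Key;
    var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
    return csp.VerifyData(
        Encoding.UTF8.GetBytes(message),
        hashAlgorithm,
        Convert.FromBase64String(signature));
}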

The signing code works perfectly, until I started running it inside an Azure Function (or any other App Service for that matter). When running this piece of code I was confronted with the following exception.

System.Security.Cryptography.CryptographicException: Invalid algorithm specified.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignData(Byte[] buffer, Object halg)

So, an `Invalid algorithm specified`? Sounds strange, as this code runs perfectly fine on my local system and any other system I ran it on.

After having done some research on the matter, it appears the underlying Crypto API is choosing the wrong Cryptographic Service Provider. From what I’ve read, the framework is picking CSP number 1, instead of CSP 24, which is necessary for SHA-256. Apparently there have been some changes on this matter in the Windows XP SP3 era, so I don’t know why this still is a problem with our (new) certificates. Then again, I’m no expert on the matter.

If you are experiencing the above problem, the best solution is to request new certificates created with the `Microsoft Enhanced RSA and AES Cryptographic Provider` (CSP 24). If you aren’t in the position to request or use these new certificates, there is a way to overcome the issue.

You can still load and use the current certificate, but you need to export all of the properties and create a new `RSACryptoServiceProvider` with the contents of this certificate. This way you can specify which CSP you want to use along with your current certificate.
The necessary code is shown in the block below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");

        // Export the full private key and import it into a new provider which
        // explicitly uses CSP 24 (Microsoft Enhanced RSA and AES Cryptographic
        // Provider), as this one does support SHA-256.
        var privateKeyBlob = csp.ExportCspBlob(true);
        var cp = new CspParameters(24);
        var newCsp = new RSACryptoServiceProvider(cp);
        newCsp.ImportCspBlob(privateKeyBlob);

        var signature = newCsp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

Do keep in mind, this is something you want to use with caution. Being able to export all properties of a certificate, including the private key, isn’t something you want to expose to your code very often. So if you are in need of such a solution, please consult with your security officer(s) before implementing!

As I mentioned, the code block above works fine inside an App Service and also when running inside an Azure Function on the App Service plan. If you are running your Azure Functions in the Consumption plan, you are out of luck!
Running this code will result in the following exception message.

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Sign ---> System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.
   at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
   at System.Security.Cryptography.Utils.ExportCspBlob(SafeKeyHandle hKey, Int32 blobType, ObjectHandleOnStack retBlob)
   at System.Security.Cryptography.Utils.ExportCspBlobHelper(Boolean includePrivateParameters, CspParameters parameters, SafeKeyHandle safeKeyHandle)
   at Certificates.Sign.SignData(X509Certificate2 certificate, String xmlString)
   at Certificates.Sign.Run(HttpRequestMessage req, String message, TraceWriter log)
   at lambda_method(Closure , Sign , Object[] )
   at Microsoft.Azure.WebJobs.Host.Executors.MethodInvokerWithReturnValue`2.InvokeAsync(TReflected instance, Object[] arguments)
   at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.d__9.MoveNext()

My guess is this has something to do with the nature of the Consumption plan and it being a ‘real’ serverless implementation. I haven’t looked into the specifics yet, but not having access to server resources makes sense.

It has taken me quite some time to figure this out, so I hope it helps you a bit!

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies for Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will be able to do the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req, 
        string code, 
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function will read the ACME challenge file from the disk of the App Service it is running on and return its content. Because Azure Functions run on top of an App Service (even the functions in a Consumption plan), this is very possible. The principal (created in the earlier post) can create these types of files, so everything will work just perfectly.

This isn’t all we have to do, because the URL of this function is not the URL which the ACME challenge will use to retrieve the appropriate response. In order to actually use this site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you have to create will have to redirect the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}
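If you haven’t worked with proxies before: this snippet goes into a `proxies.json` file in the root of your Function App, wrapped in the following structure.

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "acmechallenge": { ... }
  }
}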

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!

image_thumb5

This makes my minifier service even more awesome, because now I can finally use HTTPS without getting messages that the certificate isn’t valid.