So, you’ve heard about Azure Functions?  Possibly read about them somewhere?  Why all the fuss?  What exactly are Azure Functions? Well, you’ve stopped at the right place.  In this post and the next couple of posts, I’m going to talk about Azure Functions, along with their history, use cases and some tutorials.  So, let’s get started.

Introduction

Microsoft describes Azure Functions as “an event-based serverless compute experience to accelerate your development.  [Azure Functions architecture] can scale based on demand and you pay only for the resources you consume.”  Microsoft continues by saying, “Don’t worry about the infrastructure and provisioning of servers, especially when your Functions call rate scales up.”

With these statements, there are two components I want to call attention to: 1) serverless compute; and, 2) scalability.  We’ll talk more about these later, but as a brief overview… In the past, in order to deploy a web application or server process, a virtual machine or Web Role/Worker Role/App Service was required for hosting.  There were at least three problems with this.  First, Azure customers had to pay for constant up-time, typically billed across all 744 hours in a month.  For a web API or background service that wasn’t very active, that was expensive, especially if the service required major processing power whenever it was invoked.  Second, Azure customers needed some infrastructure knowledge when deploying to virtual machines (e.g. networking, firewalls, IIS).  Many developers today have solid DevOps skills, but this wasn’t always the case.  Configuring things like IIS made many developers nervous when they were only accustomed to clicking a button in Visual Studio to debug their application; once the code was checked in, deployment was someone else’s responsibility.  Third, Azure customers had to manage various scaling configurations for their applications.  This required knowledge of CPU utilization and message queuing – again, more infrastructure knowledge than many developers had in their tool belt.  As you’ll see, Functions remove all of these burdens from IT and developers – keeping costs low, deployments efficient and applications easier to manage overall.

History

Worker Roles

Before we look more closely at Azure Functions, let’s take a trip down memory lane.  How did we get here?  In version 1 of Azure, there were primarily three offerings: 1) Web Roles; 2) Worker Roles; and, 3) Storage.  Storage was exactly that – blob storage in the cloud.  Web roles were packaged web applications deployed to Azure and meant to be reachable through public endpoints.  Worker roles were very similar to a background service on a Windows machine or a cron job in a Linux environment.  While worker roles did have WCF service endpoints, these were not meant to be public – they were endpoints that allowed connections from a given web application for management purposes only (e.g. starting, stopping or pausing a worker thread).  Worker roles were designed to be long-running processes for dealing with things like message queues, ETL on data, etc.  In short, web roles were web applications or web sites that typically ran in IIS, and worker roles were long-running background processes.
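To make that concrete, here is a minimal sketch of what a typical Worker Role entry point looked like.  The RoleEntryPoint/Run() loop is the standard pattern; the ProcessQueueMessages helper is a hypothetical placeholder for whatever work the role actually did:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // The role instance stays allocated (and billed) even while sleeping.
        while (true)
        {
            ProcessQueueMessages();                  // the actual work: maybe a minute of it
            Thread.Sleep(TimeSpan.FromMinutes(10));  // "waiting", but the VM keeps running
        }
    }

    // Hypothetical placeholder for the queue-draining logic.
    private void ProcessQueueMessages() { }
}
```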

Web and Worker Roles were deployed onto VMs and thus were, technically, PaaS.  To configure both roles with regard to size and initial number of instances, a configuration file was deployed along with each package.  Once deployed, you could also determine how each of these scaled based on various metrics.  But here’s the problem with Worker Roles.  Let’s say you had a job that needed to check a queue every 10 minutes for incoming messages (e.g. emails, text messages, etc.), and each run took a minute or two to process everything.  That means the process only needed to run roughly 10 minutes of every hour; about 240 minutes every day; or roughly 124 hours every month.  Even though the Worker Role only actually performed work for about 124 hours each month, you were still billed for all 744 hours.  Worker Roles constantly run in a waiting state (i.e. paused inside a loop, as in the sketch above).  The loop, in our case, would run, pause for 10 minutes, then run again.  It continues to do so and, even though the loop is paused or waiting, the role is still “running”, so you’re still paying for VM compute time over the entire month.  Finally, because Worker Roles were deployed to a VM, even if you “stopped” the background thread, you were still paying for the reserved VM compute.  In the end, there was really no straightforward way, short of some serious PowerShell scripting, to automatically deploy a Worker Role, start the process, shut it down once it finished, and then destroy the Worker Role (and repeat that entire procedure every 10 minutes).  There had to be a better (and cheaper) way.  Enter Web Jobs.

Web Jobs

Web Jobs, part of App Service, were Microsoft’s answer to the “overcharge” of billable hours on Worker Roles.  Like Worker Roles, Web Jobs run background processes, but they were designed to run in response to an event – a schedule of some sort, a programmatic method call, or a request to a web endpoint (URL).  The schedules could run every X minutes or at certain times of day (e.g. 8:00 am, 12:00 pm and 5:00 pm) on specific days.  Taking our example above, we could reduce our bill to just the roughly 124 hours the process actually runs (versus the 744 hours of a Worker Role).
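As an illustration, here is a minimal sketch of a scheduled Web Job using the WebJobs SDK’s timer trigger.  It assumes the Microsoft.Azure.WebJobs.Extensions NuGet package and a job host configured with UseTimers(); the body of the method is hypothetical:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

public class Functions
{
    // Six-field NCRONTAB expression: fire at the top of every 10th minute,
    // instead of keeping a Worker Role loop alive around the clock.
    public static void ProcessIncomingMessages(
        [TimerTrigger("0 */10 * * * *")] TimerInfo timer,
        TextWriter log)
    {
        log.WriteLine("Draining the incoming message queue...");
        // Hypothetical: read the queued emails/texts and send them.
    }
}
```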

Unfortunately, Web Jobs introduced a different issue.  Let’s look at a scenario to better understand the limitation.  Imagine that you have a very busy web site running in App Service.  Because of the load on that web site (i.e. the number of visitors/requests), you’ve had to scale it up tremendously.  On the other hand, like above, the Web Job doesn’t need to be that big and only needs to run for a minute to process all of the incoming requests.  Both the web site and the Web Job need to be associated with an App Service Plan.  While it doesn’t have to be, an App Service Plan is typically a “boundary container” for a given application – all of the app services required for a particular application are tied to one App Service Plan.  In our case, we would most likely put both the web site (App Service) and the Web Job in the same App Service Plan.  When it comes to scaling, you don’t define the scaling capabilities and limits on the individual App Service or Web Job; you define them on the App Service Plan itself.  What this means is that when we need to scale the web site (App Service) to handle incoming requests, we must choose a larger App Service Plan.  That App Service Plan also contains the Web Job.  So, when we increase the App Service Plan to support the web site, we are over-scaling for our Web Job, and therefore being billed for resources we don’t need.  The same is true in reverse – the Web Job might need a ton of horsepower (e.g. ETL scripts for data calculations) while the web site (App Service) is fairly small.

So, the problem with Web Jobs is that while you only pay for the time the Web Job is actually processing – in our example, about 124 hours – you are paying far more than necessary to perform the task.  It’s like going to a car rental counter and asking for a basic sedan to get you from point A to point B, but being handed the keys to a Ferrari.  The horsepower may be nice, but it’s unnecessary for the task and way too expensive.

Functions

Azure Functions are a PaaS offering from Azure, meaning that much of the maintenance associated with running a typical web hosting environment is handled for you: security, scalability, failover, redundancy and bursting are all managed by Azure.

Azure Functions remove the constraints of both Worker Roles and Web Jobs, with the ability to scale independently of a web application while paying only for what you use, while you use it.  Like Web Jobs, Azure Functions are event-driven.  In other words, unlike Worker Roles that run continuously in the background, an Azure Function typically runs for only a few seconds (if that long) to accomplish a specific task.  The triggering events range from a typical HTTP request to a message arriving on a queue or an event in blob storage (e.g. a new file being uploaded).
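To see how little ceremony is involved, here is a minimal sketch of a queue-triggered function in the C# script (run.csx) style.  The queue name and storage connection are declared in the accompanying function.json binding (not shown), so none of that plumbing appears in the code:

```csharp
// run.csx – the "queueMessage" parameter name matches the queue binding in function.json
using System;

public static void Run(string queueMessage, TraceWriter log)
{
    // Runs only when a message lands on the configured storage queue,
    // and you are billed only for this execution time.
    log.Info($"Processing message: {queueMessage}");
}
```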

Finally, Azure Functions auto-scale as demand increases or decreases.  There are two options when creating an Azure Functions application – a Consumption Plan or an App Service Plan.  The App Service Plan is more traditional and closely aligned with the Web Job payment model, where a predetermined set of resources is assigned to the Function application.  This gives you predictable costs and scale; you still only pay for what you use, but there is a maximum limit of available resources.  Alternatively, the Consumption Plan is a pure pay-for-what-you-use model in which there is virtually no limit to what you can consume – your application can scale as much as needed to service demand.

Azure Functions can currently be developed in C#, F# or Node.js, and there are a couple of boilerplate templates in Azure to help you get started creating them.  There are also plans for Azure Functions to support PowerShell and Python in the future.