Serverless Apps Architecture Patterns and Azure Implementation
All rights reserved. No part of the contents of this book may be reproduced or transmitted in any
form or by any means without the written permission of the publisher.
This book is provided “as-is” and expresses the author’s views and opinions. The views, opinions and
information expressed in this book, including URL and other Internet website references, may change
without notice.
Some examples depicted herein are provided for illustration only and are fictitious. No real association
or connection is intended or should be inferred.
Microsoft and the trademarks listed at https://www.microsoft.com on the “Trademarks” webpage are
trademarks of the Microsoft group of companies.
All other marks and logos are property of their respective owners.
This guide explains the components of the Azure serverless platform and focuses specifically on
implementation of serverless using Azure Functions. You’ll learn about triggers and bindings as well as
how to implement serverless apps that rely on state using durable functions. Finally, business
examples and case studies will help provide context and a frame of reference to determine whether
serverless is the right approach for your projects.
IaaS still carries significant overhead, because the operations team remains responsible for various tasks. These tasks include:
• Patching and backing up servers.
• Installing packages.
• Keeping the operating system up-to-date.
• Monitoring the application.
The next evolution reduced the overhead by providing Platform as a Service (PaaS). With PaaS, the cloud provider handles operating systems, security patches, and even the required packages to support a specific platform. Instead of building a VM, then configuring .NET and standing up Internet Information Services (IIS) servers, developers simply choose a “platform target” such as “web application” or “API endpoint” and deploy code directly. The infrastructure questions are reduced to a few decisions about capacity and scale.
Another feature of serverless is micro-billing. It’s common for web applications to host Web API
endpoints. In traditional bare metal, IaaS and even PaaS implementations, the resources to host the
APIs are paid for continuously. That means you pay to host the endpoints even when they aren’t
being accessed. Often you’ll find one API is called more than others, so the entire system is scaled
based on supporting the popular endpoints. Serverless enables you to scale each endpoint
independently and pay for usage, so no costs are incurred when the APIs aren’t being called.
In many circumstances, migration can dramatically reduce the ongoing cost to support the endpoints.
Additional resources
• Azure Architecture Center
• Best practices for cloud applications
Contents
Monoliths
Microservices
Serverless
Summary
Scaling
Monitoring, tracing, and logging
Managing failure and providing resiliency
Scheduling
Event-based processing
Event Grid
Scenarios
Performance targets
Azure resources
Conclusion
The orchestrator function
Monitoring
Customer reviews
GraphQL
Conclusion
CHAPTER 1
Architecture approaches
Understanding existing approaches to architecting enterprise apps helps clarify the role played by
serverless. There are many approaches and patterns that evolved over decades of software
development, and all have their own pros and cons. In many cases, the ultimate solution may not
involve deciding on a single approach but may integrate several approaches. Migration scenarios
often involve shifting from one architecture approach to another through a hybrid approach.
This chapter provides an overview of both logical and physical architecture patterns for enterprise
applications.
Architecture patterns
Modern business applications follow a variety of architecture patterns. This section represents a survey
of common patterns. The patterns listed here aren’t necessarily all best practices, but illustrate
different approaches.
Monoliths
Many business applications follow a monolith pattern. Legacy applications are often implemented as
monoliths. In the monolith pattern, all application concerns are contained in a single deployment.
Everything from user interface to database calls is included in the same codebase.
Unfortunately, the monolith pattern tends to break down at scale. Major disadvantages of the
monolith approach include:
• Difficult to work in parallel in the same code base.
• Any change, no matter how trivial, requires deploying a new version of the entire application.
• Refactoring potentially impacts the entire application.
• Often the only solution to scale is to create multiple, resource-intensive copies of the monolith.
• As systems expand or other systems are acquired, integration can be difficult.
• It may be difficult to test due to the need to configure the entire monolith.
• Code reuse is challenging and often other apps end up having their own copies of code.
Many businesses look to the cloud as an opportunity to migrate monolith applications and at the
same time refactor them to more usable patterns. It’s common to break out the individual
applications and components to allow them to be maintained, deployed, and scaled separately.
N-Layer applications
N-layer applications partition application logic into specific layers. The most common layers include:
• User interface
• Business logic
• Data access
Microservices
Microservices architectures contain common characteristics that include:
• Isolation of the database (often the front end doesn’t have direct access to the database back
end).
• Reuse of the API (for example, mobile, desktop, and web app clients can all reuse the same APIs).
Hosting approaches have evolved as well. Hosting apps on your own bare-metal hardware carries its own set of responsibilities, including:
• The need to buy excess capacity for “just in case” or peak demand scenarios.
• Securing physical access to the hardware.
• Responsibility for hardware failure (such as disk failure).
• Cooling.
• Configuring routers and load balancers.
• Power redundancy.
• Securing software access.
Virtualization of hardware, via “virtual machines,” enables Infrastructure as a Service (IaaS). Host
machines are effectively partitioned to provide resources to instances with allocations for their own
memory, CPU, and storage. The team provisions the necessary VMs and configures the associated
networks and access to storage.
Although virtualization and IaaS address many concerns, they still leave much responsibility in the hands of the infrastructure team. The team maintains operating system versions, applies security patches, and installs third-party dependencies on the target machines. Apps often behave differently on production machines compared to the test environment. Issues arise due to different dependency versions and/or OS SKU levels. Although many organizations deploy N-Tier applications to these targets, many companies benefit from deploying to a more cloud-native model.
PaaS addresses the challenges common to IaaS. PaaS allows the developer to focus on the code or
database schema rather than how it gets deployed. Benefits of PaaS include:
• Pay for use models that eliminate the overhead of investing in idle machines.
• Direct deployment and improved DevOps, continuous integration (CI), and continuous delivery
(CD) pipelines.
• Automatic upgrades, updates, and security patches.
• Push-button scale out and scale up (elastic scale).
The main disadvantage of PaaS traditionally has been vendor lock-in. For example, some PaaS
providers only support ASP.NET, Node.js, or other specific languages and platforms. Products like
Azure App Service have evolved to address multiple platforms and support a variety of languages and
frameworks for hosting web apps.
Managing containers across hosts typically requires an orchestration tool such as Kubernetes.
Configuring and managing orchestration solutions may add additional overhead and complexity to
projects. Fortunately, many cloud providers provide orchestration services through PaaS solutions to
simplify the management of containers.
The following image illustrates an example Kubernetes installation. Nodes in the installation address
scale out and failover. They run Docker container instances that are managed by the master server.
The kubelet is the client that relays commands from Kubernetes to Docker.
Functions as a Service (FaaS) is a specialized container service that is similar to serverless. A specific
implementation of FaaS, called OpenFaaS, sits on top of containers to provide serverless capabilities.
OpenFaaS provides templates that package all of the container dependencies necessary to run a piece
of code. Using templates simplifies the process of deploying code as a functional unit. OpenFaaS
targets architectures that already include containers and orchestrators because it can use the existing
infrastructure. Although it provides serverless functionality, it specifically requires you to use Docker
and an orchestrator.
Serverless
A serverless architecture provides a clear separation between the code and its hosting environment.
You implement code in a function that is invoked by a trigger. After that function exits, all its needed
resources may be freed. The trigger might be manual, a timed process, an HTTP request, or a file
upload. The result of the trigger is the execution of code. Although serverless platforms vary, most
provide access to pre-defined APIs and bindings to streamline tasks such as writing to a database or
queueing results.
Serverless is an architecture that relies heavily on abstracting away the host environment to focus on
code. It can be thought of as “less server.”
Container solutions provide developers with existing build scripts to publish code to serverless-ready
images. Other implementations use existing PaaS solutions to provide a scalable architecture.
The abstraction means the DevOps team doesn’t have to provision or manage servers, nor specific
containers. The serverless platform hosts code, either as script or packaged executables built with a
related SDK, and allocates the necessary resources for the code to scale.
The following illustration diagrams four serverless components. An HTTP request causes the Checkout API code to run. The Checkout API inserts data into a database, and the insert triggers several other functions that run to perform follow-up tasks, such as fulfilling the order.
Summary
There’s a broad spectrum of available choices for architecture, including a hybrid approach. Serverless
simplifies the approach, management, and cost of application features at the expense of control and
portability. However, many serverless platforms do expose configuration to help fine-tune the
solution. Good programming practices can also lead to more portable code and less serverless
platform lock-in. The following table illustrates the architecture approaches side by side. Choose
serverless based on your scale needs, whether or not you want to manage the runtime, and how well
you can break your workloads into small components. You’ll learn about potential challenges with
serverless and other decision points in the next chapter.
Recommended resources
• Azure application architecture guide
• Azure Cosmos DB
• Azure SQL
• N-Tier architecture pattern
• Kubernetes on Azure
• Microservices
• Virtual machine N-tier reference architecture
• Virtual machines
• What is Docker?
• Wingtip Tickets SaaS application
Serverless hosts often use an existing container-based or PaaS layer to manage the serverless
instances. For example, Azure Functions is based on Azure App Service. The App Service is used to
scale out instances and manage the runtime that executes Azure Functions code. For Windows-based
functions, the host runs as PaaS and scales out the .NET runtime. For Linux-based functions, the host
leverages containers.
The WebJobs Core provides an execution context for the function. The Language Runtime runs scripts,
executes libraries and hosts the framework for the target language. For example, Node.js is used to
run JavaScript functions and the .NET Framework is used to run C# functions. You’ll learn more about
language and platform options later in this chapter.
Some projects may benefit from taking an “all-in” approach to serverless. Applications that rely heavily
on microservices may implement all microservices using serverless technology. The majority of apps
are hybrid, following an N-tier design and using serverless for the components that make sense.
Web apps
Web apps are great candidates for serverless applications. There are two common approaches to web
apps today: server-driven, and client-driven (such as Single Page Application or SPA). Server-driven
web apps typically use a middleware layer to issue API calls to render the web UI. SPA applications
make REST API calls directly from the browser. In both scenarios, serverless can accommodate the
middleware or REST API request by providing the necessary business logic. A common architecture is to stand up a lightweight static web server that serves the SPA’s HTML, CSS, JavaScript, and other browser assets. The web app then connects to a microservices back end.
Mobile apps
Mobile developers can build business logic without becoming experts on the server side. Traditionally,
mobile apps connected to on-premises services. Building the service layer required understanding the
server platform and programming paradigm. Developers worked with operations to provision servers.
Serverless abstracts the server-side dependencies and enables the developer to focus on business
logic. For example, a mobile developer who builds apps using a JavaScript framework can build
serverless functions with JavaScript as well. The serverless host manages the operating system, a
Node.js instance to host the code, package dependencies, and more. The developer is provided a
simple set of inputs and a standard template for outputs. They then can focus on building and testing
the business logic. They’re therefore able to use existing skills to build the back-end logic for the
mobile app without having to learn new platforms or become a “server-side developer.”
Most cloud providers offer mobile-based serverless products that simplify the entire mobile
development lifecycle. The products may automate the provisioning of databases to persist data, handle DevOps concerns, provide cloud-based build and test frameworks, and enable scripting of business processes using the developer’s preferred language. Following a mobile-centric serverless
approach can streamline the process. Serverless removes the tremendous overhead of provisioning,
configuring, and maintaining servers for the mobile back end.
Internet of Things (IoT)
The sheer volume of devices and information often dictates an event-driven architecture to route and
process messages. Serverless is an ideal solution for several reasons:
• Enables scale as the volume of devices and data increases.
• Accommodates adding new endpoints to support new devices and sensors.
• Facilitates independent versioning so developers can update the business logic for a specific
device without having to deploy the entire system.
• Provides resiliency and reduces downtime.
15 CHAPTER 2 | Serverless architecture
The pervasiveness of IoT has resulted in several serverless products that focus specifically on IoT
concerns, such as Azure IoT Hub. Serverless automates tasks such as device registration, policy
enforcement, tracking, and even deployment of code to devices at the edge. The edge refers to
devices like sensors and actuators that are connected to, but not an active part of, the Internet.
Managing state
Serverless functions, as with microservices in general, are stateless by default. Avoiding state enables
serverless to be ephemeral, to scale out, and to provide resiliency without a central point of failure. In
some circumstances, business processes require state. If your process requires state, you have two
options. You can adopt a model other than serverless, or interact with a separate service that provides
state. Adding state can complicate the solution and make it harder to scale, and potentially create a
single point of failure. Carefully consider whether your function absolutely requires state. If the answer
is “yes,” determine whether it still makes sense to implement it with serverless.
There are several solutions to adopt state without compromising the benefits of serverless. Some of
the more popular solutions include:
• Use a temporary data store or distributed cache, like Redis
• Store state in a database, like SQL or CosmosDB
• Handle state through a workflow engine like durable functions
The bottom line is that you should be aware of the need for any state management within processes
you’re considering implementing with serverless.
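To illustrate the second option (state kept outside the function), here is a minimal sketch in which a stateless handler loads, mutates, and saves state through an injected store. The in-memory map is a stand-in; a real implementation would use something like Redis or Cosmos DB.

```javascript
// Minimal sketch: the function itself stays stateless; all state lives in an
// external store passed in (an in-memory Map stands in for a real cache/DB).
async function incrementCounter(store, key) {
  const current = (await store.get(key)) || 0; // load state
  const next = current + 1;                    // mutate
  await store.set(key, next);                  // save state back
  return next;
}

// A trivial async store with the same get/set shape a cache client exposes.
function createMemoryStore() {
  const data = new Map();
  return {
    get: async (key) => data.get(key),
    set: async (key, value) => { data.set(key, value); },
  };
}
```

Because the function instance holds nothing between invocations, any instance can serve any request, which preserves the scale-out and resiliency benefits discussed above.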
Long-running processes
Many benefits of serverless rely on the processes being ephemeral. Short run times make it easier for
the serverless provider to free up resources as functions end and share functions across hosts. Most
cloud providers limit the total time your function can run to around 10 minutes. If your process may
take longer, you might consider an alternative implementation.
There are a few exceptions and solutions. One solution may be to break your process into smaller
components that individually take less time to run. If your process runs long because of dependencies,
you can also consider an asynchronous workflow using a solution like durable functions. Durable
functions pause and maintain the state of your process while it’s waiting on an external process to
finish. Asynchronous handling reduces the time the actual process runs.
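The durable-functions idea can be sketched generically. The following is an illustrative mini-runner, not the actual durable-functions API: the workflow is written as a generator that yields activity requests, and a runner performs each external call and resumes the generator with the result, so the long waits live outside the workflow code itself.

```javascript
// Simplified illustration of the durable-function idea: an orchestrator is a
// generator that yields activity calls; a runner awaits each activity and
// resumes the generator with the result.
async function runOrchestrator(orchestrator, activities) {
  const gen = orchestrator();
  let step = gen.next();
  while (!step.done) {
    const { activity, input } = step.value;            // an activity request
    const result = await activities[activity](input);  // external work
    step = gen.next(result);                           // resume with result
  }
  return step.value;
}

// Hypothetical order workflow: the activity names are illustrative.
function* orderWorkflow() {
  const order = yield { activity: "createOrder", input: { items: 2 } };
  const payment = yield { activity: "charge", input: order.total };
  return { orderId: order.id, paid: payment.ok };
}
```

Real durable functions add checkpointing and replay on top of this pattern, so an orchestration can survive host restarts while waiting.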
A related concern is “cold start”: the delay before an idle function is ready to serve a request. There are a few ways to minimize it:
• Some providers allow users to pay for service levels that guarantee infrastructure is “always on”.
• Implement a keep-alive mechanism (ping the endpoint to keep it “awake”).
• Use orchestration like Kubernetes with a containerized function approach (the host is already
running so spinning up new instances is extremely fast).
A popular approach to solve schema versioning is to never modify existing properties and columns,
but instead add new information. For example, consider a change to move from a Boolean
“completed” flag for a todo list to a “completed date.” Instead of removing the old field, the database
change will:
1. Add a new “completed date” field.
2. Transform the “completed” Boolean field into a computed field that evaluates to true when a completed date is set.
3. Add a trigger to set the completed date to the current date when the completed Boolean is set
to true.
The sequence of changes ensures that legacy code continues to run “as is” while newer serverless
functions can take advantage of the new field.
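The same idea can be sketched at the application level. This is an illustrative sketch with hypothetical field names, assuming a todo item that gains a new completedDate property:

```javascript
// Expose the legacy boolean "completed" as a value computed from the new
// "completedDate" field, so old readers keep working unchanged.
function withLegacyFields(todo) {
  return { ...todo, completed: todo.completedDate != null };
}

// Writers that still set only the boolean get a completedDate filled in,
// mirroring a database trigger that backfills the new field.
function applyLegacyWrite(todo) {
  if (todo.completed && todo.completedDate == null) {
    return { ...todo, completedDate: new Date().toISOString() };
  }
  return todo;
}
```

Old and new code paths can then coexist against the same data during a gradual migration.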
For more information about data in serverless architectures, see Challenges and solutions for
distributed data management.
Scaling
It’s a common misconception that serverless means “no server.” It’s in fact “less server.” The fact there
is a backing infrastructure is important to understand when it comes to scaling. Most serverless
platforms provide a set of controls to handle how the infrastructure should scale when event density
increases. You can choose from a variety of options, but your strategy may vary depending on the workload and platform.
Rules often specify how to scale-up (increase the host resources) and scale-out (increase the number
of host instances) based on varying parameters. Triggers for scaling may include schedule, request
rates, CPU utilization, and memory usage. Higher performance often comes at a greater cost. The less
expensive, consumption-based approaches may not scale as quickly when the request rate suddenly
increases. There is a trade-off between paying up front “insurance cost” versus paying strictly “as you
go” and risking slower responses due to sudden increases in demand.
Inter-service dependencies
A serverless architecture may include functions that rely on other functions. In fact, it isn’t uncommon
in a serverless architecture to have multiple services call each other as part of an interaction or
distributed transaction. To avoid strong coupling, it’s recommended that services don’t reference each
other directly. When the endpoint for a service needs to change, direct references could result in
major refactoring. A suggested solution is to provide a service discovery mechanism, such as a
registry, that provides the appropriate end point for a request type. Another solution is to leverage
messaging services like queues or topics for communication between services.
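One way to sketch the registry idea follows; the request types and endpoint URLs are hypothetical, and a production registry would be a shared service rather than an in-process map.

```javascript
// Illustrative service-registry lookup: callers resolve endpoints by request
// type instead of hard-coding URLs, so endpoints can move without refactoring.
const registry = new Map([
  ["checkout", "https://example.com/api/checkout"],
  ["inventory", "https://example.com/api/inventory"],
]);

function resolveEndpoint(requestType) {
  const endpoint = registry.get(requestType);
  if (!endpoint) throw new Error(`Unknown request type: ${requestType}`);
  return endpoint;
}
```

When an endpoint changes, only the registry entry is updated; callers are untouched.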
Related to the circuit-breaker pattern, services need to be fault tolerant and resilient. Fault tolerance
refers to the ability of your application to continue running even after unexpected exceptions or
invalid states are encountered. Fault tolerance is typically a function of the code itself and how it’s
written to handle exceptions. Resiliency refers to how capable the app is at recovering from failures.
Resiliency is often managed by the serverless platform. The platform should be able to spin up a new
serverless function instance when the existing one fails. The platform should also be intelligent
enough to stop spinning up new instances when every new instance fails.
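A minimal circuit-breaker sketch (illustrative, not a production library) shows the idea in code: after repeated failures the wrapper fails fast for a cooldown period instead of hammering a dependency that is already down.

```javascript
// Wrap an async call: after `threshold` consecutive failures the circuit
// opens and calls fail fast until `cooldownMs` has elapsed, giving the
// dependency time to recover.
function circuitBreaker(fn, { threshold = 3, cooldownMs = 30000 } = {}) {
  let failures = 0;
  let openedAt = 0;
  return async (...args) => {
    if (failures >= threshold && Date.now() - openedAt < cooldownMs) {
      throw new Error("circuit open");
    }
    try {
      const result = await fn(...args);
      failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= threshold) openedAt = Date.now();
      throw err;
    }
  };
}
```

In serverless settings the same logic is often delegated to the platform or a resilience library rather than written by hand.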
Scheduling
Scheduling tasks is a common function. The following diagram shows a legacy database that doesn’t
have appropriate integrity checks. The database must be scrubbed periodically. The serverless
function finds invalid data and cleans it. The trigger is a timer that runs the code on a schedule.
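In Azure Functions, that kind of schedule is declared through a timer trigger binding. The following is a hedged sketch of the function.json; the binding name and the six-field NCRONTAB expression (here, 2:00 AM daily) are illustrative.

```json
{
  "bindings": [
    {
      "name": "cleanupTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 2 * * *"
    }
  ],
  "disabled": false
}
```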
Using CQRS, a read might involve a special “flattened” entity that models data the way it’s consumed.
The read is handled differently than how it’s stored. For example, although the database may store a
contact as a header record with a child address record, the read could involve an entity with both the header and address fields combined.
Serverless can accommodate the CQRS pattern by providing the segregated endpoints. One serverless
function accommodates queries or reads, and a different serverless function or set of functions
handles update operations. A serverless function may also be responsible for keeping the read model
up-to-date, and can be triggered by the database’s change feed. Front-end development is simplified
to connecting to the necessary endpoints. Processing of events is handled on the back end. This
model also scales well for large projects because different teams may work on different operations.
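The segregation can be sketched as follows. This is an illustrative, in-process stand-in: in a real system the command and query sides would be separate serverless functions, and the read-model refresh would be a third function triggered by the database change feed.

```javascript
// Illustrative CQRS sketch: writes go to normalized records, reads come from
// a flattened view that models data the way it is consumed.
function createContactStore() {
  const headers = new Map();   // write model: header records
  const addresses = new Map(); // write model: child address records
  const readModel = new Map(); // flattened entities for queries

  return {
    command(id, header, address) {
      headers.set(id, header);
      addresses.set(id, address);
      // A change-feed-triggered function would do this asynchronously.
      readModel.set(id, { ...header, ...address });
    },
    query(id) {
      return readModel.get(id);
    },
  };
}
```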
Event-based processing
In message-based systems, events are often collected in queues or publisher/subscriber topics to be
acted upon. These events can trigger serverless functions to execute a piece of business logic. An
example of event-based processing is event-sourced systems. An “event” is raised to mark a task as
complete. A serverless function triggered by the event updates the appropriate database document. A
second serverless function may use the event to update the read model for the system. Azure Event
Grid provides a way to integrate events with functions as subscribers.
Events are informational messages. For more information, see Event Sourcing pattern.
Serverless endpoints triggered by HTTP calls can be used to handle the API requests. For example, an
ad services company may call a serverless function with user profile information to request custom
advertising. The serverless function returns the custom ad and the web page renders it.
Stream processing
Devices and sensors often generate streams of data that must be processed in real time. There are a
number of technologies that can capture messages and streams from Event Hubs and IoT Hub to
Service Bus. Regardless of transport, serverless is an ideal mechanism for processing the messages
and streams of data as they come in. Serverless can scale quickly to meet the demand of large
volumes of data. The serverless code can apply business logic to parse the data and output in a
structured format for action and analytics.
Recommended resources
• Azure Event Grid
• Azure IoT Hub
• Challenges and solutions for distributed data management
• Designing microservices: identifying microservice boundaries
• Event Hubs
• Event Sourcing pattern
• Implementing the Circuit Breaker pattern
• IoT Hub
• Service Bus
• Working with the change feed support in Azure Cosmos DB
You can also use Application Insights, a serverless platform for capturing diagnostic traces and
telemetry. Application Insights is available to applications of all types (desktop, mobile, or web) as
well as serverless implementations. The platform is visualized in the following diagram:
Azure Functions
Azure Functions provides a serverless compute experience. A function is invoked by a trigger (such as
access to an HTTP endpoint or a timer) and executes a block of code or business logic. Functions also
support specialized bindings that connect to resources like storage and queues.
Another hosting option for function apps is the Premium plan. This plan provides an “always on”
instance to avoid cold start, supports advanced features like VNet connectivity, and runs on premium
hardware.
To build from the Azure CLI, see Create your first function using the Azure CLI.
To create a function from Visual Studio, see Create your first function using Visual Studio.
• Blob Storage: invoke your function when a file or folder is uploaded or changed in storage.
• HTTP: invoke your function like a REST API.
• Queue: invoke your function when items exist in a queue.
• Timer: invoke your function on a regular cadence.
Examples of bindings include:
{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "images/{name}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "$return",
      "type": "queue",
      "direction": "out",
      "queueName": "images",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}
The example is a simple function that takes the name of the file that was modified or uploaded to
blob storage, and places it on a queue for later processing.
For a full list of triggers and bindings, see Azure Functions triggers and bindings concepts.
Adding Application Insights to existing apps is as easy as adding an instrumentation key to your
application’s settings. With Application Insights you can:
• Create custom charts and alerts based on metrics such as number of function invocations, the
time it takes to run a function, and exceptions
• Analyze failures and server exceptions
• Drill into performance by operation and measure the time it takes to call third-party
dependencies
• Monitor CPU usage, memory, and rates across all servers that host your function apps
• View a live stream of metrics including request count and latency for your function apps
• Use Analytics to search, query, and create custom charts over your function data
The following code measures how long it takes to insert a new row into an Azure Table Storage
instance:
// 'entry' is the entity to insert, 'table' is an initialized CloudTable, and
// 'telemetry' is an Application Insights TelemetryClient instance.
var operation = TableOperation.Insert(entry);
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
await table.ExecuteAsync(operation);
// Report the call and its duration as a dependency in Application Insights.
telemetry.TrackDependency("AzureTableStorageInsert", "Insert", startTime,
timer.Elapsed, true);
Logic Apps can do more than just connect your cloud services (like functions) with cloud resources
(like queues and databases). You can also orchestrate on-premises workflows with the on-premises
gateway. For example, you can use a Logic App to trigger an on-premises SQL stored procedure in
response to a cloud-based event or conditional logic in your workflow. Learn more about Connecting
to on-premises data sources with Azure On-premises Data Gateway.
Once the app is triggered, you can use the visual designer to build out steps, loops, conditions, and
actions. Any data ingested in a previous step is available for you to use in subsequent steps. The
following workflow loads URLs from a CosmosDB database. It finds the ones with a host of t.co, then
searches for them on Twitter. If it finds corresponding tweets, it updates the documents with the
related tweets by calling a function.
Event Grid
Azure Event Grid provides serverless infrastructure for event-based applications. You can publish to
Event Grid from any source and consume messages from any platform. Event Grid also has built-in
support for events from Azure resources to streamline integration with your applications. For example,
you can subscribe to blob storage events to notify your app when a file is uploaded. Your application
can then publish a custom event grid message that is consumed by other cloud or on-premises
applications. Event Grid was built to reliably handle massive scale. You get the benefits of publishing
and subscribing to messages without the overhead of setting up the necessary infrastructure.
Scenarios
Event Grid addresses several different scenarios. This section covers three of the most common ones.
Ops automation
Event Grid can help speed automation and simplify policy enforcement by notifying Azure
Automation when infrastructure is provisioned.
Application integration
You can use Event Grid to connect your app to other services. Using standard HTTP protocols, even
legacy apps can be easily modified to publish Event Grid messages. Web hooks are available for other
services and platforms to consume Event Grid messages.
Serverless apps
Event Grid can trigger Azure Functions, Logic Apps, or your own custom code. A major benefit of
using Event Grid is that it uses a push mechanism to send messages when events occur. The push
architecture consumes fewer resources and scales better than polling mechanisms. Polling must check
for updates on a regular interval.
Performance targets
Using Event Grid, you can take advantage of its published performance guarantees for throughput and delivery latency.
In addition to the built-in Azure events, you can publish your own custom events. A custom event is posted as an array of one or more event objects that follow the Event Grid schema, as in this sample:
[{
  "id": "03e24f21-a955-43cc-8921-1f61a6081ce0",
  "eventType": "myCustomEvent",
  "subject": "foo/bar/12",
  "eventTime": "2018-09-22T10:36:01+00:00",
  "data": {
    "favoriteColor": "blue",
    "favoriteAnimal": "panther",
    "favoritePlanet": "Jupiter"
  },
  "dataVersion": "1.0"
}]
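A custom event like the sample above can be published to a topic with a plain HTTP POST. The following sketch uses HttpClient; the topic endpoint and access key are placeholders you would read from your own configuration, and the aeg-sas-key header is how Event Grid authenticates publishers to custom topics.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class EventGridPublisher
{
    private static readonly HttpClient Client = new HttpClient();

    // Publishes a JSON array of Event Grid events to a custom topic.
    // topicEndpoint and topicKey are placeholders for your topic's
    // endpoint URL and access key.
    public static async Task PublishAsync(
        string topicEndpoint, string topicKey, string eventJson)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, topicEndpoint)
        {
            Content = new StringContent(eventJson, Encoding.UTF8, "application/json")
        };

        // Event Grid authenticates custom-topic publishers with this header.
        request.Headers.Add("aeg-sas-key", topicKey);

        HttpResponseMessage response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}
```

Subscribers (functions, Logic Apps, or webhooks) then receive the event without the publisher needing to know who they are.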
Azure resources
A major benefit of using Event Grid is the automatic messages produced by Azure. In Azure, resources
automatically publish to a topic that allows you to subscribe for various events. The following list shows the event types that are available automatically, grouped by Azure resource.
Azure subscription:
• Microsoft.Resources.ResourceWriteSuccess: raised when a resource create or update operation succeeds.
• Microsoft.Resources.ResourceWriteFailure: raised when a resource create or update operation fails.
• Microsoft.Resources.ResourceWriteCancel: raised when a resource create or update operation is canceled.
• Microsoft.Resources.ResourceDeleteSuccess: raised when a resource delete operation succeeds.
• Microsoft.Resources.ResourceDeleteFailure: raised when a resource delete operation fails.
• Microsoft.Resources.ResourceDeleteCancel: raised when a resource delete operation is canceled. This event happens when a template deployment is canceled.
Blob storage:
• Microsoft.Storage.BlobCreated: raised when a blob is created.
• Microsoft.Storage.BlobDeleted: raised when a blob is deleted.
You can access Event Grid from any type of application, even one that runs on-premises.
Conclusion
In this chapter you learned about the Azure serverless platform that is composed of Azure Functions,
Logic Apps, and Event Grid. You can use these resources to build an entirely serverless app
architecture, or create a hybrid solution that interacts with other cloud resources and on-premises
servers. Combined with a serverless data platform such as Azure SQL or CosmosDB, you can build fully
managed cloud native applications.
Recommended resources
• App service plans
• Application Insights
• Application Insights Analytics
• Azure: Bring your app to the cloud with serverless Azure Functions
• Azure Event Grid
• Azure Event Grid event schema
• Azure Event Hubs
• Azure Functions documentation
• Azure Functions triggers and bindings concepts
• Azure Logic Apps
• Azure Service Bus
• Azure Table Storage
• Connecting to on-premises data sources with Azure On-premises Data Gateway
• Create your first function in the Azure portal
• Create your first function using the Azure CLI
• Create your first function using Visual Studio
• Functions supported languages
• Monitor Azure Functions
Various patterns exist today that assist with the coordination of application state between internal and
external systems. It’s common to come across solutions that rely on centralized queuing systems,
distributed key-value stores, or shared databases to manage that state. However, these are all
additional resources that now need to be provisioned and managed. In a serverless environment, your
code could become cumbersome trying to coordinate with these resources manually. Azure Functions
offers an alternative for creating stateful functions called Durable Functions.
Durable Functions is an extension to the Azure Functions runtime that enables the definition of
stateful workflows in code. By breaking down workflows into activities, the Durable Functions
extension can manage state, create progress checkpoints, and handle the distribution of function calls
across servers. In the background, it makes use of an Azure Storage account to persist execution
history, schedule activity functions, and retrieve responses. Your serverless code shouldn't interact directly with the information persisted in that storage account; it's an implementation detail that developers typically don't need to manage.
Orchestrator functions are unable to make use of bindings other than the
OrchestrationTriggerAttribute. This attribute can only be used with a parameter type of
DurableOrchestrationContext. No other inputs can be used since deserialization of inputs in the
function signature isn’t supported. To get inputs provided by the orchestration client, the
GetInput<T> method must be used.
Also, the return types of orchestration functions must be either void, Task, or a JSON serializable
value.
Error handling code has been left out for brevity:
[FunctionName("PlaceOrder")]
public static async Task<string> PlaceOrder(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    OrderRequestData orderData = context.GetInput<OrderRequestData>();

    // Run activities in sequence. CheckAndReserveInventory and ProcessPayment
    // appear later in this chapter; "ScheduleShipping" is illustrative.
    await context.CallActivityAsync<bool>("CheckAndReserveInventory", orderData);
    await context.CallActivityAsync<bool>("ProcessPayment", orderData);
    string trackingNumber =
        await context.CallActivityAsync<string>("ScheduleShipping", orderData);

    return trackingNumber;
}
The completed Task<string> from StartNewAsync contains the unique ID of the orchestration instance. This instance ID can be used to invoke operations on that specific orchestration, such as querying its status or sending it event notifications.
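As a sketch of what instance management might look like: GetStatusAsync, RaiseEventAsync, and TerminateAsync are the DurableOrchestrationClient methods for these operations. The function name, event name, and payload below are illustrative.

```csharp
// Sketch: managing a specific orchestration instance by its ID.
// "ApprovalEvent" and the terminate reason are illustrative values.
[FunctionName("ManageOrder")]
public static async Task ManageOrder(
    [HttpTrigger(AuthorizationLevel.Function, "POST")] HttpRequestMessage req,
    [OrchestrationClient] DurableOrchestrationClient client)
{
    OrderRequestData data = await req.Content.ReadAsAsync<OrderRequestData>();
    string instanceId = await client.StartNewAsync("PlaceOrder", data);

    // Query the current status of the running orchestration.
    DurableOrchestrationStatus status = await client.GetStatusAsync(instanceId);

    // Send an event notification that a waiting orchestration can receive.
    await client.RaiseEventAsync(instanceId, "ApprovalEvent", true);

    // Or stop the instance outright.
    // await client.TerminateAsync(instanceId, "Order canceled by user");
}
```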
Similar to orchestration functions, the return types of activity functions must be either void, Task, or a
JSON serializable value.
Any unhandled exceptions thrown within activity functions are surfaced to the calling orchestrator function as a TaskFailedException. At that point, the error can be caught and logged in the orchestrator, and the activity can be retried.
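A minimal sketch of both approaches, written as a fragment inside an orchestrator function like PlaceOrder (the retry settings are illustrative):

```csharp
// Fragment from inside an orchestrator function; "context" is the
// DurableOrchestrationContext parameter and "orderData" the orchestration input.
try
{
    await context.CallActivityAsync<bool>("ProcessPayment", orderData);
}
catch (TaskFailedException)
{
    // The activity threw an unhandled exception. Log it, run compensating
    // logic, or rethrow to fail the whole orchestration.
}

// Alternatively, declare a retry policy and let the extension re-run the
// activity automatically: up to five attempts, five seconds apart
// (illustrative values).
await context.CallActivityWithRetryAsync<bool>(
    "ProcessPayment",
    new RetryOptions(firstRetryInterval: TimeSpan.FromSeconds(5), maxNumberOfAttempts: 5),
    orderData);
```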
[FunctionName("CheckAndReserveInventory")]
public static bool CheckAndReserveInventory(
    [ActivityTrigger] DurableActivityContext context)
{
    OrderRequestData orderData = context.GetInput<OrderRequestData>();

    // Check and reserve inventory here (implementation omitted);
    // return whether the reservation succeeded.
    return true;
}
Recommended resources
• Durable Functions
• Bindings for Durable Functions
• Manage instances in Durable Functions
Orchestration patterns
Durable Functions makes it easier to create stateful workflows that are composed of discrete, long-running activities in a serverless environment. Since Durable Functions can track the progress of your workflows, it lends itself to implementing several common orchestration patterns.
Function chaining
In a typical sequential process, activities need to execute one after the other in a particular order.
Optionally, the upcoming activity may require some output from the previous function. This
dependency on the ordering of activities creates a function chain of execution.
The benefit of using Durable Functions to implement this workflow pattern comes from its ability to
do checkpointing. If the server crashes, the network times out, or some other issue occurs, Durable Functions can resume from the last known state and continue running your workflow, even if it's on another server.
[FunctionName("PlaceOrder")]
public static async Task<string> PlaceOrder(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    OrderRequestData orderData = context.GetInput<OrderRequestData>();

    // Each CallActivityAsync checkpoints progress before the next step runs.
    // "ScheduleShipping" is an illustrative activity name.
    await context.CallActivityAsync<bool>("CheckAndReserveInventory", orderData);
    await context.CallActivityAsync<bool>("ProcessPayment", orderData);
    string trackingNumber =
        await context.CallActivityAsync<string>("ScheduleShipping", orderData);

    return trackingNumber;
}
In the preceding code sample, the CallActivityAsync function is responsible for running a given activity
on a virtual machine in the data center. When the await returns and the underlying Task completes,
the execution will be recorded to the history table. The code in the orchestrator function can make
use of any of the familiar constructs of the Task Parallel Library and the async/await keywords.
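Because of that, activities with no ordering dependency can also be fanned out in parallel with Task.WhenAll. A brief sketch, as a fragment inside an orchestrator, where the two notification activity names are hypothetical:

```csharp
// Fan out two independent activities and wait for both to complete.
// "NotifyWarehouse" and "NotifyCustomer" are hypothetical activity names.
Task warehouseTask = context.CallActivityAsync("NotifyWarehouse", orderData);
Task customerTask = context.CallActivityAsync("NotifyCustomer", orderData);
await Task.WhenAll(warehouseTask, customerTask);
```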
The following code is a simplified example of what the ProcessPayment method may look like:
[FunctionName("ProcessPayment")]
public static bool ProcessPayment([ActivityTrigger] DurableActivityContext context)
{
    OrderRequestData orderData = context.GetInput<OrderRequestData>();
    ApplyCoupons(orderData);
    if (IssuePaymentRequest(orderData)) {
        return true;
    }
    return false;
}
In this scenario, the DurableOrchestrationClient’s ability to check the status of a running workflow
becomes useful. When using an HttpTrigger to start a workflow, the CreateCheckStatusResponse
method can be used to return an instance of HttpResponseMessage. This response provides the client
with a URI in the payload that can be used to check the status of the running process.
[FunctionName("OrderWorkflow")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "POST")] HttpRequestMessage req,
    [OrchestrationClient] DurableOrchestrationClient orchestrationClient)
{
    OrderRequestData data = await req.Content.ReadAsAsync<OrderRequestData>();

    // Start the PlaceOrder orchestration and hand back a payload of
    // management URIs for the new instance.
    string instanceId = await orchestrationClient.StartNewAsync("PlaceOrder", data);
    return orchestrationClient.CreateCheckStatusResponse(req, instanceId);
}
The sample result below shows the structure of the response payload.
{
"id": "instanceId",
"statusQueryGetUri": "http://host/statusUri",
"sendEventPostUri": "http://host/eventUri",
"terminatePostUri": "http://host/terminateUri"
}
Using your preferred HTTP client, GET requests can be made to the URI in statusQueryGetUri to
inspect the status of the running workflow. The returned status response should resemble the
following code.
{
  "runtimeStatus": "Running",
  "input": {
    "$type": "DurableFunctionsDemos.OrderRequestData, DurableFunctionsDemos"
  },
  "output": null,
  "createdTime": "2018-01-01T00:22:05Z",
  "lastUpdatedTime": "2018-01-01T00:22:09Z"
}
As the process continues, the status response will change to either Failed or Completed. On
successful completion, the output property in the payload will contain any returned data.
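One way to consume that endpoint is a simple polling loop. The sketch below assumes Newtonsoft.Json is available for parsing and that statusQueryGetUri holds the URI from the earlier payload; the two-second delay is an illustrative choice.

```csharp
// Sketch: poll the status endpoint until the orchestration reaches a
// terminal state. Requires Newtonsoft.Json.Linq for JObject.
using (var httpClient = new HttpClient())
{
    string runtimeStatus;
    do
    {
        await Task.Delay(TimeSpan.FromSeconds(2));
        string json = await httpClient.GetStringAsync(statusQueryGetUri);
        runtimeStatus = (string)JObject.Parse(json)["runtimeStatus"];
    }
    while (runtimeStatus == "Running" || runtimeStatus == "Pending");

    // runtimeStatus is now "Completed", "Failed", or another terminal value.
}
```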
Monitoring
For simple recurring tasks, Azure Functions provides the TimerTrigger, which can be scheduled based on a CRON expression. The timer works well for simple, short-lived tasks, but there might be scenarios that call for more flexible scheduling. That's where the monitor pattern comes in.
Durable Functions allows for flexible scheduling intervals, lifetime management, and the creation of multiple monitor processes from a single orchestration function. One use case for this functionality might be to create watchers for stock price changes that complete once a certain threshold is met.
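For comparison, a plain TimerTrigger function scheduled with a CRON expression might look like the following sketch; the schedule and body are illustrative. The Durable Functions version of the stock watcher follows.

```csharp
// Sketch: a timer-triggered function that runs every two minutes.
// Useful for fixed schedules, but the interval can't change at runtime
// and each invocation is independent (no per-watcher state).
[FunctionName("CheckStockPriceTimer")]
public static void CheckStockPriceTimer(
    [TimerTrigger("0 */2 * * * *")] TimerInfo timer,
    TraceWriter log)
{
    log.Info($"Checking stock prices at {DateTime.UtcNow:O}");
}
```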
[FunctionName("CheckStockPrice")]
public static async Task CheckStockPrice(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    StockWatcherInfo stockInfo = context.GetInput<StockWatcherInfo>();
    const int checkIntervalSeconds = 120;
    StockPrice initialStockPrice = null;
    DateTime fireAt;
    DateTime exitTime = context.CurrentUtcDateTime.Add(stockInfo.TimeLimit);

    while (context.CurrentUtcDateTime < exitTime)
    {
        // "GetStockPrice" is an activity that fetches the current price
        // (implementation not shown).
        StockPrice currentStockPrice =
            await context.CallActivityAsync<StockPrice>("GetStockPrice", stockInfo);

        if (initialStockPrice == null)
        {
            initialStockPrice = currentStockPrice;
            fireAt = context.CurrentUtcDateTime.AddSeconds(checkIntervalSeconds);
            await context.CreateTimer(fireAt, CancellationToken.None);
            continue;
        }

        // Compare currentStockPrice to initialStockPrice and exit once the
        // desired threshold is met; otherwise sleep until the next check
        // (comparison logic omitted).
        fireAt = context.CurrentUtcDateTime.AddSeconds(checkIntervalSeconds);
        await context.CreateTimer(fireAt, CancellationToken.None);
    }
}
DurableOrchestrationContext’s CreateTimer method sets up the schedule for the next invocation of the loop to check for stock price changes. DurableOrchestrationContext also has a CurrentUtcDateTime property, a replay-safe alternative to DateTime.UtcNow that returns the same value each time a given orchestration step is replayed.
Recommended resources
• Azure Durable Functions
• Unit Testing in .NET Core and .NET Standard
Big data processing
This example uses serverless to do a map/reduce operation on a big data set. It determines the average speed of New York Yellow taxi trips per day in 2017.
Customer reviews
This sample showcases the new Azure Functions tooling for C# Class Libraries in Visual Studio. Create
a website where customers submit product reviews that are stored in Azure storage blobs and
CosmosDB. Add an Azure Function to perform automated moderation of the customer reviews using
Azure Cognitive Services. Use an Azure storage queue to decouple the website from the function.
File processing and validation using Azure Functions, Logic Apps, and Durable Functions
In-editor game telemetry visualization
An example of how a developer could implement an in-editor data visualization solution for their game. In fact, an Unreal Engine 4 plugin and a Unity plugin were developed using this sample as their backend. The service component is game engine agnostic.
GraphQL
Create a serverless function that exposes a GraphQL API.
IoT Reliable Edge Relay
This sample implements a new communication protocol to enable reliable upstream communication from IoT devices. It automates data gap detection and backfill.
Serverless Microservices reference architecture
A reference architecture that walks you through the decision-making process involved in designing, developing, and delivering the Rideshare by Relecloud application (a fictitious company). It includes hands-on instructions for configuring and deploying all of the architecture's components.
Produce and Consume messages through Service Bus, Event Hubs, and Storage Queues with Azure
Functions
Recommended resources
• Azure Functions on Linux
• Big Data Processing: Serverless MapReduce on Azure
• Create serverless applications
• Customer Reviews App with Cognitive Services
• File processing and validation using Azure Functions, Logic Apps, and Durable Functions
• Implementing a simple Azure Function with a Xamarin.Forms client
• In-editor game telemetry visualization
• IoT Reliable Edge Relay
• Produce and Consume messages through Service Bus, Event Hubs, and Storage Queues with
Azure Functions
• Run Console Apps on Azure Functions
• Serverless functions for GraphQL
• Serverless Microservices reference architecture
Benefits of using serverless. Serverless solutions provide the important benefit of cost savings
because serverless is implemented in a pay-per-use model. Serverless makes it possible to
independently scale, test, and deploy individual components of your application. Serverless is uniquely
suited to implement microservices architectures and integrates fully into a DevOps pipeline.
Code as a unit of deployment. Serverless abstracts the hardware, OS, and runtime away from the
application. Serverless enables focusing on business logic in code as the unit of deployment.
Triggers and bindings. Serverless eases integration with storage, APIs, and other cloud resources.
Azure Functions provides triggers to execute code and bindings to interact with resources.
Microservices. The microservices architecture is becoming the preferred approach for distributed and
large or complex mission-critical applications that are based on multiple independent subsystems in
the form of autonomous services. In a microservice-based architecture, the application is built as a
collection of services that can be developed, tested, versioned, deployed, and scaled independently.
Serverless is an architecture well-suited for building these services.
Serverless platforms. Serverless isn’t just about the code. Platforms that support serverless
architectures include serverless workflows and orchestration, serverless messaging and event services,
and serverless databases.
Serverless as a tool, not the toolbox. Serverless is not the exclusive solution to application
architecture. It is a tool that can be leveraged as part of a hybrid application that may contain
traditional tiers, monolith back ends, and containers. Serverless can be used to enhance existing
solutions and is not an all-or-nothing approach to application development.
51 CHAPTER 6 | Conclusion