Part 2: Building Home Automation Recipes with Pulumi


In the previous post, I walked through a home automation recipe and showed how easy it is to wire up an existing automation platform with your own custom solution running in the cloud. Let’s take a look at how to deploy that solution on Azure using code, not templates.

Getting Ready

Project Creation

Create an empty directory somewhere on your local disk. You can initialize a git repo as well if you’d like, but it isn’t necessary for using Pulumi in this project. If you are looking to use Pulumi at work, though, you should definitely use an SCM of some kind.

Open your favorite terminal for your OS, cd to your newly created directory, and run pulumi new (assuming pulumi is on your PATH).

This will show you a few templates that you can use to create a project. You don’t need to use any of them; in fact, you don’t even have to run pulumi new at all and can set everything up manually if you prefer, but it’s certainly the easiest way to get started. Let’s select the azure-typescript template.

Run pulumi new in a new empty directory.

In the screencast snippet above I accepted the defaults for everything. Feel free to enter your own values. You can see that the screencast ends with Pulumi installing the dependencies. Pulumi will create a minimal TS-based app with the required npm dependencies for Azure and TypeScript (since I chose the azure-typescript template).

Imports And Config

import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";
import * as path from "path";

Let’s take a look at the imports at the top of the index.ts file. All Pulumi npm packages are published under the @pulumi scope.

@pulumi/pulumi is the core Pulumi SDK. It provides the configuration class and the programming-model constructs for things like components and custom resources. It also contains the types for recognizing inputs and outputs, and some helpers to work with both of those.

@pulumi/azure is the Azure-specific SDK for creating Azure resources.

path is the standard built-in Node.js module.

import { buildFunctionsProject } from "./projectBuilder";

projectBuilder is a TypeScript file that exports a single function, buildFunctionsProject.

const namePrefix = "grge-mon";
const config = new pulumi.Config();

The Config class is used to retrieve config key/value pairs stored in the Pulumi.<stack_name>.yaml file. Learn more here.
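
Config exposes typed getters: the get* variants return undefined when a key is missing, while the require* variants fail the update. As a quick illustration (the someOptionalFlag key below is hypothetical; only timerDelayMinutes and twilioAccountToken are used in this project):

const optionalFlag = config.getBoolean("someOptionalFlag");       // hypothetical key; undefined if unset
const delayMinutes = config.getNumber("timerDelayMinutes");       // optional, typed value used later for the Function App
const accountToken = config.requireSecret("twilioAccountToken");  // fails 'pulumi up' if the key is missing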

const twilioAccountToken = config.requireSecret("twilioAccountToken");

This is a really cool part. You can add secrets to your config and retrieve them easily in your code, and Pulumi will track the value of this variable as a secret. You add secrets to your stack config by running pulumi config set --secret <key> <value>. Learn more here.

buildFunctionsProject(path.join("..", "GarageDoorMonitor"));

As mentioned before, buildFunctionsProject is a function exported from another file. It builds our .NET Core Functions project.
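
The projectBuilder.ts file isn’t shown here, but (as noted in the closing notes below) it shells out to dotnet publish using Node’s built-in child_process module. A minimal sketch of what such a helper could look like; the actual implementation in the repo may differ:

import { execSync } from "child_process";

// Run 'dotnet publish' for the Functions project so that the compiled output
// is available for the FileArchive we deploy later.
export function buildFunctionsProject(projectPath: string): void {
    execSync("dotnet publish", { cwd: projectPath, stdio: "inherit" });
}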

Creating The Infrastructure

Alright. Let’s create some Azure resources.

const resourceGroup = new azure.core.ResourceGroup(`${namePrefix}-group`);

First, we will need a resource group to put our resources into. This is an Azure construct, not a Pulumi-specific thing; if you have worked with Azure before, everything you already know about it still applies when you use a programming language with Pulumi.

Azure KeyVault

const kv = new azure.keyvault.KeyVault(`${namePrefix}-vault`, {
    resourceGroupName: resourceGroup.name,
    skuName: "standard",
    tenantId: azure.config.tenantId!,
    accessPolicies: [{
        tenantId: azure.config.tenantId!,
        // The current principal has to be granted permissions to Key Vault so that it can actually add and then remove
        // secrets to/from the Key Vault. Otherwise, 'pulumi up' and 'pulumi destroy' operations will fail.
        //
        // NOTE: This object ID value is NOT what you see in the Azure AD's App Registration screen.
        // Run `az ad sp show` from the Azure CLI to list the correct Object ID to use here.
        objectId: "your-SP-object-ID",
        secretPermissions: ["delete", "get", "list", "set"],
    }],
});

We create the KeyVault resource using the new operator. This looks a lot like the properties in an ARM template for creating a KeyVault, except here you get the advantage of strongly-typed arguments, which makes it really easy to specify values without second-guessing what you need to provide to a property.

Note: A KeyVault resource in Azure uses access policies to restrict who can administer it. This also means that if you use a service principal (or your own personal account) to run the Pulumi app, that principal needs access to the KeyVault in order to add and remove secrets/keys/certificates. This is why we include the object ID of that account in the initial access policies.
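
If you would rather not hardcode the object ID, you may be able to look it up at deployment time. The sketch below assumes your version of @pulumi/azure exposes azure.core.getClientConfig() with an objectId field; verify that against your provider version before relying on it:

// A hedged sketch: look up the identity running 'pulumi up' instead of hardcoding it.
const clientConfig = azure.core.getClientConfig();
const currentPrincipalObjectId = clientConfig.then(c => c.objectId);
// ...then pass currentPrincipalObjectId as the objectId in the access policy above.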

Adding a secret

const twilioSecret = new azure.keyvault.Secret(`${namePrefix}-twil`, {
    keyVaultId: kv.id,
    value: twilioAccountToken,
});

Let’s add a secret to the KeyVault. Notice how we specified the KeyVault to which the secret should be added by simply referencing the variable kv from the previous step. That’s how you would normally pass values to anything that depends on a value from another object, right? So why am I calling this out like it’s a big deal? Remember that this is no ordinary app. We are dealing with infrastructure resources here; these are not just variables with values stored in memory.

These variables represent actual resources on Azure, which also means that a secret cannot be added to the KeyVault while it is still being created. So how does Pulumi know when the id property can be read? The answer is resource ordering: Pulumi knows that the resource represented by kv needs to finish creating before a new Secret resource is added to it. This is similar to how you would specify dependsOn in an ARM template to tell ARM how to order your resources and to flow the outputs of one resource into another as an input. In Pulumi, this happens automatically as you go about writing regular TypeScript code.
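
Pulumi infers that ordering from the kv.id reference. If you ever need an ordering that Pulumi cannot infer from inputs and outputs, you can still declare it explicitly with resource options (the access policy later in this post does exactly that); a purely illustrative sketch:

// Not needed for the secret above, since referencing kv.id already implies the
// dependency; shown only to illustrate explicit ordering via resource options.
const illustrativeSecret = new azure.keyvault.Secret(`${namePrefix}-example`, {
    keyVaultId: kv.id,
    value: "not-a-real-secret", // hypothetical value
}, { dependsOn: [kv] });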

String interpolation with infrastructure resource outputs

const twilioSecretUri = pulumi.interpolate`${twilioSecret.vaultUri}secrets/${twilioSecret.name}/${twilioSecret.version}`;

String interpolation in modern JavaScript (and TypeScript) is achieved by enclosing a string in backticks and using ${} to insert a variable. But here we are dealing with special resources: the values used in the string are not present yet, so the string cannot be evaluated right away, or we would get undefinedsecrets/undefined/undefined as the value of twilioSecretUri. This is why the Pulumi SDK provides pulumi.interpolate. Use it with the standard JS interpolation syntax, and Pulumi will wait until the resource outputs are available before evaluating the string.
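
pulumi.interpolate is a convenience over the more general apply machinery; the same URI could be built by combining the outputs explicitly, which is useful when the transformation is more involved than simple string formatting:

// Equivalent to the interpolate call above, written with pulumi.all + apply.
const twilioSecretUriAlt = pulumi
    .all([twilioSecret.vaultUri, twilioSecret.name, twilioSecret.version])
    .apply(([vaultUri, name, version]) => `${vaultUri}secrets/${name}/${version}`);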

Creating an App Insights component is a piece of cake

const appInsights = new azure.appinsights.Insights(`${namePrefix}-ai`, {
    applicationType: "web",
    resourceGroupName: resourceGroup.name,
});

There is not much to say here (and that’s a good thing!). The code is pretty self-explanatory.

Function App

const durableFunctionApp = new azure.appservice.ArchiveFunctionApp(`${namePrefix}-funcs`, {
    resourceGroup,
    archive: new pulumi.asset.FileArchive("../GarageDoorMonitor/bin/Debug/netcoreapp2.1/publish"),
    appSettings: {
        "runtime": "dotnet",
        "TwilioAccountToken": pulumi.interpolate`@Microsoft.KeyVault(SecretUri=${twilioSecretUri})`,
        "APPINSIGHTS_INSTRUMENTATIONKEY": pulumi.interpolate`${appInsights.instrumentationKey}`,
        "TimerDelayMinutes": config.getNumber("timerDelayMinutes") || 2,
    },
    httpsOnly: true,
    identity: {
        type: "SystemAssigned"
    }
});

// Now that the app is created, update the access policies of the keyvault and
// grant the principalId of the function app access to the vault.
const principalId = durableFunctionApp.functionApp.identity.apply(id => id.principalId);

Pulumi provides some higher-level helpers for well-known/popular resources such as Azure Functions to ease packaging and deployment. Normally you would have to package your code as a zip and deploy it to the Function App yourself, either with the built-in task extension in Azure DevOps or by manually zipping up your functions and deploying them directly on Azure. Doing this with Pulumi is very easy, and because Pulumi tracks every resource, it only pushes an updated code package when your functions code has actually changed; otherwise, nothing is touched.

KeyVault access

// Grant App Service access to KV secrets
const appAccessPolicy = new azure.keyvault.AccessPolicy(`${namePrefix}-app-policy`, {
    keyVaultId: kv.id,
    tenantId: azure.config.tenantId!,
    objectId: principalId,
    secretPermissions: ["get"],
}, { dependsOn: durableFunctionApp });

Our Function App needs access to the KeyVault to read the secret, so let’s create a new access policy and attach it to the KeyVault.

Outputs

export const webhookUrl = durableFunctionApp.endpoint;

At a basic level, think of your Pulumi app as something that creates several resources, but there is typically some output that is of interest to your application code: the URL of an API service, the IP address of a load balancer, the hostname of a managed Cosmos DB instance, and so on. To create an output, you simply need to export it.

You can retrieve outputs from your stack later using the Pulumi CLI by running pulumi stack output <outputName>, where <outputName> is the name of the variable that you exported.

Outputs also play an important role in making your infrastructure modular. In this post we only used a single stack, but in a more practical scenario you may be working with multiple teams, each with their own stack. If you have a dependency on the output of one of those stacks, you can consume it in your own stack using a StackReference.
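
A minimal sketch of consuming this stack’s webhookUrl output from a different Pulumi program; the myorg/garage-door-monitor/dev stack name below is hypothetical:

// Reference another stack by its fully-qualified name and read its outputs.
const garageStack = new pulumi.StackReference("myorg/garage-door-monitor/dev"); // hypothetical name
const monitorWebhookUrl = garageStack.getOutput("webhookUrl"); // an Output you can feed into other resources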

Pulumi Console And The Managed Backend

Just like the Azure Portal, Pulumi has a Console UI. The Console gives you a detailed view of all the resources in each stack, their outputs, the timeline of events, activities etc.

Pulumi tracks the state of your resources and shows you diffs from the current state as you make changes to your infrastructure. At the beginning of this post I said I would explain why you should sign up for an account. You don’t need an account on the Pulumi Console to get state tracking and diffs. However, if you want to keep that state safe and highly available, and to make sure concurrent updates are not performed on your infrastructure (which you will need when you are developing for production), you should use the Pulumi-managed backend and let it take care of all of that for you. The alternative is to manage all of that on your own. Learn more about that here.

Closing Notes

  • It is important to note that what you saw above is not a Pulumi flavor of TS. It is just TS, the same TS you would use to develop Angular apps or whatever else you use TS for these days. This also means everything the language provides is available for you to use. There are no restrictions, other than what the Pulumi resource provider imposes on your app.

  • Review the Pulumi Programming Model page for some advanced concepts, particularly Components; see the sketch after this list for a rough idea of what a component looks like.

  • For JS and TS-based Pulumi apps, the runtime is NodeJS, so each TS file is transpiled to JS and executed inside the Node runtime just like any other Node app. This also means that you can use just about any Node-compatible npm package, including the built-in Node packages. In fact, I used the child_process package to execute the dotnet publish command.

  • If you are planning to use Pulumi on Azure DevOps, check out Pulumi’s free Task Extension for build and release definitions.
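
As a rough illustration of the Components concept mentioned above, a custom component is a class that extends pulumi.ComponentResource and groups related resources under one logical parent. A minimal, hypothetical sketch (assuming the same pulumi and azure imports as index.ts):

// A hypothetical component that bundles a resource group and an App Insights
// instance under a single logical resource.
class MonitoringStack extends pulumi.ComponentResource {
    public readonly instrumentationKey: pulumi.Output<string>;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("examples:monitoring:MonitoringStack", name, {}, opts);

        const group = new azure.core.ResourceGroup(`${name}-group`, {}, { parent: this });
        const insights = new azure.appinsights.Insights(`${name}-ai`, {
            applicationType: "web",
            resourceGroupName: group.name,
        }, { parent: this });

        this.instrumentationKey = insights.instrumentationKey;
        this.registerOutputs({ instrumentationKey: this.instrumentationKey });
    }
}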

Building Home Automation Recipes with Pulumi

Using Pulumi to automate home automation recipes

Home automation is easier than ever, with a plethora of IoT-connected devices making everyday appliances in your home internet-enabled. When you write a custom piece of integration for your IoT devices, deployment to the cloud often becomes an afterthought, only to turn into a nightmare when you actually want to update it later and don’t remember where or how you deployed it. With Pulumi, you don’t have to worry about that anymore: you can develop your IoT integration as an app, and program your infrastructure as an app too.

The Garage Door Opener

Most people have an internet hub connected to their automatic garage door opener, which allows them to remotely monitor the garage door as well as open and close it using a mobile app. But what about when you forget to close it and it stays open? Neither the app nor the existing recipes on the home automation website IFTTT have a way to remind you that you left it open. To solve this problem, I first tried not to build something of my own, but instead to use Zapier, a task automation platform.

Note: The source code for this post is available here.

The First Attempt

My first attempt involved using Zapier. It would have worked if there were a way to update a state while waiting for a timer to fire. I used IFTTT to connect the myQ service and fire a webhook request each time the garage door opened or closed. The webhook receiver was a Zapier webhook “catch” action, which I then connected to a timer delay before sending me a text message via Twilio. It mostly worked, except that if I closed the garage door before the timer fired, there was no way for me to update the state and cancel the text message.

Here’s the “zap” I ended up creating:

A Zap on Zapier using built-in actions.

Durable Functions on Azure Functions

Durable Functions is an extension of the already popular Azure Functions platform. This means you can write functions with an external trigger (HTTP, Queue, etc.) and have them kick off an orchestration. Each orchestration instance is automatically tracked by the platform. Check out the API reference to see what you can control about an orchestration instance.

Durable Function Types

There are other durable function types; learn more about them here. The following are just the types used in this project.

Orchestration Functions

Each function has a trigger type that identifies how that function can be triggered, and orchestration functions are no different. Orchestration functions typically don’t do any work themselves other than, you guessed it, orchestrating the other functions that do the work.

Activity Functions

Activity functions are responsible for most of the work in an orchestration. You can make HTTP calls, call other activity functions etc.

Entity Functions

Entity functions are only available as part of the Durable Functions 2.x beta, which is in public preview.

Entity functions allow you to represent your orchestration instance with a state. It is up to you whether each orchestration instance has its own entity or your state is a singleton; this is controlled by the way the entities are identified and persisted. Each entity is made up of two components:

  • An entity name: a name that identifies the type of the entity (for example, “Counter”).
  • An entity key: a string that uniquely identifies the entity among all other entities of the same name (for example, a GUID).

IFTTT + Azure Durable Functions + Twilio + Pulumi

This is the high-level view of the solution I finally ended up with.

  • IFTTT receives signals from the garage door opener.
  • IFTTT then calls the function app.
  • The function app waits for a couple of minutes and, if the door still isn’t closed by then, sends a text message using Twilio.

Function App

The only external trigger in the function app is the HTTP trigger used in this function. An orchestration instance is created only through the orchestration client, which is injected into the HTTP-triggered function as a parameter.

[FunctionName("Main")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext ctx,
    ILogger log)
{
    var delay = Environment.GetEnvironmentVariable("TimerDelayMinutes");
    var delayTimeSpan = TimeSpan.FromMinutes(Convert.ToInt32(delay));
    DateTime timer = ctx.CurrentUtcDateTime.Add(delayTimeSpan);
    log.LogInformation($"Setting timer to expire at {timer.ToLocalTime().ToString()}");
    await ctx.CreateTimer(timer, CancellationToken.None);

    try
    {
        // The use of a critical block, though optional, is recommended here.
        // Updates to durable entities are serial, by default.
        // Having the lock ensures that the entity state we are reading is guaranteed to
        // be the current value of the entity.
        using (await ctx.LockAsync(EntityId))
        {
            var currentState = await ctx.CallEntityAsync<string>(EntityId, "read", null);
            log.LogInformation($"Current state is {currentState}.");
            // If the door is closed already, then don't do anything.
            if (currentState.ToLowerInvariant() == "closed")
            {
                log.LogInformation("Looks like the door was already closed. Will skip sending text message.");
                return;
            }
            await ctx.CallActivityAsync("SendTextMessage", null);
        }
    }
    catch (LockingRulesViolationException ex)
    {
        log.LogError(ex, "Failed to lock/call the entity.");
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Unexpected exception occurred.");
    }
}

Deploying the Infrastructure using Pulumi

We will use Pulumi to deploy our function app. The Pulumi app creates the function app along with the Key Vault containing the Twilio account token necessary for the API call to send a text message. For more information, see the README file in the infrastructure folder of the source repo.

Once the Pulumi app is deployed, you can get the URL for your function app, in order to complete the IFTTT Applet creation in the next step.

IFTTT Applets

IFTTT allows you to create custom applets, which basically means creating your own recipe of “this” and “that”. To create a new applet, click your avatar in the top-right corner on https://ifttt.com, and click Create.

Click + This and choose the service called myQ. Most of the garage door openers here in the USA are made by the Chamberlain Group, so you are most likely using one of those, and all of them work with the myQ internet gateway. Your alternative would be to buy a myQ Smart Hub.

Click + That and search for Webhook to select it. You will need the URL of the Function App that was deployed using Pulumi. You could get this URL by navigating to https://portal.azure.com, but since the infrastructure was deployed using Pulumi, we can easily fetch it as a stack output by running pulumi stack output webhookUrl in the infrastructure folder. We can now complete the Webhook action’s configuration in IFTTT.

Note: Since the function app is exposed to the internet, we don’t want just anyone to be able to call it. Instead, we use the built-in function app authorization keys to allow only IFTTT to invoke it. Any caller without the function key will receive a 401 Unauthorized error.

Completing the IFTTT applet creation for webhook action.

Twilio

In order to send a text message, create an account on Twilio and purchase a Programmable SMS number. Your account SID and token can be found on the dashboard page of the Twilio Console or on the Settings page under the API Credentials section.

Final Notes

A few important things to note:

  • Entity functions (part of Durable Functions 2.x) are a preview feature, though the Durable Functions extension (1.x) itself is GA.
  • The KeyVault in the infrastructure is not strictly necessary for a project like this, but it is very easy to create one with Pulumi, and with Azure’s Managed Identity it is even easier to configure application access to secrets.
  • To learn more about security best practices on Azure, read this excellent post by Mikhail Shilkov.

In the next post, we will take a closer look at the Pulumi app used to deploy the Azure Function App.