The New Full-Stack Dev

There was a time when calling oneself a full-stack dev was met with confused faces. It was a fairly new idea. No one quite understood what it meant or whether it was even real. It was sort of like gluten and people claiming a gluten allergy.

Years later, both couldn’t be more real. In the early days, being a full-stack dev meant you could write the services powering the UI (aka the backend), as well as some really cool jQuery selectors (like $("#someDiv .items:nth-child(2)"); I’ll let you figure out what that means), some HTML, and some CSS. If you knew jQuery, you were the dev to go to at work.

The full-stack developer movement couldn’t have started without JavaScript templating, Backbone.js, and later Sass. Soon after those were introduced, a lot more developers claimed to be full-stack developers, a trend bolstered by the release of Twitter’s Bootstrap. Suddenly, there was an explosion in the number of developers who could comfortably write both the UI and the backend services. The gates to UI development were knocked down hard, and there was a hot new frontend framework seemingly every week, from Backbone.js (in my opinion, the OG of bringing the Model/View/Collection paradigm to frontend development) to the big three of today: React, Angular, and Vue.

A new framework was announced on JS Weekly!? SWIIITCH!!

Web Developer

From there, it was a natural progression for developers to tackle writing mobile apps with PhoneGap, which was later renamed Apache Cordova. It is arguably one of the most contested ways to write a mobile app, but it was nonetheless a way for a web developer to try their hand at mobile development. To this day, there are numerous frameworks that promise “write once, run anywhere” semantics: Xamarin, Flutter, React Native, NativeScript, and Ionic Framework are all popular frameworks in that space. Regardless of what you and I believe the best way to build a mobile app to be, it was yet another skill a full-stack developer could add to their skillset.

Other Developments

Back when cloud services were first introduced, almost no one cared how you got to “the cloud”, so long as you were “on the cloud”. Scripting languages became uber-popular in the ops world; knowledge of Bash and PowerShell was (and probably still is) indispensable. Tools like Chef and Puppet invigorated the infrastructure space with their ideas, and there are probably more tools that I have never even heard of.

All this while, every major cloud provider was cooking up its own template-based toolset for deploying services to its cloud. Surely all of the great minds that came up with these amazing ways to build software for the cloud could have seen the problem a mile away, right? Nope. AWS has CloudFormation templates, Azure has ARM templates, and Google has…uh, I don’t even know and I don’t want to. This is a clear and present problem, and it is what has kept the infrastructure space out of reach for the average developer who just wants to deploy a simple service to the cloud.

Redefining Full-Stack Dev

With HashiCorp’s Terraform, this changed. Infrastructure “gurus” saw the need for a system that would let them deploy repeatable, predictable infrastructure to the cloud. This works really well if it is what you want. But what if you don’t want to learn a new DSL?

As you may have noticed at the beginning of this post, every innovative piece of technology introduced in the developer ecosystem involved taking something accessible only to a niche of developers and turning it into a programmable thing. HTML became JavaScript templates; CSS was too brittle, so Sass was created, bringing general-purpose programming language paradigms to CSS (when was the last time you wrote raw CSS, rather than the SCSS you know today?); mobile apps could be built with frontend frameworks. At each stage, adoption of the respective technology went through the roof.

We have been going about cloud infrastructure tooling the wrong way all this time.

Allowing developers to do what they do best, write software, will bring infrastructure to the masses. This is why tools like Pulumi will disrupt this space. The movement has begun: Pulumi brings programmability, the “coding” aspect, to infrastructure. For the first time, it really feels like I can truly program the cloud.

That is not to say the tools created so far have no place in building infrastructure. Quite the opposite; there is a need for them. For instance, not everyone uses the cloud the way the cloud providers intend. Many organizations are tied to legacy on-premises systems, leading to so-called “hybrid environments”. Whatever the setup, there is a place for each of those tools.

And for those of us building something new for the cloud, there’s Pulumi.

So let me define the new full-stack dev: a developer who can work on all tiers of an application, including the infrastructure.

Part 2: Building Home Automation Recipes with Pulumi

Pulumi

In the previous post, I walked through a home automation recipe and showed you how easy it is to wire up an existing automation platform with your own custom solution running in the cloud. Let’s take a look at how to deploy that on Azure using…code, not templates.

Getting Ready

Project Creation

Create an empty directory somewhere on your local disk. You can init a git repo as well if you’d like, but it isn’t necessary for using Pulumi in this project. If you are looking to use Pulumi at work, though, you should definitely use an SCM of some kind.

Open your favorite terminal for your OS, cd to your newly created directory, and run pulumi new (assuming pulumi is on your PATH).

This will show you a few templates that you can use to create a project. You don’t need to use any of them; in fact, you don’t even have to run pulumi new. You can do all of this manually if you prefer, but the template is certainly the easiest way to get started. Let’s select the azure-typescript template.

Run pulumi new in a new empty directory.

In the screencast snippet above, I accepted the defaults for everything; feel free to enter your own values. You can see that the screencast ends with Pulumi installing the dependencies. Pulumi creates a minimal TypeScript app with the required npm dependencies for Azure and TypeScript (since I chose the azure-typescript template).

Imports And Config

import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";
import * as path from "path";

Let’s take a look at the imports at the top of the index.ts file. All Pulumi npm packages live under the @pulumi scope.

@pulumi/pulumi is the core Pulumi SDK, providing things like the configuration class and the programming model constructs for components and custom resources. It also contains the types for inputs and outputs, and some helpers for working with both of those.

@pulumi/azure is the Azure-specific SDK for creating Azure resources.

path is the standard Node.js module.

import { buildFunctionsProject} from "./projectBuilder";

projectBuilder is a TypeScript file that exports just one function, buildFunctionsProject.

const namePrefix = "grge-mon";
const config = new pulumi.Config();

The Config class is used to retrieve config key/values stored in the Pulumi.<stack_name>.yaml file. Learn more here.

const twilioAccountToken = config.requireSecret("twilioAccountToken");

This is a really cool part. You can add secrets to your config and retrieve them easily in your code, and Pulumi will track the value of this variable as a secret. You add secrets to your stack config using pulumi config set --secret <key> <value>. Learn more here.

buildFunctionsProject(path.join("..", "GarageDoorMonitor"));

As mentioned before, buildFunctionsProject is a function exported from another file. It builds our .NET Core Functions project, as sketched below.
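
Here is a minimal sketch of what such a helper might look like; the actual projectBuilder.ts isn’t shown in this post, so treat this as an illustration. It shells out to dotnet publish using Node’s child_process module (which, as noted in the closing notes, is what I used), producing the publish folder that the Function App archive later points at.

// projectBuilder.ts (illustrative sketch, not the original file)
import { execSync } from "child_process";
import * as path from "path";

export function buildFunctionsProject(projectPath: string): void {
    const fullPath = path.resolve(projectPath);
    console.log(`Building .NET Core Functions project at ${fullPath}...`);
    // Publish the Functions project; the output lands under bin/Debug/<framework>/publish,
    // which is the folder the FileArchive for the Function App points at.
    execSync("dotnet publish", { cwd: fullPath, stdio: "inherit" });
}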

Creating The Infrastructure

Alright. Let’s create some Azure resources.

const resourceGroup = new azure.core.ResourceGroup(`${namePrefix}-group`);

First, we will need a resource group to put our resources into. This is an Azure construct, not a Pulumi-specific thing. If you have worked with Azure, everything you know about it still applies when you use a general-purpose programming language with Pulumi.

Azure KeyVault

const kv = new azure.keyvault.KeyVault(`${namePrefix}-vault`, {
    resourceGroupName: resourceGroup.name,
    skuName: "standard",
    tenantId: azure.config.tenantId!,
    accessPolicies: [{
        tenantId: azure.config.tenantId!,
        // The current principal has to be granted permissions to Key Vault so that it can actually add and then remove
        // secrets to/from the Key Vault. Otherwise, 'pulumi up' and 'pulumi destroy' operations will fail.
        //
        // NOTE: This object ID value is NOT what you see in the Azure AD's App Registration screen.
        // Run `az ad sp show` from the Azure CLI to list the correct Object ID to use here.
        objectId: "your-SP-object-ID",
        secretPermissions: ["delete", "get", "list", "set"],
    }],
});

We create the KeyVault resource using the new operator. This looks a lot like the properties in an ARM template for creating a Key Vault, except here you get the advantage of strongly-typed arguments, which makes it really easy to specify values without second-guessing what a property expects.

Note: A KeyVault resource in Azure uses access policies to restrict who can administer it. This means that if you use a service principal (or your own personal account) to run the Pulumi app, it will need access to the Key Vault to add/remove secrets, keys, and certificates. This is why we specify the object ID of that account in the access policies initially.

Adding a secret

const twilioSecret = new azure.keyvault.Secret(`${namePrefix}-twil`, {
    keyVaultId: kv.id,
    value: twilioAccountToken,
});

Let’s add a secret to the KeyVault. Notice how we specified the KeyVault to which the secret should be added by simply referencing the variable kv from the previous step. That’s how you would normally pass values to anything that depends on a value from another object, right? But why am I calling this out like it’s a big deal? Remember that this is no ordinary app. We are dealing with infrastructure resources here. These are not just some variables with values stored in memory.

These variables represent actual resources on Azure. This also means that while the KeyVault is still being created, a secret cannot be added to it. So how does Pulumi know when to extract the id property from it? The answer is resource ordering. Pulumi knows that the resource represented by kv needs to finish creating before a new Secret resource is added to it. This is similar to how you would specify dependsOn in an ARM template to tell ARM how to order your resources and to flow outputs of one resource’s creation to another as an input. In Pulumi, this happens automatically as you just go about writing regular TypeScript code.

String interpolation with infrastructure resource outputs

const twilioSecretUri = pulumi.interpolate`${twilioSecret.vaultUri}secrets/${twilioSecret.name}/${twilioSecret.version}`;

String interpolation in modern JavaScript (and TypeScript) is achieved by enclosing a string in backticks and using ${} to insert a variable. But here we are dealing with special resources: the values used in the string are not (yet) available, so the string cannot be evaluated eagerly, or we would get undefinedsecrets/undefined/undefined as the value of twilioSecretUri. This is why the Pulumi SDK provides pulumi.interpolate. Use it with the standard JS interpolation syntax, and Pulumi will wait for the underlying resource outputs to become available before evaluating the string.
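
For reference, pulumi.interpolate is essentially shorthand for combining outputs with pulumi.all(...).apply(...). The line above could also be written like this (a sketch using the same twilioSecret resource):

const twilioSecretUriVerbose = pulumi
    .all([twilioSecret.vaultUri, twilioSecret.name, twilioSecret.version])
    .apply(([vaultUri, name, version]) => `${vaultUri}secrets/${name}/${version}`);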

Creating an App Insights dashboard is a piece of cake

const appInsights = new azure.appinsights.Insights(`${namePrefix}-ai`, {
    applicationType: "web",
    resourceGroupName: resourceGroup.name,
});

There is not much to say here (and that’s a good thing!). The code is pretty self-explanatory.

Function App

const durableFunctionApp = new azure.appservice.ArchiveFunctionApp(`${namePrefix}-funcs`, {
    resourceGroup,
    archive: new pulumi.asset.FileArchive("../GarageDoorMonitor/bin/Debug/netcoreapp2.1/publish"),
    appSettings: {
        "runtime": "dotnet",
        "TwilioAccountToken": pulumi.interpolate`@Microsoft.KeyVault(SecretUri=${twilioSecretUri})`,
        "APPINSIGHTS_INSTRUMENTATIONKEY": pulumi.interpolate`${appInsights.instrumentationKey}`,
        "TimerDelayMinutes": config.getNumber("timerDelayMinutes") || 2,
    },
    httpsOnly: true,
    identity: {
        type: "SystemAssigned"
    }
});

// Now that the app is created, update the access policies of the keyvault and
// grant the principalId of the function app access to the vault.
const principalId = durableFunctionApp.functionApp.identity.apply(id => id.principalId);

Pulumi provides higher-level helpers for well-known resources such as Azure Functions to ease packaging and deployment. Normally, you would have to package your code as a zip and deploy it to the Function App, either using a built-in task extension in Azure DevOps or by manually zipping up your functions and deploying them directly to Azure. With Pulumi this is very easy, and because Pulumi tracks every resource, it will only trigger an update to the code package if your functions code actually changed. Otherwise, nothing is changed.

KeyVault access

// Grant App Service access to KV secrets
const appAccessPolicy = new azure.keyvault.AccessPolicy(`${namePrefix}-app-policy`, {
   keyVaultId: kv.id,
   tenantId: azure.config.tenantId!,
   objectId: principalId,
   secretPermissions: ["get"],
}, { dependsOn: durableFunctionApp });

Our Function App needs access to the KeyVault to read the secret, so let’s create a new access policy and attach it to the KeyVault.

Outputs

export const webhookUrl = durableFunctionApp.endpoint;

At a basic level, think of your Pulumi app as a program that creates a number of resources, of which typically a few outputs are of interest to your application code: for example, the URL of an API service, the IP address of a load balancer, or the hostname of a managed Cosmos DB instance. To create an output, you simply export it.

You can retrieve outputs from your stack later using the Pulumi CLI by running pulumi stack output <outputName>, where <outputName> is the name of the variable that you exported.

Outputs play an important role in making your infrastructure modular. In this post we only used a single stack, but in a more practical scenario you may be working with multiple teams, each with its own stack. If you have a dependency on the output of one of those stacks, you can consume it in your own stack using a StackReference, as sketched below.
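
As a hedged sketch (the stack name and output key here are hypothetical), consuming another stack’s output looks something like this:

// Reference another team's stack by its fully qualified name (org/project/stack).
const networkingStack = new pulumi.StackReference("myorg/networking/prod");
// Use one of its exported outputs as an input to resources in this stack.
const vnetId = networkingStack.getOutput("vnetId");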

Pulumi Console And The Managed Backend

Just like the Azure Portal, Pulumi has a Console UI. The Console gives you a detailed view of all the resources in each stack, their outputs, the timeline of events, activities, and so on.

Pulumi tracks the state of your resources and shows you diffs against the current state as you make changes to your infrastructure. At the beginning of this post, I said I would explain why you should sign up for an account. You don’t need an account on the Pulumi Console to get state tracking and diffs. However, if you want to keep that state safe and highly available, and to make sure concurrent updates are not performed on your infrastructure (which you will need when developing for production), you should use the Pulumi-managed backend and let it take care of all of that for you. The alternative is to manage all of that on your own. Learn more about that here.

Closing Notes

  • It is important to note that what you saw above is not a Pulumi flavor of TypeScript. It is just TypeScript, the same TypeScript you would use to develop Angular apps or whatever else you use it for these days. This also means everything the language provides is available to you. There are no restrictions, other than what the Pulumi resource provider imposes on your app.

  • Review the Pulumi Programming Model page for some advanced concepts, particularly Components.

  • For JS- and TS-based Pulumi apps, the runtime is Node.js, so each TS file is transpiled to JS and executed inside the Node runtime just like any other Node app. This also means you can use just about any Node-compatible npm package, as well as the built-in Node modules. In fact, I used the child_process package to execute the dotnet publish command.

  • If you are planning to use Pulumi on Azure DevOps, check out Pulumi’s free Task Extension for build and release definitions.

Static sites and Functions

This is the fifth and last part of a series of posts I am writing about building a static site with VueJS. In this post, I will walk through how you could use Functions-as-a-Service for your next project…or your current one, too.

Static sites typically don’t get the infrastructure attention that other applications do. Many developers still think that SPAs, whether static sites or not, need to be hosted on always-running server infrastructure. That is unnecessary. With the advent of service workers and the Progressive Web App movement, you really don’t need a server running all of the time.

Most devs are familiar with using a CMS like WordPress and buying a domain to serve a website, but most websites don’t need all of that infrastructure. The price is a modest $4/mo according to their pricing page, though you only get the basics with that paid plan. Not a big deal. But if you want SEO, custom analytics, and so on, you are looking at the next pricing tier, or perhaps the most expensive one at $25/mo if you want a few more knobs and levers to turn.

This is the architecture I used for Net Your Problem.

Fig.1 – A simple cloud architecture for SPAs.

I have automated the part where I build the VueJS app and upload it to the Azure Storage account using a PowerShell script (see this gist), which is purely based on the AzureRM PS module.

Great. Now, let’s talk about how these infrastructure systems talk to each other, to cohesively power your next project.

The inner details

A CDN works by aggressively caching static resources (JS, CSS, HTML, PNGs, etc.). You can also hint to a CDN service that it should cache additional MIME types by setting the Cache-Control header. A typical CDN service has “edge” nodes all over the world; when you provision a CDN service from any of the cloud providers, you are not choosing edge nodes in a particular region. CDN services are globally distributed by default. Each CDN service does its billing differently. For example, the Azure CDN service offers a tiered pricing model based on a group of countries in each tier, so traffic from different countries is billed at different rates, based on the amount of data transferred from the CDN to the clients (browsers).
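
As an illustration of that Cache-Control hint, here is a sketch that sets the header on a blob as it is uploaded, so the CDN can honor it. This uses the @azure/storage-blob package rather than the AzureRM PowerShell script mentioned earlier, and the connection string, container, and file names are placeholders.

import { BlobServiceClient } from "@azure/storage-blob";

async function uploadWithCacheControl(): Promise<void> {
    // Connect to the storage account that serves the built static assets.
    const service = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING!);
    const container = service.getContainerClient("site");
    const blob = container.getBlockBlobClient("js/app.js");

    await blob.uploadFile("./dist/js/app.js", {
        blobHTTPHeaders: {
            blobContentType: "application/javascript",
            // Hint to the CDN (and browsers) that this asset can be cached for a year.
            blobCacheControl: "public, max-age=31536000",
        },
    });
}

uploadWithCacheControl().catch(err => console.error(err));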

As shown in fig.1, the CDN is connected to the Function App, meaning that the CDN will source the static assets from the Function App, and the Function App in turn is connected to a storage account. Technically, these can all be services from any of the three major cloud providers (Azure, AWS, GCP). It doesn’t make sense, though, to create individual services in multiple clouds, since you would be charged more for inter-data-center data transfer. So it is best to co-locate all of these, except of course the CDN, which is always global, regardless of whose CDN service you end up using.

The connection between the CDN and the Function App is pretty simple; it is just a matter of specifying the origin settings for the CDN. The connection between the Function App and the storage account requires a little more than specifying a URL: we have to detect the incoming request at the Function App and proxy it to the storage account, letting the storage account serve the static asset. Essentially, the Function App serves as a reverse proxy for some URL patterns, and for others as a service that may or may not return a response, depending on whether there is an actual function that can handle the request.
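
To make that concrete, here is a minimal sketch (not the actual code behind this site) of an HTTP-triggered Azure Function on the Node runtime that fetches a requested asset from the storage account and returns it. The storage URL and route parameter are hypothetical, and Azure Functions Proxies is another way to achieve the same effect.

import { AzureFunction, Context, HttpRequest } from "@azure/functions";
import * as https from "https";

// Hypothetical storage account and container holding the built static site.
const storageBase = "https://mystorageaccount.blob.core.windows.net/site";

function fetchBlob(url: string): Promise<{ status: number; contentType: string; body: Buffer }> {
    return new Promise((resolve, reject) => {
        https.get(url, res => {
            const chunks: Buffer[] = [];
            res.on("data", chunk => chunks.push(chunk));
            res.on("end", () => resolve({
                status: res.statusCode || 502,
                contentType: (res.headers["content-type"] as string) || "text/html",
                body: Buffer.concat(chunks),
            }));
        }).on("error", reject);
    });
}

const serveStatic: AzureFunction = async (context: Context, req: HttpRequest): Promise<void> => {
    // Anything that isn't handled by a real function is treated as a static
    // asset and fetched from the storage account on the caller's behalf.
    const assetPath = (req.params && req.params.path) || "index.html";
    const blob = await fetchBlob(`${storageBase}/${assetPath}`);

    context.res = {
        status: blob.status,
        headers: { "Content-Type": blob.contentType },
        body: blob.body,
        isRaw: true,
    };
};

export default serveStatic;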

Bonus! Let’s talk about automation

Let’s introduce a CI/CD pipeline into the mix. Azure has a really good all-in-one Ops platform called Azure DevOps, previously known as Visual Studio Team Services (VSTS), and even before that, Visual Studio Online. Anyway, the point of the platform, like Bitbucket or GitHub, is to have everything in one place: CI/CD pipeline integrations, release management, project planning (Kanban as well as sprint-based, whichever you are into), private/public repos, wikis, and a private feed for your NuGet (.NET), npm, or Maven packages too!

Don’t take my word for it, though. After all, I am just some random programmer on the internet. People say a lot of things. But seriously, you should check it out.

Here’s the screenshot of the pipelines page showing you the CI and PROD build pipelines for Net Your Problem.

Here’s the CI/CD pipeline in Azure DevOps.

The pipeline is simple. Run npm ci -> npm install -> npm run build -> upload to Azure Storage -> store artifacts. That’s it. I have a trigger on this pipeline that kicks off a build every time a branch is updated through a merge.

Admittedly, all of this may look like overkill. Trust me, it is not. I spend about a minute running the build and then uploading the files to Azure Storage. Then, sometimes, I have to purge the CDN cache because, well, it works too well sometimes :). Overall, I could spend anywhere between 1 and 10 minutes, on average about 5, deploying some changes. Repeat this several times while I am actively developing something, want to see how things look in dev, or want to show something to my ahem client (my girlfriend), and the time investment adds up really quickly. This setup allows me to focus on the coding, push up my changes for a quick review on the dev site, then create a release for PROD and have the approved changes go to the live site immediately. All of this is a pleasant experience for me. To my girlfriend, it makes no difference, and that’s a good thing. She just sees the dev site and the live site. That’s it.

You see, delays in development often affect our customers. When I have a pipeline that works well, my customer isn’t affected by the process; they just see what they need to see. In the end, what matters to them is whether they can see what they want and whether it works. If the process gets in their way, they simply won’t get it. To borrow the overused automobile analogy: this experience is akin to taking our car to a shop for an oil change. At the end of it, we just want to drive our car out of the shop with a new oil filter and fresh oil. We don’t care, and most of us don’t want to know, how they were able to do the oil change without an appointment. On the other hand, if the oil change took too long, we want an explanation, and all of the shop’s fancy equipment and ISO certifications wouldn’t save them from our negative experience.