The New Full-Stack Dev

There was a time when calling oneself a full-stack dev was met with confused faces. It was a fairly new idea. No one really understood what it meant or whether it was even real. It was sort of like gluten and people claiming a gluten allergy.

Years later, both couldn’t be more real. In the early days, being a full-stack dev meant you could write the services powering the UI (aka the backend), as well as some really cool jQuery selectors (like $("#someDiv .items:nth-child(2)") — I’ll let you figure out what that means), some HTML, and some CSS. If you knew jQuery, you were the dev to go to at work.

The full-stack developer movement couldn’t have started without JavaScript templating, Backbone.js, and later Sass. Soon after those were introduced, a lot more developers claimed to be full-stack developers, a trend bolstered by the creation of Twitter’s Bootstrap. Suddenly, there was an explosion in the number of developers who could comfortably write both the UI and the backend services. The gates to UI development were knocked down pretty hard, and there was probably a hot new frontend framework every week: from Backbone.js (the OG of bringing the Model/View/Collection paradigm to frontend development, in my opinion) to today’s big three, React, Angular, and Vue.

A new framework was announced on JS Weekly!? SWIIITCH!!

Web Developer

From there, it was a natural progression for developers to try their hand at writing mobile apps with PhoneGap, later donated to Apache and renamed Apache Cordova, and arguably one of the most contested ways to write a mobile app. It was nonetheless a way for a web developer to get into mobile development too. To this day, numerous frameworks promise “write-once-run-anywhere” semantics: Xamarin, Flutter, React Native, NativeScript, and Ionic are all popular frameworks in that space. Regardless of what you and I believe the best way to build a mobile app to be, it was yet another skill a full-stack developer could add to their skillset.

Other Developments

Back when cloud services were first introduced, almost no one cared how you got to “the cloud”, as long as you were “on the cloud”. Scripting languages became uber-popular in the ops area. Knowledge of Bash and PowerShell was (and probably still is) indispensable. Tools like Chef and Puppet invigorated the infrastructure space with their ideas. And there are probably more tools that I have never even heard of.

All this while, every major cloud provider was cooking up its own template-based toolset to deploy services to its respective cloud. Surely all of the great minds that thought of these amazing ways to build software for the cloud could have seen the problem a mile away, right? Nope. AWS has CloudFormation templates, Azure has ARM templates, and Google has…uh, I don’t even know and I don’t want to. This is a clear and present problem. It is what kept the infrastructure space out of the general reach of your average developer who just wants to deploy a simple service to the cloud.

Redefining Full-Stack Dev

With HashiCorp’s Terraform, this changed. Infrastructure “gurus” started to see the need for a system that would let them deploy repeatable, predictable infrastructure to the cloud. This works really well if that is what you want. But what if you don’t want to learn a new DSL?

If you noticed, every innovative piece of technology introduced earlier in this post involved taking something that was accessible only to a niche of developers and turning it into a programmable thing. HTML became JavaScript templates; CSS was too brittle, so Sass was created, bringing general-purpose programming paradigms to CSS (when was the last time you wrote raw CSS, not the SCSS you know today?); mobile apps could be built with frontend frameworks. At each stage, adoption of the respective technology went through the roof.

We have been going about trying to solve the cloud infrastructure tooling the wrong way all this time.

Allowing developers to do what they do best, write software, will certainly bring infrastructure to the masses. This is why tools like Pulumi will disrupt this space. The movement to bring infrastructure to the masses has begun. Pulumi brings the programmability, the “coding” aspect, to infrastructure. For the first time, it really feels like I can truly program the cloud.
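To make that concrete, here is a minimal sketch (not from any real project) of what programming the cloud looks like in Pulumi with TypeScript. The resource names are made up; the point is that plain loops and functions, not a DSL, drive the infrastructure.

import * as azure from "@pulumi/azure";

const resourceGroup = new azure.core.ResourceGroup("demo-rg");

const account = new azure.storage.Account("demosite", {
    resourceGroupName: resourceGroup.name,
    accountTier: "Standard",
    accountReplicationType: "LRS",
});

// A plain JavaScript loop creating one container per name, something
// template-based tools need special syntax for.
const containers = ["images", "videos", "docs"].map(name =>
    new azure.storage.Container(name, {
        storageAccountName: account.name,
        containerAccessType: "private",
    }));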

That is not to say the tools created so far have no place in building infrastructure. Quite the opposite; I think there is a need for these tools. For instance, not everyone is using the cloud the way cloud providers want us to. Many organizations are tied to legacy on-premise systems, leading to so-called “Hybrid Environments”. Whatever the setup, there is a place for each of those tools.

And for those of us building something new for the cloud, there’s Pulumi.

So let me define the new full-stack dev: a developer who can work on all tiers of an application, including the infrastructure.

Infrastructure with .NET Core 3.0

Using Pulumi to automate home automation recipes

In a previous two-part blog series, I wrote about building a home automation recipe and automating its deployment with a Pulumi app, written in TypeScript. Now that Pulumi supports .NET Core 3.0 (with Pulumi CLI 1.5.0+), I ported the infrastructure over to C# and added it as a C# project to my existing Solution.

A C# app and a C# infrastructure app. Nice. It was worth the wait.

What do you think would have happened when I ran pulumi up with the C# version of the same infrastructure? You wouldn’t expect any changes, since the infrastructure itself didn’t change, just the language it was written in, right? Well, that’s exactly what happened (or didn’t happen, depending on how you look at it): nothing changed when I ran pulumi preview after I finished porting the application.

That, my friends, is the power of programming languages…and proper infrastructure as code.

Building Home Automation Recipes with Pulumi

Using Pulumi to automate home automation recipes

Home automation is easier than ever, with a plethora of IoT-connected devices making everyday appliances in your home internet-enabled. When you write a custom piece of integration for your IoT devices, deployment to the cloud often becomes an afterthought, only to become a nightmare when you actually want to update it later and you don’t remember where or how you deployed it. With Pulumi, you don’t have to worry about that anymore. You can develop your IoT integration app, as well as program your infrastructure as an app.

The Garage Door Opener

Most people have an internet hub connected to their automatic garage door opener, which allows them to remotely monitor the garage door as well as open and close it using a mobile app. But what about when you forget to close it and it stays open? Neither the app nor the existing recipes on the home automation website IFTTT have a way to remind you that you left it open. To solve this problem, I first tried not to build something of my own, and instead used Zapier, a task automation platform.

Note: The source code for this post is available here.

The First Attempt

My first attempt involved using Zapier. It would have worked if there was a way to update a state while waiting for a timer to fire. I used IFTTT to connect the myQ service to fire a webhook request each time the garage door opened or closed. The webhook receiver was a Zapier webhook “catch” action, which I then connected to a timer delay before sending me a text message via Twilio. It mostly worked, except that if I closed the garage door before the timer fired, there was no way for me to update the state and, therefore, cancel sending the text message.

Here’s the “zap” I ended up creating:

A Zap on Zapier using built-in actions.

Durable Functions on Azure Functions

Durable Functions is an extension to the already popular Azure Functions platform. This means you can write functions with an external trigger (HTTP, Queue, etc.) and have them trigger an orchestration. Each orchestration instance is automatically tracked by the platform. Check out the API reference to see what you can control about an orchestration instance.

Durable Function Types

There are other durable function types; learn more about them here. The following are just the types used in this project.

Orchestration Functions

Each function has a trigger type that identifies how that function can be triggered. Orchestration functions are no different. Orchestration functions typically don’t do any work other than, you guessed it, orchestrate other functions that do the work.

Activity Functions

Activity functions are responsible for most of the work in an orchestration. You can make HTTP calls, call other activity functions, etc.

Entity Functions

Entity functions are only available as part of Durable Functions 2.x, which is in public preview.

Entity functions allow you to represent your orchestration instance with a state. It is up to you whether each orchestration instance has its own entity or your state is a singleton; this is controlled by how entities are identified. Each entity’s identity is made up of two components:

  • An entity name: a name that identifies the type of the entity (for example, “Counter”).
  • An entity key: a string that uniquely identifies the entity among all other entities of the same name (for example, a GUID).

IFTTT + Azure Durable Functions + Twilio + Pulumi

This is the high-level view of the solution I finally ended up with.

  • IFTTT receives signals from the garage door opener.
  • IFTTT then calls the function app.
  • The function app waits for a couple of minutes and, if the door still isn’t closed by then, sends a text message using Twilio.

Function App

The only external trigger in the function app is the HTTP trigger used in this function. An orchestration instance is created only through the orchestration client, which is injected into the HTTP-triggered function as a parameter; that client starts the orchestrator function shown below.

[FunctionName("Main")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext ctx,
    ILogger log)
{
    var delay = Environment.GetEnvironmentVariable("TimerDelayMinutes");
    var delayTimeSpan = TimeSpan.FromMinutes(Convert.ToInt32(delay));
    DateTime timer = ctx.CurrentUtcDateTime.Add(delayTimeSpan);
    log.LogInformation($"Setting timer to expire at {timer.ToLocalTime().ToString()}");
    await ctx.CreateTimer(timer, CancellationToken.None);

    try
    {
        // The use of a critical block, though optional, is recommended here.
        // Updates to durable entities are serial, by default.
        // Having the lock ensures that the entity state we are reading is guaranteed to
        // be the current value of the entity.
        using (await ctx.LockAsync(EntityId))
        {
            var currentState = await ctx.CallEntityAsync<string>(EntityId, "read", null);
            log.LogInformation($"Current state is {currentState}.");
            // If the door is closed already, then don't do anything.
            if (currentState.ToLowerInvariant() == "closed")
            {
                log.LogInformation("Looks like the door was already closed. Will skip sending text message.");
                return;
            }
            await ctx.CallActivityAsync("SendTextMessage", null);
        }
    }
    catch (LockingRulesViolationException ex)
    {
        log.LogError(ex, "Failed to lock/call the entity.");
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Unexpected exception occurred.");
    }
}

Deploying the Infrastructure using Pulumi

We will use Pulumi to deploy our function app. The Pulumi app creates the function app, along with the Key Vault containing the Twilio account token needed for the API call that sends a text message. For more information, see the README file in the infrastructure folder of the source repo.
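As a rough sketch (the real program lives in the source repo), the TypeScript Pulumi app looks something like the following; the folder name is an assumption, and the Key Vault pieces are omitted for brevity.

import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";

const resourceGroup = new azure.core.ResourceGroup("garage-door");

// ArchiveFunctionApp is a convenience component in @pulumi/azure that creates
// the storage account, consumption plan, and function app, and deploys the
// zipped function code in one shot.
const app = new azure.appservice.ArchiveFunctionApp("garage-door-app", {
    resourceGroup,
    archive: new pulumi.asset.FileArchive("./functionapp"),
});

// The URL IFTTT will call; fetch it later with `pulumi stack output webhookUrl`.
export const webhookUrl = app.endpoint;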

Once the Pulumi app is deployed, you can get the URL for your function app in order to complete the IFTTT applet creation in the next step.

IFTTT Applets

IFTTT allows you to create custom applets, which is basically creating your own recipe of “this” and “that”. To create a new applet, click your avatar in the top-right corner on https://ifttt.com, and click Create.

Click + This and choose the service called myQ. Most garage door openers here in the USA are made by the Chamberlain Group, so you are most likely using one of those, and all of them work with the myQ internet gateway. The alternative is to buy a myQ Smart Hub.

Click + That and search for Webhook to select it. You will need the URL of the Function App that was deployed using Pulumi. You could get this URL by navigating to https://portal.azure.com, but since the infrastructure was deployed using Pulumi, we can easily fetch it by running pulumi stack output webhookUrl in the infrastructure folder. We can now complete the Webhook action’s configuration in IFTTT.

Note: Since the function app is exposed to the internet, we don’t want just anyone to be able to call it. Instead, we will use the built-in function app authorization keys to allow only IFTTT to invoke it. Any caller without the function key will receive a 401 Unauthorized error.

Completing the IFTTT applet creation for webhook action.

Twilio

In order to send a text message, create an account on Twilio and purchase a Programmable SMS number. Your account SID and token can be found on the dashboard page of the Twilio Console or on the Settings page under the API Credentials section.
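The actual text message in this project is sent by the C# SendTextMessage activity, but for illustration, here is roughly the same call using Twilio’s Node.js helper library; the environment variable names are assumptions.

import twilio from "twilio";

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

export async function sendTextMessage(): Promise<void> {
    await client.messages.create({
        from: process.env.TWILIO_PHONE_NUMBER, // the purchased Programmable SMS number
        to: process.env.MY_PHONE_NUMBER,
        body: "Your garage door has been open for a while!",
    });
}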

Final Notes

A few important things to note:

  • Entity functions (part of Durable Functions 2.x) are a preview feature, though the Durable Functions extension (1.x) itself is GA.
  • The Key Vault in this infrastructure is not necessary for a project like this, but it is very easy to create one with Pulumi. And with Azure’s new Managed Identity, it is even easier to configure application access to secrets.
  • To learn more about security best practices on Azure, read this excellent post by Mikhail Shilkov.

In the next post, we will take a closer look at the Pulumi app used to deploy the Azure Function App.

Static sites and Functions

This is the fifth and last part of a series of posts I am writing about building a static site with VueJS. In this post, I will walk through how you could use Functions-as-a-Service for your next project…or your current one, too.

Static sites typically don’t get the infrastructure attention that other apps do. Many developers still think that SPAs, whether they are static sites or not, need to be hosted on an always-running server. That is unnecessary. With the advent of service workers and the Progressive Web App movement, you really don’t need a server running all of the time.

Most devs are familiar with using a CMS like WordPress and then buying a domain to serve a website. Most websites don’t need all of that infrastructure. The price is a modest $4/mo according to WordPress’s pricing page, but you only get the basics with that plan. Not a big deal. But if you want SEO, custom analytics, and a few more knobs and levers to turn, you are looking at the next pricing tier, or perhaps the most expensive one at $25/mo.

This is the architecture I used for Net Your Problem.

Fig.1 – A simple cloud architecture for SPAs.

I have automated the part where I build the VueJS app and upload it to the Azure Storage account using a PowerShell script (see this gist), which is based purely on the AzureRM PS module.

Great. Now let’s talk about how these infrastructure pieces talk to each other to cohesively power your next project.

The inner details

A CDN works by aggressively caching static resources (JS, CSS, HTML, PNGs, etc.). You can also hint to a CDN service that it should cache additional MIME types by setting the Cache-Control response header. A typical CDN service has “edge” nodes all over the world; when you provision a CDN service from any of the cloud providers, you are not choosing edge nodes in a particular region. CDN services are globally distributed by default. Each CDN service does its billing differently. For example, I know that the Azure CDN service offers a tiered pricing model based on a group of countries in each tier, so traffic from different countries is billed at different rates, based on the amount of data transferred from the CDN to the clients (browsers).

As shown in fig.1, the CDN is connected to the Function App, meaning that the CDN will source the static assets from the Function App. The Function App, in turn, is connected to a storage account. Technically, each of these can be a service from any of the three major cloud providers (Azure, AWS, GCP). It doesn’t make sense, though, to create the individual services in multiple clouds; you would be charged more for inter-data-center data transfer. So it is best to co-locate all of these, except, of course, the CDN, which is always global regardless of whose CDN service you end up using.

The connection between the CDN and the Function App is pretty simple, as it is just a matter of specifying the origin settings for the CDN. The connection between the Function App and the Storage Account requires a little more than just specifying a URL. We have to detect the incoming request at the Function App and proxy it to the storage account, letting the storage account serve the static asset. Essentially, the Function App serves as a reverse proxy for some URL patterns, and for others, as a service that may or may not return a response, depending on whether there is an actual function that can handle the request.
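Here is a minimal sketch of what that reverse proxy could look like as a catch-all, HTTP-triggered Node.js function (v1 programming model). This is not the exact code from my setup; the STORAGE_BASE_URL app setting and the {*path} route are assumptions. Note that it also sets the Cache-Control header mentioned above.

// Assumes function.json binds an HTTP trigger with route "{*path}" so this
// function catches every request the CDN forwards to the Function App.
import fetch from "node-fetch";

module.exports = async function (context: any, req: any) {
    const path = req.params.path || "index.html";

    // Fetch the asset from the storage account, e.g.
    // https://<account>.blob.core.windows.net/site/<path>
    const upstream = await fetch(`${process.env.STORAGE_BASE_URL}/${path}`);

    context.res = {
        status: upstream.status,
        isRaw: true, // skip content negotiation; serve the bytes as-is
        body: await upstream.buffer(),
        headers: {
            "Content-Type": upstream.headers.get("content-type") || "text/html",
            // Hint to the CDN (and browsers) to cache this aggressively.
            "Cache-Control": "public, max-age=86400",
        },
    };
};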

Bonus! Let’s talk about automation

Let’s introduce a CI/CD pipeline into the mix. Azure has a really good all-in-one Ops platform called Azure DevOps, previously known as Visual Studio Team Services (VSTS), and even before that, Visual Studio Online. The point of the platform, much like Bitbucket or GitHub, is that you have everything in one place: CI/CD pipeline integrations, release management, project planning (Kanban as well as sprint-based, whichever you are into), private/public repos, wikis, and also a private feed for your NuGet (.NET), npm, or Maven packages.

Don’t take my word for it, though. After all, I am just some random programmer on the internet. People say a lot of things. But seriously, you should check it out.

Fig.2 – The same architecture, extended with a CI/CD pipeline.

Here’s a screenshot of the Pipelines page in Azure DevOps, showing the CI and PROD build pipelines for Net Your Problem.

The pipeline is simple: run npm ci -> npm install -> npm run build -> upload to Azure Storage -> store artifacts. That’s it. I have a trigger for this pipeline that kicks off a build every time a branch is updated through a merge.

Admittedly, all of this may look like overkill. Trust me, it is not. I spend about a minute to run the build and then upload the files to Azure Storage. Then, sometimes, I have to purge the CDN cache, because, well, it works too well sometimes :). Overall, I could spend anywhere between 1-10 minutes, ~5 minutes on average, deploying some changes. Repeat this several times while I am actively developing something, or want to see how things look in Dev, or want to show something to my ahem client (my girlfriend), and the time investment adds up really quickly. This setup allows me to focus just on the coding part and push my changes up for a quick review on the dev site, before I create a release for PROD and have approved changes go to the live site immediately. All of this is a pleasant experience for me. To my girlfriend, it makes no difference, and that’s a good thing. She just sees the dev site and the live site. That’s it.

You see, delays in development often affect our customers. When I have a pipeline that works well, my customer isn’t affected by those delays. They just see what they need to see. In the end, what matters to them is whether they are able to see what they want and whether it works. If the process gets in their way, they simply won’t put up with it. To borrow from the overused automobile world for an analogy: this experience is akin to taking our car to a shop for an oil change. At the end of it, we just want to drive our car out of the shop with a new oil filter and fresh oil. We don’t care, and most of us don’t want to know, how they were able to do an oil change without an appointment. On the other hand, if the oil change took too long, we want an explanation, and all of the shop’s fancy equipment and ISO certifications wouldn’t save them from our negative experience.

Data-driven sites

This is the fourth part in a 5-part series about building a static site using VueJS. In this part, I’ll show you an example of how I built Net Your Problem by thinking of static content as data, rather than…well, static content.

We’ll go through this article looking at one specific section of Net Your Problem: the Projects section. There are two cards, each with a title, a cover image, and a button that opens a modal dialog.

The Projects section of Net Your Problem.

Let’s look at the template for this.
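Here is a minimal sketch of that template. The card and modal component names and the project field names are assumptions based on the description below; the exact markup on the real site differs.

<template>
  <section class="projects">
    <!-- One <card> tag, repeated for each project with v-for. -->
    <card
      v-for="project in projects"
      :key="project.title"
      :title="project.title"
      :image="project.coverImage">
      <!-- Tracks the click with Google Analytics, then opens the modal. -->
      <button @click="readMore(project)">READ MORE</button>
    </card>

    <modal v-if="selectedProject" @close="selectedProject = null">
      <!-- The full content is a markdown string rendered by vue-markdown. -->
      <vue-markdown :source="selectedProject.content"></vue-markdown>
    </modal>
  </section>
</template>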

Even though two projects are shown on the page, the template only has one <card> tag with some data-bindings, along with some code to track some events with Google Analytics. Nothing too crazy here. But where is all of the content coming from? Netlify CMS.

If you think of your site as simply the presentation layer that serves content, then think of a CMS as the database that stores your data, i.e., the content. Netlify CMS is a bit special: it doesn’t actually use any database. Well, technically, no; one could argue that its use of a version-controlled filesystem is like a database. After all, a database has files too. Anyway, back to how it works. Netlify CMS basically provides a content editing platform on top of popular git version control services like GitHub and Bitbucket. You can read about it here.

What I have done for Net Your Problem is use Netlify CMS as the content editing platform, almost like WordPress. I say “almost” because, although WordPress is a CMS too, it differs in many ways. The first major difference is that WordPress uses databases, and you need to host your content on their platform. On top of that, articles very much like this one can only be published to a sub-domain of their own domain, at least under the free plan. You can install WordPress on your own servers if you are adventurous and want to deal with all of the jazz of setup and maintenance.

The thing with Netlify CMS is that the content simply gets stored as Markdown, JSON, or TOML files in your favorite version control SaaS platform. The files are organized in directory structures that make them easy to read. You can simply make ajax calls using the public APIs for GitHub or Bitbucket. The downside (if you can call it that!) is that you have to make your repository public in order to call it anonymously, i.e., without authentication, from your website’s JS.

Let’s look at the code that fetches the content Netlify CMS stores in our content repository. The sketch below shows the script portion of the same Projects component whose <template> portion we saw above.

A few things to note about it:

  • axios is used as the HTTP client library
  • There are two API calls
    • GET the list of projects to show the cards.
    • GET the full content for a project when the user clicks on the READ MORE button in the card.
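Here is a minimal sketch of that script section. The Bitbucket repo path, file names, and the analytics call are illustrative placeholders, not the exact values from the site.

import axios from 'axios'
import VueMarkdown from 'vue-markdown'

// Bitbucket's 2.0 "src" API serves raw file contents from a public repo.
const API_BASE = 'https://api.bitbucket.org/2.0/repositories/<user>/<repo>/src/master'

export default {
  components: { VueMarkdown },
  data () {
    return { projects: [], selectedProject: null }
  },
  async created () {
    // GET the list of projects (a JSON file maintained through Netlify CMS).
    const response = await axios.get(`${API_BASE}/projects/projects.json`)
    this.projects = response.data
  },
  methods: {
    async readMore (project) {
      // GET the full markdown content for one project, on demand.
      const response = await axios.get(`${API_BASE}/${project.contentFile}`)
      this.selectedProject = { ...project, content: response.data }
      // Track the click with Google Analytics here, e.g. via the global tracker.
    }
  }
}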

Once the content is downloaded, I just update the data property that has a template binding attached to it. And since the content is stored as a JSON file in the Bitbucket repo, the response from the Bitbucket API is…yep, JSON. This link will show you the response for the Projects list JSON file.

And here’s the modal dialog that shows the full content of a “project” from Net Your Problem. The content itself is a markdown string, which is fed to the <vue-markdown> component you can see in the template above.

A modal dialog showing the content for one of the “projects” on Net Your Problem.

We just looked at one section of the site, but I am happy to report that 100% (ok, ok, 99.9%; the header navigation is hard-coded) of the site is built this way. At first, all of the content was hard-coded in the site, and I slowly converted each component to be completely driven by data fetched through the APIs.

Convinced? Head over to the Netlify CMS docs to get started.

Components in a static site too

In the 2nd part of this series, we learned some basics of rendering a view. We ended that topic by having a look at the router. The router had just one path, the root (/), which was mapped to a single component called HelloComponent.

Here’s what we’ll do in this post:

  • Examine the HelloComponent
  • Add a new route, and a new component to handle the route
  • Add a nav link to take the user to the new route
  • Render another component inline without routing

HelloComponent

As you can see, the HTML code for most of what you see when you navigate to http://localhost:8080 comes from this file. So how does Vue know where to render the contents of this component? If you recall, App.vue, the parent component, has a tag called <router-view>, and I mentioned that this is the output target for any component that is routed to by the router.

Add a new route, and a new component to handle the route

Create a new file under the components/ folder. Let’s call it TestComponent.vue, and paste in the following content.
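A minimal sketch is all it takes; the component only needs to render the word “Test”:

<template>
  <div>Test</div>
</template>

<script>
export default {
  name: 'TestComponent'
}
</script>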

Yes, I realize it doesn’t do much. I wanted to show you what a basic component looks like. It doesn’t need to be anything more than that; with just that much content, you get a valid component. You should be able to imagine now how easy it is to create a component and have your entire site split into pieces (components) that all come together eventually in the browser.

But wait. We are not done adding the component. We just added a file called TestComponent.vue but we haven’t really used it anywhere. So let’s add a new route to the router/index.js file.

Your router should now look like this:
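(A sketch of router/index.js with the new route added; the import paths assume the default Vue CLI webpack template layout.)

import Vue from 'vue'
import Router from 'vue-router'
import HelloComponent from '@/components/HelloComponent'
import TestComponent from '@/components/TestComponent'

Vue.use(Router)

export default new Router({
  routes: [
    { path: '/', name: 'Hello', component: HelloComponent },
    // The new route, mapped to the newly-created component.
    { path: '/test', name: 'Test', component: TestComponent }
  ]
})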

We just added a new route called /test, which routes to the newly-created TestComponent. Let’s test this out by going to http://localhost:8080/#/test. You should see most of the content replaced with just the word “Test”, which means our new component has rendered. Great; we confirmed our new component works by manually going to the /test route.

Add a nav link to take the user to the new route

Let’s look at adding a router-link so that the user can navigate to this newly-created component of ours. In App.vue, somewhere inside the <template> tag, add this markup:

<router-link to="/test">Go to Test</router-link>

Vue will take care of the rest: it will render the router-link as an anchor (<a>) tag. You could also navigate programmatically (for example, with this.$router.push('/test')) if you don’t want to use an anchor tag. Refresh your browser and you should see a Go to Test link on your page. Click it, and you should see the contents of the TestComponent.

That’s it. We just learned how to use components in Vue to compose our app out of little pieces, which are the building blocks for a larger site or web app. I highly recommend reading more about VueRouter here.

Render another component inline without routing

So we saw how we could link to a custom component. What if we simply want to render another component inline, in the context of a parent component? Yep. You can do that too.

To do that, let’s first remove the <router-link> tag, and update your App.vue to this:
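(A sketch, with component and file names carried over from the earlier steps.)

<template>
  <div id="app">
    <!-- Output target for routed components (HelloComponent at /). -->
    <router-view/>
    <!-- Our component, rendered inline in the context of the parent. -->
    <test-component></test-component>
  </div>
</template>

<script>
import TestComponent from './components/TestComponent'

export default {
  name: 'App',
  components: { TestComponent }
}
</script>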

Then go to http://localhost:8080 and you should see the contents of the TestComponent rendered inside App.vue, alongside the contents from HelloComponent. So instead of replacing the contents of HelloComponent, we just augmented them with our new component.

Since you are rendering the component inline, there is no need for the new route that we added to router/index.js. You can remove the /test route from it, if you’d like.

Remote debugging an embedded WebView in Android

And there I was, thinking that I was done with AzureStorageExplorer for Android v1.0.0. But nope. In my last round of testing, I couldn’t even log in using my work account; the Sign In button didn’t do anything. I went back a few versions to see if an Android update to WebView could have broken it. That wasn’t it. The OAuth client library I was using had the recommended code for enabling JavaScript in the WebView client, so I knew that wasn’t it either. I was able to log in through the Azure OAuth flow using my personal Azure account. So, I knew there was something about the enterprise account login screen that could have changed recently and ended up breaking authentication in the app. It turns out that it did. Read on to see how I found out.

The first step was to prove that something was indeed broken in the login screen for Azure enterprise accounts in embedded WebViews. To do this, I looked up Chrome’s remote debugging options and found this page describing how to remotely debug WebViews in mobile apps. So, I enabled the setting as recommended by Google, which required adding this piece of code in my activity’s onCreate(Bundle) method:

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
  WebView.setWebContentsDebuggingEnabled(true);
}

I ran the app, launched Chrome’s Developer Tools, and connected to the app’s WebView. This is what I found:

The Chrome DevTools console, connected to the app’s WebView.

That last error in the console is what was logged when I pressed the Sign In button after entering my work account credentials for Azure. It turns out that the page’s JS now has code to store something in the browser’s localStorage. The reason this isn’t a problem when you log into Azure through a desktop or even a mobile browser is that localStorage is enabled for every site by default in the non-embedded browser. But for a WebView in an Android app, you need to explicitly enable DOM storage (WebSettings.setDomStorageEnabled(true)), just like you need to enable JS through the WebSettings class.

After finding this, it was easy to know where I needed to make the fix. So, I forked the source repo of the OAuth library I was using, made the necessary changes, and created a pull request. If you are using the same library and have run into this issue, you could clone my repo and build the project to produce the patched .jar (or .aar), which you can use in your project directly until the author of the library gets to my pull request (if at all!).

Unofficial Tesla Android client for controlling your Model S

Tesla Motors

GitHub link: https://github.com/praneetloke/MyTesla

But there’s already an official app from Tesla for iOS and Android, so why this? Because I wanted to, and besides, there aren’t any open source Android clients. There are a couple of Ruby clients and a node.js client. The node.js client is particularly of interest to me since it has visual examples of each API it supports. The telemetry streaming API is my favorite; if you remember, that’s what Tesla used to debunk the NY Times reporter’s fake report on the Model S some time ago.

Anyway, back to the MyTesla client. You can fork it, download it, modify it, do whatever you want with it. This client is unstable, unofficial, and most importantly, unverified (since I don’t own a Tesla Model S, unfortunately), so please use it with caution. If you find bugs, please raise an issue in my repo, or you can fork-fix-pull. If you are able to help stabilize it by testing on your Tesla Model S, please let me know. You can hit me up on G+ or LinkedIn.

There are 3 REST API clients for your use in this project: LoginClient, VehicleStatusClient, and VehicleCommandClient. I have already pulled these 3 REST interfaces into a custom Android Application class called TeslaApplication. I did this to spare consumers from having to pull in AndroidAnnotations as well. On a side note, you should check out AndroidAnnotations; it is awesome! Anyway, in your project, you can simply extend TeslaApplication and be on your way. You don’t need to put anything in it. I have not actually tested whether such an extension is required if you were to reference this project, but be my guest.

I have also made a LoginActivity, which has a boilerplate login form you can present to your users. It handles submitting the email and password to the API and inspecting the response to see if login was successful. I actually plan to change this to a login dialog instead, or perhaps have both, since a dialog only needs a layout; then you can choose either depending on your needs. When login fails, it currently doesn’t do anything; I am yet to work on that. I also need to work on some cookie transfer from LoginClient to the other two clients, because from what I saw in the AndroidAnnotations sources, there doesn’t seem to be a unified storage for cookies acquired by REST interfaces.

When you look at the library I made, you will notice that I didn’t use primitives. That’s because of the nature of the API itself: it’s unofficial, and there are unconfirmed properties whose values are unknown and sometimes null. This being Java, I couldn’t use primitives in those cases. For the properties that had confirmed values, I could have used primitives, but I felt I needed to be consistent rather than have you guess what you’ll get when you inspect an object. And yes, I am talking about primitives and objects because I actually went ahead and created model classes (POJOs) for all of the endpoints. This should make interaction way easier. It uses Gson for type conversion; I chose Gson over the Jackson mapper for its light weight and performance. Gson doesn’t have all of the features Jackson has, but it does the job, and fast, too.

If you have watched this clip of a guy issuing commands to a Tesla Model S, you’ll be at least half as excited as I was to find a REST client to play with, especially if you have a Tesla Model S. Of course, you would more likely already have the official Tesla app. But if you are into programming and diving into things on your own, this is for you. I wish my VW CC were capable of something like this.

Credits

  • Tim Dorr (and the many others that commented on each API endpoint in the Apiary blueprint) for his excellent Apiary documentation based on his findings. He has also written his own Ruby implementation of the API here.
  • AndroidAnnotations
  • Spring Android

Other clients

  • node.js
  • Ruby (there’s also the one from Tim Dorr himself).

Ford TDK

Guess what?? Last week, I got the Ford TDK that I won from participating in an app idea contest held by Ford. Here are the pictures!

Instructions and power cord for the Ford/Lincoln TDK 3.0

Ford/Lincoln TDK

[UPDATE] Weird problem with GridView OnScrollListener and list navigation listener

It turns out that the problem was much simpler than I had originally thought.

The Problem

It was actually twofold. The scroll listener, for some reason, fires the change event as soon as the gridview’s adapter is set, even when there are no items in it. My scroll listener was something like this:

private AbsListView.OnScrollListener mOnScrollListener = new AbsListView.OnScrollListener() {
  @Override
  public void onScrollStateChanged (AbsListView view, int scrollState) {}

  @Override
  public void onScroll (AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
    // Fires as soon as the adapter is set, even with zero items:
    // 0 + 0 >= 0 is true, so loadMore() runs before the first page is loaded.
    if ((firstVisibleItem + visibleItemCount) >= totalItemCount && !mAsyncTasksPending) {
      loadMore();
    }
  }
};

The above code ended up calling loadMore() even before the gridview’s adapter was loaded with the first page of results from the service, which wastefully made two requests. The second problem was in how I had declared the loadMore() method. I use AndroidAnnotations to reduce boilerplate code, and I had annotated loadMore() with @Background to execute the service request in a background thread and keep the UI from locking up. But I didn’t need to do this, since I was already making sure the service request executed in a background thread using another library called async-http, which, by the way, is freaking awesome! My guess is that when I set the method to be executed in a background thread, it somehow held up the gridview’s listener thread, which caused the events to accumulate in the call stack. This would probably explain why, when I paused the activity and resumed it, it would function normally, and the moment I scrolled and caused it to call loadMore(), it would freeze again and none of the callbacks would fire.

A little more than the solution

First, I removed the redundant @Background annotation from loadMore(). Then, I realized that my activity was a little bloated, so I created a fragment and shoved the gridview, and the management of its adapter, into the fragment. I was then able to keep my activity lean. By doing so, I have also opened up the possibility of introducing additional fragments to handle different views in the future, which makes all of this easier to manage without bloating my activity.

Next, I updated the gridview’s onScroll callback. It was still being called as soon as the adapter was set on the gridview. Initially, the adapter is empty, and I don’t want it to call loadMore() when the first page of results hasn’t been filled in yet. Loading the first page is handled through the list navigation listener, since that sets the currently selected item, for which I then go and fetch the list of images. So now, this is how my updated OnScrollListener looks:

private AbsListView.OnScrollListener mOnScrollListener = new AbsListView.OnScrollListener() {
  @Override
  public void onScrollStateChanged (AbsListView view, int scrollState) {}

  @Override
  public void onScroll (AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
    // Only load more if the adapter actually has items (visibleItemCount > 0),
    // and the user has reached the last row of visible items.
    if (visibleItemCount > 0 && ((firstVisibleItem + visibleItemCount) >= totalItemCount) && !mAsyncTasksPending) {
      loadMore();
    }
  }
};