Using Azure IoT Hub to connect my home to the cloud

I’ve written about my hybrid local/cloud home automation architecture previously: in summary, most of the moving parts and automation logic live on a series of Raspberry Pis on my home network, using a MQTT broker to communicate with each other. I bridge this “on-prem” system with the cloud in order to route incoming events, e.g. actions initiated via my Alexa Skills, from the cloud to my home, and to send outgoing events, e.g. notifications via Twilio or Twitter.

Historically this bridging was done using Octoblu, with a custom Octoblu MQTT connector running locally and an Octoblu flow running in the cloud for the various inbound and outbound event routing and actions. However, with Octoblu going away as a managed service hosted by Citrix, I, like other Octoblu users, needed to find an alternative solution. I decided to give Azure IoT Hub and its related services a try, partly to solve my immediate need and partly to get some experience with that service. Azure IoT isn’t really the kind of end-user/maker platform that Octoblu is, and there are some differences in concepts and architecture; however, for my relatively simple use-case it was fairly straightforward to make Azure IoT Hub and Azure Functions do what I need them to do. Here’s how.

I started by creating an instance of an Azure IoT Hub, using the free tier (which allows up to about 8k messages per day), and within this manually creating a single device to represent my entire home environment (this is the same model I used with Octoblu).
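I did this through the portal, but for reference the same thing can be scripted. A minimal sketch using the azure-iothub NPM package – the connection string comes from the hub’s “registryReadWrite” shared access policy, and the environment variable name here is just my illustration:

var iothub = require('azure-iothub');

// Create the single device that represents my whole home environment.
var registry = iothub.Registry.fromConnectionString(process.env.IOTHUB_CONNECTION_STRING);
registry.create({ deviceId: 'MyMQTTBroker' }, function (err, device) {
  if (err) { return console.error('create failed: ' + err.message); }
  // this generated key is what later goes into the gateway's config file
  console.log('deviceKey: ' + device.authentication.symmetricKey.primaryKey);
});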

After some experimentation I settled on using the Azure IoT Edge framework (V1, not the more recently released V2) to communicate with the IoT Hub. This framework is a renaming and evolution of the Azure IoT Gateway SDK and allows one or more devices to be connected via a single client service framework. It is possible to create standalone connectors to talk to IoT Hub in a similar manner to how Octoblu connectors work, but I decided to use the Edge framework to give me more flexibility in the future.

There are various ways to consume the IoT Edge/gateway framework; I chose to use the NPM packaged version, adding my own module and configuration. In this post I’ll refer to my instance of the framework as the “gateway”. The overall concept for the framework is that a number of modules can be linked together, with each module acting as either a message source, sink, or both. The set of modules and linkage are defined in a JSON configuration file. The modules typically include one or more use-case specific modules, e.g. to communicate with a physical device; a module to bidirectionally communicate with the Azure IoT Hub; and a mapping module to map between physical device identifiers and IoT Hub deviceNames and deviceKeys.

The requirements for my gateway were simple:

  1. Connect to the local MQTT broker, subscribe to a small number of MQTT topics and forward messages on them to Azure IoT Hub.
  2. Receive messages from Azure IoT Hub and publish them to the local MQTT broker.

To implement this I built a MQTT module for the Azure IoT Edge framework. I opted to forego the usual mapping module (it wouldn’t add value here) and instead have the MQTT module set the deviceName and deviceKey for IoT Hub directly, and perform its own inbound filtering. The configuration for the module pipeline is therefore very simple: messages from the IoT Hub module go to the MQTT module, and vice-versa.
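In V1 gateway terms the config file therefore just declares the two modules and links them in both directions. A from-memory sketch of its shape (module paths and args will differ in a real setup):

{
  "modules": [
    {
      "name": "iothub",
      "loader": { "type": "native", "entrypoint": { "module.path": "modules/iothub/libiothub.so" } },
      "args": { "IoTHubName": "<my_iot_hub>", "IoTHubSuffix": "azure-devices.net", "Transport": "AMQP" }
    },
    {
      "name": "mqtt",
      "loader": { "type": "node", "entrypoint": { "main.path": "modules/mqtt.js" } },
      "args": { "configFile": "/home/pi/.iothub.json" }
    }
  ],
  "links": [
    { "source": "iothub", "sink": "mqtt" },
    { "source": "mqtt", "sink": "iothub" }
  ]
}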

The IoT Edge framework runs the node.js MQTT module in an in-process JavaScript interpreter, with the IoT Hub module being native code running in the same process. Thus the whole gateway runs as a single program with the configuration supplied as its argument.
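A V1 Node module is just an object exporting create/receive/destroy functions that the framework calls. A stripped-down sketch of my MQTT module – helpers and error handling omitted, and the message properties the IoT Hub module expects are assumptions based on the V1 samples:

'use strict';
var mqtt = require('mqtt');

module.exports = {
  broker: null,
  client: null,

  create: function (broker, configuration) {
    // configuration is flattened here for brevity; the real module reads
    // /home/pi/.iothub.json (shown below)
    var self = this;
    this.broker = broker;
    this.client = mqtt.connect(configuration.url);
    this.client.on('connect', function () {
      self.client.subscribe('Alert'); // plus the other topics of interest
    });
    this.client.on('message', function (topic, payload) {
      // Outbound: attach deviceName/deviceKey directly (no mapping module)
      self.broker.publish({
        properties: {
          source: 'mapping', // assumed: what the IoT Hub module looks for
          deviceName: configuration.deviceName,
          deviceKey: configuration.deviceKey,
          topic: topic
        },
        content: new Uint8Array(payload)
      });
    });
    return true;
  },

  receive: function (message) {
    // Inbound C2D message: republish it on the local broker
    var topic = (message.properties && message.properties.topic) || 'FromCloud'; // assumed shape
    this.client.publish(topic, Buffer.from(message.content).toString());
  },

  destroy: function () {
    if (this.client) { this.client.end(); }
  }
};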

The gateway runs on a Pi with my specific deviceName and deviceKey, along with MQTT config, stored locally in a file “/home/pi/.iothub.json” that looks like this:

{
  "iothub":{
    "deviceName":"MyMQTTBroker",
    "deviceKey":"<deviceKey for device as defined in Azure IoT Hub>",
    "hostname":"<my_iot_hub>.azure-devices.net"
  },
  "localmqtt":{
    "url":"mqtt://10.52.2.41",
    "protocol":"{\"protocolId\": \"MQIsdp\", \"protocolVersion\": 3}"
  }
}

The gateway can now happily send and receive messages from Azure IoT Hub but that isn’t very useful on its own. The next step was to set up inbound message routing from my Alexa Skills.

In the previous Octoblu implementation the Alexa Skills simply called an Octoblu Trigger (in effect a webhook) with a message containing a MQTT topic and message body. The Octoblu flow then sent this to the device representing my home environment and the connector running on a Pi picked it up and published it into the local MQTT broker. The Azure solution is essentially the same. I created an Azure Function (equivalent to an AWS Lambda function) using a JavaScript HTTP trigger template that can be called with a topic and message body; this then calls the Azure IoT Hub (via an NPM library) to send a “cloud-to-device” (C2D) message to the MQTT gateway device – the gateway described above picks this up and publishes it via the local broker just like the Octoblu connector did. I then updated my Alexa Skills’ Lambda Functions to POST to this Azure Function rather than to the Octoblu Trigger.

The code for the Azure function is really just argument checking and plumbing to call into the library that in turn calls the Azure IoT Hub APIs. In order to get the necessary Node libraries into the function I defined a package.json and used the debug console to run “npm install” to populate the image (yeah, this isn’t pretty, I know) – see the docs for details on how to do this.
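In sketch form the function looks something like this – trimmed, and the IOTHUB_CONNECTION_STRING app setting name and the argument shape are my illustration:

var Client = require('azure-iothub').Client;

// HTTP-triggered function: accepts {topic, message} and relays it to the
// gateway device as a cloud-to-device (C2D) message.
module.exports = function (context, req) {
  if (!req.body || !req.body.topic) {
    context.res = { status: 400, body: 'topic is required' };
    return context.done();
  }
  var client = Client.fromConnectionString(process.env.IOTHUB_CONNECTION_STRING);
  client.open(function (err) {
    if (err) {
      context.res = { status: 500, body: err.message };
      return context.done();
    }
    var payload = JSON.stringify({ topic: req.body.topic, message: req.body.message });
    client.send('MyMQTTBroker', payload, function (err2) {
      context.res = err2 ? { status: 500, body: err2.message } : { status: 200, body: 'sent' };
      context.done();
    });
  });
};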

If you’re wondering why I’m using both AWS Lambda and Azure Functions, the reason is that Alexa Smart Home skills (the ones that let you do “Alexa, turn on the kitchen lights”) can only use Lambda functions as backends; they cannot use general HTTPS endpoints like custom skills can. In a different project I have completely replaced an Alexa Skill’s Lambda function with an Azure Function (which, like here, calls into IoT Hub) to reduce the number of moving parts.

So with all of this I can now control lights, TV, etc. via Alexa like I could previously, but now using Azure IoT rather than Octoblu to provide the cloud->on-prem message routing.

The final use-case to solve was the outbound message case, which was limited to sending alerts via Twitter (I had used Twilio before but stopped this some time back). My solution started with a simple Azure Logic App which is triggered by a HTTP request and then feeds into a “Post a Tweet” action. The Twitter “connection” for the Logic App is created in a very similar manner to how it was done by Octoblu, requiring me to authenticate to Twitter and grant permission for the Logic App to access my account. I defined a message schema for the HTTP request which allowed me to POST JSON messages to it and use the parsed fields (actually just the “message” field for now) in the tweet.
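The schema just declares a JSON object with a “message” string, so the request trigger’s schema is along these lines:

{
  "type": "object",
  "properties": {
    "message": { "type": "string" }
  }
}

and a caller simply POSTs a body like (example text, obviously):

{ "message": "example alert text to tweet" }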

I then created a second Azure Function which is configured to be triggered by Event Hub messages using the embedded Event Hub in the IoT Hub (if that all sounds a bit complex, just use the function template and it’ll become clearer). In summary this function gets called for each device-to-cloud event (or batch of events) received by the IoT Hub. If the message has a topic of “Alert” then the body of the message is sent to the Logic App via its trigger URL (copied and pasted from the Logic App designer UI). I added the “request” NPM module to the Function image using the same procedure as for the iot-hub libraries above.
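A sketch of that function – the event body shape ({topic, message}) and the ALERT_LOGICAPP_URL app setting are my illustration:

var request = require('request');

// Event Hub-triggered function: fans device-to-cloud events out to the
// Logic App. Depending on the binding this argument may be a single event
// or an array; a single, already-parsed event is assumed here.
module.exports = function (context, eventHubMessage) {
  var evt = eventHubMessage;
  if (typeof evt === 'string') { evt = JSON.parse(evt); }
  if (evt && evt.topic === 'Alert') {
    request.post(
      { url: process.env.ALERT_LOGICAPP_URL, json: { message: evt.message } },
      function (err) {
        if (err) { context.log('Logic App call failed: ' + err); }
        context.done();
      }
    );
  } else {
    context.done();
  }
};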

The overall flow is thus:

  1. Something within my home environment publishes a message on the “Alert” topic.
  2. The Azure IoT gateway’s MQTT module is subscribed to the “Alert” topic, receives the published message, attaches the Azure IoT Hub deviceName and deviceKey, and passes it to the IoT Hub module, which sends it on to Azure IoT Hub via AMQP.
  3. Azure IoT Hub invokes the second Azure Function with the event.
  4. The Function pulls out the MQTT topic and payload from the event and calls the Logic App with them.
  5. The Logic App pulls out the message payload and sends this as a tweet using the pre-authorised Twitter connector.

Although all of this seems quite complex, it’s actually fairly simple overall: the IoT hub acts as a point of connection, with the on-prem gateway forwarding events to and from it, and a pair of Azure Functions being used for device-to-cloud and cloud-to-device messages respectively.


It was all going well until I discovered that the spin-up time for an Azure Function that’s been dormant for a while can be huge – well beyond the timeout of an Alexa Skill. This is partly caused by the time it takes for the function runtime to load in all the node modules from the slow backing store, and partly just slow spin-up of the (container?) environment that Azure Functions run within. A common practice is to ensure that functions are invoked sufficiently often that Azure doesn’t terminate them. I followed this practice by adapting my existing heartbeat service running on a Pi, which publishes a heartbeat MQTT message every 2 minutes, to also call the first Azure Function (the one the Alexa Skills call) with a null argument; and to keep the second function alive I simply had the MQTT gateway subscribe to the heartbeat topic, thereby ensuring the event handler function ran at least once every 2 minutes as well.
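The keep-alive additions amount to something like this (the topic name and the function URL are placeholders):

var mqtt = require('mqtt');
var https = require('https');

var client = mqtt.connect('mqtt://10.52.2.41');

setInterval(function () {
  // publishing locally keeps the event-handler function warm via the gateway
  client.publish('Heartbeat', JSON.stringify({ ts: Date.now() }));
  // calling the HTTP function with no arguments keeps it warm too
  https.get('https://<my_function_app>.azurewebsites.net/api/sendc2d?code=<key>', function (res) {
    res.resume();
  });
}, 2 * 60 * 1000);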

 


A universal IR remote control using MQTT

In some previous hacking I created an add-on for the Kodi media player which allowed me to control Kodi and the TV it is connected to (by using an IR blaster) using messages published through my home MQTT broker. The original purpose for this hack was to enable voice control via an Amazon Echo Smart Home Skill.

I’ve since added another use-case where a single push-button connected to a Raspberry Pi Zero W publishes a single MQTT message which my rules engine picks up and then publishes a number of messages to act as an “all off” button: it sends “off” messages to all lights in the room (these are a mixture of LightwaveRF and Philips Hue – both interfaced via services that subscribe to the local MQTT broker); a “pause” message to Kodi; and a TV “off” message to the TV.

However, despite having this capability I still use separate traditional IR remote controls for the TV and Kodi, and a 433MHz control for the LightwaveRF lights. It seemed like a good idea to take advantage of the MQTT control to reduce the need for so many remote controls, so I set about turning the Hama remote control I use for Kodi into a universal control for all the devices.

The strategy I used was to have the Hama remote control publish MQTT messages and add some rules to the broker’s rules engine to map these to the required MQTT message(s) to control Kodi, the TV, or the lights. I chose to connect the Hama USB IR receiver to a new Raspberry Pi Zero W – I could have left this connected to the Pi 3 running Kodi and created a new add-on to talk to it but I have future plans that call for another Pi in this location and this seemed like it would be easier… – and set about building a small service to run on the Pi to relay received IR commands to the MQTT broker.

(Image: overview of the overall architecture)

After a few false starts with LIRC I settled on consuming events from the remote control via the Linux event subsystem (the same one that handles keyboards and mice). There are some node.js libraries to enable this but I found a much more complete library available for Python which, critically, implements the “grab” functionality to prevent “keystrokes” from the IR control also going to the Pi’s login console.

I’d already implemented a few Python MQTT clients using the Paho library (including the Kodi add-on itself) so I recycled existing code and simply added an input listener to attach to the two event devices associated with the IR control (hard-coded for now) and, after a little processing of the event, publish an MQTT message for each button press. The Hama remote acts like a keyboard and some of the buttons include key modifiers: this means that a single button push could involve up to 6 events, e.g. key-down for left-shift, key-down for left-ctrl, key-down for ‘T’, followed by three key-up events in the reverse order. My code maintains a simple cache of the current state of the modifier keys so that when I get a key-down event for a primary key (‘T’ in the above example) I can publish a MQTT message including the key and its active modifiers.

for event in self.device.read_loop():
    if event.type == evdev.ecodes.EV_KEY:
        k = evdev.categorize(event)
        # Remember the up/down state of modifier keys (shift, ctrl, ...)
        set_modifier(k.keycode, k.keystate)
        if not is_modifier(k.keycode) and not is_ignore(k.keycode):
            if k.keystate == 1:  # key-down of a primary (non-modifier) key
                # e.g. "KEY_T_KEY_LEFTCTRL_KEY_LEFTSHIFT"
                msg = k.keycode + get_modifiers()
                self.mqttclient.publish(self.topic, msg)

(The full code for the service can be found here.)

This results in MQTT messages of the form

IR/room2av KEY_VOLUMEUP
IR/room2av KEY_VOLUMEDOWN
IR/room2av KEY_LEFT
IR/room2av KEY_RIGHT
IR/room2av KEY_DOWN
IR/room2av KEY_UP
IR/room2av KEY_PAGEUP
IR/room2av KEY_PAGEDOWN
IR/room2av KEY_T_KEY_LEFTCTRL_KEY_LEFTSHIFT

The next step was to add a rule to the rules engine to handle these. The rules engine is a simple MQTT client that runs on the same Raspberry Pi as the MQTT broker; it listens to topics of interest and based on incoming messages and any relevant state (stored in Redis) publishes message(s) and updates state. In this case there is no state to worry about, it is simply a case of mapping incoming “IR/*” messages to outbound messages.

A (partial) example is:

function handle(topic, message, resources) {
  switch (topic) {
  case "IR/room2av":
    switch (message.toString()) {
    case "KEY_UP":
      resources.mqtt.publish("KODI/room2/KODI", "Action(Up)");
      break;
    case "KEY_DOWN":
      resources.mqtt.publish("KODI/room2/KODI", "Action(Down)");
      break;
    case "KEY_PAGEUP":
      resources.mqtt.publish("Light/room2/Lamp", "on");
      resources.mqtt.publish("Light/room2Ceiling", "on");
      break;
    case "KEY_PAGEDOWN":
      resources.mqtt.publish("Light/room2/Lamp", "off");
      resources.mqtt.publish("Light/room2Ceiling", "off");
      break;
    case "KEY_VOLUMEUP":
      resources.mqtt.publish("KODI/room2/TV", "VOL_p");
      break;
    case "KEY_VOLUMEDOWN":
      resources.mqtt.publish("KODI/room2/TV", "VOL+m");
      break;
...

Here we can see how button pushes from this one IR remote are routed to multiple devices:

  • the “up” and “down” navigation buttons result in messages being sent to Kodi (the message content is simply passed to Kodi as a “builtin” command via the xbmc.executebuiltin(…) API available to add-ons);
  • the “+” and “-” channel buttons (which map to PAGEUP and PAGEDOWN keycodes) have been abused to turn the lights on and off – note the two separate messages being sent; these actually end up going to LightwaveRF and Philips Hue devices respectively; and
  • the “+” and “-” volume buttons send IR commands to the TV (this happens to be via the Kodi add-on but is distinct from the Kodi control) – the “VOL_p” and “VOL+m” being the names of the IR codes in the TV’s LIRC config file.

A major gotcha here is that when controlling a device such as the TV with an IR blaster, there will be an overlap between the IR blast from the Hama remote and that from the IR blaster connected to the Kodi Pi, and the TV will find it difficult to isolate the IR intended for it. To avoid this I’ve had to put tape over both the TV’s IR receiver and the IR blaster glued to it, such that IR from the Hama control can’t get through.

The end result is that I can now use a single IR remote control to navigate and control Kodi, turn the TV on and off and adjust its volume, and control the lights in the room. Because everything is MQTT under the hood, and I’ve got plumbing to route messages pretty much anywhere I want, there is no reason why that IR remote control can’t do other things too. For example it could turn off all the lights in the entire house, or turn off a TV in another room (e.g. if I’ve forgotten to do so when I left that room), or even cause an action externally via my Azure IoT gateway (more on this in a future blog post). And because the rules engine can use state and other inputs to decide what to do, the action of the IR remote control could even be “contextual”, doing different things depending on circumstances.

 

A Homage to Octoblu

As you may have seen, today Citrix announced that going forward the company will no longer focus on building its own IoT platform; rather, it will focus on applying IoT technology to other Citrix initiatives, using existing IoT platforms to do so. Although there is a very bright future for IoT in Citrix (there are a number of exciting things in the pipeline I can’t talk about here), sadly this means that the freely available octoblu.com IoT platform service will be closed down in 30 days.

Those who know me or follow me on Twitter will know that I’m a big fan of Octoblu. I started playing with the technology soon after Citrix acquired Octoblu three years ago and I was hooked. In the last year or so I’ve had the opportunity to get more deeply involved with the Octoblu team and our Workspace IoT products and services. I’ve been inspired by the platform itself, what can be done with it, and with the enthusiasm, pragmatism, innovative style and friendliness of the Octoblu engineering team. I’ve learned a lot from the Octoblu team about how to develop scalable, cloud-native services using modern devops and CI/CD tools and techniques. Octoblu was also my introduction to Node.js (which I use in all sorts of places now) and CoffeeScript (which I still don’t really get on with 🙂 ).

Over the years I’ve used Octoblu in so many ways. I’ve been fortunate enough to present a number of Octoblu and IoT sessions at various events, and to produce demos for others’ presentations:

  • A Slack interface to ShareFile
  • An IoT chatbot using Slack to troubleshoot meeting room AV
  • Various Amazon Alexa demos controlling slideshows, launching apps via Citrix Receiver, and other stuff
  • and more!

I’ve loved using the online interactive designer to create some really powerful flows with no, or very little, coding. See below for a gallery of some of my favourites.

It is thanks to Octoblu, Chris Matthieu, and the entire Octoblu team that I’ve had these opportunities and gained these perspectives – without you I’d never have done any of this. For that, I thank you all.

Hack the planet!

(Gallery: SystemTestXenServerConnector, pptflow, SYN132-ShareFileSlack, Paddington, WORK_ACCOUNT_SmartSpacesV2, WORK_ACCOUNT_chatbotforblogpost)

An IoT-connected PowerPoint multi-device show!

Today I was honoured to have the opportunity to discuss the challenges and opportunities IoT brings to the world of technical documentation in a keynote for the inaugural Cambridge meet-up of the Write The Docs community.

One of the topics I covered was how IoT enables human-computer interaction across a much broader range of devices than the traditional screen, keyboard and pointing device – the 4th generation user interface as Steve Wilson describes it – leading to richer, more natural, and more immersive experiences for users of applications (in the broadest sense).

To help illustrate this in the context of a presentation I decided to extend the slideshow beyond the projector and slide clicker and include lights and buttons to create a more entertaining experience. For example I talked about some of the opportunities devices such as Amazon Echo could create. As I reached the relevant slide a set of LEDs came on to illuminate an actual Echo device on a stand on the stage. I also had a large push-button illuminated with controllable multi-coloured LEDs; this became a software-defined push-button which did different things during the course of the presentation including advancing slides, turning off the LEDs, and so on. The LED colour and action assigned to the button were controlled based on the slide being projected.

As usual I used Octoblu as the IoT automation platform. A key input to the flow was a trigger for each time the PowerPoint presentation advanced a slide. To do this I created a PowerPoint macro named “OnSlideShowPageChange” (called in the obvious manner by the application) which sent a HTTP POST to an Octoblu trigger with the current slide number and the total number of slides from the presentation.

Sub OnSlideShowPageChange(ByVal SSW As SlideShowWindow)
    Set objHTTP = CreateObject("MSXML2.ServerXMLHTTP")
    URL = "https://triggers.octoblu.com/v2/flows/<UUID>/triggers/<UUID>"
    objHTTP.Open "POST", URL, False
    objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
    objHTTP.setRequestHeader "Content-Type", "application/json"
    objHTTP.send "{""currentSlide"":" & SSW.View.CurrentShowPosition & ",""numSlides"":" & SSW.Presentation.Slides.Count & "}"
End Sub

To drive PowerPoint I created a new Octoblu connector that runs on my laptop and can remote control a PowerPoint slideshow. This is a very simple connector that really just adds an Octoblu shim on top of the excellent slideshow NPM library. The connector can do a few things but the most useful is the “GotoSlide” endpoint, which can take a slide number or one of “next”, “prev”, “first” or “last”. A simple example of using it in a flow:

(Flow screenshot: pptnode)

(BTW if you want to use my connectors see https://github.com/jamesbulpin/my-meshblu-connectors for how to add my custom connector repository to your own Octoblu account. All the custom connectors in this blog post can be found in that repo.)

The flow looks rather more complex than it really is:

(Annotated flow screenshot: pptflowannotated)

There are essentially five main parts to the flow:
  1. The trigger from the PowerPoint macro (telling us which slide is being shown) sets the state of a string of WS2811 LEDs (using my WS2811 Raspberry Pi connector, which is also in the repo mentioned above) based on the slide. These LEDs illuminate the Echo and a few other props. The JSON templates contain configuration to tell the WS2811 connector which LEDs to turn on to illuminate the relevant prop. When the final slide is shown the entire string is put into a moving pattern (just for fun 🙂 ).
  2. The slide number is also used to show a slide progress bar on a second string of WS2811 LEDs. The connector takes the current slide number and the total number of slides. With the LEDs in the string mapping one-to-one to slides in the presentation, it turns LEDs green for slides already shown, red for those not yet shown, and yellow for the currently showing slide.
  3. The slide number is used to define the colour of the illuminated push-button (or to turn it off entirely) and to send it a string which the button will return (see below) if pushed – this makes the function of the button dynamically configurable depending on which slide is being shown.
  4. If the push-button is pushed it sends the string configured in #3 above. This part of the flow dispatches messages depending on that string. If it’s “ON” all the LEDs on both strings are put into a moving pattern and “OFF” turns all LEDs off. Otherwise it’s interpreted as a slideshow command (e.g. “next”) and routed to the PowerPoint connector described earlier.
  5. An additional effect of the button push in the “ON” and “OFF” cases is setting the button to do the opposite, i.e. if the push was “ON” then the button is reconfigured for “OFF”.

An aside on the WS2811 connector: This works by specifying a mode (via a parameter on the message) and, depending on the mode, some additional parameters such as colour or slide number. Currently the available modes are:

  • “off” – what you’d expect
  • “solid” (or “color”) – display a solid colour on all LEDs – requires the “color” parameter (can be a word such as “red” or a hash hex value such as “#00553e”)
  • “slide” – the slide progress bar described above – requires “slide” and “slidemax” parameters
  • “colorwheel” – a moving and colour-changing dynamic pattern
  • “twinkle” – colorwheel modulated with a random twinkling (each LED turning on and off randomly)
  • “percent” – show a percentage bargraph (“VU meter” style) on the LEDs – requires the “percent” parameter, turns on the first percent% of the LEDs
  • “direct” – takes a JSON object containing a description of which LEDs should be which colour: pass the object (e.g. using the JSON Template node and passing {{msg}} in the parameter) containing a key named “groups”, which is a list of objects with “color” (string colour name/hash code) and “leds” (a list of integer LED indices, with the first LED on the wire being zero) parameters.

e.g.:

{
  "groups":[
    {
      "color":"blue",
      "leds":[1,3,5,9]
    },
    {
      "color":"red",
      "leds":[0,10,132]
    }
  ]
}

(This WS2811 connector, developed in collaboration with John Moody, is a better version of the one I described in a previous post – see that post for info on the wiring.)

Now of course there are ways to do similar things without IoT, but like many things with IoT it’s the removal of barriers of cost, accessibility and vendor compatibility that makes an IoT approach interesting.

If you’re at Citrix Synergy 2017 in Orlando later this month be sure to join me and a growing list of IoT and automation experts for SYN401 to see what else is possible with the Octoblu platform. I can guarantee that the PowerPoint (there won’t be much – it’s a far more interactive session than that!) will be even more IoT than in today’s event!

Displaying messages on my Christmas tree lights

IoT and Christmas go well together – all those festive lights and sounds are just asking to be connected up to the Internet and automated! My own Christmas tree is of course no exception and, as I wrote about over on the Citrix blogs, now sports a string of individually-addressable LEDs controlled by a Raspberry Pi which is in turn managed by Octoblu.

But I decided to go a step further than simply displaying some fun flashing patterns and colours on the tree: seeing as every LED can be individually managed, if I could determine the exact position on the tree of each LED I could come up with geometric patterns or even use it as a sort of dot matrix display to show messages.

My solution was to connect a webcam to the Raspberry Pi and write a script that turned each LED on one by one, captured an image from the webcam, and found the coordinates of the bright spot in the image. After the script ran through all 100 LEDs it normalised the coordinates to a 100 x 100 grid and output a list of coordinates that I could then use in my LED controller program. The code is a bit gross, being a mash-up of various bits of other scripts quickly bolted together, including a web server that I used to view the image while positioning the camera.
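At its heart the per-LED step is just an argmax over pixel brightness. A minimal sketch of that step (in JavaScript for consistency with the controller code; capture and web server omitted, and a raw 24-bit RGB frame is assumed):

// Find the brightest pixel in a raw 24-bit RGB frame (the lit LED).
function findBrightSpot(frame, width, height) {
  var best = { x: 0, y: 0, v: -1 };
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = (y * width + x) * 3;
      var v = frame[i] + frame[i + 1] + frame[i + 2]; // crude brightness
      if (v > best.v) { best = { x: x, y: y, v: v }; }
    }
  }
  return best;
}

// Normalise image coordinates onto the 100 x 100 grid the patterns use.
function normalise(spot, width, height) {
  return [Math.round(spot.x * 100 / width), Math.round(spot.y * 100 / height)];
}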

To visually check the output I wrote a quick bit of HTML/JavaScript that used the pre-normalised coordinates to overlay blobs on top of a captured image from the webcam (with all LEDs lit) – I could then see if the blobs lined up with the LEDs. As you can see in the image here there is at least one misalignment caused by the reflection of the light on a nearby white surface.

So armed with a set of coordinates for the LEDs I then extended my existing WS2811 LED control program (see my post on the Citrix blog for more on this, and on the hardware setup) to use coordinates, instead of just LED sequence number, for its patterns.

Firstly I created a simple set of test patterns that moved a vertical line horizontally across the tree, a horizontal line vertically and then the two different diagonals. I also updated the VU meter effect to use Y coordinates instead of LED sequence number.

However the most fun piece was rendering text on the tree. To do this I found an encoding of a dot matrix font online (there are loads out there) and morphed it into a form suitable for inclusion in Node.js. I then wrote a very simple rendering function that runs over a string of text one character at a time, using the font configuration to determine which pixels in each column of the 5×7 matrix should be illuminated. The end result was an array with an entry per column (for the entire message), with the entry being an encoding of which row LEDs should be illuminated. I also created a similar array to record which colour to use for each column – in this case populated to cycle through red, green and blue on a character-by-character basis (this helps the eye to see the characters better).

var dmmessage = " XMAS";

var dmarray;          // per-column bitmask of which of the 7 rows are lit
var dmcolor;          // per-column colour (0xRRGGBB)
var colorcycle = 255; // start on blue (0x0000ff)
function renderMessage() {
  // 7 columns per character: 5 font columns plus 2 blank spacing columns
  dmarray = new Uint32Array(dmmessage.length * 7 + 5);
  dmcolor = new Uint32Array(dmmessage.length * 7 + 5);
  for (var i = 0; i < dmmessage.length; i++) {
    // the font table starts at ASCII 0x20 (space), 5 bytes per glyph
    var fonttablebase = (dmmessage.charCodeAt(i) - 0x20) * 5;
    for (var j = 0; j < 5; j++) {
      dmarray[i*7+j] = font5x7[fonttablebase + j];
      dmcolor[i*7+j] = colorcycle;
    }
    // cycle blue -> green -> red, one colour per character
    if (colorcycle == 0xff0000) {
      colorcycle = 255;
    } else {
      colorcycle = colorcycle << 8;
    }
  }
}

In the main part of the WS2811 driver program, where a loop iterates over the 100 LEDs choosing which colour to display for each, the code uses the incrementing offset variable (increments once per 100-LED update cycle, every 80ms) to index into the column array, offset by the X coordinate of the LED. A quantised Y coordinate of the LED is used to look up the row pixel on/off data from the array entry – the quantisation effectively creating 7 rows across the surface of the tree.

function dotmatrix_y(y) {
  // Quantise a 0-100 Y coordinate into one of the 7 display rows; y < 25
  // returns row 8, which no font column ever sets, so those LEDs stay dark.
  if (y < 25.0) return 8;
  if (y < 40.0) return 0;
  if (y < 50.0) return 1;
  if (y < 60.0) return 2;
  if (y < 70.0) return 3;
  if (y < 80.0) return 4;
  if (y < 90.0) return 5;
  return 6;
}

function dotmatrix(pos) {
  var x = xy.xy[pos][0];
  var ydot = dotmatrix_y(xy.xy[pos][1]);

  // "offset" increments every 80ms, scrolling the message across the tree
  var idx = Math.floor(((offset + x)/10)%dmarray.length);
  var column = dmarray[idx];

  if ((1<<ydot)&column) {
    return dmcolor[idx]; // pixel set: use this column's colour
  }
  return 0; // pixel clear: LED off
}

I added a new parameter to the Octoblu message to allow the display string to be set. I updated the Octoblu flow and Alexa skill to allow a command such as “Alexa, tell the Christmas tree to say Hello”.

Alexa intent schema (the new part is the TextIntent; full code here):

{
  "intents": [
    {
      "intent": "ModeIntent",
      "slots": [
        {
          "name": "mode",
          "type": "LIST_OF_MODES"
        }
      ]
    },
    {
      "intent": "ColorIntent",
      "slots": [
        {
          "name": "color",
          "type": "LIST_OF_COLORS"
        }
      ]
    },
    {
      "intent": "TextIntent",
      "slots": [
        {
          "name": "text",
          "type": "LIST_OF_WORDS"
        }
      ]
    },
    {
      "intent": "AskOctobluIntent"
    },
    {
      "intent": "SorryIntent"
    }
  ]
}

AWS Lambda intent handler (the new part is the TextIntent handling; full code here):

def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """

    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])

    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']
    
    mode = None
    color = None
    text = None

    session_attributes = {}
    speech_output = None

    # Dispatch to your skill's intent handlers
    if intent_name == "ModeIntent":
        if ('mode' in intent['slots']) and ('value' in intent['slots']['mode']):
            mode = intent['slots']['mode']['value']
    elif intent_name == "ColorIntent":
        if ('color' in intent['slots']) and ('value' in intent['slots']['color']):
            color = intent['slots']['color']['value']
            mode = "solid"
    elif intent_name == "TextIntent":
        if ('text' in intent['slots']) and ('value' in intent['slots']['text']):
            text = intent['slots']['text']['value']
            mode = "dotmatrix"
    elif intent_name == "AskOctobluIntent":
        mode = "octoblu"
    elif intent_name == "SorryIntent":
        speech_output = "I'm sorry James, I can't do that"
    else:
        raise ValueError("Invalid intent")

    obmsg = {"debug":intent}
    if mode:
        obmsg["mode"] = mode
    if color:
        obmsg["color"] = color
    if text:
        obmsg["text"] = text
    url = octoblu_trigger
    data = urllib.urlencode(obmsg)
    req = urllib2.Request(url, data)
    response = urllib2.urlopen(req)
    the_page = response.read()
    
    if not speech_output:
        speech_output = "OK"
    return build_response(session_attributes, build_speechlet_response(speech_output))

WS2811 Octoblu message handler (the new part is the “text” handling):

conn.on('ready', function(rdata){
  console.log('UUID AUTHENTICATED!');
  console.log(rdata);

  clearTimeout(connectionTimer);
  connectionTimer = undefined;

  conn.update({
    "uuid": meshbluJSON.uuid,
    "token": meshbluJSON.token,
    "messageSchema": MESSAGE_SCHEMA,
  });

  conn.on('message', function(data){
    console.log('Octoblu message received');
    console.log(data);
    mode = data.mode;
    color = ("color" in data)?data.color:null;
    if (("text" in data) && data.text) {
      dmmessage = " " + data.text.toUpperCase();
      renderMessage();
      offset = 0;
    }
    handle_message(mode, color)
  });

});

Octoblu connects together the HTTPS POST request from the Alexa skill handler with the WS2811 driver program.


And there we have it – Alexa-controlled scrolling message display on a Christmas tree thanks to Raspberry Pi and Octoblu!

 

Automating HDMI input switching

I recently wrote about automating TV power control using IR LEDs. This enabled me to turn a TV and amplifier on and off with Amazon Echo and as part of other automation flows. However, with a cable TV box, Kodi media player, PlayStation 3 and Google Cast I still needed to choose the TV’s input source manually. I’d like to be able to say “Alexa, turn on cable”, or “Alexa, turn on Google Cast” and have it not only turn on the TV and amp as it can now, but also switch to the correct input.

One option would be to take advantage of automatic HDMI switching driven from a source coming online – but with things like the Cast and Kodi player being always-on this made this a no-go.

Another option was to use an external HDMI switcher box with an IR remote control and blast IR commands at this in the same way as for the TV itself. In fact this was my original plan and I carefully selected a HDMI switcher that was not only getting decent reviews, but also had a remote control button per channel (simply having a “next input” cycle button was not suitable because I’d have no way to know when I got to the input I wanted). However when I received the switcher I thought I’d try something different, just for fun.

Hardware

This particular switcher has an LED for each input, showing which is selected, and a single push button to cycle through the inputs. Having opened up the box by prising off the plastic feet and unscrewing the 4 small screws underneath, I took a good look at the circuit board. I observed that the LEDs were wired with the cathodes connected to ground, therefore the anodes would be at a positive voltage when the LED was lit – this is good enough to directly drive a Raspberry Pi GPIO input pin in pull-up mode. By connecting all 5 LED anodes to GPIO pins I could easily tell which input was selected. The push button was also connected to ground, suggesting it was being used in a pull-up fashion. I took a gamble that I could get away with driving this directly from a Raspberry Pi GPIO output without any further interfacing. To make life even easier all of the LED and switch pins were through-hole soldered into the switcher’s PCB (i.e. not surface mount like most of the board), meaning small wires could be fairly easily soldered onto them.


Even more conveniently the arrangement of the box, although very compact, had plenty of room to bring in a 9-core cable (5 LEDs, 1 switch, 1 ground and two unused wires), leading to a neat appearance. To route the cable out I drilled a 5mm hole close to the IR socket on the edge of the box and routed the wires over the top side of the PCB.

On the Raspberry Pi end I simply chose 6 unused GPIO pins and connected the LEDs and switch input as well as connecting ground on the HDMI box to a ground pin on the GPIO header.

Software

I wanted to use the same Raspberry Pi running LibreELEC and Kodi that I was already using to drive the IR emitters. This meant extending the existing Kodi add-on with the capability to read the HDMI switcher’s LED status and drive its push button. The general strategy was to have an incoming MQTT message set the desired input number (1-5) and then send button push pulses until the associated LED lit up. One potential gotcha here is that this switcher skips over inputs that do not have an active HDMI source; therefore if the MQTT message requests an input with either nothing connected, or with a source that’s not switched on, the button pushing could go on forever. To avoid this I limited the number of button push attempts per MQTT message to 10.

The prerequisite was to add the “Script – Raspberry Pi Tools” Kodi add-on (search the registry from the Kodi add-on manager UI) to add the GPIO Python library.

The code is pretty simple. At start of day I set up GPIO – the first 5 pins are the inputs from the switcher’s LEDs and the final pin is the output to the push button:

sys.path.append('/storage/.kodi/addons/virtual.rpi-tools/lib')
import RPi.GPIO as GPIO

# Setup pins for HDMI switcher control
GPIO.setmode(GPIO.BCM)
GPIO.setup(21, GPIO.IN, pull_up_down = GPIO.PUD_UP) 
GPIO.setup(8, GPIO.IN, pull_up_down = GPIO.PUD_UP) 
GPIO.setup(16, GPIO.IN, pull_up_down = GPIO.PUD_UP) 
GPIO.setup(12, GPIO.IN, pull_up_down = GPIO.PUD_UP) 
GPIO.setup(7, GPIO.IN, pull_up_down = GPIO.PUD_UP)
GPIO.setup(20, GPIO.OUT)
GPIO.output(20, 1)

And initialise some global variables to record the desired input number and the count of button push attempts:

hdmiinput = 1
hdmiattempts = -1

In the MQTT message handler I added another sub-topic case:

            elif ll[2] == "HDMI":
                try:
                    hdmiinput = int(msg.payload)
                    hdmiattempts = 0
                except:
                    pass

And in the main loop (which cycles every 0.5s):

        if (hdmiattempts > -1) and (hdmiattempts < 10):
            for i in range(4):
                hdminow = 0
                if GPIO.input(21): hdminow = 1
                if GPIO.input(8): hdminow = 2
                if GPIO.input(16): hdminow = 3
                if GPIO.input(12): hdminow = 4
                if GPIO.input(7): hdminow = 5
                log("HDMI switcher currently on %s, want %s" % (hdminow, hdmiinput))
                if hdminow == hdmiinput:
                    hdmiattempts = -1
                    break
                else:
                    log("Sending HDMI switch button trigger")
                    GPIO.output(20, 0)
                    time.sleep(0.1)
                    GPIO.output(20, 1)
                    time.sleep(0.1)
                    hdmiattempts = hdmiattempts + 1

The full Kodi add-on code can be found here. It’s quick and dirty, but it works.

Automation with Echo

So now I have a way to change HDMI input to the TV by sending a MQTT message such as “KODI/Lounge/HDMI=4“. To use this with Amazon Echo I extended my existing TV control with specific cases for each of the 4 inputs in use. The configuration for the Alexa smart home skill adapter (see Controlling custom lighting with Amazon Echo and a skill adapter for more on this) sends a single MQTT command into my home broker:

                {
                    "applianceId":"loungetvcable",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Virgin Media",
                    "friendlyDescription":"Living room TV and amp on cable TV",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvcable"
                    }
                },
                {
                    "applianceId":"loungetvcast",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Chrome Cast",
                    "friendlyDescription":"Living room TV and amp on Google Cast",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvcast"
                    }
                },
                {
                    "applianceId":"loungetvps3",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"PS3",
                    "friendlyDescription":"Living room TV and amp on PS3",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvps3"
                    }
                },
                {
                    "applianceId":"loungetvkodi",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Kodi",
                    "friendlyDescription":"Living room TV and amp on Kodi",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvkodi"
                    }
                },

(See this gist for the context – these rules are just added to the list)

Rules in the rules engine then turn this into TV power control and, in the “on” case, also send the HDMI control message:

  case 'compound/loungetvcable':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "1");
    }
    break;
  case 'compound/loungetvcast':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "2");
    }
    break;
  case 'compound/loungetvps3':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "5");
    }
    break;
  case 'compound/loungetvkodi':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "4");
    }
    break;

In closing

By adding explicit input selection to the set of MQTT commands I can send I can now build more complex automation actions that not only turn things on or off but can also select inputs. This enables a single Echo command such as “Alexa, turn on Chrome Cast” to get all the necessary devices into the right state to achieve the desired outcome of the TV and amp being on and displaying the output of the Cast.


 

Using infrared control from Amazon Echo

A while back I built a skill adapter to allow Amazon Echo’s smart home skill system to manage my LightwaveRF lights via my custom Octoblu+MQTT system. But why stop there? If I can say “Alexa, turn on the living room light” why can’t I also say “Alexa, turn on the TV“? With IoT mains power sockets such as Belkin Wemo or LightwaveRF I could power on and off the TV but I prefer to leave the TV in standby (yes, I know this uses power and kills the planet). Instead I decided to solve the problem by using infrared emitters glued to the front of the devices I wanted to control and allowing Alexa to control these.

Hardware

Each TV in my house is co-located with a Raspberry Pi running the Kodi media player. Therefore it made sense to add the IR emitter capability to these rather than add more hardware. There are various options for emitting IR, including USB IR blasters and professional systems designed for commercial AV use-cases. I chose a homebrew route and built my own emitters using an IR LED, an NPN transistor and a couple of resistors. The whole thing connects to a Raspberry Pi GPIO pin. Using a transistor allows for higher LED currents than the GPIO pin can supply directly and allows multiple emitters to be connected to the same GPIO pin, e.g. to connect several home entertainment devices.


I wired the four components together in free space (no board) and connected them to a three-core cable. I used offcuts of cable sheath and heatshrink tubing to insulate and encapsulate everything. The end result is an IR LED poking out of the top of a black blob on the end of a black cable. It looks a bit ugly close-up but once attached to a black TV case it’s hardly noticeable.

I connected the other end of the cable to the GPIO header on the Raspberry Pi using the +5V, GND and GPIO17 (the default for the LIRC Pi GPIO driver) pins. I then positioned and superglued the emitter onto the front of the TV such that it could see the TV’s IR receiver but without completely blocking it.


Kodi interface

The Raspberry Pi runs the LibreELEC distribution of Kodi. Setting this up to use GPIO-driven IR required me to SSH to the Pi as root and edit /flash/config.txt to add a line “dtoverlay=lirc-rpi” – this is needed to load the kernel driver for Raspberry Pi GPIO for the Linux IR subsystem. I then downloaded the specific IR configurations for my devices. For example for my Sony Bravia TV I ran:

 wget -O "/storage/.config/lircd.conf" \
  'https://sourceforge.net/p/lirc-remotes/code/ci/master/tree/remotes/sony/RM-ED009.lircd.conf?format=raw'

For controlling multiple devices multiple remote control configs can be concatenated into the lircd.conf file.

After a reboot it was time to code a Kodi add-on. This add-on runs as a service and connects to my home MQTT broker in order to receive published messages (more of which later). The MQTT message handler spawns the “irsend” command (comes with the LibreELEC distribution) to send the named IR command(s). For example, the following code is used to control the TV and an amplifier at the same time (using two IR emitters wired to the same GPIO pin):

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    global player, hdmiattempts, hdmiinput

    try:
        msgpayload = str(msg.payload)
        print(msg.topic+" "+msgpayload)
        ll = msg.topic.split("/")
        if len(ll) > 2:
            if ll[2] == "TV":
                log("Incoming TV command: " + msgpayload)
                ircmds = []
                if msgpayload.lower() == "off":
                    # Note: "on" and "off" send the same IR codes - most IR
                    # remotes just toggle power (see "An enhancement" below)
                    ircmds.append(["Sony_RM-ED009-12", "KEY_POWER", 5])
                    ircmds.append(["Marantz_RMC-73", "Amp_Standby", 1])
                elif msgpayload.lower() == "on":
                    ircmds.append(["Sony_RM-ED009-12", "KEY_POWER", 5])
                    ircmds.append(["Marantz_RMC-73", "Amp_Standby", 1])
                else:
                    ll = msgpayload.split("/")
                    ircmds.append([ll[0],ll[1],1])
                if len(ircmds) > 0:
                    log("Sending IR commands: " + str(ircmds))
                    for ircmd in ircmds:
                        cmd = "/usr/bin/irsend --count %u -d /run/lirc/lircd-lirc0 SEND_ONCE %s %s" % (ircmd[2], ircmd[0], ircmd[1])
                        os.system(cmd)
                        time.sleep(1)
    except Exception, e:
        log("MQTT handler exception: " + str(e))

I developed the add-on in-place on the Raspberry Pi (as opposed to building a distributable package and importing it through Kodi’s UI). To do this I firstly created a directory:

# mkdir /storage/.kodi/addons/service.mqtt.jamesbulpin

Within this I created two files, mqttservice.py (the service itself) and addon.xml (the add-on manifest), plus a subdirectory “resources” containing the settings and language files and a bundled copy of the Paho MQTT client library.

Overall the structure was this:

LoungeTV:~/.kodi/addons/service.mqtt.jamesbulpin # find . -type f
./resources/__init__.py
./resources/settings.xml
./resources/language/english/strings.xml
./resources/paho/__init__.py
./resources/paho/mqtt/__init__.py
./resources/paho/mqtt/client.py
./resources/paho/mqtt/subscribe.py
./resources/paho/mqtt/publish.py
./mqttservice.py
./addon.xml

The full code can be found at https://bitbucket.org/jbulpin/kodihomeautomation – note that this includes hard-coded IR devices and commands so would need customisation before use.

The easiest way to get Kodi to discover the add-on is to reboot. It may also be necessary to enable the add-on via the “My add-ons” menu in Kodi. The settings dialog can also be accessed from here – this allows setting of the MQTT broker IP address and of the name that will be used in the MQTT topic the add-on subscribes to – if this is set to “Lounge” then the topic prefix is “KODI/Lounge”.

With the add-on running I tested it by manually sending MQTT messages:

# mosquitto_pub -m on -t KODI/Lounge/TV

Control with Amazon Echo

So far I’ve described how to get from a MQTT message through to an infrared control action on a TV. The next step was to have the MQTT message be sent in response to an Amazon Echo voice command. Happily this was very easy because I’d already built infrastructure to send MQTT commands (albeit to manage lighting) from Alexa. Adding TV control was just a matter of adding another entry to the list of devices returned by the skill adapter when a discovery was initiated, and then initiating a discovery (“Alexa, discover my smart home devices”).

Example of an existing entry:

                {
                    "applianceId":"DiningRoomLights",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Dining room lights",
                    "friendlyDescription":"Fireplace lights in the dining room",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"Light/Dining/Fireplace"
                    }
                },

New entry for the TV:

                {
                    "applianceId":"loungetv",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"TV",
                    "friendlyDescription":"Living room TV and amp",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"KODI/Lounge/TV"
                    }
                },

You can see this just uses the KODI MQTT message instead of a lighting message. Alexa has no idea it’s a TV rather than a light that’s being controlled. The entire skill adapter code can be seen here.

Of course it’s also possible to control multiple devices from a single Alexa command, just by sending multiple MQTT messages.

An enhancement

This all works well but you may have noticed that the IR commands to turn the TV on and off are the same – this is because that’s usually how IR remote controls work. This means that any command, via Alexa or otherwise, will actually toggle the current power state rather than move to the defined state. In most cases this isn’t a problem – for example, why would you ask Alexa to turn the TV on if it’s on already? However for other more complicated automation activities I’d like to be able to control the TV without knowing whether it’s on or off already (e.g. I want an “all off” button or timer for a bedroom that will turn off all lights and the TV no matter what was already on).

To do this I added an input to sense if the TV was on – power being available on a TV USB port was my chosen method. I used a small opto-isolator, driven by the USB power, with the other side connected to a GPIO input.

To use this in the Kodi add-on I first had to add the “Script – Raspberry Pi Tools” add-on (search the registry from the Kodi add-on manager UI to find this) to add the GPIO Python library. To use this from my add-on code I added:

sys.path.append('/storage/.kodi/addons/virtual.rpi-tools/lib')
import RPi.GPIO as GPIO

# Set up pin for TV power monitoring (active low)
GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN)

Then added simple checks in the MQTT message handler to only send IR commands if the current state is not the commanded state:

                if msgpayload.lower() == "off":
                    if GPIO.input(23) == 0:
                        # Only send if the TV is on (GPIO pin is low);
                        # the IR code named "OFF" is really a power toggle
                        ircmds.append(["Vestel_TV", "OFF", 5])
                elif msgpayload.lower() == "on":
                    if GPIO.input(23) == 1:
                        # Only send if the TV is off (GPIO pin is high)
                        ircmds.append(["Vestel_TV", "OFF", 5])

In a future blog post I’ll show how I extended this TV control to add automatic input source selection by interfacing to, and automating, a HDMI switcher box.
