Mimicking arm movement with a pan-and-tilt unit and Fitbit

This is one of those “solution looking for a problem” things. Last week I visited the (relatively) new Raspberry Pi store in Cambridge. The odds of me leaving empty-handed were very low and I ended up with a few impulse purchases of cool-looking gadgets. One of these was the cute Pan-Tilt HAT from Pimoroni. I didn’t really have a purpose for it but it looked like a fun thing to play with.

My first exercise was to build a simple face tracker that used OpenCV to detect a face in the image captured from the camera attached to the pan-tilt unit, and then attempt to move the camera to put the face in the centre of the frame.

My second experiment, and the subject of this post, was to couple the pan-tilt unit to my new Fitbit Versa watch, using the orientation sensors in the watch to control the pan-tilt unit in response to movement of the watch, i.e. to have the unit mimic what my arm is doing.

The Fitbit can detect orientation in three axes; however, only roll and pitch can be considered absolute because they are relative to gravity. Yaw is relative to an arbitrary datum because there is no detectable or logical datum the watch can work to (other orientation systems use a magnetic bearing, for example, with magnetic north as the datum to reference yaw orientation to).

The watch’s orientation axes: roll, pitch, and yaw

As the pan-tilt unit only has two axes, it made sense to ignore the watch’s yaw orientation and use its roll and pitch to control pan and tilt respectively. Pitch and tilt are a natural fit, but pan, in normal use-cases at least, is a yaw concept, not a roll one. However, turning the pan-tilt unit through 90 degrees conveniently turns the pan axis into a roll axis and hence allows the unit to directly mimic the watch orientation.

The pan-tilt unit, turned through 90 degrees

The high-level software architecture is that the Pi runs a small Python web server providing an HTTP POST API that takes data from the watch, processes it (see later), and commands the pan and tilt positions accordingly. The Fitbit watch runs an app which repeatedly polls the device’s orientation API, then sends the readings via the companion app on the smartphone to the HTTP API on the Pi.

Fitbit watch apps are little JavaScript things that can be side-loaded onto a device using the Fitbit Studio IDE, with the developer bridge enabled on the device itself. I used the getting started guide to help build this app. Conveniently, Fitbit Studio has a template for an app that can read watch sensors, including the orientation sensor – I started with this. Full code is at https://github.com/jamesbulpin/fitbit-pantilt; a summary of the relevant bits for periodically sampling the orientation sensor is:

import { OrientationSensor } from "orientation";
const orientation = new OrientationSensor();
orientation.start();
setInterval(function() {
    // Read the sensor from orientation.quaternion
}, 1000);

However, the watch app itself cannot directly communicate over the network. To enable this, Fitbit has the concept of a companion app, another JavaScript thing that runs within the Fitbit app on the linked smartphone. It is started automatically when the watch app starts. The libraries provide a message-based communication channel between the two apps. I therefore added the relevant code to the watch app to send the orientation data (and in fact a bunch of other sensor data due to using the template code and being too lazy to remove the unwanted bits 🙂 ):

import * as messaging from "messaging";
...
setInterval(function() {
    // Read the sensor from orientation.quaternion
    if (messaging.peerSocket.readyState === messaging.peerSocket.OPEN) {
       messaging.peerSocket.send({orientation:orientation.quaternion});
    }
}, 1000);

and created a companion app that would receive this data and send it to the Pi’s API (see below; local IP address hard-coded):

import * as messaging from "messaging";

messaging.peerSocket.onmessage = (evt) => {
    fetch("http://10.0.0.138:9090/",
          {
              method:"POST",
              body:JSON.stringify(evt.data)
          }).then(function(resp) {}).catch(function (error) {});
}

The watch app also displays the sensor data on screen (thanks to the template code) which is a useful debugging tool. See https://github.com/jamesbulpin/fitbit-pantilt for the full code.

On the Raspberry Pi a little Python webserver implements the API that the companion app calls. I chose to write this in Python to take advantage of the existing library that provides control of the pan-tilt device. This code also maps the Fitbit representation of orientation (a quaternion) to a roll-pitch-yaw representation. It then adjusts the roll and pitch values such that when the watch is horizontal, the pan-tilt head (mounted as in the photo above) positions its camera platform horizontally. The code is messy, don’t judge me 🙂
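
For reference, the standard quaternion-to-Euler conversion used for this mapping looks roughly like the following sketch (written here in JavaScript for brevity even though the real server is Python; the quaternion component order and the final servo mapping are assumptions):

// Convert a quaternion (assumed component order [w, x, y, z], as I believe the
// Fitbit OrientationSensor reports) to roll and pitch in degrees.
function quaternionToRollPitch([w, x, y, z]) {
  const roll = Math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));
  const sinp = Math.max(-1, Math.min(1, 2 * (w * y - z * x))); // clamp to avoid NaN from asin
  const pitch = Math.asin(sinp);
  const toDeg = (r) => (r * 180) / Math.PI;
  return { roll: toDeg(roll), pitch: toDeg(pitch) };
}

// With the unit on its side as in the photo: pan <- roll, tilt <- pitch,
// plus whatever fixed offsets are needed so "watch horizontal" = "platform horizontal".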

The end result is that the camera platform on top of the pan-tilt unit more-or-less follows the pitch and roll orientation of the watch, albeit with a little lag and jerkiness. Here’s what it looks like.

 

 


Connecting Big Mouth Billy Bass to Azure IoT

Back in 2016 I reverse-engineered and hacked a “Big Mouth Billy Bass” to connect it to the internet and have it speak any words sent to it, mouthing them via its motorized mouth. I used the Octoblu IoT platform to automate calling a text-to-speech API and to provide the cloud-to-device connectivity for the fish. With the Octoblu service having since been shut down, the IoT talking fish has been silent for many months. Until today.

In re-animating Billy Bass I had two objectives: (1) to bring it back to life using a different platform, specifically Azure IoT Hub; and (2) to experiment with using the (currently in preview) Azure Cognitive Services text-to-speech API.

I didn’t make any changes to the hardware or the Arduino firmware described in my original blog post (part 1). The primary changes were to the “piaudio.js” node.js program that runs on the Raspberry Pi. As described in the original blog post (part 2), this was originally an Octoblu connector which acted as an agent to maintain a connection to Octoblu’s mesh network and handle incoming cloud-to-device messages. In the Octoblu implementation, the Octoblu “flow” in the cloud took the text string that the fish was required to say and called a text-to-speech API to get back an mp3 file; it then sent this to the connector, which played it on the Pi using a local audio player. That sent audio to the speaker in the fish, with the embedded Arduino synthesizing the fish’s mouth movements from the analog audio being played.

That approach isn’t suitable for Azure IoT Hub because the latter has a 256kB maximum message size, which may not be enough for some speech mp3s, and accounting is done at either 4kB or 0.5kB units, meaning that conveying mp3 data in messages can very quickly burn through allowances or rack up large bills. Instead I decided to have the IoT message carry just the text to be spoken and have the piaudio.js script on the Pi connect to the text-to-speech service to acquire the synthesized mp3; this does potentially add delay due to the extra round-trip to the endpoint, but it seems like a reasonable trade-off. With Azure Cognitive Services currently previewing a TTS API it seemed like a good opportunity to experiment with that.

Firstly I created a new IoT device in my existing Azure IoT Hub resource. I named this device “BillyBass” and copied the connection string to use in the code described below.

Azure IoT Portal

The changes to piaudio.js were fairly straightforward. I retained the code to talk to the fish’s Arduino over serial-over-USB (this controls the head and tail movement and the colour of the eye LEDs), and the code that launched omxplayer to play the mp3, but ripped out the Octoblu connector code. In its place I added a basic usage of the Azure IoT device SDK which called the existing message handler (as used by the Octoblu client code) on each received message.
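
The replacement is only a few lines. A minimal sketch of the idea, using the azure-iot-device node SDK (the transport choice, environment variable, and handler name are my assumptions, not the exact code):

const Client = require('azure-iot-device').Client;
const Protocol = require('azure-iot-device-amqp').Amqp; // AMQP assumed; MQTT would also work

// Connection string copied from the "BillyBass" device in the Azure portal
const client = Client.fromConnectionString(process.env.IOTHUB_DEVICE_CONNECTION_STRING, Protocol);

client.open(function (err) {
  if (err) { return console.error('Could not connect: ' + err.message); }
  client.on('message', function (msg) {
    let payload = {};
    try { payload = JSON.parse(msg.data.toString()); } catch (e) { /* ignore malformed messages */ }
    handleFishMessage(payload);           // the existing message handler (hypothetical name)
    client.complete(msg, function () {}); // acknowledge so IoT Hub doesn't redeliver
  });
});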

In this case I chose to make my messages JSON objects.

The other main change was to call the Azure text-to-speech API directly from the piaudio.js code. This is a very simple REST API to use: first a call to get a bearer token (which lasts for 10 minutes so can be cached), followed by a call to perform the text-to-speech conversion, taking an SSML (Speech Synthesis Markup Language) request as input and returning mp3 data as output.

Auth:
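
A sketch of the token request (the region in the endpoint and the use of the request module are assumptions; see the full code linked below for the real thing):

const request = require('request');

// Exchange the Cognitive Services subscription key for a bearer token.
// Tokens last around 10 minutes, so the result can be cached and reused.
function getSpeechToken(subscriptionKey, callback) {
  request.post({
    url: 'https://westeurope.api.cognitive.microsoft.com/sts/v1.0/issueToken', // region is an assumption
    headers: { 'Ocp-Apim-Subscription-Key': subscriptionKey }
  }, function (err, resp, body) {
    callback(err, body); // body is the raw token string
  });
}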

TTS:
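
And a sketch of the conversion call itself (the voice name, region, and output format are assumptions):

// POST SSML to the TTS endpoint and get mp3 data back.
function textToSpeech(token, text, callback) {
  const ssml =
    '<speak version="1.0" xml:lang="en-GB">' +
    '<voice xml:lang="en-GB" name="Microsoft Server Speech Text to Speech Voice (en-GB, George, Apollo)">' +
    text +
    '</voice></speak>';
  request.post({
    url: 'https://westeurope.tts.speech.microsoft.com/cognitiveservices/v1',
    headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/ssml+xml',
      'X-Microsoft-OutputFormat': 'audio-16khz-128kbitrate-mono-mp3'
    },
    body: ssml,
    encoding: null // get the mp3 back as a Buffer, ready to write to a temp file for omxplayer
  }, function (err, resp, mp3) {
    callback(err, mp3);
  });
}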

The full code can be found here, with library dependencies here.

The fish can now be controlled by sending an Azure IoT Hub cloud-to-device message to the “BillyBass” device using a JSON object with at least a “text” key whose value is the text to synthesize into speech. Optionally I can also add a “color” key with an RGB hex-string value (e.g. “#FF8000”) to set the colour of the eyes while the fish speaks, and a second key, “color2”, to provide a slowly alternating colour pattern on the eyes.
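
For example, a hand-crafted message might look like this (the values are just illustrative):

{
  "text": "Hello from Azure IoT Hub!",
  "color": "#FF8000",
  "color2": "#0080FF"
}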

To test the fish I used the manual “Message To Device” function in the Azure portal to send hand-crafted JSON.

Azure Portal

The next step is to find something interesting to do with this… 🙂

 

An internet-connected Lego minifig

Who doesn’t love a cute little Lego minifig? Who doesn’t smile when they see brightly coloured LEDs in interesting places? Who doesn’t get excited about being able to control stuff via the internet? Nobody. How about combining all three?

Here’s how to build your very own IoT, LED-illuminated, minifig.

Step 1 is to acquire a suitable fig. Mine is, of course, a Citrix-branded one acquired from this year’s Citrix Synergy conference. (Other brands are available 🙂 ) As we’re going to be shining LED light through the minifig’s body, it’s best to use one with a white torso.

Step 2, which really only applies if you’ve got a Citrix minifig and you wish to taunt the Citrix branding people who are still trying to rid the world of the red dots that used to form part of the logo, is to use a small drill (I used a 0.8mm bit with a Dremel 3000) to drill holes through the front of the minifig’s torso where the two dots in the logo are located. Be careful not to drill through the back too. (You might want to make sure other minifigs nearby don’t witness this drastic surgery!)

Drilling holes in the minifig's torso

The LED I recommend is a single segment from a flexible strip of PCB-mounted WS2812B LEDs manufactured by Alitove (this item at Amazon UK). You could use pretty much any 5V WS2811/2812 LED that will physically fit; however, this particular model fits the minifig torso well and, being a surface-mount device, shines its light towards the front of the figure in the orientation it assumes inside the minifig. I used scissors to cut a single segment from this strip.

Strip of Alitove 5V WS2812B LEDs

Step 3 is to create space inside the minifig’s torso to house the LED. Use a sharp knife to cut away the two diagonal protruding plastic veins on the back of the torso. You may need to use a small pair of pliers to twist and pull out the plastic. You may also wish to cut out the front veins as well, to give light from the LEDs a less obstructed path to the minifig’s front.

Inside the minifig torso, showing the rear plastic veins cut away

Make sure the LED fits inside the torso, with the LED’s lens pointing towards the minifig’s front. You may need to slightly trim the sides of the PCB.

Inside the minifig's torso, showing the LED in situ

Step 4 is to solder small wires to the LED PCB’s positive, negative and signal pads. For the signal pad make sure it’s the input one. Usually the arrow on the PCB points away from the input and towards the output; these LEDs have the signal in the middle, with the positive and negative towards the edges of the PCB. It’s best to direct the wires sideways, along the PCB, rather than perpendicularly away from it – this will allow the torso to fit snugly against the legs later. In the photo below the positive is red, negative blue, and signal white. After soldering, check the LED still fits inside the minifig’s torso.

Soldering wires to the LED PCB.

Because we’re using the space inside the torso for the LED, the usual manner of attaching the minifig’s legs won’t work. Therefore, step 5 is to cut off the studs at the top of the legs, making the top of the leg unit as flat as possible.

Cutting off the studs from the top of the minifig's legs.

Step 6 is to create a route for the wires to exit the minifig. The wires will route from the torso through the top of the legs and then out the existing holes on the back of the legs. Ensure that the leg joint is straight (i.e. as if the minifig were standing up) and drill through the top of the hinge to create a hole from above the legs into the hollow inside of the leg. This should be done on the outside of the leg to avoid the hinge itself. I used a 1.6mm drill bit which created a hole big enough for two wires. Do this for both legs. You could of course also have the wires exit from the back of the torso using holes drilled there, which would allow the legs to bend; in my case the legs are fixed because the wires foul the hinge.

Drilling a hole through the minifig's legs to allow wire egress.

Step 7 is to install the LED: ensure the LED is facing forwards and thread the three wires through the two holes drilled in the top of the legs and then out the existing holes in the back of the legs, as in the photo below.

LED wires routed through the minifig's legs

Ensure that wires are positioned such that the LED can be pushed down against the top of the legs.

LED sitting on top of the minifig's legs.

Step 8 is to attach the torso and legs. Because we’ve removed the studs this will require glue or poly cement (I used the latter). First, before applying any glue or cement, check everything fits by pushing the torso over the LED and ensuring the torso fits snugly against the legs. You may need to trim plastic, adjust wires, etc. Apply the glue/cement according to the manufacturer’s instructions and hold the two pieces together until the bond is made. You can then place the head on the minifig in the normal way.

Using poly cement to attach the legs and torso.

Step 9 is to connect the wires to a suitable device. In this case I used an Arduino Uno, wired in the same manner as in my Controlling my IoT Christmas Jumper with CheerLights hack. The positive and negative wires connect to the Arduino’s 5V and GND respectively, and the signal wire connects to digital I/O pin 8. I used crimp connectors to make this connection. Additionally, and this is optional, I added a second 0.1″ crimped plug and second connection in the wires to allow me to more easily detach the minifig from the Arduino. In the photo below this connection is out of view at the back but you can see how the wiring from this to the Arduino itself is a separate 3-core cable with red (5V), blue (GND), and green (signal, D8) wires.

Completed minifig connected to the Arduino.

Step 10 is the software on the Arduino. Firstly, if you don’t have it already, install the Adafruit NeoPixel library according to these instructions. Using the Arduino IDE, create a sketch using the same code as in my Christmas Jumper hack, but modify the WS2811_LED_COUNT variable to be 1 instead of 20. Use the IDE and a USB cable to upload the sketch to the Arduino. To test it, open a serial monitor (from the Tools menu in the Arduino IDE) and enter commands such as “COLOR #FF0000” (ensuring that the serial monitor is configured to use newline line ending and 9600 baud) and “COLOR #00FF00” to turn the LED red and green respectively.

Step 11 is to connect this to the internet. There are many possibilities here: all you need is a script/program on a computer or Raspberry Pi that sends COLOR commands to the Arduino. An example from the Christmas Jumper hack is this node.js program which runs on a Raspberry Pi, to which the Arduino is connected via USB. The program polls the CheerLights API and changes the colour of the minifig’s LED to match the CheerLights colour – this makes your minifig glow the same colour as thousands of other internet-connected lights across the world. To use this on a Raspberry Pi:

  1. Ensure you have an up-to-date node.js and npm on the Pi (see here for how)
  2. Create a directory, e.g. /home/pi/minifig and download my code to a file in this directory, e.g. leds.js
  3. Change directory to /home/pi/minifig and install the required libraries using the command “npm install serialport tinycolor2 request”
  4. Run the program: “node leds.js” – you should see the minifig show a colour within a few seconds.
  5. Test the change by tweeting a new colour to CheerLights
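
For reference, the core of that node.js program is roughly this shape (a sketch rather than the exact code; the CheerLights API URL, serial device path, polling interval, and dimming amount are assumptions):

const request = require('request');
const tinycolor = require('tinycolor2');
const SerialPort = require('serialport');

const port = new SerialPort('/dev/ttyACM0', { baudRate: 9600 }); // Arduino Uno; device path is an assumption
let lastColor = null;

// Poll the CheerLights API and forward colour changes to the Arduino.
setInterval(function () {
  request('http://api.thingspeak.com/channels/1417/field/1/last.txt', function (err, resp, body) {
    if (err || !body) { return; }
    // body is a colour name such as "red"; dim it a little to save battery.
    const hex = tinycolor(body.trim()).darken(25).toHexString().toUpperCase();
    if (hex !== lastColor) {
      lastColor = hex;
      port.write('COLOR ' + hex + '\n'); // same command format as the serial monitor test
    }
  });
}, 15000);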

Whether you’ve got a Citrix minifig like mine, a custom minifig that looks a little like you, or just one stolen from your kids, you can now have it be an internet-connected, LED-illuminated minifig!

Controlling my IoT Christmas Jumper with CheerLights

‘Twas the night before Christmas Jumper Day, when all through the house… not a single festive sweater could be found!

Each year in the Citrix office in Cambridge, UK, we take part in the annual Save the Children Christmas Jumper Day. But at 8pm the evening before, I found myself without a suitable Yuletide sweater to wear, so I decided to make my own. Happily, I had some useful bits and pieces sitting on my workbench, so I set about making myself an IoT-controlled, multi-color LED Christmas jumper. I later decided to connect it to CheerLights. Here’s how it works.

The lights themselves are a string of WS2811 red-green-blue, individually controllable LEDs, meaning each LED can be set to a different color under the control of suitable software. I’m a huge fan of these LEDs; they can be easily connected to Raspberry Pis, Arduinos, and many other devices. They can be chained together to form delightfully elaborate displays with very simple wiring; and can produce some really funky colors and effects. In the past, I’ve used them as Christmas tree lights (including using them to scroll dot-matrix messages on the tree!), for jazzing up PowerPoint presentations, for showing load on a cluster of servers, to illuminate a telephone box panel, and more.

In previous projects, I’ve connected these lights either directly to a Raspberry Pi, or to an Arduino which is itself connected to a Pi via serial-over-USB. The former method is a little hit-and-miss because the 3.3v output from the Pi isn’t always enough to drive the 5v control input to the LEDs; in this case some additional electronics are needed to make it all work. Annoyingly, the particular LEDs I found to use for the Christmas jumper couldn’t handle the 3.3v signal so, to save time soldering an interfacing circuit, I adopted the Arduino method (most Arduinos drive their outputs at 5v). I recycled an Arduino sketch I created some time ago for a big push button that had a circle of 8 WS281x LEDs within its translucent shell, stripping out all the code for the push button and leaving just the part that could take a command over the serial-over-USB channel to change the LED colors (such as “COLOR #FF0000” to show red) — code here.

When I first created this IoT Christmas jumper I controlled it via an Alexa skill – see my Citrix blog post for more details on this. However I later became aware of CheerLights – a project that allows lights across the world to be synchronized to one color and be controlled by anyone via Twitter. My jumper seemed like a great fit for this so I set about modifying the code to work with it. I created a basic node.js program (code here) to run on a Raspberry Pi Zero W that polls the CheerLights API, from which it receives color commands which it then sends to the Arduino via serial-over-USB. It adjusts the color value to reduce the brightness of the LEDs and extend the battery life. I added a call to this script from /etc/rc.local to have it run on boot.

The final step (for phase 1 – there’s more!) was to attach the LEDs and Arduino to a suitable jumper, put it on, connect the Pi to a USB power pack, and secure the whole thing in my pockets, under my belt, and so on. Now my Christmas jumper will change color at the same time as many other lights across the world, all controllable by anyone who wants to.


As a bonus I modified the Alexa skill I was using for the original version of the hack to have it send a #CheerLights tweet in response to an Alexa command. This was done by creating an Azure Logic App to send the tweet and calling that from the Azure Function that I am using as the Alexa skill handler.

 

Using Azure IoT Hub to connect my home to the cloud

I’ve written about my hybrid local/cloud home automation architecture previously: in summary most of the moving parts and automation logic live on a series of Raspberry Pis on my home network, using a MQTT broker to communicate with each other. I bridge this “on-prem” system with the cloud in order to route incoming events, e.g. actions initiated via my Alexa Skills, from the cloud to my home, and to send outgoing events, e.g. notifications via Twilio or Twitter.

Historically this bridging was done using Octoblu, having a custom Octoblu MQTT connector running locally and an Octoblu flow running in the cloud for the various inbound and outbound event routing and actions. However, with Octoblu going away as a managed service hosted by Citrix, I, like other Octoblu users, needed to find an alternative solution. I decided to give Azure IoT Hub and its related services a try, partly to solve my immediate need and partly to get some experience with that service. Azure IoT isn’t really the kind of end-user/maker platform that Octoblu is, and there are some differences in concepts and architecture, however for my relatively simple use-case it was fairly straightforward to make Azure IoT Hub and Azure Functions do what I need them to do. Here’s how.

I started by creating an instance of an Azure IoT Hub, using the free tier (which allows up to about 8k messages per day), and within this manually creating a single device to represent my entire home environment (this is the same model I used with Octoblu).

After some experimentation I settled on using the Azure IoT Edge framework (V1, not the more recently released V2) to communicate with the IoT Hub. This framework is a renaming and evolution of the Azure IoT Gateway SDK and allows one or more devices to be connected via a single client service framework. It is possible to create standalone connectors to talk to IoT Hub in a similar manner to how Octoblu connectors work, but I decided to use the Edge framework to give me more flexibility in the future.

There are various ways to consume the IoT Edge/gateway framework; I chose to use the NPM packaged version, adding my own module and configuration. In this post I’ll refer to my instance of the framework as the “gateway”. The overall concept for the framework is that a number of modules can be linked together, with each module acting as either a message source, sink, or both. The set of modules and linkage are defined in a JSON configuration file. The modules typically include one or more use-case specific modules, e.g. to communicate with a physical device; a module to bidirectionally communicate with the Azure IoT Hub; and a mapping module to map between physical device identifiers and IoT Hub deviceNames and deviceKeys.

The requirements for my gateway were simple:

  1. Connect to the local MQTT broker, subscribe to a small number of MQTT topics and forward messages on them to Azure IoT Hub.
  2. Receive messages from Azure IoT Hub and publish them to the local MQTT broker.

To implement this I built a MQTT module for the Azure IoT Edge framework. I opted to forego the usual mapping module (it wouldn’t add value here) and instead have the MQTT module set the deviceName and deviceKey for IoT Hub directly, and perform its own inbound filtering. The configuration for the module pipeline is therefore very simple: messages from the IoT Hub module go to the MQTT module, and vice-versa.
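
Conceptually, and ignoring the Edge framework plumbing, the MQTT side of the module does something like this (the topic names and the two helper functions are illustrative placeholders):

const mqtt = require('mqtt');
const config = require('/home/pi/.iothub.json'); // see the config file below

const client = mqtt.connect(config.localmqtt.url);

// Outbound: forward selected local topics to IoT Hub, attaching the deviceName/deviceKey
// from the config (forwardToIoTHub() stands in for handing the message to the IoT Hub module).
const topics = ['Alert', 'Heartbeat']; // illustrative subscription list
client.on('connect', function () { topics.forEach(function (t) { client.subscribe(t); }); });
client.on('message', function (topic, payload) {
  forwardToIoTHub(config.iothub, topic, payload.toString());
});

// Inbound: cloud-to-device messages handed over by the IoT Hub module get republished locally.
function onCloudMessage(msg) {
  client.publish(msg.topic, msg.message);
}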

The IoT Edge framework runs the node.js MQTT module in an in-process JavaScript interpreter, with the IoT Hub being a native code module that runs in the same process. Thus the whole gateway is run as a single program with the configuration supplied as its argument.

The gateway runs on a Pi with my specific deviceName and deviceKey, along with MQTT config, stored locally in a file “/home/pi/.iothub.json” that look like this:

{
  "iothub":{
    "deviceName":"MyMQTTBroker",
    "deviceKey":"<deviceKey for device as defined in Azure IoT Hub>",
    "hostname":"<my_iot_hub>.azure-devices.net"
  },
  "localmqtt":{
    "url":"mqtt://10.52.2.41",
    "protocol":"{\"protocolId\": \"MQIsdp\", \"protocolVersion\": 3}"
  }
}

The gateway can now happily send and receive messages from Azure IoT Hub but that isn’t very useful on its own. The next step was to setup inbound message routing from my Alexa Skills.

In the previous Octoblu implementation the Alexa Skills simply called an Octoblu Trigger (in effect a webhook) with a message containing an MQTT topic and message body. The Octoblu flow then sent this to the device representing my home environment, and the connector running on a Pi picked it up and published it to the local MQTT broker. The Azure solution is essentially the same. I created an Azure Function (equivalent to an AWS Lambda function) using a JavaScript HTTP trigger template that can be called with a topic and message body; this then calls the Azure IoT Hub (via an NPM library) to send a “cloud-to-device” (C2D) message to the MQTT gateway device, and the gateway described above picks this up and publishes it via the local broker just like the Octoblu connector did. I then updated my Alexa Skills’ Lambda Functions to POST to this Azure Function rather than to the Octoblu Trigger.

The code for the Azure function is really just argument checking and plumbing to call into the library that in turn calls the Azure IoT Hub APIs. In order to get the necessary Node libraries into the function I defined a package.json and used the debug console to run “npm install” to populate the image (yeah, this isn’t pretty, I know) – see the docs for details on how to do this.
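
Stripped down, the function is roughly this shape (a sketch using the azure-iothub service SDK in the v1 JavaScript programming model; the environment variable names and the treatment of a null topic as a no-op are assumptions based on the description here and the keep-alive trick described later):

const Client = require('azure-iothub').Client;
const Message = require('azure-iot-common').Message;

// HTTP-triggered function: expects { "topic": "...", "message": "..." } and sends it
// as a cloud-to-device message to the MQTT gateway device.
module.exports = function (context, req) {
  if (!req.body || !req.body.topic) {          // null topic: treat as a keep-alive no-op
    context.res = { status: 200, body: 'ok' };
    return context.done();
  }
  const serviceClient = Client.fromConnectionString(process.env.IOTHUB_SERVICE_CONNECTION_STRING);
  serviceClient.open(function (err) {
    if (err) { context.res = { status: 500, body: err.message }; return context.done(); }
    const msg = new Message(JSON.stringify({ topic: req.body.topic, message: req.body.message }));
    serviceClient.send('MyMQTTBroker', msg, function (sendErr) { // target device from the gateway config
      context.res = { status: sendErr ? 500 : 200, body: sendErr ? sendErr.message : 'sent' };
      context.done();
    });
  });
};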

If you’re wondering why I’m using both AWS Lambda and Azure Functions the reason is that Alexa Smart Home skills (the ones that let you do “Alexa, turn on the kitchen lights”) can only use Lambda functions as backends, they cannot use general HTTPS endpoints like custom skills can. In a different project I have completely replaced an Alexa Skill’s Lambda function with an Azure function (which, like here, calls into IoT Hub) to reduce the number of moving parts.

So with all of this I can now control lights, TV, etc. via Alexa like I could previously, but now using Azure IoT rather than Octoblu to provide the cloud->on-prem message routing.

The final use-case to solve was the outbound message case, which was limited to sending alerts via Twitter (I had used Twilio before but stopped this some time back). My solution started with a simple Azure Logic App which is triggered by an HTTP request and then feeds into a “Post a Tweet” action. The Twitter “connection” for the Logic App is created in a very similar manner to how it was done by Octoblu, requiring me to authenticate to Twitter and grant permission for the Logic App to access my account. I defined a message schema for the HTTP request which allowed me to POST JSON messages to it and use the parsed fields (actually just the “message” field for now) in the tweet.
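
The body POSTed to the Logic App is just a small JSON object, for example (the text is illustrative):

{
  "message": "Alert: something at home needs attention"
}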

I then created a second Azure Function which is configured to be triggered by Event Hub messages using the Event Hub-compatible endpoint built into the IoT Hub (if that all sounds a bit complex, just use the function template and it’ll become clearer). In summary this function gets called for each device-to-cloud event (or batch of events) received by the IoT Hub. If the message has a topic of “Alert” then the body of the message is sent to the Logic App via its trigger URL (copied and pasted from the Logic App designer UI). I added the “request” NPM module to the Function image using the same procedure as for the iot-hub libraries above.
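
The event-handler function is similarly small. Roughly (a sketch; the shape of the event object, the environment variable, and the batch handling are assumptions):

const request = require('request');

// Triggered for each device-to-cloud event (or batch of events) arriving via the
// IoT Hub's Event Hub-compatible endpoint (binding configured in function.json).
module.exports = function (context, eventHubMessages) {
  const events = Array.isArray(eventHubMessages) ? eventHubMessages : [eventHubMessages];
  events.forEach(function (evt) {
    if (evt && evt.topic === 'Alert') {
      // Forward the alert body to the Logic App's HTTP trigger, which tweets it.
      request.post({
        url: process.env.ALERT_LOGIC_APP_URL, // trigger URL copied from the Logic App designer
        json: { message: evt.message }
      }, function () {});
    }
  });
  context.done();
};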

The overall flow is thus:

  1. Something within my home environment publishes a message on the “Alert” topic.
  2. The Azure IoT gateway’s MQTT module is subscribed to the “Alert” topic, receives the published message, attaches the Azure IoT Hub deviceName and deviceKey and sends it as an event via the IoT Hub module which sends it via AMQP to Azure IoT Hub.
  3. Azure IoT Hub invokes the second Azure Function with the event.
  4. The Function pulls out the MQTT topic and payload from the event and calls the Logic App with them.
  5. The Logic App pulls out the message payload and sends this as a tweet using the pre-authorised Twitter connector.

Although all of this seems quite complex, it’s actually fairly simple overall: the IoT Hub acts as a point of connection, with the on-prem gateway forwarding events to and from it, and a pair of Azure Functions being used for cloud-to-device and device-to-cloud messages respectively.

A simplified view of the overall architecture

It was all going well until I discovered that the spin-up time for an Azure Function that’s been dormant for a while can be huge – well beyond the timeout of an Alexa Skill. This is partly caused by the time it takes for the function runtime to load all the node modules from the slow backing store, and partly just slow spin-up of the (container?) environment that Azure Functions run within. A common practice is to ensure that functions are invoked sufficiently often that Azure doesn’t terminate them. I followed this practice by adapting my existing heartbeat service running on a Pi, which publishes a heartbeat MQTT message every 2 minutes, to also call the first Azure Function (the one that the Alexa Skills call) with a null argument; and to keep the second function alive I simply had the MQTT gateway subscribe to the heartbeat topic, thereby ensuring the event handler function ran at least once every 2 minutes as well.
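
The keep-alive addition to the heartbeat service is only a few lines; something like this (the function URL is a placeholder):

const request = require('request');

// Poke the Alexa-facing Azure Function every 2 minutes so it never goes cold.
setInterval(function () {
  request.post({
    url: 'https://<my_function_app>.azurewebsites.net/api/<function_name>?code=<function_key>', // placeholder
    json: { topic: null, message: null } // null arguments: the function treats this as a no-op
  }, function (err) { if (err) { console.error('keep-alive failed: ' + err.message); } });
}, 2 * 60 * 1000);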

 

A universal IR remote control using MQTT

In some previous hacking I created an add-on for the Kodi media player which allowed me to control Kodi and the TV it is connected to (by using an IR blaster) using messages published through my home MQTT broker. The original purpose for this hack was to enable voice control via an Amazon Echo Smart Home Skill.

I’ve since added another use-case where a single push-button connected to a Raspberry Pi Zero W publishes a single MQTT message which my rules engine picks up and then publishes a number of messages to act as an “all off” button: it sends “off” messages to all lights in the room (these are a mixture of LightwaveRF and Philips Hue – both interfaced via services that subscribe to the local MQTT broker); a “pause” message to Kodi; and a TV “off” message to the TV.

However, despite having this capability I still use separate traditional IR remote controls for the TV and Kodi, and a 433MHz control for the LightwaveRF lights. It seemed like a good idea to take advantage of the MQTT control to reduce the need for so many remote controls, so I set about turning the Hama remote control I use for Kodi into a universal control for all the devices.

The strategy I used was to have the Hama remote control publish MQTT messages, and to add some rules to the broker’s rules engine to map these to the MQTT message(s) required to control Kodi, the TV, or the lights. I chose to connect the Hama USB IR receiver to a new Raspberry Pi Zero W (I could have left it connected to the Pi 3 running Kodi and created a new add-on to talk to it, but I have future plans that call for another Pi in this location and this seemed like it would be easier…) and set about building a small service to run on the Pi to relay received IR commands to the MQTT broker.

Overview of the overall architecture

After a few false starts with LIRC I settled on consuming events from the remote control via the Linux event subsystem (the same subsystem that handles keyboards and mice). There are some node.js libraries to enable this but I found a much more complete library available for Python which, critically, implements the “grab” functionality to prevent “keystrokes” from the IR control also going to the Pi’s login console.

I’ve already implemented a few Python MQTT clients using the Paho library (including the Kodi add-on itself), so I recycled existing code and simply added an input listener that attaches to the two event devices associated with the IR control (hard-coded for now) and, after a little processing of the event, publishes an MQTT message for each button press. The Hama remote acts like a keyboard and some of the buttons include key modifiers: this means that a single button push could involve up to 6 events, e.g. key-down for left-shift, key-down for left-ctrl, key-down for ‘T’, followed by three key-up events in the reverse order. My code maintains a simple cache of the current state of the modifier keys so that when I get a key-down event for a primary key (e.g. ‘T’ in the above example) I can publish an MQTT message including the key and its active modifiers.

# Relay key events from the IR remote (an evdev input device) to MQTT.
# set_modifier/is_modifier/is_ignore/get_modifiers maintain the modifier-key cache
# described above; see the full code (linked below) for their definitions.
for event in self.device.read_loop():
    if event.type == evdev.ecodes.EV_KEY:
        k = evdev.categorize(event)
        set_modifier(k.keycode, k.keystate)
        if not is_modifier(k.keycode) and not is_ignore(k.keycode):
            if k.keystate == 1:  # key-down events only
                msg = k.keycode + get_modifiers()
                self.mqttclient.publish(self.topic, msg)

(The full code for the service can be found here.)

This results in MQTT messages of the form

IR/room2av KEY_VOLUMEUP
IR/room2av KEY_VOLUMEDOWN
IR/room2av KEY_LEFT
IR/room2av KEY_RIGHT
IR/room2av KEY_DOWN
IR/room2av KEY_UP
IR/room2av KEY_PAGEUP
IR/room2av KEY_PAGEDOWN
IR/room2av KEY_T_KEY_LEFTCTRL_KEY_LEFTSHIFT

The next step was to add a rule to the rules engine to handle these. The rules engine is a simple MQTT client that runs on the same Raspberry Pi as the MQTT broker; it listens to topics of interest and based on incoming messages and any relevant state (stored in Redis) publishes message(s) and updates state. In this case there is no state to worry about, it is simply a case of mapping incoming “IR/*” messages to outbound messages.

A (partial) example is:

function handle(topic, message, resources) {
  switch (topic) {
  case "IR/room2av":
    switch (message.toString()) {
    case "KEY_UP":
      resources.mqtt.publish("KODI/room2/KODI", "Action(Up)");
      break;
    case "KEY_DOWN":
      resources.mqtt.publish("KODI/room2/KODI", "Action(Down)");
      break;
    case "KEY_PAGEUP":
      resources.mqtt.publish("Light/room2/Lamp", "on");
      resources.mqtt.publish("Light/room2Ceiling", "on");
      break;
    case "KEY_PAGEDOWN":
      resources.mqtt.publish("Light/room2/Lamp", "off");
      resources.mqtt.publish("Light/room2Ceiling", "off");
      break;
    case "KEY_VOLUMEUP":
      resources.mqtt.publish("KODI/room2/TV", "VOL_p");
      break;
    case "KEY_VOLUMEDOWN":
      resources.mqtt.publish("KODI/room2/TV", "VOL+m");
      break;
...

Here we can see how button pushes from this one IR remote are routed to multiple devices:

  • the “up” and “down” navigation buttons result in messages being sent to Kodi (the message content is simply passed to Kodi as a “builtin” command via the xbmc.executebuiltin(…) API available to add-ons);
  • the “+” and “-” channel buttons (which map to PAGEUP and PAGEDOWN keycodes) have been abused to turn the lights on and off – note the two separate messages being sent; these actually end up going to LightwaveRF and Philips Hue devices respectively; and
  • the “+” and “-” volume buttons send IR commands to the TV (this happens to be via the Kodi add-on but is distinct from the Kodi control) – the “VOL_p” and “VOL+m” being the names of the IR codes in the TV’s LIRC config file.

A major gotcha here is that when controlling a device such as the TV with an IR blaster, there is an overlap between the blast of IR from the Hama remote and the IR from the blaster connected to the Kodi Pi, and the TV finds it difficult to isolate the IR intended for it. To avoid this I’ve had to cover the TV’s IR receiver, and the IR blaster glued to it, with tape so that IR from the Hama control can’t get through.

The end result is that I can now use a single IR remote control to navigate and control Kodi, turn the TV on and off and adjust its volume, and control the lights in the room. Because everything is MQTT under the hood, and I’ve got plumbing to route messages pretty much anywhere I want, there is no reason why that IR remote control can’t do other things too. For example it could turn off all the lights in the entire house, or turn off a TV in another room (e.g. if I’ve forgotten to do so when I left that room), or even to cause an action externally via my Azure IoT gateway (more on this in a future blog post). And because the rules engine can use state and other inputs to decide what to do, the action of the IR remote control could even be “contextual”, doing different things depending on circumstances.

 

A Homage to Octoblu.

As you may have seen, Citrix announced today that going forward the company will no longer focus on building its own IoT platform; rather, it will focus on applying IoT technology to other Citrix initiatives, using existing IoT platforms to do so. Although there is a very bright future for IoT in Citrix (there are a number of exciting things in the pipeline I can’t talk about here), sadly this means that the freely available octoblu.com IoT platform service will be closed down in 30 days.

Those who know me or follow me on Twitter will know that I’m a big fan of Octoblu. I started playing with the technology soon after Citrix acquired Octoblu three years ago and I was hooked. In the last year or so I’ve had the opportunity to get more deeply involved with the Octoblu team and our Workspace IoT products and services. I’ve been inspired by the platform itself, what can be done with it, and with the enthusiasm, pragmatism, innovative style and friendliness of the Octoblu engineering team. I’ve learned a lot from the Octoblu team about how to develop scalable, cloud-native services using modern devops and CI/CD tools and techniques. Octoblu was also my introduction to Node.js (which I use in all sorts of places now) and CoffeeScript (which I still don’t really get on with 🙂 ).

Over the years I’ve used Octoblu in so many ways:

I’ve been fortunate enough to present a number of Octoblu and IoT sessions at various events, including:

and to produce demos for others’ presentations:

  • A Slack interface to ShareFile
  • An IoT chatbot using Slack to troubleshoot meeting room AV
  • Various Amazon Alexa demos controlling slideshows, launching apps via Citrix Receiver, and other stuff
  • and more!

I’ve loved using the online interactive designer to create some really powerful flows with no, or very little, coding. See below for a gallery of some of my favourites.

It is thanks to Octoblu, Chris Matthieu, and the entire Octoblu team that I’ve had these opportunities and gained these perspectives – without you I’d never have done any of this. For that, I thank you all.

Hack the planet!

  • SystemTestXenServerConnector
  • pptflow
  • SYN132-ShareFileSlack
  • Paddington
  • WORK_ACCOUNT_SmartSpacesV2
  • WORK_ACCOUNT_chatbotforblogpost