Displaying messages on my Christmas tree lights

IoT and Christmas go well together – all those festive lights and sounds are just asking to be connected up to the Internet and automated! My own Christmas tree is of course no exception and, as I wrote about over on the Citrix blogs, now sports a string of individually-addressable LEDs controlled by a Raspberry Pi which is in turn managed by Octoblu.

But I decided to go a step further than simply displaying some fun flashing patterns and colours on the tree: seeing as every LED can be individually managed, if I could determine the exact position on the tree of each LED I could come up with geometric patterns or even use it as a sort of dot matrix display to show messages.

My solution was to connect a webcam to the Raspberry Pi and write a script that turned each LED on one by one, captured an image from the webcam, and found the coordinates of the bright spot on the image. After the script ran through all 100 LEDs it then normalised the coordinates to a 100 x 100 grid and output a list of coordinates that I could then use in my LED controller program. The code is a bit gross, being a mash-up of various bits of other scripts quickly bolted together, including a web server that I used to view the image while positioning the camera.
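A minimal sketch of that calibration loop, assuming OpenCV for the capture and bright-spot detection (set_led here is a placeholder for whatever lights a single LED on the string – not the actual script):

import cv2

NUM_LEDS = 100
cap = cv2.VideoCapture(0)
coords = []
for i in range(NUM_LEDS):
    set_led(i)  # placeholder: light LED i, all others off
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Blur to suppress single-pixel noise, then take the brightest point
    blurred = cv2.GaussianBlur(gray, (11, 11), 0)
    _, _, _, brightest = cv2.minMaxLoc(blurred)
    coords.append(brightest)
# Normalise the pixel coordinates to a 100 x 100 grid
h, w = gray.shape
normalised = [(int(round(100.0 * x / w)), int(round(100.0 * y / h)))
              for (x, y) in coords]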

To visually check the output I wrote a quick bit of HTML/JavaScript that used the pre-normalised coordinates to overlay blobs on top of a captured image from the webcam (with all LEDs lit) – I could then see if the blobs lined up with the LEDs. As you can see in the image here there is at least one misalignment caused by the reflection of the light on a nearby white surface.

So armed with a set of coordinates for the LEDs I then extended my existing WS2811 LED control program (see my post on the Citrix blog for more on this, and on the hardware setup) to use coordinates, instead of just LED sequence number, for its patterns.

Firstly I created a simple set of test patterns that moved a vertical line horizontally across the tree, a horizontal line vertically and then the two different diagonals. I also updated the VU meter effect to use Y coordinates instead of LED sequence number.

However the most fun piece was rendering text on the tree. To do this I found a suitable encoding of a dot matrix font online (there are loads out there) and morphed it into a suitable form to include in Node.js. I then wrote a very simple rendering function that runs over a string of text one character at a time, using the font configuration to determine which pixels in each column of the 5×7 matrix should be illuminated. The end result was an array with an entry per column (for the entire message), each entry encoding which row LEDs should be illuminated. I also created a similar array to record which colour to use for each column – in this case populated to cycle through red, green and blue on a character-by-character basis (this helps the eye to distinguish the characters).

var dmmessage = " XMAS";

var dmarray;
var dmcolor;
var colorcycle = 255;
function renderMessage() {
  // One entry per display column: each character takes 7 columns (5 font
  // columns plus 2 blank spacers, left as zero), with a little blank
  // padding at the end of the message
  dmarray = new Uint32Array(dmmessage.length * 7 + 5);
  dmcolor = new Uint32Array(dmmessage.length * 7 + 5);
  for (var i = 0; i < dmmessage.length; i++) {
    // The font table starts at space (0x20), 5 column bytes per character
    var fonttablebase = (dmmessage.charCodeAt(i) - 0x20) * 5;
    for (var j = 0; j < 5; j++) {
      dmarray[i*7+j] = font5x7[fonttablebase + j];
      dmcolor[i*7+j] = colorcycle;
    }
    // Cycle the colour per character: 0x0000FF -> 0x00FF00 -> 0xFF0000
    if (colorcycle == 0xff0000) {
      colorcycle = 255;
    } else {
      colorcycle = colorcycle << 8;
    }
  }
}

In the main part of the WS2811 driver program, where a loop iterates over the 100 LEDs choosing which colour to display for each, the code uses the incrementing offset variable (which increments once per 100-LED update cycle, i.e. every 80ms) to index into the column array, offset by the X coordinate of the LED. The division by 10 below maps the 100-unit-wide X grid onto columns roughly 10 units wide, so around ten columns of the message are visible across the tree at once and the message scrolls by one column every ten update cycles (0.8s). A quantised Y coordinate of the LED is used to look up the row pixel on/off data from the array entry – the quantisation effectively creates 7 rows across the surface of the tree.

function dotmatrix_y(y) {
  // Quantise a 0-100 Y coordinate into one of the 7 font rows. Values
  // below 25 map to row 8, which never matches a bit in the 7-row font
  // data, so those LEDs stay dark.
  if (y < 25.0) return 8;
  if (y < 40.0) return 0;
  if (y < 50.0) return 1;
  if (y < 60.0) return 2;
  if (y < 70.0) return 3;
  if (y < 80.0) return 4;
  if (y < 90.0) return 5;
  return 6;
}

function dotmatrix(pos) {
  // Calibrated coordinates for this LED (from the webcam calibration step)
  var x = xy.xy[pos][0];
  var ydot = dotmatrix_y(xy.xy[pos][1]);

  // Column index: offset scrolls the message over time, x places the LED
  // within it
  var idx = Math.floor(((offset + x)/10)%dmarray.length);
  var column = dmarray[idx];

  // If this LED's row bit is set in the column, light it in that column's
  // colour; otherwise turn it off
  if ((1<<ydot)&column) {
    return dmcolor[idx];
  }
  return 0;
}

I added a new parameter to the Octoblu message to allow the display string to be set. I updated the Octoblu flow and Alexa skill to allow a command such as “Alexa, tell the Christmas tree to say Hello”.

Alexa intent schema (the TextIntent section is the new part; full code here):

{
  "intents": [
    {
      "intent": "ModeIntent",
      "slots": [
        {
          "name": "mode",
          "type": "LIST_OF_MODES"
        }
      ]
    },
    {
      "intent": "ColorIntent",
      "slots": [
        {
          "name": "color",
          "type": "LIST_OF_COLORS"
        }
      ]
    },
    {
      "intent": "TextIntent",
      "slots": [
        {
          "name": "text",
          "type": "LIST_OF_WORDS"
        }
      ]
    },
    {
      "intent": "AskOctobluIntent"
    },
    {
      "intent": "SorryIntent"
    }
  ]
}

AWS Lambda intent handler (the TextIntent handling is the new part; full code here):

def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """

    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])

    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']
    
    mode = None
    color = None
    text = None

    session_attributes = {}
    speech_output = None

    # Dispatch to your skill's intent handlers
    if intent_name == "ModeIntent":
        if ('mode' in intent['slots']) and ('value' in intent['slots']['mode']):
            mode = intent['slots']['mode']['value']
    elif intent_name == "ColorIntent":
        if ('color' in intent['slots']) and ('value' in intent['slots']['color']):
            color = intent['slots']['color']['value']
            mode = "solid"
    elif intent_name == "TextIntent":
        if ('text' in intent['slots']) and ('value' in intent['slots']['text']):
            text = intent['slots']['text']['value']
            mode = "dotmatrix"
    elif intent_name == "AskOctobluIntent":
        mode = "octoblu"
    elif intent_name == "SorryIntent":
        speech_output = "I'm sorry James, I can't do that"
    else:
        raise ValueError("Invalid intent")

    obmsg = {"debug":intent}
    if mode:
        obmsg["mode"] = mode
    if color:
        obmsg["color"] = color
    if text:
        obmsg["text"] = text
    url = octoblu_trigger
    data = urllib.urlencode(obmsg)
    req = urllib2.Request(url, data)
    response = urllib2.urlopen(req)
    the_page = response.read()
    
    if not speech_output:
        speech_output = "OK"
    return build_response(session_attributes, build_speechlet_response(speech_output))

WS2811 Octoblu message handler (the text handling is the new part):

conn.on('ready', function(rdata){
  console.log('UUID AUTHENTICATED!');
  console.log(rdata);

  clearTimeout(connectionTimer);
  connectionTimer = undefined;

  conn.update({
    "uuid": meshbluJSON.uuid,
    "token": meshbluJSON.token,
    "messageSchema": MESSAGE_SCHEMA,
  });

  conn.on('message', function(data){
    console.log('Octoblu message received');
    console.log(data);
    mode = data.mode;
    color = ("color" in data)?data.color:null;
    if (("text" in data) && data.text) {
      dmmessage = " " + data.text.toUpperCase();
      renderMessage();
      offset = 0;
    }
    handle_message(mode, color)
  });

});

Octoblu connects together the HTTPS POST request from the Alexa skill handler with the WS2811 driver program.


And there we have it – Alexa-controlled scrolling message display on a Christmas tree thanks to Raspberry Pi and Octoblu!


Automating HDMI input switching

I recently wrote about automating TV power control using IR LEDs. This enabled me to turn a TV and amplifier on and off with Amazon Echo and as part of other automation flows. However, with a cable TV box, Kodi media player, PlayStation 3 and Google Cast I still needed to choose the TV’s input source manually. I’d like to be able to say “Alexa, turn on cable”, or “Alexa, turn on Google Cast” and have it not only turn on the TV and amp as it can now, but also switch to the correct input.

One option would be to take advantage of automatic HDMI switching driven from a source coming online – but with things like the Cast and Kodi player being always-on, that was a no-go.

Another option was to use an external HDMI switcher box with an IR remote control and blast IR commands at this in the same way as for the TV itself. In fact this was my original plan and I carefully selected a HDMI switcher that was not only getting decent reviews, but also had a remote control button per channel (simply having a “next input” cycle button was not suitable because I’d have no way to know when I got to the input I wanted). However when I received the switcher I thought I’d try something different, just for fun.

Hardware

This particular switcher has an LED for each input, showing which is selected, and a single push button to cycle through the inputs. Having opened up the box by prising off the plastic feet and unscrewing the 4 small screws underneath I took a good look at the circuit board. I observed that the LEDs were wired with the cathodes connected to ground, therefore the anodes would be at a positive voltage when the LED was lit – this is good enough to directly drive a Raspberry Pi GPIO input pin in pull-up mode. By connecting all 5 LED anodes to GPIO pins I could easily tell which input was selected. The push button was also connected to ground, suggesting it was being used in a pull-up fashion. I took a gamble that I could get away with driving this directly from a Raspberry Pi GPIO output without any further interfacing. To make life even easier all of the LED and switch pins were through-hole soldered into the switcher’s PCB (i.e. not surface mount like most of the board) meaning small wires could be fairly easily soldered onto them.


Even more conveniently the layout of the box, although very compact, had plenty of room to bring in a 9 core cable (5 LEDs, 1 switch, 1 ground and two unused wires), leading to a neat appearance. To route the cable out I drilled a 5mm hole close to the IR socket on the edge of the box and routed the wires over the top side of the PCB.

On the Raspberry Pi end I simply chose 6 unused GPIO pins and connected the LEDs and switch input as well as connecting ground on the HDMI box to a ground pin on the GPIO header.

Software

I wanted to use the same Raspberry Pi running LibreELEC and Kodi that I was already using to drive the IR emitters. This meant extending the existing Kodi add-on with the capability to read the HDMI switcher’s LED status and drive its push button. The general strategy was to have an incoming MQTT message set the desired input number (1-5) and then send button push pulses until the associated LED lit up. One potential gotcha here is that this switcher skips over inputs that do not have an active HDMI source, therefore if the MQTT message requests an input with either nothing connected, or with a source that’s not switched on, the button pushing could go on forever. To avoid this I limited the number of button push attempts per MQTT message to 10.

The prerequisite was to add the “Script – Raspberry Pi Tools” Kodi add-on (search the registry from the Kodi add-on manager UI) to add the GPIO Python library.

The code is pretty simple. At start of day I set up GPIO – the first 5 pins are the inputs from the switcher’s LEDs and the final pin is the output to the push button:

sys.path.append('/storage/.kodi/addons/virtual.rpi-tools/lib')
import RPi.GPIO as GPIO

# Setup pins for HDMI switcher control: five inputs read the switcher's
# input-select LEDs, one output drives its push button
GPIO.setmode(GPIO.BCM)
GPIO.setup(21, GPIO.IN, pull_up_down = GPIO.PUD_UP)  # input 1 LED
GPIO.setup(8, GPIO.IN, pull_up_down = GPIO.PUD_UP)   # input 2 LED
GPIO.setup(16, GPIO.IN, pull_up_down = GPIO.PUD_UP)  # input 3 LED
GPIO.setup(12, GPIO.IN, pull_up_down = GPIO.PUD_UP)  # input 4 LED
GPIO.setup(7, GPIO.IN, pull_up_down = GPIO.PUD_UP)   # input 5 LED
GPIO.setup(20, GPIO.OUT)
GPIO.output(20, 1)  # button line idles high; pulsing it low "presses" the button

And initialise some global variables to record the desired input number and the count of button push attempts:

hdmiinput = 1      # desired input number (set by incoming MQTT message)
hdmiattempts = -1  # button pushes so far; -1 means no switch in progress

In the MQTT message handler I added another sub-topic case:

            elif ll[2] == "HDMI":
                try:
                    hdmiinput = int(msg.payload)
                    hdmiattempts = 0
                except:
                    pass

And in the main loop (which cycles every 0.5s):

        if (hdmiattempts > -1) and (hdmiattempts < 10):
            for i in range(4):
                hdminow = 0
                if GPIO.input(21): hdminow = 1
                if GPIO.input(8): hdminow = 2
                if GPIO.input(16): hdminow = 3
                if GPIO.input(12): hdminow = 4
                if GPIO.input(7): hdminow = 5
                log("HDMI switcher currently on %s, want %s" % (hdminow, hdmiinput))
                if hdminow == hdmiinput:
                    hdmiattempts = -1
                    break
                else:
                    log("Sending HDMI switch button trigger")
                    GPIO.output(20, 0)
                    time.sleep(0.1)
                    GPIO.output(20, 1)
                    time.sleep(0.1)
                    hdmiattempts = hdmiattempts + 1

The full Kodi add-on code can be found here. It’s quick and dirty, but it works.

Automation with Echo

So now I have a way to change the TV’s HDMI input by sending a MQTT message such as “KODI/Lounge/HDMI” with payload “4”. To use this with Amazon Echo I extended my existing TV control with specific cases for each of the 4 inputs in use. The configuration for the Alexa smart home skill adapter (see Controlling custom lighting with Amazon Echo and a skill adapter for more on this) sends a single MQTT command into my home broker:

                {
                    "applianceId":"loungetvcable",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Virgin Media",
                    "friendlyDescription":"Living room TV and amp on cable TV",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvcable"
                    }
                },
                {
                    "applianceId":"loungetvcast",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Chrome Cast",
                    "friendlyDescription":"Living room TV and amp on Google Cast",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvcast"
                    }
                },
                {
                    "applianceId":"loungetvps3",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"PS3",
                    "friendlyDescription":"Living room TV and amp on PS3",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvps3"
                    }
                },
                {
                    "applianceId":"loungetvkodi",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Kodi",
                    "friendlyDescription":"Living room TV and amp on Kodi",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/loungetvkodi"
                    }
                },

(See this gist for the context – these rules are just added to the list)

Rules in the rules engine then turn this into TV power control and, in the “on” case, also send the HDMI control message:

  case 'compound/loungetvcable':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "1");
    }
    break;
  case 'compound/loungetvcast':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "2");
    }
    break;
  case 'compound/loungetvps3':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "5");
    }
    break;
  case 'compound/loungetvkodi':
    resources.mqtt.publish("KODI/Lounge/TV", message.toString());
    if (message.toString() == "on") {
      resources.mqtt.publish("KODI/Lounge/HDMI", "4");
    }
    break;

In closing

By adding explicit input selection to the set of MQTT commands I can send, I can now build more complex automation actions that not only turn things on or off but can also select inputs. This enables a single Echo command such as “Alexa, turn on Chrome Cast” to get all the necessary devices into the right state to achieve the desired outcome of the TV and amp being on and displaying the output of the Cast.



Using infrared control from Amazon Echo

A while back I built a skill adapter to allow Amazon Echo’s smart home skill system to manage my LightwaveRF lights via my custom Octoblu+MQTT system. But why stop there? If I can say “Alexa, turn on the living room light” why can’t I also say “Alexa, turn on the TV“? With IoT mains power sockets such as Belkin Wemo or LightwaveRF I could power on and off the TV but I prefer to leave the TV in standby (yes, I know this uses power and kills the planet). Instead I decided to solve the problem by using infrared emitters glued to the front of the devices I wanted to control and allowing Alexa to control these.

Hardware

Each TV in my house is co-located with a Raspberry Pi running the Kodi media player. Therefore it made sense to add the IR emitter capability to these rather than add more hardware. There are various options for emitting IR including USB IR blasters and professional systems designed for commercial AV use-cases. I chose a homebrew route and built my own emitters using an IR LED, an NPN transistor and a couple of resistors. The whole thing connects to a Raspberry Pi GPIO pin. Using a transistor allows for higher LED currents than can be drawn directly from the GPIO pin and allows multiple emitters to be connected to the same GPIO pin, e.g. to connect several home entertainment devices.


I wired the four components together in free space (no board) and connected them to a three core cable. I used offcuts of cable sheath and heatshrink tubing to insulate and encapsulate everything. The end result is an IR LED poking out of the top of a black blob on the end of a black cable. It looks a bit ugly close-up but once attached to a black TV case it’s hardly noticeable.

I connected the other end of the cable to the GPIO header on the Raspberry Pi using the +5V, GND and GPIO17 pins (GPIO17 is the default for the LIRC Raspberry Pi GPIO driver). I then positioned and superglued the emitter onto the front of the TV such that it could see the TV’s IR receiver but without completely blocking it.


Kodi interface

The Raspberry Pi runs the LibreELEC distribution of Kodi. To set this up to use GPIO-driven IR required me to SSH to the Pi as root and edit /flash/config.txt to add a line “dtoverlay=lirc-rpi” – this is needed to load the kernel driver for Raspberry Pi GPIO for the Linux IR subsystem. I then needed to download the specific IR configurations I needed for my devices. For example for my Sony Bravia TV I ran:

 wget -O "/storage/.config/lircd.conf" \
  'https://sourceforge.net/p/lirc-remotes/code/ci/master/tree/remotes/sony/RM-ED009.lircd.conf?format=raw'

For controlling multiple devices multiple remote control configs can be concatenated into the lircd.conf file.

After a reboot it was time to code a Kodi add-on. This add-on runs as a service and connects to my home MQTT broker in order to receive published messages (more of which later). The MQTT message handler spawns the “irsend” command (comes with the LibreELEC distribution) to send the named IR command(s). For example, the following code is used to control the TV and an amplifier at the same time (using two IR emitters wired to the same GPIO pin):

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    global player, hdmiattempts, hdmiinput

    try:
        msgpayload = str(msg.payload)
        print(msg.topic+" "+msgpayload)
        ll = msg.topic.split("/")
        if len(ll) > 2:
            if ll[2] == "TV":
                log("Incoming TV command: " + msgpayload)
                ircmds = []
                if msgpayload.lower() == "off":
                    ircmds.append(["Sony_RM-ED009-12", "KEY_POWER", 5])
                    ircmds.append(["Marantz_RMC-73", "Amp_Standby", 1])
                elif msgpayload.lower() == "on":
                    ircmds.append(["Sony_RM-ED009-12", "KEY_POWER", 5])
                    ircmds.append(["Marantz_RMC-73", "Amp_Standby", 1])
                else:
                    ll = msgpayload.split("/")
                    ircmds.append([ll[0],ll[1],1])
                if len(ircmds) > 0:
                    log("Sending IR commands: " + str(ircmds))
                    for ircmd in ircmds:
                        cmd = "/usr/bin/irsend --count %u -d /run/lirc/lircd-lirc0 SEND_ONCE %s %s" % (ircmd[2], ircmd[0], ircmd[1])
                        os.system(cmd)
                        time.sleep(1)
    except Exception, e:
        log("MQTT handler exception: " + str(e))

I developed the add-on in-place on the Raspberry Pi (as opposed to building a distributable package and importing it through Kodi’s UI). To do this I firstly created a directory:

# mkdir /storage/.kodi/addons/service.mqtt.jamesbulpin

Within this I created two files: mqttservice.py (the service itself) and addon.xml (the add-on metadata).

And a subdirectory “resources” containing the settings definition, language strings and a bundled copy of the paho MQTT client library.

Overall the structure was this:

LoungeTV:~/.kodi/addons/service.mqtt.jamesbulpin # find . -type f
./resources/__init__.py
./resources/settings.xml
./resources/language/english/strings.xml
./resources/paho/__init__.py
./resources/paho/mqtt/__init__.py
./resources/paho/mqtt/client.py
./resources/paho/mqtt/subscribe.py
./resources/paho/mqtt/publish.py
./mqttservice.py
./addon.xml

The full code can be found at https://bitbucket.org/jbulpin/kodihomeautomation – note that this includes hard-coded IR devices and commands so would need customisation before use.

The easiest way to get Kodi to discover the add-on is to reboot. It may also be necessary to enable the add-on via the “My add-ons” menu in Kodi. The settings dialog can also be accessed from here – this allows setting of the MQTT broker IP address and of the name that will be used in the MQTT topic the add-on subscribes to – if this is set to “Lounge” then the topic prefix is “KODI/Lounge“.

With the add-on running I tested it by manually sending MQTT messages:

# mosquitto_pub -m on -t KODI/Lounge/TV
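The same test can be sent from Python using the paho MQTT client that’s bundled with the add-on (the broker address here is illustrative):

import paho.mqtt.publish as publish

# Publish "on" to the topic the Lounge add-on subscribes to
publish.single("KODI/Lounge/TV", "on", hostname="192.168.1.10")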

Control with Amazon Echo

So far I’ve described how to get from a MQTT message through to an infrared control action on a TV. The next step was to have the MQTT message be sent in response to an Amazon Echo voice command. Happily this was very easy because I’d already built infrastructure to send MQTT commands (albeit to manage lighting) from Alexa. To add TV control was just a matter of adding another entry to the list of devices returned by the skill adapter when a discovery was initiated, and then initiating a discovery (“Alexa, discover my smart home devices“).

Example of an existing entry:

                {
                    "applianceId":"DiningRoomLights",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Dining room lights",
                    "friendlyDescription":"Fireplace lights in the dining room",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"Light/Dining/Fireplace"
                    }
                },

New entry for the TV:

                {
                    "applianceId":"loungetv",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"TV",
                    "friendlyDescription":"Living room TV and amp",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"KODI/Lounge/TV"
                    }
                },

You can see this just uses the KODI MQTT message instead of a lighting message. Alexa has no idea it’s a TV rather than a light that’s being controlled. The entire skill adapter code can be seen here.

Of course it’s also possible to control multiple devices from a single Alexa command, just by sending multiple MQTT messages.

An enhancement

This all works well but you may have noticed that the IR commands to turn the TV on and off are the same – this is because that’s usually how IR remote controls work. This means that any command, via Alexa or otherwise, will actually toggle the current power state rather than move to the defined state. In most cases this isn’t a problem, for example why would you ask Alexa to turn the TV on if it’s on already? However for other more complicated automation activities I’d like to be able to control the TV without knowing whether it’s on or off already (e.g. I want an “all off” button or timer for a bedroom that will turn off all lights and the TV no matter what was already on).

To do this I added an input to sense if the TV was on – power being available on a TV USB port was my chosen method. I used a small opto-isolator, driven by the USB power, with the other side connected to a GPIO input.

To use this in the Kodi add-on I first had to add the “Script – Raspberry Pi Tools” add-on (search the registry from the Kodi add-on manager UI to find this) to add the GPIO Python library. To use this from my add-on code I added:

sys.path.append('/storage/.kodi/addons/virtual.rpi-tools/lib')
import RPi.GPIO as GPIO

# Set up pin for TV power monitoring (active low)
GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN)

Then added simple checks in the MQTT message handler to only send IR commands if the current state is not the commanded state:

                if msgpayload.lower() == "off":
                    if GPIO.input(23) == 0:
                        # Only send if the TV is on (GPIO pin is low)
                        ircmds.append(["Vestel_TV", "OFF", 5])
                elif msgpayload.lower() == "on":
                    if GPIO.input(23) == 1:
                        # Only send if the TV is off (GPIO pin is high);
                        # the same "OFF" IR code acts as a power toggle
                        ircmds.append(["Vestel_TV", "OFF", 5])

In a future blog post I’ll show how I extended this TV control to add automatic input source selection by interfacing to, and automating, a HDMI switcher box.


Hacking Big Mouth Billy Bass – part 3/3


A few weeks ago there was a lot of interest on social media in a video showing a Big Mouth Billy Bass animatronic novelty device seemingly take on the role of Amazon Echo, including animating its mouth and body in sync with the speech. With my recent exploits in connecting strange crap to Octoblu I decided to have a go at automating and Internet-connecting Billy Bass.

In this three-part blog series I’ll cover:

  1. Reverse-engineering Billy Bass and automating its movements based on audio input
  2. Connecting Billy Bass to the Octoblu IoT automation platform and synthesising speech for it
  3. (This part) Controlling Billy Bass with Amazon Echo and Slack

Recap and introduction

In the first part of this blog series I described how I added an Arduino to Big Mouth Billy Bass to enable it to be controlled by commands over USB-serial as well as automatically moving its mouth in sync with audio input. In the second part I described how I connected this to the Internet using the Octoblu IoT automation platform. In this final part I’ll describe how I used the Octoblu integration to control the fish from Amazon Echo and Slack.

Where we got to last time was an Octoblu flow that could take a message containing a text string, send it to the Voice RSS text-to-speech service, then send the resulting mp3 to the Billy Bass fish to play via its speaker and move its motors in sync. The examples in this blog post look at how to generate a message with suitable text in order to feed this flow.

Amazon Echo

The goal here was to create a “skill” for Amazon Echo to allow voice control of the fish, e.g. saying “Alexa, tell Billy Bass to say hello world” would send a message of “hello world” to the text-to-speech service as described in the previous part of this blog post series.

In the Octoblu web interface I edited my “Billy Bass” flow to add a trigger node to act as a HTTPS POST endpoint for the Alexa scripts to call – I named this “Call from Alexa skill”. I connected this to a “Compose” node which is just there to tidy up the incoming message from the POST data into the form the rest of the flow is expecting: it creates a key named “text” with value “{{msg.data.text}}”.


I’ve previously created a smart home skill adapter for Echo – creating a custom skill is broadly similar so I won’t repeat all the details again but will highlight the key parts specific to fishy conversations.

Firstly I created a new Alexa skill in the Amazon Developer Console using the custom skill type and an invocation name of “billy bass”. I then defined an interaction model for the skill – this describes the forms of wording that Echo will recognise and defines slots that are, in effect, variables that will be populated when voice recognition takes place. The intent schema defines just one intent – asking the fish to say something.
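Following the same shape as the tree skill’s schema earlier, a minimal version of it (reconstructed from the intent and slot names used in this post) looks like this:

{
  "intents": [
    {
      "intent": "BillySay",
      "slots": [
        {
          "name": "Saying",
          "type": "LIST_OF_SAYINGS"
        }
      ]
    }
  ]
}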

The slot is the word(s) that we’re asking the fish to say. The slot can either be a constrained type (such as a date, or fixed, enumerated list) or, as is the case here, an open-ended list. I defined the LIST_OF_SAYINGS slot type as a custom slot type with three simple examples: “hello world | how are you today | what time is it”.

The final part of the interaction model is to provide some sample utterances which tell Alexa what kind of phrasing maps to the intents (in theory this doesn’t need to be an exhaustive list of all ways in which a user may state the intention). Here I used the single sample: “BillySay to say {Saying}” where “BillySay” corresponds to the intent and “{Saying}” is a placeholder for the open-ended slot.

Overall this looks like:

[Screenshot: the complete interaction model in the Amazon Developer Console]

The next step was to create an AWS Lambda function to handle the skill. This is called each time Alexa matches an intent on this skill. I used a Python 2.7 blueprint for an Alexa skill. The blueprint prompts to create a trigger from the Alexa Skills Kit to Lambda – let it create this. In the Lambda function configuration page I edited the code (see below) and set up the role – I used the same role as for my home skill adapter (see here for details).

The core part of the code is the on_intent function which simply takes the text that Alexa recognised as being the “Saying” slot in the intent in the schema defined above and sends it to my Octoblu trigger as a HTTPS POST with the text being in the “text” field of the posted message. The skill ID (“amzn1.ask.skill.<UUID>”) can be found in the skill details in the Amazon Developer Console. The Octoblu trigger URL can be found from the “thing inspector” in the Octoblu web interface having clicked on the relevant Trigger node (the “Call from Alexa skill” one here).
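A minimal sketch of that function, following the same pattern as the tree skill’s Lambda handler earlier (octoblu_trigger again holds the trigger URL; error handling omitted):

def on_intent(intent_request, session):
    intent = intent_request['intent']
    if intent['name'] == "BillySay":
        # Forward the recognised words to the Octoblu trigger as the
        # "text" field of a HTTPS POST
        text = intent['slots']['Saying']['value']
        data = urllib.urlencode({"text": text})
        urllib2.urlopen(urllib2.Request(octoblu_trigger, data))
    return build_response({}, build_speechlet_response("OK"))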

Having saved and published the Lambda function I added its ARN (this should be displayed on the Lambda function’s pages in the AWS console) to the Alexa skill via the configuration page in the Amazon Developer Console.

And that’s pretty much it. After restarting the Octoblu flow I was able to say to the Echo “Alexa, tell Billy Bass to say hello world” – this led to Alexa matching the skill and intent I defined above then calling the Lambda function with the intent and its slot value (the words the fish has been asked to say). This function then called Octoblu via a HTTPS POST to the trigger and this then got passed to Voice RSS for text-to-speech and then back to Octoblu to be routed to the connector running on the Raspberry Pi and ultimately to the fish itself. With the fish and the Echo next to each other you can get the fish to talk to Alexa, e.g. say “Alexa, tell Billy Bass to say Alexa what time is it” – the fish will say “Alexa what time is it” and then the Echo will answer.

Slack

I’ve been playing with Slack as a way to have human interaction with Octoblu flows. Adding a way to control the Big Mouth Billy Bass seemed like a natural next step 🙂

Luckily Octoblu already has a good way to connect to Slack using a streaming “slurry” thing. I set this up using a private test Slack account I already had by first creating a bot account and extracting an API token for it (see Slack docs for details on doing this). I added the bot to each Slack channel I wanted it to respond to. I then navigated Octoblu’s web interface to “All Things” and selected the “Slack Streaming Bot (beta)” thing. Having clicked on this I was asked to authorize access and then to provide the bot’s token.

Having saved this thing I then went to my existing “Billy Bass” flow, added the new Slack thing and clicked the “UPDATE PERMISSIONS” button to enable Slack to send messages to this flow. I routed Slack messages through a non-null filter on “{{msg.data.text}}” to ensure there is a parsable Slack message in there, then through a function node to look for Slack messages that start with “Fish:” and extract the rest of the message to forward to the text-to-speech service and ultimately to the fish itself. The second filter node removed any messages that didn’t match the “Fish:” prefix.


So now I can type into a Slack channel “Fish: hello world” and the Billy Bass will say “hello world”.

In closing

So there we have it – an IoT talking fish integrated with Amazon Echo and Slack using Octoblu, Raspberry Pi, Arduino, Voice RSS and a handful of electronic components. Clearly not the most practical IoT device ever created but hopefully an illustration of the power of these tools.


Controlling custom lighting with Amazon Echo and a skill adapter

Amazon launched Echo in the UK today – it’s been a long wait! My pre-ordered Echo arrived this morning and my first priority was to get it (her?) to control my home lighting. As I’ve previously written about, my LightwaveRF lights are currently managed by a custom set of scripts communicating within the house using MQTT pub-sub. This means that Amazon Echo (or Alexa, as I’ll refer to it/her from now on) doesn’t know how to interface with them like she does with Philips Hue, for example.

Luckily Amazon has made available its Smart Home Skill API which allows individuals and home automation vendors to provide “skill adapters” for the various home automation “skills” Alexa already has. This means it is possible to say “Alexa, turn on the bedroom light” and have Alexa use whatever underlying system and technology you have to execute the command. This is preferable to defining a new custom skill because it avoids the need to use a skill trigger word (e.g. “Alexa, turn on the bedroom light using LightwaveRF”). AWS Lambda is used to provide the concrete implementation for the abstract actions.

In my case the skill adapter will make a HTTPS call to an Octoblu trigger node passing details of the required action – essentially just a MQTT message with the topic being the light to control and the message body being the action (on or off). The Octoblu flow then messes about a bit with the JSON structure before passing the message to an existing Meshblu device that connects my home MQTT world with Octoblu. In reality I’m using Octoblu and Meshblu here as firewall-bridging plumbing to get a MQTT message from the Lambda function into my home environment.

Having signed up for my Amazon developer account I followed Amazon’s steps to create a Smart Home Skill. This started by creating a new skill (of type “Smart Home Skill API”) in the Alexa Skills Kit section of the Amazon Developer Console – I chose to name this skill “James’s Lights”.

To provide the actual implementation I created a Python 2.7 Lambda function named “JamesLightsSkillAdapter” in AWS (using the eu-west-1 (Ireland) region to co-locate with the Alexa voice service for the UK) based on the alexa-smart-home-skill-adapter blueprint. I based the code on the template provided in the “steps to create” page above. For the role I selected “Create new role from template(s)”.

The code handles two separate types of message from the Alexa service:

  1. Device discovery – this is an action initiated by the end user from the Alexa app (or by voice) to get an inventory of devices that Alexa can control. In the Lambda function this is implemented by returning a big blob of JSON with an entry for each device. The “friendlyName” item holds the words Alexa will recognise to control the device. I’m using the additionalApplianceDetails item to record the MQTT topic that will be used to control this device. My initial prototype implementation hard-codes the entire inventory in the Lambda function – clearly not a long term solution!
  2. TurnOnRequest and TurnOffRequest commands – these are issued by the Alexa service when Alexa is asked to turn a device on or off and the device is recognised as one in the inventory. The Lambda function is called with the relevant JSON blob so my code can pull out the previously configured MQTT topic and send it as part of a HTTPS POST to the Octoblu trigger mentioned above – a sketch of this handling follows the list.
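A minimal sketch of that control path, assuming the version-2 smart home skill event shape (the POST field names and the octoblu_trigger URL are placeholders for my actual plumbing):

def handle_control(event):
    # The MQTT topic was stashed in additionalApplianceDetails at discovery
    appliance = event['payload']['appliance']
    topic = appliance['additionalApplianceDetails']['mqttTopics']
    action = "on" if event['header']['name'] == "TurnOnRequest" else "off"
    # Hand topic + action to the Octoblu trigger as a HTTPS POST
    data = urllib.urlencode({"topic": topic, "message": action})
    urllib2.urlopen(urllib2.Request(octoblu_trigger, data))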

I tested the Lambda function using the “Alexa Smart Home – Control” sample event template, manually crafting the JSON messages to match the ones the Alexa service will be sending. After testing it’s important to make sure to enable the Alexa trigger for the Lambda function.


Back in the Developer Portal I configured the skill to use an endpoint in AWS Europe using the Lambda ARN created above. As this will be a private skill adapter I didn’t really need account linking (this is how a regular user would link their Echo with, for example, their Philips Hue account) but the console wouldn’t let me proceed without setting it up. Therefore I followed this great blog post’s instructions on how to use Amazon itself as the linked account.

Having saved all the skill configuration and enabled testing, I then used the Alexa app on my iPad to add this newly created skill (in the “Your skills” section) and logged in with my Amazon account as the account linking step. From there it was a simple matter of letting the app and Alexa discover the devices and then Alexa was good to go.

I can now say to Alexa: “Alexa, turn the lounge light on” and a second or so later it’ll come on and Alexa will say “OK”. What happens under the hood is more interesting though:

  1. The Alexa cloud service processes the message, figuring out it’s a smart home control command
  2. The Alexa service looks through my discovered device list and identifies the “lounge light” as one that is controlled via this skill adapter.
  3. The Alexa service makes a call to my AWS Lambda function with a JSON message including the configuration for the requested light as well as the “TurnOnRequest” command.
  4. My Lambda function makes a HTTPS POST call to the Octoblu trigger with a MQTT-like message including the topic string for the requested light and the “on” message.
  5. The Octoblu flow forwards this message via Meshblu to a simple Meshblu connector I have running at home.
  6. My Meshblu connector publishes the message to my local MQTT broker.
  7. The LightwaveRF script, also running at home and subscribed to “Light/#” messages, picks up the message and looks up the room/device codes, which it then sends via UDP to the LightwaveRF bridge.
  8. The LightwaveRF bridge sends the appropriate 433MHz transmission, which is picked up by the paired light fitting and the power is switched on.

As this is all highly customized to me I’ll be leaving the app in its testing state, not putting it forward for certification (which would certainly fail!) and publication.

Future work

Right now the implementation hard-codes my device list as well as the URL of my Octoblu trigger. I’d like to, at the very least, make the device list dynamically generated from the equivalent configuration inside my home MQTT environment.

What I’ve built here is basically an Alexa to MQTT bridge. This means I’m not limited to 1-to-1 control of individual lights. With the right MQTT messages in the device discovery JSON blob I could also control groups of lights, timed sequences, or anything else I care to code up.
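For example, a hypothetical discovery entry along these lines would let a single “Alexa, turn on movie night” command fire whatever compound rule I attach to its topic:

                {
                    "applianceId":"movienight",
                    "manufacturerName":"James Bulpin",
                    "modelName":"LWRF",
                    "version":"v0.1",
                    "friendlyName":"Movie night",
                    "friendlyDescription":"Lights down, TV and amp on",
                    "isReachable":True,
                    "actions":[
                        "turnOn",
                        "turnOff"
                    ],
                    "additionalApplianceDetails":{
                        "mqttTopics":"compound/movienight"
                    }
                },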