I’ve recently been fine-tuning my smart home setup to be more proactive and intelligent, particularly around smarter monitoring and alerting. One area I’ve focused on is integrating cameras with other platforms, such as motion sensing, image processing, and notifications, to create a monitoring system that surfaces information I actually care about with few to no false positives. To that end, I’ve settled on a combination of Home Assistant and its built-in support for TensorFlow, an open source machine learning platform developed by Google Brain. The fact that someone with no formal computer science education can implement a machine learning platform in ten minutes is mind-blowing, and Home Assistant makes it even easier by supporting TensorFlow out of the box. It took me about five minutes to set up the TensorFlow platform on my Home Assistant instance and another twenty minutes before I had my first working automation.
TensorFlow is a big deal because it adds a layer of image processing to any cameras that are integrated into Home Assistant. For example, if you have a camera in your office that you only want to record when motion is detected, an image processor can help improve the accuracy of what’s reported. Say you have a fan in your office and the motion of the fan generates false alerts. You probably don’t care that your fan is moving and only want to be alerted if a person or maybe your dog triggers the motion sensor. An image processor like TensorFlow uses machine learning models trained on large datasets. That training helps the platform tell objects apart, such as your dog and the office fan. Adding a layer of image processing helps reduce or eliminate false positives by recognizing the objects you actually care about and “tuning out” any background noise. There’s obviously a lot more happening under the hood, which is out of scope for this article. The video below does a nice job of giving a succinct introduction to the power of TensorFlow.
In this article, I’ll share the few easy steps I took to set up TensorFlow with Home Assistant and the basic automation I’m now using for intelligent alerting. This is only an introduction to using TensorFlow with Home Assistant; I intend to write a deep dive with additional examples of all the powerful capabilities HA and TensorFlow offer in future articles.
Before I go any further, let’s set up TensorFlow.
Setting up TensorFlow
Home Assistant added TensorFlow as an official component in v0.82. Note that in my setup, I’m running Home Assistant using the official HA Docker container, not Hassbian or HASS.io. Check here if you’re using some other setup. TensorFlow can be resource intensive, especially if you don’t optimize the scan intervals. So far my Intel NUC7i5BNH has handled it without any performance issues.
As of v0.91, TensorFlow requires a few additional files to be installed in your Home Assistant config folder. I used this simple script to collect the dependencies and add them to a TensorFlow folder within the mounted HA Docker container. Next, select a model from the Detection Model Zoo depending on your machine’s specs and any special use cases you have, such as faces, vehicles, or general object recognition. For my purposes, I needed a general-purpose dataset and have had great success with the faster_rcnn_inception_v2_coco set.
Add the model to your newly created TensorFlow folder and then add the specifics to your Home Assistant configuration.yaml. I’m using an !include to create an image_processing.yaml file that looks like this:
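A minimal sketch of such a file, based on my setup, looks like the following. The camera entity ID, file paths, and model folder name are placeholders; adjust them to match your own cameras and wherever you placed the model inside your config folder.

```yaml
# image_processing.yaml (included from configuration.yaml)
- platform: tensorflow
  # One week in seconds; an automation triggers scans on motion instead
  scan_interval: 604800
  source:
    - entity_id: camera.office  # placeholder camera entity
  file_out:
    # Latest image, plus a time-stamped copy of the preceding scan
    - "/config/www/tensorflow/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/config/www/tensorflow/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
  model:
    graph: /config/tensorflow/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
    categories:
      - person
      - dog
```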
Starting at the top of the .yaml, TensorFlow is established as the image processing platform. Next, I’ve set the scan interval to once a week (604,800 seconds = 1 week). This may seem counter-intuitive, but the default is to scan every ten seconds, which wastes resources. Instead, I’ve set up an automation that calls the image processing scan service only when motion is detected.
Source identifies which cameras I want to apply the image processing to. File out stores two images any time a scan is performed: the latest image, and the preceding image with a date/time stamp appended to the file name. This is important, as the latest file will be used to send an image when certain conditions are met as part of the notification. The model is whichever set you chose earlier, in my case faster_rcnn_inception_v2_coco. Finally, I’ve set the image processing to only report objects that are recognized as either a dog or a person.
Restart Home Assistant and you’ll now see some new entities under the image_processing domain. Using the Services page, you can call the image_processing.scan service to test that your output files are saving correctly to the specified file_out location.
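From the Services page, a call along these lines triggers a scan on demand (the entity ID is a placeholder; use whichever image_processing entity appeared after the restart):

```yaml
service: image_processing.scan
entity_id: image_processing.tensorflow_office
```

After the call completes, the entity’s state shows the number of matched objects, and the output images should appear in your file_out location.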
Keep in mind that if you’re running Home Assistant in a Docker container, the paths in your configuration refer to locations inside the container, not to where you’ve mounted the volume on the host.
For me, this means using /config/www/tensorflow/ as opposed to /opt/docker/hass-config. If you run into any issues or don’t see any files, check your logs for clues.
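To illustrate the mapping, here’s a hypothetical docker-compose fragment (image tag and host path are examples): the host directory on the left of the volume mapping appears as /config inside the container, which is why the file_out paths above start with /config.

```yaml
# docker-compose.yml (sketch; paths are examples)
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    volumes:
      # host path : container path
      - /opt/docker/hass-config:/config
```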
Automating with Node-RED
Once you’ve tested the service, it’s time to automate the alerting. I use Node-RED and Telegram for my automation and notification platforms. For this automation, I’m using Yi cameras and Xiaomi motion sensors. Both devices are inexpensive and easily integrate with Home Assistant without relying on additional gateways or apps (find out how to get rid of your Xiaomi hub with Zigbee2mqtt).
In Node-RED, I have a simple flow that does the following:
- Check for movement in the office
- When movement is detected, call the image processing scan service.
- Check if I’m home using the person component. If I am, stop the flow.
- Check if the image processing scan identified a person. TensorFlow sets the state of the image processing entity to the number of identified objects. In my case, I’m always expecting at least 1 for something I care about (either a dog or a person). If neither of those objects is found, the state remains at 0. In this node, I’m checking the state and halting the flow if it’s 0.
- Finally, if all the above conditions are met, the latest image is sent via Telegram which I receive on my mobile device or in any Chrome browser I’m signed into. Telegram offers some advanced features but for me, I just send the latest image capture with a generic message that potentially suspicious activity was detected in the applicable location.
See the full Node-RED flow on Pastebin.
With the basic flow in place, I can easily copy/paste across any other cameras/locations I’m using.
Initially, I used Home Assistant’s native automation to call the image processing scan service but had too many issues with latency. With Node-RED, I typically receive the image within two seconds of conditions being met.
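If you’d rather stay with native automations despite the latency, the same logic might look roughly like this. This is a sketch only; the entity IDs, notifier name, and file path are placeholders, and the Telegram photo payload follows the standard notify data format.

```yaml
# Sketch of the flow as a native Home Assistant automation
automation:
  - alias: Office motion TensorFlow alert
    trigger:
      - platform: state
        entity_id: binary_sensor.office_motion  # placeholder motion sensor
        to: "on"
    condition:
      # Only alert when I'm away
      - condition: state
        entity_id: person.me  # placeholder person entity
        state: not_home
    action:
      - service: image_processing.scan
        entity_id: image_processing.tensorflow_office
      # Halt unless TensorFlow counted at least one person or dog
      - condition: template
        value_template: "{{ states('image_processing.tensorflow_office') | int > 0 }}"
      - service: notify.telegram
        data:
          message: Potentially suspicious activity detected in the office
          data:
            photo:
              - file: /config/www/tensorflow/office_latest.jpg
```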
So there you have it. Endless possibilities of alerting and monitoring with a few simple clicks. Before implementing TensorFlow, I had disabled motion based alerting altogether because of the high volume of false positives. Since implementing it, I haven’t had a single false positive and performance has been rock solid.
This is only the beginning and as always, I’m blown away by the power of Home Assistant and the ease of use (all things considered).