BI makes it easy to access them with the short names. The 23456 port was one I selected for the built-in BI web server. Keep the naming scheme consistent: you'll understand why later in the guide. I just use 3-letter acronyms for the cameras, e.g. BYC for Back Yard Camera, FDC for Front Door Camera, etc.

Just follow the instructions to get this up and running. As far as configuration goes, I have this in my configuration.yaml file:

```yaml
image_processing: !include_dir_merge_list image_processing
```

There is also an image_processing directory in the root configuration directory, i.e. /usr/share/hassio/homeassistant/image_processing on my Hass.io Ubuntu installation. In that directory I have one yaml file for each camera that I'm pulling. In this case we'll be using amazon_rekognition_fdc.yaml as a reference point, but there's also amazon_rekognition_bdc.yaml, amazon_rekognition_byc.yaml, etc. The contents of the file look like this:

```yaml
- platform: amazon_rekognition
  aws_access_key_id: !secret aws_access_key
  aws_secret_access_key: !secret aws_secret_key
  save_file_folder: /config/www/amazon_rekognition
```

You'll also need to create that snapshot directory. In my instance that would be done with the following:

```shell
mkdir -p /usr/share/hassio/homeassistant/www/amazon_rekognition
```

Note that each instance references a corresponding camera; the rest of them are identical minus the camera. You'll need to restart Home Assistant after adding the custom component. After it's restarted, you should see a new entity for each one you created. If you look at the entity details, it'll list out every object that was detected along with a level of confidence.
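For context, here's a sketch of what a complete per-camera file like amazon_rekognition_fdc.yaml might look like. The region_name value and the source entity are assumptions drawn from the custom component's documented options rather than from my actual config, so adjust them to your own setup:

```yaml
# Sketch of a per-camera Amazon Rekognition config.
# region_name and entity_id below are assumptions — swap in your own.
- platform: amazon_rekognition
  aws_access_key_id: !secret aws_access_key
  aws_secret_access_key: !secret aws_secret_key
  region_name: us-east-1              # assumption: use your AWS region
  save_file_folder: /config/www/amazon_rekognition
  source:
    - entity_id: camera.fdc           # assumption: the Front Door Camera entity
```

Each of the other files would be identical apart from the entity_id it points at.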
I've given up on the native image processing in Home Assistant. They all have nuances and dependencies, e.g. TensorFlow not working on a Hass.io install. The custom component is pretty damn seamless, and the Rekognition processing is as fast as native processing.

Step #1: Configure the Cameras in Blue Iris and Home Assistant

Home Assistant pulls the camera feeds directly from Blue Iris instead of from the PoE cameras. I have a separate yaml file for each of the 4 PoE cameras. I'd highly recommend !secret here, but I'll show you a mock-up of what it looks like. In the configuration.yaml file I have a corresponding include, and in the camera directory I have a yaml for each camera, with most looking like this, e.g. bdc.yaml:

```yaml
- platform: mjpeg
```

Again, one yaml per camera.
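As a sketch of how those pieces might fit together: the include line below mirrors the image_processing one, and the bdc.yaml body shows an MJPEG feed pulled from Blue Iris. The host IP, credential names, and URL path are assumptions (Blue Iris typically serves per-camera MJPEG streams at /mjpg/&lt;short-name&gt;/video.mjpg on its web server port), so treat this as a starting point rather than a drop-in config:

```yaml
# configuration.yaml — assumed include, mirroring the image_processing one:
# camera: !include_dir_merge_list camera

# camera/bdc.yaml — sketch of an MJPEG camera pulled from Blue Iris.
# The IP, port path, and secret names below are assumptions.
- platform: mjpeg
  name: bdc
  mjpeg_url: http://192.168.1.10:23456/mjpg/BDC/video.mjpg
  username: !secret blueiris_user
  password: !secret blueiris_pass
```

This is where the consistent 3-letter short names pay off: the camera name, the Blue Iris short name, and the file name all line up.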
You'll need to restart Node-RED after installing it. This provides you with the steps for integrating Blue Iris with Home Assistant. Yes, it's paid software that requires Windows, but I have yet to find anything remotely close to its functionality for the price.
This is the latest iteration of the process I'm using for image detection and notifications. You'll find copypasta from my previous guides just to avoid reinventing the wheel. This demonstrates how you can integrate Blue Iris, Amazon Rekognition, and Pushover to do person detection. Best of all, it'll send inline snapshots from the camera via Pushover when a person is detected. This is less about writing a de facto guide and more about providing the process so it can be adapted as components change over time. I'm running Hass.io under Ubuntu for the additional horsepower. This is also much easier than maintaining all of the stand-alone Docker containers that I was previously using. All of the automations are done through Node-RED. The stock Pushover component for Home Assistant hasn't been updated to support image attachments, but the Node-RED module seems to be actively maintained and supports everything we'll need. Simply go to Manage palette, click on Install, search for node-red-contrib-pushover, and install the module.
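If you prefer the command line over the palette UI, Node-RED nodes can also be installed with npm from the Node-RED user directory. This assumes a stock install with the default ~/.node-red location (the Hass.io Node-RED add-on manages its palette differently, so use the UI there):

```shell
# Install the Pushover nodes from the Node-RED user directory.
# Assumption: default ~/.node-red location on a stock install.
cd ~/.node-red
npm install node-red-contrib-pushover
# Restart Node-RED afterwards so the new nodes are registered.
```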
I've got a few guides that look fairly similar to this: Blue Iris with Pushover and Blue Iris with iOS. Why? I'm hesitant to nuke old content, and there is no easy way to update them since I've changed the setup a few times, along with underlying components changing on me.