Sunday, November 13, 2022

The easiest way yet to house a remote Z-wave controller

I have HomeAssistant running on a server. But the Z-wave controller -- a USB dongle -- needs to be centrally located in the building.

I used a Raspberry Pi 3 at that central location, and it has an ethernet cable running through the wall to the server. So the network is reliable.

In the past, I've run Raspbian on the Pi. For a couple of years I ran USBIP. Then I ran a docker container of zwave2js. But both suffered from the same problem: every month or so I needed to remember to log in to the Pi and perform maintenance. The docker container in particular would get stale and break the connection to HomeAssistant.

So we're trying something new:

  • Replacing Raspbian with Ubuntu Core, which will update automatically.
  • Replacing the docker container with a Snap package, which will also update automatically.

This turned out to be much easier than I expected:

  1. Install Ubuntu Core on a Pi
  2. Install and configure the Zwave-JS-UI snap

Installing the snap was literally this easy:

sudo snap install zwave-js-ui
sudo snap start zwave-js-ui

And then open a web browser to port 3000 on the Pi. All configuration is done through the web UI. And HomeAssistant picked up the data immediately.

Bridging LXD Containers in Ubuntu Core

I'm setting up a set of server containers on an Ubuntu Core 22 base.

This differs from deb-based Ubuntu in several ways.

The hardware is a salvaged laptop motherboard, without keyboard or monitor.

Thursday, October 7, 2021

Installing Ubuntu Core onto 64-bit Bare Metal

I have a re-purposed AMD64 laptop motherboard, ready to become an experimental Ubuntu Core server.

It's in fine condition. You can see that it boots an Ubuntu LiveUSB's "Try Ubuntu" environment just fine. Attached to the motherboard is a new 60GB SSD for testing. The real server will use a 1TB HDD.

But Ubuntu Core doesn't install on bare metal from a Live USB. It's still easy, though.

1. Boot a "Try Ubuntu" Environment on the target system.

  • Test your network connection. The picture shows a wireless connection. This particular laptop has a wireless chip that is recognized out of the box, so I didn't need to get out the long network cable.
  • Test that your storage device works. You can see in the picture that Gnome Disks can see the storage device.

2. Terminal: sudo fdisk -l. Locate the storage device that you want to install Ubuntu Core onto.

  • The entire storage device will be erased.
  • My storage device is at /dev/sda today. It might be different next boot. Yours might be different.
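Since device names can shuffle between boots, a quick cross-check of size and model is cheap insurance before pointing dd at anything. This is a generic double-check, not part of the original steps; lsblk is available in the "Try Ubuntu" environment:

```shell
# List only whole disks (-d), with the size and model of each,
# so you can confirm which device is the 60GB test SSD.
lsblk -d -o NAME,SIZE,MODEL
```

If the SSD shows up under a different name than last boot, adjust the dd target in step 4 accordingly.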

3. Open the web browser and download Ubuntu Core.

4. Write Ubuntu Core to the storage device.

  • Warning: This command will erase your entire storage device. If there is anything valuable on your storage device, then you have skipped too many steps!
    xzcat Downloads/<.img.xz file> | sudo dd of=/dev/<target_storage_device> bs=32M status=progress; sync
  • So mine was
    xzcat Downloads/ubuntu-core-20-amd64.img.xz | sudo dd of=/dev/sda bs=32M status=progress; sync
  • Source: https://ubuntu.com/download/intel-nuc

5. Reboot into Ubuntu Core.

  • When prompted by the "Try Ubuntu" environment, remove the LiveUSB so you are booting from your newly-written storage device.
  • Be patient. My first boot into Ubuntu Core led to a black screen for nearly a minute before the system acknowledged that it had actually been working the entire time.
  • After 3-4 minutes of non-interactive setup alternating between blank screens and scrolling setup output, Ubuntu Core finally asked me two questions: which network to connect to, and my Ubuntu SSO e-mail address.
  • Finally, the system rebooted again. This time it didn't ask any question - just displayed the new Ubuntu Core system's IP address.

6. Log into Ubuntu Core.

    On my Desktop:
    me@Desktop:~$ ssh me@192.168.1.x
    Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-77-generic x86_64)
Success: A working Ubuntu Core on bare metal.

Wednesday, November 25, 2020

Basic SNMP for HomeAssistant

I have a lovely OKI MB480 printer. It's been reliable for 10 years. And I want to display its status in HomeAssistant.

Like this...


The printer speaks SNMP and Home Assistant has an SNMP Sensor, so let's learn some SNMP and find a way to make the two talk to each other.

SNMP keeps its overhead low by not transmitting a lot of information. What is transmitted is compressed by encoding. Here's an example:

$ snmpget -v 1 -c public 10.10.10.3 .1.3.6.1.4.1.2001.1.1.1.1.2.20.0
iso.3.6.1.4.1.2001.1.1.1.1.2.20.0 = STRING: "Ready To Print/Power Save"

Each of those numbers has meaning, so you need to know exactly what to ask for. There is also a client-server (manager-agent) arrangement to figure out (and install), three different versions of SNMP to choose among, and finally the job of migrating a successful query into a Home Assistant configuration.
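As an aside, that long dotted number is called an OID, and its prefix is the same for nearly every SNMP device; only the tail is vendor-specific. Here's a quick decode of the example above (the subtree meanings are standard SNMP structure, not something from the printer's documentation):

```shell
# Pick apart the example OID from the query above.
oid=".1.3.6.1.4.1.2001.1.1.1.1.2.20.0"
# .1.3.6.1  = iso.org.dod.internet   (the root of nearly every OID)
# .4.1      = private.enterprises    (the vendor-specific subtree)
# .2001     = the vendor's registered enterprise number
# the rest  = vendor-defined: which table, column, and row to read
echo "$oid" | cut -d. -f2-5    # prints the universal prefix: 1.3.6.1
```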



How to ask SNMP a question

The printer has a built in SNMP agent (server). Let's install an SNMP manager (client) on my laptop.

$ sudo apt install snmp

Now we can make two simple queries: walk (return a whole tree) and get (return one item). The tree may be quite lengthy -- on this printer, it's 1900 lines.

$ snmpwalk 10.10.10.3
snmpwalk: No securityName specified

Oops, we are missing two more elements:

  • A version number. We're going to stick with version 1, the easiest.
  • A community name. This is somewhat like a username; it defines access. Communities get replaced by real usernames and passwords in version 3. The most common community name is "public".

These are defined by the remote agent (server). For example, the printer supports v1 and v3, but not v2.

$ snmpget -v 1 -c public 10.10.10.3 .1.3.6.1.2.1.1.5.0
iso.3.6.1.2.1.1.5.0 = STRING: "OKI-MB480-224E59"

$ snmpwalk -v 1 -c public 10.10.10.3 > walkfile     // Use redirection for lengthy output


Finding the right question to ask

Now that we have connectivity, we need a dictionary to understand all those number encodings. That dictionary is called a MIB file. It's a structured text file that defines all of the numbers and positions and response codes.

  1. The SNMP package that we installed has MIBs disabled by default. Enable them.
    • Edit the /etc/snmp/snmp.conf file
    • Comment out the "mibs :" line

  2. Install the package of standard MIB files.
       sudo apt install snmp-mibs-downloader

The MIB for my printer wasn't in the package. I found it online, downloaded it, and stored it in /home/$ME/.snmp/mibs/. The snmp command automatically looks for MIBs there, too.

Here's the same query using the proper MIB as a dictionary:

$ snmpget -v 1 -c public -m OKIDATA-MIB 10.10.10.3 sysName.0
SNMPv2-MIB::sysName.0 = STRING: OKI-MB480-224E59

$ snmpget -v 1 -c public -m OKIDATA-MIB -O n 10.10.10.3 sysName.0    // '-O' formats output. 'n'=numeric
.1.3.6.1.2.1.1.5.0 = STRING: OKI-MB480-224E59

So now it's a matter of using snmpwalk to locate the fields that I want to ask for. I chose three fields:

  • Current Status: OKIDATA-MIB::stLcdMessage.0
  • Drum Usage: OKIDATA-MIB::usageDrumCurrentLevel.1
  • Toner Percent Remaining: OKIDATA-MIB::usageTonerCurrentLevel.1

Obtain the corresponding numeric code (called an OID) for each field using the -O n flag, and test the OID without the MIB.

$ snmpget -v 1 -m OKIDATA-MIB -c public -O n 10.10.10.3 usageDrumCurrentLevel.1
.1.3.6.1.4.1.2001.1.1.1.1.100.4.1.1.3.1 = STRING: "2298"

$ snmpget -v 1 -c public 10.10.10.3 .1.3.6.1.4.1.2001.1.1.1.1.100.4.1.1.3.1
SNMPv2-SMI::enterprises.2001.1.1.1.1.100.4.1.1.3.1 = STRING: "2298"


Migrating a successful query into Home Assistant

Here's what the same SNMP query looks like in a Home Assistant config:

sensor:
  - platform: snmp
    version: 1                # Optional: Default is 1
    community: public         # Optional: Default is public
    host: 10.10.10.3
    baseoid: .1.3.6.1.4.1.2001.1.1.1.1.100.4.1.1.3.1
    name: Printer Drum Remaining
    unit_of_measurement: '%'
    # A drum lasts about 25,000 impressions. Convert usage to a percentage of 25,000
    value_template: '{{ 100 - ((value | int) / 250.00) | int }}'
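The template's arithmetic is easy to sanity-check in a shell before trusting it on the dashboard. Using the raw reading of 2298 from the earlier snmpget:

```shell
# Reproduce the value_template math: usage / 250 = percent of a
# 25,000-impression drum used; subtract from 100 for percent remaining.
usage=2298
echo $(( 100 - usage / 250 ))    # prints 91
```

So 2298 impressions on the drum shows up as 91% remaining, matching what the sensor displays.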

Sunday, August 16, 2020

Installing Home Assistant Core in an LXD Container (Part 2)

Last time, we built a basic LXD container, and then built HomeAssistant inside.

This time, we're going to add a few more elements.

  • We're going to do all the steps on the Host instead of diving inside the container. So we're going to use lxc exec and lxc file push. The goal is to make spinning up a new container scriptable.
  • We're going to start/stop the HomeAssistant application using a systemd service
  • We're going to keep the data and config outside the container and use an lxd disk device to mount the data. Even if we destroy the container, the data and config survive to be mounted another day.

Preparing LXD

We're going to skip LXD initialization in this example. There's one addition from last time: We're going to add shiftfs, which permits us to chown mounted data. The macvlan profile and shiftfs enablement are persistent -- if you already have them, you don't need to redo them. All of these commands occur on the Host (we have not created the container yet!)

   # Create a macvlan profile, so the container will get its IP address from
   # the router instead of the host. This works on ethernet, but often not on wifi
   ip route show default 0.0.0.0/0
   lxc profile copy default lanprofile
   lxc profile device set lanprofile eth0 nictype macvlan
   lxc profile device set lanprofile eth0 parent enp3s5

   # Test that macvlan networking is set up
   lxc profile show lanprofile
     config: {}
     description: Default LXD profile  // Copied. Not really the default
     devices:
       eth0:                           // Name, not real device
         nictype: macvlan              // Correct network type
         parent: enp3s5                // Correct real device
         type: nic

   # Enable shiftfs in LXD so data mounts work properly
   sudo snap set lxd shiftfs.enable=true
   sudo systemctl reload snap.lxd.daemon

   # Test that shiftfs is enabled:
   Host$ lxc info | grep shiftfs
    shiftfs: "true"

Create the Container and Initial Configuration

If LXD is already set up, then start here. We will mount the external data location, set the timezone and do all that apt setup. But this time, we will do all the commands on the Host instead of inside the container. We will also create the sources.list file on the host and push it into the container.

   # Create the container named "ha"
   lxc launch -p lanprofile ubuntu:focal ha

   # Mount the existing HomeAssistant data directory
   # Skip on the first run, since there won't be anything to mount
   # Shiftfs is needed, else the mounted data is owned by nobody:nogroup
   # Chown is needed because shiftfs changes the owner to 'ubuntu'
   lxc config device add ha data_mount disk source=/somewhere/else/.homeassistant path=/root/ha_data
   lxc config device set ha data_mount shift=true
   lxc exec ha -- chown -R root:root /root

   # Set the timezone non-interactively
   lxc exec ha -- ln -fs /usr/share/zoneinfo/US/Central /etc/localtime
   lxc exec ha -- dpkg-reconfigure -f noninteractive tzdata

   # Reduce apt sources to Main and Universe only
   # Create the new sources.list file on the host in /tmp
   # Paste all of these lines at once into the Host terminal
   cat <<EOF > /tmp/container-sources.list
   deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
   deb http://security.ubuntu.com/ubuntu focal-security main universe
   EOF

   # Push the file into the container
   lxc file push /tmp/container-sources.list ha/etc/apt/sources.list

   # Apt removals and additions
   lxc exec ha -- apt autoremove openssh-server
   lxc exec ha -- apt update
   lxc exec ha -- apt upgrade
   lxc exec ha -- apt install python3-pip python3-venv

Create the Venv, Build HomeAssistant, and Test

This method is simpler than all that mucking around activating a venv and paying attention to your prompt. All these commands are issued on the Host. You don't need a container shell prompt.

   # Setup the homeassistant venv in a dir called 'ha_system'
   # We will use the root account since it's an unprivileged container.
   lxc exec ha -- python3 -m venv --system-site-packages /root/ha_system

   # Build and install HomeAssistant
   lxc exec ha -- /root/ha_system/bin/pip3 install homeassistant

   # Learn the container's IP address. Need this for the web browser. 
   lxc list | grep ha

   # Run HomeAssistant
   lxc exec ha -- /root/ha_system/bin/hass -c "/root/ha_data"

   # Use your browser to open the IP address:8123
   # HA takes a couple minutes to start up. Be patient.
   # Stop the server from within the Web UI or ^C to exit when done.

Start HomeAssistant at Boot (Container Startup)

The right way to do autostart is a systemd service file on the container. Like with the sources.list file, we will create it on the host, then push it into the container, then enable it. There's one optional ExecStartPre line - it will slow each startup slightly while it checks for and installs updates.

   cat <<EOF > /tmp/container-homeassistant.service
   [Unit]
   Description=Home Assistant
   After=network-online.target

   [Service]
   Type=simple
   User=root
   PermissionsStartOnly=true
   ExecStartPre=/root/ha_system/bin/pip3 install --upgrade homeassistant
   ExecStart=/root/ha_system/bin/hass -c "/root/ha_data"

   [Install]
   WantedBy=multi-user.target
   EOF

   # Push the .service file into the container, and enable it
   lxc file push /tmp/container-homeassistant.service ha/etc/systemd/system/homeassistant.service
   lxc exec ha -- systemctl --system daemon-reload
   lxc exec ha -- systemctl enable homeassistant.service
   lxc exec ha -- systemctl start homeassistant.service

Now we can test it. The last command should start HA. The same command with 'stop' should gracefully stop HA. Restarting the container should gracefully stop HA, and then restart it automatically. Your web browser UI should pick up each stop and start. You did it!


Final Notes

Remember how you started without any HomeAssistant data to mount? Now that you have a running HA Core, you can save a set of data:

   lxc file pull ha/root/ha_data /somewhere/else/.homeassistant --recursive

And remember to clean up your mess when you are done:

   lxc stop ha
   lxc delete ha

Saturday, August 15, 2020

Installing Home Assistant Core in an LXD Container (Part 1)

I've been running HomeAssistant Core reliably in an LXD container for almost two years now, so it's probably time to start detailing how to do it.

This is a step-by-step example of how to do it for folks who aren't very familiar with LXD containers and their features.

Installing LXD (documentation)

If you haven't used LXD before, you need to install it (it's a Snap) and initialize it (tell it where the storage is located). The initialization defaults are sane, so you should not have problems.

   sudo snap install lxd
   sudo lxd init

Container Profile: Macvlan Networking (optional)

A macvlan profile is one easy way for the container to get its IP address from the router instead of the host. This means you can use a MAC Address filter to issue a permanent IP address. This works on ethernet, but often not on wifi. You only need to set up this profile ONCE, and it's easiest to do BEFORE creating the container. Since the container doesn't exist yet, all of these commands are done on the Host.

   # Get the real ethernet device (enp3s5 or some such)
   ip route show default 0.0.0.0/0

   # Make mistakes on a copy
   lxc profile copy default lanprofile

   # Change nictype field to macvlan
   #  'eth0' is a virtual device, not a real eth device
   lxc profile device set lanprofile eth0 nictype macvlan

   # Change parent field to real eth interface
   lxc profile device set lanprofile eth0 parent enp3s5

Create the Container

Create a new container named 'ha'. This command is done on the Host.

   # Create the container named "ha"
   lxc launch -p lanprofile ubuntu:focal ha

   # Learn the container's IP address. Need this for the web browser. 
   lxc list | grep ha

   # Get a root shell prompt inside the container
   lxc shell ha

Initial Setup in the Container

Let's get a shell and set up the timezone and apt. These commands are done at the Container root prompt.


   // This is one way to set the timezone
   dpkg-reconfigure tzdata

   // Reduce apt sources to Main and Universe only
   cat <<EOF > /etc/apt/sources.list
   deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
   deb http://security.ubuntu.com/ubuntu focal-security main universe
   EOF

   // Tweak: Remove openssh-server
   apt autoremove openssh-server

   // Populate the apt package database and bring the container packages up-to-date
   apt update
   apt upgrade
   
   // Install the python packages needed for HomeAssistant
   apt install python3-pip python3-venv

   # Setup the homeassistant venv in the root home dir (/root)
   # --system-site-packages allows the venv to use the many deb packages that are already
   #    installed as dependencies instead of downloading pip duplicates
   python3 -m venv --system-site-packages /root

Install and Run HomeAssistant

Now we move into a virtual environment inside the container, build HomeAssistant, and give it a first run. If you try to build or run HomeAssistant outside the venv, it will fail with cryptic errors.

   // Activate the installed venv. Notice how the prompt changes.
   root@ha:~# source bin/activate
   (root) root@ha:~#
   
   // Initial build of HomeAssistant. This takes a few minutes.
   (root) root@ha:~# python3 -m pip install homeassistant

   // Instead of first build, this is where you would upgrade
   (root) root@ha:~# python3 -m pip install --upgrade homeassistant

   // Initial run to set up and test.
   (root) root@ha:~# hass

   // After a minute or two, open the IP Address (port 8123). Example: http://192.168.1.18:8123
   // Use the Web UI to shut down the application. Or use CTRL+C.

   // Exit the venv
   (root) root@ha:~# deactivate

   // Exit the container and return to the Host shell.
   root@ha:~# exit
   Host:~$


There's a lot more to talk about in future posts:

  • The systemd service that starts HomeAssistant at container startup.
  • Creating an LXD disk device to keep the HomeAssistant data in. If I rebuild the container for some reason, I can simply connect it to the data.
  • Adding a USBIP client. The Z-Wave controller is elsewhere in the building, and USBIP lets me control it like it's attached to the host. That also means adding a USB device to the container.
  • Collecting Host heartbeat statistics for the HomeAssistant dashboard, and pushing those into the container regularly.
  • Backing up and restoring HomeAssistant data and configurations.

Friday, August 14, 2020

LXD Containers on a Home Server

LXD Containers are very handy, and I use them for quite a few services on my home hobby & fun server. Here's how I set up my containers after a year of experimenting. Your mileage will vary, of course. You may have very different preferences than I do.

1. Networking:

I use macvlan networking. It's a simple, reliable, low-overhead way to pull an IP address from the network DHCP server (router). I set the IP address of many machines on my network at the router.

With macvlan, the container and the host cannot communicate with each other over TCP/UDP. I don't mind that.

You only need to set up this profile once for all containers. Simply specify the profile when creating a new container.

   // 'Host:$' means the shell user prompt on the LXD host system. It's not a shell command

   // Learn the eth interface: enp3s5 in this example
   Host:$ ip route show default 0.0.0.0/0

   // Make mistakes on a copy
   Host:$ lxc profile copy default lanprofile

   // Change nictype field. 'eth0' is a virtual device, not a real eth device
   Host:$ lxc profile device set lanprofile eth0 nictype macvlan

   // Change parent field to real eth interface
   Host:$ lxc profile device set lanprofile eth0 parent enp3s5

   // Let's test the changes
   Host:$ lxc profile show lanprofile
     config: {}
     description: Default LXD profile  // This field is copied. Not really the default
     devices:
       eth0:                           // Virtual device
         nictype: macvlan              // Correct network type
         parent: enp3s5                // Correct real device
         type: nic
       root:
         path: /
         pool: containers-disk         // Your pool will be different, of course
         type: disk
     name: lanprofile


2. Creating a Container

Create a new container called 'newcon':

   Host:$ lxc launch -p lanprofile ubuntu:focal newcon
      // 'Host:$'        - user (non-root) shell prompt on the LXD host
      // '-p lanprofile' - use the macvlan networking profile
      // 'focal'         - Ubuntu 20.04. Substitute any release you like


3. Set the Time Zone

The default time zone is UTC. Let's fix that. Here are two easy ways to set the timezone: (source)

   // Get a root prompt within the container for configuration
   // Then use the classic Debian interactive tool:
   Host:$ lxc shell newcon
   newcon:# dpkg-reconfigure tzdata

   // Alternately, here's a non-interactive way to do it entirely on the host
   Host:$ lxc exec newcon -- ln -fs /usr/share/zoneinfo/US/Central /etc/localtime
   Host:$ lxc exec newcon -- dpkg-reconfigure -f noninteractive tzdata


4. Remove SSH Server

We can access the container from the server at any time, so most containers don't need an SSH server. Here are two ways to remove it:

   // Inside the container
   newcon:# apt autoremove openssh-server 
   
   // Or from the Host
   Host:$ lxc exec newcon -- apt autoremove openssh-server


5. Limit Apt sources to what the container will actually use

Unlike setting the timezone properly, this is *important*. If you do this right, the container will update itself automatically for as long as the release of Ubuntu is supported (mark your calendar!) If you don't get this right, you will leave yourself an ongoing maintenance headache.

   // Limit the apt sources to (in this example) main from within the container
   newcon:# nano /etc/apt/sources.list
         // The final product should look similar to:
         deb http://archive.ubuntu.com/ubuntu focal main           
         deb http://archive.ubuntu.com/ubuntu focal-updates main           
         deb http://security.ubuntu.com/ubuntu focal-security main 

   // Alternately, *push* a new sources.list file from the host.
   # Create the new sources.list file on the host in /tmp
   cat <<EOF > /tmp/container-sources.list
   deb http://us.archive.ubuntu.com/ubuntu/ focal main
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main
   deb http://security.ubuntu.com/ubuntu focal-security main
   EOF
   
   // *Push* the file from host to container
   Host:$ lxc file push /tmp/container-sources.list newcon/etc/apt/sources.list


6. Install the Application

How you do this depends upon the application and how it's packaged.



7. Update Unattended Upgrades

This is the secret sauce that keeps your container up-to-date. First, let's look at a cleaned-up version of the first 20-or-so lines of /etc/apt/apt.conf.d/50unattended-upgrades inside the container:

                    What it says                             What it means
           ------------------------------------------      -----------------------
   Unattended-Upgrade::Allowed-Origins {
           "${distro_id}:${distro_codename}";              Ubuntu:focal
           "${distro_id}:${distro_codename}-security";     Ubuntu:focal-security
   //      "${distro_id}:${distro_codename}-updates";      Ubuntu:focal-updates
   //      "${distro_id}:${distro_codename}-proposed";     Ubuntu:focal-proposed
   //      "${distro_id}:${distro_codename}-backports";    Ubuntu:focal-backports
   };

...why, those are just the normal repositories! -security is enabled (good), but -updates is disabled (bad). Let's fix that. Inside the container, that's just using an editor to remove the commenting ("//"). From the host, it's a substitution job for sed:

   Host:$ lxc exec newcon -- sed -i '/-updates/ s|^//||' /etc/apt/apt.conf.d/50unattended-upgrades
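If you want to see exactly what that substitution does before pointing it at a container, you can run it against a throwaway copy of the relevant lines. Everything here is local and temporary:

```shell
# Make a throwaway copy of the relevant lines of 50unattended-upgrades
cat > /tmp/50unattended-upgrades.sample <<'EOF'
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
//      "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};
EOF

# Strip the leading "//" only from lines mentioning -updates
sed -i '/-updates/ s|^//||' /tmp/50unattended-upgrades.sample

# -updates is now enabled; -proposed and -backports stay commented out
grep 'codename}-' /tmp/50unattended-upgrades.sample
```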

Third-party sources need to be updated, too. This is usually easiest from within the container. See this post for how and where to update Unattended Upgrades with the third-party source information.



8. Mounting External Media

Some containers need disk access. A classic example is a media server that needs access to that hard drive full of disorganized music.

If the disk is available across the network instead of locally, then use plain old sshfs or samba to mount the network share in /etc/fstab.
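For the network case, an /etc/fstab entry on the Host does the mounting at boot. Here's a hypothetical sshfs line; the user, host, paths, and key file are placeholders, not from this post:

```
# Hypothetical sshfs entry in the Host's /etc/fstab -- substitute your own
# user, host, paths, and key file
me@nas.lan:/srv/music  /media/music  fuse.sshfs  _netdev,allow_other,IdentityFile=/home/me/.ssh/id_rsa  0  0
```

Once the share is mounted on the Host, it can be handed to the container with an lxd disk device just like a local disk.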

If the disk is local, then first mount it on the Host. After it's mounted, use an lxd disk device inside the container. A disk device is an all-in-one service: It creates the mount point inside the container and does the mounting. It's persistent across reboots...as long as the disk is mounted on the host.

   // Mount disk on the host and test
   Host:$ sudo mount /dev/sda1 /media
   Host:$ ls /media
      books         movies       music

   // Create disk device called "media_mount" and test
   Host:$ lxc config device add newcon media_mount disk source=/media path=/Shared_Media
   Host:$ lxc exec newcon -- ls /Shared_Media
      books         movies       music

If the ownership of files on the disk is confused, and you get "permission denied" errors, then use shiftfs to shift the file ownership into the container's view:

   Host:$ lxc exec newcon -- ls /Shared_Media/books
      permission denied

   // Enable shiftfs in LXD, reload the lxd daemon, and test
   Host$ sudo snap set lxd shiftfs.enable=true
   Host$ sudo systemctl reload snap.lxd.daemon
   Host$ lxc info | grep shiftfs
    shiftfs: "true"

   // Add shiftfs to the disk device
   Host$ lxc config device set newcon media_mount shift=true

   Host:$ lxc exec newcon -- ls /Shared_Media/books
      boring_books       exciting_books        comic_books        cookbooks