Tuesday, August 20, 2019

Toggling the Minecraft Server using systemd features

The new school year is upon us, and suddenly the kids are playing Minecraft much less.

This means that the Minecraft server now sits there churning all day and night, spawning and unspawning, eating CPU and generating heat for a sparse collection of occasional players. It's an old Sempron 145 (45 W, single core), so even a single world sitting idle consumes 40% CPU.

We already use systemd to start and stop the server. Let's add a couple of new features to stop the server during the school day. Oh, and let's stop it during the deep night, also.

Here's what we currently have: A basic start/stop/restart systemd service that brings up the server at start:

   ## /etc/systemd/system/minecraft.service

   [Unit]
   Description=Minecraft Server
   After=network.target

   [Service]
   RemainAfterExit=yes
   WorkingDirectory=/home/minecraft
   User=minecraft
   Group=minecraft

   # Start Screen, Java, and Minecraft
   ExecStart=screen -S mc -d -m java -server -Xms512M -Xmx1024M -jar server.jar nogui

   # Tell Minecraft to gracefully stop.
   # Ending Minecraft will terminate Java
   # systemd will kill Screen after the 10-second delay. No explicit kill for Screen needed
   ExecStop=screen -p 0 -S mc -X eval 'stuff "say SERVER SHUTTING DOWN. Saving map..."\\015'
   ExecStop=screen -p 0 -S mc -X eval 'stuff "save-all"\\015'
   ExecStop=screen -p 0 -S mc -X eval 'stuff "stop"\\015'
   ExecStop=sleep 10

   [Install]
   WantedBy=multi-user.target


If you do something like this, remember to:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable/disable minecraft.service  // Autostart at boot
    $ sudo systemctl start/stop minecraft.service      // Manual start/stop


We need to start with a little bit of planning. After looking at the myriad hours and days that the server should be available (Summer, Holidays, Weekends, School Afternoons), I don't see a way to make all those work smoothly together inside a cron job or systemd timer.

Instead, let's move the logic into a full-fledged Python script, and let the script decide whether the server should be on or off. Our systemd timer will run the script periodically.

Wait...that's not right. Systemd timers run only services. So the timer must trigger a service, the service runs the script, the script decides if the server should be on or off, and uses the existing service to do so.

Let's draw that out:

minecraft-hourly.timer -+  (timers can only run services)
                        |
                        v
                minecraft-hourly.service -+  (service can run a script)
                                          |
                                          v
                                   minecraft-hourly.py -+ (start/stop logic and decision)
                                                        |
                                                        v
                                                 minecraft.service (start/stop the server)


We know where we're going, so let's work backward to get there. We need a Python script with logic, and the ability to decide whether the server should be off or on at any given time or date.

## /home/me/minecraft-hourly.py

#!/usr/bin/env python3
import datetime, subprocess

def ok_to_run_server():
    """Determine if the server SHOULD be up"""

    now = datetime.datetime.now()

    ## All days, OK to run 0-2, 5-8, 16-24
    if -1 < now.hour < 2 or 4 < now.hour < 8 or 15 < now.hour < 24:
        return True

    ## OK to run on weekends -- now.weekday() is 5 (Sat) or 6 (Sun)
    if now.weekday() >= 5:
        return True

    ## OK to run during Summer Vacation (usually mid May - mid Aug)
    if 5 < now.month < 8:
        return True 
    if now.month == 5 and now.day > 15:
        return True
    if now.month == 8 and now.day < 15:
        return True

    ## OK to run on School Holidays 2019-20
    ## Fill in these holidays!
    school_holidays = ["Aug 30 Fri","Sep 02 Mon"]
    if now.strftime("%b %d %a") in school_holidays:
        return True

    return False

def server_running():
    """Determine if the Minecraft server is currently up"""
    cmd = '/bin/systemctl is-active minecraft.service'
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    return proc.communicate()[0].decode().strip() == 'active'

def run_server(run_flag=True):
    """run_flag=True will start the service. False will stop the service"""
    cmd = '/bin/systemctl start minecraft.service'
    if not run_flag:
        cmd = '/bin/systemctl stop minecraft.service'
    proc = subprocess.Popen(cmd, shell=True,stdout=subprocess.PIPE)
    proc.communicate()
    return

## If the server is stopped, but we're in an ON window, then start the server
if ok_to_run_server() and not server_running():
    run_server(True)

## If the server is running, but we're in an OFF window, then stop the server
elif not ok_to_run_server() and server_running():
    run_server(False)
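Since the time windows are the fiddly part, it's worth spot-checking them without waiting for the clock. Here's a minimal sketch of the same logic, refactored to take the time as a parameter; the dates and holiday list are just examples:

```python
#!/usr/bin/env python3
# Sketch: the schedule logic above, with 'now' passed in so each
# window can be checked against a fixed datetime.
import datetime

def ok_to_run(now):
    """True if the server SHOULD be up at datetime 'now'."""
    if now.hour < 2 or 4 < now.hour < 8 or now.hour > 15:   # daily windows
        return True
    if now.weekday() >= 5:                                  # Sat=5, Sun=6
        return True
    if 5 < now.month < 8:                                   # June, July
        return True
    if (now.month == 5 and now.day > 15) or (now.month == 8 and now.day < 15):
        return True                                         # mid May / mid Aug
    holidays = ["Aug 30 Fri", "Sep 02 Mon"]                 # fill these in!
    return now.strftime("%b %d %a") in holidays

# Spot-check a school-year Wednesday (Sep 4, 2019) and some edge cases
wed = datetime.datetime(2019, 9, 4)
assert ok_to_run(wed.replace(hour=1))                  # late-night window
assert not ok_to_run(wed.replace(hour=10))             # school hours: off
assert ok_to_run(wed.replace(hour=16))                 # after school
assert ok_to_run(datetime.datetime(2019, 9, 7, 10))    # Saturday
assert ok_to_run(datetime.datetime(2019, 8, 30, 10))   # listed holiday (Fri)
```

If the boundaries change later, only this pure function needs re-testing; the systemd plumbing stays the same.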


This script should be executable, and since it tells systemctl to start/stop services, it should be run using sudo. Let's try this during school hours on a school day:

    $ chmod +x /home/me/minecraft-hourly.py

    $ sudo /home/me/minecraft-hourly.py
        // No output

    $ systemctl status minecraft.service 
        ● minecraft.service - Minecraft Server
           Loaded: loaded (/etc/systemd/system/minecraft.service; enabled; vendor preset: enabled)
           Active: inactive (dead)
        // It worked!
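As an aside, the Popen/communicate pairs in the script can be written more compactly with subprocess.run (Python 3.5+). A sketch of the same is-active check, with the parsing split out so it can be tested on its own:

```python
#!/usr/bin/env python3
# Sketch: subprocess.run version of the is-active check from the script
import subprocess

def parse_is_active(output):
    """Interpret the stdout of 'systemctl is-active'."""
    # systemctl prints one word: active, inactive, failed, activating, ...
    return output.strip() == 'active'

def server_running():
    """True if minecraft.service is currently active."""
    proc = subprocess.run(
        ['/bin/systemctl', 'is-active', 'minecraft.service'],
        stdout=subprocess.PIPE, universal_newlines=True)
    return parse_is_active(proc.stdout)
```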


Still working backward, let's create the systemd service that runs the script. The 'Type' is 'oneshot' - this is not an always-available daemon. It's a script that does its job, then terminates.

## /etc/systemd/system/minecraft-hourly.service

[Unit]
Description=Minecraft shutdown during school and night
After=network.target

[Service]
Type=oneshot
ExecStart=/home/me/minecraft-hourly.py
StandardOutput=journal

[Install]
WantedBy=multi-user.target


We want the hourly script to be triggered by TWO events: Either the hourly timer OR by the system starting up. This also means that we DON'T want minecraft.service to automatically start anymore. We want the script to automatically start, and to decide.

    $ sudo systemctl daemon-reload                     // We added a new service
    $ sudo systemctl enable minecraft-hourly.service   // Run at boot
    $ sudo systemctl disable minecraft.service         // No longer needs to run at boot


Let's test it again during school hours. It should shut down the Minecraft server. It did.

    $ sudo systemctl start minecraft.service       // Wait for it to finish loading (1-2 minutes)
    $ sudo systemctl start minecraft-hourly.service
    $ systemctl status minecraft.service
        ● minecraft.service - Minecraft Server
           Loaded: loaded (/etc/systemd/system/minecraft.service; disabled; vendor preset: enabled)
           Active: inactive (dead)



Finally, let's set up a systemd timer to launch the hourly service...well, hourly.

## /etc/systemd/system/minecraft-hourly.timer:

[Unit]
Description=Run the Minecraft script hourly

[Timer]
OnBootSec=0min
OnCalendar=*-*-* *:00:00
Unit=minecraft-hourly.service

[Install]
WantedBy=timers.target


Writing a timer, like writing a service, isn't enough. Remember to activate them.

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable minecraft-hourly.timer   // Start at boot
    $ sudo systemctl start minecraft-hourly.timer    // Start now


And let's check to see if the new timer is working

    $ systemctl list-timers | grep minecraft
        Tue 2019-08-20 15:00:30 CDT  30min left    Tue 2019-08-20 14:00:52 CDT  29min ago     minecraft-hourly.timer       minecraft-hourly.service
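For the curious, OnCalendar=*-*-* *:00:00 simply means "every top of the hour" (plus a little accuracy jitter, which is why the seconds above aren't :00). A quick sketch of that arithmetic:

```python
# Sketch: the next trigger time implied by OnCalendar=*-*-* *:00:00
import datetime

def next_top_of_hour(now):
    """Round 'now' up to the next whole hour."""
    return (now.replace(minute=0, second=0, microsecond=0)
            + datetime.timedelta(hours=1))

# The timer last ran at 14:00:52, so the next elapse is 15:00 (plus jitter)
assert next_top_of_hour(datetime.datetime(2019, 8, 20, 14, 0, 52)) \
       == datetime.datetime(2019, 8, 20, 15, 0)
```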

Home Assistant in an LXD Container, USBIP to remote USB Z-Wave dongle

This post merely ties together a few existing items, and adds a few twiddly bits specific to Home Assistant, LXD, usbip, and my particular Z-Wave USB dongle.

     |
     |
     + Headless Machine / LXD Host (Ubuntu Server 19.04, 192.168.1.11)
     |    + usbip client
     |    + Some other LXD container     (192.168.1.111)
 LAN |    + Home Assistant LXD container (192.168.1.112)
     |
     + Remote Raspberry Pi (Raspbian, 192.168.1.38)
     |    + usbip server    
     |    + Some other Pi activity
     |    + Z-Wave Controller USB Dongle

Our goal is for Home Assistant, running inside an LXD container, to use the Z-Wave Controller plugged into an entirely different machine.

  1. Since this is a persistent service, harden the pi. SSH using keys only, etc.
  2. Set up usbip server on the Pi. Include the systemd service so it restarts and re-binds at reboot.
  3. Set up the usbip client on the host (the HOST, not the container)
  4. If you haven't already, create the container and install homeassistant into the container

The rest is specific to Ubuntu, to LXD, and to the USB Dongle.

The USB dongle is cheap and includes both Z-Wave and Zigbee, often sold under the 'Nortek' brand. When you plug it into a Linux system, it looks like this:

    $ lsusb
        Bus xxx Device xxx: ID 10c4:8a2a Cygnal Integrated Products, Inc. 



When plugged in on the host (or forwarded via usbip), the dongle creates new nodes in /dev:

    host$ ls -l /dev/ | grep USB
        crw-rw---- 1 root dialout 188,   0 Aug 18 10:29 ttyUSB0   // Z-Wave
        crw-rw---- 1 root dialout 188,   1 Aug 18 10:29 ttyUSB1   // Zigbee


These old-style nodes mean that we can NOT use LXD's USB hotplug feature (but there's an alternative). Also, it means that Home Assistant cannot autodetect the dongle's presence (we must manually edit the HA config).

Make the Z-Wave node accessible to the container while the container is stopped, or restart the container afterward. Without a restart, the container won't pick up the change. I've seen promises that it should be hot-pluggable. Maybe it is...but I needed to restart the container after this command. The command is very similar to the USB hotplug command, but uses 'unix-char' instead.

    host$ lxc config device add home-assistant zwave unix-char path=/dev/ttyUSB0
        Device zwave added to home-assistant
            // home-assistant is the name of the LXD container
            // zwave          is the name of this new config. We could name it anything we want
            // unix-char      is the method of attaching we are using (instead of 'usb')
            // path           is the path on the HOST

    host$ lxc restart home-assistant      // Restart the container


Now we move into a shell prompt in the CONTAINER (not the host). My personal preference, since I'm used to VMs, is to treat the container like a VM. It has (unnecessary) ssh access (key-only, of course), and non-root users to admin and to run the 'hass' application. It also has the (unnecessary) Python venv. All of that bloat is a combination of preference and of following install documentation that simply did not expect that we might be able to run as root here. Seems like a whole new blog post. The upshot is that inside the container I have a user prompt ($) and use sudo instead of a root prompt (#). Your mileage may vary.

    container$ ls -l /dev | grep USB
        crw-rw---- 1 root   root    188,   0 Aug 19 17:52 ttyUSB0

Look at the permissions: They have changed. And, as I said before, hass is not running as root within this container. Let's make a one-time change to make the Z-Wave USB dongle readable by hass.

    container$ sudo chown root:dialout /dev/ttyUSB0
        // There should be no output

    container$ ls -la /dev/ | grep USB
        crw-rw---- 1 root   dialout 188,   0 Aug 19 17:53 ttyUSB0

    container$ sudo adduser homeassistant dialout  // OPTIONAL - add the 'homeassistant' user to the correct group
                                                   // This is done early in most Home Assistant install instructions
                                                   // You may have done this already
                                                   // If not, restart the container so it takes effect
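
Group changes like this only take effect at the next login (which, for a service, means a container restart). If you'd rather verify membership from Python than eyeball /etc/group, here's a sketch; the user and group names are just the ones used in this post:

```python
#!/usr/bin/env python3
# Sketch: check whether a user is in a group, counting both the user's
# primary group and supplementary membership.
import grp, pwd

def in_group(user, group):
    g = grp.getgrnam(group)
    return user in g.gr_mem or pwd.getpwnam(user).pw_gid == g.gr_gid

if __name__ == "__main__":
    try:
        print(in_group("homeassistant", "dialout"))  # names from this post
    except KeyError:
        print("user or group not present on this machine")
```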


The chown seems to NOT persist across a reboot, so let's add a line to the systemd service so the chown occurs every time the container comes up.

    container$ sudo nano /etc/systemd/system/home-assistant@homeassistant.service

        // Add to the [Service] section
        ExecStartPre=/bin/chown root:dialout /dev/ttyUSB0

    container$ sudo systemctl daemon-reload


Edit Home Assistant's config file, so hass knows where to find the Z-Wave node.

    me@container$ sudo -u homeassistant -H -s
    homeassistant@container$ nano /home/homeassistant/.homeassistant/configuration.yaml

        zwave:
          usb_path: /dev/ttyUSB0

    homeassistant@container$ exit


Finally, Debian-based systems must install one additional deb package to support Z-Wave.

    container$ sudo apt install libudev-dev


Restart Home Assistant (if it's running) to pick up the new config. Go into the web page, and try adding the Z-Wave integration.


    container$ sudo systemctl restart home-assistant@homeassistant.service

HomeAssistant in an LXD container

It's time to migrate my Home Assistant from its experimental home on a Raspberry Pi to an LXD container on my Ubuntu server.

This has a lot of advantages, none of which are likely of the slightest interest to you. However, it also has one big problem: My Z-Wave USB dongle, currently plugged into the Pi, will need a new solution.

This blog post is about setting up Home Assistant in an LXD container. A different blog post will detail how to let the container see the USB dongle across the network.

Preliminaries

First, the server needs to have LXD installed, and we need to create a container for Home Assistant.

In this case, I created a container called "homeassistant." It has a consistent IP address assigned by the LAN router (aa.bb.cc.dd), it has a user ("me") with sudo permission, and that user can ssh into the container. To the network it looks like a separate machine. To me, it behaves like a VM. To the host server, it acts like an unprivileged container.

Installing

First we install the python dependencies:

    me@homeassistant:~$ sudo apt-get update
    me@homeassistant:~$ sudo apt-get upgrade
    me@homeassistant:~$ sudo apt-get install python3 python3-venv python3-pip libffi-dev libssl-dev

Add a user named "homeassistant" to run the application. We need to add me to the new "homeassistant" group, so I can edit the config files.

    me@homeassistant:~$ sudo useradd -rm homeassistant -G dialout    // Create the homeassistant user and group
    me@homeassistant:~$ sudo adduser me homeassistant                // Add me to the homeassistant group
    me@homeassistant:~$ newgrp homeassistant                         // Add group to current session; no need to logout

Create the homeassistant directory in /srv, set the ownership, and cd into the dir

    me@homeassistant:~$ cd /srv
    me@homeassistant:~$ sudo mkdir homeassistant
    me@homeassistant:~$ sudo chown homeassistant:homeassistant homeassistant
    me@homeassistant:~$ cd /srv/homeassistant

Switch to homeassistant user, create the venv, install homeassistant:

    me@homeassistant:/srv/homeassistant $ sudo -u homeassistant -H -s
    homeassistant@homeassistant:/srv/homeassistant $ python3 -m venv .
    homeassistant@homeassistant:/srv/homeassistant $ source bin/activate
    (homeassistant) homeassistant@homeassistant:/srv/homeassistant $ python3 -m pip install wheel
    (homeassistant) homeassistant@homeassistant:/srv/homeassistant $ pip3 install homeassistant

First Run and Testing

Start Home Assistant for the first time. This takes a few minutes - let it work:

    (homeassistant) $ hass

Home Assistant should be up and running now. Try to login to the container's webserver (included with homeassistant). Remember how we assigned the container an IP address? (aa.bb.cc.dd) Let's use it now. From another machine on the LAN, try to connect with a web browser: http://aa.bb.cc.dd:8123. If it doesn't work, stop here and start troubleshooting. You can CTRL+C hass to stop it, or you can stop it from inside the web page.

Make Config Files

Once Home Assistant is working, let's change the permissions of the config files so that members of the "homeassistant" group (like me) can edit the files:

    (homeassistant) $ exit                                 // Exit the Venv
    homeassistant@homeassistant:/srv/homeassistant $ exit  // Exit the homeassistant user
    me@homeassistant:/srv/homeassistant $ cd ~             // Return to home dir - optional
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/automations.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/configuration.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/groups.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/scripts.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/secrets.yaml

Let's link Home Assistant to systemd, so hass starts when the container comes up, and hass stops when the container goes down. (Reference):

    me@homeassistant:~$ sudo nano /etc/systemd/system/home-assistant@homeassistant.service

        [Unit]
        Description=Home Assistant
        After=network-online.target

        [Service]
        Type=simple
        User=homeassistant
        ExecStart=/srv/homeassistant/bin/hass -c "/home/homeassistant/.homeassistant"
        # No need for a 'stop' command; systemd will take care of it automatically

        [Install]
        WantedBy=multi-user.target

    me@homeassistant:~$ sudo systemctl --system daemon-reload                       // Load systemd config changes
    me@homeassistant:~$ sudo systemctl enable home-assistant@homeassistant.service  // Or disable
    me@homeassistant:~$ sudo systemctl start home-assistant@homeassistant.service   // Or stop

Finally, a word about updates: Home Assistant updates frequently, and since it's not deb-based, unattended-upgrades cannot see it. However, starting the application will automatically download and install Home Assistant updates. When this occurs, the web page will take a full minute (or three) before appearing. Be Patient!

    me@homeassistant:~$ sudo systemctl restart home-assistant@homeassistant.service  // Updates!

Saturday, August 17, 2019

USBIP into an LXD container

In a previous post, I used USBIP to forward GPS data from A to B. 'A' was a USB GPS dongle plugged into a Raspberry Pi (Raspbian). 'B' was my laptop.

Now let's take it another step. Let's move 'B' to an LXD container sitting on a headless Ubuntu 19.04 server. No other changes: Same GPS data, same use of USBIP. 'A' is the same USB GPS dongle, the same Raspberry Pi, and the same Raspbian.

Setting up usbip on the server ('B') is identical to setting it up on my laptop. Recall that this particular dongle creates a /dev/ttyUSB_X device upon insertion, and it's the same on the Pi, the laptop, and the server.

    me@server:~$ lsusb
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 006: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    me@server:~$ ls -l /dev/ttyUSB0
        crw-rw---- 1 root dialout 188, 0 Aug 17 21:13 /dev/ttyUSB0

LXD has a USB Hotplug feature that works for many, but not all, USB devices, connecting USB devices on the host to the container. Devices that create a custom entry in /dev (like /dev/ttyUSB_X) generally cannot use the USB Hotplug...but CAN instead use 'unix-char' forwarding, which seems NOT to be hotpluggable.

Here's that LXD magic at work. In this case, I'm using a container called 'ha-test2', and let's simply name the dongle 'gps'. Do this while the container is stopped, or restart the container afterward.

    me@server:~$ lxc config device add ha-test2 gps unix-char path=/dev/ttyUSB0
        Device gps added to ha-test2

Now we start the container, and then jump into a shell inside. We see that /dev/ttyUSB0 has indeed been forwarded, and we test to ensure data is flowing -- that we can read from /dev/ttyUSB0.

    me@server:~$ lxc start ha-test2
    me@server:~$ lxc shell ha-test2
        mesg: ttyname failed: No such device

        root@ha-test2:~# ls -l /dev/ | grep tty
            crw-rw-rw- 1 nobody nogroup   5,   0 Aug 18 02:11 tty
            crw-rw---- 1 root   root    188,   0 Aug 18 02:25 ttyUSB0

        root@ha-test2:~# apt install gpsd-clients   // Get the gpsmon application
        root@ha-test2:~# gpsmon /dev/ttyUSB0


Making it permanent

It is permanent already. The 'lxc config' command will edit the config of the container, which is persistent across a reboot.


Cleaning up


There are two options for cleanup of the container.
  • You can simply throw it away (it's a container)
  • Alternately,
     root@ha-test2:~# apt autoremove gpsd-clients

On the Server:

    me@server:~$ lxc config device remove ha-test2 gps
    me@server:~$ sudo apt autoremove gpsd-clients    // If you installed gpsmon to test connectivity

Also remember to detach USBIP, and uninstall usbip packages.

Monday, August 12, 2019

Experimenting with USB devices across the LAN with USBIP

USBIP is a Linux tool for accessing USB devices across a network. I'm trying it out.


At one end of the room, I have a Raspberry Pi with
  • A Philips USB Webcam
  • A no-name USB GPS dongle
  • A Nortek USB Z-Wave/Zigbee network controller dongle
At the other end of the room is my laptop.

Before starting anything, I plugged all three into another system to ensure that they worked properly.


Raspberry Pi Server Setup

The Pi is running stock Raspbian Buster, with the default "pi" user replaced by a new user ("me") with proper ssh keys.

Before we start, here's what the 'lsusb' looks like on the Pi

    me@pi:~ $ lsusb
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
        Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Now we plug in the three USB devices and see what changed

    me@pi:~ $ lsusb
        Bus 001 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc. 
        Bus 001 Device 005: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 001 Device 006: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
        Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And here are the new devices created or modified

    me@pi:~ $ ls -l /dev | grep 12    // 12 is today's date
        drwxr-xr-x 4 root root          80 Aug 12 00:46 serial
        lrwxrwxrwx 1 root root           7 Aug 12 00:46 serial0 -> ttyAMA0
        drwxr-xr-x 4 root root         220 Aug 12 00:47 snd
        crw--w---- 1 root tty     204,  64 Aug 12 00:46 ttyAMA0
        crw-rw---- 1 root dialout 188,   0 Aug 12 00:46 ttyUSB0
        drwxr-xr-x 4 root root          80 Aug 12 00:47 v4l
        crw-rw---- 1 root video    81,   3 Aug 12 00:47 video0

Looks like...
  • /dev/ttyUSB0 is the GPS stick
  • /dev/video0 is the webcam
  • /dev/ttyAMA0 is actually the Pi's own built-in UART (it shows in the grep only because of the date), not one of our dongles; the Z-Wave controller didn't create a tty node on the Pi

Installing USBIP onto Raspbian Buster is easy. However, it is DIFFERENT from stock Debian or Ubuntu. This step is Raspbian-only:

    me@pi:~$ sudo apt install usbip

Now load the kernel module. The SERVER always uses the kernel module 'usbip_host'.

    me@pi:~$ sudo modprobe usbip_host     // does not persist across reboot

List the devices that usbip can see. Note each Bus ID - we'll need those later.

    me@pi:~ $ usbip list --local
 - busid 1-1.1 (0424:ec00)
   Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)

 - busid 1-1.2 (0471:0329)
   Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)

 - busid 1-1.4 (067b:2303)
   Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)

 - busid 1-1.5 (10c4:8a2a)
   Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)

  • We can ignore the Ethernet adapter
  • The Webcam is at 1-1.2
  • The GPS dongle is at 1-1.4
  • The Z-Wave Controller is at 1-1.5

Bind the devices.

    me@pi:~$ sudo usbip bind --busid=1-1.2        // does not persist across reboot
        usbip: info: bind device on busid 1-1.2: complete

    me@pi:~$ sudo usbip bind --busid=1-1.4        // does not persist across reboot
        usbip: info: bind device on busid 1-1.4: complete

    me@pi:~$ sudo usbip bind --busid=1-1.5        // does not persist across reboot
        usbip: info: bind device on busid 1-1.5: complete

The USB devices will now appear to any client on the network just as though they were plugged in locally.

If you want to STOP serving a USB device:

    me@pi:~$ sudo usbip unbind --busid=1-1.2

The server (usbipd) process may or may not actually be running, serving on port 3240. Let's check:

    me@pi:~ $ ps -e | grep usbipd
        18966 ?        00:00:00 usbipd

    me@pi:~ $ sudo netstat -tulpn | grep 3240
        tcp        0      0 0.0.0.0:3240            0.0.0.0:*               LISTEN      18966/usbipd        
        tcp6       0      0 :::3240                 :::*                    LISTEN      18966/usbipd

We know that usbipd is active and listening. If not, start usbipd with:

    me@pi:~ $ sudo usbipd -D

You can run it more than once; only one daemon will start. The usbipd server does NOT need to be running to bind/unbind USB devices - you can start the server and bind/unbind in any order you wish. If you need to debug a connection, omit the -D (daemonize; fork into the background) so you can see the debug messages. See 'man usbipd' for the startup options to change port, IPv4, IPv6, etc.
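Rather than parsing netstat output, a quick TCP probe also answers "is usbipd listening?". A sketch -- the address is a placeholder for the Pi's IP:

```python
#!/usr/bin/env python3
# Sketch: probe TCP port 3240 (usbipd's default) to see whether the
# daemon is accepting connections. Replace the address with the Pi's.
import socket

def is_listening(host, port=3240, timeout=2.0):
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, timed out, or unresolvable
        return False

if __name__ == "__main__":
    print(is_listening("aa.bb.cc.dd"))  # placeholder address
```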


Laptop Client Setup

Let's look at the USB devices on my laptop before starting:

    me@laptop:~$ lsusb
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

In stock Debian (not Raspbian) and Ubuntu, usbip is NOT a separate package. It's included in the 'linux-tools-generic' package, which many folks already have installed...

    me@laptop:~$ apt list linux-tools-generic
        Listing... Done
        linux-tools-generic/disco-updates 5.0.0.23.24 amd64   // Doesn't say "[installed]"

...but apparently I don't. Let's install it.

    me@laptop:~$ sudo apt install linux-tools-generic

Now load the kernel module. The CLIENT always uses the kernel module 'vhci-hcd'.

    me@laptop:~$ sudo modprobe vhci-hcd     // does not persist across reboot

List the available USB devices on the Pi server (IP addr aa.bb.cc.dd). Those Bus IDs should look familiar.

    me@laptop:~$ usbip list -r aa.bb.cc.dd                        // List available on the IP address
        usbip: error: failed to open /usr/share/hwdata//usb.ids   // Ignore this error
        Exportable USB devices
        ======================
         - aa.bb.cc.dd
              1-1.5: unknown vendor : unknown product (10c4:8a2a)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5
                   : (Defined at Interface level) (00/00/00)
                   :  0 - unknown class / unknown subclass / unknown protocol (ff/00/00)
                   :  1 - unknown class / unknown subclass / unknown protocol (ff/00/00)


              1-1.4: unknown vendor : unknown product (067b:2303)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4
                   : (Defined at Interface level) (00/00/00)

              1-1.2: unknown vendor : unknown product (0471:0329)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2
                   : (Defined at Interface level) (00/00/00)

Now we attach the three USB devices. This will not persist across a reboot.

    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.2
    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.4
    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.5
    // No feedback upon success

The remote USB devices now show in 'lsusb'

    me@laptop:~$ lsusb
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc. 
        Bus 003 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 003 Device 002: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And we can see that new devices have appeared in /dev. Based upon the order we attached, it's likely that
  • The webcam 1-1.2 is at /dev/video2
  • The GPS dongle 1-1.4 is probably at /dev/ttyUSB0
  • The Z-Wave controller 1-1.5 is at /dev/ttyUSB1
  • The same dongle includes a Zigbee controller, too, at /dev/ttyUSB2
Note that the Z-Wave/Zigbee controller's nodes carry major number 188 (the usbserial driver), even though it never showed a tty node on the Pi. We don't know if that's important or not yet.

    me@laptop:~$ ls -l /dev | grep 12
        drwxr-xr-x  4 root root            80 Aug 12 00:56 serial
        crw-rw----  1 root dialout 188,     0 Aug 12 00:56 ttyUSB0
        crw-rw----  1 root dialout 188,     1 Aug 12 00:56 ttyUSB1
        crw-rw----  1 root dialout 188,     2 Aug 12 00:56 ttyUSB2
        crw-rw----+ 1 root video    81,     2 Aug 12 00:56 video2
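The major number is just the driver's ID: 188 belongs to the usbserial driver, while the Pi's built-in ttyAMA0 uses 204. If you ever need that pair in a script, a sketch (/dev/ttyUSB0 is an assumed path):

```python
#!/usr/bin/env python3
# Sketch: read a device node's (major, minor) -- the "188, 0" pair
# that ls -l shows for /dev/ttyUSB0.
import os
import stat

def device_numbers(path):
    """Return (major, minor) for a character or block device node."""
    st = os.stat(path)
    if not (stat.S_ISCHR(st.st_mode) or stat.S_ISBLK(st.st_mode)):
        raise ValueError(path + " is not a device node")
    return os.major(st.st_rdev), os.minor(st.st_rdev)

if __name__ == "__main__":
    # /dev/null is (1, 3) on Linux; swap in /dev/ttyUSB0 on the host
    print(device_numbers("/dev/null"))
```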


Testing Results

I tested the GPS using the 'gpsmon' application, included with the 'gpsd-clients' package. We don't actually need gpsd; we can connect gpsmon directly to the remote USB device.

    me@laptop:~$ gpsmon /dev/ttyUSB0
        gpsmon:ERROR: SER: device open of /dev/ttyUSB0 failed: Permission denied - retrying read-only
        gpsmon:ERROR: SER: read-only device open of /dev/ttyUSB0 failed: Permission denied

Aha, a permission issue, not a usbip failure!
Add myself to the 'dialout' group, and then it works. A second test across a VPN connection, from a remote location, was also successful.

    me@laptop:~$ ls -la /dev/ttyUSB0
        crw-rw---- 1 root dialout 188, 0 Aug 11 21:41 /dev/ttyUSB0    // 'dialout' group

    me@laptop:~$ sudo adduser me dialout
        Adding user `me' to group `dialout' ...
        Adding user me to group dialout
        Done.

    me@laptop:~$ newgrp dialout    // Prevents need to logout/login for new group to take effect

    me@laptop:~$ gpsmon /dev/ttyUSB0
    // Success!

The webcam is immediately recognized by both Cheese and VLC, and plays across the LAN with only a noticeable half-second lag. A second test, across a VPN connection from a remote location, had the USB device recognized, but not enough data arrived in a timely order for the applications to show the video.

There were a few hiccups along the way. The --debug flag helps a lot to track down the problems:
  • Client failed to connect with "system error" - turns out usbipd was not running on the server.
  • Client could see the list, but failed to attach with "attach failed" - needed to reboot the server (not sure why)
  • An active usbip connection prevents my laptop from sleeping properly
  • The Z-wave controller requires Home Assistant or equivalent to run, a bit more than I want to install onto the testing laptop. Likely to have permission issues, too.
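Given the first two hiccups above, a quick check that the server daemon is actually up can save a lot of --debug reading. A minimal sketch (the 'usbipd' process name is the daemon used throughout this post; run this on the SERVER):

```shell
# Is usbipd running at all? pgrep exits non-zero when no process matches.
if pgrep -x usbipd >/dev/null; then
    status='usbipd is running'
else
    status='usbipd is not running'
fi
echo "$status"
```

If it is not running, start it (sudo usbipd -D) before trying to attach from the client.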


Cleaning up

To tell a CLIENT to cease using a remote USB device (virtual unplug), you need to know the usbip port number. Well, not really: the attachments are not persistent, so we could simply reboot instead.

    me@laptop:~$ usbip port   // Not using sudo - errors, but still port numbers
        Imported USB devices
        ====================
        libusbip: error: fopen
        libusbip: error: read_record
        Port 00:  at Full Speed(12Mbps)
               Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
               5-1 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/007
        libusbip: error: fopen
        libusbip: error: read_record
        Port 01:  at Full Speed(12Mbps)
               Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
               5-2 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/005
        libusbip: error: fopen
        libusbip: error: read_record
        Port 02:  at Full Speed(12Mbps)
               Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
               5-3 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/006

    me@laptop:~$ sudo usbip port    // Using sudo, no errors and same port numbers
        Imported USB devices
        ====================
        Port 00: <port in use> at Full Speed(12Mbps)
               Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
               5-1 -> usbip://aa.bb.cc.dd:3240/1-1.2
                   -> remote bus/dev 001/007
        Port 01: <port in use> at Full Speed(12Mbps)
               Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
               5-2 -> usbip://aa.bb.cc.dd:3240/1-1.4
                   -> remote bus/dev 001/005
        Port 02: <port in use> at Full Speed(12Mbps)
               Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
               5-3 -> usbip://aa.bb.cc.dd:3240/1-1.5
                   -> remote bus/dev 001/006
 
    me@laptop:~$ sudo usbip detach --port 00
        usbip: info: Port 0 is now detached!

    me@laptop:~$ sudo usbip detach --port 01
        usbip: info: Port 1 is now detached!

    me@laptop:~$ sudo usbip detach --port 02
        usbip: info: Port 2 is now detached!

    me@laptop:~$ lsusb              // The remote USB devices are gone now
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    me@laptop:~$ sudo modprobe -r vhci-hcd    // Remove the kernel module
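The three detach commands above can be scripted by parsing the port numbers out of 'usbip port'. A hedged sketch, using a canned sample of the output shown earlier so the parsing is visible without hardware; for real use, swap the sample for the live command and the echo for the real detach:

```shell
# Sample of the 'sudo usbip port' output from the session above.
sample='Port 00: <port in use> at Full Speed(12Mbps)
Port 01: <port in use> at Full Speed(12Mbps)
Port 02: <port in use> at Full Speed(12Mbps)'

# Pull the port number off every '<port in use>' line.
ports=$(printf '%s\n' "$sample" | grep '<port in use>' | sed -E 's/^Port ([0-9]+):.*/\1/')

# For real use, replace echo with: sudo usbip detach --port "$p"
for p in $ports; do
    echo "would detach port $p"
done
```

The same grep/sed pair appears again later in the client systemd unit's ExecStop line.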

The only two persistent changes we made on the CLIENT were adding myself to the 'dialout' group and installing the 'linux-tools-generic' package, so let's remove them. If you were ALREADY in the 'dialout' group, or had the package installed for other reasons, then obviously don't remove them. It's not the system's responsibility to keep track of why you have certain permissions or packages -- that's the human's job. After this step, my CLIENT is back to stock Ubuntu.

    me@laptop:~$ sudo deluser me dialout                  // Takes effect after logout
    me@laptop:~$ sudo apt autoremove linux-tools-generic  // Immediate

Telling a SERVER to stop sharing a USB device (virtual unplug) and shutting down the daemon is much easier. Of course, this is also a Pi, and we didn't make any changes permanent, so it might be easier still to simply reboot it.

    me@pi:~$ usbip list -l
         - busid 1-1.1 (0424:ec00)
           Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)

         - busid 1-1.2 (0471:0329)
           Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)

         - busid 1-1.4 (067b:2303)
           Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)

         - busid 1-1.5 (10c4:8a2a)
           Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)

    me@pi:~$ sudo usbip unbind --busid=1-1.2
        usbip: info: unbind device on busid 1-1.2: complete
    me@pi:~$ sudo usbip unbind --busid=1-1.4
        usbip: info: unbind device on busid 1-1.4: complete
    me@pi:~$ sudo usbip unbind --busid=1-1.5
        usbip: info: unbind device on busid 1-1.5: complete

    me@pi:~$ sudo pkill usbipd

The only persistent change we made on the Pi is installing the 'usbip' package. Once removed, we're back to stock Raspbian.

    me@pi:~$ sudo apt autoremove usbip


Making it permanent

There are two additional steps to making a permanent server, and essentially the same two steps to make a permanent client. This means a USBIP server that begins serving automatically upon boot, and a client that automatically connects to the server upon boot.

Add the kernel modules to /etc/modules so that they are loaded automatically at boot. To undo this later, on either the client or the server, just delete the line from /etc/modules again. You don't need to use 'nano' - use any text editor you wish, obviously.

    me@pi:~$ sudo nano /etc/modules     // usbipd SERVER

        usbip_host

    me@laptop:~$ sudo nano /etc/modules     // usbip CLIENT

        vhci-hcd

    // Another way to add the USBIP kernel modules to /etc/modules on the SERVER
    me@pi:~$ sudo -s                            // "sudo echo" won't work
    me@pi:~# echo 'usbip_host' >> /etc/modules
    me@pi:~# exit

    // Another way to add the USBIP kernel modules to /etc/modules on the CLIENT
    me@laptop:~$ sudo -s                        // "sudo echo" won't work
    me@laptop:~# echo 'vhci-hcd' >> /etc/modules
    me@laptop:~# exit
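The plain 'echo >>' approach appends blindly, so running it twice leaves a duplicate line. A hedged sketch of an idempotent version, demonstrated against a temp file rather than the real /etc/modules:

```shell
# Demo against a temp file; for real use, point MODFILE at /etc/modules (as root).
MODFILE=$(mktemp)
echo 'usbip_host' >> "$MODFILE"

# Append 'vhci-hcd' only if that exact line is not already present.
grep -qxF 'vhci-hcd' "$MODFILE" || echo 'vhci-hcd' >> "$MODFILE"
grep -qxF 'vhci-hcd' "$MODFILE" || echo 'vhci-hcd' >> "$MODFILE"   # second run is a no-op

count=$(grep -cxF 'vhci-hcd' "$MODFILE")
echo "the module line appears $count time(s)"
rm "$MODFILE"
```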

Add a systemd job to the SERVER to automatically bind the USB devices. You can use systemd to start, stop, and restart the server conveniently, and to start serving automatically at boot.

    me@pi:~$ sudo nano /lib/systemd/system/usbipd.service

        [Unit]
        Description=usbip host daemon
        After=network.target

        [Service]
        Type=forking
        ExecStart=/usr/sbin/usbipd -D
        ExecStartPost=/bin/sh -c "/usr/sbin/usbip bind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"
        ExecStop=/bin/sh -c "/usr/sbin/usbip unbind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"

        [Install]
        WantedBy=multi-user.target
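The ExecStartPost line above leans on the parseable output of 'usbip list -p -l', which prints one 'busid=...#usbid=...#' line per device. The pipeline can be sanity-checked against a canned sample before trusting it in the unit (the busid and usbid values here are the Cygnal device from the sessions above):

```shell
# One device line from 'usbip list -p -l', in its parseable busid=...#usbid=...# format.
sample='busid=1-1.5#usbid=10c4:8a2a#'

# The same grep/cut as the unit file: keep our device's line, take the busid= field.
arg=$(printf '%s\n' "$sample" | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)
echo "usbip bind --$arg"
```

Matching on the usbid rather than the busid means the unit keeps working even if the device moves to a different physical port.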

To start the new SERVER:
    me@pi:~$ sudo pkill usbipd                          // End the current server daemon (if any)
    me@pi:~$ sudo systemctl --system daemon-reload      // Reload system jobs because one changed
    me@pi:~$ sudo systemctl enable usbipd.service       // Set to run at startup
    me@pi:~$ sudo systemctl start usbipd.service        // Run now

Add a systemd job to the CLIENT to automatically attach the remote USB devices at startup. You can use systemd to unplug conveniently before sleeping, and to reset the connection if needed. Note: On the "ExecStart" line, substitute your server's IP address for aa.bb.cc.dd in two places.

    me@laptop:~$ sudo nano /lib/systemd/system/usbip.service

        [Unit]
        Description=usbip client
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/sh -c "/usr/bin/usbip attach -r aa.bb.cc.dd -b $(/usr/bin/usbip list -r aa.bb.cc.dd | grep '10c4:8a2a' | cut -d: -f1)"
        ExecStop=/bin/sh -c "/usr/bin/usbip detach --port=$(/usr/bin/usbip port | grep '<port in use>' | sed -E 's/^Port ([0-9][0-9]).*/\1/')"

        [Install]
        WantedBy=multi-user.target

To start the new CLIENT attachment(s):

    me@laptop:~$ sudo systemctl --system daemon-reload      // Reload system jobs because one changed
    me@laptop:~$ sudo systemctl enable usbip.service       // Set to run at startup
    me@laptop:~$ sudo systemctl start usbip.service        // Run now

Saturday, August 10, 2019

Experiment: Home Assistant in an LXD container without a venv

Update: August 2020 (one year later)

Here's a slightly different way of doing it entirely from the host. Tested with Home Assistant version 0.114.

lxc launch -p lanprofile ubuntu:focal ha-test

# Update apt so we can install pip
cat <<EOF > /tmp/container-sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
deb http://security.ubuntu.com/ubuntu focal-security main universe
EOF

lxc file push /tmp/container-sources.list ha-test/etc/apt/sources.list
lxc exec ha-test -- apt update
lxc exec ha-test -- apt upgrade

# Here's the meat: Installing pip3, then using pip3 to install HA and dependencies.
lxc exec ha-test -- apt install python3-pip
lxc exec ha-test -- pip3 install aiohttp_cors defusedxml emoji hass_nabucasa home-assistant-frontend homeassistant mutagen netdisco sqlalchemy zeroconf

# Example of fixing a version error message that occurs during pip install:
# ERROR: homeassistant 0.114.2 has requirement cryptography==2.9.2, but you'll have cryptography 2.8 which is incompatible.
lxc exec ha-test -- pip3 install --upgrade cryptography==2.9.2

# Can't start the web browser without knowing the container's IP address. 
lxc list | grep ha-test
   | ha-test       | RUNNING | 192.168.2.248 (eth0) |      | CONTAINER | 0         |

# Run Hass
lxc exec ha-test -- hass
   Unable to find configuration. Creating default one in /root/.homeassistant

# Web browser: http://192.168.2.248:8123....and there it is!


Home Assistant usually runs in a Python 3 virtual environment (venv). The developers wisely chose Python 3 because it has all the libraries they need. The developers wisely chose venv to create an effective single, predictable platform upon which Home Assistant can run. Users like it because just a couple of extra shell incantations are the difference between success and cryptic-error hell.

Let's see if I can get HA 0.97 to run on Ubuntu 19.04. In this case, I'm running it in a disposable LXD container so I can just throw it away after the experiment is complete. This experiment turned out to be about 75% successful - Home Assistant installs and runs outside the venv, but logging and sqlalchemy failed to install, so the final product had some limitations.


Setup

First, let's create the LXD container. Step 1. Step 2. I use a networking profile ("lanprofile") that uses DHCP to request an IP address from my router instead of the local server. I'm using an Ubuntu 19.04 ("Disco") image for the container. And I'm calling the container "ha-test2," second in a line of Home Assistant test containers.

    me@host:~$ lxc launch -p lanprofile ubuntu:disco ha-test2

After a minute or two, the container is running and has picked up an IP address from the router.

    me@host:~$ lxc list
        +----------+---------+----------------------+-----
        |   NAME   |  STATE  |        IPV4          |
        +----------+---------+----------------------+-----
        | ha-test2 | RUNNING | 192.168.1.252 (eth0) |
        +----------+---------+----------------------+-----

Let's enter the container. Note the change to a root prompt within the container. This is an unprivileged container (LXD's default), so root within the container is NOT root on the rest of the system. Note also the mysterious "ttyname failed: No such device" error, caused by a very minor bug that does not affect our use of the container in any way.

    me@host:~$ lxc shell ha-test2
        mesg: ttyname failed: No such device
        root@ha-test2:~#

OPTIONAL: Limit the Ubuntu sources. We don't need -restricted or -multiverse or -proposed or -backports, etc. I replaced the entire file with the following three lines. Proper format is important!

    root@ha-test2:~# nano /etc/apt/sources.list

        deb http://archive.ubuntu.com/ubuntu disco main universe
        deb http://archive.ubuntu.com/ubuntu disco-updates main universe
        deb http://security.ubuntu.com/ubuntu disco-security main universe

OPTIONAL: Expand Unattended Upgrades to handle 100% of the limited sources. I replaced the entire file with the following five lines.

    root@ha-test2:~# nano /etc/apt/apt.conf.d/50unattended-upgrades 

        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}";
            "${distro_id}:${distro_codename}-security";
            "${distro_id}:${distro_codename}-updates";
        };

Since this is the first run of the package manager...

    root@ha-test2:~# apt update
    root@ha-test2:~# apt upgrade

Home Assistant uses Python 3's pip, not debs. So we install pip.

    root@ha-test2:~# apt install python3-pip


First Try - Learning Curve

Now we can use pip to install Home Assistant. This command runs for a few minutes and produces a lot of output as it downloads many dependencies. Some of that output looks, at first glance, like errors -- read it carefully; it is mostly complaints about uninstalling previous versions, which don't exist on a fresh container.

    root@ha-test2:~# pip3 install homeassistant

The first run of 'hass' (the Home Assistant program name) is where we start to encounter errors that need to be investigated and fixed. When the system ground to a halt for several minutes, I used CTRL+C to end the process and return to a shell prompt.

    root@ha-test2:~# hass

        // Lots of success...but then:

        2019-08-09 22:35:57 INFO (MainThread) [homeassistant.bootstrap] Setting up {'system_log'}
        2019-08-09 22:35:57 INFO (SyncWorker_2) [homeassistant.util.package] Attempting install of aiohttp_cors==0.7.0
        2019-08-09 22:36:01 INFO (MainThread) [homeassistant.setup] Setting up http
        2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Error during setup of component http
        Traceback (most recent call last):
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
            hass, processed_config
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/__init__.py", line 178, in async_setup
            ssl_profile=ssl_profile,
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/__init__.py", line 240, in __init__
            setup_cors(app, cors_origins)
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/cors.py", line 22, in setup_cors
            import aiohttp_cors
        ModuleNotFoundError: No module named 'aiohttp_cors'
        2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of system_log. Setup failed for dependencies: http
        2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Setup failed for system_log: Could not set up all dependencies.
        2019-08-09 22:36:01 INFO (SyncWorker_4) [homeassistant.util.package] Attempting install of sqlalchemy==1.3.5
        2019-08-09 22:36:11 INFO (MainThread) [homeassistant.setup] Setting up recorder
        Exception in thread Recorder:
        Traceback (most recent call last):
          File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
            self.run()
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/recorder/__init__.py", line 211, in run
            from .models import States, Events
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/recorder/models.py", line 6, in 
            from sqlalchemy import (
        ModuleNotFoundError: No module named 'sqlalchemy'

        2019-08-09 22:36:21 WARNING (MainThread) [homeassistant.setup] Setup of recorder is taking over 10 seconds.

        // Thread hangs here. Use CTRL+C to abort back to a shell prompt

There are two errors there. Both are simply bugs in Home Assistant's list of dependencies: the developers neglected to declare "aiohttp_cors" and "sqlalchemy". Let's uninstall all the pip packages and dependencies and start over. The dependencies are listed by the 'pip3 show' command. Remember to delete pip from the list of removals, and to add homeassistant. The pip3 uninstall command asks a lot of questions about deleting files and directories -- as long as the offered removals are in /usr/local, it won't break anything.

    root@ha-test2:~# pip3 show homeassistant
        Name: homeassistant
        Version: 0.97.1
        Summary: Open-source home automation platform running on Python 3.
        Home-page: https://home-assistant.io/
        Author: The Home Assistant Authors
        Author-email: hello@home-assistant.io
        License: Apache License 2.0
        Location: /usr/local/lib/python3.7/dist-packages
        Requires: pyyaml, async-timeout, bcrypt, voluptuous, voluptuous-serialize, importlib-metadata, ruamel.yaml, jinja2, cryptography, python-slugify, pip, PyJWT, requests, aiohttp, certifi, attrs, astral, pytz
        Required-by: 

    root@ha-test2:~# pip3 uninstall homeassistant pyyaml async-timeout bcrypt voluptuous voluptuous-serialize importlib-metadata ruamel.yaml jinja2 cryptography python-slugify PyJWT requests aiohttp certifi attrs astral pytz


Second Try - Getting closer

For the second try, let's add those two missing dependencies. This time, logging and sqlalchemy started successfully, and we progressed to the next errors: the web server started, but the Home Assistant front end hosted on it failed. The .homeassistant config directory was created and populated.

    root@ha-test2:~# pip3 install homeassistant aiohttp_cors sqlalchemy

        [lots of installing]

    root@ha-test2:~# hass

        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setting up onboarding
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain config took 0.9 seconds.
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setting up automation
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain automation took 0.0 seconds.
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain onboarding took 0.0 seconds.
        2019-08-09 23:56:20 ERROR (MainThread) [homeassistant.config] Unable to import ssdp: No module named 'netdisco'
        2019-08-09 23:56:20 ERROR (MainThread) [homeassistant.setup] Setup failed for ssdp: Invalid config.
        2019-08-09 23:56:20 INFO (SyncWorker_3) [homeassistant.util.package] Attempting install of distro==1.4.0
        2019-08-09 23:56:24 INFO (MainThread) [homeassistant.setup] Setting up updater
        2019-08-09 23:56:24 INFO (MainThread) [homeassistant.setup] Setup of domain updater took 0.0 seconds.
        2019-08-09 23:56:24 INFO (SyncWorker_1) [homeassistant.util.package] Attempting install of mutagen==1.42.0
        2019-08-09 23:56:29 INFO (SyncWorker_2) [homeassistant.loader] Loaded google_translate from homeassistant.components.google_translate
        2019-08-09 23:56:29 INFO (SyncWorker_3) [homeassistant.util.package] Attempting install of hass-nabucasa==0.16
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up cloud
        2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.setup] Error during setup of component cloud
        Traceback (most recent call last):
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
            hass, processed_config
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/cloud/__init__.py", line 167, in async_setup
            from hass_nabucasa import Cloud
        ModuleNotFoundError: No module named 'hass_nabucasa'
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up mobile_app
        2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.config] Unable to import zeroconf: No module named 'zeroconf'
        2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.setup] Setup failed for zeroconf: Invalid config.
        2019-08-09 23:56:50 INFO (SyncWorker_2) [homeassistant.util.package] Attempting install of home-assistant-frontend==20190805.0
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setup of domain mobile_app took 0.0 seconds.
        2019-08-09 23:56:50 INFO (SyncWorker_3) [homeassistant.loader] Loaded notify from homeassistant.components.notify
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up notify
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setup of domain notify took 0.0 seconds.
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.components.notify] Setting up notify.mobile_app
        2019-08-09 23:57:24 INFO (MainThread) [homeassistant.setup] Setting up frontend
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Error during setup of component frontend
        Traceback (most recent call last):
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
            hass, processed_config
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/frontend/__init__.py", line 267, in async_setup
            root_path = _frontend_root(repo_path)
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/frontend/__init__.py", line 244, in _frontend_root
            import hass_frontend
        ModuleNotFoundError: No module named 'hass_frontend'
        2019-08-09 23:57:24 INFO (SyncWorker_0) [homeassistant.util.package] Attempting install of gTTS-token==1.1.3
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of logbook. Setup failed for dependencies: frontend
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for logbook: Could not set up all dependencies.
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of map. Setup failed for dependencies: frontend
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for map: Could not set up all dependencies.
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of default_config. Setup failed for dependencies: cloud, frontend, logbook, map, ssdp, zeroconf
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for default_config: Could not set up all dependencies.
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.setup] Setting up tts
        2019-08-09 23:57:30 INFO (SyncWorker_1) [homeassistant.components.tts] Create cache dir /root/.homeassistant/tts.
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.setup] Setup of domain tts took 0.0 seconds.
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.bootstrap] Home Assistant initialized in 87.48s
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.core] Starting Home Assistant
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.core] Timer:starting

We have two missing dependencies (netdisco and zeroconf), and a bunch of missing internal homeassistant functions. This looks a bit like a race condition - the setup script expects modules that aren't quite ready yet. This also explains why many of these errors do not appear during a subsequent run of hass.

Let's delete and try again with those two additional dependencies....
    root@ha-test2:~# pip3 uninstall homeassistant pyyaml async-timeout bcrypt voluptuous voluptuous-serialize importlib-metadata ruamel.yaml jinja2 cryptography python-slugify PyJWT requests aiohttp certifi attrs astral pytz aiohttp_cors sqlalchemy
    root@ha-test2:~# rm -r .homeassistant/


Third Try - Close enough to call it success

For the third try, let's add the two further dependencies (netdisco and zeroconf) that the second try showed were missing.

    root@ha-test2:~# pip3 install homeassistant aiohttp_cors sqlalchemy netdisco zeroconf

        [lots of installing]

    root@ha-test2:~# hass

        // No missing dependencies
        // Same setup errors

On the first run of hass, the dependency errors are gone, but the setup errors remain and the website is still unavailable. On the second run of hass, there are no errors at all, and the website and all features work. The system is ready for systemd integration to bring hass up and down with the system.
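That systemd integration is outside the scope of this experiment, but a minimal unit in the style of the usbip ones above might look like the following. This is a sketch under assumptions: the unit name and path are hypothetical, pip put 'hass' in /usr/local/bin, and running as root inside the unprivileged container (as we did interactively) is acceptable:

```ini
# /lib/systemd/system/hass.service  (hypothetical path and unit name)
[Unit]
Description=Home Assistant (non-venv experiment)
After=network.target

[Service]
ExecStart=/usr/local/bin/hass
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

As with the other units, follow up with a daemon-reload, enable, and start.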


Substituting Debs for Pips

Many of those pip dependencies are also available in Debian and Ubuntu. Let's try adding the debs, one by one, and see if we can reduce the number of pip dependencies. This is a separate experiment, obviously.

The process here is to delete homeassistant, its pip dependencies, and its config files, then replace pips with debs. We want to see whether homeassistant pulls in the corresponding pip anyway. If it does, we delete that pip and check that homeassistant still installs and initializes properly. Note that this experiment is not persistent - a Home Assistant update (like 0.97 to 0.98) will pull in all the removed pips again.
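Finding the overlap between installed debs and pips can be automated. A hedged sketch using canned, sorted name lists in place of real 'dpkg -l' (with the python3- prefix stripped) and 'pip3 list' output:

```shell
# Sorted sample name lists standing in for real 'dpkg -l' and 'pip3 list' output.
debs='certifi
cryptography
jinja2
requests'
pips='certifi
jinja2
sqlalchemy'

# comm -12 prints only the lines common to both sorted inputs.
d=$(mktemp); p=$(mktemp)
printf '%s\n' "$debs" > "$d"
printf '%s\n' "$pips" > "$p"
overlap=$(comm -12 "$d" "$p")
rm "$d" "$p"
echo "$overlap"
```

Each name in the overlap is a candidate for the deb-for-pip substitution tried below.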

Several packages are already installed in the default Ubuntu 19.04 image, but are superseded by pips:
  • python3-certifi, python3-cryptography, python3-jinja2, python3-multidict, python3-requests, python3-yarl
Some packages are not available as debs at all. These are all dependencies of homeassistant:
  • attrs, homeassistant, importlib-metadata, PyJWT, pyyaml, zipp
Several packages, once installed, no longer pull in the pip:

    root@ha-test2:~# apt install python3-async-timeout python3-voluptuous-serialize

These packages, even after being installed, continue to pull in the pip anyway:

    root@ha-test2:~# apt install python3-aiohttp python3-aiohttp-cors python3-astral python3-async-timeout python3-bcrypt python3-python-slugify python3-ruamel.yaml python3-tz python3-voluptuous python3-voluptuous-serialize


After installing all those debs, the homeassistant install looks something like this:

    root@ha-test2:~# pip3 install homeassistant
    root@ha-test2:~# pip3 uninstall aiohttp aiohttp_cors astral bcrypt certifi cryptography jinja2 multidict python-slugify pytz requests ruamel.yaml voluptuous yarl
    root@ha-test2:~# hass      // first time - no new install errors
    root@ha-test2:~# hass      // frontend works, no startup errors

Of course, this was an experiment - your mileage may vary. You may encounter problems that I did not. But it IS clearly possible to install Home Assistant into a non-venv environment, clearly possible to install Home Assistant into an LXD container, and clearly possible to more closely integrate Home Assistant into a Debian-based system.

Thursday, August 8, 2019

Creating an LXD container on my Ubuntu 19.04 host

I just finished setting up LXD on my Ubuntu 19.04 server, and I'm ready to create a container.

Installing the service into the container is a separate step - this is just setting up and configuring the container itself.

Creating a disposable container:

Actually, we did this already with our test container:

    me@host:~$ lxc launch -p lanprofile ubuntu:disco test

Let's see if that container is still there:

    me@host:~$ lxc list
        +----------------+---------+---------------------+-----------------------------------------------+------------+-----------+
        |      NAME      |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
        +----------------+---------+---------------------+-----------------------------------------------+------------+-----------+
        | test           | RUNNING | 192.168.1.124 (eth0)| 2615:a000:141f:e267:215:3eef:fe2a:c55d (eth0) | PERSISTENT | 0         |
        +----------------+---------+---------------------+-----------------------------------------------+------------+-----------+

We can enter the container to run commands in its shell. Note that root inside the container is not root on the host (the container is unprivileged). The container comes with a default "ubuntu" user, but since we have root we don't seem to need it.

    me@host:~$ lxc shell test
        mesg: ttyname failed: No such device  // Ignore this message
        root@test:~#                          // Look, a root prompt within the container!
        root@test:~# exit
            logout
    me@host:~$                                // Back to the host

We can stop and then restart containers. No sudo needed; these are unprivileged containers:

    me@host:~$ lxc stop test
    me@host:~$ lxc start test

And when we are done we can destroy the container:

    me@host:~$ lxc stop test
    me@host:~$ lxc delete test

Creating a long-term container:

Now I want to create a container for a long-term service, which means adding some security: non-root users, independent ssh access, and automatic package upgrades. This container can function like a lightweight VM, though with rather less overhead.

    me@host:~$ lxc launch -p lanprofile ubuntu:disco test_2

We can log in to our LAN router and see the test_2 device on the network. This is a good opportunity to assign it a consistent IP address, so you can always find the container again. Stop and restart the container so it picks up the new IP address.

Let's create a user for me with ssh access

    me@host:~$ lxc shell test_2
        mesg: ttyname failed: No such device     // Ignore this message

        root@test_2:~# adduser me                // Includes creating a password
        root@test_2:~# adduser me sudo           // Add me to the "sudo" group for easy remote administration via ssh
        root@test_2:~# nano /etc/ssh/sshd_config

           PasswordAuthentication yes            // Temporary while we set up ssh keys

        root@test_2:~# systemctl restart sshd
        root@test_2:~# exit

Copy my key. Remember to do this from ALL systems you are going to SSH into this container from:

    me@desktop:~$ ssh-copy-id me@192.168.1.124

Now I can ssh directly into the container using keys, so let's end password login.

    me@test_2:~$ sudo nano /etc/ssh/sshd_config

           PermitRootLogin no
           PasswordAuthentication no

    me@test_2:~$ sudo systemctl restart sshd

Remove the default "ubuntu" user, since we won't be using it.

    me@test_2:~$ sudo deluser ubuntu
    me@test_2:~$ sudo rm -r /home/ubuntu

Moving on to package management, simplify the apt sources so that only main and universe (and their -updates and -security pockets) are used. We only need what the installed service requires.

    me@test_2:~$ sudo nano /etc/apt/sources.list

        deb http://archive.ubuntu.com/ubuntu disco main universe
        deb http://archive.ubuntu.com/ubuntu disco-updates main universe
        deb http://security.ubuntu.com/ubuntu disco-security main universe

    me@test_2:~$ sudo apt update                        // Since the sources have changed
    me@test_2:~$ sudo apt upgrade                       // Now is a good time

Finally, let's install unattended-upgrades and configure it to upgrade ALL packages from our limited apt sources. This means we are less likely to discover months of unapplied upgrades and security fixes. This is optional, merely my preference:

    me@test_2:~$ sudo apt install unattended-upgrades
    me@test_2:~$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

        // Uncomment the following two lines:
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";

And there we have it - a long-term container that is easily (but securely) accessed via ssh for maintenance and that automatically pulls package updates. Lightweight VM-like behavior with a consistent IP address. Note that "lxc shell" on the host will still give a root prompt, but recall that the purpose of a container is to keep the service from getting out, not to keep the host from getting in. Also note that, due to macvlan networking, the container cannot communicate across the network with the host.

How I set up LXD on my Ubuntu 19.04 server

I have a lovely little server that is slowly filling with LXD containers.

Here is how I set up LXD on the server (host).

Install LXD:

My host started as a 19.04 minimal install, so snapd wasn't included. LXD is packaged only for snap now (the deb simply installs the snap).
These references were extremely helpful. Read (or re-read) them: reference 1 reference 2

    host:~$ sudo apt install snapd
    host:~$ sudo snap install lxd
    host:~$ sudo adduser me lxd     // Add me to the LXD group
    host:~$ newgrp lxd              // New group takes effect without logout/login

First Run:

The very first time you run LXD, it must be initialized. It asks a series of questions to set up the default profile. I find the defaults quite satisfactory, with one exception - I named the storage pool:

    host:~$ lxd init               // First run of LXD only - creates profile
        Would you like to use LXD clustering? (yes/no) [default=no]:
        Do you want to configure a new storage pool? (yes/no) [default=yes]:
        Name of the new storage pool [default=default]: container_storage
        Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
        Create a new ZFS pool? (yes/no) [default=yes]:
        Would you like to use an existing block device? (yes/no) [default=no]:
        Size in GB of the new loop device (1GB minimum) [default=15GB]:
        Would you like to connect to a MAAS server? (yes/no) [default=no]:
        Would you like to create a new local network bridge? (yes/no) [default=yes]:
        What should the new bridge be called? [default=lxdbr0]:
        What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        Would you like LXD to be available over the network? (yes/no) [default=no]:
        Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
        Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

My Preferences:

Your preferences may vary.
  1. I prefer nano over vi for the default text editor. I know it's silly to have such a preference, but I do.
  2. My containers get their IP address from the LAN router instead of the host, using macvlan. This means that containers can talk to the LAN, and to each other, but not to the host. Personally, I see this as a feature, not a bug.

Set default editor as nano (instead of vi). This is obviously nothing but catering to my personal taste, and has no effect on other steps:

    host:~$ echo 'export EDITOR=nano' >> ~/.profile
    host:~$ source ~/.profile

Change the networking profile from the default (NAT) so that each container instead pulls its IP address from the LAN router (macvlan). This is a matter of personal taste - it means I have one place, the router, to set IP addresses for all devices and containers. This only works with wired networking...if you are using wifi to connect a server full of containers to the LAN, then you really should rethink your plan anyway! (Reference)

    host:~$ ip route show default                                    // Learn the eth interface
        default via 192.168.2.1 dev enp0s3 proto dhcp metric 600     // Mine is enp0s3
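If you would rather script that lookup than eyeball it, the interface name is simply the field after "dev". A small sketch, where the sample line stands in for your actual `ip route` output:

```shell
# Sketch: extract the interface name that follows "dev" in a default-route line.
# route_line is sample data; on a real host use: route_line=$(ip route show default)
route_line='default via 192.168.2.1 dev enp0s3 proto dhcp metric 600'
iface=$(echo "$route_line" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}')
echo "$iface"
```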

    host:~$ lxc profile copy default lanprofile                      // Make mistakes on a copy, not the original
    host:~$ lxc profile device set lanprofile eth0 nictype macvlan   // Change nictype field
    host:~$ lxc profile device set lanprofile eth0 parent enp0s3     // Change parent field to real eth interface
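After those two changes, `lxc profile show lanprofile` should report something along these lines - a sketch only, since the pool name and other fields will reflect your own init answers:

```
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: enp0s3
    type: nic
  root:
    path: /
    pool: container_storage
    type: disk
name: lanprofile
```

The default profile is untouched, so any container launched without `-p lanprofile` still gets NAT networking via lxdbr0.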

Test:

Now that LXD is installed and configured, we can set up an unprivileged test container. An "unprivileged" container means that the container runs as an ordinary user on the larger system - if a process escapes the container, it has only normal (non-sudo, non-root) user permissions. LXD creates unprivileged containers by default, so this part is pretty easy. Let's use the "lanprofile" networking profile we just created. Let's use Ubuntu Disco (19.04). And let's call the container "test":

    host:~$ lxc launch -p lanprofile ubuntu:disco test

The container is now running. Log in to the LAN's router (or wherever your DHCP server is) and confirm that it appears among the DHCP clients.
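You can also ask LXD directly: `lxc list test --format csv -c n4` prints just the name and IPv4 columns, which is easy to parse. A sketch, where the sample line (including the made-up address) stands in for real `lxc list` output:

```shell
# Sketch: parse the IPv4 address out of `lxc list` CSV output.
# sample stands in for the output of: lxc list test --format csv -c n4
sample='test,192.168.2.50 (eth0)'
ip=$(echo "$sample" | cut -d, -f2 | awk '{print $1}')
echo "$ip"
```

With macvlan, the address you see here should match what the router handed out, not a 10.x address from lxdbr0.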

That's all for LXD setup. Now I'm ready to create containers and fill them with services.