
Monday, August 12, 2019

Experimenting with USB devices across the LAN with USBIP

USBIP is a Linux tool for accessing USB devices across a network. I'm trying it out.


At one end of the room, I have a Raspberry Pi with
  • A Philips USB Webcam
  • A no-name USB GPS dongle
  • A Nortek USB Z-Wave/Zigbee network controller dongle
At the other end of the room is my laptop.

Before starting anything, I plugged all three into another system to ensure that they worked properly.


Raspberry Pi Server Setup

The Pi is running stock Raspbian Buster, with the default "pi" user replaced by a new user ("me") with proper ssh keys.

Before we start, here's what 'lsusb' shows on the Pi:

    me@pi:~ $ lsusb
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
        Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Now we plug in the three USB devices and see what changed

    me@pi:~ $ lsusb
        Bus 001 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc. 
        Bus 001 Device 005: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 001 Device 006: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
        Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And here are the new devices created or modified

    me@pi:~ $ ls -l /dev | grep 12    // 12 is today's date
        drwxr-xr-x 4 root root          80 Aug 12 00:46 serial
        lrwxrwxrwx 1 root root           7 Aug 12 00:46 serial0 -> ttyAMA0
        drwxr-xr-x 4 root root         220 Aug 12 00:47 snd
        crw--w---- 1 root tty     204,  64 Aug 12 00:46 ttyAMA0
        crw-rw---- 1 root dialout 188,   0 Aug 12 00:46 ttyUSB0
        drwxr-xr-x 4 root root          80 Aug 12 00:47 v4l
        crw-rw---- 1 root video    81,   3 Aug 12 00:47 video0

Looks like...
  • /dev/ttyAMA0 is the Nortek Z-Wave controller
  • /dev/ttyUSB0 is the GPS stick
  • /dev/video0 is the webcam
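
If you'd rather confirm those guesses than infer them from timestamps, udevadm can report the USB IDs behind a device node (shown here for ttyUSB0; adjust the device name as needed):

    me@pi:~ $ udevadm info -q property -n /dev/ttyUSB0 | grep -E 'ID_VENDOR_ID|ID_MODEL_ID'
        // The vendor/model IDs printed should match one of the lsusb lines above
        // (067b:2303 if ttyUSB0 really is the Prolific GPS dongle)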

Installing USBIP onto Raspbian Buster is easy. However, the packaging is DIFFERENT from stock Debian or Ubuntu. This step is Raspbian-only:

    me@pi:~$ sudo apt install usbip
Now load the kernel module. The SERVER always uses the module 'usbip_host'.

    me@pi:~$ sudo modprobe usbip_host     // does not persist across reboot

List the devices that usbip can see. Note each bus ID - we'll need those later:

    me@pi:~ $ usbip list --local
 - busid 1-1.1 (0424:ec00)
   Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)

 - busid 1-1.2 (0471:0329)
   Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)

 - busid 1-1.4 (067b:2303)
   Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)

 - busid 1-1.5 (10c4:8a2a)
   Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)

  • We can ignore the Ethernet adapter
  • The Webcam is at 1-1.2
  • The GPS dongle is at 1-1.4
  • The Z-Wave Controller is at 1-1.5

Bind the devices.

    me@pi:~$ sudo usbip bind --busid=1-1.2        // does not persist across reboot
        usbip: info: bind device on busid 1-1.2: complete

    me@pi:~$ sudo usbip bind --busid=1-1.4        // does not persist across reboot
        usbip: info: bind device on busid 1-1.4: complete

    me@pi:~$ sudo usbip bind --busid=1-1.5        // does not persist across reboot
        usbip: info: bind device on busid 1-1.5: complete

The USB devices will now appear to any client on the network just as though they were plugged in locally.

If you want to STOP serving a USB device:

    me@pi:~$ sudo usbip unbind --busid=1-1.2

The server process (usbipd) may or may not already be running and serving on port 3240. Let's check:
    me@pi:~ $ ps -e | grep usbipd
        18966 ?        00:00:00 usbipd

    me@pi:~ $ sudo netstat -tulpn | grep 3240
        tcp        0      0 0.0.0.0:3240            0.0.0.0:*               LISTEN      18966/usbipd        
        tcp6       0      0 :::3240                 :::*                    LISTEN      18966/usbipd

So usbipd is active and listening. If it were not, we would start it with:

    me@pi:~ $ sudo usbipd -D

You can run it more than once; only one daemon will start. The usbipd server does NOT need to be running to bind/unbind USB devices - you can start the server and bind/unbind in any order you wish. If you need to debug a connection, omit the -D (daemonize; fork into the background) so you can see the debug messages. See 'man usbipd' for the startup options to change the port, IPv4/IPv6 behavior, etc.
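
For example, to troubleshoot in the foreground, or to daemonize on a non-default port (flag names as listed in 'man usbipd'; verify against your version of the usbip tools):

    me@pi:~ $ sudo usbipd --debug                 // Stay in the foreground and print debug messages
    me@pi:~ $ sudo usbipd -D --tcp-port 3241      // Daemonize, listening on a non-default port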


Laptop Client Setup

Let's look at the USB devices on my laptop before starting:

    me@laptop:~$ lsusb
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

In stock Debian (not Raspbian) and Ubuntu, usbip is NOT a separate package. It's included in the 'linux-tools-generic' package, which many folks already have installed...

    me@laptop:~$ apt list linux-tools-generic
        Listing... Done
        linux-tools-generic/disco-updates 5.0.0.23.24 amd64   // Doesn't say "[installed]"

...but apparently I don't. Let's install it.

    me@laptop:~$ sudo apt install linux-tools-generic

Now load the kernel module. The CLIENT always uses the kernel module 'vhci-hcd'.

    me@laptop:~$ sudo modprobe vhci-hcd     // does not persist across reboot

List the available USB devices on the Pi server (IP addr aa.bb.cc.dd). Those Bus IDs should look familiar.

    me@laptop:~$ usbip list -r aa.bb.cc.dd                        // List available on the IP address
        usbip: error: failed to open /usr/share/hwdata//usb.ids   // Ignore this error
        Exportable USB devices
        ======================
         - aa.bb.cc.dd
              1-1.5: unknown vendor : unknown product (10c4:8a2a)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5
                   : (Defined at Interface level) (00/00/00)
                   :  0 - unknown class / unknown subclass / unknown protocol (ff/00/00)
                   :  1 - unknown class / unknown subclass / unknown protocol (ff/00/00)


              1-1.4: unknown vendor : unknown product (067b:2303)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4
                   : (Defined at Interface level) (00/00/00)

              1-1.2: unknown vendor : unknown product (0471:0329)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2
                   : (Defined at Interface level) (00/00/00)

Now we attach the three USB devices. This will not persist across a reboot.

    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.2
    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.4
    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.5
    // No feedback upon success

The remote USB devices now show in 'lsusb'

    me@laptop:~$ lsusb
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc. 
        Bus 003 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 003 Device 002: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And we can see that new devices have appeared in /dev. Based upon the order we attached, it's likely that
  • The webcam 1-1.2 is at /dev/video2
  • The GPS dongle 1-1.4 is probably at /dev/ttyUSB0
  • The Z-Wave controller 1-1.5 is at /dev/ttyUSB1
  • The same dongle includes a Zigbee controller, too, at /dev/ttyUSB2
The Z-Wave/Zigbee controller has had its major number changed from 204 to 188. We don't know yet whether that matters.

    me@laptop:~$ ls -l /dev | grep 12
        drwxr-xr-x  4 root root            80 Aug 12 00:56 serial
        crw-rw----  1 root dialout 188,     0 Aug 12 00:56 ttyUSB0
        crw-rw----  1 root dialout 188,     1 Aug 12 00:56 ttyUSB1
        crw-rw----  1 root dialout 188,     2 Aug 12 00:56 ttyUSB2
        crw-rw----+ 1 root video    81,     2 Aug 12 00:56 video2


Testing Results

I tested the GPS using the 'gpsmon' application, included in the 'gpsd-clients' package. We don't actually need gpsd; we can point gpsmon directly at the remote USB device.

    me@laptop:~$ gpsmon /dev/ttyUSB0
        gpsmon:ERROR: SER: device open of /dev/ttyUSB0 failed: Permission denied - retrying read-only
        gpsmon:ERROR: SER: read-only device open of /dev/ttyUSB0 failed: Permission denied

Aha, a permission issue, not a usbip failure!
Adding myself to the 'dialout' group fixes it. A second test across a VPN connection, from a remote location, was also successful.

    me@laptop:~$ ls -la /dev/ttyUSB0
        crw-rw---- 1 root dialout 188, 0 Aug 11 21:41 /dev/ttyUSB0    // 'dialout' group

    me@laptop:~$ sudo adduser me dialout
        Adding user `me' to group `dialout' ...
        Adding user me to group dialout
        Done.

    me@laptop:~$ newgrp dialout    // Prevents need to logout/login for new group to take effect

    me@laptop:~$ gpsmon /dev/ttyUSB0
    // Success!

The webcam is immediately recognized in both Cheese and VLC, and plays across the LAN with only a noticeable half-second lag. In a second test, across a VPN connection from a remote location, the USB device was recognized, but not enough data arrived in time for the applications to show video.

There were a few hiccups along the way. The --debug flag helps a lot to track down the problems:
  • Client failed to connect with "system error" - turns out usbipd was not running on the server.
  • Client could see the list, but failed to attach with "attach failed" - needed to reboot the server (not sure why)
  • An active usbip connection prevents my laptop from sleeping properly
  • The Z-Wave controller requires Home Assistant or equivalent to exercise it - more than I want to install onto the testing laptop. It would likely hit permission issues, too.


Cleaning up

To tell a CLIENT to cease using a remote USB device (virtual unplug), you need to know the usbip port number. Well, not really: the attachments don't persist, so we could also simply reboot the client instead.

    me@laptop:~$ usbip port   // Not using sudo - errors, but still port numbers
        Imported USB devices
        ====================
        libusbip: error: fopen
        libusbip: error: read_record
        Port 00:  at Full Speed(12Mbps)
               Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
               5-1 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/007
        libusbip: error: fopen
        libusbip: error: read_record
        Port 01:  at Full Speed(12Mbps)
               Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
               5-2 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/005
        libusbip: error: fopen
        libusbip: error: read_record
        Port 02:  at Full Speed(12Mbps)
               Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
               5-3 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/006

    me@laptop:~$ sudo usbip port    // Using sudo, no errors and same port numbers
        Imported USB devices
        ====================
        Port 00: <port in use> at Full Speed(12Mbps)
               Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
               5-1 -> usbip://aa.bb.cc.dd:3240/1-1.2
                   -> remote bus/dev 001/007
        Port 01: <port in use> at Full Speed(12Mbps)
               Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
               5-2 -> usbip://aa.bb.cc.dd:3240/1-1.4
                   -> remote bus/dev 001/005
        Port 02: <port in use> at Full Speed(12Mbps)
               Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
               5-3 -> usbip://aa.bb.cc.dd:3240/1-1.5
                   -> remote bus/dev 001/006
 
    me@laptop:~$ sudo usbip detach --port 00
        usbip: info: Port 0 is now detached!

    me@laptop:~$ sudo usbip detach --port 01
        usbip: info: Port 1 is now detached!

    me@laptop:~$ sudo usbip detach --port 02
        usbip: info: Port 2 is now detached!

    me@laptop:~$ lsusb              // The remote USB devices are gone now
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    me@laptop:~$ sudo modprobe -r vhci-hcd    // Remove the kernel module

The only two persistent changes we made on the CLIENT were adding myself to the 'dialout' group and installing the 'linux-tools-generic' package, so let's undo both. If you were ALREADY in the 'dialout' group, or had the package installed for other reasons, then obviously don't remove them. It's not the system's responsibility to keep track of why you have certain permissions or packages -- that's the human's job. After this step, my CLIENT is back to stock Ubuntu.

    me@laptop:~$ sudo deluser me dialout                  // Takes effect after logout
    me@laptop:~$ sudo apt autoremove linux-tools-generic  // Immediate

Telling a SERVER to stop sharing a USB device (virtual unplug) and shutting down the server is much easier. Of course, this is also a Pi, and none of the bind or module changes persist, so it might be easier to simply reboot it.

    me@pi:~$ usbip list -l
         - busid 1-1.1 (0424:ec00)
           Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)

         - busid 1-1.2 (0471:0329)
           Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)

         - busid 1-1.4 (067b:2303)
           Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)

         - busid 1-1.5 (10c4:8a2a)
           Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)

    me@pi:~$ sudo usbip unbind --busid=1-1.2
        usbip: info: unbind device on busid 1-1.2: complete
    me@pi:~$ sudo usbip unbind --busid=1-1.4
        usbip: info: unbind device on busid 1-1.4: complete
    me@pi:~$ sudo usbip unbind --busid=1-1.5
        usbip: info: unbind device on busid 1-1.5: complete

    me@pi:~$ sudo pkill usbipd

The only persistent change we made on the Pi is installing the 'usbip' package. Once removed, we're back to stock Raspbian.

    me@pi:~$ sudo apt autoremove usbip


Making it permanent

There are two additional steps to making a permanent server, and essentially the same two steps to make a permanent client. This means a USBIP server that begins serving automatically upon boot, and a client that automatically connects to the server upon boot.

Add the kernel modules to /etc/modules so they load automatically at boot. To undo this on a client or server, delete the line from /etc/modules again. You don't need to use 'nano' - use any text editor you wish, obviously.

    me@pi:~$ sudo nano /etc/modules     // usbipd SERVER

        usbip_host

    me@laptop:~$ sudo nano /etc/modules     // usbip CLIENT

        vhci-hcd

    // Another way to add the USBIP kernel modules to /etc/modules on the SERVER
    me@pi:~$ sudo -s                            // "sudo echo" won't work
    me@pi:~# echo 'usbip_host' >> /etc/modules
    me@pi:~# exit

    // Another way to add the USBIP kernel modules to /etc/modules on the CLIENT
    me@laptop:~$ sudo -s                            // "sudo echo" won't work
    me@laptop:~# echo 'vhci-hcd' >> /etc/modules
    me@laptop:~# exit

Add a systemd job to the SERVER to automatically bind the USB devices. You can use systemd to start, stop, and restart the server conveniently, and to start serving automatically at boot. (This example binds only the Z-Wave dongle; add an ExecStartPost line per device if you want to serve the others, too.)

    me@pi:~$ sudo nano /lib/systemd/system/usbipd.service

        [Unit]
        Description=usbip host daemon
        After=network.target

        [Service]
        Type=forking
        ExecStart=/usr/sbin/usbipd -D
        ExecStartPost=/bin/sh -c "/usr/sbin/usbip bind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"
        ExecStop=/bin/sh -c "/usr/sbin/usbip unbind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1); pkill usbipd"

        [Install]
        WantedBy=multi-user.target

To start the new SERVER:
    me@pi:~$ sudo pkill usbipd                          // End the current server daemon (if any)
    me@pi:~$ sudo systemctl --system daemon-reload      // Reload system jobs because one changed
    me@pi:~$ sudo systemctl enable usbipd.service       // Set to run at startup
    me@pi:~$ sudo systemctl start usbipd.service        // Run now

Add a systemd job to the CLIENT to automatically attach the remote USB devices at startup. You can use systemd to unplug conveniently before sleeping, and to reset the connection if needed. Note: On the "ExecStart" line, substitute your server's IP address for aa.bb.cc.dd in two places.

    me@laptop:~$ sudo nano /lib/systemd/system/usbip.service

        [Unit]
        Description=usbip client
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/sh -c "/usr/bin/usbip attach -r aa.bb.cc.dd -b $(/usr/bin/usbip list -r aa.bb.cc.dd | grep '10c4:8a2a' | cut -d: -f1)"
        ExecStop=/bin/sh -c "/usr/bin/usbip detach --port=$(/usr/bin/usbip port | grep '<port in use>' | sed -E 's/^Port ([0-9][0-9]).*/\1/')"

        [Install]
        WantedBy=multi-user.target

To start the new CLIENT attachment(s):

    me@laptop:~$ sudo systemctl --system daemon-reload      // Reload system jobs because one changed
    me@laptop:~$ sudo systemctl enable usbip.service       // Set to run at startup
    me@laptop:~$ sudo systemctl start usbip.service        // Run now

Saturday, August 10, 2019

Experiment: Home Assistant in an LXD container without a venv

Update: August 2020 (one year later)

Here's a slightly different way of doing it entirely from the host. Tested with Home Assistant 0.114.

lxc launch -p lanprofile ubuntu:focal ha-test

# Update apt so we can install pip
cat <<EOF > /tmp/container-sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
deb http://security.ubuntu.com/ubuntu focal-security main universe
EOF

lxc file push /tmp/container-sources.list ha-test/etc/apt/sources.list
lxc exec ha-test -- apt update
lxc exec ha-test -- apt upgrade

# Here's the meat: Installing pip3, then using pip3 to install HA and dependencies.
lxc exec ha-test -- apt install python3-pip
lxc exec ha-test -- pip3 install aiohttp_cors defusedxml emoji hass_nabucasa home-assistant-frontend homeassistant mutagen netdisco sqlalchemy zeroconf

# Example of fixing a version error message that occurs during pip install:
# ERROR: homeassistant 0.114.2 has requirement cryptography==2.9.2, but you'll have cryptography 2.8 which is incompatible.
lxc exec ha-test -- pip3 install --upgrade cryptography==2.9.2

# Can't start the web browser without knowing the container's IP address. 
lxc list | grep ha-test
   | ha-test       | RUNNING | 192.168.2.248 (eth0) |      | CONTAINER | 0         |

# Run Hass
lxc exec ha-test -- hass
   Unable to find configuration. Creating default one in /root/.homeassistant

# Web browser: http://192.168.2.248:8123....and there it is!


Home Assistant usually runs in a Python 3 virtual environment (venv). The developers wisely chose Python 3 because it has all the libraries they need, and wisely chose venv to create effectively a single, predictable platform upon which Home Assistant can run. Users like it because a couple of extra shell incantations are the difference between success and cryptic-error hell.
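
For comparison, the conventional venv route is only a few commands - something like this sketch (the /srv/homeassistant path is just an example, not an official location):

    root@ha-test2:~# apt install python3-venv python3-pip
    root@ha-test2:~# python3 -m venv /srv/homeassistant
    root@ha-test2:~# /srv/homeassistant/bin/pip3 install homeassistant
    root@ha-test2:~# /srv/homeassistant/bin/hass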

Let's see if I can get HA 0.97 to run on Ubuntu 19.04. In this case, I'm running it in a disposable LXD container so I can just throw it away after the experiment is complete. This experiment turned out to be about 75% successful - Home Assistant installs and runs outside the venv, but logging and sqlalchemy failed to install, so the final product had some limitations.


Setup

First, let's create the LXD container (the two LXD posts further down cover installing LXD and creating containers). I use a networking profile ("lanprofile") that uses DHCP to request an IP address from my router instead of the local server. I'm using an Ubuntu 19.04 ("Disco") image for the container. And I'm calling the container "ha-test2," second in a line of Home Assistant test containers.

    me@host:~$ lxc launch -p lanprofile ubuntu:disco ha-test2

After a minute or two, the container is running and has picked up an IP address from the router.

    me@host:~$ lxc list
        +----------+---------+----------------------+-----
        |   NAME   |  STATE  |        IPV4          |
        +----------+---------+----------------------+-----
        | ha-test2 | RUNNING | 192.168.1.252 (eth0) |
        +----------+---------+----------------------+-----

Let's enter the container. Note the change to a root prompt within the container. This is an unprivileged container (LXD's default), so root within the container is NOT root on the rest of the system. Note also the mysterious "ttyname failed: No such device" message, due to a very minor bug that does not affect our use of the container in any way.

    me@host:~$ lxc shell ha-test2
        mesg: ttyname failed: No such device
        root@ha-test2:~#

OPTIONAL: Limit the Ubuntu sources. We don't need -restricted or -multiverse or -proposed or -backports, etc. I replaced the entire file with the following three lines. Proper format is important!

    root@ha-test2:~# nano /etc/apt/sources.list

        deb http://archive.ubuntu.com/ubuntu disco main universe
        deb http://archive.ubuntu.com/ubuntu disco-updates main universe
        deb http://security.ubuntu.com/ubuntu disco-security main universe

OPTIONAL: Expand Unattended Upgrades to handle 100% of the limited sources. I replaced the entire file with the following five lines.

    root@ha-test2:~# nano /etc/apt/apt.conf.d/50unattended-upgrades 

        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}";
            "${distro_id}:${distro_codename}-security";
            "${distro_id}:${distro_codename}-updates";
        };

Since this is the first run of the package manager...

    root@ha-test2:~# apt update
    root@ha-test2:~# apt upgrade

Home Assistant uses Python 3's pip, not debs. So we install pip.

    root@ha-test2:~# apt install python3-pip


First Try - Learning Curve

Now we can use pip to install Home Assistant. This command runs for a few minutes and produces a lot of output as it downloads many dependencies. Some of that output looks, at first glance, like errors -- read it carefully; these are mostly notices about uninstalling previous versions, which would only matter if packages were being upgraded...and they are not, of course.

    root@ha-test2:~# pip3 install homeassistant

The first run of 'hass' (the Home Assistant program name) is where we start to encounter errors that need to be investigated and fixed. When the system ground to a halt for several minutes, I used CTRL+C to end the process and return to a shell prompt.

    root@ha-test2:~# hass

        // Lots of success...but then:

        2019-08-09 22:35:57 INFO (MainThread) [homeassistant.bootstrap] Setting up {'system_log'}
        2019-08-09 22:35:57 INFO (SyncWorker_2) [homeassistant.util.package] Attempting install of aiohttp_cors==0.7.0
        2019-08-09 22:36:01 INFO (MainThread) [homeassistant.setup] Setting up http
        2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Error during setup of component http
        Traceback (most recent call last):
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
            hass, processed_config
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/__init__.py", line 178, in async_setup
            ssl_profile=ssl_profile,
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/__init__.py", line 240, in __init__
            setup_cors(app, cors_origins)
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/cors.py", line 22, in setup_cors
            import aiohttp_cors
        ModuleNotFoundError: No module named 'aiohttp_cors'
        2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of system_log. Setup failed for dependencies: http
        2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Setup failed for system_log: Could not set up all dependencies.
        2019-08-09 22:36:01 INFO (SyncWorker_4) [homeassistant.util.package] Attempting install of sqlalchemy==1.3.5
        2019-08-09 22:36:11 INFO (MainThread) [homeassistant.setup] Setting up recorder
        Exception in thread Recorder:
        Traceback (most recent call last):
          File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
            self.run()
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/recorder/__init__.py", line 211, in run
            from .models import States, Events
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/recorder/models.py", line 6, in 
            from sqlalchemy import (
        ModuleNotFoundError: No module named 'sqlalchemy'

        2019-08-09 22:36:21 WARNING (MainThread) [homeassistant.setup] Setup of recorder is taking over 10 seconds.

        // Thread hangs here. Use CTRL+C to abort back to a shell prompt

There are two errors there. Both are simply bugs in Home Assistant's list of dependencies. The developers neglected to include dependencies upon "aiohttp_cors" and "sqlalchemy". Let's uninstall all the pip packages and dependencies and start over. The dependencies are listed in the 'pip3 show' command. Remember to delete pip from the list of removals, and to add homeassistant. The pip3 uninstall command asks a lot of questions about deleting files and directories -- as long as the offered removals are in /usr/local, it won't break anything.

    root@ha-test2:~# pip3 show homeassistant
        Name: homeassistant
        Version: 0.97.1
        Summary: Open-source home automation platform running on Python 3.
        Home-page: https://home-assistant.io/
        Author: The Home Assistant Authors
        Author-email: hello@home-assistant.io
        License: Apache License 2.0
        Location: /usr/local/lib/python3.7/dist-packages
        Requires: pyyaml, async-timeout, bcrypt, voluptuous, voluptuous-serialize, importlib-metadata, ruamel.yaml, jinja2, cryptography, python-slugify, pip, PyJWT, requests, aiohttp, certifi, attrs, astral, pytz
        Required-by: 

    root@ha-test2:~# pip3 uninstall homeassistant pyyaml async-timeout bcrypt voluptuous voluptuous-serialize importlib-metadata ruamel.yaml jinja2 cryptography python-slugify PyJWT requests aiohttp certifi attrs astral pytz


Second Try - Getting closer

For the second try, let's add those two missing dependencies. This time, logging and sqlalchemy started successfully, and we progressed to the next errors. The web server started, but the Home Assistant front end hosted on the web server failed. The .homeassistant config directory was created and populated.

    root@ha-test2:~# pip3 install homeassistant aiohttp_cors sqlalchemy

        [lots of installing]

    root@ha-test2:~# hass

        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setting up onboarding
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain config took 0.9 seconds.
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setting up automation
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain automation took 0.0 seconds.
        2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain onboarding took 0.0 seconds.
        2019-08-09 23:56:20 ERROR (MainThread) [homeassistant.config] Unable to import ssdp: No module named 'netdisco'
        2019-08-09 23:56:20 ERROR (MainThread) [homeassistant.setup] Setup failed for ssdp: Invalid config.
        2019-08-09 23:56:20 INFO (SyncWorker_3) [homeassistant.util.package] Attempting install of distro==1.4.0
        2019-08-09 23:56:24 INFO (MainThread) [homeassistant.setup] Setting up updater
        2019-08-09 23:56:24 INFO (MainThread) [homeassistant.setup] Setup of domain updater took 0.0 seconds.
        2019-08-09 23:56:24 INFO (SyncWorker_1) [homeassistant.util.package] Attempting install of mutagen==1.42.0
        2019-08-09 23:56:29 INFO (SyncWorker_2) [homeassistant.loader] Loaded google_translate from homeassistant.components.google_translate
        2019-08-09 23:56:29 INFO (SyncWorker_3) [homeassistant.util.package] Attempting install of hass-nabucasa==0.16
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up cloud
        2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.setup] Error during setup of component cloud
        Traceback (most recent call last):
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
            hass, processed_config
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/cloud/__init__.py", line 167, in async_setup
            from hass_nabucasa import Cloud
        ModuleNotFoundError: No module named 'hass_nabucasa'
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up mobile_app
        2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.config] Unable to import zeroconf: No module named 'zeroconf'
        2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.setup] Setup failed for zeroconf: Invalid config.
        2019-08-09 23:56:50 INFO (SyncWorker_2) [homeassistant.util.package] Attempting install of home-assistant-frontend==20190805.0
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setup of domain mobile_app took 0.0 seconds.
        2019-08-09 23:56:50 INFO (SyncWorker_3) [homeassistant.loader] Loaded notify from homeassistant.components.notify
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up notify
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setup of domain notify took 0.0 seconds.
        2019-08-09 23:56:50 INFO (MainThread) [homeassistant.components.notify] Setting up notify.mobile_app
        2019-08-09 23:57:24 INFO (MainThread) [homeassistant.setup] Setting up frontend
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Error during setup of component frontend
        Traceback (most recent call last):
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
            hass, processed_config
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/frontend/__init__.py", line 267, in async_setup
            root_path = _frontend_root(repo_path)
          File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/frontend/__init__.py", line 244, in _frontend_root
            import hass_frontend
        ModuleNotFoundError: No module named 'hass_frontend'
        2019-08-09 23:57:24 INFO (SyncWorker_0) [homeassistant.util.package] Attempting install of gTTS-token==1.1.3
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of logbook. Setup failed for dependencies: frontend
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for logbook: Could not set up all dependencies.
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of map. Setup failed for dependencies: frontend
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for map: Could not set up all dependencies.
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of default_config. Setup failed for dependencies: cloud, frontend, logbook, map, ssdp, zeroconf
        2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for default_config: Could not set up all dependencies.
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.setup] Setting up tts
        2019-08-09 23:57:30 INFO (SyncWorker_1) [homeassistant.components.tts] Create cache dir /root/.homeassistant/tts.
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.setup] Setup of domain tts took 0.0 seconds.
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.bootstrap] Home Assistant initialized in 87.48s
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.core] Starting Home Assistant
        2019-08-09 23:57:30 INFO (MainThread) [homeassistant.core] Timer:starting

We have two missing dependencies (netdisco and zeroconf), and a bunch of missing internal homeassistant modules. This looks a bit like a race condition - the setup script expects modules that aren't quite ready yet. This also explains why many of these errors do not appear during a subsequent run of hass.

Let's delete and try again with those two additional dependencies....
    root@ha-test2:~# pip3 uninstall homeassistant pyyaml async-timeout bcrypt voluptuous voluptuous-serialize importlib-metadata ruamel.yaml jinja2 cryptography python-slugify PyJWT requests aiohttp certifi attrs astral pytz aiohttp_cors sqlalchemy
    root@ha-test2:~# rm -r .homeassistant/


Third Try - Close enough to call it success

For the third try, let's also add the remaining missing dependencies (netdisco and zeroconf) to the install command.

    root@ha-test2:~# pip3 install homeassistant aiohttp_cors sqlalchemy netdisco zeroconf

        [lots of installing]

    root@ha-test2:~# hass

        // No missing dependencies
        // Same setup errors

On the first run of hass, the dependency errors are gone, but the setup errors remain and the website is still unavailable. On the second run of hass, there are no errors at all; the website and all features work. The system is ready for systemd integration to bring hass up and down with the system.
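
A minimal unit for that might look like the following sketch (assuming pip put hass in /usr/local/bin and the config stays in /root/.homeassistant, as in the runs above):

    root@ha-test2:~# nano /etc/systemd/system/home-assistant.service

        [Unit]
        Description=Home Assistant
        After=network-online.target

        [Service]
        Type=simple
        ExecStart=/usr/local/bin/hass -c /root/.homeassistant
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

    root@ha-test2:~# systemctl enable --now home-assistant.service    // Start now and at every boot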


Substituting Debs for Pips

Many of those pip dependencies are also available in Debian and Ubuntu. Let's try adding the debs, one by one, and see if we can reduce the number of pip dependencies. This is a separate experiment, obviously.

The process here is to delete homeassistant, its pip dependencies, and its config files, then replace Pips with Debs. We want to see if homeassistant pulls in the relevant pip anyway. If so, we can delete that pip, then see if homeassistant installs and initializes properly. That means that this experiment is not persistent - Home Assistant updates (like 0.97 to 0.98) will pull in all the removed pips again.
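
A quick way to see whether a given dependency is being satisfied by the deb or by a pip copy (the 'astral' package here is just an example):

    root@ha-test2:~# dpkg -l python3-astral                    // Is the deb installed?
    root@ha-test2:~# pip3 list 2>/dev/null | grep -i astral    // Did pip install its own copy anyway?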

Several packages are already installed in the default Ubuntu 19.04 image, but are superseded by pips:
  • python3-certifi, python3-cryptography, python3-jinja2, python3-multidict, python3-requests, python3-yarl
Some packages are not available as debs at all. These are all dependencies of homeassistant:
  • attrs, homeassistant, importlib-metadata, PyJWT, pyyaml, zipp
Several packages, once installed, no longer pull in the pip:

    root@ha-test2:~# apt install python3-async-timeout python3-voluptuous-serialize

These packages, after installed, continue to pull in the pip anyway:

    root@ha-test2:~# apt install python3-aiohttp python3-aiohttp-cors python3-astral python3-async-timeout python3-bcrypt python3-python-slugify python3-ruamel.yaml python3-tz python3-voluptuous python3-voluptuous-serialize


After installing all those debs, the homeassistant install looks something like this:

    root@ha-test2:~# pip3 install homeassistant
    root@ha-test2:~# pip3 uninstall aiohttp aiohttp_cors astral bcrypt certifi cryptography jinja2 multidict python-slugify pytz requests ruamel.yaml voluptuous yarl
    root@ha-test2:~# hass      // first time - no new install errors
    root@ha-test2:~# hass      // frontend works, no startup errors

Of course, this was an experiment - your mileage may vary. You may encounter problems that I did not. But it IS clearly possible to install Home Assistant into a non-venv environment, clearly possible to install Home Assistant into an LXD container, and clearly possible to more closely integrate Home Assistant into a Debian-based system.

Thursday, August 8, 2019

Creating an LXD container on my Ubuntu 19.04 host

I just finished setting up LXD on my Ubuntu 19.04 server, and I'm ready to create a container.

Installing the service into the container is a separate step - this is just setting up and configuring the container itself.

Creating a disposable container:

Actually, we did this already with our test container:

    me@host:~$ lxc launch -p lanprofile ubuntu:disco test

Let's see if that container is still there:

    me@host:~$ lxc list
        +----------------+---------+---------------------+-----------------------------------------------+------------+-----------+
        |      NAME      |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
        +----------------+---------+---------------------+-----------------------------------------------+------------+-----------+
        | test           | RUNNING | 192.168.1.124 (eth0)| 2615:a000:141f:e267:215:3eef:fe2a:c55d (eth0) | PERSISTENT | 0         |
        +----------------+---------+---------------------+-----------------------------------------------+------------+-----------+

We can enter the container to run commands on its shell. Note that root inside the container is not root (unprivileged) on the host. The container comes with a default "ubuntu" user, but since we have root we don't seem to need it.

    me@host:~$ lxc shell test
        mesg: ttyname failed: No such device  // Ignore this message
        root@test:~#                          // Look, a root prompt within the container!
        root@test:~# exit
            logout
    me@host:~$                                // Back to the host

We can stop and then restart containers. No sudo needed, since we added ourselves to the 'lxd' group:

    me@host:~$ lxc stop test
    me@host:~$ lxc start test

And when we are done we can destroy the container:

    me@host:~$ lxc stop test
    me@host:~$ lxc delete test

Creating a long-term container:

Now I want to create a container for a long-term service, which means adding some security: non-root users, independent ssh access, and package upgrades. This container can function like a lightweight VM, though with rather less overhead.

    me@host:~$ lxc launch -p lanprofile ubuntu:disco test_2

We can login to our LAN router, and see the test_2 device on the network. This is a good opportunity to assign it a consistent IP address, so you can always find the container again. Stop and restart the container so it picks up the new IP address.
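
For example, from the host:

    me@host:~$ lxc stop test_2      // Stop, then start, so the container requests its reserved address
    me@host:~$ lxc start test_2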

Let's create a user for me with ssh access

    me@host:~$ lxc shell test_2
        mesg: ttyname failed: No such device     // Ignore this message

        root@test_2:~# adduser me                // Includes creating a password
        root@test_2:~# adduser me sudo           // Add me to the "sudo" group for easy remote administration via ssh
        root@test_2:~# nano /etc/ssh/sshd_config

           PasswordAuthentication yes            // Temporary while we set up ssh keys

        root@test_2:~# systemctl restart sshd
        root@test_2:~# exit

Copy my key. Remember to do this from ALL systems you are going to SSH into this container from:

    me@desktop:~$ ssh-copy-id me@192.168.1.124

Now I can ssh directly into the container using keys, so let's end password login.

    me@test_2:~$ sudo nano /etc/ssh/sshd_config

           PermitRootLogin no
           PasswordAuthentication no

    me@test_2:~$ sudo systemctl restart sshd

Remove the default "ubuntu" user, since we won't be using it.

    me@test_2:~$ sudo deluser ubuntu
    me@test_2:~$ sudo rm -r /home/ubuntu

Moving on to package management, simplify the apt sources so only -main and -universe are seen in -updates and -security. We only need what the installed service requires.

    me@test_2:~$ sudo nano /etc/apt/sources.list

        deb http://archive.ubuntu.com/ubuntu disco main universe
        deb http://archive.ubuntu.com/ubuntu disco-updates main universe
        deb http://security.ubuntu.com/ubuntu disco-security main universe

    me@test_2:~$ sudo apt update                        // Since the sources have changed
    me@test_2:~$ sudo apt upgrade                       // Now is a good time

Finally, let's install unattended-upgrades and configure it to upgrade ALL packages from our limited apt sources. This means we are less likely to discover months of unapplied upgrades and security fixes. This is optional, merely my preference:

    me@test_2:~$ sudo apt install unattended-upgrades
    me@test_2:~$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

        // Uncomment the following two lines:
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";

And there we have it - a long-term container that is easily (but securely) accessed via ssh for maintenance and automatically pulls package updates. Lightweight VM-like behavior with a consistent IP address. Note that "lxc shell" on the host will still give a root prompt, but recall that the purpose of a container is to keep the service from getting out, not to keep the host from getting in. Also note that, due to macvlan networking, the container cannot communicate across the network with the host.
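
A quick way to see that macvlan limitation in action, from inside the container (addresses here are illustrative; substitute your own router and host):

    me@test_2:~$ ping -c 1 192.168.1.1      // The LAN router answers
    me@test_2:~$ ping -c 1 192.168.1.10     // The LXD host itself does not, by macvlan design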

How I set up LXD on my Ubuntu 19.04 server

I have a lovely little server that is slowly filling with LXD containers.

Here is how I set up LXD on the server (host).

Install LXD:

My host started as a 19.04 minimal install, so snapd wasn't included. LXD is packaged only for snap now (the deb simply installs the snap).
These references were extremely helpful. Read (or re-read) them: reference 1 reference 2

    host:~$ sudo apt install snapd
    host:~$ sudo snap install lxd
    host:~$ sudo adduser me lxd     // Add me to the LXD group
    host:~$ newgrp lxd              // New group takes effect without logout/login

First Run:

The very first time you run LXD, it must be initialized. It asks a set of questions to set up the default profile. I find that the defaults are quite satisfactory, with one exception - I named the storage:

    host:~$ lxd init               // First run of LXD only - creates profile
        Would you like to use LXD clustering? (yes/no) [default=no]:
        Do you want to configure a new storage pool? (yes/no) [default=yes]:
        Name of the new storage pool [default=default]: container_storage
        Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
        Create a new ZFS pool? (yes/no) [default=yes]:
        Would you like to use an existing block device? (yes/no) [default=no]:
        Size in GB of the new loop device (1GB minimum) [default=15GB]:
        Would you like to connect to a MAAS server? (yes/no) [default=no]:
        Would you like to create a new local network bridge? (yes/no) [default=yes]:
        What should the new bridge be called? [default=lxdbr0]:
        What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        Would you like LXD to be available over the network? (yes/no) [default=no]:
        Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
        Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

My Preferences:

Your preferences may vary.
  1. I prefer nano over vi for the default text editor. I know it's silly to have such a preference, but I do.
  2. My containers get their IP address from the LAN router instead of the host, using macvlan. This means that containers can talk to the LAN, and to each other, but not to the host. Personally, I see this as a feature, not a bug.

Set default editor as nano (instead of vi). This is obviously nothing but catering to my personal taste, and has no effect on other steps:

    host:~$ echo 'export EDITOR=nano' >> ~/.profile
    host:~$ source ~/.profile

Change the networking profile from default (NAT) to instead pull IP addresses for each container from the LAN router (macvlan). This is a matter of personal taste - it simply means I have one place to set IP addresses, the router, for all devices and containers. This only works with wired networking...if you are using wifi to connect a server full of containers to the LAN, then you really should rethink your plan anyway! (Reference)

    host:~$ ip route show default                                     // Learn the eth interface
        default via 192.168.2.1 dev enp0s3 proto dhcp metric 600     // Mine is enp0s3 

    host:~$ lxc profile copy default lanprofile                      // Make mistakes on a copy, not the original
    host:~$ lxc profile device set lanprofile eth0 nictype macvlan   // Change nictype field
    host:~$ lxc profile device set lanprofile eth0 parent enp0s3     // Change parent field to real eth interface
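
You can review the edited profile before using it; the eth0 device should now show nictype macvlan and your real parent interface:

    host:~$ lxc profile show lanprofile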

Test:

Now that LXD is installed and configured, we can set up an unprivileged test container. An "unprivileged" container means that the container runs as an ordinary user on the larger system - if a process escapes the container, it has only normal (non-sudo, non-root) user permissions. LXD creates unprivileged containers by default so this part is pretty easy. Let's use the "lanprofile" networking profile we just created. Let's use Ubuntu Disco (19.04). And let's call the container "test":

    host:~$ lxc launch -p lanprofile ubuntu:disco test

The container is now running. Login to the LAN's router (or wherever your DHCP server is), and see that it's there among the dhcp clients.
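
You can also check from the host - the IPV4 column of 'lxc list' shows the address the router handed out:

    host:~$ lxc list test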

That's all for LXD setup. Now I'm ready to create containers and fill them with services.

Sunday, October 30, 2011

deluged on a server

I want to move my (quite limited) torrenting from my laptop onto the server. Here's how to run headless deluge on a server, and connect to it from the deluge client. (Instructions).

Since my server (Debian 6.0.3) is at version 1.2.3, but my laptop (Ubuntu 11.10) is at version 1.3.3, using the GTK client won't work. I tried and tried, but ultimately the incompatibility defeated me...

...so instead of the GTK client, we'll use the web client. (There's also a console client)





Setup and start the deluged server

1) Run the following on the server as root:
apt-get install deluged deluge-console deluge-web
# To set this for automatic startup at boot, 
# see http://dev.deluge-torrent.org/wiki/UserGuide/InitScript

2) Run the following on the server as a user (not root)
deluged         # Run as USER to create the .config directory
pkill deluged   # Stop deluged

# Add an entry to the /home/USERNAME/.config/deluge/auth file
echo "USERNAME:my_deluge-only_password:5" >> /home/USERNAME/.config/deluge/auth

# Use deluge-console to change the config setting, allowing remote access.
# (For some reason, if you change .config/deluge/core.conf, the change is not persistent!)
deluge-console
config -s allow_remote True
exit

# Start deluged and deluge-web to launch the server and the web socket
deluged
deluge-web --fork
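
To confirm both came up, check that they are listening on their default ports (58846 for the deluged daemon, 8112 for deluge-web):

netstat -tln | grep -E '58846|8112'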

Connect to the server from the laptop:
Open a web browser to the server, port 8112: http://me.myserver.org:8112
Use the same password as the auth file (the "my_deluge-only_password")

And you should be in!

Try it with a small torrent (like a Debian Businesscard .iso)