Friday, May 8, 2020

Testing Ubuntu Core with Containers on VirtualBox

I want to try out Ubuntu Core to see if it's appropriate for running a small server with a couple containers.

The current OS is Ubuntu Server 20.04...but I'm really not using most of the Server features; those live in the LXD containers. So this is an experiment to see whether Ubuntu Core can function as the server OS.

Prerequisites: If you are looking to try this, you should already be familiar (not expert) with:
  • Using SSH
  • Using the vi text editor (Ubuntu Core lacks nano)
  • Basic networking concepts like DHCP
  • Basic VM and Container concepts


Download Ubuntu Core:
  • Create Ubuntu SSO Account (if you don't have one already)
  • Create an SSH key (if you don't have one already; a sketch follows at the end of this section)
  • Import your SSH Public Key to Ubuntu SSO.
  • Download an Ubuntu core .img file from https://ubuntu.com/download/iot#core
  • Convert the Ubuntu Core .img to a VirtualBox .vdi:

         me@desktop:~$ VBoxManage convertdd ubuntu-core-18-amd64.img ubuntu-core.vdi
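
  • If you still need to create that SSH key, here's a minimal sketch (RSA chosen only for broad compatibility with Ubuntu SSO):

         me@desktop:~$ ssh-keygen -t rsa -b 4096
         me@desktop:~$ cat ~/.ssh/id_rsa.pub      // The public key to import into Ubuntu SSO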


Set up a new machine in VirtualBox:
  • Install VirtualBox (if you haven't already):

         me@desktop:~$ sudo apt install virtualbox

  • In the VirtualBox settings, File -> Virtual Media Manager. Add the ubuntu-core.vdi
  • Create a New Machine. Use an existing Hard Disk File --> ubuntu-core.vdi
  • Check the network settings. You want a network that you will be able to access. I chose bridged networking so I could play with the new system from different locations, and set up a static IP address on the router. ENABLE promiscuous mode so containers can get IP addresses from the router; otherwise, VirtualBox will filter out the DHCP requests. (A VBoxManage sketch follows this list.)
  • OPTIONAL: Additional tweaks to enhance performance.
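  • For the command-line inclined, the same bridged + promiscuous settings can be applied with VBoxManage (a sketch; the VM name "ubuntu-core" and the host adapter name enp3s0 are assumptions, so substitute your own):

         me@desktop:~$ VBoxManage modifyvm "ubuntu-core" --nic1 bridged --bridgeadapter1 enp3s0
         me@desktop:~$ VBoxManage modifyvm "ubuntu-core" --nicpromisc1 allow-all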


Take a snapshot of your current network neighborhood:
  • Use this to figure out Ubuntu Core's IP address later on:
     me@Desktop:~$ ip neigh
     192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
     192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
     192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
     192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
     192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
     fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY


Boot the image in VirtualBox:
  • The first boot of Ubuntu Core requires a screen and keyboard (one reason we're trying this in VirtualBox). Subsequent logins will be done by ssh.
  • Answer the couple setup questions.
  • Use your Ubuntu One login e-mail address.
  • The VM will reboot itself (perhaps more than once) when complete.
  • Note that you cannot log in at the VM's TTY; Ubuntu Core's default login is via ssh. Instead, the VM's TTY tells you the IP address to use for ssh.
  • Since we are using a VM, this is a convenient place to take an initial snapshot. If you make a mess of networking in the next step, you can revert the snapshot.


Let's do some initial configuration:
  • After the VM reboots, the VirtualBox screen only shows the IP address.

  • // SSH into the Ubuntu Core Guest
    me@desktop:~$ ssh my-Ubuntu-One-login-name@IP-address
     [...Welcome message and MOTD...]
    me@localhost:~$
    
    // The default name is "localhost"
    // Let's change that. Takes effect after reboot.
    me@localhost:~$ sudo hostnamectl set-hostname 'ubuntu-core-vm'
    
    // Set the timezone. Takes effect immediately.
    me@localhost:~$ sudo timedatectl set-timezone 'America/Chicago'
    
    // OPTIONAL: Create a TTY login
    // This can be handy if you have networking problems.
    me@localhost:~$ sudo passwd my-Ubuntu-One-login-name


Let's set up the network bridge so containers can draw their IP address from the router:

  • We use vi to edit the netplan configuration. When we apply the changes, the ssh connection will be severed, so we must discover the new IP address to log in again.

  • me@localhost:~$ sudo vi /writable/system-data/etc/netplan/00-snapd-config.yaml
    
         #// The following seven lines are the original file. Commented instead of deleted.
         # This is the network config written by 'console_conf'
         #network:
         #  ethernets:
         #    eth0:
         #      addresses: []
         #      dhcp4: true
         #  version: 2
    
         #// The following lines are the new config 
         network:
           version: 2
           renderer: networkd
    
           ethernets:
             eth0:
               dhcp4: no
               dhcp6: no
    
           bridges:
             # br0 is the name that containers use as the parent
             br0:
               interfaces:
                 # eth0 is the device name in 'ip addr'
                 - eth0
               dhcp4: yes
               dhcp6: yes
         #// End
         
    
    // After the file is ready, implement it:
    me@localhost:~$ sudo netplan generate
    me@localhost:~$ sudo netplan apply
    
    // If all goes well...your ssh session just terminated without warning.
    


Test our new network settings:
  • The Ubuntu Core VM window will NOT change the displayed IP address after the netplan change...but that IP won't work anymore.
  • If you happen to reboot (not necessary) you will see that the TTY window displays no IP address when bridged...unless you have created an optional TTY login.
  • Instead of rebooting, let's take another network snapshot and compare to earlier:

         me@Desktop:~$ ip neigh
          192.168.1.226 dev enp3s0 lladdr c6:12:89:22:56:e4 STALE  <---- NEW
          192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
          192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
          192.168.1.235 dev enp3s0 lladdr DELAY                    <---- NEW
         192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
         192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
         192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
         fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY

  • We have two new lines: .226 and .235. One of those was the old IP address, and one is the new. SSH into the new IP address, and you're back in.

    me@desktop:~$ ssh my-Ubuntu-One-user-name@192.168.1.226
    Welcome to Ubuntu Core 18 (GNU/Linux 4.15.0-99-generic x86_64)
     [...Welcome message and MOTD...]
    Last login: Thu May  7 16:11:38 2020 from 192.168.1.6
    me@localhost:~$

  • Let's take a closer look at our new, successful network settings.

    me@localhost:~$ ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    
    2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether c6:12:89:22:56:e4 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.226/24 brd 192.168.1.255 scope global dynamic br0
           valid_lft 9545sec preferred_lft 9545sec
        inet6 2683:4000:a450:1678:c412:89ff:fe22:56e4/64 scope global dynamic mngtmpaddr noprefixroute
           valid_lft 600sec preferred_lft 600sec
        inet6 fe80::c412:89ff:fe22:56e4/64 scope link
           valid_lft forever preferred_lft forever
    
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
        link/ether 08:00:27:fd:20:92 brd ff:ff:ff:ff:ff:ff
    
    // Note that ubuntu-core-vm now uses the br0 address, and lacks an eth0 address.
    // That's what we want.


Set up static IP addresses on the Router and then reboot to use the new IP address.
  • Remember, the whole point of bridged networking is for the router to issue all the IP addresses and avoid doing a lot of NATing and Port Forwarding.
  • So now is the time to login to the Router and have it issue a constant IP address to the Bridge MAC address (in this case c6:12:89:22:56:e4). After this, ubuntu-core-vm (the Ubuntu Core Guest VM) will always have a predictable IP address.
  • Use VirtualBox to ACPI shutdown the VM, then restart it headless. We're looking for two changes: The hostname and the login IP address.
  • Starting headless can be done a few ways:

    1. GUI: VirtualBox Start button submenu
    2. me@Desktop:~$  VBoxHeadless --startvm name-of-vm
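    3. Equivalent to VBoxHeadless, if you prefer the VBoxManage front door:
       me@Desktop:~$  VBoxManage startvm name-of-vm --type headless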

  • Success at rebooting headless and logging into the permanent IP address is a good point for another VM Snapshot. And maybe a sandwich. Well done!


Install LXD onto ubuntu-core-vm:
  • Install:

    me@ubuntu-core-vm:~$ snap install lxd
    lxd 4.0.1 from Canonical✓ installed
    me@ubuntu-core-vm:~$

  • Add myself to the `lxd` group so 'sudo' isn't necessary anymore. This SHOULD work, but doesn't, due to a bug (discussion):

    host:~$ sudo adduser --extrausers me lxd     // Works on most Ubuntu; does NOT work on Ubuntu Core even with --extrausers
    host:~$ newgrp lxd                           // New group takes effect without logout/login

  • Instead, edit the group file directly using vi:

    // Use vi to edit the file:
    me@ubuntu-core-vm:~$ sudo vi /var/lib/extrausers/group
    
         // Change the lxd line:
         lxd:x:999:               // Old Line
         lxd:x:999:my-login-name  // New Line
    
    
    // Apply the new group settings without logout
    me@ubuntu-core-vm:~$ newgrp lxd
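
  • Verify the group change took effect; 'lxc list' should now print an empty container list instead of a permission error:

    me@ubuntu-core-vm:~$ lxc list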


Configure LXD:
  • LXD is easy to configure. We need to make three changes from the default settings since we already have a bridge (br0) set up that we want to use.

    me@ubuntu-core-vm:~$ lxd init
    
    Would you like to use LXD clustering? (yes/no) [default=no]:
    Do you want to configure a new storage pool? (yes/no) [default=yes]:
    Name of the new storage pool [default=default]:
    Name of the storage backend to use (dir, lvm, ceph, btrfs) [default=btrfs]:
    Create a new BTRFS pool? (yes/no) [default=yes]:
    Would you like to use an existing block device? (yes/no) [default=no]:
    Size in GB of the new loop device (1GB minimum) [default=15GB]:
    Would you like to connect to a MAAS server? (yes/no) [default=no]:
    Would you like to create a new local network bridge? (yes/no) [default=yes]: no    <------------------------- CHANGE
    Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes   <-- CHANGE
    Name of the existing bridge or host interface: br0     <----------------------------------------------------- CHANGE
    Would you like LXD to be available over the network? (yes/no) [default=no]:
    Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
    Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
    
    me@ubuntu-core-vm:~$
  • Next, we change the networking profile so containers use the bridge:

    // Open the default container profile in vi
    me@ubuntu-core-vm:~$ lxc profile edit default
    
         config: {}
         description: Default LXD profile
         devices:
           # Container eth0, not ubuntu-core-vm eth0
           eth0:
             name: eth0
             nictype: bridged
             # This is the ubuntu-core-vm br0, the real network connection
             parent: br0
             type: nic
           root:
             path: /
             pool: default
             type: disk
         name: default
         used_by: []
  • Add the Ubuntu-Minimal stream for cloud-images, so our test container is small:

    me@ubuntu-core-vm:~$ lxc remote add --protocol simplestreams ubuntu-minimal https://cloud-images.ubuntu.com/minimal/releases/
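
  • Optional check -- the new ubuntu-minimal remote should now appear in the remote list:

    me@ubuntu-core-vm:~$ lxc remote list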


Create and start a Minimal container:
    me@ubuntu-core-vm:~$ lxc launch ubuntu-minimal:20.04 test1
    Creating test1
    Starting test1
    
    me@ubuntu-core-vm:~$ lxc list
    +-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
    | NAME  |  STATE  |         IPV4         |                      IPV6                     |   TYPE    | SNAPSHOTS |
    +-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
    | test1 | RUNNING | 192.168.1.248 (eth0) | 2603:6000:a540:1678:216:3eff:fef0:3a6f (eth0) | CONTAINER | 0         |
    +-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
    
    
    // Let's test outbound connectivity from the container
    me@ubuntu-core-vm:~$ lxc shell test1
    root@test1:~# apt update
    Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
     [...lots of successful server connections...]
    Get:26 http://archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [1280 B]
    Fetched 16.3 MB in 5s (3009 kB/s)
    Reading package lists... Done
    Building dependency tree...
    Reading state information... Done
    5 packages can be upgraded. Run 'apt list --upgradable' to see them.
    root@test1:~#
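
  • When you're done testing, the container is disposable:

    root@test1:~# exit
    me@ubuntu-core-vm:~$ lxc stop test1
    me@ubuntu-core-vm:~$ lxc delete test1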

Wednesday, February 19, 2020

Pushing a file from the Host into an LXD Container

One of the little (and deliberate) papercuts of using unprivileged LXD containers is that unless data flows in from a network connection, it likely has the wrong owner and permissions.

Here are two examples in the HomeAssistant container.

1. The HA container needs to talk to a USB dongle elsewhere in the building. It does so using USBIP, and I discussed how to make it work in this previous post.

2. I want the HA container to display some performance data about the host (uptime, RAM used, similar excitements). Of course, it's a container, so it simply cannot do that natively without using lots of jiggery-pokery to escape the container. Instead, a script collects the information and pushes it into the container every few minutes.

     $ sudo lxc file push /path/to/host/file.json container-name/path/to/container/

Easy enough, right?

Well, not quite. Home Assistant, when installed, creates a non-root user, and puts all of its files in a subdirectory. Add another directory to keep things simple, and you get:

     /home/homeassistant/.homeassistant/external_files/

And, unfortunately, all those subdirectories are owned by a non-root user. So lxc cannot 'push' all the way into them (result: permission error).

    -rw-r--r-- 1   root root  154 Feb 19 15:34 file.json

So the pushed file can only land in the wrong location, and it arrives there with the wrong ownership.



Systemd to the rescue: Let's create a systemd job on the container that listens for a push, then fixes the location and the ownership.

The feature is called a systemd.path.

Like a systemd timer, it consists of two parts: a trigger (.path) and a service that gets triggered.

The .path file is very simple. Here's what I used for the trigger:

[Unit]
# /etc/systemd/system/server_status.path
Description=Listener for a new server status file

[Path]
PathModified=/home/homeassistant/.homeassistant/file.json

[Install]
WantedBy=multi-user.target

The service file is almost as simple. Here's what I used:

[Unit]
# /etc/systemd/system/server_status.service
Description=Move and CHOWN the server status file

[Service]
Type=oneshot
User=root
ExecStartPre=/bin/mv /home/homeassistant/.homeassistant/file.json /home/homeassistant/.homeassistant/external_files/
ExecStart=/bin/chown homeassistant:homeassistant /home/homeassistant/.homeassistant/external_files/file.json

[Install]
WantedBy=multi-user.target

Finally, enable and start the path (not the service):

sudo systemctl daemon-reload
sudo systemctl enable server_status.path
sudo systemctl start server_status.path
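
To test the listener, push a file from the host and check where it landed (the container name and paths here match the examples above):

     $ sudo lxc file push /path/to/host/file.json container-name/home/homeassistant/.homeassistant/
     $ lxc exec container-name -- ls -l /home/homeassistant/.homeassistant/external_files/
         // file.json should now be in external_files/, owned by homeassistant:homeassistant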


Sunday, February 2, 2020

Advanced Unattended Upgrade (Ubuntu): Chrome and Plex examples

This is a question that pops up occasionally in various support forums:

Why doesn't (Ubuntu) Unattended Upgrades work for all applications? How can I get it to work for my application?

Good question.

Here is what happens under the hood: the default settings for Unattended Upgrades cover only packages in the "-security" pocket of the Ubuntu repositories.

Not "-updates", not "-backports", not "-universe", not any third-party repositories, not any PPAs. Just "-security".

This is a deliberately conservative choice -- while the Ubuntu Security Team keeps its delta as small as possible, it's a historical fact that even small security patches have (unintentionally) introduced new bugs.



Here's how you can override that choice. 

Let's take a look at the top section of the file /etc/apt/apt.conf.d/50unattended-upgrades, and focus on the "Allowed-Origins" section. It's edited for clarity here:

Unattended-Upgrade::Allowed-Origins {
     "${distro_id}:${distro_codename}";
     "${distro_id}:${distro_codename}-security";
//   "${distro_id}:${distro_codename}-updates";
//   "${distro_id}:${distro_codename}-proposed";
//   "${distro_id}:${distro_codename}-backports";
};

There, you can see the various Ubuntu repo pockets.

You can also see that most of the options are commented out (the "//"). If you know how to use a basic text editor and sudo, you can safely change those settings. Warning: You can break your system quite horribly by enabling the wrong source. Enabling "-proposed" and other testing sources is a very bad idea.



How to add the -updates pocket of the Ubuntu Repos?

I've done this for years, BUT (this is important) I don't add lots of extra sources. Simply uncomment that line.

   "${distro_id}:${distro_codename}-updates";



That's all. When Unattended Upgrades runs next, it will load the new settings.



How to add the -universe pocket of the Ubuntu Repos?

You can create a '-universe' line like the others, but it won't do anything: universe is a component within each pocket, not a pocket of its own, so there is no '-universe' suite for the line to match.



How to add a generic new repository that's not in the Ubuntu Repos?

Add a line in the following format to the end of the section:

    //    "${distro_id}:${distro_codename}-backports";
    "origin:section"       <-------- Add this format
    };

The "origin" and "section" strings comes from the source URL, but there is a bit of tweaking involved.

Step 1: Find the URL of the source that you want to add. It's located somewhere in /etc/apt/sources.list or /etc/apt/sources.list.d/ . It looks something like this...

    deb http://security.ubuntu.com/ubuntu eoan-security main restricted universe multiverse
      ...or...
    deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
      ...or...
    deb https://downloads.plex.tv/repo/deb/ public main


Step 2: Find the corresponding Release file in your system for the URL.

    http://security.ubuntu.com/ubuntu eoan-security
      ...becomes...
    /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease


    http://dl.google.com/linux/chrome/deb/ stable
      ...becomes...
    /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_Release


    https://downloads.plex.tv/repo/deb/ public
      ...becomes...
    /var/lib/apt/lists/downloads.plex.tv_repo_deb_dists_public_Release


Step 3: Use grep to find the "Origin" string.

    $ grep Origin /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease
    Origin: Ubuntu

    $ grep Origin /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_Release
    Origin: Google LLC

    $ grep Origin /var/lib/apt/lists/downloads.plex.tv_repo_deb_dists_public_Release
    Origin: Artifactory


Step 4: With the Origin string and the Section (the suite name after the space in the URL), we have all the information we need:

    "Ubuntu:eoan-security"
       ...or...
    "Google LLC:stable"
       ...or...
    "Artifactory:public"



You're ready to add the appropriate string to the config file.


Step 5: Run Unattended Upgrades once, then check the log to make sure Unattended Upgrades accepted the change.

    $ sudo unattended-upgrade
    $ less /var/log/unattended-upgrades/unattended-upgrades.log   (sometimes sudo may be needed)

        You are looking for a recent line like:

    2020-02-02 13:36:23,165 INFO Allowed origins are: o=Ubuntu,a=eoan, o=Ubuntu,a=eoan-security, o=UbuntuESM,a=eoan, o=UbuntuESM,a=eoan-security, o=UbuntuESM,a=eoan-security

        Your new source and section should be listed.
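
You can also rehearse the whole run harmlessly first; unattended-upgrade's --dry-run flag resolves and downloads but installs nothing:

    $ sudo unattended-upgrade --dry-run --debug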


Summary for folks who just want to know how to update Chrome (stable)


  1. Edit (using sudo and a text editor) the file /etc/apt/apt.conf.d/50unattended-upgrades 
  2. In the section "Unattended-Upgrade::Allowed-Origins {", add the following line BEFORE the final "};"
    "Google LLC:stable"



Summary for folks who just want to know how to update Plex

  1. Edit (using sudo and a text editor) the file /etc/apt/apt.conf.d/50unattended-upgrades 
  2. In the section "Unattended-Upgrade::Allowed-Origins {", add the following line BEFORE the final "};"
    "Artifactory:public"




Tuesday, August 20, 2019

Toggling the Minecraft Server using systemd features

The new school year is upon us, suddenly the kids are playing Minecraft much less.

This means that the Minecraft server is sitting there churning all day and night, spawning and unspawning, eating CPU and generating heat for what is now a sparse collection of occasional players. It's an old Sempron 145 (45W, single core), so a single world sitting idle still consumes 40% CPU.

We already use systemd to start and stop the server. Let's add a couple new features to stop the server during the school day. Oh, and let's stop it during the deep night, also.

Here's what we currently have: A basic start/stop/restart systemd service that brings up the server at start:

   ## /etc/systemd/system/minecraft.service

   [Unit]
   Description=Minecraft Server
   After=network.target

   [Service]
   RemainAfterExit=yes
   WorkingDirectory=/home/minecraft
   User=minecraft
   Group=minecraft

   # Start Screen, Java, and Minecraft
   ExecStart=/usr/bin/screen -d -m -S mc java -server -Xms512M -Xmx1024M -jar server.jar nogui

   # Tell Minecraft to gracefully stop.
   # Ending Minecraft will terminate Java
   # systemd will kill Screen after the 10-second delay. No explicit kill for Screen needed
   ExecStop=/usr/bin/screen -p 0 -S mc -X eval 'stuff "say SERVER SHUTTING DOWN. Saving map..."\\015'
   ExecStop=/usr/bin/screen -p 0 -S mc -X eval 'stuff "save-all"\\015'
   ExecStop=/usr/bin/screen -p 0 -S mc -X eval 'stuff "stop"\\015'
   ExecStop=/bin/sleep 10

   [Install]
   WantedBy=multi-user.target


If you do something like this, remember to:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable/disable minecraft.service  // Autostart at boot
    $ sudo systemctl start/stop minecraft.service      // Manual start/stop


We need to start with a little bit of planning. After looking at the myriad of hours and days that the server should be available (Summer, Holidays, Weekends, School Afternoons), I don't see a way to make all those work smoothly together inside a cron job or systemd timer.

Instead, let's move the logic into a full-fledged Python script, and let the script decide whether the server should be on or off. Our systemd timer will run the script periodically.

Wait...that's not right. Systemd timers run only services. So the timer must trigger a service, the service runs the script, the script decides if the server should be on or off, and uses the existing service to do so.

Let's draw that out:

minecraft-hourly.timer -+  (timers can only run services)
                        |
                        v
                minecraft-hourly.service -+  (service can run a script)
                                          |
                                          v
                                   minecraft-hourly.py -+ (start/stop logic and decision)
                                                        |
                                                        v
                                                 minecraft.service (start/stop the server)


We know where we are going, so let's work backward to get there. We need a Python script with the logic to decide whether the server should be off or on for any given time or date.

## /home/me/minecraft-hourly.py

#!/usr/bin/env python3
import datetime, subprocess

def ok_to_run_server():
    """Determine if the server SHOULD be up"""
 
   now = datetime.datetime.now()

    ## All days, OK to run 0-2, 5-8, 16-24
    if -1 < now.hour < 2 or 4 < now.hour < 8 or 15 < now.hour < 24:
        return True

    ## OK to run on weekends -- now.weekday() = 5 (Sat) or 6 (Sun)
    if now.weekday() > 4:
        return True

    ## OK to run during Summer Vacation (usually mid May - mid Aug)
    if 5 < now.month < 8:
        return True 
    if now.month == 5 and now.day > 15:
        return True
    if now.month == 8 and now.day < 15:
        return True

    ## OK to run on School Holidays 2019-20
    ## Fill in these holidays!
    school_holidays = ["Aug 30 Fri","Sep 02 Mon"]
    if now.strftime("%b %d %a") in school_holidays:
        return True

    return False

def server_running():
    """Determine if the Minecraft server is currently up"""
    cmd = '/bin/systemctl is-active minecraft.service'
    proc = subprocess.Popen(cmd, shell=True,stdout=subprocess.PIPE)
    if proc.communicate()[0].decode().strip('\n') == 'active':
        return True
    else:
        return False

def run_server(run_flag=True):
    """run_flag=True will start the service. False will stop the service"""
    cmd = '/bin/systemctl start minecraft.service'
    if not run_flag:
        cmd = '/bin/systemctl stop minecraft.service'
    proc = subprocess.Popen(cmd, shell=True,stdout=subprocess.PIPE)
    proc.communicate()
    return

## If the server is stopped, but we're in an ON window, then start the server
if ok_to_run_server() and not server_running():
    run_server(True)
 
## If the server is running, but we're in a OFF window, then stop the server
elif not ok_to_run_server() and server_running():
    run_server(False)


This script should be executable, and since it tells systemctl to start/stop services, it should be run using sudo. Let's try this during school hours on a school day:

    $ chmod +x /home/me/minecraft-hourly.py

    $ sudo /home/me/minecraft-hourly.py
        // No output

    $ systemctl status minecraft.service 
        ● minecraft.service - Minecraft Server
           Loaded: loaded (/etc/systemd/system/minecraft.service; enabled; vendor preset: enabled)
           Active: inactive (dead)
        // It worked!


Still working backward, let's create the systemd service that runs the script. The 'Type' is 'oneshot': this is not an always-available daemon. It's a script that does its job, then terminates.

## /etc/systemd/system/minecraft-hourly.service

[Unit]
Description=Minecraft shutdown during school and night
After=network.target

[Service]
Type=oneshot
ExecStart=/home/me/minecraft-hourly.py
StandardOutput=journal

[Install]
WantedBy=multi-user.target


We want the hourly script to be triggered by TWO events: either the hourly timer OR the system starting up. This also means that we DON'T want minecraft.service to automatically start anymore. We want the script to start automatically, and to decide.

    $ sudo systemctl daemon-reload                     // We added a new service
    $ sudo systemctl enable minecraft-hourly.service   // Run at boot
    $ sudo systemctl disable minecraft.service         // No longer needs to run at boot


Let's test it again during school hours. It should shut down the Minecraft server. It did.

    $ sudo systemctl start minecraft.service       // Wait for it to finish loading (1-2 minutes)
    $ sudo systemctl start minecraft-hourly.service
    $ systemctl status minecraft.service
        ● minecraft.service - Minecraft Server
           Loaded: loaded (/etc/systemd/system/minecraft.service; disabled; vendor preset: enabled)
           Active: inactive (dead)



Finally, let's set up a systemd timer to launch the hourly service...well, hourly.

## /etc/systemd/system/minecraft-hourly.timer:

[Unit]
Description=Run the Minecraft script hourly

[Timer]
OnBootSec=0min
OnCalendar=*-*-* *:00:00
Unit=minecraft-hourly.service

[Install]
WantedBy=multi-user.target


Writing a timer, like writing a service, isn't enough. Remember to activate them.

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable minecraft-hourly.timer   // Start at boot
    $ sudo systemctl start minecraft-hourly.timer    // Start now


And let's check to see whether the new timer is working:

    $ systemctl list-timers | grep minecraft
        Tue 2019-08-20 15:00:30 CDT  30min left    Tue 2019-08-20 14:00:52 CDT  29min ago     minecraft-hourly.timer       minecraft-hourly.service
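
If the timer fires but the server doesn't toggle as expected, the triggered service logs to the journal; that's the place to look:

    $ journalctl -u minecraft-hourly.service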

Home Assistant in an LXD Container, USBIP to remote USB Z-Wave dongle

This post merely ties together a few existing items, and adds a few twiddly bits specific to Home Assistant, LXD, usbip, and my specific Z-Wave USB dongle.

     |
     |
     + Headless Machine / LXD Host (Ubuntu Server 19.04, 192.168.1.11)
     |    + usbip client
     |    + Some other LXD container     (192.168.1.111)
 LAN |    + Home Assistant LXD container (192.168.1.112)
     |
     + Remote Raspberry Pi (Raspbian, 192.168.1.38)
     |    + usbip server    
     |    + Some other Pi activity
     |    + Z-Wave Controller USB Dongle

Our goal is for Home Assistant, running inside an LXD container, to use the Z-Wave Controller plugged into an entirely different machine.

  1. Since this is a persistent service, harden the Pi: SSH using keys only, etc.
  2. Set up the usbip server on the Pi. Include the systemd service so it restarts and re-binds at reboot.
  3. Set up the usbip client on the host (the HOST, not the container). A sketch follows this list.
  4. If you haven't already, create the container and install Home Assistant into the container.
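
For step 3, the client side boils down to something like this sketch (the bus ID 1-1.4 is an assumption; use whatever 'usbip list' reports for the dongle):

    host$ sudo modprobe vhci-hcd                       // Load the USBIP client kernel module
    host$ usbip list -r 192.168.1.38                   // Ask the Pi what it exports
    host$ sudo usbip attach -r 192.168.1.38 -b 1-1.4   // Attach; the dongle appears as /dev/ttyUSBx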

The rest is specific to Ubuntu, to LXD, and to the USB Dongle.

The USB dongle is cheap and includes both Z-Wave and Zigbee, often sold under the 'Nortek' brand. When you plug it into a Linux system, it looks like this:

    $ lsusb
        Bus xxx Device xxx: ID 10c4:8a2a Cygnal Integrated Products, Inc. 



When plugged in on the host (or forwarded via usbip), the dongle creates new nodes in /dev:

    host$ ls -l /dev/ | grep USB
        crw-rw---- 1 root dialout 188,   0 Aug 18 10:29 ttyUSB0   // Z-Wave
        crw-rw---- 1 root dialout 188,   1 Aug 18 10:29 ttyUSB1   // Zigbee


These old-style nodes mean that we can NOT use LXD's USB hotplug feature (but there's an alternative). Also, it means that Home Assistant cannot autodetect the dongle's presence (we must manually edit the HA config).

Shut down the container, or restart the container after making the Z-Wave node accessible to it; without a restart, the container won't pick up the change. I've seen promises that it should be hot-pluggable. Maybe it is...but I needed to restart the container after this command. The command is very similar to the USB hotplug command, but uses 'unix-char' instead.

    host$ lxc config device add home-assistant zwave unix-char path=/dev/ttyUSB0
        Device zwave added to home-assistant
            // home-assistant is the name of the LXD container
            // zwave          is the name of this new config. We could name it anything we want
            // unix-char      is the method of attaching we are using (instead of 'usb')
            // path           is the path on the HOST

    host$ lxc restart home-assistant      // Restart the container


Now we move into a shell prompt in the CONTAINER (not the host). My personal preference, since I'm used to VMs, is to treat the container like a VM. It has (unnecessary) ssh access (key-only, of course), and non-root users to admin and to run the 'hass' application. It also has the (unnecessary) Python venv. All of that bloat is a combination of preference and following the install documentation, which simply did not expect that we might be able to run as root here. Seems like a whole new blog post. The upshot is that inside the container I have a user prompt ($) and use sudo instead of a root prompt (#). Your mileage may vary.

    container$ ls -l /dev | grep USB
        crw-rw---- 1 root   root    188,   0 Aug 19 17:52 ttyUSB0

Look at the permissions: They have changed. And, as I said before, hass is not running as root within this container. Let's make a one-time change to make the Z-Wave USB dongle readable by hass.

    container$ sudo chown root:dialout /dev/ttyUSB0
        // There should be no output

    container$ ls -la /dev/ | grep USB
        crw-rw---- 1 root   dialout 188,   0 Aug 19 17:53 ttyUSB0

    container$ sudo adduser homeassistant dialout  // OPTIONAL - add the 'homeassistant' user to the correct group
                                                   // This is done early in most Home Assistant install instructions
                                                   // You may have done this already
                                                   // If not, restart the container so it takes effect


The chown seems to NOT persist across a reboot, so let's add a line to the systemd service so the chown occurs every time the container comes up.

    container$ sudo nano /etc/systemd/system/home-assistant@homeassistant.service

        // Add to the [Service] section
        ExecStartPre=/bin/chown root:dialout /dev/ttyUSB0

    container$ sudo systemctl daemon-reload


Edit Home Assistant's config file, so hass knows where to find the Z-Wave node.

    me@container$ sudo -u homeassistant -H -s
    homeassistant@container$ nano /home/homeassistant/.homeassistant/configuration.yaml

        zwave:
          usb_path: /dev/ttyUSB0

    homeassistant@container$ exit


Finally, Debian-based systems must install one additional deb package to support Z-Wave.

    container$ sudo apt install libudev-dev


Restart Home Assistant (if it's running) to pick up the new config. Go into the web page, and try adding the Z-Wave integration.


    container$ sudo systemctl restart home-assistant@homeassistant.service

Home Assistant in an LXD container

It's time to migrate my Home Assistant from its experimental home on a Raspberry Pi to an LXD container on my Ubuntu server.

This has a lot of advantages, none of which are likely of the slightest interest to you. However, it also has one big problem: My Z-Wave USB dongle, currently plugged into the Pi, will need a new solution.

This blog post is about setting up Home Assistant in an LXD container. A different blog post will detail how to let the container see the USB dongle across the network.

Preliminaries

First, the server needs to have LXD installed, and we need to create a container for Home Assistant.

In this case, I created a container called "homeassistant." It has a consistent IP address assigned by the LAN router (aa.bb.cc.dd), it has a user ("me") with sudo permission, and that user can ssh into the container. To the network it looks like a separate machine. To me, it behaves like a VM. To the host server, it acts like an unprivileged container.

Installing

First we install the python dependencies:

    me@homeassistant:~$ sudo apt-get update
    me@homeassistant:~$ sudo apt-get upgrade
    me@homeassistant:~$ sudo apt-get install python3 python3-venv python3-pip libffi-dev libssl-dev

Add a user named "homeassistant" to run the application. We need to add me to the new "homeassistant" group, so I can edit the config files.

    me@homeassistant:~$ sudo useradd -rm homeassistant -G dialout    // Create the homeassistant user and group
    me@homeassistant:~$ sudo adduser me homeassistant                // Add me to the homeassistant group
    me@homeassistant:~$ newgrp homeassistant                         // Add group to current session; no need to logout

Create the homeassistant directory in /srv, set the ownership, and cd into the dir

    me@homeassistant:~$ cd /srv
    me@homeassistant:/srv$ sudo mkdir homeassistant
    me@homeassistant:/srv$ sudo chown homeassistant:homeassistant homeassistant
    me@homeassistant:/srv$ cd /srv/homeassistant

Switch to homeassistant user, create the venv, install homeassistant:

    me@homeassistant:/srv/homeassistant $ sudo -u homeassistant -H -s
    homeassistant@homeassistant:/srv/homeassistant $ python3 -m venv .
    homeassistant@homeassistant:/srv/homeassistant $ source bin/activate
    (homeassistant) homeassistant@homeassistant:/srv/homeassistant $ python3 -m pip install wheel
    (homeassistant) homeassistant@homeassistant:/srv/homeassistant $ pip3 install homeassistant

First Run and Testing

Start Home Assistant for the first time. This takes a few minutes - let it work:

    (homeassistant) $ hass

Home Assistant should be up and running now. Try to login to the container's webserver (included with homeassistant). Remember how we assigned the container an IP address? (aa.bb.cc.dd) Let's use it now. From another machine on the LAN, try to connect with a web browser: http://aa.bb.cc.dd:8123. If it doesn't work, stop here and start troubleshooting. You can CTRL+C hass to stop it, or you can stop it from inside the web page.
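
A quick headless version of that check, if a browser isn't handy (plain curl; any HTTP response header back means hass is answering):

    me@desktop:~$ curl -I http://aa.bb.cc.dd:8123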

Make Config Files

Once Home Assistant is working, let's change the permissions of the config files so that members of the "homeassistant" group (like me) can edit the files:

    (homeassistant) $ exit                                 // Exit the Venv
    homeassistant@homeassistant:/srv/homeassistant $ exit  // Exit the homeassistant user
    me@homeassistant:/srv/homeassistant $ cd ~             // Return to home dir - optional
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/automations.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/configuration.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/groups.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/scripts.yaml
    me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/secrets.yaml

Let's link Home Assistant to systemd, so hass starts when the container comes up, and hass stops when the container goes down. (Reference):

    me@homeassistant:~$ sudo nano /etc/systemd/system/home-assistant@homeassistant.service

        [Unit]
        Description=Home Assistant
        After=network-online.target

        [Service]
        Type=simple
        User=homeassistant
        ExecStart=/srv/homeassistant/bin/hass -c "/home/homeassistant/.homeassistant"
        // No need for a 'stop' command. Systemd will take care of it automatically

        [Install]
        WantedBy=multi-user.target

    me@homeassistant:~$ sudo systemctl --system daemon-reload                       // Load systemd config changes
    me@homeassistant:~$ sudo systemctl enable home-assistant@homeassistant.service  // Or disable
    me@homeassistant:~$ sudo systemctl start home-assistant@homeassistant.service   // Or stop

Finally, a word about updates: Home Assistant updates frequently, and since it's not deb-based, unattended-upgrades cannot see it. However, starting the application will automatically download and install Home Assistant updates. When this occurs, the web page will take a full minute (or three) before appearing. Be Patient!

    me@homeassistant:~$ sudo systemctl restart home-assistant@homeassistant.service  // Updates!

Saturday, August 17, 2019

USBIP into an LXD container

In a previous post, I used USBIP to forward GPS data from A to B. 'A' was a USB GPS dongle plugged into a Raspberry Pi (Raspbian). 'B' was my laptop.

Now let's take it another step. Let's move 'B' to an LXD container sitting on a headless Ubuntu 19.04 server. No other changes: Same GPS data, same use of USBIP. 'A' is the same USB GPS dongle, the same Raspberry Pi, and the same Raspbian.

Setting up usbip on the server ('B') is identical to setting it up on my laptop. Recall that this particular dongle creates a /dev/ttyUSB_X device upon insertion, and it's the same on the Pi, the Laptop, and the Server.

    me@server:~$ lsusb
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 006: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    me@server:~$ ls -l /dev/ttyUSB0
        crw-rw---- 1 root dialout 188, 0 Aug 17 21:13 /dev/ttyUSB0

LXD has a USB hotplug feature that works for many, but not all, USB devices, connecting USB devices on the host to the container. Devices that create a custom entry in /dev (like /dev/ttyUSB_X) generally cannot use USB hotplug...but CAN instead use 'unix-char' forwarding, which (seems to be) NOT hotpluggable.

Here's that LXD magic at work. In this case, I'm using a container called 'ha-test2', and let's simply name the dongle 'gps'. Do this while the container is stopped, or restart the container afterward.

    me@server:~$ lxc config device add ha-test2 gps unix-char path=/dev/ttyUSB0
        Device gps added to ha-test2

Now we start the container, and then jump into a shell inside. We see that /dev/ttyUSB0 has indeed been forwarded. And we test to ensure data is flowing -- that we can read from /dev/ttyUSB0.

    me@server:~$ lxc start ha-test2
    me@server:~$ lxc shell ha-test2
        mesg: ttyname failed: No such device

        root@ha-test2:~# ls -l /dev/ | grep tty
            crw-rw-rw- 1 nobody nogroup   5,   0 Aug 18 02:11 tty
            crw-rw---- 1 root   root    188,   0 Aug 18 02:25 ttyUSB0

        root@ha-test2:~# apt install gpsd-clients   // Get the gpsmon application
        root@ha-test2:~# gpsmon /dev/ttyUSB0


Making it permanent

It is permanent already. The 'lxc config' command edits the container's config, which persists across reboots.


Cleaning up


There are two options for cleanup of the container.
  • You can simply throw it away (it's a container)
  • Alternately,
     root@ha-test2:~# apt autoremove gpsd-clients

On the Server:

    me@server:~$ lxc config device remove ha-test2 gps
    me@server:~$ sudo apt autoremove gpsd-clients    // If you installed gpsmon to test connectivity

Also remember to detach USBIP, and uninstall usbip packages.
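
A sketch of that detach step on the client side (the port number comes from the 'usbip port' output; 00 here is just an example):

    me@server:~$ sudo usbip port               // List attached USBIP devices and their port numbers
    me@server:~$ sudo usbip detach -p 00       // Detach the device imported on port 00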