
Sunday, August 16, 2020

Installing Home Assistant Core in an LXD Container (Part 2)

Last time, we built a basic LXD container, and then built HomeAssistant inside.

This time, we're going to add a few more elements.

  • We're going to do all the steps on the Host instead of diving inside the container. So we're going to use lxc exec and lxc file push. The goal is to make spinning up a new container scriptable.
  • We're going to start/stop the HomeAssistant application using a systemd service
  • We're going to keep the data and config outside the container and use an lxd disk device to mount the data. Even if we destroy the container, the data and config survive to be mounted another day.

Preparing LXD

We're going to skip LXD initialization in this example. There's one addition from last time: We're going to add shiftfs, which permits us to chown mounted data. The macvlan profile and shiftfs enablement are persistent -- if you already have them, you don't need to redo them. All of these commands occur on the Host (we have not created the container yet!)

   # Create a macvlan profile, so the container will get its IP address from
   # the router instead of the host. This works on ethernet, but often not on wifi 
   ip route show default 0.0.0.0/0
   lxc profile copy default lanprofile
   lxc profile device set lanprofile eth0 nictype macvlan
   lxc profile device set lanprofile eth0 parent enp3s5

   # Test that macvlan networking is set up
   lxc profile show lanprofile
     config: {}
     description: Default LXD profile  // Copied. Not really the default
     devices:
       eth0:                           // Name, not real device
         nictype: macvlan              // Correct network type
         parent: enp3s5                // Correct real device
         type: nic

   # Enable shiftfs in LXD so data mounts work properly
   sudo snap set lxd shiftfs.enable=true
   sudo systemctl reload snap.lxd.daemon

   # Test that shiftfs is enabled:
   lxc info | grep shiftfs
    shiftfs: "true"

Create the Container and Initial Configuration

If LXD is already set up, then start here. We will mount the external data location, set the timezone and do all that apt setup. But this time, we will do all the commands on the Host instead of inside the container. We will also create the sources.list file on the host and push it into the container.

   # Create the container named "ha"
   lxc launch -p lanprofile ubuntu:focal ha

   # Mount the existing HomeAssistant data directory
   # Skip on the first run, since there won't be anything to mount
   # Shiftfs is needed, else the mounted data is owned by nobody:nogroup
   # Chown is needed because shiftfs changes the owner to 'ubuntu'
   lxc config device add ha data_mount disk source=/somewhere/else/.homeassistant path=/root/ha_data
   lxc config device set ha data_mount shift=true
   lxc exec ha -- chown -R root:root /root

   # Set the timezone non-interactively
   lxc exec ha -- ln -fs /usr/share/zoneinfo/US/Central /etc/localtime
   lxc exec ha -- dpkg-reconfigure -f noninteractive tzdata

   # Reduce apt sources to Main and Universe only
   # Create the new sources.list file on the host in /tmp
   # Paste all of these lines at once into the Host terminal
   cat <<EOF > /tmp/container-sources.list
   deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
   deb http://security.ubuntu.com/ubuntu focal-security main universe
   EOF

   # Push the file into the container
   lxc file push /tmp/container-sources.list ha/etc/apt/sources.list

   # Apt removals and additions
   lxc exec ha -- apt autoremove openssh-server
   lxc exec ha -- apt update
   lxc exec ha -- apt upgrade
   lxc exec ha -- apt install python3-pip python3-venv

Create the Venv, Build HomeAssistant, and Test

This method is simpler than all that mucking around activating a venv and paying attention to your prompt. All these commands are issued on the Host. You don't need a container shell prompt.

   # Setup the homeassistant venv in a dir called 'ha_system'
   # We will use the root account since it's an unprivileged container.
   lxc exec ha -- python3 -m venv --system-site-packages /root/ha_system

   # Build and install HomeAssistant
   lxc exec ha -- /root/ha_system/bin/pip3 install homeassistant

   # Learn the container's IP address. Need this for the web browser. 
   lxc list | grep ha

   # Run HomeAssistant
   lxc exec ha -- /root/ha_system/bin/hass -c "/root/ha_data"

   # Use your browser to open the IP address:8123
   # HA takes a couple minutes to start up. Be patient.
   # Stop the server from within the Web UI or ^C to exit when done.

Start HomeAssistant at Boot (Container Startup)

The right way to do autostart is a systemd service file on the container. Like with the sources.list file, we will create it on the host, then push it into the container, then enable it. There's one optional ExecStartPre line - it will slow each startup slightly while it checks for and installs HomeAssistant updates.

   cat <<EOF > /tmp/container-homeassistant.service
   [Unit]
   Description=Home Assistant
   After=network-online.target

   [Service]
   Type=simple
   User=root
   PermissionsStartOnly=true
   ExecStartPre=/root/ha_system/bin/pip3 install --upgrade homeassistant
   ExecStart=/root/ha_system/bin/hass -c "/root/ha_data"

   [Install]
   WantedBy=multi-user.target
   EOF

   # Push the .service file into the container, and enable it
   lxc file push /tmp/container-homeassistant.service ha/etc/systemd/system/homeassistant.service
   lxc exec ha -- systemctl --system daemon-reload
   lxc exec ha -- systemctl enable homeassistant.service
   lxc exec ha -- systemctl start homeassistant.service

Now we can test it. The last command should start HA. The same command with 'stop' should gracefully stop HA. Restarting the container should gracefully stop HA, and then restart it automatically. Your web browser UI should pick up each stop and start. You did it!
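
For example, a quick test cycle, all from the Host (plain systemctl and lxc commands; adjust the names if yours differ):

   # Check that the service came up
   lxc exec ha -- systemctl status homeassistant.service

   # Graceful stop and start
   lxc exec ha -- systemctl stop homeassistant.service
   lxc exec ha -- systemctl start homeassistant.service

   # Restarting the container should bring HA back automatically
   lxc restart ha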


Final Notes

Remember how you started without any HomeAssistant data to mount? Now that you have a running HA Core, you can save a set of data:

   lxc file pull ha/root/ha_data /somewhere/else/.homeassistant --recursive

And remember to clean up your mess when you are done:

   lxc stop ha
   lxc delete ha

Saturday, August 15, 2020

Installing Home Assistant Core in an LXD Container (Part 1)

I've been running HomeAssistant Core reliably in an LXD container for almost two years now, so it's probably time to start detailing how to do it.

This is a step-by-step example of how to do it for folks who aren't very familiar with LXD containers and their features.

Installing LXD (documentation)

If you haven't used LXD before, you need to install it (it's a Snap) and initialize it (tell it where the storage is located). The initialization defaults are sane, so you should not have problems.

   sudo snap install lxd
   sudo lxd init

Container Profile: Macvlan Networking (optional)

A macvlan profile is one easy way for the container to get its IP address from the router instead of the host. This means you can use a MAC Address filter to issue a permanent IP address. This works on ethernet, but often not on wifi. You only need to set up this profile ONCE, and it's easiest to do BEFORE creating the container. Since the container doesn't exist yet, all of these commands are done on the Host.

   # Get the real ethernet device (enp3s5 or some such)
   ip route show default 0.0.0.0/0

   # Make mistakes on a copy
   lxc profile copy default lanprofile

   # Change nictype field to macvlan
   #  'eth0' is a virtual device, not a real eth device
   lxc profile device set lanprofile eth0 nictype macvlan

   # Change parent field to real eth interface
   lxc profile device set lanprofile eth0 parent enp3s5

Create the Container

Create a new container named 'ha'. This command is done on the Host.

   # Create the container named "ha"
   lxc launch -p lanprofile ubuntu:focal ha

   # Learn the container's IP address. Need this for the web browser. 
   lxc list | grep ha

   # Get a root shell prompt inside the container
   lxc shell ha

Initial Setup in the Container

Let's set up the timezone and apt sources. These commands are done at the Container root prompt.


   // This is one way to set the timezone
   dpkg-reconfigure tzdata

   // Reduce apt sources to Main and Universe only
   cat <<EOF > /etc/apt/sources.list
   deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
   deb http://security.ubuntu.com/ubuntu focal-security main universe
   EOF

   // Tweak: Remove openssh-server
   apt autoremove openssh-server

   // Populate the apt package database and bring the container packages up-to-date
   apt update
   apt upgrade
   
   // Install the python packages needed for HomeAssistant
   apt install python3-pip python3-venv

   # Setup the homeassistant venv in the root home dir (/root)
   # --system-site-packages allows the venv to use the many deb packages that are already
   #    installed as dependencies instead of downloading pip duplicates
   python3 -m venv --system-site-packages /root

Install and Run HomeAssistant

Now we move into a virtual environment inside the container, build HomeAssistant, and give it a first run. If you try to build or run HomeAssistant outside the venv, it will fail with cryptic errors.

   // Activate the installed venv. Notice how the prompt changes.
   root@ha:~# source bin/activate
   (root) root@ha:~#
   
   // Initial build of HomeAssistant. This takes a few minutes.
   (root) root@ha:~# python3 -m pip install homeassistant

   // Instead of first build, this is where you would upgrade
   (root) root@ha:~# python3 -m pip install --upgrade homeassistant

   // Initial run to set up and test.
   (root) root@ha:~# hass

   // After a minute or two, open the IP Address (port 8123). Example: http://192.168.1.18:8123
   // Use the Web UI to shut down the application. Or use CTRL+C.

   // Exit the venv
   (root) root@ha:~# deactivate

   // Exit the container and return to the Host shell.
   root@ha:~# exit
   Host:~$


There's a lot more to talk about in future posts:

  • The systemd service that starts HomeAssistant at container startup.
  • Creating an LXD disk device to keep the HomeAssistant data in. If I rebuild the container for some reason, I can simply connect it to the data.
  • Adding a USBIP client. The Z-Wave controller is elsewhere in the building, and USBIP lets me control it like it's attached to the host. That also means adding a USB device to the container.
  • Collecting Host heartbeat statistics for the HomeAssistant dashboard, and pushing those into the container regularly.
  • Backing up and restoring HomeAssistant data and configurations.

Friday, August 14, 2020

LXD Containers on a Home Server

LXD Containers are very handy, and I use them for quite a few services on my home hobby & fun server. Here's how I set up my containers after a year of experimenting. Your mileage will vary, of course. You may have very different preferences than I do.

1. Networking:

I use macvlan networking. It's a simple, reliable, low-overhead way to pull an IP address from the network DHCP server (router). I set the IP address of many machines on my network at the router.

The container and the host server cannot communicate with each other using TCP/UDP. I don't mind that.

You only need to set up this profile once for all containers. Simply specify the profile when creating a new container.

   // 'Host:$' means the shell user prompt on the LXD host system. It's not a shell command

   // Learn the eth interface: enp3s5 in this example
   Host:$ ip route show default 0.0.0.0/0

   // Make mistakes on a copy
   Host:$ lxc profile copy default lanprofile

   // Change nictype field. 'eth0' is a virtual device, not a real eth device
   Host:$ lxc profile device set lanprofile eth0 nictype macvlan

   // Change parent field to real eth interface
   Host:$ lxc profile device set lanprofile eth0 parent enp3s5

   // Let's test the changes
   Host:$ lxc profile show lanprofile
     config: {}
     description: Default LXD profile  // This field is copied. Not really the default
     devices:
       eth0:                           // Virtual device
         nictype: macvlan              // Correct network type
         parent: enp3s5                // Correct real device
         type: nic
       root:
         path: /
         pool: containers-disk         // Your pool will be different, of course
         type: disk
     name: lanprofile


2. Creating a Container

Create a new container called 'newcon':

   Host:$ lxc launch -p lanprofile ubuntu:focal newcon
      // 'Host:$'        - user (non-root) shell prompt on the LXD host
      // '-p lanprofile' - use the macvlan networking profile
      // 'focal'         - Ubuntu 20.04. Substitute any release you like


3. Set the Time Zone

The default time zone is UTC. Let's fix that. Here are two easy ways to set the timezone: (source)

   // Get a root prompt within the container for configuration
   // Then use the classic Debian interactive tool:
   Host:$ lxc shell newcon
   newcon:# dpkg-reconfigure tzdata

   // Alternately, here's a non-interactive way to do it entirely on the host
   Host:$ lxc exec newcon -- ln -fs /usr/share/zoneinfo/US/Central /etc/localtime
   Host:$ lxc exec newcon -- dpkg-reconfigure -f noninteractive tzdata


4. Remove SSH Server

We can access the container from the server at any time, so most containers don't need an SSH server. Here are two ways to remove it:

   // Inside the container
   newcon:# apt autoremove openssh-server 
   
   // Or from the Host
   Host:$ lxc exec newcon -- apt autoremove openssh-server


5. Limit Apt sources to what the container will actually use

Unlike setting the timezone properly, this is *important*. If you do this right, the container will update itself automatically for as long as the release of Ubuntu is supported (mark your calendar!). If you don't get this right, you will leave yourself an ongoing maintenance headache.

   // Limit the apt sources to (in this example) main from within the container
   newcon:# nano /etc/apt/sources.list
         // The final product should look similar to:
         deb http://archive.ubuntu.com/ubuntu focal main           
         deb http://archive.ubuntu.com/ubuntu focal-updates main           
         deb http://security.ubuntu.com/ubuntu focal-security main 

   // Alternately, *push* a new sources.list file from the host.
   // Create the new sources.list file on the host in /tmp
   cat <<EOF > /tmp/container-sources.list
   deb http://us.archive.ubuntu.com/ubuntu/ focal main
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main
   deb http://security.ubuntu.com/ubuntu focal-security main
   EOF
   
   // *Push* the file from host to container
   Host:$ lxc file push /tmp/container-sources.list newcon/etc/apt/sources.list


6. Install the Application

How you do this depends upon the application and how it's packaged.
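
For example, a deb-packaged application is just an apt install away, while a pip-packaged application (like HomeAssistant in the posts above) wants a venv first. A sketch, with placeholder package names:

   // A deb-packaged application, from the Host
   Host:$ lxc exec newcon -- apt install some-deb-packaged-app

   // A pip-packaged application in a venv (install python3-venv first)
   Host:$ lxc exec newcon -- python3 -m venv /root/app_venv
   Host:$ lxc exec newcon -- /root/app_venv/bin/pip3 install some-pip-packaged-app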



7. Update Unattended Upgrades

This is the secret sauce that keeps your container up-to-date. First, let's look at a cleaned-up version of the first 20-or-so lines of /etc/apt/apt.conf.d/50unattended-upgrades inside the container:

                    What it says                             What it means
           ------------------------------------------      -----------------------
   Unattended-Upgrade::Allowed-Origins {
           "${distro_id}:${distro_codename}";              Ubuntu:focal
           "${distro_id}:${distro_codename}-security";     Ubuntu:focal-security
   //      "${distro_id}:${distro_codename}-updates";      Ubuntu:focal-updates
   //      "${distro_id}:${distro_codename}-proposed";     Ubuntu:focal-proposed
   //      "${distro_id}:${distro_codename}-backports";    Ubuntu:focal-backports
   };

...why, those are just the normal repositories! -security is enabled (good), but -updates is disabled (bad). Let's fix that. Inside the container, that's just using an editor to remove the commenting ("//"). From the host, it's a substitution job for sed:

   Host:$ lxc exec newcon -- sed -i 's~//\(."${distro_id}:${distro_codename}-updates";\)~\1~' /etc/apt/apt.conf.d/50unattended-upgrades

Third-party sources need to be updated, too. This is usually easiest from within the container. See this post for how and where to update Unattended Upgrades with the third-party source information.



8. Mounting External Media

Some containers need disk access. A classic example is a media server that needs access to that hard drive full of disorganized music.

If the disk is available across the network instead of locally, then use plain old sshfs or samba to mount the network share in /etc/fstab.
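
A sketch of what those /etc/fstab entries on the Host might look like (the server name, share, and options here are placeholders, not from a real setup):

   # sshfs mount of a remote export (needs the sshfs package)
   me@fileserver:/export/media  /media  fuse.sshfs  defaults,_netdev,allow_other  0  0

   # samba/cifs mount of a remote share (needs cifs-utils)
   //fileserver/media  /media  cifs  credentials=/etc/samba/creds,_netdev  0  0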

If the disk is local, then first mount it on the Host. After it's mounted, attach an lxd disk device to the container. A disk device is an all-in-one service: It creates the mount point inside the container and does the mounting. It's persistent across reboots...as long as the disk is mounted on the host.

   // Mount disk on the host and test
   Host:$ sudo mount /dev/sda1 /media
   Host:$ ls /media
      books         movies       music

   // Create disk device called "media_mount" and test
   Host:$ lxc config device add newcon media_mount disk source=/media path=/Shared_Media
   Host:$ lxc exec newcon -- ls /Shared_Media
      books         movies       music

If the ownership of files on the disk is confused, and you get "permission denied" errors, then use shiftfs to shift the file ownership into the container's uid/gid range:

   Host:$ lxc exec newcon -- ls /Shared_Media/books
      permission denied

   // Enable shiftfs in LXD, reload the lxd daemon, and test
   Host:$ sudo snap set lxd shiftfs.enable=true
   Host:$ sudo systemctl reload snap.lxd.daemon
   Host:$ lxc info | grep shiftfs
    shiftfs: "true"

   // Add shiftfs to the disk device
   Host:$ lxc config device set newcon media_mount shift=true

   Host:$ lxc exec newcon -- ls /Shared_Media/books
      boring_books       exciting_books        comic_books        cookbooks

Friday, May 8, 2020

Testing Ubuntu Core with Containers on VirtualBox

I want to try out Ubuntu Core to see if it's appropriate for running a small server with a couple containers.

The current OS is Ubuntu Server 20.04...but I'm really not using most of the Server features. Those are in the LXD containers. So this is an experiment to see if Ubuntu Core can function as the server OS.

Prerequisites: If you are looking to try this, you should already be familiar (not expert) with:
  • Using SSH
  • Using the vi text editor (Ubuntu Core lacks nano)
  • Basic networking concepts like DHCP
  • Basic VM and Container concepts


Download Ubuntu Core:
  • Create Ubuntu SSO Account (if you don't have one already)
  • Create a SSH Key (if you don't have one already)
  • Import your SSH Public Key to Ubuntu SSO.
  • Download an Ubuntu core .img file from https://ubuntu.com/download/iot#core
  • Convert the Ubuntu Core .img to a Virtualbox .vdi:

         me@desktop:~$ VBoxManage convertdd ubuntu-core-18-amd64.img ubuntu-core.vdi


Set up a new machine in VirtualBox:
  • Install VirtualBox (if you haven't already):

         me@desktop:~$ sudo apt install virtualbox

  • In the Virtualbox Settings, File -> Virtual Media Manager. Add the ubuntu-core.vdi
  • Create a New Machine. Use an existing Hard Disk File --> ubuntu-core.vdi
  • Check the network settings. You want a network that you will be able to access. I chose bridged networking so I could play with the new system from different locations, and set up a static IP address on the router. ENABLE promiscuous mode, so containers can get IP addresses from the router; otherwise, VirtualBox will filter out the DHCP requests. (See the VBoxManage sketch after this list.)
  • OPTIONAL: Additional tweaks to enhance performance.
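
If you prefer the command line, the bridged networking and promiscuous mode can also be set with VBoxManage (the VM name and adapter names here are assumptions; the VM must be powered off):

     me@desktop:~$ VBoxManage modifyvm "ubuntu-core-vm" --nic1 bridged --bridgeadapter1 enp3s0
     me@desktop:~$ VBoxManage modifyvm "ubuntu-core-vm" --nicpromisc1 allow-all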


Take a snapshot of your current network neighborhood:
  • Use this to figure out Ubuntu Core's IP address later on:
     me@Desktop:~$ ip neigh
     192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
     192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
     192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
     192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
     192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
     fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY


Boot the image in VirtualBox:
  • The first boot of Ubuntu Core requires a screen and keyboard (one reason we're trying this in VirtualBox). Subsequent logins will be done by ssh.
  • Answer the couple setup questions.
  • Use your Ubuntu One login e-mail address.
  • The VM will reboot itself (perhaps more than once) when complete.
  • Note that you cannot log in to the VM's TTY. Ubuntu Core's default login is via ssh. Instead, the VM's TTY tells you the IP address to use for ssh.
  • Since we are using a VM, this is a convenient place to take an initial snapshot. If you make a mess of networking in the next step, you can revert the snapshot.


Let's do some initial configuration:
  • After the VM reboots, the Virtualbox screen only shows the IP address.

  • // SSH into the Ubuntu Core Guest
    me@desktop:~$ ssh my-Ubuntu-One-login-name@IP-address
     [...Welcome message and MOTD...]
    me@localhost:~$
    
    // The default name is "localhost"
    // Let's change that. Takes effect after reboot.
    me@localhost:~$ sudo hostnamectl set-hostname 'ubuntu-core-vm'
    
    // Set the timezone. Takes effect immediately.
    me@localhost:~$ sudo timedatectl set-timezone 'America/Chicago'
    
    // OPTIONAL: Create a TTY login
    // This can be handy if you have networking problems.
    me@localhost:~$ sudo passwd my-Ubuntu-One-login-name


Let's set up the network bridge so containers can draw their IP address from the router:

  • We use vi to edit the netplan configuration. When we apply the changes, the ssh connection will be severed so we must discover the new IP address to login again.

  • me@localhost:~$ sudo vi /writable/system-data/etc/netplan/00-snapd-config.yaml
    
         #// The following seven lines are the original file. Commented instead of deleted.
         # This is the network config written by 'console_conf'
         #network:
         #  ethernets:
         #    eth0:
         #      addresses: []
         #      dhcp4: true
         #  version: 2
    
         #// The following lines are the new config 
         network:
           version: 2
           renderer: networkd
    
           ethernets:
             eth0:
               dhcp4: no
               dhcp6: no
    
           bridges:
             # br0 is the name that containers use as the parent
             br0:
               interfaces:
                 # eth0 is the device name in 'ip addr'
                 - eth0
               dhcp4: yes
               dhcp6: yes
         #// End
         
    
    // After the file is ready, implement it:
    me@localhost:~$ sudo netplan generate
    me@localhost:~$ sudo netplan apply
    
    // If all goes well...your ssh session just terminated without warning.
    


Test our new network settings:
  • The Ubuntu Core VM window will NOT change the displayed IP address after the netplan change...but that IP won't work anymore.
  • If you happen to reboot (not necessary) you will see that the TTY window displays no IP address when bridged...unless you have created an optional TTY login.
  • Instead of rebooting, let's take another network snapshot and compare to earlier:

         me@Desktop:~$ ip neigh
          192.168.1.226 dev enp3s0 lladdr c6:12:89:22:56:e4 STALE   <---- NEW
          192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
          192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
          192.168.1.235 dev enp3s0 lladdr DELAY                     <---- NEW
         192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
         192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
         192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
         fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY

  • We have two new lines: .226 and .235. One of those was the old IP address, and one is the new. SSH into the new IP address, and you're back in.

    me@desktop:~$ ssh my-Ubuntu-One-user-name@192.168.1.226
    Welcome to Ubuntu Core 18 (GNU/Linux 4.15.0-99-generic x86_64)
     [...Welcome message and MOTD...]
    Last login: Thu May  7 16:11:38 2020 from 192.168.1.6
    me@localhost:~$

  • Let's take a closer look at our new, successful network settings.

    me@localhost:~$ ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    
    2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether c6:12:89:22:56:e4 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.226/24 brd 192.168.1.255 scope global dynamic br0
           valid_lft 9545sec preferred_lft 9545sec
        inet6 2683:4000:a450:1678:c412:89ff:fe22:56e4/64 scope global dynamic mngtmpaddr noprefixroute
           valid_lft 600sec preferred_lft 600sec
        inet6 fe80::c412:89ff:fe22:56e4/64 scope link
           valid_lft forever preferred_lft forever
    
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
        link/ether 08:00:27:fd:20:92 brd ff:ff:ff:ff:ff:ff
    
    // Note that ubuntu-core-vm now uses the br0 address, and lacks an eth0 address.
    // That's what we want.


Set up static IP addresses on the Router and then reboot to use the new IP address.
  • Remember, the whole point of bridged networking is for the router to issue all the IP addresses and avoid doing a lot of NATing and Port Forwarding.
  • So now is the time to login to the Router and have it issue a constant IP address to the Bridge MAC address (in this case c6:12:89:22:56:e4). After this, ubuntu-core-vm (the Ubuntu Core Guest VM) will always have a predictable IP address.
  • Use VirtualBox to ACPI shutdown the VM, then restart it headless. We're looking for two changes: The hostname and the login IP address.
  • Starting headless can be done two ways:

    1. GUI: Virtualbox Start button submenu
    2. me@Desktop:~$  VBoxHeadless --startvm name-of-vm

  • Success at rebooting headless and logging into the permanent IP address is a good point for another VM Snapshot. And maybe a sandwich. Well done!


Install LXD onto ubuntu-core-vm:
  • Install:

    me@ubuntu-core-vm:~$ snap install lxd
    lxd 4.0.1 from Canonical✓ installed
    me@ubuntu-core-vm:~$

  • Add myself to the `lxd` group so 'sudo' isn't necessary anymore. This SHOULD work, but doesn't due to a bug (discussion)

    host:~$ sudo adduser --extrausers me lxd     // Works on most Ubuntu; does NOT work on Ubuntu Core even with --extrausers
    host:~$ newgrp lxd                           // New group takes effect without logout/login

  • Instead, edit the groups file directly using vi:

    // Use vi to edit the file:
    me@ubuntu-core-vm:~$ sudo vi /var/lib/extrausers/group
    
         // Change the lxd line:
         lxd:x:999:               // Old Line
         lxd:x:999:my-login-name  // New Line
    
    
    // Apply the new group settings without logout
     me@ubuntu-core-vm:~$ newgrp lxd

Configure LXD:
  • LXD is easy to configure. We need to make three changes from the default settings since we already have a bridge (br0) set up that we want to use.

    me@ubuntu-core-vm:~$ lxd init
    
    Would you like to use LXD clustering? (yes/no) [default=no]:
    Do you want to configure a new storage pool? (yes/no) [default=yes]:
    Name of the new storage pool [default=default]:
    Name of the storage backend to use (dir, lvm, ceph, btrfs) [default=btrfs]:
    Create a new BTRFS pool? (yes/no) [default=yes]:
    Would you like to use an existing block device? (yes/no) [default=no]:
    Size in GB of the new loop device (1GB minimum) [default=15GB]:
    Would you like to connect to a MAAS server? (yes/no) [default=no]:
    Would you like to create a new local network bridge? (yes/no) [default=yes]: no    <------------------------- CHANGE
    Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes   <-- CHANGE
    Name of the existing bridge or host interface: br0     <----------------------------------------------------- CHANGE
    Would you like LXD to be available over the network? (yes/no) [default=no]:
    Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
    Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
    
    me@ubuntu-core-vm:~$
  • Next, we change the networking profile so containers use the bridge:

    // Open the default container profile in vi
    me@ubuntu-core-vm:~$ lxc profile edit default
    
         config: {}
         description: Default LXD profile
         devices:
           # Container eth0, not ubuntu-core-vm eth0
           eth0:
             name: eth0
             nictype: bridged
             # This is the ubuntu-core-vm br0, the real network connection
             parent: br0
             type: nic
           root:
             path: /
             pool: default
             type: disk
         name: default
         used_by: []
  • Add the Ubuntu-Minimal stream for cloud-images, so our test container is small:

     me@ubuntu-core-vm:~$ lxc remote add --protocol simplestreams ubuntu-minimal https://cloud-images.ubuntu.com/minimal/releases/

Create and start a Minimal container:
    me@ubuntu-core-vm:~$ lxc launch ubuntu-minimal:20.04 test1
    Creating test1
    Starting test1
    
    me@ubuntu-core-vm:~$ lxc list
    +-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
    | NAME  |  STATE  |         IPV4         |                      IPV6                     |   TYPE    | SNAPSHOTS |
    +-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
    | test1 | RUNNING | 192.168.1.248 (eth0) | 2603:6000:a540:1678:216:3eff:fef0:3a6f (eth0) | CONTAINER | 0         |
    +-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
    
    
    // Let's test outbound connectivity from the container
    me@ubuntu-core-vm:~$ lxc shell test1
    root@test1:~# apt update
    Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
     [...lots of successful server connections...]
    Get:26 http://archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [1280 B]
    Fetched 16.3 MB in 5s (3009 kB/s)
    Reading package lists... Done
    Building dependency tree...
    Reading state information... Done
    5 packages can be upgraded. Run 'apt list --upgradable' to see them.
    root@test1:~#

Wednesday, February 19, 2020

Pushing a file from Host into an LXD Container

One of the little (and deliberate) papercuts of using unprivileged LXD containers is that unless data flows in from a network connection, it likely has the wrong owner and permissions.

Here are two examples in the HomeAssistant container.

1. The HA container needs to talk to a USB dongle elsewhere in the building. It does so using USBIP, and I discussed how to make it work in this previous post.

2. I want the HA container to display some performance data about the host (uptime, RAM used, similar excitements). Of course, it's a container, so it simply cannot do that natively without a lot of jiggery-pokery to escape the container. Instead, a script collects the information and pushes it into the container every few minutes.

     $ sudo lxc file push /path/to/host/file.json container-name/path/to/container/

Easy enough, right?

Well, not quite. Home Assistant, when installed, creates a non-root user and puts all of its files in a subdirectory. Add another directory to keep things simple, and you get:

     /home/homeassistant/.homeassistant/external_files/

And, unfortunately, all those subdirectories are owned by a non-root user. So lxc cannot 'push' all the way into them (result: permission error).

    -rw-r--r-- 1   root root  154 Feb 19 15:34 file.json

The file can only be pushed to the wrong location, and it arrives with the wrong ownership.



Systemd to the rescue: Let's create a systemd job on the container that listens for a push, then fixes the location and the ownership.

The feature is called a systemd.path.

Like a systemd timer, it consists of two parts: a trigger (.path) and a service that gets triggered.

The .path file is very simple. Here's what I used for the trigger:

[Unit]
# /etc/systemd/system/server_status.path
Description=Listener for a new server status file

[Path]
PathModified=/home/homeassistant/.homeassistant/file.json

[Install]
WantedBy=multi-user.target

The service file is almost as simple. Here's what I used:

[Unit]
# /etc/systemd/system/server_status.service
Description=Move and CHOWN the server status file

[Service]
Type=oneshot
User=root
ExecStartPre=mv /home/homeassistant/.homeassistant/file.json /home/homeassistant/.homeassistant/external_files/
ExecStart=chown homeassistant:homeassistant /home/homeassistant/.homeassistant/external_files/file.json

[Install]
WantedBy=multi-user.target

Finally, enable and start the path (not the service):

sudo systemctl daemon-reload
sudo systemctl enable server_status.path
sudo systemctl start server_status.path
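
To check that the trigger works, push a file from the host and confirm that it lands in the right place with the right owner (paths as in the example above):

     $ sudo lxc file push /path/to/host/file.json container-name/home/homeassistant/.homeassistant/
     $ sudo lxc exec container-name -- ls -l /home/homeassistant/.homeassistant/external_files/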


Sunday, February 2, 2020

Advanced Unattended Upgrade (Ubuntu): Chrome and Plex examples

Updated Aug 26, 2020

This is a question that pops up occasionally in various support forums:

Why doesn't (Ubuntu) Unattended Upgrades work for all applications? How can I get it to work for my application?

Good question.

Here is what happens under the hood: The default settings for Unattended Upgrades are for only packages in the "-security" pocket of the Ubuntu repositories.

Not "-updates", not "-backports", not "-universe", not any third-party repositories, not any PPAs. Just "-security".

This is a deliberately conservative choice -- while the Ubuntu Security Team keeps its delta as small as possible, it's a historical fact that even small security patches have (unintentionally) introduced new bugs.



Here's how you can override that choice. 

Let's take a look at the top section of the file /etc/apt/apt.conf.d/50unattended-upgrades, and focus on the "Allowed-Origins" section. It's edited for clarity here:

Unattended-Upgrade::Allowed-Origins {
     "${distro_id}:${distro_codename}";
     "${distro_id}:${distro_codename}-security";
//   "${distro_id}:${distro_codename}-updates";
//   "${distro_id}:${distro_codename}-proposed";
//   "${distro_id}:${distro_codename}-backports";
};

There, you can see the various Ubuntu repo pockets.

You can also see that most of the options are commented out (the "//"). If you know how to use a basic text editor and sudo, you can safely change those settings. Warning: You can break your system quite horribly by enabling the wrong source. Enabling "-proposed" and other testing sources is a very bad idea.



How to add the -updates pocket of the Ubuntu Repos?

I've done this for years, BUT (this is important) I don't add lots of extra sources. Simply uncomment the line.

   "${distro_id}:${distro_codename}-updates";

That's all. When Unattended Upgrades runs next, it will load the new settings.

Bonus: Here's one way to do it using sed:

   sudo sed -i 's~//\(."${distro_id}:${distro_codename}-updates";\)~\1~' /etc/apt/apt.conf.d/50unattended-upgrades


How to add the -universe pocket of the Ubuntu Repos?

You can create a '-universe' line like the others, but it won't do anything. It's already handled by the "-updates" line.



How to add a generic new repository that's not in the Ubuntu Repos?

Add a line in the following format to the end of the section:

    //    "${distro_id}:${distro_codename}-backports";
    "origin:section"       <-------- Add this format
    };

The trick is finding out what the "origin" and "section" strings should be.

Step 1: Find the URL of the source that you want to add. It's located somewhere in /etc/apt/sources.list or /etc/apt/sources.list.d/* . It looks something like this...

    deb http://security.ubuntu.com/ubuntu eoan-security main restricted universe multiverse
      ...or...
    deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
      ...or...
    deb https://downloads.plex.tv/repo/deb/ public main

Step 2: Find the corresponding Release file in your system for the URL.

    http://security.ubuntu.com/ubuntu eoan-security
      ...becomes...
    /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease


    http://dl.google.com/linux/chrome/deb/ stable
      ...becomes...
    /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_InRelease


    https://downloads.plex.tv/repo/deb/ public
      ...becomes...
    /var/lib/apt/lists/downloads.plex.tv_repo_deb_dists_public_Release

Step 3: Use grep to find the "Origin" string.

    $ grep Origin /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease
    Origin: Ubuntu

    $ grep Origin /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_InRelease
    Origin: Google LLC

    $ grep Origin /var/lib/apt/lists/downloads.plex.tv_repo_deb_dists_public_Release
    Origin: Artifactory

Step 4: With the Origin string and Section (after the space in the URL), we have all the information we need:

    "Ubuntu:eoan-security"
       ...or...
    "Google LLC:stable"
       ...or...
    "Artifactory:public"

You're ready to add the appropriate string to the config file.

Bonus: Here's one way to isolate most of these using a shell script:

    package="google-chrome-stable"
    url=$(apt-cache policy $package | grep "500 http://")
    var_path=$(echo $url | sed 's~/~_~g' | \
           sed 's~500 http:__\([a-z0-9._]*\) \([a-z0-9]*\)_.*~/var/lib/apt/lists/\1_dists_\2_InRelease~')
    origin=$(grep "Origin:" "$var_path" | cut -d" " -f2-)   # -f2- keeps multi-word origins like "Google LLC"
    section=$(echo $url | sed 's~500 http://\([a-z0-9._/]*\) \([a-z0-9]*\)/.*~\2~')
    echo "$origin":"$section"

Step 5: Run Unattended Upgrades once, then check the log to make sure Unattended Upgrades accepted the change.

    $ sudo unattended-upgrade
    $ less /var/log/unattended-upgrades/unattended-upgrades.log   (sometimes sudo may be needed)

You are looking for a recent line like:

    2020-02-02 13:36:23,165 INFO Allowed origins are: o=Ubuntu,a=eoan, o=Ubuntu,a=eoan-security, o=UbuntuESM,a=eoan, o=UbuntuESM,a=eoan-security, o=UbuntuESM,a=eoan-security

Your new source and section should be listed.



Summary for folks who just want to know how to update Chrome (stable)

  1. Edit (using sudo and a text editor) the file /etc/apt/apt.conf.d/50unattended-upgrades
  2. In the section "Unattended-Upgrade::Allowed-Origins {", add the following line BEFORE the final "};"
    "Google LLC:stable"


Summary for folks who just want to know how to update Plex

  1. Edit (using sudo and a text editor) the file /etc/apt/apt.conf.d/50unattended-upgrades 
  2. In the section "Unattended-Upgrade::Allowed-Origins {", add the following line BEFORE the final "};"
    "Artifactory:public"

Saturday, August 17, 2019

USBIP into an LXD container

In a previous post, I used USBIP to forward GPS data from A to B. 'A' was a USB GPS dongle plugged into a Raspberry Pi (Raspbian). 'B' was my laptop.

Now let's take it another step. Let's move 'B' to an LXD container sitting on a headless Ubuntu 19.04 server. No other changes: Same GPS data, same use of USBIP. 'A' is the same USB GPS dongle, the same Raspberry Pi, and the same Raspbian.

Setting up usbip on the server ('B') is identical to setting it up on my laptop. Recall that this particular dongle creates a /dev/ttyUSB_X device upon insertion, and it's the same on the Pi, the Laptop, and the Server.

    me@server:~$ lsusb
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 006: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    me@server:~$ ls -l /dev/ttyUSB0
        crw-rw---- 1 root dialout 188, 0 Aug 17 21:13 /dev/ttyUSB0

LXD has a USB Hotplug feature that works for many, but not all, USB devices, connecting USB devices on the host to the container. Devices that create a custom entry in /dev (like /dev/ttyUSB_X) generally cannot use the USB Hotplug...but CAN instead use 'unix-char' forwarding, which (seems to be) NOT hotpluggable.

Here's that LXD magic at work. In this case, I'm using a container called 'ha-test2', and let's simply name the dongle 'gps'. Do this while the container is stopped, or restart the container afterward.

    me@server:~$ lxc config device add ha-test2 gps unix-char path=/dev/ttyUSB0
        Device gps added to ha-test2

Now we start the container, and then jump into a shell inside. We see that /dev/ttyUSB0 has indeed been forwarded. And we test to ensure data is flowing -- that we can read from /dev/ttyUSB0.

    me@server:~$ lxc start ha-test2
    me@server:~$ lxc shell ha-test2
        mesg: ttyname failed: No such device

        root@ha-test2:~# ls -l /dev/ | grep tty
            crw-rw-rw- 1 nobody nogroup   5,   0 Aug 18 02:11 tty
            crw-rw---- 1 root   root    188,   0 Aug 18 02:25 ttyUSB0

        root@ha-test2:~# apt install gpsd-clients   // Get the gpsmon application
        root@ha-test2:~# gpsmon /dev/ttyUSB0


Making it permanent

It is permanent already. The 'lxc config' command edits the config of the container, which persists across reboots.
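
You can confirm the device is recorded in the container's config (output abridged):

     me@server:~$ lxc config device show ha-test2
         gps:
           path: /dev/ttyUSB0
           type: unix-char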


Cleaning up


There are two options for cleanup of the container.
  • You can simply throw it away (it's a container)
  • Alternately,
     root@ha-test2:~# apt autoremove gpsd-clients

On the Server:

    me@server:~$ lxc config device remove ha-test2 gps
    me@server:~$ sudo apt autoremove gpsd-clients    // If you installed gpsmon to test connectivity

Also remember to detach USBIP and uninstall the usbip packages.
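
From the next post below, that cleanup on the Server looks roughly like this (the port number is whatever 'usbip port' reports; 00 is just an example):

     me@server:~$ sudo usbip port                            // Find the attached port number
     me@server:~$ sudo usbip detach --port 00                // Virtual unplug
     me@server:~$ sudo modprobe -r vhci-hcd                  // Remove the client kernel module
     me@server:~$ sudo apt autoremove linux-tools-generic    // If installed only for USBIP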

Monday, August 12, 2019

Experimenting with USB devices across the LAN with USBIP

USBIP is a Linux tool for accessing USB devices across a network. I'm trying it out.


At one end of the room, I have a Raspberry Pi with
  • A Philips USB Webcam
  • A no-name USB GPS dongle
  • A Nortek USB Z-Wave/Zigbee network controller dongle
At the other end of the room is my laptop.

Before starting anything, I plugged all three into another system to ensure that they worked properly.


Raspberry Pi Server Setup

The Pi is running stock Raspbian Buster, with the default "pi" user replaced by a new user ("me") with proper ssh keys.

Before we start, here's what the 'lsusb' looks like on the Pi

    me@pi:~ $ lsusb
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
        Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Now we plug in the three USB devices and see what changed

    me@pi:~ $ lsusb
        Bus 001 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc. 
        Bus 001 Device 005: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 001 Device 006: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
        Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And here are the new devices created or modified

    me@pi:~ $ ls -l /dev | grep 12    // 12 is today's date
        drwxr-xr-x 4 root root          80 Aug 12 00:46 serial
        lrwxrwxrwx 1 root root           7 Aug 12 00:46 serial0 -> ttyAMA0
        drwxr-xr-x 4 root root         220 Aug 12 00:47 snd
        crw--w---- 1 root tty     204,  64 Aug 12 00:46 ttyAMA0
        crw-rw---- 1 root dialout 188,   0 Aug 12 00:46 ttyUSB0
        drwxr-xr-x 4 root root          80 Aug 12 00:47 v4l
        crw-rw---- 1 root video    81,   3 Aug 12 00:47 video0

Looks like...
  • /dev/ttyAMA0 is the Nortek Z-Wave controller
  • /dev/ttyUSB0 is the GPS stick
  • /dev/video0 is the webcam

Installing USBIP onto Raspbian Buster is easy. However, it is DIFFERENT from stock Debian or Ubuntu. This step is Raspbian-only.

     me@pi:~$ sudo apt install usbip

Now load the kernel module. The SERVER always uses the module 'usbip_host'.

    me@pi:~$ sudo modprobe usbip_host     // does not persist across reboot

List the devices that usbip can see. Note each Bus ID - we'll need those later.

    me@pi:~ $ usbip list --local
 - busid 1-1.1 (0424:ec00)
   Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)

 - busid 1-1.2 (0471:0329)
   Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)

 - busid 1-1.4 (067b:2303)
   Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)

 - busid 1-1.5 (10c4:8a2a)
   Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)

  • We can ignore the Ethernet adapter
  • The Webcam is at 1-1.2
  • The GPS dongle is at 1-1.4
  • The Z-Wave Controller is at 1-1.5

Bind the devices.

    me@pi:~$ sudo usbip bind --busid=1-1.2        // does not persist across reboot
        usbip: info: bind device on busid 1-1.2: complete

    me@pi:~$ sudo usbip bind --busid=1-1.4        // does not persist across reboot
        usbip: info: bind device on busid 1-1.4: complete

    me@pi:~$ sudo usbip bind --busid=1-1.5        // does not persist across reboot
        usbip: info: bind device on busid 1-1.5: complete

The USB devices will now appear to any client on the network just as though they were plugged in locally.

If you want to STOP serving a USB device:

    me@pi:~$ sudo usbip unbind --busid=1-1.2

The server (usbipd) process may or may not actually be running, serving on port 3240. Let's check:

     me@pi:~ $ ps -e | grep usbipd
        18966 ?        00:00:00 usbipd

     me@pi:~ $ sudo netstat -tulpn | grep 3240
        tcp        0      0 0.0.0.0:3240            0.0.0.0:*               LISTEN      18966/usbipd        
        tcp6       0      0 :::3240                 :::*                    LISTEN      18966/usbipd

We know that usbipd is active and listening. If not, start usbipd with:

     me@pi:~ $ sudo usbipd -D

You can run it more than once; only one daemon will start. The usbipd server does NOT need to be running to bind/unbind USB devices - you can start the server and bind/unbind in any order you wish. If you need to debug a connection, omit the -D (daemonize; fork into the background) so you can see the debug messages. See 'man usbipd' for the startup options to change the port, IPv4, IPv6, etc.
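
For example (options from 'man usbipd'; the alternate port number is arbitrary):

     me@pi:~ $ sudo usbipd -D --tcp-port 3241    // Daemonize, serve on a non-default port
     me@pi:~ $ sudo usbipd --debug               // Stay in the foreground with debug messages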


Laptop Client Setup

Let's look at the USB devices on my laptop before starting:

    me@laptop:~$ lsusb
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

In stock Debian (not Raspbian) and Ubuntu, usbip is NOT a separate package. It's included in the 'linux-tools-generic' package, which many folks already have installed...

    me@laptop:~$ apt list linux-tools-generic
        Listing... Done
        linux-tools-generic/disco-updates 5.0.0.23.24 amd64   // Doesn't say "[installed]"

...but apparently I don't. Let's install it.

    me@laptop:~$ sudo apt install linux-tools-generic

Now load the kernel module. The CLIENT always uses the kernel module 'vhci-hcd'.

    me@laptop:~$ sudo modprobe vhci-hcd     // does not persist across reboot

List the available USB devices on the Pi server (IP addr aa.bb.cc.dd). Those Bus IDs should look familiar.

    me@laptop:~$ usbip list -r aa.bb.cc.dd                        // List available on the IP address
        usbip: error: failed to open /usr/share/hwdata//usb.ids   // Ignore this error
        Exportable USB devices
        ======================
         - aa.bb.cc.dd
              1-1.5: unknown vendor : unknown product (10c4:8a2a)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5
                   : (Defined at Interface level) (00/00/00)
                   :  0 - unknown class / unknown subclass / unknown protocol (ff/00/00)
                   :  1 - unknown class / unknown subclass / unknown protocol (ff/00/00)


              1-1.4: unknown vendor : unknown product (067b:2303)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4
                   : (Defined at Interface level) (00/00/00)

              1-1.2: unknown vendor : unknown product (0471:0329)
                   : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2
                   : (Defined at Interface level) (00/00/00)

Now we attach the three USB devices. This will not persist across a reboot.

    me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.2
     me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.4
     me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.5
    // No feedback upon success

The remote USB devices now show in 'lsusb'

    me@laptop:~$ lsusb
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc. 
        Bus 003 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
        Bus 003 Device 002: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And we can see that new devices have appeared in /dev. Based upon the order we attached, it's likely that
  • The webcam 1-1.2 is at /dev/video2
  • The GPS dongle 1-1.4 is probably at /dev/ttyUSB0
  • The Z-Wave controller 1-1.5 is at /dev/ttyUSB1
  • The same dongle includes a Zigbee controller, too, at /dev/ttyUSB2
The Z-Wave/Zigbee controller has had its major number changed from 204 to 188. We don't know if that's important or not yet.

    me@laptop:~$ ls -l /dev | grep 12
        drwxr-xr-x  4 root root            80 Aug 12 00:56 serial
        crw-rw----  1 root dialout 188,     0 Aug 12 00:56 ttyUSB0
        crw-rw----  1 root dialout 188,     1 Aug 12 00:56 ttyUSB1
        crw-rw----  1 root dialout 188,     2 Aug 12 00:56 ttyUSB2
        crw-rw----+ 1 root video    81,     2 Aug 12 00:56 video2


Testing Results

I tested the GPS using the 'gpsmon' application, included with the 'gpsd-clients' package. We don't actually need gpsd; we can connect gpsmon directly to the remote USB device.

    me@laptop:~$ gpsmon /dev/ttyUSB0
        gpsmon:ERROR: SER: device open of /dev/ttyUSB0 failed: Permission denied - retrying read-only
        gpsmon:ERROR: SER: read-only device open of /dev/ttyUSB0 failed: Permission denied

Aha, a permission issue, not a usbip failure!
Add myself to the 'dialout' group, and then it works. A second test across a VPN connection, from a remote location, was also successful.

    me@laptop:~$ ls -la /dev/ttyUSB0
        crw-rw---- 1 root dialout 188, 0 Aug 11 21:41 /dev/ttyUSB0    // 'dialout' group

    me@laptop:~$ sudo adduser me dialout
        Adding user `me' to group `dialout' ...
        Adding user me to group dialout
        Done.

    me@laptop:~$ newgrp dialout    // Prevents need to logout/login for new group to take effect

    me@laptop:~$ gpsmon /dev/ttyUSB0
    // Success!

The webcam is immediately recognized in both Cheese and VLC, and plays across the LAN instantly. There is a noticeable half-second lag. A second test, across a VPN connection from a remote location, had the USB device recognized, but not enough data arrived in time for the applications to show the video.

There were a few hiccups along the way. The --debug flag helps a lot to track down the problems (see the example after this list):
  • Client failed to connect with "system error" - turns out usbipd was not running on the server.
  • Client could see the list, but failed to attach with "attach failed" - needed to reboot the server (not sure why)
  • An active usbip connection prevents my laptop from sleeping properly
  • The Z-Wave controller requires HomeAssistant or equivalent to run, a bit more than I want to install onto the testing laptop. Likely to have permission issues, too.
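
For example, re-running a failing attach with debugging turned on:

     me@laptop:~$ sudo usbip --debug attach --remote=aa.bb.cc.dd --busid=1-1.2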


Cleaning up

To tell a CLIENT to cease using a remote USB device (virtual unplug), you need to know the usbip port number. Well, not really: none of the attachments persist across a reboot, so we could simply reboot instead.

    me@laptop:~$ usbip port   // Not using sudo - errors, but still port numbers
        Imported USB devices
        ====================
        libusbip: error: fopen
        libusbip: error: read_record
        Port 00:  at Full Speed(12Mbps)
               Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
               5-1 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/007
        libusbip: error: fopen
        libusbip: error: read_record
        Port 01:  at Full Speed(12Mbps)
               Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
               5-2 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/005
        libusbip: error: fopen
        libusbip: error: read_record
        Port 02:  at Full Speed(12Mbps)
               Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
               5-3 -> unknown host, remote port and remote busid
                   -> remote bus/dev 001/006

    me@laptop:~$ sudo usbip port    // Using sudo, no errors and same port numbers
        Imported USB devices
        ====================
        Port 00: <port in use> at Full Speed(12Mbps)
               Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
               5-1 -> usbip://aa.bb.cc.dd:3240/1-1.2
                   -> remote bus/dev 001/007
        Port 01: <port in use> at Full Speed(12Mbps)
               Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
               5-2 -> usbip://aa.bb.cc.dd:3240/1-1.4
                   -> remote bus/dev 001/005
        Port 02: <port in use> at Full Speed(12Mbps)
               Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
               5-3 -> usbip://aa.bb.cc.dd:3240/1-1.5
                   -> remote bus/dev 001/006
 
    me@laptop:~$ sudo usbip detach --port 00
        usbip: info: Port 0 is now detached!

    me@laptop:~$ sudo usbip detach --port 01
        usbip: info: Port 1 is now detached!

    me@laptop:~$ sudo usbip detach --port 02
        usbip: info: Port 2 is now detached!

    me@laptop:~$ lsusb              // The remote USB devices are gone now
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd 
        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    me@laptop:~$ sudo modprobe -r vhci-hcd    // Remove the kernel module

The only two persistent changes we made on the CLIENT were adding myself to the 'dialout' group and installing the 'linux-tools-generic' package, so let's remove them. If you ALREADY were in the 'dialout' group, or had the package installed for other reasons, then obviously don't remove them. It's not the system's responsibility to keep track of why you have certain permissions or packages -- that's the human's job. After this step, my CLIENT is back to stock Ubuntu.

    me@laptop:~$ sudo deluser me dialout                  // Takes effect after logout
    me@laptop:~$ sudo apt autoremove linux-tools-generic  // Immediate

Telling a SERVER to stop sharing a USB device (virtual unplug) and shutting down the server is much easier. Of course, this is also a Pi, and we didn't make any changes permanent, so it might be easier to simply reboot it.

    me@pi:~$ usbip list -l
         - busid 1-1.1 (0424:ec00)
           Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)

         - busid 1-1.2 (0471:0329)
           Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)

         - busid 1-1.4 (067b:2303)
           Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)

         - busid 1-1.5 (10c4:8a2a)
           Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)

    me@pi:~$ sudo usbip unbind --busid=1-1.2
        usbip: info: unbind device on busid 1-1.2: complete
    me@pi:~$ sudo usbip unbind --busid=1-1.4
        usbip: info: unbind device on busid 1-1.4: complete
    me@pi:~$ sudo usbip unbind --busid=1-1.5
        usbip: info: unbind device on busid 1-1.5: complete

    me@pi:~$ sudo pkill usbipd

The only persistent change we made on the Pi was installing the 'usbip' package. Once that's removed, we're back to stock Raspbian.

    me@pi:~$ sudo apt autoremove usbip


Making it permanent

There are two additional steps to make a permanent server, and essentially the same two steps to make a permanent client. 'Permanent' means a USBIP server that begins serving automatically upon boot, and a client that automatically connects to the server upon boot.

Add the USBIP kernel modules to /etc/modules so they will be loaded automatically at boot. To undo this on a client or server, delete the line from /etc/modules. You don't need to use 'nano' - use any text editor you wish, obviously.

    me@pi:~$ sudo nano /etc/modules     // usbipd SERVER

        usbip_host

    me@laptop:~$ sudo nano /etc/modules     // usbip CLIENT

        vhci-hcd

    // Another way to add the USBIP kernel modules to /etc/modules on the SERVER
    me@pi:~$ sudo -s                            // "sudo echo" won't work
    me@pi:~# echo 'usbip_host' >> /etc/modules
    me@pi:~# exit

    // Another way to add the USBIP kernel modules to /etc/modules on the CLIENT
    me@laptop:~$ sudo -s                            // "sudo echo" won't work
    me@laptop:~# echo 'vhci-hcd' >> /etc/modules
    me@laptop:~# exit

Add a systemd job to the SERVER to automatically bind the USB devices. You can use systemd to start, stop, and restart the server conveniently, and to begin serving automatically at startup. Note: The unit below binds the single device with USB ID 10c4:8a2a; substitute your own device's ID in both places it appears.

    me@pi:~$ sudo nano /lib/systemd/system/usbipd.service

        [Unit]
        Description=usbip host daemon
        After=network.target

        [Service]
        Type=forking
        ExecStart=/usr/sbin/usbipd -D
        # 'usbip list -p -l' emits one parseable line per device, like
        # 'busid=1-1.5#usbid=10c4:8a2a#'. Grep selects our device, cut keeps
        # 'busid=1-1.5', so the command expands to 'usbip bind --busid=1-1.5'.
        ExecStartPost=/bin/sh -c "/usr/sbin/usbip bind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"
        # Unbind on stop - the mirror image of the bind above.
        # (Detaching ports is a client operation; a server unbinds.)
        ExecStop=/bin/sh -c "/usr/sbin/usbip unbind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"

        [Install]
        WantedBy=multi-user.target

To start the new SERVER:
    me@pi:~$ sudo pkill usbipd                          // End the current server daemon (if any)
    me@pi:~$ sudo systemctl --system daemon-reload      // Reload system jobs because one changed
    me@pi:~$ sudo systemctl enable usbipd.service       // Set to run at startup
    me@pi:~$ sudo systemctl start usbipd.service        // Run now

Add a systemd job to the CLIENT to automatically attach the remote USB devices at startup. You can use systemd to unplug conveniently before sleeping, and to reset the connection if needed. Note: On the "ExecStart" line, substitute your server's IP address for aa.bb.cc.dd in two places.

    me@laptop:~$ sudo nano /lib/systemd/system/usbip.service

        [Unit]
        Description=usbip client
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Find the busid for device 10c4:8a2a in the server's export list,
        # then attach it. Substitute your own device's USB ID.
        ExecStart=/bin/sh -c "/usr/bin/usbip attach -r aa.bb.cc.dd -b $(/usr/bin/usbip list -r aa.bb.cc.dd | grep '10c4:8a2a' | cut -d: -f1)"
        # Detach the imported port before stopping
        ExecStop=/bin/sh -c "/usr/bin/usbip detach --port=$(/usr/bin/usbip port | grep '<port in use>' | sed -E 's/^Port ([0-9][0-9]).*/\1/')"

        [Install]
        WantedBy=multi-user.target

To start the new CLIENT attachment(s):

    me@laptop:~$ sudo systemctl --system daemon-reload      // Reload system jobs because one changed
    me@laptop:~$ sudo systemctl enable usbip.service       // Set to run at startup
    me@laptop:~$ sudo systemctl start usbip.service        // Run now

Wednesday, October 7, 2015

CAC on Firefox using Ubuntu 15.04

After a couple of years away from CAC on Linux, it's time to revisit how to install a DOD CAC reader for Firefox under Ubuntu 15.04.

Very good instructions are on the Ubuntu Help pages. This guide clarifies a few vague elements, and reorganizes the information to help you troubleshoot.

There are five simple steps:
  • Get an appropriate card reader
  • Install the card reader software (pcscd)
  • Test the card, reader, and software
  • Install cackey
  • Install the DOD certs and point Firefox to the card reader

The Firefox extension requires cackey, cackey requires pcscd, and pcscd requires hardware to detect. We will follow best practice for Debian/Ubuntu and install the dependencies first, in the right order.


Get A Card Reader

There's nothing to add here. The Ubuntu Help page says it all.



Install Card Reader Software


sudo apt-get install pcscd pcsc-tools

The key software you need is the pcsc daemon (pcscd) and its libpcsclite1 dependency. pcsc-tools is handy for testing the connection in the next step.



Test the card reader and software


Insert your CAC card and run:

pcsc_scan

As shown on the Ubuntu Help page, pcsc_scan will clearly show you whether your card reader and card are detected.



Install cackey

The cackey library provides access to the cryptographic and certificate functions of the CAC card.

1) You need to know whether your Ubuntu system is a 32-bit or 64-bit install. Don't trust a sticker or what you remember - checking takes but a moment:

uname -i

If the result is 'i386' or similar, you are running a 32-bit system. Look for a download labeled 'i386'.
If the result is 'x86_64' or similar, you are running a 64-bit system. Look for a download labeled 'amd64'.

2) There are two places to download the latest cackey package from:
https://software.forge.mil/sf/projects/community_cac (CAC required)
http://cackey.rkeene.org/fossil/home (non-CAC)

3) Download the latest cackey .deb package. Be sure to choose between 32/64 bit properly - the wrong package will happily install...but won't work.

4) Bug workaround for 64-bit only: Cackey tries to install to the /usr/lib64 directory, which probably doesn't exist on your system. Simply create it. This bug does not affect 32-bit users, who can safely ignore this entire paragraph.

5) Finally, install the downloaded cackey deb using the 'dpkg --install' command.


Example:
1) I'm running a 64-bit system.
3) I downloaded cackey_0.7.5-1_amd64.deb to my Downloads directory.
Then I installed the deb using:

sudo mkdir /usr/lib64        ## Step 4 - 64-bit bug workaround
sudo dpkg --install ~/Downloads/cackey_0.7.5-1_amd64.deb    ## Step 5



Install DOD Certificates and Point Firefox to the Card Reader

Happily, forge.mil has a Firefox add-on that does all this for you!

1) Simply download the latest 'dod_configuration-X.X.X.xpi' file from http://www.forge.mil/Resources-Firefox.html (non-CAC).

2) Quit Firefox

3) Double-click on the dod_configuration-X.X.X.xpi file you downloaded (it might be in your Downloads directory). Firefox will start and offer to install the add-on. Go ahead and install it.




Testing

Try your favorite CAC website (like AKO or OWA) and see if the site works, and if the site communicates properly with your card.

Be sure your USB card reader is snugly inserted, of course.

Start (or restart) Firefox after your CAC reader and card are inserted and recognized by the system. 

Thursday, September 3, 2015

The best DebConf 15 videos

I simply cannot take time off work to attend DebConf, so each year I watch the videos instead. It took almost a month, thanks to the back-to-school rush at work, but I finally got through the sessions I wanted to see.

Here are my highlights from DebConf 15:

Cool Stuff


Creating A More Inviting Environment For Newcomers: New Experiences From MoM, SoB, Teammetrics - A detailed discussion of how a mature team with tapering contributions re-energized itself with new enthusiasts: how they were recruited, mentored, trained, and finally assigned key roles in the team. Lots of discussion of mentoring strategies and the costs of mentoring (less time for the work) from the developer/maintainer perspective. Lots of good ideas for any mature team, and thoroughly applicable to Ubuntu teams too.

Linux in the City of Munich AKA LiMux - There has been a lot of FUD written about one of the largest public conversions to an open-source platform, and it was great to see an actual insider talking about the project. Worth a watch.

Lightning Talks 2 - The first Lightning Talk was a proposal to add a new service to Debian. The service tests all uploaded packages for many known faults (using valgrind, infer, etc.), and automatically files bug reports on the faults. This should provide a large number of real, bite-sized bugs for drive-by patches, and a corresponding hefty improvement in code quality. Most cool.


Under the hood


Your Systemd Tool Box - Dissecting And Debugging Boot And Services - This is a great walk-through of the new (to me) tools. I had a terminal window open alongside to try each of the tools. I saved the video for a refresher; it's a lot to digest in one sitting.

Systemd: How We Survived Jessie And How We Will Break Stretch - Fantastic discussion of coming systemd features: Persistent interface names, networkd, kdbus, and more. Also great discussion of how to get involved around the edges.

Dpkg: The Interface - A presentation by the current maintainer, explaining how he keeps dpkg stable and the future roadmap. Since Snappy uses dpkg (but not apt), that roadmap is important! I have used dpkg for a decade, but never thought about all the bits of it I never see....


Keeping Free Software Free


Debian's Central Role In The Future Of Software Freedom - A presentation by the President of the Software Freedom Conservancy (SFC), explaining the problems they see, their strategies to attack those problems, and how they try to effectively challenge GPL violations. A bit of Canonical-bashing in this one at a couple of points (some deserved, some not).

At 23:30, it introduces the Debian Copyright Aggregation Project, where Debian contributors can opt to revocably assign their copyright to SFC, and can also permit the SFC to enforce those copyrights. This is one strategy SFC is pursuing to fight both CLAs and license violations.




Tuesday, January 13, 2015

Introducing Ubuntu Find-A-Task



The Ubuntu Community website has an awesome new service: Find-A-Task

It's a referral service - it helps volunteers discover teams and tasks that match their interests.

  • Link to it!
  • Refer new enthusiasts toward it!
  • Advertise your teams and projects on it!

Give it a try and see how it can work for your team or project.


How do I get my team listed?


So easy and so fast.
  1. What volunteer role do you want to advertise?
  2. What's a very short, exciting description of the role?
  3. Which Find-A-Task paths do you think this role is appropriate for? 
  4. Create a great landing page on the wiki. (example)
  5. Drop by #ubuntu-community-team and let us know. For example:
      • Role: Frobishers
      • Description: "Help Frobnicators add fabulous Frob!"
      • Path: One, in the Coding and Development submenu
      • Landing URL: http://wiki.ubuntu.com/FrobTeam/Frobishers

    Your landing page:


    This is a volunteer's first impression of your team. Make it shine.

    When volunteers show up at your wiki page, they are already interested. They want to know how to set up, who to contact, and how to get started on their first easy work item. They want instructions and details.

    If you don't provide what they want, they may move on to their next choice. Find-A-Task makes it easy for them to move on.



    Credits


    Tremendous thanks to:

    Tuesday, July 8, 2014

    Simple geolocation in Ubuntu 14.04

    Geolocation means 'figuring out where a spot on the Earth is'.

    Usually, it's the even more limited question 'where am I?'


    GeoClue


    The default install of Ubuntu includes GeoClue, a dbus service that checks IP address and GPS data. Since 2012, when I last looked at GeoClue, it's changed a bit, and it has more backends available in the Ubuntu Repositories.

     

    Privacy

    Some commenters on the interwebs have claimed that GeoClue is privacy-intrusive. It's not. It merely tries to figure out your location, which can be handy for various services on your system. It doesn't share or send your location to anybody else.

    dbus introspection and d-feet

    You would expect that a dbus application like GeoClue would be visible using a dbus introspection tool like d-feet (provided by the d-feet package).

    But there's a small twist: D-feet can only see dbus applications that are running - applications that are currently active, or daemons (which stay resident even when idle).

    It's possible (and indeed preferable in many circumstances) to write a dbus application that is not a daemon - it starts at first connection, terminates when complete, and restarts at the next connection. D-feet cannot see these when they are not running.

    Back in 2012, GeoClue was an always-on daemon, and always visible to d-feet.
    But in 2014 GeoClue is (properly) no longer a daemon, and d-feet won't see GeoClue if it's not active.
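
    You can check this from code, too. Here's a minimal Python 3 sketch (assuming the python3-dbus package is installed) that asks the session bus which Geoclue names are running right now, versus merely startable on demand:

    >>> import dbus
    >>> bus = dbus.SessionBus()
    >>> # Currently on the bus -- what d-feet can see right now
    >>> [n for n in bus.list_names() if 'Geoclue' in n]
    >>> # Known to dbus and startable on demand, even when not running
    >>> [n for n in bus.list_activatable_names() if 'Geoclue' in n]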

    This simply means we must trigger a connection to GeoClue to make it visible.
    Below are two ways to do so: The geoclue-test-gui application, and a Python3 example.




    geoclue-test-gui


    One easy way to see GeoClue in action, and to make it visible to d-feet, is to use the geoclue-test-gui application (included in the geoclue-examples package).

    $ sudo apt-get install geoclue-examples
    $ geoclue-test-gui





    GeoClue Python3 example


    Once GeoClue is visible in d-feet (look in the 'session' tab), you can see the interfaces and try them out.

    Here's an example of the GetAddress() and GetLocation() methods using Python3:

    >>> import dbus
    
    >>> dest           = "org.freedesktop.Geoclue.Master"
    >>> path           = "/org/freedesktop/Geoclue/Master/client0"
    >>> addr_interface = "org.freedesktop.Geoclue.Address"
    >>> posn_interface = "org.freedesktop.Geoclue.Position"
    
    >>> bus        = dbus.SessionBus()
    >>> obj        = bus.get_object(dest, path)
    >>> addr_iface = dbus.Interface(obj, addr_interface)
    >>> posn_iface = dbus.Interface(obj, posn_interface)
    
    >>> addr_iface.GetAddress()
    (dbus.Int32(1404823176),          # Timestamp
     dbus.Dictionary({
         dbus.String('locality')   : dbus.String('Milwaukee'),
         dbus.String('country')    : dbus.String('United States'),
         dbus.String('countrycode'): dbus.String('US'),
         dbus.String('region')     : dbus.String('Wisconsin'), 
         dbus.String('timezone')   : dbus.String('America/Chicago')}, 
         signature=dbus.Signature('ss')),
     dbus.Struct(                 # Accuracy
         (dbus.Int32(3),
          dbus.Double(0.0),
          dbus.Double(0.0)),
          signature=None)
    )
    
    >>> posn_iface.GetPosition()
    (dbus.Int32(3),               # Num of fields
     dbus.Int32(1404823176),      # Timestamp
     dbus.Double(43.0389),        # Latitude
     dbus.Double(-87.9065),       # Longitude
     dbus.Double(0.0),            # Altitude
     dbus.Struct((dbus.Int32(3),  # Accuracy
                  dbus.Double(0.0),
                  dbus.Double(0.0)),
                  signature=None))
    
    >>> addr_dict = addr_iface.GetAddress()[1]
    >>> str(addr_dict['locality'])
    'Milwaukee'
    
    >>> posn_iface.GetPosition()[2]
    dbus.Double(43.0389)
    >>> posn_iface.GetPosition()[3]
    dbus.Double(-87.9065)
    >>> lat = float(posn_iface.GetPosition()[2])
    >>> lon = float(posn_iface.GetPosition()[3])
    >>> lat,lon
    (43.0389, -87.9065)

    Note: Geoclue's accuracy codes



    Ubuntu GeoIP Service


    When you run geoclue-test-gui, you discover that only one backend service is installed with the default install of Ubuntu - the Ubuntu GeoIP service.

    The Ubuntu GeoIP service is provided by the geoclue-ubuntu-geoip package, and is included with the default install of Ubuntu 14.04. It simply queries an ubuntu.com server, and parses the XML response.

    You can do it yourself, too:

    $ wget -q -O - http://geoip.ubuntu.com/lookup
    
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Ip>76.142.123.22</Ip>
      <Status>OK</Status>
      <CountryCode>US</CountryCode>
      <CountryCode3>USA</CountryCode3>
      <CountryName>United States</CountryName>
      <RegionCode>WI</RegionCode>
      <RegionName>Wisconsin</RegionName>
      <City>Milwaukee</City>
      <ZipPostalCode></ZipPostalCode>
      <Latitude>43.0389</Latitude>
      <Longitude>-87.9065</Longitude>
      <AreaCode>414</AreaCode>
      <TimeZone>America/Chicago</TimeZone>
    </Response>
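
    If you'd rather do the same from code, here's a minimal Python 3 sketch using only the standard library (the field names come from the XML response above):

    #!/usr/bin/python3
    import urllib.request
    import xml.etree.ElementTree as ET

    # Query the same server the Ubuntu GeoIP backend uses
    with urllib.request.urlopen('http://geoip.ubuntu.com/lookup') as response:
        root = ET.fromstring(response.read())

    # Pick a few fields out of the <Response> element
    for field in ('City', 'RegionName', 'CountryName', 'Latitude', 'Longitude'):
        print(field, ':', root.findtext(field))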




    GeoIP


    The default install of Ubuntu 14.04 also includes (the confusingly-named) GeoIP. While it has the prefix 'Geo', it's not a geolocator, and it's completely unrelated to the Ubuntu GeoIP service. Instead, GeoIP is a database of the IP addresses assigned to each country, provided by the geoip-database package. Knowing the country of origin of a packet or server or connection can be handy.

    geoip-database has many bindings, including Python 2.7 (but sadly not Python 3). The easiest interface is the command line, provided by the additional geoip-bin package.

    $ sudo apt-get install geoip-bin
    $ geoiplookup 76.45.203.45
    GeoIP Country Edition: US, United States




    GeocodeGlib


    Back in 2012, I compared the two methods of geolocation in Ubuntu: GeoClue and GeocodeGlib. GeocodeGlib was originally intended as a smaller, easier-to-maintain replacement for GeoClue. But as we have already seen, GeoClue has thrived instead of withering. The only two packages that seem to require GeocodeGlib in 14.04 are gnome-core-devel and gnome-clocks.
    GeocodeGlib, provided by the libgeocode-glib0 package, is no longer included with a default Ubuntu installation, but it is easily available in the Software Center.

    sudo apt-get install gir1.2-geocodeglib-1.0


    That is the GObject introspection package for GeocodeGlib, and it pulls in libgeocode-glib0 as a dependency. The introspection package is necessary.

    Useful documentation and code examples are non-existent. My Python code sample from 2012 no longer works. It's easy to create a GeocodeGlib.Place() object, and to assign various values to it (town name, postal code, state), but I can't figure out how to get GeocodeGlib to automatically determine and fill in the other properties. So even though it seems maintained, I'm not recommending it as a useful geolocation service.
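
    For what it's worth, the part that does work is straightforward. Here's a minimal Python 3 sketch of building a Place by hand - assuming the gir1.2-geocodeglib-1.0 API exposes Place.new() and per-property setters as introspected methods (the place values are hypothetical):

    #!/usr/bin/python3
    import gi
    gi.require_version('GeocodeGlib', '1.0')
    from gi.repository import GeocodeGlib

    # Create a Place object and assign values to it by hand
    place = GeocodeGlib.Place.new('Milwaukee', GeocodeGlib.PlaceType.TOWN)
    place.set_state('Wisconsin')
    place.set_postal_code('53202')
    print(place.get_name(), place.get_state(), place.get_postal_code())

    The missing piece is getting GeocodeGlib to look up and fill in the remaining properties automatically.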

    Monday, December 2, 2013

    upstart-socket-bridge

    Upstart-socket-bridge is a lot like xinetd. Both replace the need for some always-on daemons by monitoring a port, and then launching the desired application when an inbound connection is detected. U-s-b is part of a family of Upstart bridges that replace many daemon monitoring and listening functions and hooks.

    Unlike xinetd, services need to be customized (patched) to run with upstart-socket-bridge.

    Documentation is quite sparse. Hence this blog post. That's not intended to criticize; it's really hard to write "good" documentation when you don't know the use cases or the experience level of the user. If you have experience writing sockets in C, and understand what a file descriptor is and how to use one, then the documentation is just fine. I didn't before I began this odyssey.




    How do I make it work?

    Here are three simple examples of how it works.
    One uses shell script.
    One uses Python3.
    One uses C.



    Hello, World! with shell script


    This first example is a script that gets triggered by activity on the port. The port is just a trigger; no data gets exchanged on it.

    1) Let's create a shell script called test-script. This script merely prints out the Upstart-related environment variables into a file.

    #!/bin/sh
    outfile=/tmp/outfile
    date > $outfile            # Timestamp
    printenv | grep UPSTART >> $outfile
    exit 0


    2)  Create an Upstart .conf, let's call it /etc/init/socket-test.conf

    description "upstart-socket-bridge test"
    start on socket PROTO=inet PORT=34567 ADDR=127.0.0.1  # Port 34567
    setuid exampleuser                                    # Run as exampleuser, not root
    exec /bin/sh /tmp/test-script                         # Launch the service


    3)  Let's run it. Connect to the port using netcat.

    $ nc localhost 34567
    ^C       # End the process using CTRL+C


    4)  Look at the result. Hey, look, it's all the environment variables we need!

    $ cat /tmp/outfile


    5)  Clean up:

    $ sudo rm /etc/init/socket-test.conf          # Disconnect the launch trigger
    $ rm /tmp/test-script /tmp/outfile            # Delete the test script and its output





    "Hello, World" service in Python 3

    (UPDATED: Thanks to Dmitrijs Ledkovs for getting this to work!)

    It's a simple echo server - the Python version of the C service below. It requires two files: the application and the Upstart .conf. It demonstrates how a service uses the port connection both as a trigger and for exchanging data.


    1) Let's create the Python 3 file. Let's call it test-service.py

    #!/usr/bin/python3
    import os, socket
    
    # Create the socket file descriptor from the env var
    sock_fd = socket.fromfd(int(os.environ["UPSTART_FDS"]),
                            socket.AF_INET, socket.SOCK_STREAM)
    
    # Accept the connection, create a connection file descriptor
    conn, addr = sock_fd.accept()
    
    # Read
    message = conn.recv(1024).decode('UTF-8')
    
    # Manipulate data
    reply = ("I got your message: " + message)
    
    # Write
    conn.send(reply.encode('UTF-8'))
    
    # Finish
    conn.close()



    2)  Create an Upstart .conf, let's call it /etc/init/socket-test.conf

    description "upstart-socket-bridge test"
    start on socket PROTO=inet PORT=34567 ADDR=127.0.0.1  # Port 34567
    setuid exampleuser                                    # Run as exampleuser, not root
    exec /usr/bin/python3 /tmp/test-service.py            # Launch the service


    3) Let's run it. Connect to the port using netcat, and then type in a string.

    $ nc localhost 34567
    Hello, World!                       # You type this in. Server read()s it.
    I got your message: Hello, World!   # Server response.  Server write()s it.


    4) Cleanup is simple. Simply delete the two files.

    $ sudo rm /etc/init/socket-test.conf         # Disconnect the bridge
    $ rm /tmp/test-service.py                    # Delete the test service







    "Hello, World!" service in C


    It's a simple echo server - the C version of the Python service above. It requires two files: the application and the Upstart .conf. It demonstrates how a service uses the port connection both as a trigger and for exchanging data.

    1)  Let's create a C file. Let's call it test-service.c

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>        /* read(), write(), close() */
    #include <netinet/in.h>
    #include <sys/socket.h>    /* accept() */
    
    int main()
    {
        /* Read the UPSTART_FDS env var to get the socket fd */
        char *name = "UPSTART_FDS";
        char *env = getenv (name);       // Read the environment variable
        int sock_fd = atoi(env);         // Socket file descriptor
    
        /* Don't need to do any of these normal socket tasks! Hooray!
        / int port_num;           
        / int sock_fd = socket(AF_INET, SOCK_STREAM, 0);  
        / memset((char *) &serv_addr, 0, sizeof(serv_addr));
        / serv_addr.sin_family = AF_INET;
        / serv_addr.sin_addr.s_addr = INADDR_ANY;
        / serv_addr.sin_port = htons(port_num);
        / struct sockaddr_in serv_addr
        / bind(sock_fd, (struct sockaddr *) &serv_addr, sizeof(serv_addr));
        / listen(sock_fd, 5)                                                 
        */
    
        /* Accept() the connection. Returns the second fd: 'conn_fd' */
        struct sockaddr_in cli_addr;   // Requires netinet/in.h
        socklen_t clilen = sizeof(cli_addr);   // accept() expects a socklen_t
        int conn_fd = accept(sock_fd, (struct sockaddr *) &cli_addr, &clilen);
    
        /* Service is active. Read-from and write-to the connection fd */
        char response[276] = "I got your message: ";
        char buffer[256];
        memset((char *) &buffer, 0, sizeof(buffer));  
        read(conn_fd, buffer, 256);                   // Read from conn_fd
        strcat(response, buffer);                     
        write(conn_fd, response, strlen(response));   // Write to conn_fd
    
        /* Close the connection fd. Socket fd can be reused */
        close(conn_fd);
        return 0;
    }

    2)  Compile it using gcc, and output the compiled application as an executable called test-service. I put mine in /tmp to make cleanup easier. If you're familiar with gcc, the important element is that there are no flags and no libraries:

    gcc -o /tmp/test-service test-service.c


    3)  Create an Upstart .conf, let's call it /etc/init/socket-test.conf

    description "upstart-socket-bridge test"
    start on socket PROTO=inet PORT=34567 ADDR=127.0.0.1  # Port 34567
    setuid exampleuser                                    # Run as exampleuser, not root
    exec /tmp/test-service                                # Launch the service


    4) Let's run it. Connect to the port using netcat, and then type in a string.

    $ nc localhost 34567
    Hello, World!                       # You type this in. Server read()s it.
    I got your message: Hello, World!   # Server response.  Server write()s it.


    5) Cleanup is simple. Simply delete the three files.

    $ sudo rm /etc/init/socket-test.conf         # Disconnect the bridge
    $ rm /tmp/test-service.c /tmp/test-service   # Delete the test service



    How does it work?

    Here is the oversimplified explanation. Each stream of data whizzing round inside your system is tracked by the kernel. That tracking, sort of like an index or a pointer, is called a file descriptor (fd). A few fds are reserved (0=stdin, 1=stdout, 2=stderr) and you run into these in shell scripting or cron jobs.

    A pipe or port or socket is just a way to tell the kernel that a stream of data output from Application A should be input to Application B. Let's look at it another way, and add that fd definition: An fd identifies a data stream output from A and input to B. The pipe/socket/port is a way to express how you want the fd set up.

    Now the gritty stuff: A socket/port can have a single server and multiple clients attached. The server bind()s the port, listen()s on the port for connections, and accept()s each connection from a client. Each connection to a client gets issued its own file descriptor.

    That's two file descriptors: The first defines the port/socket in general, and the second defines the specific connection between one client and the server.
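
    To see those two file descriptors concretely, here's a minimal plain-Python 3 sketch (no Upstart involved; the port number is just the one from the examples above):

    #!/usr/bin/python3
    import socket

    # First fd: the port/socket in general (with Upstart, it holds this one for you)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('127.0.0.1', 34567))
    sock.listen(1)
    print('listening socket fd:', sock.fileno())

    # Second fd: created by accept() for one specific client connection
    conn, addr = sock.accept()     # blocks until, e.g., 'nc localhost 34567'
    print('connection fd:', conn.fileno(), 'from', addr)
    conn.close()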

    Upstart tells your server application (since it's not actually serving, let's just call it the "service") the first file descriptor.
    • Your service doesn't start a daemon at boot.
    • Your service doesn't bind() to the socket/port. Upstart does that.
    • Your service doesn't listen() on the socket/port. Upstart does that.
    • Your service doesn't fork() for each connection. Upstart launches an instance of your service for each connection. Your service can terminate when the connection ends...if you want it to.
    • Your service does accept() the connection from a client, communicate using the resulting file descriptor, and end when the connection close()s.

    Let's try it with the example above:
    1. Upstart and Service are running on Server. Client is running somewhere else - maybe it's also running on Server, maybe it's out on the network somewhere.
    2. The file /etc/init/socket-test.conf tells Upstart to monitor port #34567 on behalf of test-service application. As currently written, it will begin monitoring at boot and stop monitoring at shutdown.
    3. When Client --like netcat-- connect()s to port #34567, Upstart launches test-service application with a couple extra environment variables.
    4. test-service reads the environment variables, including the file descriptor (fd).
    5. test-service accept()s the connection on the file descriptor. This creates a second fd that Service can use to communicate.
    6. When Client and test-service are done communicating, they close() the connection fd.
    7. test-service can end. Upstart will restart it next time an inbound connection is received.




    How do I make it work with a service I didn't write? (like my favorite Game Server or Media Server or Backup Server)

    Maybe it will work, maybe it won't. There are a couple issues to consider. I don't see an easy, non-coding solution because we're talking about changing the nature of these services.
    • Change from always-on to sometimes-on.
    • Change to (save and) quit when the connection is done instead of listen()ing for another connection. I don't see any upstart trigger for a socket connection ending.
    • Might make some service code simpler. No longer need to fork() or bind().
    • Not portable to non-Upstart systems, so the daemon code remains. Adds a bit to code complexity and a new testing case.
    • A different trigger (hardware, file change, etc) might be a better trigger than a connection to a socket. Upstart has bridges for those, too.