tag:blogger.com,1999:blog-27030604150276079892024-03-01T19:30:13.143-06:00Ian's TechBlogI document my adventures and problems so that I remember my mistakes, and perhaps you may learn something.<br>I'm not a programmer, nor a computer expert.<br>I'm just a tinkering guy in Milwaukee with a store and three kids to keep me busy.Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.comBlogger199125tag:blogger.com,1999:blog-2703060415027607989.post-79602511493884267512022-11-13T11:59:00.005-06:002022-11-13T11:59:54.194-06:00The easiest way yet to house a remote Z-wave controller<p>I have HomeAssistant running on a server. But the Z-wave controller (a USB dongle) needs to be centrally located in the building.</p>
<p>I used a Raspberry Pi 3 at that central location, and it has an ethernet cable running through the wall to the server. So the network is reliable.</p>
<p>In the past, I've run Raspbian on the Pi. For a couple years I ran USBIP. Then I ran a Docker container of zwave2js. But both suffered from the same problem: every month or so I needed to remember to log in to the Pi and perform maintenance. The Docker container in particular would get stale and break the connection to HomeAssistant. <br /></p><p>So we're trying something new:</p>
<ul style="text-align: left;">
<li>Replacing Raspbian with Ubuntu Core, which will update automatically.</li>
<li>Replacing the docker container with a Snap package, which will also update automatically.</li>
</ul>
<p>This turned out to be much easier than I expected:</p>
<ol style="text-align: left;">
<li><a href="https://ubuntu.com/tutorials/how-to-install-ubuntu-core-on-raspberry-pi" target="_blank">Install Ubuntu Core on a Pi</a></li>
<li>Install and configure the <a href="https://snapcraft.io/zwave-js-ui" target="_blank">Zwave-JS-UI snap</a></li>
</ol>
<p>Installing the snap was literally this easy:</p>
<pre>
sudo snap install zwave-js-ui
sudo snap start zwave-js-ui
</pre>
<p>And then open a web browser to port 3000 on the Pi. All configuration is done through the web UI. And HomeAssistant picked up the data immediately.</p>Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-85825631944003502272022-11-13T11:39:00.002-06:002022-11-13T11:40:41.368-06:00Bridging LXD Containers in Ubuntu Core<p>I'm setting up a set of server containers on an Ubuntu Core 22 base.</p>
<p>This is slightly different from deb-based Ubuntu in several ways.</p>
<p>The hardware is a salvaged laptop motherboard, without keyboard or monitor.</p>
<span><a name='more'></a></span>
<p>Installing Ubuntu Core on most amd64 hardware:</p>
<p>Preparation on an existing system</p>
<ol>
<li>If you lack an ssh private and public key pair, <a href="https://help.ubuntu.com/community/SSH/OpenSSH/Keys">create one using the ssh-keygen command</a> on an existing system. If you already have your own private key, you can reuse it...or not. That's your choice.</li>
<li>If you lack an <a href="https://login.ubuntu.com/" target="_blank">Ubuntu SSO account</a>, create one.</li>
<li><a href="https://login.ubuntu.com/ssh-keys" target="_blank">Add your public key</a> to that Ubuntu SSO account.<br /></li>
<li>If you lack a bootable Ubuntu Desktop (not Core) LiveUSB, <a href="https://ubuntu.com/tutorials/create-a-usb-stick-on-ubuntu" target="_blank">create one</a>.</li>
</ol>
<p>Installation of the new system:</p>
<ol start="5">
<li>A monitor and keyboard and network connection are required. Attach them to the hardware.</li>
<li>Boot the Ubuntu (or any distro) LiveUSB. Select the Live ("Try Ubuntu") environment. Don't install anything.</li>
<li><a href="https://ubuntu.com/core/docs/install-nuc" target="_blank">Download and install Ubuntu Core</a> from within that live environment. Unplug the LiveUSB and reboot into Ubuntu Core.</li>
<li>Once you have successfully connected to Ubuntu Core via ssh, the LiveUSB, monitor, and keyboard are no longer required.</li>
<li>Do NOT install LXD yet. Set up the bridge on Ubuntu Core first.</li><li>Finally, after the bridge is working, set up LXD.</li>
</ol>
<p>Detail on Step #9 (Set up the bridge on Ubuntu Core)</p>
<ul style="text-align: left;">
<li>On the stock install of Ubuntu Core, the netplan config is NOT in /etc. Instead, look in /writable/system-data/etc/netplan/</li>
<li>Also note that in Ubuntu Core, more commands require sudo than you may be used to.</li>
</ul>
<p>Here's the stock Netplan YAML from the setup. Your interfaces (enp3s0, wlp2s0) are likely to vary. Since this was a laptop motherboard, it included wi-fi:</p>
<pre>$ sudo cat /writable/system-data/etc/netplan/*
# This is the network config written by 'console-conf'
network:
  ethernets:
    enp3s0:
      dhcp4: true
  version: 2
  wifis:
    wlp2s0:
      access-points:
        My-open-wireless-SSID: {}
      dhcp4: true
</pre>
<p>We want to add a bridge to this YAML. Use the vi command to edit, since no other editor is included with Ubuntu Core. (<a href="https://www.tutorialspoint.com/unix/unix-vi-editor.htm" target="_blank">vi tutorial</a>) When complete, it should look more like this:</p>
<pre>$ sudo cat /writable/system-data/etc/netplan/*
# This is the network config written by 'console-conf'
network:
  ethernets:
    enp3s0:
      <mark>dhcp4: false</mark>        <--- The original interface no longer gets the IP address
  version: 2
  wifis:
    wlp2s0:
      access-points:
        My-open-wireless-SSID: {}
      dhcp4: true
  <mark>bridges:</mark>                <--- New section for the bridge
    <mark>br0:</mark>                  <--- Bridge interface name. Use this instead of the original enp3s0 interface name
      <mark>dhcp4: true</mark>         <--- The bridge interface gets the IP address
      <mark>interfaces: [ enp3s0 ]</mark>
</pre>
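<p>If you'd rather not hand-edit in vi, the same bridged config can be generated with a here-document. A minimal sketch: it writes to a scratch file (on Ubuntu Core the real target is the file under /writable/system-data/etc/netplan/), omits the wifi stanza, and uses <code>enp3s0</code> only as an example interface name:</p>

```shell
# Sketch: write the bridged netplan config with a here-document instead of vi.
# CONF is a scratch path here; on Ubuntu Core the real file lives under
# /writable/system-data/etc/netplan/ and editing it requires sudo.
CONF=$(mktemp)
cat <<'EOF' > "$CONF"
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: false
  bridges:
    br0:
      dhcp4: true
      interfaces: [ enp3s0 ]
EOF
cat "$CONF"
```

The quoted <code>'EOF'</code> delimiter keeps the shell from touching anything inside the document, so the YAML lands on disk byte-for-byte.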
<p>Here's the first trap and how to avoid it: When these changes are applied, the new bridge will instantly get control of the network interface.</p>
<p>Unfortunately, that new bridge will have a different MAC address, and thus will receive a different IP address. Your existing ssh session over ethernet will be severed and unrecoverable. Ubuntu Core WON'T tell you the new IP address.</p>
<p>You must have access to your dhcp server (or another SSH connection over an unchanged interface...like wireless) to learn the bridge's new IP address. Set that up now.</p>
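<p>One way to spot the bridge afterward is by its MAC address, which is what your DHCP server's lease table shows. <code>ip link show br0</code> prints it; the sketch below pulls it out of a captured sample line with awk (the sample MAC is made up, since this machine may not have a br0):</p>

```shell
# Sketch: extract a bridge's MAC address so you can find its lease on the
# DHCP server. Parsed from a captured 'ip link show br0' sample line.
sample='5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff'
mac=$(printf '%s\n' "$sample" | awk '/link\/ether/ {print $2}')
echo "$mac"
```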
<p>Then run <code>sudo netplan generate</code> (check for errors) and <code>sudo netplan apply</code> (execute the changes).</p>
<p>Log in again to Ubuntu Core using the new IP address.</p>
<p>Detail on Step #10 (Install LXD)</p>
<pre>sudo snap install lxd
sudo lxd init</pre>
<p>Here's the second trap and how to avoid it: During setup (<code>lxd init</code>) LXD offers to create a bridge (lxdbr0). That default bridge is <i>internal</i>; it has no external network access. Let it create the bridge, but specify the existing bridge (br0) that you created in Step 9. There's no difference between the two bridges, except that br0 happens to have network access already.</p>
<pre>Would you like to create a new local network bridge? (yes/no) [default=yes]: <mark>yes</mark>
What should the new bridge be called? [default=lxdbr0]: <mark>br0</mark>
</pre>
<p>Let's take a look at our LXD settings now:</p>
<pre>$ sudo lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
</pre>
<p>And now our soon-to-be-created containers will all have bridged network access. Each container will get its own IP address from the real LAN router. Each container will be accessible from anywhere on the LAN.</p>
<p>Easy!</p>Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-73894992116413445322021-10-07T16:29:00.002-05:002021-10-07T16:29:15.019-05:00Installing Ubuntu Core onto 64-bit Bare Metal<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuk987IroGVgQPdvjergW-ggKWyw7m85Hdg-Qku_Yfcx4nHIJbLxSSctEDdBPGuEzzgR45iT1NJ7WG_LK1tp385QJgVur40L1Ni16ye00xUeA7K8gCQKwGtEgU7pB9p80EaW_RNDzaMTyB/s2048/IMG_20211007_112319360.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1536" data-original-width="2048" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuk987IroGVgQPdvjergW-ggKWyw7m85Hdg-Qku_Yfcx4nHIJbLxSSctEDdBPGuEzzgR45iT1NJ7WG_LK1tp385QJgVur40L1Ni16ye00xUeA7K8gCQKwGtEgU7pB9p80EaW_RNDzaMTyB/w200-h150/IMG_20211007_112319360.jpg" width="200" /></a></div>
<p>I have a re-purposed AMD64 laptop motherboard, ready to become an experimental Ubuntu Core server.</p>
<p>It's in fine condition. You can see that it boots an Ubuntu LiveUSB's "Try Ubuntu" environment just fine. Attached to the motherboard is a new 60GB SSD for testing. The real server will use a 1TB HDD.</p>
<p>But Ubuntu Core doesn't install on bare metal from a Live USB. It's still easy, though.</p>
<p>1. Boot a "Try Ubuntu" Environment on the target system.</p>
<ul style="text-align: left;">
<li>Test your network connection. The picture shows a wireless connection. This particular laptop has a wireless chip that is recognized out-of-the box, so I didn't need to get out the long network cable.</li>
<li>Test that your storage device works. You can see in the picture that Gnome Disks can see the storage device.</li>
</ul>
<p>2. Terminal: <code>sudo fdisk -l</code>. Locate the storage device that you want to install Ubuntu Core onto.</p>
<ul style="text-align: left;">
<li>The entire storage device will be erased.</li>
<li>My storage device is at /dev/sda today. It might be different next boot. Yours might be different.</li>
</ul>
<p>3. Open the web browser and download Ubuntu Core.</p>
<ul style="text-align: left;">
<li>Ubuntu Core 20 (stable) is at <a href="https://cdimage.ubuntu.com/ubuntu-core/20/stable/current/" target="_blank">https://cdimage.ubuntu.com/ubuntu-core/20/stable/current/</a></li><li>My file was called ubuntu-core-20-amd64.img.xz. The download is a .img.xz file, not a .iso file</li><li>Your browser downloads to your Downloads directory, of course.</li>
</ul>
<p>4. Write Ubuntu Core to the storage device.</p>
<ul>
<li><b>Warning</b>: This command will erase your entire storage device. If there is anything valuable on your storage device, then you have skipped too many steps!<br />
<pre>xzcat Downloads/<.img.xz file> | sudo dd of=/dev/<target_storage_device> bs=32M status=progress; sync</pre></li>
<li>So mine was<br /><pre>xzcat Downloads/ubuntu-core-20-amd64.img.xz | sudo dd of=/dev/sda bs=32M status=progress; sync</pre></li>
<li>Source: <a href="https://ubuntu.com/download/intel-nuc" target="_blank">https://ubuntu.com/download/intel-nuc</a></li>
</ul>
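<p>The <code>xzcat | dd</code> pipeline can be rehearsed safely before pointing <code>dd</code> at a real disk. A sketch, with a small random scratch file standing in for the downloaded image and a second scratch file standing in for /dev/sda:</p>

```shell
# Sketch: rehearse the xzcat | dd pipeline against scratch files.
# The random file stands in for the raw Ubuntu Core image; the output
# file stands in for the target storage device.
src=$(mktemp)
out=$(mktemp)
head -c 65536 /dev/urandom > "$src"   # stand-in for the uncompressed image
xz -c "$src" > "$src.xz"              # stand-in for ubuntu-core-20-amd64.img.xz
xzcat "$src.xz" | dd of="$out" bs=32M status=none
cmp "$src" "$out" && echo "images match"
```

Only after a dry run like this, swap in the real .img.xz file and the real device node.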
<p>5. Reboot into Ubuntu Core.</p>
<ul style="text-align: left;">
<li>When prompted by the "Try Ubuntu" environment, remove the LiveUSB so you are booting from your newly-written storage device.</li>
<li>Be patient. My first boot into Ubuntu Core led to a black screen for nearly a minute before the system acknowledged that it had been working the entire time.</li>
<li>After 3-4 minutes of non-interactive setup alternating between blank screens and scrolling setup output, Ubuntu Core finally asked me two questions: Which network to connect to, and my <a href="https://login.ubuntu.com/" target="_blank">Ubuntu SSO</a> e-mail address.</li>
<li>Finally, the system rebooted again. This time it didn't ask any questions; it just displayed the new Ubuntu Core system's IP address.</li>
</ul>
<p>6. Log into Ubuntu Core.</p>
<ul>
On my Desktop:<br />
<pre>me@Desktop:~$ ssh me@192.168.1.x
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-77-generic x86_64)</pre></ul>
Success: A working Ubuntu Core on bare metal.Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com1tag:blogger.com,1999:blog-2703060415027607989.post-27653775528013093922020-11-25T06:46:00.000-06:002020-11-25T06:46:57.682-06:00Basic SNMP for HomeAssistant<p>I have a lovely OKI MB480 printer. It's been reliable for 10 years. And I want to display its status in HomeAssistant.</p>
Like this...
<center><img border="0" data-original-height="121" data-original-width="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTqoOSxjMvFPWCV9Fnm5HgtHUkEBx0XdBJwIcIWrtUrBEeXQxdyHfM6G1ZAyshRavQEg7VJ0nd9tj9irm7xv8IGv8nuHZ0VhIvv_gREvlsa_O42QCsbCNGEa1aAPLQwhxKZYhmdaEt3akv/s0/Screenshot+from+2020-11-25+05-11-36.png" /></center><br /><br />
<p>The printer speaks SNMP and Home Assistant has an SNMP Sensor, so let's learn some SNMP and find a way to make the two talk to each other.</p>
<p>SNMP keeps its overhead low by not transmitting a lot of information. What is transmitted is compressed by encoding. Here's an example:</p>
<pre>$ snmpget -v 1 -c public 10.10.10.3 .1.3.6.1.4.1.2001.1.1.1.1.2.20.0
iso.3.6.1.4.1.2001.1.1.1.1.2.20.0 = STRING: "Ready To Print/Power Save"
</pre>
<p>Each of those numbers has meaning, so you need to know exactly what to ask for. Also, there is a client-server (manager-agent) arrangement to figure out (and install), three different versions of SNMP, and finally migrating a successful query into a Home Assistant format.</p>
<br />
<hr />
<h2>How to ask SNMP a question</h2>
<p>The printer has a built in SNMP <i>agent</i> (server). Let's install an SNMP <i>manager</i> (client) on my laptop.</p>
<pre>$ sudo apt install snmp</pre>
<p>Now we can make two simple queries: walk (return a whole tree) and get (return one item). The tree may be quite lengthy -- on this printer, it's 1900 lines.</p>
<pre>$ snmpwalk 10.10.10.3
snmpwalk: No securityName specified</pre>
<p>Oops, we are missing two more elements:</p>
<ul>
<li>A <i>version</i> number. We're going to stick with version 1, the easiest.</li>
<li>A <i>community</i> name. This is somewhat like a username; it defines access. Communities get replaced by real usernames and passwords in version 3. The most common community name is "public".</li>
</ul>
<p>These are defined by the remote agent (server). For example, the printer supports v1 and v3, but not v2.</p>
<pre>$ snmpget -v 1 -c public 10.10.10.3 .1.3.6.1.2.1.1.5.0
iso.3.6.1.2.1.1.5.0 = STRING: "OKI-MB480-224E59"
$ snmpwalk -v 1 -c public 10.10.10.3 > walkfile // Use redirection for lengthy output
</pre>
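<p>With the walk redirected to a file, ordinary text tools do the searching. A sketch against a two-line stand-in for the real 1900-line walkfile (the sample lines are borrowed from the queries shown earlier):</p>

```shell
# Sketch: search a saved snmpwalk dump with grep. The two sample lines
# stand in for the real 1900-line walkfile.
walk=$(mktemp)
cat <<'EOF' > "$walk"
iso.3.6.1.2.1.1.5.0 = STRING: "OKI-MB480-224E59"
iso.3.6.1.4.1.2001.1.1.1.1.2.20.0 = STRING: "Ready To Print/Power Save"
EOF
grep -c 'STRING' "$walk"      # count the string-valued entries
grep -i 'ready' "$walk"       # find entries mentioning the status text
```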
<br />
<hr />
<h2>Finding the right question to ask</h2>
<p>Now that we have connectivity, we need a dictionary to understand all those number encodings. That dictionary is called a MIB file. It's a structured text file that defines all of the numbers and positions and response codes.</p>
<ol>
<li>The SNMP package that we installed has MIBs disabled by default. Enable them.
<br />
<ul>
<li>Edit the /etc/snmp/snmp.conf file</li>
<li>Comment out the "mibs :" line</li>
</ul>
</li>
<br />
<li>Install the package of standard MIB files.
<pre> sudo apt install snmp-mibs-downloader</pre>
</li>
</ol>
<p>The MIB for my printer wasn't in the package. I found it online, downloaded it, and stored it in /home/$ME/.snmp/mibs/. The snmp command automatically looks for MIBs there, too.</p>
<p>Here's the same query using the proper MIB as a dictionary:</p>
<pre>$ snmpget -v 1 -c public -m OKIDATA-MIB 10.10.10.3 sysName.0
SNMPv2-MIB::sysName.0 = STRING: OKI-MB480-224E59
$ snmpget -v 1 -c public -m OKIDATA-MIB -O n 10.10.10.3 sysName.0 // '-O' formats output. 'n'=numeric
.1.3.6.1.2.1.1.5.0 = STRING: OKI-MB480-224E59</pre>
<p>So now it's a matter of using snmpwalk to locate the fields that I want to ask for. I chose three fields:</p>
<ul>
<li>Current Status: OKIDATA-MIB::stLcdMessage.0</li>
<li>Drum Usage: OKIDATA-MIB::usageDrumCurrentLevel.1</li>
<li>Toner Percent Remaining: OKIDATA-MIB::usageTonerCurrentLevel.1</li>
</ul>
<p>Obtain the corresponding numeric code (called an OID) for each field using the -O n flag, and test the OID without the MIB.</p>
<pre>$ snmpget -v 1 -m OKIDATA-MIB -c public -O n 10.10.10.3 usageDrumCurrentLevel.1
.1.3.6.1.4.1.2001.1.1.1.1.100.4.1.1.3.1 = STRING: "2298"
$ snmpget -v 1 -c public 10.10.10.3 .1.3.6.1.4.1.2001.1.1.1.1.100.4.1.1.3.1
SNMPv2-SMI::enterprises.2001.1.1.1.1.100.4.1.1.3.1 = STRING: "2298"</pre>
<br />
<hr />
<h2>Migrating a successful query into Home Assistant</h2>
<p>Here's what the same <a href="https://www.home-assistant.io/integrations/snmp/">SNMP query</a> looks like in a Home Assistant config:</p>
<pre>sensor:
  - platform: snmp
    version: 1            # Optional: Default is 1
    community: public     # Optional: Default is public
    host: 10.10.10.3
    baseoid: .1.3.6.1.4.1.2001.1.1.1.1.100.4.1.1.3.1
    name: Printer Drum Remaining
    unit_of_measurement: '%'
    # A drum lasts about 25,000 impressions. Convert usage to a percentage of 25,000
    value_template: '{{ (100 - ((value | int) / 250.00)) | int }}'</pre>
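<p>The template arithmetic can be sanity-checked in a shell before reloading Home Assistant. Here 2298 is the sample drum reading from the snmpget output above:</p>

```shell
# Sketch: verify the value_template math outside Home Assistant.
# 2298 is the sample drum reading; a drum lasts about 25,000 impressions,
# so dividing by 250 gives the percentage used.
awk 'BEGIN { value = 2298; printf "%d\n", int(100 - value / 250.00) }'
# prints 90
```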
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com1tag:blogger.com,1999:blog-2703060415027607989.post-56117408689871664942020-08-16T13:38:00.000-05:002020-08-16T13:38:45.707-05:00Installing Home Assistant Core in an LXD Container (Part 2)<p><a href="https://cheesehead-techblog.blogspot.com/2020/08/installing-home-assistant-core-in-lxd.html">Last time</a>, we built a basic LXD container, and then built HomeAssistant inside.</p>
<p>This time, we're going to add a few more elements.</p>
<ul>
<li>We're going to do all the steps on the Host instead of diving inside the container. So we're going to use <i>lxc exec</i> and <i>lxc push</i>. The goal is to make spinning up a new container scriptable.</li>
<li>We're going to start/stop the HomeAssistant application using a systemd service</li>
<li>We're going to keep the data and config outside the container and use an <i>lxd disk</i> device to mount the data. Even if we destroy the container, the data and config survive to be mounted another day. </li>
</ul>
<br />
<h3>Preparing LXD</h3>
<p>We're going to skip LXD initialization in this example. There's one addition from last time: We're going to add shiftfs, which permits us to chown mounted data. The macvlan profile and shiftfs enablement are persistent -- if you already have them, you don't need to redo them. All of these commands occur on the Host (we have not created the container yet!)</p>
<pre> # Create a macvlan profile, so the container will get its IP address from
# the router instead of the host. This works on ethernet, but often not on wifi
ip route show default 0.0.0.0/0
lxc profile copy default lanprofile
lxc profile device set lanprofile eth0 nictype macvlan
lxc profile device set lanprofile eth0 parent enp3s5
# Test that macvlan networking is set up
lxc profile show lanprofile
config: {}
description: Default LXD profile   // Copied. Not really the default
devices:
  eth0:                            // Name, not real device
    nictype: macvlan               // Correct network type
    parent: enp3s5                 // Correct real device
    type: nic
# Enable shiftfs in LXD so data mounts work properly
sudo snap set lxd shiftfs.enable=true
sudo systemctl reload snap.lxd.daemon
# Test that shiftfs is enabled:
Host$ lxc info | grep shiftfs
shiftfs: "true"
</pre>
<br />
<h3>Create the Container and Initial Configuration</h3>
<p>If LXD is already set up, then start here. We will mount the external data location, set the timezone and do all that apt setup. But this time, we will do all the commands on the Host instead of inside the container. We will also create the sources.list file on the host and <i>push</i> it into the container.</p>
<pre> # Create the container named "ha"
lxc launch -p lanprofile ubuntu:focal ha
# Mount the existing HomeAssistant data directory
# Skip on the first run, since there won't be anything to mount
# Shiftfs is needed, else the mounted data is owned by nobody:nogroup
# Chown is needed because shiftfs changes the owner to 'ubuntu'
lxc config device add ha data_mount disk source=/somewhere/else/.homeassistant path=/root/ha_data
lxc config device set ha data_mount shift=true
lxc exec ha -- chown -R root:root /root
# Set the timezone non-interactively
lxc exec ha -- ln -fs /usr/share/zoneinfo/US/Central /etc/localtime
lxc exec ha -- dpkg-reconfigure -f noninteractive tzdata
# Reduce apt sources to Main and Universe only
# Create the new sources.list file on the host in /tmp
# Paste all of these lines at once into the Host terminal
cat <<EOF > /tmp/container-sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
deb http://security.ubuntu.com/ubuntu focal-security main universe
EOF
# Push the file into the container
lxc file push /tmp/container-sources.list ha/etc/apt/sources.list
# Apt removals and additions
lxc exec ha -- apt autoremove -y openssh-server
lxc exec ha -- apt update
lxc exec ha -- apt upgrade -y
lxc exec ha -- apt install -y python3-pip python3-venv</pre>
<br />
<h3>Create the Venv, Build HomeAssistant, and Test</h3>
<p>This method is simpler than all that mucking around activating a venv and paying attention to your prompt. All these commands are issued on the Host. You don't need a container shell prompt.</p>
<pre> # Setup the homeassistant venv in a dir called 'ha_system'
# We will use the root account since it's an unprivileged container.
lxc exec ha -- python3 -m venv --system-site-packages /root/ha_system
# Build and install HomeAssistant
lxc exec ha -- /root/ha_system/bin/pip3 install homeassistant
# Learn the container's IP address. Need this for the web browser.
lxc list | grep ha
# Run HomeAssistant
lxc exec ha -- /root/ha_system/bin/hass -c "/root/ha_data"
# Use your browser to open the IP address:8123
# HA takes a couple minutes to start up. Be patient.
# Stop the server from within the Web UI or ^C to exit when done.</pre>
<br />
<h3>Start HomeAssistant at Boot (Container Startup)</h3>
<p>The right way to do autostart is a systemd service file on the container. Like with the sources.list file, we will create it on the host, then push it into the container, then enable it. There's one optional ExecStartPre line - it will slow each startup slightly while it checks for and installs updates.</p>
<pre> cat <<EOF > /tmp/container-homeassistant.service
[Unit]
Description=Home Assistant
After=network-online.target
[Service]
Type=simple
User=root
PermissionsStartOnly=true
ExecStartPre=/root/ha_system/bin/pip3 install --upgrade homeassistant
ExecStart=/root/ha_system/bin/hass -c "/root/ha_data"
[Install]
WantedBy=multi-user.target
EOF
# Push the .service file into the container, and enable it
lxc file push /tmp/container-homeassistant.service ha/etc/systemd/system/homeassistant.service
lxc exec ha -- systemctl --system daemon-reload
lxc exec ha -- systemctl enable homeassistant.service
lxc exec ha -- systemctl start homeassistant.service</pre>
<p>Now we can test it. The last command should start HA. The same command with 'stop' should gracefully stop HA. Restarting the container should gracefully stop HA, and then restart it automatically. Your web browser UI should pick up each stop and start. You did it!</p>
<br/>
<h3>Final Notes</h3>
<p>Remember how you started without any HomeAssistant data to mount? Now that you have a running HA Core, you can save a set of data:</p>
<pre> lxc file pull ha/root/ha_data /somewhere/else/.homeassistant --recursive</pre>
<p>And remember to clean up your mess when you are done:</p>
<pre> lxc stop ha
lxc delete ha</pre>Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-83129169024254395852020-08-15T19:36:00.056-05:002020-08-16T12:38:53.475-05:00Installing Home Assistant Core in an LXD Container (Part 1)<p>I've been running HomeAssistant Core reliably in an LXD container for almost two years now, so it's probably time to start detailing how to do it.</p>
<p>This is a step-by-step example of how to do it for folks who aren't very familiar with LXD containers and their features.</p>
<h3>Installing LXD (<a href="https://linuxcontainers.org/lxd/getting-started-cli/">documentation</a>)</h3>
<p>If you haven't used LXD before, you need to install it (it's a Snap) and initialize it (tell it where the storage is located). The initialization defaults are sane, so you should not have problems.</p>
<pre> sudo snap install lxd
sudo lxd init</pre>
<br />
<h3>Container Profile: Macvlan Networking (optional)</h3>
<p>A macvlan profile is one easy way for the container to get its IP address from the router instead of the host. This means you can use a MAC Address filter to issue a permanent IP address. This works on ethernet, but often not on wifi. You only need to set up this profile ONCE, and it's easiest to do BEFORE creating the container. Since the container doesn't exist yet, all of these commands are done on the Host.</p>
<pre> # Get the real ethernet device (enp3s5 or some such)
ip route show default 0.0.0.0/0
# Make mistakes on a copy
lxc profile copy default lanprofile
# Change nictype field to macvlan
# 'eth0' is a virtual device, not a real eth device
lxc profile device set lanprofile eth0 nictype macvlan
# Change parent field to real eth interface
lxc profile device set lanprofile eth0 parent enp3s5</pre>
<br />
<h3>Create the Container</h3>
<p>Create a new container named 'ha'. This command is done on the Host.</p>
<pre> # Create the container named "ha"
lxc launch -p lanprofile ubuntu:focal ha
# Learn the container's IP address. Need this for the web browser.
lxc list | grep ha
# Get a root shell prompt inside the container
lxc shell ha</pre>
<br />
<h3>Initial Setup in the Container</h3>
<p>Let's get a shell and set up the timezone and apt. These commands are done on the Container root prompt.</p>
<pre>
// This is one way to set the timezone
dpkg-reconfigure tzdata
// Reduce apt sources to Main and Universe only
cat <<EOF > /etc/apt/sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
deb http://security.ubuntu.com/ubuntu focal-security main universe
EOF
// Tweak: Remove openssh-server
apt autoremove openssh-server
// Populate the apt package database and bring the container packages up-to-date
apt update
apt upgrade
// Install the python packages needed for HomeAssistant
apt install python3-pip python3-venv
# Setup the homeassistant venv in the root home dir (/root)
# --system-site-packages allows the venv to use the many deb packages that are already
# installed as dependencies instead of downloading pip duplicates
python3 -m venv --system-site-packages /root</pre>
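<p>A venv is just a directory tree, which is why the next step can activate it with a plain <code>source bin/activate</code> from /root. The layout can be inspected anywhere; here's a sketch in a temp directory (<code>--without-pip</code> is added only so the demo runs offline):</p>

```shell
# Sketch: a venv is an ordinary directory tree. Creating it at /root means
# bin/activate lands directly under root's home, hence 'source bin/activate'.
# --without-pip keeps this demo self-contained and offline.
demo=$(mktemp -d)
python3 -m venv --without-pip --system-site-packages "$demo"
ls "$demo/bin/activate" "$demo/pyvenv.cfg"
```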
<br />
<h3>Install and Run HomeAssistant</h3>
<p>Now we move into a virtual environment inside the container, build HomeAssistant, and give it a first run. If you try to build or run HomeAssistant outside the venv, it will fail with cryptic errors.</p>
<pre> // Activate the installed venv. Notice how the prompt changes.
root@ha:~# source bin/activate
(root) root@ha:~#
// Initial build of HomeAssistant. This takes a few minutes.
(root) root@ha:~# python3 -m pip install homeassistant
// Instead of first build, this is where you would upgrade
(root) root@ha:~# python3 -m pip install --upgrade homeassistant
// Initial run to set up and test.
(root) root@ha:~# hass
// After a minute or two, open the IP Address (port 8123). Example: http://192.168.1.18:8123
// Use the Web UI to shut down the application. Or use CTRL+C.
// Exit the venv
(root) root@ha:~# deactivate
// Exit the container and return to the Host shell.
root@ha:~# exit
Host:~$</pre>
<br />
<hr />
<p>There's a lot more to talk about in future posts:</p>
<ul>
<li>The systemd service that starts HomeAssistant at container startup.</li>
<li>Creating an LXD <i>disk device</i> to keep the HomeAssistant data in. If I rebuild the container for some reason, I can simply connect it to the data.</li>
<li>Adding a USBIP client. The Z-Wave controller is elsewhere in the building, and USBIP lets me control it like it's attached to the host. That also means adding a USB device to the container.</li>
<li>Collecting Host heartbeat statistics for the HomeAssistant dashboard, and pushing those into the container regularly.</li>
<li>Backing up and restoring HomeAssistant data and configurations.</li>
</ul>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-50410662293163652742020-08-14T12:11:00.002-05:002020-08-15T23:07:03.782-05:00LXD Containers on a Home Server<p>LXD Containers are very handy, and I use them for quite a few services on my home hobby & fun server. Here's how I set up my containers after a year of experimenting. Your mileage will vary, of course. You may have very different preferences than I do.</p>
<h3 style="text-align: left;">1. Networking:</h3>
<p style="text-align: left;">I use macvlan networking. It's a simple, reliable, low-overhead way to pull an IP address from the network DHCP server (router). I set the IP address of many machines on my network at the router.</p>
<p style="text-align: left;">The container and server cannot communicate using TCP/UDP with each other. I don't mind that.</p>
<p style="text-align: left;"><b>You only need to set up this profile once for all containers</b>. Simply specify the profile when creating a new container.</p>
<ul style="text-align: left;">
<li>Reference: <a href="https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/">https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/</a></li>
</ul>
<pre> // 'Host:$' means the shell user prompt on the LXD host system. It's not a shell command
// Learn the eth interface: enp3s5 in this example
Host:$ ip route show default 0.0.0.0/0
// Make mistakes on a copy
Host:$ lxc profile copy default lanprofile
// Change nictype field. 'eth0' is a virtual device, not a real eth device
Host:$ lxc profile device set lanprofile eth0 nictype macvlan
// Change parent field to real eth interface
Host:$ lxc profile device set lanprofile eth0 parent enp3s5
// Let's test the changes
Host:$ lxc profile show lanprofile
config: {}
description: Default LXD profile   // This field is copied. Not really the default
devices:
  eth0:                            // Virtual device
    nictype: macvlan               // Correct network type
    parent: enp3s5                 // Correct real device
    type: nic
  root:
    path: /
    pool: containers-disk          // Your pool will be different, of course
    type: disk
name: lanprofile</pre>
<br />
<hr />
<h3 style="text-align: left;">2. Creating a Container</h3>
<p style="text-align: left;">Create a new container called 'newcon':</p>
<pre> Host:$ lxc launch -p lanprofile ubuntu:focal newcon
// 'Host:$' - user (non-root) shell prompt on the LXD host
// '-p lanprofile' - use the macvlan networking profile
// 'focal' - Ubuntu 20.04. Substitute any release you like</pre>
<br />
<hr />
<h3 style="text-align: left;">3. Set the Time Zone</h3>
<p style="text-align: left;">The default time zone is UTC. Let's fix that. Here are two easy ways to set the timezone: (<a href="https://serverfault.com/a/846989">source</a>)</p>
<pre> // Get a root prompt within the container for configuration
// Then use the classic Debian interactive tool:
Host:$ lxc shell newcon
newcon:# dpkg-reconfigure tzdata
// Alternately, here's a non-interactive way to do it entirely on the host
Host:$ lxc exec newcon -- ln -fs /usr/share/zoneinfo/US/Central /etc/localtime
Host:$ lxc exec newcon -- dpkg-reconfigure -f noninteractive tzdata</pre>
<br />
<hr />
<h3 style="text-align: left;">4. Remove SSH Server</h3>
<p style="text-align: left;">We can access the container from the server at anytime. So most containers don't need an SSH server. Here are two ways to remove it</p>
<pre> // Inside the container
newcon:# apt autoremove openssh-server
// Or from the Host
Host:$ lxc exec newcon -- apt autoremove openssh-server</pre>
<br />
<hr />
<h3 style="text-align: left;">5. Limit Apt sources to what the container will actually use</h3>
<p style="text-align: left;">Unlike setting the timezone properly, this is *important*. If you do this right, the container will update itself automatically for as long as the release of Ubuntu is supported (mark your calendar!) If you don't get this right, you will leave yourself an ongoing maintenance headache.
</p>
<pre> // Limit the apt sources to (in this example) main from within the container
newcon:# nano /etc/apt/sources.list
// The final product should look similar to:
deb http://archive.ubuntu.com/ubuntu focal main
deb http://archive.ubuntu.com/ubuntu focal-updates main
deb http://security.ubuntu.com/ubuntu focal-security main
// Alternately, *push* a new sources.list file from the host.
// Create the new sources.list file on the host in /tmp
cat <<EOF > /tmp/container-sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main
deb http://security.ubuntu.com/ubuntu focal-security main
EOF
// *Push* the file from host to container
Host:$ lxc file push /tmp/container-sources.list newcon/etc/apt/sources.list</pre>
<br />
<hr />
<h3 style="text-align: left;">6. Install the Application</h3>
<p style="text-align: left;">How you do this depends upon the application and how it's packaged.</p>
<br />
<hr />
<h3 style="text-align: left;">7. Update Unattended Upgrades</h3>
<p style="text-align: left;">This is the secret sauce that keeps your container up-to-date. First, let's look at a cleaned-up version of the first 20-or-so lines of /etc/apt/apt.conf.d/50unattended-upgrades inside the container:</p>
<pre> What it says What it means
------------------------------------------ -----------------------
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}"; Ubuntu:focal
"${distro_id}:${distro_codename}-security"; Ubuntu:focal-security
// "${distro_id}:${distro_codename}-updates"; Ubuntu:focal-updates
// "${distro_id}:${distro_codename}-proposed"; Ubuntu:focal-proposed
// "${distro_id}:${distro_codename}-backports"; Ubuntu:focal-backports
};</pre>
<p style="text-align: left;">...why, those are just the normal repositories! -security is enabled (good), but -updates is disabled (bad). Let's fix that. Inside the container, that's just using an editor to remove the commenting ("//"). From the host, it's a substitution job for sed:</p>
<pre> Host:$ lxc exec newcon -- sed -i 's~//\(."${distro_id}:${distro_codename}-updates";\)~\1~' /etc/apt/apt.conf.d/50unattended-upgrades</pre>
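If you'd like to rehearse a substitution like that before pointing it at the container's real config, run it against a scratch copy first. A self-contained sketch (the sample below uses a single space after the `//` where the stock file has a tab; the `.` in the sed pattern matches either):

```shell
# Scratch copy mimicking the stock Allowed-Origins layout
cat <<'EOF' > /tmp/50uu-sample
Unattended-Upgrade::Allowed-Origins {
 "${distro_id}:${distro_codename}";
 "${distro_id}:${distro_codename}-security";
// "${distro_id}:${distro_codename}-updates";
};
EOF

# Strip the leading "//" from the -updates line, keeping the rest intact
sed -i 's~//\(."${distro_id}:${distro_codename}-updates";\)~\1~' /tmp/50uu-sample

grep -- '-updates' /tmp/50uu-sample
# The -updates line prints without its "//" comment marker
```

Once the scratch run looks right, aim the same sed at the real file inside the container.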
<p style="text-align: left;">Third-party sources need to be updated, too. This is usually easiest from within the container. See <a href="https://cheesehead-techblog.blogspot.com/2020/02/advanced-unattended-upgrade-ubuntu.html">this post</a> for how and where to update Unattended Upgrades with the third-party source information.</p>
<br />
<hr />
<h3 style="text-align: left;">8. Mounting External Media</h3>
<p style="text-align: left;">Some containers need disk access. A classic example is a media server that needs access to that hard drive full of disorganized music.</p>
<p style="text-align: left;">If the disk is available across the network instead of locally, then use plain old <a href="https://askubuntu.com/questions/710149/how-to-convert-sshfs-command-to-fstab-entry">sshfs</a> or samba to mount the network share in /etc/fstab.</p>
<p style="text-align: left;">If the disk is local, then first mount it on the Host. After it's mounted, use an lxd disk device inside the container. A disk device is an all-in-one service: It creates the mount point inside the container and does the mounting. It's persistent across reboots...as long as the disk is mounted on the host.</p>
<pre> // Mount disk on the host and test
Host:$ sudo mount /dev/sda1 /media
Host:$ ls /media
books movies music
// Create disk device called "media_mount" and test
Host:$ lxc config device add newcon media_mount disk source=/media path=/Shared_Media
Host:$ lxc exec newcon -- ls /Shared_Media
books movies music</pre>
<p style="text-align: left;">If the ownership of files on the disk is confused and you get "permission denied" errors, then use shiftfs to shift the uids/gids so the container's unprivileged users map onto the files' real owners.</p>
<pre> Host:$ lxc exec newcon -- ls /Shared_Media/books
permission denied
// Enable shiftfs in LXD, reload the lxd daemon, and test
Host:$ sudo snap set lxd shiftfs.enable=true
Host:$ sudo systemctl reload snap.lxd.daemon
Host:$ lxc info | grep shiftfs
  shiftfs: "true"
// Add shiftfs to the disk device
Host:$ lxc config device set newcon media_mount shift=true
Host:$ lxc exec newcon -- ls /Shared_Media/books
boring_books exciting_books comic_books cookbooks</pre>
<br />Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-89594600368029284872020-05-08T09:05:00.000-05:002020-05-08T09:05:36.099-05:00Testing Ubuntu Core with Containers on VirtualBoxI want to try out <a href="https://ubuntu.com/core">Ubuntu Core</a> to see if it's appropriate for running a small server with a couple containers.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtGXTKdL2F7hylS8GPajZGEvphtCxMb2qIlH0ioEX4wyEoX9ph-hFeP7FudCV8fQkecZ9hVFL-I7EQkH8Xloj7_qwjie_Y3ZzS_1GaKL_aB4JK4pQq0aI1zYLUW_NXRtwu5ELN3UbMOU-F/s1600/layout1.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1056" data-original-width="816" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtGXTKdL2F7hylS8GPajZGEvphtCxMb2qIlH0ioEX4wyEoX9ph-hFeP7FudCV8fQkecZ9hVFL-I7EQkH8Xloj7_qwjie_Y3ZzS_1GaKL_aB4JK4pQq0aI1zYLUW_NXRtwu5ELN3UbMOU-F/s320/layout1.png" width="246" /></a>The current OS is Ubuntu Server 20.04...but I'm really not using most of the Server features. Those are in the LXD containers. So this is an experiment to see if Ubuntu Core can function as the server OS.<br />
<br />
<b>Prerequisites</b>: If you are looking to try this, you should already be familiar (not expert) with:<br />
<ul>
<li>Using SSH</li>
<li>Using the <a href="https://www.youtube.com/watch?v=g-XsXEsd6xA">vi text editor</a> (Ubuntu Core lacks nano)</li>
<li>Basic networking concepts like dhcp</li>
<li>Basic VM and Container concepts</li>
</ul>
<br />
<br />
<b>Download Ubuntu Core</b>:<br />
<ul>
<li>Create Ubuntu SSO Account (if you don't have one already)</li>
<li>Create a SSH Key (if you don't have one already)</li>
<li><a href="https://login.ubuntu.com/ssh-keys">Import your SSH Public Key</a> to Ubuntu SSO.</li>
<li>Download an Ubuntu core .img file from <a href="https://ubuntu.com/download/iot#core">https://ubuntu.com/download/iot#core</a></li>
<li>Convert the Ubuntu Core .img to a Virtualbox .vdi:<br /><br />
<pre> me@desktop:~$ VBoxManage convertdd ubuntu-core-18-amd64.img ubuntu-core.vdi</pre>
</li>
</ul>
<br />
<br />
<b>Set up a new machine in VirtualBox</b>:<br />
<ul>
<li>Install VirtualBox (if you haven't already): <br /><br />
<pre> me@desktop:~$ sudo apt install virtualbox</pre>
<br />
</li>
<li>In the Virtualbox Settings, File -> Virtual Media Manager. Add the ubuntu-core.vdi</li>
<li>Create a New Machine. Use an existing Hard Disk File --> ubuntu-core.vdi</li>
<li>Check the network settings. You want a network that you will be able to access. I chose bridged networking so I could play with the new system from different locations, and set up a static IP address on the router. ENABLE promiscuous mode, so containers can get IP addresses from the router. Otherwise, VirtualBox will filter out the dhcp requests.</li>
<li>OPTIONAL: <a href="https://forum.snapcraft.io/t/ubuntu-core-18-running-in-virtualbox/9891">Additional tweaks</a> to enhance performance.</li>
</ul>
<br />
<br />
<b>Take a snapshot</b> of your current network neighborhood:<br />
<ul>
<li>Use this to figure out Ubuntu Core's IP address later on:</li>
</ul>
<pre> me@Desktop:~$ ip neigh
192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY</pre>
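A tip: save the snapshot to a file. After the VM (and later the bridge) comes up, take a second snapshot and let `comm` report the new addresses instead of eyeballing the lists. A sketch with canned, pre-sorted data; live runs would use `ip neigh | awk '{print $1}' | sort` for each file:

```shell
# Live runs: ip neigh | awk '{print $1}' | sort > /tmp/neigh-before
# Canned data here so the sketch stands alone
printf '192.168.1.1\n192.168.1.213\n192.168.1.227\n192.168.1.234\n192.168.1.246\n' > /tmp/neigh-before
printf '192.168.1.1\n192.168.1.213\n192.168.1.226\n192.168.1.227\n192.168.1.234\n192.168.1.235\n192.168.1.246\n' > /tmp/neigh-after

# -13 suppresses lines unique to "before" and lines common to both,
# leaving only addresses that appeared in the second snapshot
comm -13 /tmp/neigh-before /tmp/neigh-after
# 192.168.1.226
# 192.168.1.235
```

The lines it prints are the candidates to try ssh against.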
<br />
<br />
<b>Boot the image</b> in VirtualBox:<br />
<ul>
<li>The first boot of Ubuntu Core requires a screen and keyboard (one reason we're trying this in VirtualBox). Subsequent logins will be done by ssh.</li>
<li>Answer the couple setup questions.</li>
<li>Use your Ubuntu One login e-mail address.</li>
<li>The VM will reboot itself (perhaps more than once) when complete.</li>
<li>Note you cannot login to the VM's TTY. Ubuntu Core's default login is via ssh. Instead, the VM's TTY tells you the IP address to use for ssh.</li>
<li>Since we are using a VM, this is a convenient place to take an initial snapshot. If you make a mess of networking in the next step, you can revert the snapshot.
</li>
</ul>
<br />
<br />
<b>Let's do some initial configuration</b>:<br />
<ul>
<li>After the VM reboots, the Virtualbox screen only shows the IP address.</li>
<br />
<pre>// SSH into the Ubuntu Core Guest
me@desktop:~$ ssh my-Ubuntu-One-login-name@IP-address
[...Welcome message and MOTD...]
me@localhost:~$
// The default name is "localhost"
// Let's change that. Takes effect after reboot.
me@localhost:~$ sudo hostnamectl set-hostname 'ubuntu-core-vm'
// Set the timezone. Takes effect immediately.
me@localhost:~$ sudo timedatectl set-timezone 'America/Chicago'
// OPTIONAL: Create a TTY login
// This can be handy if you have networking problems.
me@localhost:~$ sudo passwd my-Ubuntu-One-login-name</pre>
</ul>
<br />
<br />
<b>Let's set up the network bridge so containers can draw their IP address from the router</b>:<br />
<br />
<ul>
<li>We use vi to edit the netplan configuration. When we apply the changes, the ssh connection will be severed so we must discover the new IP address to login again.</li>
<br />
<pre>me@localhost:~$ sudo vi /writable/system-data/etc/netplan/00-snapd-config.yaml
#// The following seven lines are the original file. Commented instead of deleted.
# This is the network config written by 'console_conf'
#network:
#  ethernets:
#    eth0:
#      addresses: []
#      dhcp4: true
#  version: 2
#// The following lines are the new config
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
  bridges:
    # br0 is the name that containers use as the parent
    br0:
      interfaces:
        # eth0 is the device name in 'ip addr'
        - eth0
      dhcp4: yes
      dhcp6: yes
#// End
// After the file is ready, implement it:
me@localhost:~$ sudo netplan generate
me@localhost:~$ sudo netplan apply
// If all goes well...your ssh session just terminated without warning.
</pre>
</ul>
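One pre-flight check worth doing before `netplan apply`: YAML forbids tab indentation, and a stray tab here costs you the ssh session. This sketch greps a scratch copy for tabs; on the real system you would point the grep at /writable/system-data/etc/netplan/00-snapd-config.yaml instead:

```shell
# Scratch netplan fragment (spaces only, as YAML requires)
cat <<'EOF' > /tmp/netplan-check.yaml
network:
  version: 2
  bridges:
    br0:
      interfaces:
        - eth0
      dhcp4: yes
EOF

# "$(printf '\t')" is a portable way to hand grep a literal tab
if grep -q "$(printf '\t')" /tmp/netplan-check.yaml; then
    echo "tab found - netplan will reject this file"
else
    echo "no tabs - indentation looks sane"
fi
```

This catches only the most common mistake; `sudo netplan generate` remains the real syntax check.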
<br />
<br />
<b>Test our new network settings</b>:
<br />
<ul>
<li>The Ubuntu Core VM window will NOT change the displayed IP address after the netplan change...but that IP won't work anymore.</li>
<li>If you happen to reboot (not necessary) you will see that the TTY window displays no IP address when bridged...unless you have created an optional TTY login.</li>
<li>Instead of rebooting, let's take another network snapshot and compare to earlier:<br /><br />
<pre> me@Desktop:~$ ip neigh
192.168.1.226 dev enp3s0 lladdr c6:12:89:22:56:e4 STALE <---- NEW
192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
192.168.1.235 dev enp3s0 lladdr DELAY <---- NEW
192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY</pre>
<br />
</li>
<li>We have two new lines: .226 and .235. One of those was the old IP address, and one is the new. SSH into the new IP address, and you're back in.<br /><br />
<pre>me@desktop:~$ ssh my-Ubuntu-One-user-name@192.168.1.226
Welcome to Ubuntu Core 18 (GNU/Linux 4.15.0-99-generic x86_64)
[...Welcome message and MOTD...]
Last login: Thu May 7 16:11:38 2020 from 192.168.1.6
me@localhost:~$</pre>
<br />
</li>
<li>Let's take a closer look at our new, successful network settings.<br /><br />
<pre>me@localhost:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether c6:12:89:22:56:e4 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.226/24 brd 192.168.1.255 scope global dynamic br0
valid_lft 9545sec preferred_lft 9545sec
inet6 2683:4000:a450:1678:c412:89ff:fe22:56e4/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 600sec preferred_lft 600sec
inet6 fe80::c412:89ff:fe22:56e4/64 scope link
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether 08:00:27:fd:20:92 brd ff:ff:ff:ff:ff:ff
// Note that ubuntu-core-vm now uses the br0 address, and lacks an eth0 address.
// That's what we want.</pre>
</li>
</ul>
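If you ever need that br0 address in a script — say, to report it somewhere — awk can pluck it out of the `ip addr` wall of text. A sketch against two canned lines of output; on the live system you would pipe `ip -4 addr show br0` into the same awk instead:

```shell
# Canned excerpt of 'ip -4 addr show br0' output
output='2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    inet 192.168.1.226/24 brd 192.168.1.255 scope global dynamic br0'

# Match the "inet" line, take field 2 (the CIDR), and drop the /24 prefix length
echo "$output" | awk '/inet /{split($2, a, "/"); print a[1]}'
# 192.168.1.226
```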
<br />
<br />
<b>Set up static IP addresses on the Router</b> and then reboot to use the new IP address.
<br />
<ul>
<li>Remember, the whole point of bridged networking is for the router to issue all the IP addresses and avoid doing a lot of NATing and Port Forwarding.</li>
<li>So now is the time to login to the Router and have it issue a constant IP address to the Bridge MAC address (in this case c6:12:89:22:56:e4). After this, ubuntu-core-vm (the Ubuntu Core Guest VM) will always have a predictable IP address.</li>
<li>Use VirtualBox to ACPI shutdown the VM, then restart it headless. We're looking for two changes: The hostname and the login IP address.</li>
<li> Starting headless can be done two ways:<br /><br />
<ol>
<li>GUI: Virtualbox Start button submenu</li>
<li><pre>me@Desktop:~$ VBoxHeadless --startvm name-of-vm</pre>
<br />
</li>
</ol>
</li>
<li>Success at rebooting headless and logging into the permanent IP address is a good point for another VM Snapshot. And maybe a sandwich. Well done!</li>
</ul>
<br />
<br />
<b>Install LXD</b> onto ubuntu-core-vm:
<br />
<ul>
<li>Install:<br /><br />
<pre>me@ubuntu-core-vm:~$ snap install lxd
lxd 4.0.1 from Canonical✓ installed
me@ubuntu-core-vm:~$</pre>
<br />
</li>
<li>Add myself to the `lxd` group so 'sudo' isn't necessary anymore. This SHOULD work, but doesn't due to a bug (<a href="https://forum.snapcraft.io/t/creating-a-udev-rule-and-adding-a-user-to-the-dialout-group/5097">discussion</a>)<br /><br />
<pre>host:~$ sudo adduser --extrausers me lxd // Works on most Ubuntu; does NOT work on Ubuntu Core even with --extrausers
host:~$ newgrp lxd // New group takes effect without logout/login</pre>
<br />
</li>
<li>Instead, edit the groups file directly using vi:<br /><br />
<pre>// Use vi to edit the file:
me@ubuntu-core-vm:~$ sudo vi /var/lib/extrausers/group
// Change the lxd line:
lxd:x:999: // Old Line
lxd:x:999:my-login-name // New Line
// Apply the new group settings without logout
me@ubuntu-core-vm:~$ newgrp lxd</pre>
</li>
</ul>
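The same edit can be scripted rather than done in vi. A sketch against a scratch copy ('my-login-name' is the same placeholder as above; on the real system the file is /var/lib/extrausers/group and the sed needs sudo):

```shell
# Scratch copy of the relevant lines
printf 'sudo:x:27:me\nlxd:x:999:\n' > /tmp/extrausers-group

# Append the login name to the end of the lxd line.
# NOTE: if the group already had members, a comma separator would be needed first.
sed -i '/^lxd:/ s/$/my-login-name/' /tmp/extrausers-group

grep '^lxd' /tmp/extrausers-group
# lxd:x:999:my-login-name
```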
<b>Configure LXD</b>:
<br />
<ul>
<li>LXD is easy to configure. We need to make three changes from the default settings since we already have a bridge (br0) set up that we want to use.<br /><br />
<pre>me@ubuntu-core-vm:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, ceph, btrfs) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no <------------------------- CHANGE
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes <-- CHANGE
Name of the existing bridge or host interface: br0 <----------------------------------------------------- CHANGE
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
me@ubuntu-core-vm:~$</pre>
</li>
<li>Next, we change the networking profile so containers use the bridge:<br /><br />
<pre>// Open the default container profile in vi
me@ubuntu-core-vm:~$ lxc profile edit default
config: {}
description: Default LXD profile
devices:
  # Container eth0, not ubuntu-core-vm eth0
  eth0:
    name: eth0
    nictype: bridged
    # This is the ubuntu-core-vm br0, the real network connection
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []</pre>
</li>
<li>Add the Ubuntu-Minimal stream for cloud-images, so our test container is small:<br /><br />
<pre>me@ubuntu-core-vm:~$ lxc remote add --protocol simplestreams ubuntu-minimal https://cloud-images.ubuntu.com/minimal/releases/</pre>
</li>
</ul>
<b>Create and start a Minimal container</b>:
<br />
<ul>
<pre>me@ubuntu-core-vm:~$ lxc launch ubuntu-minimal:20.04 test1
Creating test1
Starting test1
me@ubuntu-core-vm:~$ lxc list
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| test1 | RUNNING | 192.168.1.248 (eth0) | 2603:6000:a540:1678:216:3eff:fef0:3a6f (eth0) | CONTAINER | 0 |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
// Let's test outbound connectivity from the container
me@ubuntu-core-vm:~$ lxc shell test1
root@test1:~# apt update
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
[...lots of successful server connections...]
Get:26 http://archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [1280 B]
Fetched 16.3 MB in 5s (3009 kB/s)
Reading package lists... Done
Building dependency tree...
Reading state information... Done
5 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@test1:~#</pre>
</ul>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-48917691405563259902020-02-19T16:36:00.001-06:002020-02-19T16:36:05.743-06:00Pushing a file from Host into an LXD ContainerOne of the little (and deliberate) papercuts of using unprivileged <b>LXD containers</b> is that unless data flows in from a network connection, it likely has the wrong owner and permissions.<br />
<br />
Here are two examples in the HomeAssistant container.<br />
<br />
1. The HA container needs to talk to a USB dongle elsewhere in the building. It does so using USBIP, and I discussed how to make it work in <a href="http://cheesehead-techblog.blogspot.com/2019/08/usbip-into-lxd-container.html">this previous post</a>.<br />
<br />
2. I want the HA container to display some performance data about the host (uptime, RAM used, similar excitements). Of course, <i>it's a container</i>, so it simply cannot do that natively without a lot of jiggery-pokery to escape the container. Instead, a script collects the information and <i>pushes</i> it into the container every few minutes.<br />
<br />
<pre> $ sudo lxc file push /path/to/host/file.json container-name/path/to/container/</pre>
<br />
Easy enough, right.<br />
<br />
Well, not quite. Home Assistant, when installed, creates a non-root user and puts all of its files in a subdirectory. Add another directory to keep things simple, and you get:<br />
<br />
<pre> /home/homeassistant/.homeassistant/external_files/</pre>
<br />
And, unfortunately, all those subdirectories are owned by a non-root user. So lxc cannot 'push' all the way into them (result: permission error).<br />
<br />
<pre> -rw-r--r-- 1 root root 154 Feb 19 15:34 file.json</pre>
<br />
The file can only be pushed to the wrong location, and it arrives there with the wrong ownership.<br />
<br />
<hr />
<br />
<b>Systemd to the rescue</b>: Let's create a systemd job on the container that listens for a push, then fixes the location and the ownership.<br />
<br />
The feature is called a systemd <i>path</i>.<br />
<br />
Like a systemd timer, it consists of two parts: a trigger (.path) and a service that gets triggered.<br />
<br />
The .path file is very simple. Here's what I used for the trigger:<br />
<br />
<pre>[Unit]
# /etc/systemd/system/server_status.path
Description=Listener for a new server status file
[Path]
PathModified=/home/homeassistant/.homeassistant/file.json
[Install]
WantedBy=multi-user.target</pre>
<br />
The service file is almost as simple. Here's what I used:<br />
<br />
<pre>[Unit]
# /etc/systemd/system/server_status.service
Description=Move and CHOWN the server status file
[Service]
Type=oneshot
User=root
ExecStartPre=/bin/mv /home/homeassistant/.homeassistant/file.json /home/homeassistant/.homeassistant/external_files/
ExecStart=/bin/chown homeassistant:homeassistant /home/homeassistant/.homeassistant/external_files/file.json
[Install]
WantedBy=multi-user.target
</pre>
<br />
Finally, enable and start the path (not the service)<br />
<br />
<pre>sudo systemctl daemon-reload
sudo systemctl enable server_status.path
sudo systemctl start server_status.path</pre>
<br />
<br />Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-42670212194602333672020-02-02T12:38:00.002-06:002020-08-26T19:56:26.913-05:00Advanced Unattended Upgrade (Ubuntu): Chrome and Plex examples<p><b>Updated Aug 26, 2020</b></p>
<p>This is a question that pops up occasionally in various support forums:</p>
<blockquote class="tr_bq">
Why doesn't (Ubuntu) Unattended Upgrades work for all applications? How can I get it to work for my application?</blockquote>
<p>Good question.</p>
<p>Here is what happens under the hood: The default settings for Unattended Upgrades are for only packages in the "-security" pocket of the Ubuntu repositories.</p>
<p>Not "-updates", not "-backports", not "-universe", not any third-party repositories, not any PPAs. Just "-security".</p>
<p>This is a deliberately conservative choice -- while the Ubuntu Security Team keeps its delta as small as possible, it's a historical fact that even small security patches have (unintentionally) introduced new bugs.</p>
<br />
<hr>
<h3>Here's how you can override that choice. </h3>
<p>Let's take a look at the top section of file /etc/apt/apt.conf.d/50unattended-upgrades, and focus on the "Allowed-Origins section." It's edited for clarity here:</p>
<pre>Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
// "${distro_id}:${distro_codename}-updates";
// "${distro_id}:${distro_codename}-proposed";
// "${distro_id}:${distro_codename}-backports";
};</pre>
<p>There, you can see the various Ubuntu repo pockets.</p>
<p>You can also see that most of the options are commented out (the "//"). If you know how to use a basic text editor and sudo, you can safely change those settings. <b>Warning</b>: You can break your system quite horribly by enabling the wrong source. Enabling "-proposed" and other testing sources is a <i>very</i> bad idea.</p>
<br />
<hr>
<h3>How to add the -updates pocket of the Ubuntu Repos?</h3>
<p>I've done this for years, BUT (this is important) I don't add lots of extra sources. Simply uncomment the line.</p>
<pre> "${distro_id}:${distro_codename}-updates";</pre>
<p>That's all. When Unattended Upgrades runs next, it will load the new settings.</p>
<p><b>Bonus</b>: Here's one way to do it using sed:</p>
<pre> sudo sed -i 's~//\(."${distro_id}:${distro_codename}-updates";\)~\1~' /etc/apt/apt.conf.d/50unattended-upgrades</pre>
<br />
<hr>
<h3>How to add the -universe pocket of the Ubuntu Repos?</h3>
<p>You can create a '-universe' line like the others, but it won't do anything. It's already handled by the "-updates" line.</p>
<br />
<hr>
<h3>How to add a generic new repository that's not in the Ubuntu Repos?</h3>
<p>Add a line in the format to the end of the section:</p>
<pre> // "${distro_id}:${distro_codename}-backports";
"origin:section"; <-------- Add this format (note the trailing semicolon)
};
</pre>
<p>The trick is finding out what the "origin" and "section" strings should be.</p>
<p><b>Step 1</b>: Find the URL of the source that you want to add. It's located somewhere in /etc/apt/sources.list or /etc/apt/sources.list.* . It looks something like this...</p>
<pre> deb http://security.ubuntu.com/ubuntu eoan-security main restricted universe multiverse
...or...
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
...or...
deb https://downloads.plex.tv/repo/deb/ public main</pre>
<br />
<p><b>Step 2</b>: Find the corresponding Release file in your system for the URL.</p>
<pre> http://security.ubuntu.com/ubuntu eoan-security
...becomes...
/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease
http://dl.google.com/linux/chrome/deb/ stable
...becomes...
/var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_InRelease
https://downloads.plex.tv/repo/deb/ public
...becomes...
/var/lib/apt/lists/downloads.plex.tv_repo_deb_dists_public_Release</pre>
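The URL-to-filename mapping in Step 2 is mechanical: drop the scheme, strip any trailing slash, turn the remaining slashes into underscores, and append `_dists_<suite>_InRelease`. Here's that rule as a small shell function (`release_path` is just an illustrative name; and note that some repos, like Plex's above, publish a plain `Release` file instead of `InRelease`):

```shell
release_path() {
    # $1 = repository URL, $2 = suite (the word after the URL in sources.list)
    host_path=$(echo "$1" | sed -e 's~^http://~~' -e 's~^https://~~' -e 's~/$~~' -e 's~/~_~g')
    echo "/var/lib/apt/lists/${host_path}_dists_${2}_InRelease"
}

release_path "http://security.ubuntu.com/ubuntu" "eoan-security"
# /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease
release_path "http://dl.google.com/linux/chrome/deb/" "stable"
# /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_InRelease
```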
<br />
<p><b>Step 3</b>: Use grep to find the "Origin" string.</p>
<pre> $ grep Origin /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_eoan-security_InRelease
Origin: Ubuntu
$ grep Origin /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_InRelease
Origin: Google LLC
$ grep Origin /var/lib/apt/lists/downloads.plex.tv_repo_deb_dists_public_Release
Origin: Artifactory</pre>
<br />
<p><b>Step 4</b>: With the Origin string and Section (after the space in the URL), we have all the information we need:</p>
<pre> "Ubuntu:eoan-security"
...or...
"Google LLC:stable"
...or...
"Artifactory:public"</pre>
<p>You're ready to add the appropriate string to the config file.</p>
<p><b>Bonus</b>: Here's one way to isolate most of these using shell script</p>
<pre> package="google-chrome-stable"
 url=$(apt-cache policy $package | grep "500 http://")
 # Drop any trailing "/" on the URL, then convert the URL into the cache-file path
 var_path=$(echo $url | sed 's~/ ~ ~' | sed 's~/~_~g' | \
 sed 's~500 http:__\([a-z0-9._-]*\) \([a-z0-9-]*\)_.*~/var/lib/apt/lists/\1_dists_\2_InRelease~')
 # "-f2-" keeps multi-word origins like "Google LLC" intact
 origin=$(grep "Origin:" $var_path | cut -d" " -f2-)
 section=$(echo $url | sed 's~500 http://\([a-z0-9._/-]*\) \([a-z0-9-]*\)/.*~\2~')
 echo "$origin":"$section"</pre>
<br />
<p><b>Step 5</b>: Run Unattended Upgrades once, then check the log to make sure Unattended Upgrades accepted the change.</p>
<pre> $ sudo unattended-upgrade
$ less /var/log/unattended-upgrades/unattended-upgrades.log (sometimes sudo may be needed)</pre>
<p>You are looking for a recent line like:</p>
<pre> 2020-02-02 13:36:23,165 INFO Allowed origins are: o=Ubuntu,a=eoan, o=Ubuntu,a=eoan-security, o=UbuntuESM,a=eoan, o=UbuntuESM,a=eoan-security, o=UbuntuESM,a=eoan-security</pre>
<p>Your new source and section should be listed.</p>
<br />
<hr>
<h3>Summary for folks who just want to know how to update Chrome (stable)</h3>
<ol>
<li>Edit (using sudo and a text editor) the file /etc/apt/apt.conf.d/50unattended-upgrades</li>
<li>In the section "Unattended-Upgrade::Allowed-Origins {", add the following line BEFORE the final "};"</li>
</ol>
<pre> "Google LLC:stable";</pre>
<br />
<hr />
<h3>Summary for folks who just want to know how to update Plex</h3>
<ol>
<li>Edit (using sudo and a text editor) the file /etc/apt/apt.conf.d/50unattended-upgrades </li>
<li>In the section "Unattended-Upgrade::Allowed-Origins {", add the following line BEFORE the final "};"</li>
</ol>
<pre> "Artifactory:public";</pre>
<br />Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-35740923272624223702019-08-20T14:32:00.003-05:002019-08-20T14:32:38.338-05:00Toggling the Minecraft Server using systemd features<div>
The new school year is upon us, and suddenly the kids are playing Minecraft much less.</div>
<div>
<br /></div>
<div>
This means that the Minecraft server sits there churning all day and night, spawning and unspawning, eating CPU and generating heat for what is now a sparse collection of occasional players. It's an old Sempron 145 (45w, single core), so a single world sitting idle still consumes 40% CPU.</div>
<div>
<br /></div>
<div>
We already use systemd to start and stop the server. Let's add a couple new features to stop the server during the school day. Oh, and let's stop it during the deep night, also.</div>
<div>
<br /></div>
<div>
Here's what we currently have: A basic start/stop/restart systemd service that brings up the server at start:<br />
<br />
<pre> ## /etc/systemd/system/minecraft.service
[Unit]
Description=Minecraft Server
After=network.target
[Service]
RemainAfterExit=yes
WorkingDirectory=/home/minecraft
User=minecraft
Group=minecraft
# Start Screen, Java, and Minecraft
ExecStart=screen -S mc -d -m java -server -Xms512M -Xmx1024M -jar server.jar nogui
# Tell Minecraft to gracefully stop.
# Ending Minecraft will terminate Java
# systemd will kill Screen after the 10-second delay. No explicit kill for Screen needed
ExecStop=screen -p 0 -S mc -X eval 'stuff "say SERVER SHUTTING DOWN. Saving map..."\\015'
ExecStop=screen -p 0 -S mc -X eval 'stuff "save-all"\\015'
ExecStop=screen -p 0 -S mc -X eval 'stuff "stop"\\015'
ExecStop=sleep 10
[Install]
WantedBy=multi-user.target</pre>
<br /></div>
<div>
<br /></div>
<div>
If you do something like this, remember to:<br />
<br />
<pre> $ sudo systemctl daemon-reload
$ sudo systemctl enable/disable minecraft.service // Autostart at boot
$ sudo systemctl start/stop minecraft.service // Manual start/stop</pre>
<br /></div>
<div>
<br /></div>
<div>
We need to start with a little bit of planning. After looking at the myriad of hours and days that the server should be available (Summer, Holidays, Weekends, School Afternoons), I don't see a way to make all those work smoothly together inside a cron job or systemd timer. </div>
<div>
<br /></div>
<div>
Instead, let's move the logic into a full-fledged Python script, and let the script decide whether the server should be on or off. Our systemd timer will run the script periodically.</div>
<div>
<br /></div>
<div>
Wait...that's not right. Systemd timers run only <i>services</i>. So the timer must trigger a service, the service runs the script, the script decides if the server should be on or off, and uses the existing service to do so.</div>
<div>
<br /></div>
<div>
Let's draw that out<br />
<br />
<pre>minecraft-hourly.timer -+   (timers can only run services)
                        |
                        v
minecraft-hourly.service -+   (service can run a script)
                          |
                          v
minecraft-hourly.py -+   (start/stop logic and decision)
                     |
                     v
minecraft.service        (start/stop the server)</pre>
<br /></div>
<div>
<br /></div>
<div>
We know where we are going, so let's work backwards to get there. We need a Python script with logic, and the ability to decide if the server should be off or on based upon any given time or date.
<br />
<br />
<pre>#!/usr/bin/env python3
## /home/me/minecraft-hourly.py
import datetime, subprocess

def ok_to_run_server():
    """Determine if the server SHOULD be up"""
    now = datetime.datetime.now()
    ## All days, OK to run 0-2, 5-8, 16-24
    if -1 < now.hour < 2 or 4 < now.hour < 8 or 15 < now.hour < 24:
        return True
    ## OK to run on weekends -- now.weekday() is 5 (Sat) or 6 (Sun)
    if now.weekday() > 4:
        return True
    ## OK to run during Summer Vacation (usually mid May - mid Aug)
    if 5 < now.month < 8:
        return True
    if now.month == 5 and now.day > 15:
        return True
    if now.month == 8 and now.day < 15:
        return True
    ## OK to run on School Holidays 2019-20
    ## Fill in these holidays!
    school_holidays = ["Aug 30 Fri","Sep 02 Mon"]
    if now.strftime("%b %d %a") in school_holidays:
        return True
    return False

def server_running():
    """Determine if the Minecraft server is currently up"""
    cmd = '/bin/systemctl is-active minecraft.service'
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    if proc.communicate()[0].decode().strip('\n') == 'active':
        return True
    else:
        return False

def run_server(run_flag=True):
    """run_flag=True will start the service. False will stop the service"""
    cmd = '/bin/systemctl start minecraft.service'
    if not run_flag:
        cmd = '/bin/systemctl stop minecraft.service'
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    proc.communicate()
    return

## If the server is stopped, but we're in an ON window, then start the server
if ok_to_run_server() and not server_running():
    run_server(True)
## If the server is running, but we're in an OFF window, then stop the server
elif not ok_to_run_server() and server_running():
    run_server(False)</pre>
<br /></div>
<div>
<br /></div>
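<div>
Before wiring the script into systemd, it's worth sanity-checking the window logic against a few fixed dates. Here's a minimal sketch: the same checks, but copied into a function that takes the time as a parameter so it can be tested (the name ok_to_run is mine, and I use now.weekday() >= 5 so that both Saturday (5) and Sunday (6) count as weekend):</div>

```python
import datetime

def ok_to_run(now):
    """Same window checks as the script above, but 'now' is a parameter."""
    ## All days, OK to run 00:00-02:00, 05:00-08:00, 16:00-24:00
    if -1 < now.hour < 2 or 4 < now.hour < 8 or 15 < now.hour < 24:
        return True
    ## Weekends: weekday() is Saturday=5, Sunday=6
    if now.weekday() >= 5:
        return True
    ## Summer vacation, mid May - mid Aug
    if 5 < now.month < 8:
        return True
    if now.month == 5 and now.day > 15:
        return True
    if now.month == 8 and now.day < 15:
        return True
    return False

# Wed 2019-09-04 10:00 -- school hours, server should be OFF
print(ok_to_run(datetime.datetime(2019, 9, 4, 10, 0)))   # False
# Wed 2019-09-04 17:00 -- after school, server should be ON
print(ok_to_run(datetime.datetime(2019, 9, 4, 17, 0)))   # True
# Sat 2019-09-07 12:00 -- weekend, server should be ON
print(ok_to_run(datetime.datetime(2019, 9, 7, 12, 0)))   # True
```

<div>
A couple of minutes spent poking fixed datetimes at this function is much faster than waiting for the hourly timer to tell you the schedule is wrong.</div>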
<div>
This script should be executable, and since it tells systemctl to start/stop services, it should be run using sudo. Let's try this during school hours on a school day:<br />
<br />
<pre> $ chmod +x /home/me/minecraft-hourly.py
$ sudo /home/me/minecraft-hourly.py
// No output
$ systemctl status minecraft.service
● minecraft.service - Minecraft Server
Loaded: loaded (/etc/systemd/system/minecraft.service; enabled; vendor preset: enabled)
Active: inactive (dead)
// It worked!</pre>
<br /></div>
<div>
<br /></div>
<div>
Still working backward, let's create the systemd service that runs the script. The type is 'oneshot': this is not an always-running daemon; it's a script that does its job, then terminates.
<br />
<br />
<pre>## /etc/systemd/system/minecraft-hourly.service
[Unit]
Description=Minecraft shutdown during school and night
After=network.target
[Service]
Type=oneshot
ExecStart=/home/me/minecraft-hourly.py
StandardOutput=journal
[Install]
WantedBy=multi-user.target</pre>
<br /></div>
<div>
<br /></div>
<div>
We want the hourly script to be triggered by TWO events: the hourly timer, and the system starting up. This also means that we DON'T want minecraft.service to start automatically anymore; we want the script to start automatically and decide.
<br />
<br />
<pre> $ sudo systemctl daemon-reload // We added a new service
$ sudo systemctl enable minecraft-hourly.service // Run at boot
$ sudo systemctl disable minecraft.service // No longer needs to run at boot
</pre>
<br /></div>
<div>
<br /></div>
<div>
Let's test it again during school hours. It should shut down the Minecraft server. It did.
<br />
<br />
<pre> $ sudo systemctl start minecraft.service // Wait for it to finish loading (1-2 minutes)
$ sudo systemctl start minecraft-hourly.service
$ systemctl status minecraft.service
● minecraft.service - Minecraft Server
Loaded: loaded (/etc/systemd/system/minecraft.service; disabled; vendor preset: enabled)
Active: inactive (dead)
</pre>
<br /></div>
<div>
<br /></div>
<div>
Finally, let's set up a systemd timer to launch the hourly service...well, hourly.
<br />
<br />
<pre>## /etc/systemd/system/minecraft-hourly.timer
[Unit]
Description=Run the Minecraft script hourly
[Timer]
OnBootSec=0min
OnCalendar=*-*-* *:00:00
Unit=minecraft-hourly.service
[Install]
WantedBy=multi-user.target</pre>
<br /></div>
<div>
<br /></div>
<div>
Writing a timer, like writing a service, isn't enough. Remember to activate them.
<br />
<br />
<pre> $ sudo systemctl daemon-reload
$ sudo systemctl enable minecraft-hourly.timer // Start at boot
$ sudo systemctl start minecraft-hourly.timer // Start now</pre>
<br /></div>
<div>
<br /></div>
<div>
And let's check to see if the new timer is working
<br />
<br />
<pre> $ systemctl list-timers | grep minecraft
Tue 2019-08-20 15:00:30 CDT 30min left Tue 2019-08-20 14:00:52 CDT 29min ago minecraft-hourly.timer minecraft-hourly.service</pre>
<br /></div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-61173094013129811392019-08-20T13:03:00.001-05:002019-08-20T13:03:17.511-05:00Home Assistant in an LXD Container, USBIP to remote USB Z-Wave dongle<div>
This post merely ties together a few existing items, and adds a few twiddly bits specific to Homeassistant, LXD, usbip, and my specific Z-Wave USB dongle.</div>
<div>
<br /></div>
<div>
<pre>        |
        |
        + Headless Machine / LXD Host (Ubuntu Server 19.04, 192.168.1.11)
        |    + usbip client
        |    + Some other LXD container (192.168.1.111)
  LAN   |    + Home Assistant LXD container (192.168.1.112)
        |
        + Remote Raspberry Pi (Raspbian, 192.168.1.38)
        |    + usbip server
        |    + Some other Pi activity
        |    + Z-Wave Controller USB Dongle</pre>
</div>
<div>
<br /></div>
<div>
Our goal is for Home Assistant, running inside an LXD container, to use the Z-Wave Controller plugged into an entirely different machine.</div>
<div>
<br /></div>
<ol>
<li>Since this is a persistent service, harden the pi. SSH using keys only, etc.</li>
<li>Set up <a href="https://cheesehead-techblog.blogspot.com/2019/08/experimenting-with-usb-devices-across.html">usbip server on the Pi</a>. Include the systemd service so it restarts and re-binds at reboot.</li>
<li>Set up <a href="https://cheesehead-techblog.blogspot.com/2019/08/experimenting-with-usb-devices-across.html">the usbip client on the host</a> (the HOST, not the container)</li>
<li>If you haven't already, create the container and <a href="https://cheesehead-techblog.blogspot.com/2019/08/homeassistant-in-lxd-container.html">install homeassistant into the container</a></li>
</ol>
<div>
<br /></div>
<div>
The rest is specific to Ubuntu, to LXD, and to the USB Dongle.</div>
<div>
<br /></div>
<div>
The USB dongle is cheap and includes both Z-Wave and Zigbee, often sold under the 'Nortek' brand. When you plug it into a Linux system, it looks like this:<br />
<br />
<pre> $ lsusb
Bus xxx Device xxx: ID 10c4:8a2a Cygnal Integrated Products, Inc.
</pre>
<br /></div>
<div>
<br /></div>
<div>
<br />
When plugged in on the host (or forwarded via usbip), the dongle creates new nodes in /dev:<br />
<br />
<pre> host$ ls -l /dev/ | grep USB
crw-rw---- 1 root dialout 188, 0 Aug 18 10:29 ttyUSB0 // Z-Wave
crw-rw---- 1 root dialout 188, 1 Aug 18 10:29 ttyUSB1 // Zigbee</pre>
<br /></div>
<div>
<br /></div>
<div>
These old-style nodes mean that we can NOT use LXD's <a href="https://stgraber.org/2017/03/27/usb-hotplug-with-lxd-containers/">USB hotplug</a> feature (but there's an alternative). Also, it means that Home Assistant cannot autodetect the dongle's presence (we must manually edit the HA config).</div>
<div>
<br /></div>
<div>
Make the Z-Wave node accessible to the container while the container is shut down, or restart the container afterward. Without a restart, the container won't pick up the change. I've seen promises that it should be hot-pluggable. Maybe it is...but I needed to restart the container after this command. The command is very similar to the USB hotplug one, but uses 'unix-char' instead.
<br />
<br />
<pre> host$ lxc config device add home-assistant zwave unix-char path=/dev/ttyUSB0
Device zwave added to home-assistant
// home-assistant is the name of the LXD container
// zwave is the name of this new config. We could name it anything we want
// unix-char is the method of attaching we are using (instead of 'usb')
// path is the path on the HOST
host$ lxc restart home-assistant // Restart the container</pre>
<br /></div>
<div>
<br /></div>
<div>
Now we move into a shell prompt in the CONTAINER (not host). My personal preference, since I'm used to VMs, is to treat the container like a VM. It has (unnecessary) ssh access (key-only, of course), and non-root users to administer it and to run the 'hass' application. It also has the (unnecessary) Python venv. All of that bloat is a combination of preference and of following install documentation that simply didn't anticipate being able to run as root here. That seems like a whole new blog post. The upshot is that inside the container I have a user prompt ($) and use sudo instead of a root prompt (#). Your mileage may vary.
<br />
<br />
<pre> container$ ls -l /dev | grep USB
crw-rw---- 1 root root 188, 0 Aug 19 17:52 ttyUSB0
</pre>
<br /></div>
<div>
Look at the permissions: They have changed. And, as I said before, hass is not running as root within this container. Let's make a one-time change to make the Z-Wave USB dongle readable by hass.<br />
<br />
<pre> container$ sudo chown root:dialout /dev/ttyUSB0
// There should be no output
container$ ls -la /dev/ | grep USB
crw-rw---- 1 root dialout 188, 0 Aug 19 17:53 ttyUSB0
container$ sudo adduser homeassistant dialout // OPTIONAL - add the 'homeassistant' user to the correct group
// This is done early in most Home Assistant install instructions
// You may have done this already
// If not, restart the container so it takes effect</pre>
<br /></div>
<div>
<br /></div>
<div>
The chown seems to NOT persist across a reboot, so let's add a line to the systemd service so the chown occurs every time the container comes up.
<br />
<br />
<pre> container$ sudo nano /etc/systemd/system/home-assistant@homeassistant.service
 // Add to the [Service] section. The '+' prefix runs this one command as root,
 // since the service itself runs as the 'homeassistant' user
 ExecStartPre=+/bin/chown root:dialout /dev/ttyUSB0
 container$ sudo systemctl daemon-reload</pre>
<br /></div>
<div>
<br /></div>
<div>
Edit Home Assistant's config file, so hass knows where to find the Z-Wave node.
<br />
<br />
<pre> me@container$ sudo -u homeassistant -H -s
homeassistant@container$ nano /home/homeassistant/.homeassistant/configuration.yaml
zwave:
usb_path: /dev/ttyUSB0
homeassistant@container$ exit</pre>
<br /></div>
<div>
<br /></div>
<div>
Finally, debian-based systems must install one additional deb package to support Z-Wave.
<br />
<br />
<pre> container$ sudo apt install libudev-dev
</pre>
<br /></div>
<div>
<br /></div>
<div>
Restart Home Assistant (if it's running) to pick up the new config. Go into the web page, and try adding the Z-Wave integration.
<br />
<br />
<br />
<pre> container$ sudo systemctl restart home-assistant@homeassistant.service</pre>
</div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-89019569472939888322019-08-20T11:52:00.002-05:002019-08-20T11:55:04.367-05:00HomeAssistant in an LXD container<div>
It's time to migrate my Home Assistant from its experimental home in a Raspberry Pi to an LXD container on my Ubuntu server.</div>
<div>
<br /></div>
<div>
This has a lot of advantages, none of which are likely of the slightest interest to you. However, it also has one big problem: My Z-Wave USB dongle, currently plugged into the Pi, will need a new solution.</div>
<div>
<br /></div>
<div>
This blog post is about setting up Home Assistant in an LXD container. A different blog post will detail how to let the container see the USB dongle across the network.</div>
<div>
<br /></div>
<div>
<h2>
Preliminaries</h2>
</div>
<div>
First, the server needs to have <a href="https://cheesehead-techblog.blogspot.com/2019/08/how-i-set-up-lxd-on-my-ubuntu-1904.html" target="_blank">LXD installed</a>, and we need to <a href="https://cheesehead-techblog.blogspot.com/2019/08/creating-lxd-container-on-my-ubuntu.html" target="_blank">create a container</a> for Home Assistant.</div>
<div>
<br /></div>
<div>
In this case, I created a container called "homeassistant." It has a consistent IP address assigned by the LAN router (aa.bb.cc.dd), it has a user ("me") with sudo permission, and that user can ssh into the container. To the network it looks like a separate machine. To me, it behaves like a VM. To the host server, it acts like an unprivileged container.</div>
<div>
<br /></div>
<div>
<h2>
Installing</h2>
</div>
<div>
First we install the python dependencies:<br />
<br />
<pre> me@homeassistant:~$ sudo apt-get update
me@homeassistant:~$ sudo apt-get upgrade
me@homeassistant:~$ sudo apt-get install python3 python3-venv python3-pip libffi-dev libssl-dev</pre>
</div>
<div>
<br /></div>
<div>
Add a user named "homeassistant" to run the application. We need to add me to the new "homeassistant" group, so I can edit the config files.<br />
<br />
<pre> me@homeassistant:~$ sudo useradd -rm homeassistant -G dialout // Create the homeassistant user and group
me@homeassistant:~$ sudo adduser me homeassistant // Add me to the homeassistant group
me@homeassistant:~$ newgrp homeassistant // Add group to current session; no need to logout</pre>
</div>
<div>
<br /></div>
<div>
Create the homeassistant directory in /srv, set the ownership, and cd into the dir<br />
<br />
<pre> me@homeassistant:~$ cd /srv
me@homeassistant:~$ sudo mkdir homeassistant
me@homeassistant:~$ sudo chown homeassistant:homeassistant homeassistant
me@homeassistant:~$ cd /srv/homeassistant</pre>
</div>
<div>
<br /></div>
<div>
Switch to homeassistant user, create the venv, install homeassistant:<br />
<br />
<pre> me@homeassistant:/srv/homeassistant $ sudo -u homeassistant -H -s
homeassistant@homeassistant:/srv/homeassistant $ python3 -m venv .
homeassistant@homeassistant:/srv/homeassistant $ source bin/activate
(homeassistant) homeassistant@homeassistant:/srv/homeassistant $ python3 -m pip install wheel
(homeassistant) homeassistant@homeassistant:/srv/homeassistant $ pip3 install homeassistant</pre>
</div>
<div>
<br /></div>
<div>
<h2>
First Run and Testing</h2>
</div>
<div>
Start Home Assistant for the first time. This takes a few minutes - let it work:<br />
<br />
<pre> (homeassistant) $ hass</pre>
</div>
<div>
<br /></div>
<div>
Home Assistant should be up and running now. Try to log in to the container's web server (included with homeassistant). Remember the IP address we assigned the container (aa.bb.cc.dd)? Let's use it now. From another machine on the LAN, try to connect with a web browser: http://aa.bb.cc.dd:8123. If it doesn't work, stop here and start troubleshooting. You can CTRL+C hass to stop it, or you can stop it from inside the web page.
</div>
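<div>
If the page doesn't load, it helps to separate "hass isn't listening" from a network or firewall problem by probing the port directly. Here's a minimal sketch, assuming the default port 8123 (port_open is my own helper name, and the host in the example is a placeholder for your container's address):</div>

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("aa.bb.cc.dd", 8123) from another machine on the LAN
```

<div>
If the port answers but the page doesn't render, the problem is inside hass; if the port doesn't answer, look at the container networking first.</div>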
<div>
<br /></div>
<div>
<h2>
Make Config Files </h2>
</div>
<div>
Once Home Assistant is working, let's change the permissions of the config files so that members of the "homeassistant" group (like me) can edit the files:<br />
<br />
<pre> (homeassistant) $ exit // Exit the Venv
homeassistant@homeassistant:/srv/homeassistant $ exit // Exit the homeassistant user
me@homeassistant:/srv/homeassistant $ cd ~ // Return to home dir - optional
 me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/automations.yaml
 me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/configuration.yaml
 me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/groups.yaml
 me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/scripts.yaml
 me@homeassistant:~$ sudo chmod 664 /home/homeassistant/.homeassistant/secrets.yaml</pre>
</div>
<div>
<br /></div>
<div>
Let's link Home Assistant to systemd, so hass starts when the container comes up, and hass stops when the container goes down. (<a href="https://www.home-assistant.io/docs/autostart/systemd/" target="_blank">Reference</a>):<br />
<br />
<pre> me@homeassistant:~$ sudo nano /etc/systemd/system/home-assistant@homeassistant.service
[Unit]
Description=Home Assistant
After=network-online.target
[Service]
Type=simple
User=homeassistant
ExecStart=/srv/homeassistant/bin/hass -c "/home/homeassistant/.homeassistant"
// No need for a 'stop' command. Systemd will take care of it automatically
[Install]
WantedBy=multi-user.target
me@homeassistant:~$ sudo systemctl --system daemon-reload // Load systemd config changes
me@homeassistant:~$ sudo systemctl enable home-assistant@homeassistant.service // Or disable
me@homeassistant:~$ sudo systemctl start home-assistant@homeassistant.service // Or stop</pre>
</div>
<div>
<br /></div>
<div>
Finally, a word about updates: Home Assistant updates frequently, and since it's not deb-based, unattended-upgrades cannot see it. However, starting the application will <i>automatically</i> download and install Home Assistant updates. When this occurs, the web page will take a full minute (or three) before appearing. <i>Be Patient!</i><br />
<br />
<pre> me@homeassistant:~$ sudo systemctl restart home-assistant@homeassistant.service // Updates!</pre>
</div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-43225212031157607142019-08-17T22:02:00.000-05:002019-08-17T22:02:54.344-05:00USBIP into an LXD container<div>
In <a href="https://cheesehead-techblog.blogspot.com/2019/08/experimenting-with-usb-devices-across.html">a previous post</a>, I used USBIP to forward GPS data from A to B. 'A' was a USB GPS dongle plugged into a Raspberry Pi (Raspbian). 'B' was my laptop.</div>
<div>
<br /></div>
<div>
Now let's take it another step. Let's move 'B' to an LXD container sitting on a headless Ubuntu 19.04 server. No other changes: Same GPS data, same use of USBIP. 'A' is the same USB GPS dongle, the same Raspberry Pi, and the same Raspbian.</div>
<div>
<br /></div>
<div>
Setting up usbip on the server ('B') is identical to setting it up on my laptop. Recall that this particular dongle creates a /dev/ttyUSB_X device upon insertion, and it's the same on the Pi, the Laptop, and the Server<br />
<br />
<pre> me@server:~$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 006: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
me@server:~$ ls -l /dev/ttyUSB0
crw-rw---- 1 root dialout 188, 0 Aug 17 21:13 /dev/ttyUSB0</pre>
</div>
<div>
<br /></div>
<div>
LXD has a USB Hotplug feature that works for many, but not all, USB devices, connecting USB devices on the host to the container. Devices that create a custom entry in /dev (like /dev/ttyUSB_X) generally cannot use USB Hotplug...but CAN instead use 'unix-char' forwarding, which (seems to be) NOT hotpluggable.</div>
<div>
<br /></div>
<div>
Here's that LXD magic at work. In this case, I'm using a container called 'ha-test2', and let's simply name the dongle 'gps'. Do this while the container is stopped, or restart the container afterward<br />
<br />
<pre> me@server:~$ lxc config device add ha-test2 gps unix-char path=/dev/ttyUSB0
Device gps added to ha-test2</pre>
</div>
<div>
<br /></div>
<div>
Now we start the container, and then jump into a shell inside. We see that /dev/ttyUSB0 has indeed been forwarded. And we test to ensure data is flowing -- that we can read from /dev/ttyUSB0.<br />
<br />
<pre> me@server:~$ lxc start ha-test2
me@server:~$ lxc shell ha-test2
mesg: ttyname failed: No such device
root@ha-test2:~# ls -l /dev/ | grep tty
crw-rw-rw- 1 nobody nogroup 5, 0 Aug 18 02:11 tty
crw-rw---- 1 root root 188, 0 Aug 18 02:25 ttyUSB0
root@ha-test2:~# apt install gpsd-clients // Get the gpsmon application
root@ha-test2:~# gpsmon /dev/ttyUSB0</pre>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Making it permanent</h2>
</div>
<div>
It is permanent already. The 'lxc config' command edits the container's configuration, which persists across reboots.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Cleaning up</h2>
</div>
<div>
<br /></div>
<div>
There are two options for cleanup of the container.<br />
<ul>
<li>You can simply throw it away (it's a container)</li>
<li>Alternately, <pre> root@ha-test2:~# apt autoremove gpsd-clients</pre>
</li>
</ul>
</div>
<div>
<br /></div>
<div>
On the Server:<br />
<br />
<pre> me@server:~$ lxc config device remove ha-test2 gps
me@server:~$ sudo apt autoremove gpsd-clients // If you installed gpsmon to test connectivity</pre>
<br />
Also remember to detach USBIP, and uninstall usbip packages.</div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-27866456283700190352019-08-12T13:57:00.000-05:002019-08-17T20:52:46.807-05:00Experimenting with USB devices across the LAN with USBIP<div>
USBIP is a Linux tool for accessing USB devices across a network. I'm trying it out.</div>
<div>
<br /></div>
<div>
<a href="https://github.com/torvalds/linux/blob/master/tools/usb/usbip/README">Reference 1</a>, <a href="https://derushadigital.com/other%20projects/2019/02/19/RPi-USBIP-ZWave.html">Reference 2</a>, <a href="https://developer.ridgerun.com/wiki/index.php?title=How_to_setup_and_use_USB/IP">Reference 3</a>, <a href="https://sourceforge.net/p/usbip/discussion/">Reference 4</a>.</div>
<div>
<br /></div>
<div>
At one end of the room, I have a Raspberry Pi with
<br />
<ul>
<li>A Philips USB Webcam</li>
<li>A no-name USB GPS dongle</li>
<li>A Nortek USB Z-Wave/Zigbee network controller dongle</li>
</ul>
</div>
<div>
At the other end of the room is my laptop.</div>
<div>
<br /></div>
<div>
Before starting anything, I plugged all three into another system to ensure that they worked properly.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Raspberry Pi Server Setup</h2>
</div>
<div>
The Pi is running stock Raspbian Buster, with the default "pi" user replaced by a new user ("me") with proper ssh keys.</div>
<div>
<br /></div>
<div>
Before we start, here's what the 'lsusb' looks like on the Pi<br />
<br />
<pre> me@pi:~ $ lsusb
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub</pre>
</div>
<div>
<br /></div>
<div>
Now we plug in the three USB devices and see what changed<br />
<br />
<pre> me@pi:~ $ lsusb
Bus 001 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc.
Bus 001 Device 005: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 001 Device 006: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub</pre>
</div>
<div>
<br /></div>
<div>
And here are the new devices created or modified<br />
<br />
<pre> me@pi:~ $ ls -l /dev | grep 12 // 12 is today's date
drwxr-xr-x 4 root root 80 Aug 12 00:46 serial
lrwxrwxrwx 1 root root 7 Aug 12 00:46 serial0 -> ttyAMA0
drwxr-xr-x 4 root root 220 Aug 12 00:47 snd
crw--w---- 1 root tty 204, 64 Aug 12 00:46 ttyAMA0
crw-rw---- 1 root dialout 188, 0 Aug 12 00:46 ttyUSB0
drwxr-xr-x 4 root root 80 Aug 12 00:47 v4l
crw-rw---- 1 root video 81, 3 Aug 12 00:47 video0</pre>
</div>
<div>
<br /></div>
<div>
Looks like...<br />
<ul>
<li>/dev/ttyAMA0 is the Nortek Z-Wave controller</li>
<li>/dev/ttyUSB0 is the GPS stick</li>
<li>/dev/video0 is the webcam</li>
</ul>
</div>
<div>
<br /></div>
<div>
Installing USBIP onto Raspbian Buster is easy. However, it is DIFFERENT from stock Debian or Ubuntu. This step is Raspbian-only<br />
<br />
<pre> me@pi:~$ sudo apt install usbip</pre>
</div>
<div>
Now load the kernel module. The SERVER always uses the module 'usbip_host'.<br />
<br />
<pre> me@pi:~$ sudo modprobe usbip_host // does not persist across reboot</pre>
</div>
<div>
<br /></div>
<div>
List the devices the usbip can see. Note each Bus ID - we'll need those later<br />
<br />
<pre> me@pi:~ $ usbip list --local
- busid 1-1.1 (0424:ec00)
Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)
- busid 1-1.2 (0471:0329)
Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
- busid 1-1.4 (067b:2303)
Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
- busid 1-1.5 (10c4:8a2a)
Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
</pre>
<br />
<ul>
<li>We can ignore the Ethernet adapter</li>
<li>The Webcam is at 1-1.2</li>
<li>The GPS dongle is at 1-1.4</li>
<li>The Z-Wave Controller is at 1-1.5</li>
</ul>
</div>
<div>
<br /></div>
<div>
Bind the devices.<br />
<br />
<pre> me@pi:~$ sudo usbip bind --busid=1-1.2 // does not persist across reboot
usbip: info: bind device on busid 1-1.2: complete
me@pi:~$ sudo usbip bind --busid=1-1.4 // does not persist across reboot
usbip: info: bind device on busid 1-1.4: complete
me@pi:~$ sudo usbip bind --busid=1-1.5 // does not persist across reboot
usbip: info: bind device on busid 1-1.5: complete</pre>
<br />
The USB dongle will now appear to any client on the network just as though it was plugged in locally.<br />
<br />
If you want to STOP serving a USB device:<br />
<br />
<pre> me@pi:~$ sudo usbip unbind --busid=1-1.2</pre>
</div>
<div>
<br /></div>
<div>
The server (usbipd) process may or may not actually be running, serving on port 3240. Let's check:<br />
<pre> me@pi:~ $ ps -e | grep usbipd
18966 ? 00:00:00 usbipd
me@:~ $ sudo netstat -tulpn | grep 3240
tcp 0 0 0.0.0.0:3240 0.0.0.0:* LISTEN 18966/usbipd
tcp6 0 0 :::3240 :::* LISTEN 18966/usbipd</pre>
</div>
<div>
<br /></div>
<div>
We know that usbipd is active and listening. If not, start usbipd with:<br />
<br />
<pre> me@:~ $ sudo usbipd -D</pre>
<br />
You can run it more than once; only one daemon will start. The usbipd server does NOT need to be running to bind/unbind USB devices - you can start the server and bind/unbind in any order you wish. If you need to debug a connection, omit the -D (daemonize; fork into the background) so you can see the debug messages. See 'man usbipd' for the startup options to change port, IPv4, IPv6, etc.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Laptop Client Setup</h2>
</div>
<div>
Let's look at the USB devices on my laptop before starting:<br />
<br />
<pre> me@laptop:~$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd
Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub</pre>
</div>
<div>
<br /></div>
<div>
In stock Debian (not Raspbian) and Ubuntu, usbip is NOT a separate package. It's included in the 'linux-tools-generic' package, which many folks already have installed...<br />
<br />
<pre> me@laptop:~$ apt list linux-tools-generic
Listing... Done
linux-tools-generic/disco-updates 5.0.0.23.24 amd64 // Doesn't say "[installed]"
</pre>
<br />
...but apparently I don't. Let's install it.<br />
<br />
<pre> me@laptop:~$ sudo apt install linux-tools-generic</pre>
</div>
<div>
<br /></div>
<div>
Now load the kernel module. The CLIENT always uses the kernel module 'vhci-hcd'.<br />
<br />
<pre> me@laptop:~$ sudo modprobe vhci-hcd // does not persist across reboot</pre>
</div>
<div>
<br /></div>
<div>
List the available USB devices on the Pi server (IP addr aa.bb.cc.dd). Those Bus IDs should look familiar.<br />
<br />
<pre> me@laptop:~$ usbip list -r aa.bb.cc.dd // List available on the IP address
usbip: error: failed to open /usr/share/hwdata//usb.ids // Ignore this error
Exportable USB devices
======================
- aa.bb.cc.dd
1-1.5: unknown vendor : unknown product (10c4:8a2a)
: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5
: (Defined at Interface level) (00/00/00)
: 0 - unknown class / unknown subclass / unknown protocol (ff/00/00)
: 1 - unknown class / unknown subclass / unknown protocol (ff/00/00)
1-1.4: unknown vendor : unknown product (067b:2303)
: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4
: (Defined at Interface level) (00/00/00)
1-1.2: unknown vendor : unknown product (0471:0329)
: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2
: (Defined at Interface level) (00/00/00)</pre>
</div>
<div>
<br /></div>
<div>
Now we <i>attach</i> the three USB devices. This will not persist across a reboot.<br />
<br />
<pre> me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.2
 me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.4
 me@laptop:~$ sudo usbip attach --remote=aa.bb.cc.dd --busid=1-1.5
// No feedback upon success</pre>
</div>
<div>
<br /></div>
<div>
The remote USB devices now show in 'lsusb'<br />
<br />
<pre> me@laptop:~$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 004: ID 10c4:8a2a Cygnal Integrated Products, Inc.
Bus 003 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 003 Device 002: ID 0471:0329 Philips (or NXP) SPC 900NC PC Camera / ORITE CCD Webcam(PC370R)
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd
Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub</pre>
</div>
<div>
<br /></div>
<div>
And we can see that new devices have appeared in /dev. Based upon the order we attached, it's likely that<br />
<ul>
<li>The webcam 1-1.2 is at /dev/video2</li>
<li>The GPS dongle 1-1.4 is probably at /dev/ttyUSB0</li>
<li>The Z-Wave controller 1-1.5 is at /dev/ttyUSB1</li>
<li>The same dongle includes a Zigbee controller, too, at /dev/ttyUSB2</li>
</ul>
The Z-Wave/Zigbee controller has had its major number changed from 204 to 188. We don't know yet whether that's important.<br />
<br />
<pre> me@laptop:~$ ls -l /dev | grep 12
drwxr-xr-x 4 root root 80 Aug 12 00:56 serial
crw-rw---- 1 root dialout 188, 0 Aug 12 00:56 ttyUSB0
crw-rw---- 1 root dialout 188, 1 Aug 12 00:56 ttyUSB1
crw-rw---- 1 root dialout 188, 2 Aug 12 00:56 ttyUSB2
crw-rw----+ 1 root video 81, 2 Aug 12 00:56 video2</pre>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Testing Results</h2>
</div>
<div>
I tested the GPS using the 'gpsmon' application, included with the 'gpsd-clients' package. We don't actually need gpsd, we can connect gpsmon directly to the remote USB device.<br />
<br />
<pre> me@laptop:~$ gpsmon /dev/ttyUSB0
gpsmon:ERROR: SER: device open of /dev/ttyUSB0 failed: Permission denied - retrying read-only
gpsmon:ERROR: SER: read-only device open of /dev/ttyUSB0 failed: Permission denied</pre>
</div>
<div>
<br />
Aha, a permission issue, not a usbip failure!<br />
Add myself to the 'dialout' group, and then it works. A second test across a VPN connection, from a remote location, was also successful.<br />
<br />
<pre> me@laptop:~$ ls -la /dev/ttyUSB0
crw-rw---- 1 root dialout 188, 0 Aug 11 21:41 /dev/ttyUSB0 // 'dialout' group
me@laptop:~$ sudo adduser me dialout
Adding user `me' to group `dialout' ...
Adding user me to group dialout
Done.
me@laptop:~$ newgrp dialout // Prevents need to logout/login for new group to take effect
me@laptop:~$ gpsmon /dev/ttyUSB0
// Success!</pre>
</div>
<div>
<br /></div>
<div>
The webcam is immediately recognized in both Cheese and VLC, and plays across the LAN instantly, with a noticeable half-second lag. In a second test, across a VPN connection from a remote location, the USB device was recognized, but not enough data arrived in time for the applications to show the video.</div>
<div>
<br />
There were a few hiccups along the way. The --debug flag helps a lot to track down the problems:<br />
<ul>
<li>Client failed to connect with "system error" - turns out usbipd was not running on the server.</li>
<li>Client could see the list, but failed to attach with "attach failed" - needed to reboot the server (not sure why)</li>
<li>An active usbip connection prevents my laptop from sleeping properly</li>
<li>The Z-wave controller requires HomeAssistant or equivalent to run, a bit more than I want to install onto the testing laptop. Likely to have permission issues, too.</li>
</ul>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Cleaning up</h2>
</div>
<div>
To tell a CLIENT to cease using a remote USB (virtual unplug), you need to know the <i>usbip port</i> number. Well, not really: We have made only one persistent change; we could simply reboot instead.<br />
<br />
<pre> me@laptop:~$ usbip port // Not using sudo - errors, but still port numbers
Imported USB devices
====================
libusbip: error: fopen
libusbip: error: read_record
Port 00: &lt;port in use&gt; at Full Speed(12Mbps)
Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
5-1 -> unknown host, remote port and remote busid
-> remote bus/dev 001/007
libusbip: error: fopen
libusbip: error: read_record
Port 01: &lt;port in use&gt; at Full Speed(12Mbps)
Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
5-2 -> unknown host, remote port and remote busid
-> remote bus/dev 001/005
libusbip: error: fopen
libusbip: error: read_record
Port 02: &lt;port in use&gt; at Full Speed(12Mbps)
Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
5-3 -> unknown host, remote port and remote busid
-> remote bus/dev 001/006
me@laptop:~$ sudo usbip port // Using sudo, no errors and same port numbers
Imported USB devices
====================
Port 00: &lt;port in use&gt; at Full Speed(12Mbps)
Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
5-1 -> usbip://aa.bb.cc.dd:3240/1-1.2
-> remote bus/dev 001/007
Port 01: &lt;port in use&gt; at Full Speed(12Mbps)
Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
5-2 -> usbip://aa.bb.cc.dd:3240/1-1.4
-> remote bus/dev 001/005
Port 02: &lt;port in use&gt; at Full Speed(12Mbps)
Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
5-3 -> usbip://aa.bb.cc.dd:3240/1-1.5
-> remote bus/dev 001/006
me@laptop:~$ sudo usbip detach --port 00
usbip: info: Port 0 is now detached!
me@laptop:~$ sudo usbip detach --port 01
usbip: info: Port 1 is now detached!
me@laptop:~$ sudo usbip detach --port 02
usbip: info: Port 2 is now detached!
me@laptop:~$ lsusb // The remote USB devices are gone now
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 04f2:b56c Chicony Electronics Co., Ltd
Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
me@laptop:~$ sudo modprobe -r vhci-hcd // Remove the kernel module</pre>
</div>
<div>
<br /></div>
<div>
The only two persistent changes we made on the CLIENT were adding myself to the 'dialout' group and installing the 'linux-tools-generic' package, so let's remove them. If you ALREADY were in the 'dialout' group, or had the package installed for other reasons, then obviously don't remove it. It's not the system's responsibility to keep track of why you have certain permissions or packages -- that's the human's job. After this step, my CLIENT is back to stock Ubuntu.<br />
<br />
<pre> me@laptop:~$ sudo deluser me dialout // Takes effect after logout
me@laptop:~$ sudo apt autoremove linux-tools-generic // Immediate</pre>
</div>
<div>
<br /></div>
<div>
Telling a SERVER to stop sharing a USB device (virtual unplug) and shutting down the server is much easier. Of course, this is also a Pi, and we didn't make any changes permanent, so it might be easier to simply reboot it.<br />
<br />
<pre> me@pi:~$ usbip list -l
- busid 1-1.1 (0424:ec00)
Standard Microsystems Corp. : SMSC9512/9514 Fast Ethernet Adapter (0424:ec00)
- busid 1-1.2 (0471:0329)
Philips (or NXP) : SPC 900NC PC Camera / ORITE CCD Webcam(PC370R) (0471:0329)
- busid 1-1.4 (067b:2303)
Prolific Technology, Inc. : PL2303 Serial Port (067b:2303)
- busid 1-1.5 (10c4:8a2a)
Cygnal Integrated Products, Inc. : unknown product (10c4:8a2a)
me@pi:~$ sudo usbip unbind --busid=1-1.2
usbip: info: unbind device on busid 1-1.2: complete
me@pi:~$ sudo usbip unbind --busid=1-1.4
usbip: info: unbind device on busid 1-1.4: complete
me@pi:~$ sudo usbip unbind --busid=1-1.5
usbip: info: unbind device on busid 1-1.5: complete
me@pi:~$ sudo pkill usbipd</pre>
</div>
<div>
<br /></div>
<div>
The only persistent change we made on the Pi is installing the 'usbip' package. Once removed, we're back to stock Raspbian.<br />
<br />
<pre> me@pi:~$ sudo apt autoremove usbip</pre>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Making it permanent</h2>
</div>
<div>
There are two additional steps to making a permanent server, and essentially the same two steps to make a permanent client. This means a USBIP server that begins serving automatically upon boot, and a client that automatically connects to the server upon boot.</div>
<div>
<br /></div>
<div>
Add the USBIP kernel modules to /etc/modules so they load automatically at boot. To undo this on a client or server, delete the corresponding line from /etc/modules. You don't need to use 'nano'; use any text editor you wish, obviously.<br />
<br />
<pre> me@pi:~$ sudo nano /etc/modules // usbipd SERVER
usbip_host
me@laptop:~$ sudo nano /etc/modules // usbip CLIENT
vhci-hcd
// Another way to add the USBIP kernel module to /etc/modules on the SERVER
me@pi:~$ sudo -s // "sudo echo" won't work
me@pi:~# echo 'usbip_host' >> /etc/modules
me@pi:~# exit
// Another way to add the USBIP kernel module to /etc/modules on the CLIENT
me@laptop:~$ sudo -s // "sudo echo" won't work
me@laptop:~# echo 'vhci-hcd' >> /etc/modules
me@laptop:~# exit</pre>
</div>
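The reason `sudo echo 'usbip_host' >> /etc/modules` fails is that the redirection is performed by your own unprivileged shell, not by the elevated echo. Besides `sudo -s`, `tee -a` is a common workaround; here's the idea demonstrated on a temporary file standing in for /etc/modules:

```shell
# Scratch file standing in for /etc/modules
f=$(mktemp)
# tee is a single command, so `echo 'usbip_host' | sudo tee -a /etc/modules`
# would work where `sudo echo ... >> /etc/modules` does not (sudo omitted for the demo)
echo 'usbip_host' | tee -a "$f" > /dev/null
cat "$f"
# Prints: usbip_host
rm "$f"
```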
<div>
<br /></div>
<div>
Add a systemd job to the SERVER to automatically bind the USB devices. You can use systemd to start, stop, and restart the server conveniently, and to begin serving automatically at startup.<br />
<br />
<pre> me@pi:~$ sudo nano /lib/systemd/system/usbipd.service
[Unit]
Description=usbip host daemon
After=network.target
[Service]
Type=forking
ExecStart=/usr/sbin/usbipd -D
ExecStartPost=/bin/sh -c "/usr/sbin/usbip bind --$(/usr/sbin/usbip list -p -l | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"
ExecStop=/bin/sh -c "/usr/lib/linux-tools/$(uname -r)/usbip detach --port=$(/usr/lib/linux-tools/$(uname -r)/usbip port | grep '&lt;port in use&gt;' | sed -E 's/^Port ([0-9][0-9]).*/\1/')"
[Install]
WantedBy=multi-user.target</pre>
</div>
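The `ExecStartPost` line works by parsing the machine-readable listing from `usbip list -p -l`, which emits `busid=...#usbid=...#` lines. Here's a sketch of just that text processing, run against a sample line built from the device listing earlier in this post:

```shell
# Simulated line from `usbip list -p -l` (parseable format; values from the listing above)
sample='busid=1-1.5#usbid=10c4:8a2a#'
# Keep the busid=... field, exactly as ExecStartPost does, and build the bind command
printf 'usbip bind --%s\n' "$(printf '%s\n' "$sample" | grep '#usbid=10c4:8a2a#' | cut '-d#' -f1)"
# Prints: usbip bind --busid=1-1.5
```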
<div>
<br /></div>
<div>
To start the new SERVER:
<br />
<pre> me@pi:~$ sudo pkill usbipd // End the current server daemon (if any)
me@pi:~$ sudo systemctl --system daemon-reload // Reload system jobs because one changed
me@pi:~$ sudo systemctl enable usbipd.service // Set to run at startup
me@pi:~$ sudo systemctl start usbipd.service // Run now</pre>
</div>
<div>
<br /></div>
<div>
Add a systemd job to the CLIENT to automatically attach the remote USB devices at startup. You can use systemd to unplug conveniently before sleeping, and to reset the connection if needed. Note: On the "ExecStart" line, substitute your server's IP address for aa.bb.cc.dd in two places.<br />
<br />
<pre> me@laptop:~$ sudo nano /lib/systemd/system/usbip.service
[Unit]
Description=usbip client
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c "/usr/bin/usbip attach -r aa.bb.cc.dd -b $(/usr/bin/usbip list -r aa.bb.cc.dd | grep '10c4:8a2a' | cut -d: -f1)"
ExecStop=/bin/sh -c "/usr/bin/usbip detach --port=$(/usr/bin/usbip port | grep '&lt;port in use&gt;' | sed -E 's/^Port ([0-9][0-9]).*/\1/')"
[Install]
WantedBy=multi-user.target</pre>
</div>
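The `ExecStop` line recovers the attached port numbers by scraping `usbip port` output. A sketch of just that pipeline, fed sample lines copied from the listing earlier (note the two-digit assumption baked into the sed pattern):

```shell
# Simulated `usbip port` output lines
sample='Port 00: <port in use> at Full Speed(12Mbps)
Port 01: <port in use> at Full Speed(12Mbps)'
# Extract the two-digit port numbers, exactly as ExecStop does
printf '%s\n' "$sample" | grep '<port in use>' | sed -E 's/^Port ([0-9][0-9]).*/\1/'
# Prints: 00 then 01, one per line -- each a candidate --port argument for `usbip detach`
```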
<div>
<br /></div>
<div>
To start the new CLIENT attachment(s):<br />
<br />
<pre> me@laptop:~$ sudo systemctl --system daemon-reload // Reload system jobs because one changed
me@laptop:~$ sudo systemctl enable usbip.service // Set to run at startup
me@laptop:~$ sudo systemctl start usbip.service // Run now</pre>
</div>
<div>
<br /></div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com1tag:blogger.com,1999:blog-2703060415027607989.post-87202888892311699762019-08-10T00:14:00.002-05:002020-08-17T22:22:34.256-05:00Experiment: Home Assistant in an LXD container without a venv<h3>Update: August 2020 (one year later)</h3>
<p>Here's a slightly different way of doing it entirely from the host. Tested with Home Assistant version 0.114.</p>
<pre>lxc launch -p lanprofile ubuntu:focal ha-test
# Update apt so we can install pip
cat <<EOF > /tmp/container-sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main universe
deb http://security.ubuntu.com/ubuntu focal-security main universe
EOF
lxc file push /tmp/container-sources.list ha-test/etc/apt/sources.list
lxc exec ha-test -- apt update
lxc exec ha-test -- apt upgrade
# Here's the meat: Installing pip3, then using pip3 to install HA and dependencies.
lxc exec ha-test -- apt install python3-pip
lxc exec ha-test -- pip3 install aiohttp_cors defusedxml emoji hass_nabucasa home-assistant-frontend homeassistant mutagen netdisco sqlalchemy zeroconf
# Example of fixing a version error message that occurs during pip install:
# ERROR: homeassistant 0.114.2 has requirement cryptography==2.9.2, but you'll have cryptography 2.8 which is incompatible.
lxc exec ha-test -- pip3 install --upgrade cryptography==2.9.2
# Can't start the web browser without knowing the container's IP address.
lxc list | grep ha-test
| ha-test | RUNNING | 192.168.2.248 (eth0) | | CONTAINER | 0 |
# Run Hass
lxc exec ha-test -- hass
Unable to find configuration. Creating default one in /root/.homeassistant
# Web browser: http://192.168.2.248:8123....and there it is!
</pre>
<br/>
<hr>
<div>
<a href="https://www.home-assistant.io/" target="_blank">Home Assistant</a> usually runs in a Python 3 virtual environment (venv). The developers wisely chose Python 3 because it has all the libraries they need, and wisely chose venv to create a single, predictable platform upon which Home Assistant can run. Users like it because a couple of extra shell incantations are the difference between success and cryptic-error hell.</div>
<div>
<br /></div>
<div>
Let's see if I can get HA 0.97 to run on Ubuntu 19.04. In this case, I'm running it in a disposable LXD container so I can just throw it away after the experiment is complete. This experiment turned out to be about 75% successful - Home Assistant installs and runs outside the venv, but logging and sqlalchemy failed to install, so the final product had some limitations.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Setup</h2>
</div>
<div>
First, let's create the LXD container. <a href="https://cheesehead-techblog.blogspot.com/2019/08/how-i-set-up-lxd-on-my-ubuntu-1904.html" target="_blank">Step 1</a>. <a href="https://cheesehead-techblog.blogspot.com/2019/08/creating-lxd-container-on-my-ubuntu.html" target="_blank">Step 2</a>. I use a networking profile ("lanprofile") that uses DHCP to request an IP address from my router instead of the local server. I'm using an Ubuntu 19.04 ("Disco") image for the container. And I'm calling the container "ha-test2," second in a line of Home Assistant test containers.<br />
<br />
<pre> me@host:~$ lxc launch -p lanprofile ubuntu:disco ha-test2</pre>
</div>
<div>
<br /></div>
<div>
After a minute or two, the container is running and has picked up an IP address from the router.<br />
<br />
<pre> me@host:~$ lxc list
+----------+---------+----------------------+-----
| NAME | STATE | IPV4 |
+----------+---------+----------------------+-----
| ha-test2 | RUNNING | 192.168.1.252 (eth0) |
+----------+---------+----------------------+-----</pre>
</div>
<div>
<br /></div>
<div>
Let's enter the container. Note the change to a root prompt within the container. This is an <i>unprivileged</i> container (LXD's default), so root within the container is NOT root for the rest of the system. Note also the mysterious "ttyname failed: No such device" message, caused by a very minor bug that does not affect our use of the container in any way.<br />
<br />
<pre> me@host:~$ lxc shell ha-test2
mesg: ttyname failed: No such device
root@ha-test2:~#</pre>
</div>
<div>
<br /></div>
<div>
OPTIONAL: Limit the Ubuntu sources. We don't need -restricted or -multiverse or -proposed or -backports, etc. I replaced the entire file with the following three lines. Proper format is important!<br />
<br />
<pre> root@ha-test2:~# nano /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu disco main universe
deb http://archive.ubuntu.com/ubuntu disco-updates main universe
deb http://security.ubuntu.com/ubuntu disco-security main universe</pre>
</div>
<div>
<br /></div>
<div>
OPTIONAL: Expand Unattended Upgrades to handle 100% of the limited sources. I replaced the entire file with the following five lines.<br />
<br />
<pre> root@ha-test2:~# nano /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
"${distro_id}:${distro_codename}-updates";
};</pre>
</div>
<div>
<br /></div>
<div>
Since this is the first run of the package manager...<br />
<br />
<pre> root@ha-test2:~# apt update
root@ha-test2:~# apt upgrade</pre>
</div>
<div>
<br /></div>
<div>
Home Assistant uses Python 3's pip, not debs. So we install pip.<br />
<br />
<pre> root@ha-test2:~# apt install python3-pip</pre>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
First Try - Learning Curve</h2>
</div>
<div>
Now we can use pip to install Home Assistant. This command will run for a few minutes and produce a lot of output as it downloads many dependencies. Some of that output seems, at first glance, like errors; read carefully, and you'll see they are mostly <i>un</i>install warnings that would only matter if packages were being upgraded...which they are not, of course.<br />
<br />
<pre> root@ha-test2:~# pip3 install homeassistant</pre>
</div>
<div>
<br /></div>
<div>
The first run of 'hass' (the Home Assistant program name) is where we start to encounter errors that need to be investigated and fixed. When the system ground to a halt for several minutes, I used CTRL+C to end the process and return to a shell prompt.<br />
<br />
<pre> root@ha-test2:~# hass
// Lots of success...but then:
2019-08-09 22:35:57 INFO (MainThread) [homeassistant.bootstrap] Setting up {'system_log'}
2019-08-09 22:35:57 INFO (SyncWorker_2) [homeassistant.util.package] Attempting install of aiohttp_cors==0.7.0
2019-08-09 22:36:01 INFO (MainThread) [homeassistant.setup] Setting up http
2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Error during setup of component http
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
hass, processed_config
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/__init__.py", line 178, in async_setup
ssl_profile=ssl_profile,
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/__init__.py", line 240, in __init__
setup_cors(app, cors_origins)
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/http/cors.py", line 22, in setup_cors
import aiohttp_cors
ModuleNotFoundError: No module named 'aiohttp_cors'
2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of system_log. Setup failed for dependencies: http
2019-08-09 22:36:01 ERROR (MainThread) [homeassistant.setup] Setup failed for system_log: Could not set up all dependencies.
2019-08-09 22:36:01 INFO (SyncWorker_4) [homeassistant.util.package] Attempting install of sqlalchemy==1.3.5
2019-08-09 22:36:11 INFO (MainThread) [homeassistant.setup] Setting up recorder
Exception in thread Recorder:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/recorder/__init__.py", line 211, in run
from .models import States, Events
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/recorder/models.py", line 6, in &lt;module&gt;
from sqlalchemy import (
ModuleNotFoundError: No module named 'sqlalchemy'
2019-08-09 22:36:21 WARNING (MainThread) [homeassistant.setup] Setup of recorder is taking over 10 seconds.
// Thread hangs here. Use CTRL+C to abort back to a shell prompt</pre>
</div>
<div>
<br /></div>
<div>
There are two errors there. Both are simply bugs in Home Assistant's list of dependencies. The developers neglected to include dependencies upon "aiohttp_cors" and "sqlalchemy". Let's uninstall all the pip packages and dependencies and start over. The dependencies are listed in the 'pip3 show' command. Remember to delete pip from the list of removals, and to add homeassistant. The pip3 uninstall command asks a lot of questions about deleting files and directories -- as long as the offered removals are in /usr/local, it won't break anything.</div>
<div>
<br />
<pre> root@ha-test2:~# pip3 show homeassistant
Name: homeassistant
Version: 0.97.1
Summary: Open-source home automation platform running on Python 3.
Home-page: https://home-assistant.io/
Author: The Home Assistant Authors
Author-email: hello@home-assistant.io
License: Apache License 2.0
Location: /usr/local/lib/python3.7/dist-packages
Requires: pyyaml, async-timeout, bcrypt, voluptuous, voluptuous-serialize, importlib-metadata, ruamel.yaml, jinja2, cryptography, python-slugify, pip, PyJWT, requests, aiohttp, certifi, attrs, astral, pytz
Required-by:
root@ha-test2:~# pip3 uninstall homeassistant pyyaml async-timeout bcrypt voluptuous voluptuous-serialize importlib-metadata ruamel.yaml jinja2 cryptography python-slugify PyJWT requests aiohttp certifi attrs astral pytz</pre>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Second Try - Getting closer</h2>
</div>
<div>
For the second try, let's add those two missing dependencies. This time, logging and sqlalchemy started successfully, and we progressed to the next errors. The web server started, but the Home Assistant front end hosted on the web server failed. The .homeassistant config directory was created and populated.<br />
<br />
<pre> root@ha-test2:~# pip3 install homeassistant aiohttp_cors sqlalchemy
[lots of installing]
root@ha-test2:~# hass
2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setting up onboarding
2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain config took 0.9 seconds.
2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setting up automation
2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain automation took 0.0 seconds.
2019-08-09 23:56:16 INFO (MainThread) [homeassistant.setup] Setup of domain onboarding took 0.0 seconds.
2019-08-09 23:56:20 ERROR (MainThread) [homeassistant.config] Unable to import ssdp: No module named 'netdisco'
2019-08-09 23:56:20 ERROR (MainThread) [homeassistant.setup] Setup failed for ssdp: Invalid config.
2019-08-09 23:56:20 INFO (SyncWorker_3) [homeassistant.util.package] Attempting install of distro==1.4.0
2019-08-09 23:56:24 INFO (MainThread) [homeassistant.setup] Setting up updater
2019-08-09 23:56:24 INFO (MainThread) [homeassistant.setup] Setup of domain updater took 0.0 seconds.
2019-08-09 23:56:24 INFO (SyncWorker_1) [homeassistant.util.package] Attempting install of mutagen==1.42.0
2019-08-09 23:56:29 INFO (SyncWorker_2) [homeassistant.loader] Loaded google_translate from homeassistant.components.google_translate
2019-08-09 23:56:29 INFO (SyncWorker_3) [homeassistant.util.package] Attempting install of hass-nabucasa==0.16
2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up cloud
2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.setup] Error during setup of component cloud
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
hass, processed_config
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/cloud/__init__.py", line 167, in async_setup
from hass_nabucasa import Cloud
ModuleNotFoundError: No module named 'hass_nabucasa'
2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up mobile_app
2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.config] Unable to import zeroconf: No module named 'zeroconf'
2019-08-09 23:56:50 ERROR (MainThread) [homeassistant.setup] Setup failed for zeroconf: Invalid config.
2019-08-09 23:56:50 INFO (SyncWorker_2) [homeassistant.util.package] Attempting install of home-assistant-frontend==20190805.0
2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setup of domain mobile_app took 0.0 seconds.
2019-08-09 23:56:50 INFO (SyncWorker_3) [homeassistant.loader] Loaded notify from homeassistant.components.notify
2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setting up notify
2019-08-09 23:56:50 INFO (MainThread) [homeassistant.setup] Setup of domain notify took 0.0 seconds.
2019-08-09 23:56:50 INFO (MainThread) [homeassistant.components.notify] Setting up notify.mobile_app
2019-08-09 23:57:24 INFO (MainThread) [homeassistant.setup] Setting up frontend
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Error during setup of component frontend
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/homeassistant/setup.py", line 168, in _async_setup_component
hass, processed_config
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/frontend/__init__.py", line 267, in async_setup
root_path = _frontend_root(repo_path)
File "/usr/local/lib/python3.7/dist-packages/homeassistant/components/frontend/__init__.py", line 244, in _frontend_root
import hass_frontend
ModuleNotFoundError: No module named 'hass_frontend'
2019-08-09 23:57:24 INFO (SyncWorker_0) [homeassistant.util.package] Attempting install of gTTS-token==1.1.3
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of logbook. Setup failed for dependencies: frontend
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for logbook: Could not set up all dependencies.
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of map. Setup failed for dependencies: frontend
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for map: Could not set up all dependencies.
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Unable to set up dependencies of default_config. Setup failed for dependencies: cloud, frontend, logbook, map, ssdp, zeroconf
2019-08-09 23:57:24 ERROR (MainThread) [homeassistant.setup] Setup failed for default_config: Could not set up all dependencies.
2019-08-09 23:57:30 INFO (MainThread) [homeassistant.setup] Setting up tts
2019-08-09 23:57:30 INFO (SyncWorker_1) [homeassistant.components.tts] Create cache dir /root/.homeassistant/tts.
2019-08-09 23:57:30 INFO (MainThread) [homeassistant.setup] Setup of domain tts took 0.0 seconds.
2019-08-09 23:57:30 INFO (MainThread) [homeassistant.bootstrap] Home Assistant initialized in 87.48s
2019-08-09 23:57:30 INFO (MainThread) [homeassistant.core] Starting Home Assistant
2019-08-09 23:57:30 INFO (MainThread) [homeassistant.core] Timer:starting</pre>
</div>
<div>
<br /></div>
<div>
We have two missing dependencies (netdisco and zeroconf), and a bunch of missing <i>internal</i> homeassistant functions. This looks a bit like a race condition - the setup script is expecting functions that aren't-quite-ready yet. This also explains why many of these errors do not appear during a subsequent run of hass.</div>
<div>
<br /></div>
<div>
Let's delete and try again with those two additional dependencies....
<br />
<pre> root@ha-test2:~# pip3 uninstall homeassistant pyyaml async-timeout bcrypt voluptuous voluptuous-serialize importlib-metadata ruamel.yaml jinja2 cryptography python-slugify PyJWT requests aiohttp certifi attrs astral pytz aiohttp_cors sqlalchemy
root@ha-test2:~# rm -r .homeassistant/</pre>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Third Try - Close enough to call it success</h2>
</div>
<div>
For the third try, let's add the two additional missing dependencies, netdisco and zeroconf.<br />
<br />
<pre> root@ha-test2:~# pip3 install homeassistant aiohttp_cors sqlalchemy netdisco zeroconf
[lots of installing]
root@ha-test2:~# hass
// No missing dependencies
// Same setup errors</pre>
</div>
<div>
<br /></div>
<div>
On the <i>first</i> run of hass, the dependency errors are gone, but the setup errors remain and the website is still unavailable. On the <i>second</i> run of hass, no errors at all; the website and all features work. The system is ready for systemd integration to bring hass up and down with the system.</div>
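To finish that off, here is a minimal unit sketch. Treat it as an assumption-laden starting point rather than the official recipe: the unit name, the /usr/local/bin/hass path, and running as root (matching the /root/.homeassistant config directory created above) are all my guesses for this particular non-venv install.

```
root@ha-test2:~# nano /etc/systemd/system/home-assistant.service
[Unit]
Description=Home Assistant (non-venv experiment)
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/hass
Restart=on-failure

[Install]
WantedBy=multi-user.target
root@ha-test2:~# systemctl daemon-reload
root@ha-test2:~# systemctl enable home-assistant.service
root@ha-test2:~# systemctl start home-assistant.service
```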
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
Substituting Debs for Pips</h2>
</div>
<div>
Many of those pip dependencies are also available in Debian and Ubuntu. Let's try adding the debs, one by one, and see if we can reduce the number of pip dependencies. This is a separate experiment, obviously.</div>
<div>
<br /></div>
<div>
The process here is to delete homeassistant, its pip dependencies, and its config files, then replace Pips with Debs. We want to see if homeassistant pulls in the relevant pip anyway. If so, we can delete that pip, then see if homeassistant installs and initializes properly. Note that this experiment is not persistent: Home Assistant updates (like 0.97 to 0.98) will pull in all the removed pips again.</div>
<div>
<br /></div>
<div>
Several packages are <i>already installed</i> in the default Ubuntu 19.04 image, but are superseded by pips:
<br />
<ul>
<li>python3-certifi, python3-cryptography, python3-jinja2, python3-multidict, python3-requests, python3-yarl</li>
</ul>
Some packages are not available as debs at all. These are all dependencies of homeassistant:
<br />
<ul>
<li>attrs, homeassistant, importlib-metadata, PyJWT, pyyaml, zipp</li>
</ul>
Several packages, once installed, no longer pull in the pip:
<br />
<br />
<pre> root@ha-test2:~# apt install python3-async-timeout python3-voluptuous-serialize</pre>
<br />
These packages, after installed, continue to pull in the pip anyway:<br />
<br />
<pre> root@ha-test2:~# apt install python3-aiohttp python3-aiohttp-cors python3-astral python3-async-timeout python3-bcrypt python3-python-slugify python3-ruamel.yaml python3-tz python3-voluptuous python3-voluptuous-serialize</pre>
<br />
<br />
After installing all those debs, the homeassistant install looks something like this:<br />
<br />
<pre> root@ha-test2:~# pip3 install homeassistant
root@ha-test2:~# pip3 uninstall aiohttp aiohttp_cors astral bcrypt certifi cryptography jinja2 multidict python-slugify pytz requests ruamel.yaml voluptuous yarl
root@ha-test2:~# hass // first time - no new install errors
root@ha-test2:~# hass // frontend works, no startup errors</pre>
</div>
<div>
<br /></div>
<div>
Of course, this was an experiment - your mileage may vary. You may encounter problems that I did not. But it IS clearly possible to install Home Assistant into a non-venv environment, clearly possible to install Home Assistant into an LXD container, and clearly possible to more closely integrate Home Assistant into a Debian-based system.</div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-92159362499159863122019-08-08T10:57:00.000-05:002019-08-17T20:49:48.823-05:00Creating an LXD container on my Ubuntu 19.04 host<div>
I just finished <a href="https://cheesehead-techblog.blogspot.com/2019/08/how-i-set-up-lxd-on-my-ubuntu-1904.html" target="_blank">setting up LXD</a> on my Ubuntu 19.04 server, and I'm ready to create a container.</div>
<div>
<br /></div>
<div>
Installing the service into the container is a separate step - this is just setting up and configuring the container itself.</div>
<div>
<br /></div>
<div>
<h2>
Creating a disposable container:</h2>
</div>
<div>
Actually, we did this already with our test container:<br />
<br />
<pre> me@host:~$ lxc launch -p lanprofile ubuntu:disco test</pre>
</div>
<div>
<br /></div>
<div>
Let's see if that container is still there:<br />
<br />
<pre> me@host:~$ lxc list
+----------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| test | RUNNING | 192.168.1.124 (eth0)| 2615:a000:141f:e267:215:3eef:fe2a:c55d (eth0) | PERSISTENT | 0 |
+----------------+---------+---------------------+-----------------------------------------------+------------+-----------+</pre>
</div>
<div>
<br /></div>
<div>
We can enter the container to run commands in its shell. Note that root inside the container is not root on the host (the container is unprivileged). The container comes with a default "ubuntu" user, but since we have root we don't seem to need it.<br />
<br />
<pre> me@host:~$ lxc shell test
mesg: ttyname failed: No such device // Ignore this message
root@test:~# // Look, a root prompt within the container!
root@test:~# exit
logout
me@host:~$ // Back to the host</pre>
</div>
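If you only need a single command rather than an interactive shell, `lxc exec` runs it directly from the host (same container as above):

```
me@host:~$ lxc exec test -- apt update // Runs inside the container, output on the host
```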
<div>
<br /></div>
<div>
We can stop and then restart containers. No sudo needed, these are <i>unprivileged</i> containers:<br />
<br />
<pre> me@host:~$ lxc stop test
me@host:~$ lxc start test</pre>
</div>
<div>
<br /></div>
<div>
And when we are done we can destroy the container:<br />
<br />
<pre> me@host:~$ lxc stop test
me@host:~$ lxc delete test</pre>
</div>
<div>
<br /></div>
<div>
<h2>
Creating a long-term container:</h2>
</div>
<div>
Now I want to create a container for a long-term service. This time we add security: non-root users, independent ssh access, and package upgrades. This container can function like a lightweight VM, though with rather less overhead.<br />
<br />
<pre> me@host:~$ lxc launch -p lanprofile ubuntu:disco test_2</pre>
</div>
<div>
<br /></div>
<div>
We can login to our LAN router, and see the test_2 device on the network. This is a good opportunity to assign it a consistent IP address, so you can always find the container again. Stop and restart the container so it picks up the new IP address.</div>
<div>
<br /></div>
<div>
Let's create a user for me with ssh access<br />
<br />
<pre> me@host:~$ lxc shell test_2
mesg: ttyname failed: No such device // Ignore this message
root@test_2:~# adduser me // Includes creating a password
root@test_2:~# adduser me sudo // Add me to the "sudo" group for easy remote administration via ssh
root@test_2:~# nano /etc/ssh/sshd_config
PasswordAuthentication yes // Temporary while we set up ssh keys
root@test_2:~# systemctl restart sshd
root@test_2:~# exit</pre>
</div>
<div>
<br /></div>
<div>
Copy my key. Remember to do this from ALL systems that will SSH into this container:<br />
<br />
<pre> me@desktop:~$ ssh-copy-id me@192.168.1.124</pre>
</div>
<div>
<br /></div>
<div>
Now I can ssh directly into the container using keys, so let's end password login.<br />
<br />
<pre> me@test_2:~$ sudo nano /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
me@test_2:~$ sudo systemctl restart sshd</pre>
</div>
<div>
<br /></div>
<div>
Remove the default "ubuntu" user, since we won't be using it.<br />
<br />
<pre> me@test_2:~$ sudo deluser ubuntu
me@test_2:~$ sudo rm -r /home/ubuntu</pre>
</div>
<div>
<br /></div>
<div>
Moving on to package management, simplify the apt sources so only -main and -universe are seen in -updates and -security. We only need what the installed service requires.<br />
<br />
<pre> me@test_2:~$ sudo nano /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu disco main universe
deb http://archive.ubuntu.com/ubuntu disco-updates main universe
deb http://security.ubuntu.com/ubuntu disco-security main universe
me@test_2:~$ sudo apt update // Since the sources have changed
me@test_2:~$ sudo apt upgrade // Now is a good time</pre>
</div>
<div>
<br /></div>
<div>
Finally, let's install unattended-upgrades and configure it to upgrade ALL packages from our limited apt sources. This means we are less likely to discover months of unapplied upgrades and security fixes. This is optional, merely my preference:<br />
<br />
<pre> me@test_2:~$ sudo apt install unattended-upgrades
me@test_2:~$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
// Uncomment the following two lines:
"${distro_id}:${distro_codename}-security";
"${distro_id}:${distro_codename}-updates";</pre>
</div>
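A dry run is a handy way to confirm the configuration before trusting it. Note the binary is singular (<code>unattended-upgrade</code>) even though the package name is plural; this is a sketch, guarded so it does nothing on a machine without the package:

```shell
# Simulate an unattended-upgrades run without installing anything.
if command -v unattended-upgrade >/dev/null 2>&1; then
    sudo -n unattended-upgrade --dry-run 2>&1 | tail -n 3
    result="dry run attempted"
else
    result="unattended-upgrades not installed here"
fi
echo "$result"
```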
<div>
<br /></div>
<div>
And there we have it - a long-term container that is easily (but securely) accessed via ssh for maintenance and automatically pulls package updates. Lightweight VM-like behavior with a consistent IP address. Note that "lxc shell" on the host will still give a root prompt, but recall that the purpose of a container is to keep the service from getting out, not to keep the host from getting in. Also note that, due to macvlan networking, the container cannot communicate with the host over the network.</div>
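You can see that macvlan limitation for yourself. The IP address below is a hypothetical container address - substitute your own. Under macvlan this ping fails from the host, while any other machine on the LAN can reach the container fine:

```shell
CONTAINER_IP="192.168.1.124"   # hypothetical - use your container's address
# From the host this ping will fail under macvlan; from another LAN
# machine it should succeed.
if ping -c 1 -W 1 "$CONTAINER_IP" >/dev/null 2>&1; then
    reachable=yes
else
    reachable=no
fi
echo "container reachable from this machine: $reachable"
```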
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-71996777918491498542019-08-08T09:29:00.001-05:002019-08-17T20:49:13.670-05:00How I set up LXD on my Ubuntu 19.04 server<div>
I have a lovely little server that is slowly filling with LXD containers.</div>
<div>
<br /></div>
<div>
Here is how I set up LXD on the server (host).</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div>
<h2>
Install LXD:</h2>
</div>
<div>
My host started as a 19.04 <a href="http://cdimages.ubuntu.com/netboot/" target="_blank">minimal install</a>, so snapd wasn't included. LXD is packaged only for snap now (the deb simply installs the snap).<br />
These references were extremely helpful. Read (or re-read) them: <a href="https://tutorials.ubuntu.com/tutorial/tutorial-setting-up-lxd-1604">reference 1</a> <a href="https://help.ubuntu.com/lts/serverguide/lxd.html">reference 2</a>
<br />
<br />
<pre> host:~$ sudo apt install snapd
host:~$ sudo snap install lxd
host:~$ sudo adduser me lxd // Add me to the LXD group
host:~$ newgrp lxd // New group takes effect without logout/login</pre>
</div>
<div class="separator" style="clear: both;">
</div>
<div>
<h2 style="clear: both; text-align: left;">
First Run:</h2>
</div>
<div>
The very first time you run LXD, it must be initialized. It asks a set of questions to set up the default profile. I find that the defaults are quite satisfactory, with one exception - I named the storage:</div>
<div>
<br />
<pre> host:~$ lxd init // First run of LXD only - creates profile
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: container_storage
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:</pre>
</div>
<div>
<br /></div>
<div>
<h2 style="clear: both; text-align: left;">
My Preferences:</h2>
</div>
<div class="separator" style="clear: both; text-align: left;">
Your preferences may vary.</div>
<div>
<ol>
<li>I prefer nano over vi for the default text editor. I know it's silly to have such a preference, but I do.</li>
<li>My containers get their IP address from the LAN router instead of the host, using <i>macvlan</i>. This means that containers can talk to the LAN, and to each other, but not to the host. Personally, I see this as a feature, not a bug.</li>
</ol>
</div>
<div>
<br /></div>
<div>
Set default editor as nano (instead of vi). This is obviously nothing but catering to my personal taste, and has no effect on other steps:<br />
<br />
<pre> host:~$ echo 'export EDITOR=nano' >> ~/.profile
host:~$ source ~/.profile</pre>
</div>
<div>
<br /></div>
<div>
Change the networking profile from default (NAT) to instead pull IP addresses for each container from the LAN router (macvlan). This is a matter of personal taste - it simply means I have one place to set IP addresses, the router, for all devices and containers. This only works with wired networking...if you are using wifi to connect a server full of containers to the LAN, then you really should rethink your plan anyway! (<a href="https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/">Reference</a>)<br />
<br />
<pre> host:~$ ip route show default 0.0.0.0/0 // Learn the eth interface
default via 192.168.2.1 dev enp0s3 proto dhcp metric 600 // Mine is enp0s3
host:~$ lxc profile copy default lanprofile // Make mistakes on a copy, not the original
host:~$ lxc profile device set lanprofile eth0 nictype macvlan // Change nictype field
host:~$ lxc profile device set lanprofile eth0 parent enp0s3 // Change parent field to real eth interface
</pre>
</div>
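It's worth confirming that both device fields actually changed. A quick check, assuming the "lanprofile" copy created above (guarded so it degrades gracefully on a machine without LXD):

```shell
# Show the nictype and parent fields of the edited profile.
if command -v lxc >/dev/null 2>&1; then
    lxc profile show lanprofile 2>/dev/null | grep -E 'nictype|parent' || true
    checked="profile queried"
else
    checked="lxc not available on this machine"
fi
echo "$checked"
```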
<div>
<br /></div>
<div>
<h2 style="clear: both; text-align: left;">
Test:</h2>
</div>
<div>
Now that LXD is installed and configured, we can set up an unprivileged test container. An "unprivileged" container means that the container runs as an ordinary user on the larger system - if a process escapes the container, it has only normal (non-sudo, non-root) user permissions. LXD creates unprivileged containers by default so this part is pretty easy. Let's use the "lanprofile" networking profile we just created. Let's use Ubuntu Disco (19.04). And let's call the container "test":
<br />
<br />
<pre> host:~$ lxc launch -p lanprofile ubuntu:disco test</pre>
</div>
<div>
<br /></div>
<div>
The container is now running. Login to the LAN's router (or wherever your DHCP server is), and see that it's there among the dhcp clients.</div>
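You can also ask LXD directly for the address instead of visiting the router. A sketch, assuming the "test" container launched above:

```shell
# List the container's name and IPv4 address straight from LXD.
if command -v lxc >/dev/null 2>&1; then
    lxc list test -c n4
    source="lxd"
else
    source="lxc not available on this machine"
fi
echo "$source"
```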
<div>
<br /></div>
<div>
That's all for LXD setup. Now I'm ready to create containers and fill them with services.</div>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-28197223805916923082018-03-15T19:04:00.000-05:002018-03-15T19:56:34.587-05:00Easy VMs in Ubuntu 17.10Let's do some experimenting with QEMU/KVM Virtual Machines in Ubuntu.<br />
<br />
I was, frankly, shocked at just how easy Linux VMs are to set up and manage.<br />
<br />
<h2>
Preparation</h2>
If the hardware supports hardware virtualization...<br />
<br />
<pre>$ egrep -c '(vmx|svm)' /proc/cpuinfo
2 // A result of '0' means no. '1' or higher means yes</pre>
<br />
...then reboot into BIOS and turn it on.<br />
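The same check can be wrapped with a readable verdict. This is just a convenience sketch; <code>grep -c</code> counts matching lines (one per CPU thread), so any value above zero is a yes:

```shell
# Count CPU threads advertising VT-x (vmx) or AMD-V (svm).
count=$(grep -Ec 'vmx|svm' /proc/cpuinfo 2>/dev/null)
count=${count:-0}
if [ "$count" -gt 0 ]; then
    echo "hardware virtualization flags present on $count CPU threads"
else
    echo "no vmx/svm flags - enable VT-x/AMD-V in BIOS, or the CPU lacks it"
fi
```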
<br />
<br />
<h2>
<b>Creating the first VM:</b></h2>
Once virtualization is turned on, then from zero to fully operating is just three commands. The host is Ubuntu 17.10. The guest will also be 17.10, but that is merely because I lack imagination.<br />
<br />
1) Install KVM, qemu, virt-manager and all the other tools you need. They are all dependencies of a single package:<br />
<br />
<pre>$ sudo apt install uvtool</pre>
<br />
2) Download a cloud image of Ubuntu 17.10. Cloud images are headless - shell only. The download takes a few minutes (approximately 350 MB), so don't panic:<br />
<br />
<pre>$ uvt-simplestreams-libvirt sync release=artful arch=amd64</pre>
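You can confirm the download landed by listing the images uvtool has synced locally (<code>uvt-simplestreams-libvirt</code> ships with the uvtool package; guarded here so it degrades gracefully elsewhere):

```shell
# Show cloud images uvtool has synced to this machine.
if command -v uvt-simplestreams-libvirt >/dev/null 2>&1; then
    uvt-simplestreams-libvirt query
    queried="yes"
else
    queried="uvtool not installed on this machine"
fi
echo "$queried"
```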
<br />
3) Create and start VM Guest 'test1'<br />
<br />
<pre>$ uvt-kvm create test1 release=artful</pre>
<br />
<br />
<br />
<h2>
<b>Starting, Stopping, Suspending, and Resuming the VM Guest from Host</b></h2>
<br />
<pre>$ virsh list // Check status
Id Name State
----------------------------------------------------
1 test1 running
$ virsh suspend test1
Domain test1 suspended
$ virsh resume test1
Domain test1 resumed
$ virsh shutdown test1
Domain test1 is being shutdown
$ virsh list --all // Use --all to show inactive VMs
Id Name State
----------------------------------------------------
- test1 shut off
$ virsh start test1
Domain test1 started
$ virsh list
Id Name State
----------------------------------------------------
2 test1 running</pre>
<br />
<br />
<br />
<h2>
<b>Under the hood looking at storage</b></h2>
<br />
We didn't set up any virtual storage, and we don't know where that Ubuntu Cloud image went. Let's take a moment and figure it out using virsh...<br />
<br />
<pre>$ virsh dumpxml test1 | grep file
<disk device="disk" type="file">
<source file="/var/lib/uvtool/libvirt/images/test1.qcow"></source>
<backingstore index="1" type="file">
<source file="/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTcuMTA6YW1kNjQgMjAxODAzMTQ="></source>
<disk device="disk" type="file">
<source file="/var/lib/uvtool/libvirt/images/test1-ds.qcow"></source>
</disk></backingstore></pre>
<br />
There are the images for the virtual storage devices, and for the original cloud image ('backingstore') too. Looks like they are all in the same directory.<br />
<br />
<pre>$ ls -l /var/lib/uvtool/libvirt/images/
total 1490572
-rw------- 1 libvirt-qemu kvm 458752 Mar 14 22:06 test1-ds.qcow
-rw------- 1 libvirt-qemu kvm 490471424 Mar 15 08:14 test1.qcow
-rw------- 1 libvirt-qemu kvm 1035468800 Mar 14 22:05 x-uvt-b64-Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTcuMTA6YW1kNjQgMjAxODAzMTQ=
</pre>
<br />
Aha. There's the cloud image in the third line - that's where it went! The actual VM Guest storage is the first and second lines - they are simply diffs from the original cloud image. Multiple Guests can base off the same cloud image, keeping storage tidy...and small.<br />
<br />
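qemu-img can show that backing-file relationship directly, reading it straight out of the qcow header. A sketch using the test1 path from the listing above (needs read permission on the image, hence the guard):

```shell
# Show the backing file recorded in the guest's qcow image header.
img=/var/lib/uvtool/libvirt/images/test1.qcow
if command -v qemu-img >/dev/null 2>&1 && [ -r "$img" ]; then
    qemu-img info "$img" | grep -i 'backing file'
    inspected="yes"
else
    inspected="image or qemu-img not present here"
fi
echo "$inspected"
```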
Let's add another Guest VM and see how it changes.<br />
<br />
<pre>$ uvt-kvm create test2 release=artful
$ ls -l /var/lib/uvtool/libvirt/images/
total 1491344
-rw------- 1 libvirt-qemu kvm 458752 Mar 14 22:06 test1-ds.qcow
-rw------- 1 libvirt-qemu kvm 490471424 Mar 15 08:29 test1.qcow
-rw------- 1 libvirt-qemu kvm 458752 Mar 15 08:34 test2-ds.qcow
-rw------- 1 libvirt-qemu kvm 393216 Mar 15 08:34 test2.qcow
-rw------- 1 libvirt-qemu kvm 1035468800 Mar 14 22:05 x-uvt-b64-Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTcuMTA6YW1kNjQgMjAxODAzMTQ=
</pre>
<br />
A whole fresh VM takes less than 1 MB. Of course, it will grow quickly once you start giving it work to do.<br />
<br />
And here you can see how to destroy a VM Guest properly. The guest files are deleted, the cloud image is not.<br />
<br />
<pre>$ uvt-kvm destroy test2
$ ls -l /var/lib/uvtool/libvirt/images/
total 1490572
-rw------- 1 libvirt-qemu kvm 458752 Mar 14 22:06 test1-ds.qcow
-rw------- 1 libvirt-qemu kvm 490471424 Mar 15 08:29 test1.qcow
-rw------- 1 libvirt-qemu kvm 1035468800 Mar 14 22:05 x-uvt-b64-Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTcuMTA6YW1kNjQgMjAxODAzMTQ=
</pre>
<br />
<br />
<h2>
Securing VM Guest with a new admin account and SSH Keys</h2>
uvt-created guests start with the 'ubuntu' admin user, so you can start the process of customization without a lot of hassle. But they are insecure, so let's add our own admin user and delete that default fellow.<br />
<br />
<b>Step 1</b>. On the HOST, login insecurely to the Guest<br />
<br />
<pre><span style="color: blue;">host</span>$ uvt-kvm ssh test1
</pre>
<br />
<b>Step 2</b>. On the GUEST, add the new admin user. Let's call her 'adminnnn', and let's make her part of the 'sudo' group (since she's an admin, of course). The 'adduser' command below asks a few questions, including a password. Give a password. We will need it once later to set up SSH keys, and --of course-- to use sudo in the Guest.<br />
<br />
<pre><span style="color: red;">test1</span>$ sudo adduser adminnnn --ingroup sudo</pre>
<br />
<b>Step 3</b>. Edit the SSH settings to briefly permit insecure login so we can place the ssh key. We will change this back in a later step. I use nano - you use whatever editor you wish.<br />
<br />
<pre><span style="color: red;">test1</span>$ sudo nano /etc/ssh/sshd_config</pre>
<br />
Make sure these settings are active:<br />
<br />
<br />
<pre> PubkeyAuthentication <b>yes</b>
 PasswordAuthentication <b>yes</b>
ChallengeResponseAuthentication <b>no</b>
UsePAM <b>yes</b>
(Remember to save your changes!)</pre>
<br />
<b>Step 4</b>. Restart SSH so the sshd config changes take effect, and logout from the 'ubuntu' user<br />
<br />
<pre><span style="color: red;">test1</span>$ sudo service sshd restart
<span style="color: red;">test1</span>$ exit</pre>
<br />
<b>Step 5</b>. Create an SSH key if you don't already have one. If you already have a key then use it, of course. Learn the IP address of the Guest. Copy the key across to the Guest. Login using the new key<br />
<br />
<pre><span style="color: blue;">host</span>$ ssh-keygen
<span style="color: blue;">host</span>$ uvt-kvm ip test1
192.168.122.249
<span style="color: blue;">host</span>$ ssh-copy-id adminnnn@192.168.122.249
<span style="color: blue;">host</span>$ ssh adminnnn@192.168.122.249
</pre>
<br />
<b>Step 6</b>. Test adminnnn's new sudo powers. If they work then delete the 'ubuntu' user.<br />
<br />
<pre><span style="color: red;">test1</span>$ sudo apt update
<span style="color: red;">test1</span>$ sudo apt upgrade
<span style="color: red;">test1</span>$ sudo deluser ubuntu
</pre>
<br />
<b>Step 7</b>. Tighten ssh to allow keys only. Then restart sshd so the changes take effect, and log out.<br />
<br />
<pre><span style="color: red;">test1</span>$ sudo nano /etc/ssh/sshd_config</pre>
<br />
Make sure these settings are active:<br />
<br />
<br />
<pre> PubkeyAuthentication <b>yes</b>
 PasswordAuthentication <b>no</b>
ChallengeResponseAuthentication <b>no</b>
UsePAM <b>no</b>
(Remember to save your changes!)
<span style="color: red;">test1</span>$ sudo service sshd restart
<span style="color: red;">test1</span>$ exit
</pre>
<br />
...and that's all you need<br />
<br />
<br />
<h2>
Let's add a full Desktop Environment</h2>
In this case, let's add Lubuntu.<br />
<br />
<pre><span style="color: blue;">host</span>$ ssh adminnnn@192.168.122.249
<span style="color: red;">test1</span>$ sudo apt install lubuntu-desktop --no-install-recommends
<span style="color: red;">test1</span>$ exit</pre>
<br />
A reboot is necessary for the new desktop to launch at startup. Let's use virt-viewer to watch the reboot process. We could also use remmina since we know the IP address.<br />
<br />
<pre><span style="color: blue;">host</span>$ virt-viewer test1
<span style="color: red;">test1</span>$ sudo reboot</pre>
<br />
After reboot, the desktop should come up.<br />
<br />
<br />
To remove the desktop again (and demonstrate another way to reboot):<br />
<br />
<pre><span style="color: blue;">host</span>$ ssh adminnnn@192.168.122.249
<span style="color: red;">test1</span>$ sudo apt remove lubuntu-desktop
<span style="color: red;">test1</span>$ sudo apt autoremove
<span style="color: red;">test1</span>$ exit
<span style="color: blue;">host</span>$ virsh reboot test1</pre>
<br />
<br />
<h2>
Cleaning Up</h2>
<br />
It's poor practice to leave your system littered with old experiments. When finished playing, here's how to clean up. All of these commands, of course, are done on the HOST.<br />
<br />
To delete just one Guest VM, but leave the VM Host software on your system:<br />
<br />
<pre>$ uvt-kvm destroy test1</pre>
<br />
To delete the VM Host software from your system (Ubuntu), but leave guest Virtual Disks intact:<br />
<br />
<pre>$ sudo apt remove uvtool
$ sudo apt autoremove</pre>
<br />
To delete any remaining Virtual Disks, including the cloud image(s) they are based upon:<br />
<br />
<pre>$ sudo rm -r /var/lib/uvtool</pre>
<br />
References:<br />
<a href="https://help.ubuntu.com/community/KVM">https://help.ubuntu.com/community/KVM</a><br />
<a href="https://help.ubuntu.com/lts/serverguide/cloud-images-and-uvtool.html">https://help.ubuntu.com/lts/serverguide/cloud-images-and-uvtool.html</a>
Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-72864339652527488852015-12-20T08:40:00.000-06:002015-12-20T08:40:18.061-06:00Are you ready for new members?<a href="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" /></a>
In a few days, many Ubuntu users will unwrap new hardware, plug it in, and have a fantastic experience.<br />
<br />
Some users will get inspired to join the community to solve bugs, add features, contribute code, and much more.<br />
<br />
<br />
<h2>
Support Gurus: use Find-a-Task</h2>
New, enthusiastic users often show up in the many Ubuntu help forums.<br />
<br />
Encourage them to try <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> to see the variety of ways they can help.<br />
Just send them over, and we'll do the rest. <br />
<br />
<br />
<h2>
Team Leaders: Is your team ready?</h2>
Is your team ready to welcome, train, and integrate these new volunteers?<br />
<br />
Has your team looked at its <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> roles for volunteers? It's easy to <a href="http://cheesehead-techblog.blogspot.com/2015/01/introducing-ubuntu-find-task.html" target="_blank">add or change your team's listings</a>.<br />
<br />
Is your team approachable? Can you be contacted easily by a new volunteer? Is your web page for new volunteers accurate?<br />
<br />
<br />
<h2>
Improving Find-a-Task</h2>
<a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> is the Ubuntu community's job board for volunteers. Introduced in January 2015, <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> shows fellow volunteers the variety of tasks and roles available, and links those roles to the team web pages. <br />
<br />
Please share your suggestions to improve <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> on the Ubuntu Community Team <a href="https://lists.ubuntu.com/archives/ubuntu-community-team/" target="_blank">mailing list</a>.Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-52975234147934221182015-11-04T16:36:00.001-06:002015-11-04T16:36:49.348-06:00UOS Overflow Session: Find-a-Task<a href="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" /></a>The <a href="http://summit.ubuntu.com/uos-1511/" target="_blank">Ubuntu Online Summit</a> has added an overflow session on Find-a-Task,
the Ubuntu community's volunteer job board. The job board tries to link
volunteers with a wide range of jobs that need to be done.<br />
<br />
<ul>
<li>Does it work?</li>
<li>Have you tried it?</li>
<li>Do you know anyone who has joined a team after using it?</li>
<li>Is your team listed on it?</li>
<li>How can it be improved?</li>
<li>Is it the best gateway for undecided new volunteers?</li>
</ul>
<br />
Join us tomorrow, 05 Nov at 1800 UTC to discuss the future of Find-a-Task, and the best ways to recruit new Ubuntu Members.<br />
<br />
Watch Live at <a class="ot-anchor aaTEdf" dir="ltr" href="http://summit.ubuntu.com/uos-1511/meeting/22644/growing-new-community-members/" rel="nofollow" target="_blank">http://summit.ubuntu.com/uos-1511/meeting/22644/growing-new-community-members/</a><br />
Or join us on freenode IRC: #ubuntu-uos-overflow<br />
<br />
See you there! Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-88746211537356127752015-10-31T09:43:00.000-05:002015-10-31T09:43:24.534-05:00Is your team ready for UOS?<h2 class="post-title entry-title">
</h2>
<a href="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" /></a>The <a href="http://summit.ubuntu.com/uos-1511/" target="_blank">Ubuntu Online Summit</a> (UOS), 03-05 November 2015, is only a few days away.<br />
<br />
Is your team ready to welcome, train, and integrate new volunteers inspired by UOS?<br />
<br />
Has your team updated its <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> roles for volunteers? It's easy to <a href="http://cheesehead-techblog.blogspot.com/2015/01/introducing-ubuntu-find-task.html" target="_blank">add or change your team's listings</a>. <br />
<br />
<a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> is the Ubuntu community's job board for volunteers. Introduced in January 2015, <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> shows fellow volunteers the variety of tasks and roles available.<br />
<br />
<br />
<h2>
<b>It's for everyone, new and old</b></h2>
UOS is one of the events that energizes the Ubuntu community. It is a great time for volunteers to change tracks, to try something new.<br />
<br />
Your <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> roles should reflect that. Don't limit yourself to new enthusiasts. Your roles should welcome experienced members, too!<br />
<br />
<br />
<h2>
Improving Find-a-Task</h2>
Please share your suggestions to improve <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> during any of the UOS <a href="http://summit.ubuntu.com/uos-1511/meeting/22610/community-roundtable/" target="_blank">Community</a> <a href="http://summit.ubuntu.com/uos-1511/meeting/22620/community-roundtable-ii/" target="_blank">Roundtable</a> <a href="http://summit.ubuntu.com/uos-1511/meeting/22621/community-roundtable-iii/" target="_blank">sessions</a>.<br />
See you there!Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-2514709532699721302015-10-10T09:36:00.001-05:002015-10-10T09:37:14.225-05:00Point New Participants to Find-a-Task!<h2 class="post-title entry-title">
</h2>
<div class="post-header">
</div>
<a href="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://design.ubuntu.com/wp-content/uploads/pictogram-community-orange-hex.svg" /></a><a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> is the Ubuntu community's job board for volunteers.<br />
<br />
Introduced in January 2015, <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> shows fellow volunteers the variety of tasks and roles available.<br />
<br />
<br />
<h2 class="post-title entry-title">
Are you using Find-a-Task?
</h2>
Volunteers can browse the many ways to contribute to Ubuntu, and choose their favorite. No hassle, no pressure, no sign-up, no commitment.<br />
<br />
New enthusiasts don't know about Find-a-Task. (How could they?)<br />
It only works if *you* encourage new volunteers to try it. <br />
<br />
<br />
<h2>
It's for new participants</h2>
Take a <a href="http://community.ubuntu.com/contribute/find-a-task/" target="_blank">quick look</a>, and see the variety of volunteer roles available. We have listings for many different skills and interests, including plenty of non-technical tasks.<br />
<br />
<br />
<h2>
<b>It's also for longtime participants</b></h2>
Life moves on. Jobs and family and hobbies change.<br />
<br />
Losing interest in your current role, or have less time for it? Renew the magic - use <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> to try something new and different!<br />
<br />
Real friends don't let their mates burn out or drop off. When you see a friend start to teeter or flame out, guide them to <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> and help them recover with a different role. <br />
<br />
<br />
<h2>
Adding Listings and Improving Find-a-Task</h2>
It's easy to <a href="http://cheesehead-techblog.blogspot.com/2015/01/introducing-ubuntu-find-task.html" target="_blank">add or change your team's listing</a>. <br />
<br />
Please share your suggestions to improve <a href="http://community.ubuntu.com/contribute/find-a-task" target="_blank">Find-a-Task</a> on the <a href="https://lists.ubuntu.com/mailman/listinfo/ubuntu-community-team" target="_blank">ubuntu-community-team</a> mailing list.Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-5029617966630477182015-10-07T20:39:00.000-05:002015-10-10T08:59:45.704-05:00CAC on Firefox using Ubuntu 15.04After a couple years away from CAC on Linux, it's time to revisit how to install a DOD CAC reader for Firefox under Ubuntu 15.10.<br />
<br />
Very good instructions are on the <a href="https://help.ubuntu.com/community/CommonAccessCard" target="_blank">Ubuntu Help</a> pages. This guide clarifies a few vague elements, and reorganizes the information to help you troubleshoot.<br />
<br />
There are five simple steps:<br />
<ul>
<li>Get an appropriate card reader</li>
<li>Install the card reader software (pcscd)</li>
<li>Test the card, reader, and software</li>
<li>Install cackey</li>
<li>Install the DOD certs and point Firefox to the card reader</li>
</ul>
<br />
The Firefox extension requires cackey, cackey requires pcscd, and pcscd requires hardware to detect. We will follow best practice for Debian/Ubuntu and install the dependencies first, in the right order.<br />
<br />
<br />
<h2>
Get A Card Reader</h2>
There's nothing to add here. The <a href="https://help.ubuntu.com/community/CommonAccessCard#Get_a_card_reader" target="_blank">Ubuntu Help</a> page says it all.<br />
<br />
<br />
<br />
<h2>
Install Card Reader Software</h2>
<br />
<pre>sudo apt-get install pcscd pcsc-tools</pre>
<br />
The key software you need is the pcscd daemon and its libpcsclite1 dependency.
pcsc-tools is handy for testing the connection in the next step.<br />
<br />
<br />
<br />
<h2>
Test the card reader and software</h2>
<br />
Insert your CAC card and run:<br />
<br />
<pre>pcsc_scan</pre>
<br />
As shown in the <a href="https://help.ubuntu.com/community/CommonAccessCard#pcsc_tools" target="_blank">Ubuntu Help</a> page, pcsc_scan will clearly show you whether your card reader and card are detected.<br />
<br />
<br />
<br />
<h2>
Install cackey</h2>
The cackey library provides access to the cryptographic and certificate functions of the CAC card.<br />
<br />
1) You need to know if your Ubuntu system is a 32-bit or 64-bit install. Don't trust a sticker or your memory - checking takes but a moment:<br />
<br />
<pre>uname -i</pre>
<br />
If the result is '<b>i386</b>' or similar, you are running a <b>32-bit</b> system. Look for a download labeled '<b>i386</b>'.<br />
If the result is '<b>x86_64</b>' or similar, you are running a <b>64-bit</b> system. Look for a download labeled '<b>amd64</b>'<br />
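If you'd rather skip the translation step, dpkg can report the package architecture directly, in the same words the download labels use. A small sketch (the fallback <code>uname -m</code> prints "x86_64" rather than "amd64"):

```shell
# Report the Debian package architecture, matching the .deb labels.
arch=$(dpkg --print-architecture 2>/dev/null || uname -m)
echo "look for a package labeled: $arch"
```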
<br />
2) There are two places to download the latest cackey package from:<br />
<a href="https://software.forge.mil/sf/projects/community_cac">https://software.forge.mil/sf/projects/community_cac</a> (CAC required)<br />
<a href="http://cackey.rkeene.org/fossil/home">http://cackey.rkeene.org/fossil/home</a> (non-CAC)<br />
<br />
3) Download the latest cackey .deb package. Be sure to choose between 32/64 bit properly - the wrong package will happily install...but won't work.<br />
<br />
4) Bug workaround <i>for 64-bit only</i>: Cackey tries to install to the /usr/lib64 directory, which probably doesn't exist on your system. Simply create it. This bug does not affect 32-bit users, who can safely ignore this entire paragraph.<br />
<br />
5) Finally, install the downloaded cackey deb using the 'dpkg --install' command.<br />
<br />
<br />
<b>Example</b>:<br />
1) I'm running a 64-bit system.<br />
3) I downloaded cackey_0.7.5-1_<b>amd64</b>.deb to my Downloads directory.<br />
Then I installed the deb using:<br />
<br />
<pre>sudo mkdir /usr/lib64 ## Step 4 - 64-bit bug workaround
sudo dpkg --install ~/Downloads/cackey_0.7.5-1_amd64.deb ## Step 5</pre>
<br />
<br />
<br />
<h2>
Install DOD Certificates and Point Firefox to the Card Reader </h2>
Happily, forge.mil has a Firefox add-on that does all this for you!<br />
<br />
1) Simply download the latest 'dod_configuration-X.X.X.xpi' file from <a href="http://www.forge.mil/Resources-Firefox.html">http://www.forge.mil/Resources-Firefox.html</a> (non-CAC).<br />
<br />
2) Quit Firefox<br />
<br />
3) Double-click on the dod_configuration-X.X.X.xpi file you downloaded (it might be in your Downloads directory). Firefox will restart, and offer to install the add-on. Go ahead and install it.<br />
<br />
<br />
<br />
<br />
<h2>
Testing</h2>
Try your favorite CAC website (like AKO or OWA) and see if the site works, and if the site communicates properly with your card.<br />
<br />
Be sure your USB card reader is snugly inserted, of course.<br />
<br />
Start (or restart) Firefox <i>after</i> your CAC reader and card are inserted and recognized by the system. Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0tag:blogger.com,1999:blog-2703060415027607989.post-85446965806725263092015-09-03T10:19:00.000-05:002015-09-03T10:19:36.635-05:00The best DebConf 15 videosI simply cannot take time off work to attend DebConf, so each year I watch the videos instead. It took almost a month, thanks to the back-to-school rush at work, but I finally got through the sessions I wanted to see.<br />
<br />
Here are my highlights from DebConf 15:<br />
<br />
<h3>
<span style="font-size: large;">
Cool Stuff</span></h3>
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/Creating_a_more_inviting_environment_for_newcomers_New_experiences_from_MoM_SoB_Teammetrics.webm" target="_blank">Creating A More Inviting Environment For Newcomers New Experiences From MoM SoB Teammetrics</a>
- A detailed discussion of how a mature team with tapering
contributions re-energized itself with new enthusiasts. How they were
recruited, mentored, trained, and finally assigned key roles in the
team. Lots of discussion of mentoring strategies and the costs of
mentoring (less time for the work) from the developer/maintainer
perspective. Lots of good ideas for any mature team, and thoroughly
applicable to Ubuntu teams too.<br />
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/Linux_in_the_City_of_Munich_AKA_LiMux.webm" target="_blank">Linux in the City of Munich AKA LiMux</a> - There has been a lot of FUD written about one of the largest public conversions to an open-source platform, and it was great to see an actual insider talking about the project. Worth a watch.<br />
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/Lightning_talks_2.webm" target="_blank">Lightning Talks 2</a> - The first Lightning Talk was a proposal to add a new service to Debian. The service tests all uploaded packages for many known faults (using valgrind, infer, etc.), and automatically files bug reports on the faults. This should provide a large number of real bite-sized bugs for drive-by patches, and corresponding hefty improvement in code quality. Most cool.<br />
<br />
<br />
<h3>
<span style="font-size: large;">
Under the hood</span></h3>
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/Your_systemd_tool_box_dissecting_and_debugging_boot_and_services.webm" target="_blank">Your Systemd Tool Box - Dissecting And Debugging Boot And Services</a> - This is a great walk-through of the (new to me) tools. I kept a terminal window open alongside to try each one, and saved the video for a rewatch; it's a lot to digest in one sitting.<br />
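For my own notes, the core tools the talk covers, roughly as I tried them (ssh.service is just my example unit):

```shell
# Current state, recent log lines, and the unit's cgroup tree
systemctl status ssh.service

# Full log for one unit, limited to the current boot
journalctl -u ssh.service -b

# Which units contributed most to boot time
systemd-analyze blame

# The dependency chain that actually gated boot completion
systemd-analyze critical-chain
```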
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/systemd_How_we_survived_jessie_and_how_we_will_break_stretch.webm" target="_blank">Systemd How We Survived Jessie And How We Will Break Stretch</a> - Fantastic discussion of coming systemd features: persistent interface names, networkd, kdbus, and more. Also a great discussion of how to get involved around the edges.<br />
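networkd, for example, is configured with small ini-style files under /etc/systemd/network/. A minimal sketch for a DHCP-configured wired interface (the filename and match pattern are my own example, not from the talk):

```
# /etc/systemd/network/20-wired.network
[Match]
Name=en*

[Network]
DHCP=yes
```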
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/Dpkg_The_Interface.webm" target="_blank">Dpkg The Interface</a> - A presentation by the current maintainer, explaining how he keeps dpkg stable and where the roadmap is headed. Since Snappy uses dpkg (but not apt), that roadmap is important! I have used dpkg for a decade, but never thought about all the bits of it I never see....<br />
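The bits I do see, for reference: the handful of dpkg commands I actually reach for (the package names are just examples):

```shell
# List installed packages matching a pattern
dpkg -l 'openssh*'

# Show the files a package installed
dpkg -L openssh-client

# Find which package owns a file
dpkg -S /usr/bin/ssh

# Scriptable queries via dpkg-query
dpkg-query -W -f='${Package} ${Version}\n' openssh-client
```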
<br />
<br />
<h3>
<span style="font-size: large;">
Keeping Free Software Free</span></h3>
<br />
<a href="http://meetings-archive.debian.net/pub/debian-meetings/2015/debconf15/Debians_Central_Role_in_the_Future_of_Software_Freedom.webm" target="_blank">Debians Central Role In The Future Of Software Freedom</a> - A presentation by the President of the Software Freedom Conservancy (SFC), explaining the problems they see, their strategies to attack those problems, and how they try to effectively challenge GPL violations. A bit of Canonical-bashing in this one at a couple of points (some deserved, some not).<br />
<br />
At 23:30, it introduces the Debian Copyright Aggregation Project, where Debian contributors can opt to revocably assign their copyright to SFC, and can also permit the SFC to enforce those copyrights. This is one strategy SFC is pursuing to fight both CLAs and license violations.<br />
<br />
<br />Ianhttp://www.blogger.com/profile/13159046087533726064noreply@blogger.com0