The current OS is Ubuntu Server 20.04...but I'm really not using most of the Server features. Those are in the LXD containers. So this is an experiment to see if Ubuntu Core can function as the server OS.
Prerequisites: If you are looking to try this, you should already be familiar (not expert) with:
- Using SSH
- Using the vi text editor (Ubuntu Core lacks nano)
- Basic networking concepts like DHCP
- Basic VM and Container concepts
Download Ubuntu Core:
- Create Ubuntu SSO Account (if you don't have one already)
- Create an SSH key (if you don't have one already; a sketch follows this list)
- Import your SSH Public Key to Ubuntu SSO.
- Download an Ubuntu core .img file from https://ubuntu.com/download/iot#core
- Convert the Ubuntu Core .img to a VirtualBox .vdi:
me@desktop:~$ VBoxManage convertdd ubuntu-core-18-amd64.img ubuntu-core.vdi
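- If you still need that SSH key, here is a minimal sketch. The key type, filename, and comment are your choice; the public key is what you paste into the SSH keys page of your Ubuntu SSO account:
me@desktop:~$ ssh-keygen -t ed25519 -C "my-ubuntu-sso-key"   // Accept the default path or pick your own
me@desktop:~$ cat ~/.ssh/id_ed25519.pub                      // Paste this output into your Ubuntu SSO SSH keys page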
Set up a new machine in VirtualBox:
- Install VirtualBox (if you haven't already):
me@desktop:~$ sudo apt install virtualbox
- In VirtualBox, open File -> Virtual Media Manager and add ubuntu-core.vdi
- Create a New Machine. Use an existing Hard Disk File --> ubuntu-core.vdi
- Check the network settings. You want a network that you will be able to access. I chose bridged networking so I could play with the new system from different locations, and set up a static IP address on the router. ENABLE promiscuous mode so containers can get IP addresses from the router; otherwise, VirtualBox will filter out their DHCP requests. (A command-line way to set this is sketched after this list.)
- OPTIONAL: Additional tweaks to enhance performance.
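- For reference, the same network settings can be applied with VBoxManage. A sketch, assuming the VM is powered off, "name-of-vm" is whatever you named the machine, and enp3s0 is your host NIC (substitute your own names):
me@desktop:~$ VBoxManage modifyvm name-of-vm --nic1 bridged --bridgeadapter1 enp3s0   // Bridge adapter 1 to the host NIC
me@desktop:~$ VBoxManage modifyvm name-of-vm --nicpromisc1 allow-all                  // Allow promiscuous mode so container DHCP requests pass through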
Take a snapshot of your current network neighborhood:
- Use this to figure out Ubuntu Core's IP address later on:
me@Desktop:~$ ip neigh
192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY
Boot the image in VirtualBox:
- The first boot of Ubuntu Core requires a screen and keyboard (one reason we're trying this in VirtualBox). Subsequent logins will be done by ssh.
- Answer the couple of setup questions.
- Use your Ubuntu One login e-mail address.
- The VM will reboot itself (perhaps more than once) when complete.
- Note that you cannot log in at the VM's TTY. Ubuntu Core's default login is via ssh. Instead, the VM's TTY tells you the IP address to use for ssh.
- Since we are using a VM, this is a convenient place to take an initial snapshot. If you make a mess of networking in the next step, you can revert the snapshot.
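- That snapshot can also be taken from the host command line. A sketch, again assuming "name-of-vm" is your machine's name (the snapshot name is your choice):
me@desktop:~$ VBoxManage snapshot name-of-vm take "first-boot-complete"   // Saved state we can revert to if networking goes wrong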
Let's do some initial configuration:
- After the VM reboots, the VirtualBox screen only shows the IP address.
// SSH into the Ubuntu Core Guest
me@desktop:~$ ssh my-Ubuntu-One-login-name@IP-address
[...Welcome message and MOTD...]
me@localhost:~$                // The default name is "localhost"

// Let's change that. Takes effect after reboot.
me@localhost:~$ sudo hostnamectl set-hostname 'ubuntu-core-vm'

// Set the timezone. Takes effect immediately.
me@localhost:~$ sudo timedatectl set-timezone 'America/Chicago'

// OPTIONAL: Create a TTY login
// This can be handy if you have networking problems.
me@localhost:~$ sudo passwd my-Ubuntu-One-login-name
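- To double-check those settings (remember the hostname only changes after a reboot), a quick look at both, no options needed:
me@localhost:~$ hostnamectl     // Shows the static hostname that will apply after reboot
me@localhost:~$ timedatectl     // Shows the current timezone and NTP status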
Let's set up the network bridge so containers can draw their IP address from the router:
- We use vi to edit the netplan configuration. When we apply the changes, the ssh connection will be severed, so we must discover the new IP address to log in again.
me@localhost:~$ sudo vi /writable/system-data/etc/netplan/00-snapd-config.yaml

#// The following seven lines are the original file. Commented instead of deleted.
# This is the network config written by 'console_conf'
#network:
#  ethernets:
#    eth0:
#      addresses: []
#      dhcp4: true
#  version: 2

#// The following lines are the new config
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
  bridges:
    # br0 is the name that containers use as the parent
    br0:
      interfaces:
        # eth0 is the device name in 'ip addr'
        - eth0
      dhcp4: yes
      dhcp6: yes
#// End

// After the file is ready, implement it:
me@localhost:~$ sudo netplan generate
me@localhost:~$ sudo netplan apply
// If all goes well...your ssh session just terminated without warning.
Test our new network settings:
- The Ubuntu Core VM window will NOT change the displayed IP address after the netplan change...but that IP won't work anymore.
- If you happen to reboot (not necessary), you will see that the TTY window displays no IP address when bridged...unless you created the optional TTY login.
- Instead of rebooting, let's take another network snapshot and compare to earlier:
me@Desktop:~$ ip neigh
192.168.1.226 dev enp3s0 lladdr c6:12:89:22:56:e4 STALE      <---- NEW
192.168.1.227 dev enp3s0 lladdr 00:1c:b3:75:23:a3 STALE
192.168.1.234 dev enp3s0 lladdr d8:31:34:2c:b8:3a STALE
192.168.1.235 dev enp3s0 lladdr DELAY                        <---- NEW
192.168.1.246 dev enp3s0 lladdr f4:f5:d8:29:e5:90 REACHABLE
192.168.1.213 dev enp3s0 lladdr 98:e0:d9:77:5d:6b STALE
192.168.1.1 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 STALE
fe80::2efd:a1ff:fe67:2ad0 dev enp3s0 lladdr 2c:fd:a1:67:2a:d0 router DELAY
- We have two new lines: .226 and .235. One of those was the old IP address and one is the new. SSH into the new IP address, and you're back in.
me@desktop:~$ ssh my-Ubuntu-One-user-name@192.168.1.226
Welcome to Ubuntu Core 18 (GNU/Linux 4.15.0-99-generic x86_64)
[...Welcome message and MOTD...]
Last login: Thu May 7 16:11:38 2020 from 192.168.1.6
me@localhost:~$
- Let's take a closer look at our new, successful network settings.
me@localhost:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c6:12:89:22:56:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.226/24 brd 192.168.1.255 scope global dynamic br0
       valid_lft 9545sec preferred_lft 9545sec
    inet6 2683:4000:a450:1678:c412:89ff:fe22:56e4/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 600sec preferred_lft 600sec
    inet6 fe80::c412:89ff:fe22:56e4/64 scope link
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 08:00:27:fd:20:92 brd ff:ff:ff:ff:ff:ff

// Note that ubuntu-core-vm now uses the br0 address, and lacks an eth0 address.
// That's what we want.
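- If you want further confirmation that eth0 is enslaved to the bridge, this should list it (plain iproute2, already present since we just used ip addr):
me@localhost:~$ ip link show master br0   // Lists the interfaces attached to br0; expect to see eth0 here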
Set up a static IP address on the router and then reboot to use the new IP address:
- Remember, the whole point of bridged networking is for the router to issue all the IP addresses and avoid doing a lot of NATing and Port Forwarding.
- So now is the time to login to the Router and have it issue a constant IP address to the Bridge MAC address (in this case c6:12:89:22:56:e4). After this, ubuntu-core-vm (the Ubuntu Core Guest VM) will always have a predictable IP address.
- Use VirtualBox to ACPI-shutdown the VM, then restart it headless (a command-line sketch follows this list). We're looking for two changes: the hostname and the login IP address.
- Starting headless can be done in two ways:
- GUI: the VirtualBox Start button submenu
- CLI:
me@Desktop:~$ VBoxHeadless --startvm name-of-vm
- Success at rebooting headless and logging into the permanent IP address is a good point for another VM Snapshot. And maybe a sandwich. Well done!
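- The shutdown and headless restart can also be scripted from the host. A sketch, again using the "name-of-vm" placeholder for your machine's name:
me@Desktop:~$ VBoxManage controlvm name-of-vm acpipowerbutton   // Ask the guest to shut down cleanly
me@Desktop:~$ VBoxManage startvm name-of-vm --type headless     // Restart without a window
// Once ssh to the permanent IP address works, take that snapshot the same way as before.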
Install LXD onto ubuntu-core-vm:
- Install:
me@ubuntu-core-vm:~$ snap install lxd
lxd 4.0.1 from Canonical✓ installed
me@ubuntu-core-vm:~$
- Add myself to the `lxd` group so 'sudo' isn't necessary anymore. This SHOULD work, but doesn't, due to a bug (discussion):
host:~$ sudo adduser --extrausers me lxd    // Works on most Ubuntu; does NOT work on Ubuntu Core, even with --extrausers
host:~$ newgrp lxd                          // New group takes effect without logout/login
- Instead, edit the groups file directly using vi:
// Use vi to edit the file:
me@ubuntu-core-vm:~$ sudo vi /var/lib/extrausers/group

// Change the lxd line:
lxd:x:999:                  // Old line
lxd:x:999:my-login-name     // New line

// Apply the new group settings without logout
me@ubuntu-core-vm:~$ newgrp lxd
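- If you'd rather not open vi, a one-line sed edit should do the same thing. A sketch, assuming the lxd line currently has no members and your login is my-login-name:
me@ubuntu-core-vm:~$ sudo sed -i 's/^lxd:x:\([0-9]*\):$/lxd:x:\1:my-login-name/' /var/lib/extrausers/group   // Append your user to the lxd group line
me@ubuntu-core-vm:~$ newgrp lxd                                                                              // Apply without logging out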
- LXD is easy to configure. We need to make three changes from the default settings since we already have a bridge (br0) set up that we want to use.
me@ubuntu-core-vm:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, ceph, btrfs) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no                           <-- CHANGE
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes   <-- CHANGE
Name of the existing bridge or host interface: br0                                                        <-- CHANGE
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
me@ubuntu-core-vm:~$
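- The same answers can be fed to lxd init non-interactively with a preseed file. A rough sketch of what that file might look like for this setup (answering yes to the last question above prints the exact YAML for your system):
me@ubuntu-core-vm:~$ cat lxd-preseed.yaml
config: {}
storage_pools:
- name: default
  driver: btrfs
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: br0
      type: nic
    root:
      path: /
      pool: default
      type: disk
me@ubuntu-core-vm:~$ cat lxd-preseed.yaml | lxd init --preseed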
- Next, we change the networking profile so containers use the bridge:
// Open the default container profile in vi
me@ubuntu-core-vm:~$ lxc profile edit default

config: {}
description: Default LXD profile
devices:
  # Container eth0, not ubuntu-core-vm eth0
  eth0:
    name: eth0
    nictype: bridged
    # This is the ubuntu-core-vm br0, the real network connection
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
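- The same device can be attached without an editor, and the result checked afterward. A sketch (if lxd init already attached br0 for you, the eth0 device will exist and the add will complain):
me@ubuntu-core-vm:~$ lxc profile device add default eth0 nic nictype=bridged parent=br0 name=eth0   // Attach container eth0 to the host's br0
me@ubuntu-core-vm:~$ lxc profile show default                                                       // Confirm the eth0 and root devices look like the YAML above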
- Add the Ubuntu-Minimal stream for cloud-images, so our test container is small:
me@ubuntu-core-vm:~$ lxc remote add --protocol simplestreams ubuntu-minimal https://cloud-images.ubuntu.com/minimal/releases/
me@ubuntu-core-vm:~$ lxc launch ubuntu-minimal:20.04 test1
Creating test1
Starting test1

me@ubuntu-core-vm:~$ lxc list
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME  | STATE   | IPV4                 | IPV6                                          | TYPE      | SNAPSHOTS |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| test1 | RUNNING | 192.168.1.248 (eth0) | 2603:6000:a540:1678:216:3eff:fef0:3a6f (eth0) | CONTAINER | 0         |
+-------+---------+----------------------+-----------------------------------------------+-----------+-----------+

// Let's test outbound connectivity from the container
me@ubuntu-core-vm:~$ lxc shell test1
root@test1:~# apt update
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
[...lots of successful server connections...]
Get:26 http://archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [1280 B]
Fetched 16.3 MB in 5s (3009 kB/s)
Reading package lists... Done
Building dependency tree...
Reading state information... Done
5 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@test1:~#
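- When you're done poking at the test container, it can be thrown away. A quick sketch:
root@test1:~# exit
me@ubuntu-core-vm:~$ lxc stop test1     // Stop the running container
me@ubuntu-core-vm:~$ lxc delete test1   // Remove it entirely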