My Ubuntu 20.10 Setup

No matter what operating system I work on, it inevitably becomes filled with cruft as I test new apps or code I have written. Some of it I keep, the rest I delete. Before long I have no idea how to rebuild the system to its current state, or how to make sure I have cleaned up the cruft when I am done.

When I installed Ubuntu 20.10 on my trusty Lenovo X240, I decided that this time I wanted to document everything so I can recreate my ideal install, and also to make sure the cruft gets cleaned out from time to time. I have modest needs to begin with, so keeping things documented seemed easy enough. In this post I will detail how my current system is set up so I have it to look back on, and maybe you will find it helpful as well.

Start with the right OS

Obviously, if you are looking for stability and support, the best place to start is the LTS version of Ubuntu. But since I was documenting everything and wanted to play with the most recent release, I installed Ubuntu 20.10. I only do a minimal install, however, since, again, I have modest computing needs and the full install includes a lot of applications I just don’t need.

Since I installed a minimal OS I will have to build up the applications that I need.

Applications

I wanted to be able to examine which apps I have installed as time goes on, so I can document what I want to keep and purge what I do not. This was a hard problem to solve, since apt has no easy way to surface this information out of the box. I did some digging (a lot of digging) and finally found a solution that has proven both solid and useful so far. I add this function to my .bashrc so I can list any apps that I have installed:

my-apt-installs () {
    # Compare the packages marked as manually installed against the packages
    # that shipped with the installer; print only the ones I added myself.
    comm -23 <(apt-mark showmanual | sort -u) \
         <(gzip -dc /var/log/installer/initial-status.gz |
           sed -n 's/^Package: //p' | sort -u)
}

It lists anything I have installed with apt and ignores anything that came with the initial system install or was pulled in automatically as a dependency.
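Running the function in a new shell (or after sourcing .bashrc) prints one package per line; the output below is just illustrative and will reflect whatever you have installed:

$ my-apt-installs
curl
git
htop
tmux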

Now with that set up I can begin installing apps. First I install some necessary utilities: htop, curl, openssh-server, vim, tmux, and git. These are the bare minimum I need to get up and running. Technically I don’t absolutely need openssh-server on my laptop, but I like to be able to log into it over ssh from my desktop when I am at home.

sudo apt install curl
sudo apt install openssh-server
sudo apt install vim
sudo apt install tmux
sudo apt install git
sudo apt install htop

With the basics out of the way, I am going to install a few more utilities that are helpful but not necessary to begin work.

First is uptimed, a daemon that keeps a history of system uptimes. Not a big deal for a laptop, but I like having the history so I can evaluate stability and see when changes forced a reboot.

sudo apt install uptimed
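Once the daemon has been running for a while, the uprecords client that ships with uptimed prints the uptime history:

uprecords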

Since I do a lot of writing, mostly in Markdown, it is nice to have a simple way to convert it to other formats such as HTML and PDF. For this I use pandoc.

sudo apt install pandoc
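Converting a Markdown file is then a one-liner; the file names below are just examples, and PDF output additionally needs a LaTeX engine installed:

pandoc notes.md -o notes.html
pandoc notes.md -o notes.pdf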

It is good to have some tools to manipulate images and videos so ffmpeg and imagemagick are installed next.

sudo apt install ffmpeg
sudo apt install imagemagick
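Typical one-off jobs for me look something like this; the file names are just examples:

convert photo.png -resize 50% photo-small.png      # ImageMagick: scale an image to half size
ffmpeg -i clip.mov -c:v libx264 -c:a aac clip.mp4  # ffmpeg: re-encode a video to H.264/AAC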

I use LXD a great deal to build system containers, again to keep my main OS clean when testing software and services, and it is recommended to use ZFS for its storage pools. The Ubuntu 20.10 minimal install does not include the ZFS utilities needed to create ZFS pools, so those get added next.

sudo apt install zfsutils-linux
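Later, once LXD is installed (see the snap packages below), having these utilities in place means a ZFS-backed storage pool can be created for it; the pool name here is only an example:

lxc storage create tank zfs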

Finally, and this one is a recent addition, I no longer use Dropbox to synchronize files between machines. I could always use rsync, but I am lazy, and a simple GUI that handles everything is just easier, so I now use Syncthing. I hadn’t used it before because the documentation made it sound more time-consuming to set up than it really is; I set up three machines running Syncthing in less than 15 minutes.

sudo apt install syncthing
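On Ubuntu the package ships a systemd user unit, so one way to have Syncthing start at login and then open its local web interface is:

systemctl --user enable --now syncthing.service
xdg-open http://127.0.0.1:8384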

That is all the software I install with apt. This is Ubuntu, and Ubuntu relies heavily on snap packages for software installation. That is somewhat controversial, but snaps don’t bother me a great deal. I have three applications I install as snaps.

Like I said earlier, I use LXD for building system containers so I install that next.

sudo snap install lxd

I wrote a simple tutorial on initializing and using LXD a while ago, if you are interested.

The next two apps are useful but not necessary. First, switching audio inputs and outputs can be a pain. An application called Sound Switcher sits as an icon in the GNOME top bar and makes it simple to switch between them when necessary, so it is added next.

sudo snap install indicator-sound-switcher

The last snap package I install serves a single purpose for me. I normally use vim when editing files, but I wanted a simple Markdown previewer. It turns out that Visual Studio Code includes a live Markdown preview, and I am warming to it. I always felt it was too heavy and cumbersome to write code in, but for Markdown with live preview it works quite well.

sudo snap install code --classic

If you are running an LTS version of Ubuntu, I highly recommend installing and setting up Canonical Livepatch to automagically apply kernel security updates. It is a relatively simple install and setup, and it is good practice. You will have to get a Livepatch token, but it is free for individual use; to get one, visit the Canonical Livepatch Service page.

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <livepatch token>
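You can confirm the service is active and see which fixes have been applied with the status subcommand:

sudo canonical-livepatch status --verbose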

Since this is a laptop and I occasionally must work offline, I install youtube-dl to download videos. I install it directly instead of using apt because the apt version is quite old. I did run into one small problem: it expects to run under a python executable, but Ubuntu 20.10 does not provide a plain python command, only python3. To fix it I linked /usr/local/bin/python to /usr/bin/python3, which solved the problem.

sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
sudo chmod a+rx /usr/local/bin/youtube-dl
sudo ln -s /usr/bin/python3 /usr/local/bin/python
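Grabbing a video for offline viewing then looks like this; the URL is a placeholder, and the -o flag just names the file after the video title:

youtube-dl -o '%(title)s.%(ext)s' 'https://www.youtube.com/watch?v=VIDEO_ID'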

As much as I might like to use a different browser, some things simply work better using Google Chrome. At some point it would be nice to only need one browser, like Firefox, but for now I install Chrome.

curl -L https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb -o google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb

GNOME setup

With all of this done, it is time to tweak GNOME. I prefer things to be as stock as possible, but, at least for now, I make four changes.

First, Ubuntu maps launching the terminal to Ctrl+Alt+T. That just seems awkward to me, so I remap it to Super+T.

gsettings set org.gnome.settings-daemon.plugins.media-keys terminal "['<Super>t']"
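Reading the key back confirms the new binding took:

gsettings get org.gnome.settings-daemon.plugins.media-keys terminal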

The last three changes are extensions. I haven’t looked into installing them from the command line, so I use the browser. For this to work, a Firefox add-on plus the native connector package below must be installed to download and activate them. See the GNOME extensions site to locate add-ons for other browsers or distros.

sudo apt install chrome-gnome-shell

Update 2021-11-27: Decided to swap the Caps Lock and Esc keys to save the reach.

dconf write "/org/gnome/desktop/input-sources/xkb-options" "['caps:swapescape']"
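Again, reading the key back verifies the option is set:

dconf read "/org/gnome/desktop/input-sources/xkb-options"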

The three GNOME extensions I use are Emoji Selector, Caffeine, and Clock Override. The GNOME project offers many more, but these are the ones I find most useful.

Conclusion

That is my setup and modifications. I try to keep it clean and as close to the stock install as possible.

Managing LXD Profiles

I have been digging into LXD again and wanted to figure out how to manage and configure containers using LXD profiles. I touched briefly on managing container resources at the end of my last post, Using LXD, but there is so much more you can do with profiles.

LXD comes with a default profile pre-installed; running lxc profile show default displays it, and it looks like this:

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/ubuntu2004

Yours should look similar, though it may not be identical. It is pretty generic. If you do not specify a profile when creating a container, default is the profile that is used.

To get a list of the profiles that currently exist, use the list command:

$ lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 1       |
+---------+---------------------+---------+
| demo    |                     | 0       |
+---------+---------------------+---------+

In this case I have two profiles: default and demo. The “USED BY” column is a quick way to see how many containers use a particular profile. In my case, one container uses the default profile and none use the demo profile. Since nothing is using the demo profile, let’s delete it so we can recreate and configure it later.

$ lxc profile delete demo
Profile demo deleted
$ lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 1       |
+---------+---------------------+---------+

One of the common uses of a custom profile is constraining resources. Let’s create a profile that limits the number of CPU cores and the amount of memory available to a container. First, let’s determine how many cores and how much memory the system has using the nproc and free commands.

$ nproc
4
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7649        1737         156         472        5754        5142
Swap:           975           6         969

In the case of my demo machine I have 4 cores and 8 GiB of RAM. No matter what a container is doing, I never want it to be able to use all of the CPU and RAM of my machine, so I will constrain it to one CPU and 1 GiB of RAM. To do this I have to add constraints to a profile. I could add them to the default profile, but it is conceivable that I may want some containers without these constraints. Instead, I will recreate the demo profile to hold them. To get started I have to either create a new profile or copy an existing one; I will show both methods.

To create a new profile named demo:

$ lxc profile create demo
Profile demo created
$ lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 1       |
+---------+---------------------+---------+
| demo    |                     | 0       |
+---------+---------------------+---------+

To create a new profile by copying an existing profile:

$ lxc profile copy demo demo2
$ lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 1       |
+---------+---------------------+---------+
| demo    |                     | 0       |
+---------+---------------------+---------+
| demo2   |                     | 0       |
+---------+---------------------+---------+

Neither method is better than the other, so use whichever is best for your situation. Now it is time to add our CPU and memory constraints. Just as with creating the profile, there are several ways to do this: constraints can be added one at a time with the profile set command, or the whole profile can be edited in your editor. Both accomplish the same goal. I will first create the CPU constraint using the profile set method.

$ lxc profile set demo limits.cpu 1
$ lxc profile show demo
config:
  limits.cpu: "1"
description: ""
devices: {}
name: demo
used_by: []

I used profile show to verify that the profile set command worked, but you can verify just the constraint using the profile get command.

$ lxc profile get demo limits.cpu
1

With the CPU limit set, let’s set our memory constraint. It works the same way as the CPU constraint. As a matter of fact, all constraints are set this way, and there are far too many configurable items to cover in this post, so take a look at the listing of configuration settings at linuxcontainers.org for more.

$ lxc profile set demo limits.memory "1GiB"
$ lxc profile show demo
config:
  limits.cpu: "1"
  limits.memory: 1GiB
description: ""
devices: {}
name: demo
used_by: []

I mentioned earlier that it is possible to edit the whole profile in order to add all of the configuration at one time:

$ lxc profile edit demo

### This is a YAML representation of the profile.
### Any line starting with a '# will be ignored.
###
### A profile consists of a set of configuration items followed by a set of
### devices.
###
### An example would look like:
### name: onenic
### config:
###   raw.lxc: lxc.aa_profile=unconfined
### devices:
###   eth0:
###     nictype: bridged
###     parent: lxdbr0
###     type: nic
###
### Note that the name is shown but cannot be changed

config:
  limits.cpu: "1"
  limits.memory: 1GiB
description: ""
devices: {}
name: demo
used_by: []

It is possible to remove configuration items as well. Editing the whole profile with profile edit as above works, or items can be removed individually using the unset command:

$ lxc profile unset demo limits.cpu
$ lxc profile show demo
config:
  limits.memory: 1GiB
description: ""
devices: {}
name: demo
used_by: []

After setting limits.cpu back to 1, the demo profile now looks the way I want it:

$ lxc profile show demo
config:
  limits.cpu: "1"
  limits.memory: 1GiB
description: ""
devices: {}
name: demo
used_by: []

I have limited any container using the demo profile to 1 CPU and 1 GiB of RAM. But if I examine my profiles I see that the demo profile is not being used by any containers.

$ lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 1       |
+---------+---------------------+---------+
| demo    |                     | 0       |
+---------+---------------------+---------+
| demo2   |                     | 0       |
+---------+---------------------+---------+

A profile can be applied to a container either when the container is created or later, to an already running container. If we check, we will see that I already have an Ubuntu 20.04 container running, so I will apply the demo profile to that.

$ lxc list
+------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
|    NAME    |  STATE  |        IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| ubuntu2004 | RUNNING | 10.7.249.161 (eth0) | fd42:db50:ff3d:1bfa:216:3eff:feeb:cc92 (eth0) | CONTAINER | 0         |
+------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
$ lxc profile add ubuntu2004 demo
Profile demo added to ubuntu2004
$ lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 1       |
+---------+---------------------+---------+
| demo    |                     | 1       |
+---------+---------------------+---------+
| demo2   |                     | 0       |
+---------+---------------------+---------+

Let’s check if the profile has been applied by running the nproc command in the container:

$ lxc exec ubuntu2004 nproc
4

That isn’t right! It should be 1. So what happened? It appears that you have to restart the container for the new configuration to take effect. Here is the command to restart the container:

$ lxc restart ubuntu2004
$ lxc exec ubuntu2004 nproc
1

Now when we re-run the nproc command it returns one CPU as we expected. If we check the RAM, we should see it is now set to 1GiB:

$ lxc exec ubuntu2004 free
              total        used        free      shared  buff/cache   available
Mem:        1048576       70232      925016         132       53328      978344
Swap:        999420           0      999420

If we look at what profiles are assigned to the container we see that the ubuntu2004 container actually has two profiles applied to it – both default and demo:

$ lxc list --fast 
+------------+---------+--------------+----------------------+----------+-----------+
|    NAME    |  STATE  | ARCHITECTURE |      CREATED AT      | PROFILES |   TYPE    |
+------------+---------+--------------+----------------------+----------+-----------+
| ubuntu2004 | RUNNING | x86_64       | 2021/03/31 14:40 UTC | default  | CONTAINER |
|            |         |              |                      | demo     |           |
+------------+---------+--------------+----------------------+----------+-----------+

It turns out that you can stack profiles on a container, with later profiles overriding settings from the earlier ones.

In this case this is fine, but if you only want one profile applied you can remove the unwanted profile from the container:

$ lxc profile remove ubuntu2004 demo
Profile demo removed from ubuntu2004
$ lxc list --fast 
+------------+---------+--------------+----------------------+----------+-----------+
|    NAME    |  STATE  | ARCHITECTURE |      CREATED AT      | PROFILES |   TYPE    |
+------------+---------+--------------+----------------------+----------+-----------+
| ubuntu2004 | RUNNING | x86_64       | 2021/03/31 14:40 UTC | default  | CONTAINER |
+------------+---------+--------------+----------------------+----------+-----------+

Finally, it is possible to specify the profile to use at container creation:

$ lxc launch ubuntu:20.04 ubuntu --profile demo
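Multiple --profile flags can also be given; they are applied in order, so putting demo last means its limits override anything set by default. The container name here is just an example:

$ lxc launch ubuntu:20.04 stacked-demo --profile default --profile demo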

Containers can be configured in many ways, not just with resource constraints. We could also have applied network settings, firewall settings, and much more, but the process would be the same.
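As a rough sketch of where to go from here, the same profile workflow covers other configuration keys and devices; for example, making containers that use the demo profile start at boot and mounting a host directory into them (the paths and device name are just examples):

$ lxc profile set demo boot.autostart true
$ lxc profile device add demo shared disk source=/home/user/shared path=/mnt/shared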