second post

This is the second post to this example blog. To add new posts, just add files to the posts/ subdirectory, or use the web form.

first post

This is the first post to this example blog. To add new posts, just add files to the posts/ subdirectory, or use the web form.

Bedrock Linux

I am very much a fan of Linux, using it as my primary OS on my computer. Obviously, I have used multiple distributions of it. Each distribution has its own independent software library that is integrated with the package manager and the system as a whole. (Note: I am very much aware that Linux From Scratch and similar exist. I'm talking about the general case, where some form of package manager/management exists.)

This has some advantages:

  • No random downloading of installers/executables from the Internet like on Windows
  • You can browse and search for available software
  • Everything in the repositories follows a single set of standards / policies that the user can apply to any installed program.

All in all, it's a very wonderful user experience. However, it isn't perfect. The repositories provided are always finite. They cannot and will not include every program that exists, nor include variations of included programs. This can very easily become a problem, such as in the following situations:

  • You want a different version of the program than the one available in the repositories.
  • You want a program that simply isn't in the repositories.
  • You want a program that is in the repositories, but was created using options you want to change.

If you enter this situation, there are many ways to manage/deal with it, each having its own trade-offs/side-effects, but today I'm going to focus on one particular case: you are a user on Distro X who has somehow got into one of the three situations described above. While browsing the internet for solutions, you see that a package from Distro Y would get you out of this situation. How do you install that package from Distro Y onto your Distro X installation?

Normally, you simply can't. Distro Y's packages are built to work on Distro Y only; there's no support for Distro X, and you can't even install the package, since Distro X's package manager only supports Distro X's specific format. Even if you did get it to install, you'd have problems with dependencies and other cross-distro differences.

At this point you might be asking, 'What is Bedrock Linux, and how does it come into this?' To which I answer: Bedrock Linux allows you to combine multiple installed distributions. You're not limited to just 'Arch Linux' or just 'Debian'. Instead, you can have both Arch and Debian installed and be using programs from each concurrently. Of course, those two are just examples - you can have any number of distros concurrently installed and functioning.

It should be obvious how this applies to the hypothetical situation above. For someone using Bedrock Linux, the above is mostly a non-issue, as packages from Distro Y can easily be installed - even if most of the packages on your system come from Distro X. The full story of how this is achieved is somewhat complex and involves a decent amount of filesystem manipulation, but to simplify: each distribution/chunk of files is called a stratum in Bedrock Linux terms. Aside from special strata, each stratum is a self-contained installation of a distribution. Combining multiple strata into a single system results in something that not only has a much deeper pool of software to draw upon and use, but can also leverage the strengths provided by each individual stratum.

Under Bedrock Linux, you can install Distro Y packages on a mostly-Distro-X system because that Distro Y package is installed into a complete, functional installation of Distro Y (accessible via a filesystem directory specially maintained by a Bedrock Linux component). There are certainly many other potential applications and use cases for Bedrock Linux, but this is one of the more obvious and immediately useful ones.
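To make this concrete, here's a rough sketch of what the workflow can look like. Treat it as an illustration only: it assumes a recent (0.7-era) Bedrock release, where brl and strat are the main commands (older releases used different tooling), and 'some-package' is a made-up name:

    # Fetch and enable a new Debian stratum alongside whatever is installed:
    sudo brl fetch debian
    # Install a package inside that stratum, using Debian's own apt:
    sudo strat debian apt install some-package
    # Run a program from that stratum from anywhere; Bedrock handles the paths:
    strat debian some-package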

Should you wish to find out more, there's plenty of documentation here.

Moving a Raspberry Pi 3 from Berryboot to just plain Raspbian

For a while now, I've had a Raspberry Pi 3, replacing my original Pi. The Pi, inside my chosen case, looks something like the image below.

[Image: Raspberry Pi 3 with Pibow case]

Originally, I thought that it'd be cool to be able to install/uninstall/update multiple distros on it. NOOBS can do this, but I believe you can't do it while retaining existing data whenever you add/remove OSes/distributions. Instead, I became aware of (and chose) Berryboot. It provided a decent boot menu to select what you wanted to boot, while enabling you to add/remove new items without affecting existing installed ones. It did this by not giving each item its own partition - instead, it stored the initial download as a filesystem image and used AUFS to persist any user-made changes to the downloaded system.

As time passed, I never actually used this functionality - my Pi 3 always booted Raspbian; I never bothered to even install anything else, never mind use/boot it. I continued to use Berryboot, even though I didn't really need it and would have done just fine with a plain Raspbian install, because it caused no issues (that I noticed, anyway).

One day, the time came to reboot my Pi. I had done this multiple times before without any issues. However, on this attempt all I got after the reboot was that 4-pixel rainbow screen, stuck there. Some googling/research on this problem led me to this GitHub issue. It says that after upgrading the installed OS, a reboot may cause the exact same symptoms that I saw.

I had two options:

  • Replace the problem Berryboot files with copies from the installed OS.
  • Somehow get rid of Berryboot and boot Raspbian directly...while preserving the exact state and data of my install of Raspbian.

I chose the second option, reasoning that it'd be simpler and possibly more performant too (no use of AUFS, just direct writes to the underlying media).

Now that I had chosen to remove Berryboot, I had to face the problem of migrating all my data/configuration. Since all my modified data was just a directory on a partition, I couldn't simply use dd to take a copy and place it back after removing Berryboot. I also couldn't simply create a blank partition and copy the existing data into it - only the modifications were stored as normal files/directories, and it was practically certain that some files had not been modified and as such would be missing.

I came up with a plan that would (hopefully) work, and should anyone else need to do this, the steps are below (with a command-level sketch after the list):

  1. Create a tarball of the filesystem (compression is optional, but you likely want it) - make sure it's not stored on the SD card itself, because the card will be erased in the next step.
  2. Download (and extract) the latest release of Raspbian. Use dd (or whatever tool is appropriate) to write the resulting disk image to the SD card. Be very careful when using dd because it's very easy to overwrite the wrong partition and lose data.
  3. Extract the tarball onto the root filesystem of your new Raspbian install. All files that were on your original installation will then be on this one too, while any files absent from the tarball will remain as shipped in the freshly-installed Raspbian.
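For illustration, the plan might look something like the following. Every path and device name here is hypothetical - substitute your own, and triple-check the dd target:

    # 1. Tarball of the modified data, stored somewhere other than the SD card:
    tar -czpf /mnt/external/pi-backup.tar.gz -C /mnt/berryboot-data .

    # 2. Write the fresh Raspbian image to the SD card (check /dev/sdX twice!):
    sudo dd if=raspbian-latest.img of=/dev/sdX bs=4M status=progress conv=fsync

    # 3. Extract the backup over Raspbian's root filesystem, keeping permissions:
    sudo tar -xzpf /mnt/external/pi-backup.tar.gz -C /mnt/raspbian-root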

It took a while, but I successfully performed all three steps (I also took the opportunity to make good use of GParted and resize the root filesystem before the first boot). The resulting system successfully booted and launched all the services that were previously running. However, it was not an exact copy - the SSH keys changed, for example.

Managing dotfiles with vcsh and mr

Over time, a Linux user may customize and configure his environment rather substantially. These modifications are stored in a collection of configuration files/data known as 'dotfiles' (because the first character of many of them is '.'). For multiple reasons, it is very beneficial to track, control and synchronise all of your personal dotfiles. A few example reasons include:

  • Having an additional backup
  • Being able to see their history, how they changed over time
  • Being able to roll back changes if needed (I haven't needed this yet)
  • Being able to use the same set of files across multiple physical/virtual machines
  • Being able to share your configuration with the world, so people can learn from it just like you learn from other people's.

However, there is no single universal method for managing them; instead, there are many tools and approaches one can take. GitHub provides a decent list of programs here, but I intend to summarize the main approaches below. (It may be worth noting that while the methods may not be mutually exclusive, there is one 'main' approach/method per tool, and that is what counts.)

  1. Symlink-driven management involves moving the dotfiles away from their original location, instead creating symbolic links to one or more destinations. There are many ways/approaches of doing this, but the simplest is to just have a single directory be the destination for all the links.
  2. VC (Version Control)-driven management involves less management of the actual dotfiles compared to the other two. Instead of copying or using symbolic links, a version-control system is primarily used to track/manage dotfiles in groups. The original dotfiles are left in place and can be treated just like every other repository. There are multiple methods of implementing this approach, each with its own unique advantages and drawbacks.
  3. Configuration-driven management involves using explicit configuration file(s) to determine exactly which dotfiles are to be managed/tracked, as well as how they are to be tracked, among other things. The key difference between this method and the others is that rather than using interactive commands to manage and modify dotfiles, one or more configuration files are used. Typical formats for this information include YAML/JSON or a purpose-built configuration format. These tools typically, but not exclusively, use symbolic links for the dotfiles.

I have been tracking my dotfiles for a short-to-moderate period of time. I originally started when I read an article about using GNU Stow as the management tool. Stow has some features that make it just as useful for this as a dedicated tool: it supports 'packages', so you can choose to install only part of the dotfiles, and it doesn't make you specify which files to symlink - it just symlinks the entire package. However, it's definitely not perfect: symlinks can be overwritten, moving dotfiles and replicating directory structures sucked, and you could only manage operations from the right directory. (I could also only easily have one VCS repo, which effectively meant private dotfiles couldn't be tracked.)
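For anyone unfamiliar with the Stow approach, a minimal sketch (the 'vim' package name and the ~/dotfiles layout are just examples):

    # Each 'package' is a directory under ~/dotfiles mirroring $HOME's layout,
    # e.g. ~/dotfiles/vim/.vimrc. Stow must be run from that directory:
    cd ~/dotfiles
    stow vim       # creates ~/.vimrc as a symlink into ~/dotfiles/vim/
    stow -D vim    # deletes those symlinks again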

One day, while inspecting my ~/dotfiles, I noticed that the .git directory was missing. I could've seen this as a disaster, but I didn't. I had been thinking about migrating away from Stow for a while, but I never actually did anything - so I took this opportunity. After some reading/googling, I made the decision to use mr and vcsh. vcsh would provide each individual repository, public and private, while mr would be used for higher-level tasks. There are multiple guides to this pair of tools, such as:

When I was migrating, I found the latter link particularly useful due to its detailed explanations of multiple common tasks. However, should you not want to read any of the above links, I will attempt to give an overview of how it all works in practice.

Creating a new repository

  1. Clone/Initialize the local vcsh repository
  2. Update the myrepos(mr) configuration to include that repository
  3. Add the wanted stuff to the vcsh repository
  4. Write/generate a .gitignore and modify as needed
  5. Commit to the vcsh repository and push both sets of changes as needed.
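A rough sketch of those steps, with a hypothetical 'vim' repository and a made-up remote URL (the mr config layout follows the guides above and may differ for you):

    # 1. Initialize a new vcsh repository:
    vcsh init vim

    # 2. Add a stanza for it to the mr configuration, e.g. in
    #    ~/.config/mr/available.d/vim.vcsh (then enable it via config.d):
    #    [$HOME/.config/vcsh/repo.d/vim.git]
    #    checkout = vcsh clone git@example.com:me/dotfiles-vim.git vim

    # 3. Add the wanted files:
    vcsh vim add ~/.vimrc ~/.vim

    # 4. Generate a .gitignore so the rest of $HOME doesn't show as untracked:
    vcsh write-gitignore vim

    # 5. Commit and push:
    vcsh vim commit -m 'Initial commit'
    vcsh vim remote add origin git@example.com:me/dotfiles-vim.git
    vcsh vim push -u origin master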

Updating an existing repository

  1. You can prefix git operations with vcsh and then the repo name to perform them on the repository.
  2. Alternatively, use 'vcsh enter' to go into an environment where git can be used normally.
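For example (again with the hypothetical 'vim' repository):

    # Prefix git operations with 'vcsh <repo>':
    vcsh vim status
    vcsh vim commit -am 'Tweak a setting'

    # Or enter a subshell where plain git targets that repository:
    vcsh enter vim
    git log --oneline
    exit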

Updating all the repositories

  1. Use mr up and let myrepos do the job it was designed to do.

Bootstrapping the dotfiles

(presuming git is installed. If not, install it.)

  1. Install myrepos and vcsh. If there's no distribution package, a manual install is easy (they're just standalone scripts)
  2. Obtain your myrepos configuration.
  3. Use mr up and let myrepos obtain all your repositories as needed.
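As a sketch, the bootstrap boils down to something like this (the package names suit Debian-family distros, and the configuration repository URL is hypothetical):

    # 1. Install the tools (or just drop the two scripts into ~/bin):
    sudo apt install myrepos vcsh

    # 2. Obtain the mr configuration - itself tracked as a vcsh repository here:
    vcsh clone git@example.com:me/dotfiles-mr.git mr

    # 3. Let myrepos clone everything else:
    mr up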

If you think the above workflow looks interesting, I recommend you have a nice read of the above links - especially the last one, as I found it very useful. However, I have not yet moved my entire collection of dotfiles over, and I still have some interesting problems/caveats to tackle.

Firstly, while using a (private) Git repository to track my SSH/GPG data is useful, certain files have special filesystem permissions which Git does not preserve. While this can be solved with a chmod or two, it may grow more difficult if I need more of these files in the future - though I might be able to automate it using mr's 'fixups' functionality.
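If I do go the fixups route, I imagine the mr stanza would look something like this (an untested sketch; the 'ssh' repository name and remote URL are examples):

    [$HOME/.config/vcsh/repo.d/ssh.git]
    checkout = vcsh clone git@example.com:me/dotfiles-ssh.git ssh
    fixups = chmod 700 ~/.ssh && chmod 600 ~/.ssh/config ~/.ssh/id_*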

Secondly, this is more of an observation than a problem: I'm currently using an Apache-style configuration involving both 'available.d' and 'config.d'. This works and is flexible, but it'd be simpler to have only a single directory and make the equivalent information part of the configuration itself.
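In that Apache-style layout, enabling a repository is just a symlink (paths as in the guides I followed; yours may differ):

    # Stanzas live in available.d; symlinking one into config.d enables it:
    ln -s ../available.d/vim.vcsh ~/.config/mr/config.d/vim.vcsh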

Thirdly, bootstrapping from a completely clean slate is rather complicated. Certain repositories may depend on others to work or be in the correct location. Then there's the problem of access to private repositories - perhaps HTTP(S) could be used to download SSH keys using pre-entered cached credentials? A similar but lesser problem exists with GPG. Public repositories have no issues with this - if need be, they can have their master remote changed afterwards.

Anyway, that's all for now. If and when I solve the above issues, I'll make sure to explain and blog about each of my solutions. Until then, I don't expect this topic to come up again.

How I almost successfully installed Gentoo Linux

I'm not a distro-hopper by any means, but even so I have tested/tasted a number of Linux distributions. Primarily, these have been in the Debian family: X/K/Ubuntu, Debian itself, Raspbian and likely more that I'm forgetting. I was recommended Arch Linux and used it very happily for a good while (experiments with GNU Guix on it notwithstanding), until my hard drive began dying one day. I had heard that Tumbleweed was also rolling-release, and provided interesting rollback functionality out of the box using BTRFS and Snapper, so I installed it on a spare USB stick. Recently, I was thinking about Gentoo Linux - mainly about the perhaps not-entirely-accurate idea that it would take substantial time to install due to the requisite amounts of compiling. I also thought that the difficulty level of the installation was roughly equivalent to that of Arch. I wanted to see if my thoughts/perceptions were right, so I planned to install and migrate to Gentoo. This led to a sequence of events that can be divided into approximately three parts.

Part I: The actual installation of Gentoo

Much like Arch Linux, Gentoo has a comprehensive wiki filled with documentation covering not only the installation procedures but also a large number of other things that are needed post-install. This is a very good thing, because documentation is very useful when installing either distribution (especially if you haven't done it before). As such, I mostly ended up following the Gentoo Handbook, which provides a well-written resource much like Arch's own installation guide (except it seemed more organized and structured into steps). Seeing as I was going to install Gentoo onto an existing filesystem (as a BTRFS subvolume) and was installing from an existing Linux rather than a CD, I could ignore three segments of the first part. The remaining installation steps looked like this:

  1. Download (and extract) a precompiled base system (a stage3 tarball). This stage was very easy - only a couple of commands to execute, with no decisions to make.
  2. Set appropriate compilation settings. At this point I needed to select what compilation flags I would be using, as well as decide how many parallel jobs make should be running. I decided to go with the default set of flags, only tweaking them to target GCC towards my specific CPU type (-march=amdfam10), and to follow the recommendation for job count so that make could run up to 5 tasks in parallel. This was a very good decision - for one thing it made sure that compiling felt very fast, and it also ensured that all of my CPU's capacity could be used by the process if needed. (A make.conf sketch follows the list.)
  3. Enter the installed base system and configure/update Portage (Gentoo's package manager). This step was also rather easy - a bit of copying files around and a few commands. I selected the generic 'desktop' profile, not seeing a more accurate one.
  4. Rebuild the world. Now that I had selected my profile, I needed to update my system to include the changed settings/flags that came with the new profile. Additionally, I needed to install the additional software selected by my profile. In short, what I (or Gentoo's Portage) actually did could be succinctly explained with this image:

[Image: COMPILE ALL THE THINGS]

I expected that this would be the longest part of the installation, and that was a correct expectation - compiling 164 packages does take some time. However, it didn't take as much time as I imagined it would; things felt pretty fast, actually. I attribute this unexpected speediness to the benefits of passing -j5 to make - allowing 4 files to be compiled at once (each using an entire CPU core) sped things up very nicely, while a 5th task meant there was almost always something ready to run when a core was otherwise idle.

  5. Configuration of USE flags/locale/timezone. I decided not to really touch the USE flags immediately, as they could easily be modified later as and when I needed to. I set the locale and timezone in accordance with my physical location (the UK).
  6. Compiling and installation of the kernel. I decided that rather than start with a custom kernel configuration that may or may not boot, I would instead start with Genkernel, which would provide me a base from which to customise my own kernel. Considering that the result was a rather generic kernel, it was a bit surprising that it only took an hour or so to compile and install the kernel from scratch.
  7. General system configuration. In this stage, I wrote /etc/fstab as well as configuring the network (simply automatically running DHCP on the only ethernet interface). I also assigned the system a hostname, and made sure that OpenRC used the correct keymap and started the network at boot-time. Before moving on to bootloader configuration, I selected what initial optional services I wanted installed and running at boot. These included a system logger, a cron daemon and mlocate.
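For reference, the compilation settings from step 2 live in /etc/portage/make.conf and would have looked roughly like this (a sketch reconstructed from memory; the exact variable names may differ between Gentoo versions):

    # Target GCC at my specific CPU, keeping the usual optimization defaults:
    CFLAGS="-march=amdfam10 -O2 -pipe"
    CXXFLAGS="${CFLAGS}"
    # Let make run up to 5 jobs in parallel:
    MAKEOPTS="-j5"

The 'rebuild the world' in step 4 then comes down to a single Portage invocation:

    emerge --ask --update --deep --newuse @world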

The next stage was bootloader configuration, but I think discussion of that would fit better in Part II. This post is getting somewhat long, so that'll be in another post in a short while.

update

I never got around to finishing the previous post, but I've decided to start a combination blog/wiki thing using ikiwiki. This wiki/blog isn't the best, but the plugins are useful and everything works.

I've imported all old posts, and hope to periodically add new blog posts and likely other types too.
