As I was setting up my new Linux machine with two disks, I decided to forgo my favorite Linux Mint and give Ubuntu another try. Main reason? ZFS of course.
Ubuntu already has a quite decent guide for ZFS setup but it’s slightly lacking in the mirroring department. So, here I will list steps that follow their approach closely but with slight adjustments, as I want not only an encrypted setup but also a proper ZFS mirror. If you need a single-disk ZFS setup, stick with the original guide.
After booting into the installation, we can go for Try Ubuntu and open a terminal. My strong suggestion would be to install the openssh-server package first and connect to the machine remotely because that allows for copy/paste:
passwd
Changing password for ubuntu.
(current) UNIX password: (empty)
Enter new UNIX password: password
Retype new UNIX password: password
passwd: password updated successfully
sudo apt install --yes openssh-server
Regardless of whether you continue directly or connect via SSH (username is ubuntu), the first task is to get onto the root prompt and never leave it again. :)
sudo -i
To get the ZFS on, we need an Internet connection and an extra repository:
sudo apt-add-repository universe
apt update
Now we can finally install ZFS, a partitioning utility, and an installation tool:
apt install --yes debootstrap gdisk zfs-initramfs
First we clean the partition table on disks followed by a few partition definitions (do change ID to match your disks):
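A sketch along these lines would do (sizes and type codes are illustrative and an EFI partition is omitted; only partition 4, the unencrypted boot partition, and the large data partition matter for what follows):
sgdisk --zap-all /dev/disk/by-id/ata_disk1
sgdisk --zap-all /dev/disk/by-id/ata_disk2
# partition 4: small boot partition (ext2, unencrypted); partition 1: rest of the disk for LUKS+ZFS
sgdisk -n4:0:+768M -t4:8300 -c4:"Boot" /dev/disk/by-id/ata_disk1
sgdisk -n1:0:0 -t1:8300 -c1:"Data" /dev/disk/by-id/ata_disk1
sgdisk -n4:0:+768M -t4:8300 -c4:"Boot" /dev/disk/by-id/ata_disk2
sgdisk -n1:0:0 -t1:8300 -c1:"Data" /dev/disk/by-id/ata_disk2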
There is an advantage to creating fine-grained datasets as the official guide instructs, but I personally don’t do it. Having one big free-for-all pile is OK for me - anything of any significance I keep on my network drive anyhow, where I have a properly set up ZFS with rights, quotas, and all other goodies.
Since we are using LUKS encryption, we do need to mount the 4th partition too. We’ll do it for both disks and deal with syncing them later:
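The pool itself sits on top of LUKS, so both data partitions get encrypted first and the mirror is then built over the opened mappers. A sketch (mapper names, ashift, and compression settings here are illustrative choices):
cryptsetup luksFormat /dev/disk/by-id/ata_disk1-part1
cryptsetup luksFormat /dev/disk/by-id/ata_disk2-part1
cryptsetup luksOpen /dev/disk/by-id/ata_disk1-part1 luks1
cryptsetup luksOpen /dev/disk/by-id/ata_disk2-part1 luks2
# mirrored pool over both LUKS mappers, temporarily rooted under /mnt/rpool
zpool create -o ashift=12 -O compression=lz4 -O mountpoint=/ -R /mnt/rpool rpool mirror /dev/mapper/luks1 /dev/mapper/luks2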
mkdir /mnt/rpool/boot
mke2fs -t ext2 /dev/disk/by-id/ata_disk1-part4
mount /dev/disk/by-id/ata_disk1-part4 /mnt/rpool/boot
mkdir /mnt/rpool/boot2
mke2fs -t ext2 /dev/disk/by-id/ata_disk2-part4
mount /dev/disk/by-id/ata_disk2-part4 /mnt/rpool/boot2
Now we can finally start copying our Linux (do check for the current release codename using lsb_release -a). This will take a while:
debootstrap cosmic /mnt/rpool/
Once done, turn off the devices flag on the pool and check whether data has been written or we messed up the paths:
zfs set devices=off rpool
zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool   218M  29.6G   217M  /mnt/rpool
Since our system is bare, we do need to prepare a few configuration files:
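What exactly goes in depends on your setup; a minimal sketch (the hostname is a placeholder and the codename should match the one used with debootstrap):
echo "mybox" > /mnt/rpool/etc/hostname
echo "127.0.1.1 mybox" >> /mnt/rpool/etc/hosts
# apt sources for the new system
cat << EOF > /mnt/rpool/etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu cosmic main universe
deb http://security.ubuntu.com/ubuntu cosmic-security main universe
deb http://archive.ubuntu.com/ubuntu cosmic-updates main universe
EOF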
As grub was probably updated along the way, we need to both correct the config (only if we have a bare dataset) and copy the files to the other boot partition (this has to be repeated on every grub update):
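A sketch of both steps, assuming we are working inside the chroot where the two partitions are mounted as /boot and /boot2 (check the actual root=ZFS= entry in your grub.cfg rather than trusting this pattern blindly):
# with a bare rpool dataset, grub should boot from root=ZFS=rpool rather than a child dataset
grep 'root=ZFS' /boot/grub/grub.cfg
# mirror the boot files onto the second disk's boot partition
cp -a /boot/. /boot2/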
Memory on a desktop PC has been a solved problem for a while now. You have a certain quantity of it and you rarely really run out of it. Even when you do, there is virtual memory to soften the blow at the cost of performance. Enter the cloud…
When you deal with mini virtual machines running in a cloud, they quite often have a modest memory specification - 1 GB or even less is the usual starting point. Fortunately, they run Linux so they don’t need much memory - except when they do.
What to do if you need just a bit more memory on an already configured machine and you really don’t want to deal with the reboots required for proper upscaling? Well, you can always add a swap file.
First, you create a file (I’ll call mine swapfile.sys for sentimental reasons) with the additional 1 GB (or whatever value you want):
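One way to do it, with dd (1024 one-megabyte blocks; the chmod keeps mkswap from complaining about world-readable permissions):
dd if=/dev/zero of=/swapfile.sys bs=1M count=1024
chmod 600 /swapfile.sys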
Then you format this file as a swap and tell the system to use it:
mkswap /swapfile.sys
swapon /swapfile.sys
Since this disappears upon reboot, you might also want to make it permanent by adding it to fstab. This step is a bit controversial since you should really think about a bigger machine if you are always in need of the extra memory:
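The entry itself is a single line appended to /etc/fstab, for example:
echo "/swapfile.sys none swap sw 0 0" >> /etc/fstab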
From an .ini configuration file I want to get both the IP and DNS fields of one section - e.g. Bravo. I did find a solution that was rather close to what I wanted but I didn’t like the fact that all entries ended up in an associative array.
So I decided to make a similar solution, adjusting the output to show only a single section and giving it a prefix to avoid accidental conflicts with other script variables. Here is the one-liner I came up with:
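The one-liner, reconstructed here from the description below (the config.ini sample is assumed for illustration; its Bravo values match the output further down):
cat << 'EOF' > config.ini
[Alpha]
IP="1.1.1.1"
DNS="alpha.example.com"

[Bravo]
IP="2.2.2.2"
DNS="bravo.example.com"
EOF
awk -v TARGET=Bravo -F ' *= *' '/^\[.*\]$/ { gsub(/^\[|\]$/, ""); SECTION=$0; next } (SECTION == TARGET) && (NF == 2) { print "FIELD_" $1 "=" $2 }' config.ini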
The first argument (-v TARGET=Bravo) just specifies which section we’re searching for. I am keeping it outside as that way I can use another variable (e.g. $MACHINE) without dealing with escaping inside the awk statement.
The second argument (-F ' *= *') is actually a regex field separator that splits each line on the equals sign while swallowing any spaces around it.
The third argument is what makes it all happen. The code matches section lines and stores the name in the SECTION variable. Each line with a name/value pair is further checked and printed only if the target section name matches. Upon printing, a “FIELD_” prefix is added before the name, making the whole line essentially a variable declaration.
The fourth and last argument is simply a file name.
This particular command example will output the following text:
FIELD_IP="2.2.2.2"
FIELD_DNS="bravo.example.com"
How do you use it in a script? Simply source the result of awk and you get to use the .ini fields like any other bash variable.
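For example, with process substitution (reusing the same one-liner, this time taking the section name from a $MACHINE variable as mentioned above):
MACHINE=Bravo
source <(awk -v TARGET="$MACHINE" -F ' *= *' '/^\[.*\]$/ { gsub(/^\[|\]$/, ""); SECTION=$0; next } (SECTION == TARGET) && (NF == 2) { print "FIELD_" $1 "=" $2 }' config.ini)
echo "IP is $FIELD_IP and DNS is $FIELD_DNS"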
Sometimes in scripting you don’t get to choose your input format. For example, you might get data in multiple lines when you actually need it all on a single line. For such occasions you can go with:
cat file | awk '{printf "%s", $0}'
Likewise you might want lines separated by a space. Slight modification makes it happen:
cat file | awk '{printf "%s ", $0}'
Lastly, you might want to split a single line into multiple ones (handy for base64 printouts):
cat file | fold -w72
PS: Check fmt if you need word-aware line splitting.
A common key management method seen in Linux scripts is copying the private and public SSH keys around. While not necessarily the best way to approach things, having your private SSH key around does come in handy when easy automation is needed.
However, there is no need to copy the public key if you are already copying the private one. Since the private key contains everything, you can use ssh-keygen to extract the public key from it:
What is the advantage, you ask? Isn’t it easier just to copy two files instead of copying one and dealing with shell scripting for the second?
Well, yes. However, it is also more error-prone as you must always keep the private and public keys in sync. If you replace one and by accident forget to replace the other, you will be chasing your tail in no time.
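Assuming the key sits at the usual location, regenerating the public part is a single command:
ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub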
You should never depend on root login when dealing with an OpenStack cloud. Pretty much all pre-prepared cloud images have it disabled by default. Ideally all your user provisioning should be done as part of the cloud-init procedure and there you should either create your own user or work with the default cloud-user and the key you provisioned. But what if you are troubleshooting some weird (network) issue and you need a console login for your image?
Well, you can always re-enable the root user by directly modifying the qcow2 image.
To edit qcow2 images, we first need to install libguestfs-tools. On my Linux Mint, that requires the following:
sudo apt-get install libguestfs-tools
Of course, if you are using yum or some other package manager, adjust accordingly. :)
Once installation is done, we simply mount the image into /mnt/guestimage and modify the shadow file to assign a password (changeme in this example) to the root user:
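A sketch of the procedure, assuming the image file is named rhel.qcow2 (the file name is a placeholder; openssl passwd -6 needs a reasonably recent OpenSSL, use -1 on older systems):
sudo mkdir -p /mnt/guestimage
sudo guestmount -a rhel.qcow2 -i /mnt/guestimage
# generate a SHA-512 hash for "changeme" and put it into root's password field (the second field) of the shadow file
HASH=$(openssl passwd -6 changeme)
sudo sed -i "s|^root:[^:]*:|root:${HASH}:|" /mnt/guestimage/etc/shadow
sudo guestunmount /mnt/guestimage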
All nodes installed from this image will now allow you to use root login with password authentication. Just don’t forget to remove this change once you’re done troubleshooting.
PS: While I use a Red Hat image in the example, the procedure also applies to CentOS and most other cloud distributions.
As I tried to upgrade Linux Mint from 18.3 to 19, all went kaboom and I was forced to decide whether I wanted to reinstall the OS from scratch or go and try to fix it. Since I was dealing with a virtual machine, reinstalling it from scratch seemed like the better idea.
Once all was installed, I wanted to copy some files from the old volume. As full disk encryption was present, I knew a slightly more complicated mount would be needed. In theory, it should all work with the following commands:
The issue was the volume manager’s dislike for both my current installation and the previous one having exactly the same volume group name - mint-vg - and thus refusing to even consider doing anything with my old disk.
Before doing anything else, a rename of the volume group was required. As the names are equal, we will need to know the UUID of the secondary volume. The easiest way to distinguish the old and new volumes is by looking at the Open LV value. If it’s 0, we have our target.
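Assuming the old disk shows up as /dev/sdb with the encrypted LVM on its fifth partition (and the Mint-default root logical volume name), the theory goes roughly like this:
sudo cryptsetup luksOpen /dev/sdb5 encrypted_mapper
sudo vgchange -ay
sudo mkdir -p /mnt/old
sudo mount /dev/mint-vg/root /mnt/old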
sudo cryptsetup luksOpen /dev/sdb5 encrypted_mapper
sudo vgdisplay
--- Volume group ---
VG Name               mint-vg
Cur LV                2
Open LV               0
VG UUID               Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn
sudo vgrename Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn mint-old-vg
Processing VG mint-vg because of matching UUID Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn
Volume group "Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn" successfully renamed to "mint-old-vg"

sudo vgchange -ay
2 logical volume(s) in volume group "mint-vg" now active
2 logical volume(s) in volume group "mint-old-vg" now active
With the volume finally activated, we can proceed with mounting the old disk:
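For example (the root logical volume name is the Mint default and the mount point is arbitrary):
sudo mkdir -p /mnt/old
sudo mount /dev/mint-old-vg/root /mnt/old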
After a failed yum upgrade (darn low memory) I noticed my CentOS NTP server was not booting anymore. A look at the console showed the progress bar still loading, but pressing Escape revealed the real issue: Failed to load SELinux policy, freezing.
The first thing in that situation is to try booting without SELinux, and the easiest way I found to accomplish this was pressing e on the boot menu and then adding selinux=0 at the end of the line starting with linux16. Continuing boot with Ctrl+X will load CentOS, but with SELinux disabled.
As I don’t actually run my public-facing servers without SELinux, it was time to fix it. Since I didn’t have the package before, I installed selinux-policy-targeted, but I would equally use reinstall if the package was already present. In any case, running both doesn’t hurt:
Finally, we need to let the system know SELinux labeling should be reapplied. This can be done by creating a special .autorelabel file in the root directory, followed by a reboot:
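Both variants, for reference:
sudo yum install selinux-policy-targeted
sudo yum reinstall selinux-policy-targeted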
sudo touch /.autorelabel
sudo reboot
During reboot SELinux will reapply all labeling it needs and we can enjoy our server again.
When you’re dealing with a lot of Linux servers, having a Linux client really comes in handy. My setup consisted of Linux Mint 18 and I could perform almost every task. I say almost because one task was always out of reach - viewing the HP iLO console.
Two options were offered there - ActiveX and Java. While ActiveX had obvious platform restrictions, the multi-platform promise of Java made its absence a bit of a curiosity. A quick search on the Internet resolved that curiosity - Firefox version 53 and above dropped NPAPI plugin support and HP was just too lazy and Windows-centric to ever replace it. However, Firefox 52 still has Java support and that release is even still supported (albeit not after 2018). So why not install it and use it for the Java iLO console?
First we need to download Firefox 52 ESR - the latest version still allowing for the Java plugin. You can download it from Mozilla but do make sure you select release 52 and the appropriate build for your computer (64-bit or 32-bit).
With the release downloaded, we can install it manually into a separate directory (/opt/firefox52) so as not to disturb the latest version. In addition to Firefox, we’ll also need the IcedTea plugin installed:
Of course, just installing it is worthless if we cannot start it. For this, having a desktop entry is helpful. I like to use a separate profile for it, as that makes running the newest release and this one side by side possible; a sketch of such an entry follows below. After this is done you’ll find “Firefox 52 ESR” right next to the normal Firefox entry.
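A sketch of the manual install, assuming the downloaded archive is firefox-52.9.0esr.tar.bz2 (adjust the name to whatever you actually downloaded; icedtea-plugin is the IcedTea package name on Mint/Ubuntu):
sudo mkdir -p /opt/firefox52
sudo tar xjf firefox-52.9.0esr.tar.bz2 -C /opt/firefox52 --strip-components=1
sudo apt-get install icedtea-plugin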
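An entry along these lines, saved for example as ~/.local/share/applications/firefox52.desktop, would do (the profile name is arbitrary; --no-remote keeps it from attaching to an already running Firefox):
[Desktop Entry]
Type=Application
Name=Firefox 52 ESR
Exec=/opt/firefox52/firefox --no-remote -P firefox52esr
Icon=firefox
Categories=Network;WebBrowser;
If the profile does not exist yet, Firefox will open the profile manager on first start so you can create it.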
Running a recent CentOS update on a machine with 512 MB of RAM caused yum to run out of memory. Thinking nothing of it, I stopped it to see what could be done. After stopping all services I was greeted with “Warning: RPMDB altered outside of yum” and “Found 93 pre-existing rpmdb problem(s), ‘yum check’ output follows”.
After trying a lot of things, I found the one that works. Removing the older package without removing its dependencies and then reinstalling the newer one worked like a charm:
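For example, if yum check reports a duplicated somepackage (the name and version here are purely illustrative):
sudo rpm -e --nodeps somepackage-1.0-1.el7   # remove the older duplicate, skipping dependency checks
sudo yum reinstall somepackage               # pull the newer version back in cleanly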