FreeBSD and pg_upgrade

Thank you kreynolds, your post makes my current job look dead easy. I want to upgrade PostgreSQL from 9.3.12 to 9.5.4, and after checking through all of the release notes between the two versions, I have narrowed the relevant ones down to the two biggest releases: 9.4 and 9.5 (thanks in part to our devs not using the darkest corners of PostgreSQL features… yet).

A side note: PostgreSQL is a shining example of how to do release notes. All in one place, linked properly between versions, available back as far as the eye can see, to pre-1.0 in the 1990s!

Another win for documentation: the FreeBSD handbook. Curated by members of the project, a simple URL, and quality information.

I could use pg_dumpall > backup.sql, upgrade the package, then psql -f backup.sql postgres, or do it in place, and faster, with pg_upgrade. FreeBSD doesn’t allow both the old and new packages to be installed together, so enter jails.

# jailroot=/usr/tmp/pg_upgrade
# bsdinstall jail $jailroot
# pkg -r $jailroot install postgresql93-server

In the dialog boxes, choose a nearby mirror and no extra components. Wait for the installation to complete.
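
A quick sanity check (my own addition) that the 9.3 binaries landed where pg_upgrade will later look for them:

# $jailroot/usr/local/bin/postgres --version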

# su pgsql -c 'pg_dumpall -c | bzip2' > /usr/tmp/pgdump.sql.bz2
# service puppet stop
# service postgresql onestop
# pkg delete postgresql93-client postgresql93-contrib postgresql93-server
# mv /usr/local/pgsql/data{,.93}
# pkg install postgresql95-client postgresql95-contrib postgresql95-server
# service postgresql oneinitdb
# pg_controldata -D /usr/local/pgsql/data.93

“Latest checkpoint location” needs to be the same on the master and its replication slaves. pg_upgrade(1) has good information on the 16-step process.
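
To compare, the value can be pulled from each server’s data directory like so (the grep is my own addition):

# pg_controldata -D /usr/local/pgsql/data.93 | grep 'Latest checkpoint location'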

# su -l pgsql -c "pg_upgrade -b $jailroot/usr/local/bin -B /usr/local/bin -d /usr/local/pgsql/data.93 -D /usr/local/pgsql/data -j 16 -k --check"
Checking for reg* system OID user data types       fatal
Your installation contains one of the reg* data types in user tables.
These data types reference system OIDs that are not preserved by
pg_upgrade, so this cluster cannot currently be upgraded. You can
remove the problem tables and restart the upgrade. A list of the problem
columns is in the file:
    tables_using_reg.txt
[root@flora ~]# wc -l ~pgsql/tables_using_reg.txt 
 38 /usr/local/pgsql/tables_using_reg.txt

Bother. I can’t use pg_upgrade. Oh well, I’ll talk to the devs for next time, and get on with dump and restore.
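
For completeness, restoring that dump into the freshly initialised 9.5 cluster should be roughly this (untested as written):

# service postgresql onestart
# bzcat /usr/tmp/pgdump.sql.bz2 | su pgsql -c 'psql postgres'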

PostgreSQL backup and replication systems

There are too many PostgreSQL backup and replication systems for me to keep track of in my head, so here’s a list, with a few details relevant to me. The high availability matrix helped, and I found the BDR performance discussion helpful, even if I didn’t pay close attention to which one is fastest.

Name | Installation | Summary
BDR | from source | Custom psql binary, asynchronous multi-master replication, high performance
Bucardo | ports tree | Asynchronous multi-master or master-slave replication
OmniPITR | github | Scripts to manage and monitor WAL replication, hot backups from master or slave servers
pg-rman | ports tree |
pgbarman | ports tree | Hot physical backup, point-in-time recovery management
pgbarman plugin: pgespresso | ports tree | Enables backup from standby server
pglogical | ports tree | Logical (cf. physical) master-slave replication, PostgreSQL >= 9.4, high performance, flexible, no hot standby state
pgpool-II | ports tree | Synchronous multi-master replication, requires conflict resolution
postgresql hot standby | built in |
repmgr | ports tree | Manage and monitor replication and failover to standby server
Slony-I | ports tree |
Skytools, Londiste | ports tree | Asynchronous master-slave replication

I reckon pgbarman plus pgespresso plus repmgr (all from 2ndQuadrant) will do what I want. OmniPITR probably would too, with what looks like an easier setup but potentially fewer restore options.

How to test a FreeBSD port

I am testing a new port written by Palle Girgensohn (https://github.com/girgen). He sent me the link https://github.com/girgen/freebsd-ports/tree/master/sysutils/filebeat to test.

I’ve never done this before, so I started thinking of how I might do so.

https://github.com/girgen/freebsd-ports is a fork of https://github.com/freebsd/freebsd-ports, which in turn appears to be the mainline FreeBSD ports tree, so I thought it would be possible to just get sysutils/filebeat into my own mainline ports tree (which I maintain with portsnap). Combining my svn knowledge and git ignorance, I tried this:

root@travis:/usr/ports/sysutils # git clone https://github.com/girgen/freebsd-ports/tree/master/sysutils/filebeat filebeat
Cloning into 'filebeat'...
fatal: repository 'https://github.com/girgen/freebsd-ports/tree/master/sysutils/filebeat/' not found

A bit of searching tells me I can’t simply do a subtree clone or checkout. I decided to do this:

root@travis:~ # mv /usr/ports /usr/ports.bak
root@travis:~ # git clone https://github.com/girgen/freebsd-ports /usr/ports

This worked, but (unsurprisingly) took a long time. I could then test the new port with the usual commands:

root@travis:~ # make -C /usr/ports/sysutils/filebeat install

So, I’ve achieved my short-term goal, but it has left me wondering: how do other people do this? People who regularly test new ports, or do similar work against anything other than the mainline ports tree, surely have more streamlined practices. Somewhat complicated git commands to do a ‘sparse checkout’, like the sketch below? Is it normal practice that Palle Girgensohn provided me a fork of the entire ports tree? I could ask him to do it differently. Hmm, I wonder if I could fork a subset of his fork, and then clone my entire ‘subfork’.
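
For what it’s worth, my best guess at the sparse checkout route is the following untested sketch; the /tmp location and the final copy into the mainline tree are my own invention.

# shallow clone with nothing checked out, then enable sparse checkout
git clone --no-checkout --depth 1 https://github.com/girgen/freebsd-ports /tmp/girgen-ports
cd /tmp/girgen-ports
git config core.sparseCheckout true
# list only the paths to populate in the working tree
echo 'sysutils/filebeat/*' > .git/info/sparse-checkout
git checkout master
# drop the single port into the mainline tree and build as usual
cp -Rp sysutils/filebeat /usr/ports/sysutils/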

Thoughts welcome.

Increase disk space in FreeBSD

I use gpart to manage disk partitions in FreeBSD, because it works, and is much easier than the old bsdlabel shenanigans. Increasing the size of the last partition on a disk is easy:

  • Power down
  • Resize the disk (e.g. increase the virtual machine’s disk allocation)
  • Boot into single user mode and run:
gpart recover da0      # rewrite the backup GPT header, now that the disk end has moved
gpart show -p da0      # confirm the free space is visible
gpart resize -i 5 da0  # grow partition index 5 (da0p5) into the free space
gpart show -p da0      # confirm the new partition size
growfs /dev/da0p5      # grow the UFS filesystem to fill the partition
  • Reboot

mfsBSD tweaks to help automation

I PXE boot mfsBSD to bootstrap FreeBSD and Puppet installation, because it is small, and it works. In the past I used the i386 version because it’s smaller, but as of 10.1 the difference is negligible. Some of its defaults get in the way of automation, so I change them:

  • Mount the ISO, clone its contents, mount the root filesystem
mkdir dist
fetch -o dist http://mfsbsd.vx.sk/files/iso/10/amd64/mfsbsd-10.1-RELEASE-amd64.iso
iso_image=dist/mfsbsd-10.1-RELEASE-amd64.iso
mkdir mfsiso.mnt
# attach the ISO to a memory disk and mount it
mfs_iso_dev=`sudo mdconfig -a -t vnode -f $iso_image`
sudo mount_cd9660 /dev/$mfs_iso_dev mfsiso.mnt
mkdir isocontents
cp -Rp mfsiso.mnt/* isocontents/
# unpack and mount the root filesystem image for editing
gunzip isocontents/mfsroot.gz
mkdir mfsroot.mnt
mfs_root_dev=`sudo mdconfig -a -t vnode -f isocontents/mfsroot`
sudo mount /dev/$mfs_root_dev mfsroot.mnt
  • Make desired modifications
### autologin ###
sudo sed -i '' -e 's/:ht:np:/:al=root:ht:np:/' mfsroot.mnt/etc/gettytab
### .login automatic operations ###
sudo chmod g+w mfsroot.mnt/root/{,.login}
cat > mfsroot.mnt/root/prepare.sh <<__eof_prepare__
#!/bin/sh
puppet_server=puppet
mount_point=/mnt/deploy
ping_response=-1
while [ "0" != "\$ping_response" ]; do
  echo Waiting to let network connections settle ...
  sleep 1
  ping -qc 1 \$puppet_server > /dev/null
  ping_response=\$?
done
mkdir -p \$mount_point
mount \$puppet_server:/usr/exports/deploy \$mount_point
ln -sf \$mount_point/deploy.sh ./
__eof_prepare__
chmod ug+x mfsroot.mnt/root/prepare.sh
sudo chown root mfsroot.mnt/root/prepare.sh
cat > mfsroot.mnt/root/.login <<__eof_login__
/root/prepare.sh \$tty && /root/deploy.sh
__eof_login__
sudo chmod g-w mfsroot.mnt/root/{,.login}
echo 'autoboot_delay="2"' >> isocontents/boot/loader.conf
  • Unmount and repackage the image to a new ISO[1]
sudo umount mfsroot.mnt
sudo mdconfig -d -u `echo $mfs_root_dev | sed 's/md//'`
gzip isocontents/mfsroot
boot_sector=`isoinfo -d -i $iso_image | grep Bootoff | awk '{print $3}'`
dd if=$iso_image bs=2048 count=1 skip=$boot_sector of=isocontents/boot.img
mkisofs -J -R -no-emul-boot -boot-load-size 4 -b boot.img -o new_image.iso isocontents/
sudo mv -i new_image.iso /tftpboot/images/mfsbsd10/mfsbsd-10.1-RELEASE-amd64.iso
  • Clean up
sudo umount mfsiso.mnt
sudo mdconfig -d -u `echo $mfs_iso_dev | sed 's/md//'`
rmdir mfsiso.mnt
rmdir mfsroot.mnt

[1] Thanks to this old blog post for tips on getting ISOs created.

MythTV Migration: Remote Frontend

This is the third post in my MythTV migration and upgrade efforts. Here are links to the first and second posts.

The mythfrontend wiki page tells me I need to make sure the MySQL server will accept connections from the remote frontend machine.

orange@frontend:~$ mysql -u mythtv -h tempbackend.domain
ERROR 1130 (HY000): Host 'x.x.x.x' is not allowed to connect to this MySQL server
$ ls /usr/local/share/mysql/*.cnf
/usr/local/share/mysql/my-huge.cnf		/usr/local/share/mysql/my-medium.cnf
/usr/local/share/mysql/my-innodb-heavy-4G.cnf	/usr/local/share/mysql/my-small.cnf
/usr/local/share/mysql/my-large.cnf
$ sudo cp /usr/local/share/mysql/my-large.cnf /usr/local/etc/my.cnf
$ sudo vim /usr/local/etc/my.cnf

In my.cnf, I set bind-address to tempbackend’s IP so that MySQL listens on the network rather than only on localhost. I also need to grant the mythtv user access from frontend’s IP; I could use ‘%’ as a wildcard host instead, but that seems a little heavy handed.
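
For reference, the change amounts to something like this in my.cnf (the address being tempbackend’s IP):

[mysqld]
# listen on the network rather than only on localhost
bind-address = x.x.x.x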

$ mysql -u mythtv -p
mysql> grant all on mythconverg.* to 'mythtv'@'x.x.x.x' identified by 'mythtv';
mysql> flush privileges;

Running mythfrontend on the frontend machine (which automatically starts upon Xorg login due to ~/.config/autostart/mythtv.desktop) gave an error that it couldn’t work with the backend. It then indicated that it had found two backends running different versions of MythTV (0.25 and 0.27). I assumed that they referred respectively to master and tempbackend, so selected the 0.27 version. It worked, and I could play the single recording I had copied across to tempbackend.

Two issues to sort out before actual migration:

  • Mythfrontend on frontend prompts to upgrade the schema of the Music database. Hopefully this won’t be a problem for upgrading the master, since it’s Mythbuntu and tempbackend is FreeBSD – slight package version differences?
  • There is no functioning audio on the frontend.

I checked that audio worked at all with aplay:

$ aplay /usr/share/sounds/alsa/Front_Center.wav

This having worked, I delved into mythfrontend’s Setup -> Audio menu, and found a number of options for the Audio output device. I have successfully used three:

  • ALSA:hw:CARD=PCH,DEV=0 is the 3.5mm headphone jack on the front of the NUC, and has further identification of “HDA Intel PCH, ALC283 Analog”
  • ALSA:plughw:CARD=HDMI,DEV=3 and ALSA:hw:CARD=HDMI,DEV=3

The HDMI options also have text at the bottom of the setup window, which will lead me to more reading later:

HDA Intel HDMI, HDMI 0
Hardware device with all software conversions (SHARP HDMI connected to HDMI)
Device supports up to 2.0 (LPCM)
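
Incidentally, those ALSA device names are the same ones aplay reports; they can be listed directly (output varies by machine):

$ aplay -L | grep -E '^(hw|plughw):'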

MythTV Migration: Database Restore and Config

This is the second post in my MythTV migration and upgrade efforts. The first post is here.

My test setup is a separate machine, so for now I’m following the MythTV wiki migration guide for that. I copied the mythconverg backup file onto the new temporary backend machine, ready to restore. Once done, I needed to change the hostname in the database. I wasn’t sure whether the old hostname was fully qualified, so I searched the backup file for references.

$ cat ~/.mythtv/backuprc
DBBackupDirectory=/home/orange
DBUserName=root
DBPassword=root
$ mysql -u root
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root');
Query OK, 0 rows affected (0.01 sec)
mysql> \q
$ sudo /usr/local/share/mythtv/mythconverg_restore.pl --verbose
...
Successfully restored backup.
$ zgrep -q "hostname\." mythconverg-1299-20150108204433.sql.gz || echo "Not found"
Not found
$ sudo /usr/local/share/mythtv/mythconverg_restore.pl --change_hostname --old_hostname="master" --new_hostname="tempbackend.domain"
Unable to update hostname in table: keybindings
Duplicate entry 'Global-UP-tempbackend.domain' for key 'PRIMARY'

Hopefully that error won’t cause me any problems. Per the instructions, I then ran mythtv-setup which prompted me to upgrade the database schema from version 1299 to 1317. I selected Upgrade to agree, then Upgrade to acknowledge a contingency backup, then watched in the console as it upgraded the schema to 1300, then 1301 etc. Once complete, I saw the usual mythtv-setup screen. In the “1. General” screen I changed the IP address to tempbackend’s for both Local Backend and Master Backend. I then started the backend service, and the frontend to test.

$ sudo service mythbackend start
Starting mythbackend.
$ mythfrontend

As expected the recordings were not there, so I stopped the service, copied a recording file across from master:/var/lib/mythtv/recordings, started the service, and watched it. A PNG thumbnail of the recording was automatically generated, so when I migrate I can either copy the thumbnails across or leave them behind to be regenerated.
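
When I migrate for real, copying the lot will be something like this (assuming the same storage path on tempbackend, which I haven’t confirmed):

$ rsync -av master:/var/lib/mythtv/recordings/ /var/lib/mythtv/recordings/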

I now have a working backend. Next post: Configuring a remote frontend.