Puppet, OpenStack, where to start? Ceph

Ten months into this job, I still feel like an OpenStack novice, but less of one than a couple of months ago. In fact, last week we had what felt like a big automation win: we deployed a Ceph OSD node from bare metal to joining the cluster without any ‘manual’ intervention. That automation needs more, well, automation, but at least it’s repeatable and consistent now. I’ve leapt ahead of myself, though. This is a heavily abbreviated history of how we got here:

  • Luca deployed OpenStack with Fuel. Five short words which actually represent months of detailed work and a fair bit of complaining from his cubicle. Disk partitioning, network bonds, bridges, VLANs, GRE, VXLAN, MTU settings, bugs, confusing or missing or out of date documentation, people in the wrong timezone for proper conversations… oh my. I helped a bit.
  • I created an All-In-One (AIO) deployment with the puppet-openstack-integration (POI) project. I started comparing the (hiera) data between it and the Fuel-deployed stack.
  • Using POI I deployed a compute node almost to the point of working, but we managed to break our dev stack before we got to iron out the final kinks.
  • Luca got us started with MAAS, which proved a little more intuitive than xCAT and, being built by Canonical, works well with Ubuntu. We customised the MAAS deployment process to suit our hardware and needs.
  • Ceph is less tightly integrated into OpenStack than the other components, which made it another good candidate for early deployment tooling, so we got started with Puppet-Ceph. In the end we found spjmurray’s Ceph module more intuitive and reliable, and it handled the new long-term stable release, 12.x Luminous, almost as soon as it came out.

Here’s how we deploy a Ceph OSD node:

  • PXE boot the node:
$ ipmitool -I lanplus -H $IP -U $user -P $pass chassis power off
$ ipmitool -I lanplus -H $IP -U $user -P $pass chassis bootdev pxe
$ ipmitool -I lanplus -H $IP -U $user -P $pass chassis power on
  • Commission the node: Straightforward MAAS step from the documentation.
  • Customise the node: Network bridges, disk partitions, hostname. We have a hundred-line script to do this; the main tools in use are the MAAS CLI and jq (see the sketch after this list).
  • Prepare the curtin (curt installer) script (largely one-off work, although we continue to tweak it). Currently this just installs the Puppet Agent.
  • Deploy the node: Straightforward MAAS step from the documentation.
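
To give a flavour of the customisation script, here’s a minimal sketch. It assumes a MAAS 2.x CLI logged in under a profile called admin; the hostnames are placeholders:

$ # Look up the machine's system_id from the name MAAS assigned it
$ system_id=$(maas admin machines read |
>   jq -r '.[] | select(.hostname=="quick-falcon") | .system_id')
$ # Set the node's real hostname
$ maas admin machine update $system_id hostname=ceph-osd-04
$ # Inspect the interfaces before building bonds and bridges on them
$ maas admin interfaces read $system_id | jq '.[] | {id, name, mac_address}'

The bridge and partition changes follow the same pattern: read JSON out of MAAS, pick the interesting IDs with jq, and post the changes back.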

Once the node is deployed, Puppet and our modules (which in turn use the Ceph module) take over, and we have more OSDs in our cluster!

$ ceph osd df tree
ID  CLASS WEIGHT   REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS TYPE NAME
 -1       51.97385        - 53220G   177G 53043G 0.33 1.00   - root default
...
-21       20.06506        - 20547G 70573M 20478G 0.34 1.01   -     host new-node
 23   hdd  1.82410  1.00000  1867G  6458M  1861G 0.34 1.02  90         osd.23
 24   hdd  1.82410  1.00000  1867G  6433M  1861G 0.34 1.01  95         osd.24
 25   hdd  1.82410  1.00000  1867G  6344M  1861G 0.33 1.00  71         osd.25
 26   hdd  1.82410  1.00000  1867G  6429M  1861G 0.34 1.01  74         osd.26
 27   hdd  1.82410  1.00000  1867G  6394M  1861G 0.33 1.00 103         osd.27
 28   hdd  1.82410  1.00000  1867G  6412M  1861G 0.34 1.01  94         osd.28
 29   hdd  1.82410  1.00000  1867G  6429M  1861G 0.34 1.01 102         osd.29
 30   hdd  1.82410  1.00000  1867G  6559M  1861G 0.34 1.03 104         osd.30
 31   hdd  1.82410  1.00000  1867G  6343M  1861G 0.33 1.00  76         osd.31
 32   hdd  1.82410  1.00000  1867G  6474M  1861G 0.34 1.02  98         osd.32
 33   hdd  1.82410  1.00000  1867G  6293M  1861G 0.33 0.99  69         osd.33
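
While the new OSDs backfill, a couple of standard checks keep me honest (plain ceph CLI, nothing site-specific):

$ ceph -s            # overall health; wait for HEALTH_OK once backfill completes
$ ceph osd tree      # confirm the new host landed where CRUSH should place it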

Enable Java Web Start on CentOS 7

Sometimes when you’re making changes to systems it feels wrong: insecure, hacky, manual, and frustrating. But then you move on, hoping you don’t have to do it again. Well, here’s how I got to use the IPMI (iLO, BMC, iDRAC, etc.) web interface of some old servers from my CentOS 7 server:

Access the IPMI web interface

ws $ ssh -X server
server $ firefox $some_ip

Log in, browse to the ‘remote control’ section (they’re all pretty similar), and click Launch. Firefox pops up a prompt asking what application I would like to use to open jviewer.jnlp.

Install and configure Java

I found a guide which says to install and configure Java; java-1.8.0-openjdk was already installed out of the box, so it was just a matter of configuring it:

server # update-alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64/jre/bin/java)
 + 2           /usr/java/jre1.8.0_121/bin/java

Enter to keep the current selection[+], or type selection number: 2
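
(For scripting this later, the same choice can be made non-interactively, using the path from the menu above:)

server # update-alternatives --set java /usr/java/jre1.8.0_121/bin/java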

Configure Firefox to launch .jnlp files with javaws

Firefox doesn’t know how to run javaws, so it needs to be told, via these instructions:

server $ vim .mozilla/firefox/vgenq8rj.default/mimeTypes.rdf
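
From memory, the entries those instructions add look roughly like the following; attribute details vary between Firefox versions, so treat this as a sketch rather than gospel (the javaws path matches the JRE selected above):

<RDF:Description RDF:about="urn:mimetype:handler:application/x-java-jnlp-file"
                 NC:alwaysAsk="false" NC:saveToDisk="false"
                 NC:useSystemDefault="false" NC:handleInternal="false">
  <NC:externalApplication RDF:resource="urn:mimetype:externalApplication:application/x-java-jnlp-file"/>
</RDF:Description>
<RDF:Description RDF:about="urn:mimetype:externalApplication:application/x-java-jnlp-file"
                 NC:path="/usr/java/jre1.8.0_121/bin/javaws"/>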

Mangle Java security settings

Java (rightly) complains about security settings. It’s only for internal boxes on a particular network, but BeyondCorp thinking still makes me cringe. Open the Java Control Panel:

ws $ ssh -X server
server $ /usr/java/jre1.8.0_121/bin/ControlPanel

In Security, Exception Site List, I added the URLs of the servers I need to manage. It works. I feel dirty. I suspect I could install an older version of Java to skip this step, and feel just a bit dirtier.
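
As an aside, the Exception Site List is just a text file in the user’s Java deployment directory, so it can also be managed from the shell; the hostnames below are placeholders:

server $ mkdir -p ~/.java/deployment/security
server $ cat >> ~/.java/deployment/security/exception.sites <<'EOF'
https://old-ipmi-1.internal
https://old-ipmi-2.internal
EOF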

MythTV Migration: Remote Frontend

This is the third post in my MythTV migration and upgrade efforts. Here are links to the first and second posts.

The mythfrontend wiki page tells me I need to make sure the MySQL server will accept connections from the remote frontend machine.

orange@frontend:~$ mysql -u mythtv -h tempbackend.domain
ERROR 1130 (HY000): Host 'x.x.x.x' is not allowed to connect to this MySQL server
$ ls /usr/local/share/mysql/*.cnf
/usr/local/share/mysql/my-huge.cnf		/usr/local/share/mysql/my-medium.cnf
/usr/local/share/mysql/my-innodb-heavy-4G.cnf	/usr/local/share/mysql/my-small.cnf
/usr/local/share/mysql/my-large.cnf
$ sudo cp /usr/local/share/mysql/my-large.cnf /usr/local/etc/my.cnf
$ sudo vim /usr/local/etc/my.cnf

In my.cnf, I set bind-address to tempbackend’s IP, to enable networking. The relevant lines look something like this:
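
[mysqld]
# Listen on the network interface, not just the local socket
bind-address = x.x.x.x    # tempbackend's IP

I also need to allow access to the database. I could use ‘%’ for a wildcard instead of frontend’s IP, but that seems a little heavy-handed: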

$ mysql -u mythtv -p
mysql> grant all on mythconverg.* to 'mythtv'@'x.x.x.x' identified by 'mythtv';
mysql> flush privileges;

Running mythfrontend on the frontend machine (which automatically starts upon Xorg login due to ~/.config/autostart/mythtv.desktop) gave an error that it couldn’t work with the backend. It then indicated that it had found two backends running different versions of MythTV (0.25 and 0.27). I assumed that they referred respectively to master and tempbackend, so selected the 0.27 version. It worked, and I could play the single recording I had copied across to tempbackend.

Two issues to sort out before actual migration:

  • Mythfrontend on frontend prompts to upgrade the schema of the Music database. Hopefully this won’t be a problem when upgrading the master; since master is Mythbuntu and tempbackend is FreeBSD, perhaps it’s down to slight package version differences?
  • There is no functioning audio on the frontend.

I checked that audio worked at all with aplay:

$ aplay /usr/share/sounds/alsa/Front_Center.wav

This having worked, I delved into mythfrontend’s Setup -> Audio menu, and found a number of options for the Audio output device. I have successfully used three:

  • ALSA:hw:CARD=PCH,DEV=0 is the 3.5mm headphone jack on the front of the NUC, and has further identification of “HDA Intel PCH, ALC283 Analog”
  • ALSA:plughw:CARD=HDMI,DEV=3 and ALSA:hw:CARD=HDMI,DEV=3

The HDMI options also have text at the bottom of the setup window, which will lead me to more reading later:

HDA Intel HDMI, HDMI 0
Hardware device with all software conversions (SHARP HDMI connected to HDMI)
Device supports up to 2.0 (LPCM)
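
Those device names come straight from ALSA rather than MythTV; aplay can list the candidates (output differs per machine):

$ # List ALSA PCMs; the hw:/plughw: names match mythfrontend's menu entries
$ aplay -L | grep -E -A1 '^(plug)?hw:CARD='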

MythTV Migration: Database Restore and Config

This is the second post in my MythTV migration and upgrade efforts. The first post is here.

My test setup is a separate machine, so for now I’m following the MythTV wiki migration guide for that. I copied the mythconverg backup file onto the new temporary backend machine, ready to restore. Once done, I needed to change the hostname in the database. I wasn’t sure whether the old hostname was fully qualified, so I searched the backup file for references.

$ cat ~/.mythtv/backuprc
DBBackupDirectory=/home/orange
DBUserName=root
DBPassword=root
$ mysql -u root
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root');
Query OK, 0 rows affected (0.01 sec)
mysql> \q
$ sudo /usr/local/share/mythtv/mythconverg_restore.pl --verbose
...
Successfully restored backup.
$ zgrep -q "hostname\." mythconverg-1299-20150108204433.sql.gz || echo "Not found"
Not found
$ sudo /usr/local/share/mythtv/mythconverg_restore.pl --change_hostname --old_hostname="master" --new_hostname="tempbackend.domain"
Unable to update hostname in table: keybindings
Duplicate entry 'Global-UP-tempbackend.domain' for key 'PRIMARY'
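
That duplicate makes sense in hindsight: keybindings is keyed on context, action and hostname, so presumably some rows for the new hostname already existed. The clash is easy to inspect with a read-only query (table and column names are from the mythconverg schema):

mysql> SELECT context, action, hostname FROM mythconverg.keybindings
    -> WHERE context = 'Global' AND action = 'UP';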

Hopefully that error won’t cause me any problems. Per the instructions, I then ran mythtv-setup, which prompted me to upgrade the database schema from version 1299 to 1317. I selected Upgrade to agree, then Upgrade again to acknowledge a contingency backup, then watched in the console as it stepped the schema through 1300, 1301, and so on. Once complete, I saw the usual mythtv-setup screen. In the “1. General” screen I changed the IP address to tempbackend’s for both Local Backend and Master Backend. I then started the backend service, and the frontend to test.

$ sudo service mythbackend start
Starting mythbackend.
$ mythfrontend

As expected the recordings were not there, so I stopped the service, copied a recording file across from master:/var/lib/mythtv/recordings, started the service, and watched the recording. Its PNG thumbnail was automatically regenerated, so I can either copy the thumbnails across with the recordings or leave them behind.
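
For the record, copying the test recording was a single scp; the filename is a placeholder (MythTV names recordings from channel ID and start time):

$ sudo service mythbackend stop
$ scp master:/var/lib/mythtv/recordings/1021_20150108210000.mpg /var/lib/mythtv/recordings/
$ sudo service mythbackend start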

I now have a working backend. Next post: Configuring a remote frontend.

MythTV Migration: Machine Prep and Database Backup

I’m deploying a MythTV frontend, and the currently available 0.27 expects a database schema 18 versions newer than the one on my Mythbuntu 12.04 server, which runs MythTV 0.25 as a combined master backend and frontend. I’d quite like to deploy FreeBSD on one or both machines, if possible.

Getting Xorg working in FreeBSD was simple on my Intel NUC with the vesa driver automatically detected. The fine folks at #intel-gfx on freenode IRC told me that the Intel chipset (8086:0a16) was not yet supported by FreeBSD, and that support was needed for the GT2 portion of the intel driver. Sadly Xorg crashes hard whenever I leave its display – either by switching back to console with Ctrl-Alt-[0-7], or quitting Xorg entirely. Requiring a forced power-off in such situations clearly doesn’t bode well for reliability, and I’ve had no luck – despite a bit of assistance – from the BUGS, xorg or freebsd-questions forums.

So, back to Mythbuntu on the NUC, 14.04 this time. Xorg works fine, and from my reading of /var/log/Xorg.0.log it appears to be using the intel driver, not vesa.

$ grep vesa Xorg.0.log | tail -1
[    27.466] (II) Unloading vesa
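
And the positive confirmation; the intel driver module is intel_drv.so, so a quick grep shows it loading:

$ grep intel_drv Xorg.0.log | head -1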

Because I don’t want to upgrade the database schema (and probably much more) on my production Mythbuntu 12.04 machine until I’ve tested the procedure, I’ve built a temporary master backend machine running FreeBSD 10.1, Xorg 7.7 (autoconfig, vesa driver, no crashes this time), MySQL 5.5 and MythTV 0.27. Mythbuntu packages things up and makes them a bit easier, but I got to learn about some installation steps by doing it by hand:

$ mysql -u root < /usr/local/share/mythtv/database/mc.sql

To get the database from the old backend to the temporary one, the Mythbuntu upgrading page sent me to the MythTV database backup and restore page, which taught me that there are tools to help, and for that I’m very grateful. Backup was easy, and quick:

$ echo "DBBackupDirectory=/home/mythtv" > ~/.mythtv/backuprc
$ sudo /usr/share/mythtv/mythconverg_backup.pl
$ ls -l ~mythtv
total 14360
-rw-r--r-- 1 root root 14701471 Jan 8 20:44 mythconverg-1299-20150108204433.sql.gz

I now have a database backup, and a temporary backend machine with bare configuration. I’m ready for a test migration.