The saga of the server replacement

A few weeks ago my main server lost a disk out of its raid array. No big deal, I had a (larger than needed) spare around to replace it with. However, I got to thinking it might be about time to get some new disks and do a fresh install. The old disks were 1.5TBx4 in an encrypted raid5 that I had installed Fedora on back in December 2008 (Fedora 10) and upgraded since then. Moving to a fresh install would let me move to bigger/newer disks, add lvm (for some reason I didn't use lvm on the f10 install), move to newer/better disk encryption, and also just get rid of a bunch of cruft that had piled up over the years.

Side note: dear WordPress, it's NOT AT ALL NICE when I type out a bunch of text for a post, hit "save draft", and you DELETE a bunch of it. (Redoing the rest of this post for the second time, thanks WordPress.)

Looking at drive prices, 3TB drives seemed to be the good price point, so I picked up 4 of those. I didn't have much chance to mess with them last week as I was out at Fedora's main datacenter doing a bunch of work, but this weekend seemed a good time to get things done. I have a server chassis that's (mostly) identical to my main server box that I use for test machines, so it was easy to pull its drives, put the new ones in, and boot the Fedora 20 netinstall iso from usb.

I then ran into two anaconda issues. First, I hit what anaconda said was a duplicate of bug 1008137. Poking around, I think this was because all 4 drives are gpt and I did /boot as raid1, so it wanted to install grub2 to the mbr of every drive, but there was only a 1MB bios boot partition on sda. I couldn't figure out any way to get anaconda to make more, so I went and manually made one on each drive (roughly what that looks like is sketched below). That seemed to get me past it.

Then I hit a duplicate of bug 1040691. This may have been my fault, as I forgot that it really matters which "encrypt this" checkbox you check: there's one when you go into custom partitioning, one on each mount point, and one in the lvm/raid popup. I wanted only the last of those checked (since I want the entire pv encrypted).

With the machine installed, it was time to rsync data and configuration over. Most of the services that run on the bare server were easy to move: squid, unbound, nsd, mediatomb, dhcpd, radvd. One was a sticking point: long ago I got some slimserver devices, which use a perl based free media server. They in turn got bought by Logitech. Logitech isn't doing much with the server, but there was an open project still developing on it until about last month, when they removed all their rpms and went away. ;( So, moving to this new server, I think I'm going to set up a beagle bone black with mpd and call it good.

After a few days of rsync, my backups, media, and other data were copied over. I decided to try libvirt's live migration on my main virtual machine to cut down on outage time. It took a bit of tweaking to get the new server set up in a way that libvirt was happy to migrate my main guest from the old server: I had to set up a bridged network named the same as the one on the old server, make the hostname NOT the same as the old server's, and make a link from the old server's storage path to the new one. The migration then started, but for some reason it didn't show any progress. Many hours later it did finish. I also took the chance to resize the guest some (larger).
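In case it's useful, here's roughly what the bios boot partition fix looks like from a shell. This is just a sketch: the drive name and partition number are examples, and gdisk would do the same job.

    # on each gpt drive that will hold part of the /boot raid1, add a tiny
    # partition and flag it bios_grub so grub2 has somewhere to embed itself
    parted -s /dev/sdb mkpart biosboot 1MiB 2MiB
    parted -s /dev/sdb set 1 bios_grub on    # "1" = whatever number the new partition got

And the live migration was along these lines (again just a sketch; the guest name, hostname, and paths are placeholders):

    # on the new server: make the guest's disk path from the old server
    # resolve to where the images actually live now (paths made up)
    ln -s /srv/vms /vms

    # on the old server: push the running guest over ssh;
    # adding --verbose should make virsh print migration progress
    virsh migrate --live --verbose --persistent --undefinesource \
        myguest qemu+ssh://newserver.example.com/system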
On the new server I also moved from the old network service to NetworkManager, since NM now handles bridges nicely. I was happy to see it took only some minor tweaking of the ifcfg files and NM brought everything up just as I wanted. I did run into a small snag when I forgot to enable forwarding on the new server, but that was easily fixed (the ifcfg and sysctl bits are sketched at the end of this post). The swap of the new drives/install into the old server hardware was pretty simple (hot swap bays for the win!). Then, only a few tweaks and everything was up and running on the new install. It was a bit of effort, but it's nice having the fresh install up and running.
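For reference (as mentioned above), the bridge ifcfg files and the forwarding fix ended up looking roughly like this. It's only a sketch: the interface name, addresses, and file names are placeholders, and your setup will differ.

    # /etc/sysconfig/network-scripts/ifcfg-br0 -- the bridge itself
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.2
    PREFIX=24
    GATEWAY=192.168.1.1

    # /etc/sysconfig/network-scripts/ifcfg-em1 -- the physical nic, enslaved to the bridge
    DEVICE=em1
    TYPE=Ethernet
    ONBOOT=yes
    BRIDGE=br0

    # the forwarding snag: turn it on persistently and right now
    echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/forwarding.conf
    echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.d/forwarding.conf
    sysctl -p /etc/sysctl.d/forwarding.conf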