Evernote’s Operations department has spent a great deal of effort automating our server installs. I wanted to spend some time sharing this work and provide some hints if anyone else is interested in doing the same.
If you have read our architecture overview posts, you know that each shard can only store a finite number of users, which means operations is deploying new servers quite frequently. To make our installs reproducible, predictable, and, quite frankly, easy on us, we had to design a robust system to build servers automatically. We have been striving for a “one click” install: click a button (or two) and a shard comes out ready to use. Our process is to rack the server, set it up for out-of-band access, and then kick off the install remotely by rebooting it into a network install. The rest of the build is entirely automatic.
Our installer is driven by a web-based system that sets up the build environment for us. This, coupled with some nifty shell scripting and lots of Puppet code, has allowed us to achieve our goals. We rely on Debian’s preseeding feature to install the operating system automatically, and we make extensive use of Puppet for configuration and package management. All server setup happens in a special “embryo” VLAN that is isolated from the production VLANs. The production VLANs have no ability to obtain a DHCP lease or PXE boot, which protects production systems from accidentally netbooting and being reformatted.
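One way to picture the embryo-VLAN restriction is at the DHCP server itself: only the embryo subnet is given a scope with PXE options, while production subnets get no scope at all. The addresses and names below are made up for illustration, not Evernote's actual values.

```
# dhcpd.conf fragment (illustrative): only the embryo VLAN can PXE boot.
subnet 10.99.0.0 netmask 255.255.255.0 {
    range 10.99.0.100 10.99.0.200;
    next-server 10.99.0.5;      # TFTP server holding pxelinux.0
    filename "pxelinux.0";
}
# No subnet declaration exists for the production VLANs, so hosts
# there can never obtain a lease or netboot instructions.
```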
The technology behind all of this is fairly simple; however, we discovered that the Debian automated install was a little on the tricky side. Let me walk through a typical build, and then we can dive into what some of this looks like on the back end.
- Servers are racked, RAID setup, and remote access is configured.
- We go to our web-based server build application and enter some information about the machine: namely its MAC address, the IP address it will have, and the role of the server. We build several different types of machines quite frequently, but the shard class is by far the largest population.
- The web application writes a boot file on our PXE boot server. (PXELINUX, the PXE bootloader we use, is maintained as part of the SYSLINUX project.) This boot file tells the server which operating system to install and which automated build configuration file to use. In the Debian world, the recipe for an automated operating system install is called a Preseed file.
- The web application also creates a small manifest file, which the server being built will retrieve to set the hostname and networking information.
- The server is then rebooted and told to do a network boot. It obtains a DHCP lease, PXE information, kernel parameters, and the Preseed file to use.
- The system goes into a Debian install and automatically installs the OS. The Debian preseed file is basically a response file for all of the installer options.
- After the disks have been formatted and the operating system installed, we hook in a simple shell script that fetches a post-install script to run during first boot. That script also sends a GET request back to our provisioning system, which tells the DHCP/PXE boot server to wipe the previously created netboot file and replace it with a “boot off disk” file instead.
- The server reboots, requests a network boot, and falls back to a disk boot based on the above file.
- After the OS comes up, we kick off the post-install scripts. These grab the manifest file from the build server via HTTP and then set the networking, hostname, DNS resolvers, etc. If the system will be a shard, the scripts set up DRBD replication and lay down the Xen guests.
- The Xen guests are created through a golden image, a canned OS stored in a tarball, which is laid down on the correct file systems.
- The script then sets the networking correctly on the Xen guests based on DNS lookups and what the hypervisor’s hostname is set to. We use a very logical naming scheme here so it is easy to determine the guest names.
- After all of this is done, we run Puppet on the Xen dom0 and the Xen domUs. Puppet handles the majority of software installs and configuration management for us. The network-based installer lays down a very minimal operating system, which we then augment with Puppet based on what is defined for that machine’s role.
- After one more reboot to load any kernel updates, we perform extensive QA. We then move the machine into our production VLAN, add it to the load balancer, and start taking on new users.
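To make the manifest and first-boot steps above more concrete, here is a minimal sketch of how such a script might consume the manifest and report back. The manifest format (simple key=value lines), the URLs, and all names here are assumptions for illustration; the post does not show Evernote's actual scripts.

```shell
#!/bin/sh
# First-boot sketch (hypothetical): read a key=value manifest fetched
# from the build server, apply it, then tell the provisioning system
# to swap our PXE entry for a "boot off disk" file.

# Extract one key's value from a key=value manifest file.
manifest_get() {
    key="$1"; file="$2"
    sed -n "s/^${key}=//p" "$file"
}

apply_manifest() {
    file="$1"
    hostname "$(manifest_get hostname "$file")"
    # ...similarly write /etc/network/interfaces and /etc/resolv.conf
    # from the address/netmask/gateway/dns keys...
}

# The network calls themselves, shown commented out:
# wget -q -O /tmp/manifest "http://repo/manifest/$(cat /sys/class/net/eth0/address)"
# apply_manifest /tmp/manifest
# wget -q -O /dev/null "http://repo/provision/done?host=$(hostname)"
```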
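The guest-naming step can also be sketched: with a logical scheme, the domU names fall straight out of the dom0's hostname. The "-hv" suffix and the app/db guest roles below are invented for illustration; the post only says the scheme makes guest names easy to derive.

```shell
#!/bin/sh
# Hypothetical naming scheme: hypervisor "shard-100-hv" hosts guests
# "shard-100-app" and "shard-100-db".
guest_names() {
    base="${1%-hv}"                 # strip the hypervisor suffix
    printf '%s-app %s-db\n' "$base" "$base"
}
```

A helper like this lets the post-install stage configure each Xen guest's hostname and DNS entries without any per-machine input.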
That is the build process for shards and all other types of machines. The post-install pieces differ slightly by machine class, however; for example, we don’t want the Xen kernel on a non-Xen machine.
One of the more challenging parts of designing this process was getting the Debian Preseed file correct; the disk layout section was by far the hardest. We kept finding different ways it could fail and had to keep adding options. It now covers almost every way of wiping out an existing volume and proceeding. The disk layout itself is very simple: we create one partition mounted as root, with no swap. Please note that if you use this file, it will completely blow away your disks without any warning at all.
So for your reading pleasure, here is the Preseed file Evernote uses. Feel free to use this as a template if you are attempting something similar.
d-i mirror/http/hostname string repo
d-i mirror/http/directory string /debian/
d-i mirror/suite string squeeze
d-i apt-setup/uri_type select d-i
d-i apt-setup/hostname string repo
d-i apt-setup/directory string /debian/
d-i apt-setup/another boolean false
d-i apt-setup/security-updates boolean false
d-i finish-install/reboot_in_progress note
d-i prebaseconfig/reboot_in_progress note
d-i apt-setup/non-free boolean true
d-i apt-setup/contrib boolean true
d-i apt-setup/security_host string repo/security
d-i anna/no_kernel_modules boolean true
d-i apt-setup/local0/repository string http://repo/ops/ squeeze main
d-i debian-installer/allow_unauthenticated string true
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-auto/purge_lvm_from_device boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-auto/choose_recipe select All files in one partition (recommended for new users)
d-i partman/confirm_write_new_label boolean true
d-i partman/choose_partition select Finish partitioning and write changes to disk
d-i partman/confirm boolean true
d-i console-tools/archs string skip-config
d-i debian-installer/locale string en_US
d-i console-keymaps-at/keymap select us
d-i languagechooser/language-name-fb select English
d-i debian-installer/locale select en_US.UTF-8
d-i tzconfig/gmt boolean true
d-i tzconfig/choose_country_zone/US select Pacific
d-i tzconfig/choose_country_zone_single boolean true
d-i time/zone select US/Pacific
d-i clock-setup/utc boolean true
d-i kbd-chooser/method select American English
d-i mirror/country string manual
d-i clock-setup/ntp boolean false
d-i passwd/root-password-crypted passwd <set your own password here>
d-i passwd/user-fullname string Admin
d-i passwd/username string admin
d-i passwd/user-password-crypted passwd <set a non root user password here>
tasksel tasksel/first multiselect standard
d-i pkgsel/include string ssh puppet ifenslave-2.6 bridge-utils ethtool xfsprogs ntpdate curl telnet chkconfig vlan linux-image-3.2.10
d-i preseed/late_command string /usr/bin/wget -O /tmp/post.sh http://repo/preseed/post.sh ; chmod +x /tmp/post.sh ; /tmp/post.sh
This is stored in a file called shardhead.seed, which in turn is retrieved during the PXE boot process. The block of code on the PXE boot server looks like this:
LABEL 0
  MENU LABEL Shard Head Node
  kernel debian/squeeze3.2/linux
  append initrd=debian/squeeze3.2/initrd.gz -- auto=true priority=critical url=http://repo/preseed/shardhead.seed interface=eth0 locale=en_US console-keymaps-at/keymap=us debian-installer/locale=en_US.UTF-8 hostname=debian domain=evernote.com
If you are familiar with PXE boot configuration, this block will look fairly standard.
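The “boot off disk” file that replaces the install entry isn’t shown above, but a minimal PXELINUX fallback entry (standard SYSLINUX syntax) looks like this:

```
DEFAULT local
LABEL local
  LOCALBOOT 0
```

`LOCALBOOT 0` tells PXELINUX to hand control back to the local disk’s boot loader, which is the fallback behavior described in the build steps.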
Of course there is a bit of backend technology in place to make this all work. For example: DNS, DHCP, DHCP forwarding, functioning PXE and TFTP environment, lots of Puppet work, and our own Debian repo that we have created. If you go through the preseed file you will notice we use the name “repo” for ours. If you are trying to build against a public repo all of that will need to be adjusted for your use.
In closing, we have been reaping the benefits of this install system by being able to quickly deploy new systems as demand requires. It also ensures that all machines are identical in terms of installed packages, disk layout, and configuration, because the entire process is automatic.
We can now build a new shard system in around 10 minutes using this process!