This post was originally written in July 2014 and then lounged in the dusty, dark recesses of my disk because... Windows. The hindsight of a whole year of using the Packer/Vagrant/Chef workflow led to minor updates, which speaks for the durability of the approach.
Automating Windows installations is hard!
Maybe hard is not the right word. 'Unnecessarily complicated', 'obfuscated' and 'frustrating' come to mind1. Sometimes, for certain not-so-stellarly engineered solutions, the word 'impossible' creeps around the corner.
And yet things have never been better for anyone trying to solve the riddle. All the tools are there, and while looking at the *ix solutions makes you wish for a lot of things (like a decent shell, or no registry, or a standardized way to pass installer parameters, or a package repository, or... you get my drift), 'impossible' has had to hide in some really obscure corners.
I can't even begin to count the times I have installed Windows from scratch. It became tedious very, very fast, but there was no relief in sight for a long time.
With the advent of desktop virtualization and laptops that can accommodate a couple of Windows VMs running concurrently, I follow the practice of keeping one pristine Windows installation2 at hand, along with a bootstrap package of command files that install the latest ChefDK, an SVN client and Git for Windows.
The workflow as it stands:
- Copy the VM
- Boot the VM and perform any outstanding updates (this includes Windows and the virtualization tools & drivers)
- Copy the bootstrap package over and run the command file
- Checkout/clone the repository with all my Windows recipes
- Run Chef to provision the VM
Not enough (?)
VMs are bulky. With a dynamically growing disk, a vanilla Windows VM weighs in at about 20GB. Add to that the software for toolchains etc. and it grows very large indeed (my current C/C++ embedded environment weighs in at 50GB).
You can't expect everyone to download it every time it changes, so the case for immutable infrastructure when creating development environments is not very strong. Also, developers tend to have a lot of half-finished and ongoing work on disk, which means you can't really throw away the VM when updating; you have to do an incremental update for user-friendliness' sake.
A properly configured Windows base box is smaller (about 5GB) and we can skip the time it takes to create it from an .iso, because, well, we don't do it very often.
Create the base box using Packer, add a Vagrantfile to the repository, host the base box in a known location, and we have the capability to recreate the development environment.
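As a sketch, a minimal Vagrantfile for such a base box might look like the following. The box name, the URL and the resource sizes are placeholders for whatever you actually host and need:

```ruby
# Vagrantfile - a minimal sketch; box name and URL are placeholders
Vagrant.configure("2") do |config|
  config.vm.box     = "windows-base"
  config.vm.box_url = "http://example.com/boxes/windows-base.box"

  # Windows guests talk WinRM, not SSH (Vagrant 1.6+)
  config.vm.communicator = "winrm"
  config.vm.guest        = :windows

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus   = 2
  end
end
```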
How much goes into the base box in terms of software and configuration is your choice but vanilla installations work best, especially with Windows (you get the benefit of having a pristine registry database).
Keep in mind that you have to update the base box regularly to avoid the massive Windows Update penalty when creating new VM instances. Automating the creation process with Packer is thankfully very easy and can be delegated to a CI/cron job.
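The Packer side can be a template as small as this one. This is a sketch only: the ISO URL, checksum and WinRM credentials are assumptions, and the Autounattend.xml that answers the installer prompts is not shown:

```json
{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "http://example.com/isos/windows.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "<checksum of your ISO>",
    "communicator": "winrm",
    "winrm_username": "vagrant",
    "winrm_password": "vagrant",
    "floppy_files": ["Autounattend.xml"],
    "shutdown_command": "shutdown /s /t 10 /d p:4:1"
  }],
  "post-processors": ["vagrant"]
}
```

The CI/cron job then boils down to running `packer build` on this template on a schedule and publishing the resulting .box file.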
Everything else is done with Chef and a bit of PowerShell (to bootstrap Chef from Vagrant). To help with that we have a cookbook by the name of windev.
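To give a flavour of what such a cookbook contains, here is a hypothetical recipe sketch (not the actual windev code; the package name, installer URL and registry key are placeholders):

```ruby
# recipes/git.rb - hypothetical sketch of a windev-style recipe.
# windows_package is provided by the windows community cookbook.
windows_package 'Git version 2.x' do
  source 'http://example.com/installers/Git-2.x-64-bit.exe'
  installer_type :inno
  action :install
end

# Registry tweaks are the kind of thing Chef handles well on Windows
registry_key 'HKEY_LOCAL_MACHINE\SOFTWARE\MyCompany\Dev' do
  values [{ name: 'Provisioned', type: :string, data: 'true' }]
  action :create
end
```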
So the workflow becomes:

1. vagrant up
2. Run Chef to provision the VM
3. Go back to 2. when necessary
We still haven't solved the performance problem. It takes a long time to start a VM and the traditional desktop hypervisors hog a lot of resources. Docker is not a viable solution because Windows, so we have a case of serious OS envy.
1 After years of deliberation I am convinced that the Windows Registry is among the top 5 engineering solutions that in hindsight revealed themselves as colossal PITA mistakes. It may even occupy the top spot, but then, I'm biased.
2 Meaning just a vanilla installation with no additional software other than the latest Microsoft patches.