Everyone gets the same set of tools
Something that had long puzzled me was the question “Why do some people [in the organisation] have root, and others do not?” It seemed to me that the reason the sysadmins had the root passwords, and everyone else had to raise tickets, was a tooling problem. Giving everyone root would permit anyone in the organisation to fix their own problems, deploy their own software, or, less charitably, cowboy things or be downright naughty. And even where everyone did have root, it usually turned out that only the operations team had the on-call pager.
After the wholesale failure of organisations to understand DevOps, I’m a big fan of the “You build it, you run it” movement. So when George Barnett and I built the Atlassian OnDemand Cloud we made a deliberate decision that everyone would get the same tools, and (modulo permissions and audit logs) be empowered to use the platform to the full extent. There wouldn’t be one set of tools for regular users, and a superset of “power tools” reserved for operators.
To me, “you build it, you run it” means that if you have a problem, we’ll help you learn to use the tools better, not fix your problem for you.
Virtualise the operating system, not the hardware
I remember playing with VMware in 1999 or early 2000. I thought it was an amazing trick, especially as the drivers for my sound card worked way better in virtualised Windows than on the real thing.
Fast forward a few years and I was using VMware to maintain a fleet of foreign language Windows installations for testing. Skip forward a few more and the industry had figured out that virtualisation was a solution to the sprawl of single use Windows servers that cluttered up wiring cupboards and data centres.
Virtualisation was a neat trick taken well beyond the point of a joke, but it did shine a light on the dark corners of systems administration. Back when turning up a server involved purchase orders, waiting for hardware to be shipped, contract negotiations, and trips to the data centre, what was a few hours spent installing the operating system? But when virtual hardware could be conjured out of thin air in seconds, the failure to automate operating system installation and management was suddenly impossible to ignore.
This was the age of Puppet and Chef, which re-ploughed the ground sown a decade earlier by CFEngine. Now sysadmins could configure and manage servers as fast as they could be virtually provisioned. I remember, when I started to use Puppet, imagining what it would have been like to have those tools in previous jobs, where automation meant SVN repositories full of Perl scripts and crontab entries lovingly copy-pasta’d between machines. And so everything was good for a time in the age of configuration as code.
But simulating the entirety of an x86 host on another, just so people can share a computer, is a ridiculous waste. This shouldn’t have been a surprise; FreeBSD Jails and Solaris Zones (rest in peace) had been coughing loudly about this for decades. Bryan Cantrill said it best when he exclaimed that we should “virtualise the operating system, not the hardware”, or, as we’ve come to know them, containers.
The death of the operating system
I remember where I saw Docker for the first time. The product wasn’t even a year old and Docker were carpet-bombing any meetup that would have them to promote it. Canonical were sprinting at a hotel near SFO and I convinced several of my teammates to squeeze into a taxi for the first meetup in San Mateo. What I saw that night shook me to my core. It wasn’t just the speed (oh, the speed, after spending two years waiting for EC2 and slow apt mirrors); it was the clarity of that Californian mindset: what would happen if I checked my entire application deployment into git?
It was clear to me that night that virtual machines were virtualising stuff that people didn’t care about: virtual video cards, virtual floppy drives, virtual RAM that swapped to virtual disks. What people wanted was a virtual kernel, their own pid 1. Orchestration tools like Chef, Puppet, and Juju were trying to orchestrate an entire operating system when what developers really wanted was a way to take a single program, the one that they had written, and deploy it to a server. Filesystems, crontabs, init/upstart/systemd, apt-get, and dpkg-reconfigure weren’t just someone else’s problem; they were irrelevant.
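For concreteness, here is a minimal sketch of what that idea looks like in practice: the entire deployment of a single program described in a Dockerfile that lives in the same git repository as the code. The base image, paths, and port below are my own placeholders, not anything Docker demonstrated that night.

```dockerfile
# A sketch of “the entire application deployment, checked into git”.
# Build the one program the developer actually wrote...
FROM golang:1.9
WORKDIR /go/src/hello
COPY . .
RUN go build -o /hello .

# ...and declare how it runs. No crontabs, no init scripts, no dpkg.
EXPOSE 8080
CMD ["/hello"]
```

From there, docker build and docker push are the whole release process; everything underneath the container image stops being the developer’s concern.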
Anyone who’s endured my rants about product knows my unwavering belief in the Innovator’s Dilemma. Through the window of Christensen’s logic, it was clear that the server orchestration market had been upended in that moment. Squeezed between Docker images at the low end and Netflix’s “everything is an AMI” model at the top end was a large middle ground filled with orchestration tools that expected to be given a running operating system to configure. The Chefs and Puppets and whatnots would be desperately trying to convince the biggest orchestration users, the Netflixes of the world with their CI/CD pipelines that pooped AMIs, to adopt agent-based tooling, while each developer faced with the question “How should I deploy my application?” would default to docker push.
Orchestration as table stakes
If you’re building your own orchestration layer, then you are betting on the wrong horse; I say this as someone who’s built a bespoke container-based PaaS.
Within the next year or two you’ll be able to buy access to a Kubernetes API server at every price point: on your laptop, shared as a VPS, in your own VPC, or even as an appliance. Building on top of the Kubernetes primitives is where the value lies; building on top of that shared tooling gives every development team that is responsible for supporting their own software in production the level playing field they are entitled to.
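To make that concrete, here is a rough sketch of the kind of primitive I mean: a Deployment manifest that asks the API server for three replicas of a container image and leaves scheduling, placement, and restarts to the platform. The names and image are illustrative placeholders, and the API version shown assumes a reasonably recent cluster.

```yaml
# A minimal Deployment sketch; names and image are made up for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                # the platform keeps three copies running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: registry.example.com/hello:1.0.0
        ports:
        - containerPort: 8080
```

The same manifest, fed to kubectl apply, works on a laptop, a shared VPS, or a managed cluster in your own VPC, which is exactly the level playing field I mean.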
Why did I join Heptio? Because I believe that the administration of operating systems has reached its endgame. Kubernetes is going to revolutionise the way software is developed and deployed, and I’m honoured to have been given the opportunity to join the company that is going to make that happen.