Since the dawn of the internet, automation has been a buzzword and an aspiration for companies, which constantly seek new ways to reduce human micro-management of IT by allowing software to perform tasks automatically. This has led to the birth of automation’s powerful little sister, orchestration. Orchestration is the automation of not just isolated tasks, but whole networks, computer systems, and workflows. If automation is a self-driving car, orchestration is a self-regulating city.
It’s quickly become mandatory for thousands of companies that need advanced automation and coordination to handle the massive server loads and high-speed demands of the global internet economy. Needless to say, the market has boomed over the past few years, and several companies have devoted themselves to meeting this need, each with their own take on orchestration technology. Let’s take a deeper look at how this technology works and some key differences in the top product offerings.
In the past, all IT infrastructure had to be managed manually. Servers had to be set up and configured by hand, leading to high overhead and long, drawn-out setup periods. It made companies sluggish and unable to adapt to changes efficiently, as any updates to existing infrastructure had to be handled manually as well. And of course, if the company was big enough, with a massive workforce of IT technicians, small instances of human error would pile up, and inconsistencies in server maintenance would lead to unique aberrations and malfunctions, which would then have to be standardized manually…
Then along came infrastructure as code.
Infrastructure as code is exactly what it says on the label. It’s high-level code that provisions IT infrastructure automatically. We’ll get into some of the different ways it can work in a moment, but basically, it’s the secret sauce that makes orchestration technology possible. Since it’s all written out in code, it’s easily transportable and duplicable, enabling the provisioning of a hundred identical servers or applications in the time it would have taken to manually implement a single one. Infrastructure as code, and the orchestration programs that wield it, makes running a digital business faster, cheaper, more consistent, and more secure. (Now in technicolor!)
But not all orchestration technologies are created equal. Let’s dive into some of the leading brands and see what makes them tick.
Puppet is the oldest of the big-name technologies, hearkening all the way back to 2005. It’s an established name with a proven track record, working with clients like Google, Reddit, and Dell. It focuses on configuration management, meaning it’s designed to handle the installation and management of software on pre-existing servers. It uses an agent-and-master setup, two means of controlling information flow that often go hand in hand. Master-dependent software links everything to a centralized master server. Infrastructure changes are then made hierarchically: the master server is updated first, and it then communicates the system changes to all of its dependent servers. This centrality can be a nice perk, allowing the admin to view and manage everything in one location. But it comes with some drawbacks as well, namely the extra infrastructure and maintenance needed to run the master server, and security weaknesses, since the communication ports between master and dependent servers create another entry point that attackers can target.
Part and parcel with its master server, Puppet runs a kind of software called an agent on each of its dependent servers. The agent reconfigures infrastructure in the background of a server, executing the updates that it pulls from the master server. Fans of the system feel that it speeds the updating process up a little, but it can exacerbate some of the problems that a master server already brings to the table, adding security weaknesses and additional maintenance costs. The addition of this extra software makes it a little bulkier, and Puppet can be a bit of a pain to install compared to some of the newer products.
Puppet’s age gives it stability, but not necessarily agility, and it doesn’t always leap on top of bug fixes with the same speed that you see from younger companies in the field. If all this doesn’t sound the most user-friendly to you, there is one area where Puppet makes things easy. Puppet is set up to foster declarative coding, a style where the programmer inputs their desired end result, and the software figures out the steps needed to make it happen. This is a much simpler process than manually coding the necessary steps to build the desired infrastructure, and it allows you to focus more on what you’re trying to achieve while Puppet handles the how.
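To make the declarative idea concrete, here’s a minimal sketch of what a Puppet manifest looks like (the package and service names are illustrative): you declare the end state you want, and Puppet works out the steps to get there.

```puppet
# Declare the desired end state; Puppet figures out how to reach it.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
```

Note that there are no installation commands anywhere in this snippet; the manifest only says what should be true, and Puppet fills in the how.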
Chef emerged in 2009 and shares a lot in common with Puppet. It’s another tool that focuses on the configuration management aspect of orchestration, and it’s similarly built to take orders from a master server and run agent software on your dependent servers. This means it shares some of Puppet’s weaknesses, including initial setup difficulties, increased maintenance costs, and security weaknesses.
The biggest difference between Chef and Puppet is Chef’s approach to coding style. Chef uses imperative code, also called procedural code. This is a more old-school approach than Puppet’s declarative style, requiring the programmer to write out a step-by-step plan of how they want to build their infrastructure, rather than letting Chef handle that part of the process for them. This makes Chef a little less user-friendly than Puppet and requires more time and effort on the part of the programmer. But if you’re a master coder who likes to break the mold, Chef’s code may offer you greater flexibility and control over your vision than you’d find with Puppet.
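By way of contrast, here’s a sketch of a Chef recipe (again, the resource names are illustrative). Recipes are written in Ruby and applied from top to bottom, so the programmer controls the sequence of steps themselves:

```ruby
# A Chef recipe: Ruby DSL, applied in the order written.
package 'nginx' do
  action :install
end

service 'nginx' do
  action [:enable, :start]
end
```

The ordering matters here in a way it doesn’t in the declarative style: swap the two blocks and you’d be trying to start a service before its package exists.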
Bigger than both Chef and Puppet, Ansible popped up in 2012 and quickly began to overtake its predecessors, drawing significant clients like DigitalOcean and 9Gag. Its popularity may stem from the fact that Ansible feels easy to use right out of the gate. Ansible is both masterless and agentless, making it a much sleeker package to install and cutting down on maintenance costs and security risks. It ditched the master server model by having the controlling machine communicate with your servers directly over SSH. This is a more secure model, though some users find the SSH connection can run a tad slow compared to agented servers. Ansible is also easier to code in. Its configuration files are written in YAML (and the tool itself in Python), a more intuitive format than Chef’s Ruby or Puppet’s custom coding language. Ansible does use imperative code, though, so while the language is intuitive, you still have to put in the work to code out the steps of your plan yourself.
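A minimal Ansible playbook looks something like this (the `webservers` host group is an illustrative assumption). Tasks run in the order listed, pushed to each machine over SSH with no agent installed:

```yaml
# A minimal Ansible playbook: tasks run in order, over plain SSH.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Start and enable nginx
      service:
        name: nginx
        state: started
        enabled: true
```

The YAML reads almost like a checklist, which is a big part of why newcomers find Ansible approachable.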
Saltstack is a 2011 contemporary of Ansible, and the two are often compared in much the same way that Chef and Puppet get lumped together. Like Ansible, it uses YAML, so it’s another go-to for people who get turned around by some of the more complex coding languages. Salt uses an agent-and-master network, though it has the option to run masterless using Ansible-style SSH connections. You probably wouldn’t want to do that, though, since 1) Ansible has spent a long time perfecting that system, and 2)…
Saltstack’s main draw is its iterating speed, which it gets from its efficient deployment of its Salt Minion agents. (If you’re here for the orchestration tool with the cutest agent name, look no further.) It comes with the usual agent-model concerns: a bulkier installation package and an added point of attack. Saltstack uses a pycrypto package that provides good security: not as airtight as Ansible’s SSH connections, but still decent. It supports a declarative style, which, along with its intuitive configuration language, helps to make things easy for the programmer.
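A Salt state file reads much like Ansible’s YAML but in the declarative style, describing end states rather than ordered tasks. Here’s an illustrative sketch (the file name and package are assumptions):

```yaml
# A Salt state file (e.g. nginx.sls): declarative YAML describing end states,
# applied by the master to its Salt Minions.
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
```

The `require` line expresses a dependency rather than an execution order, which is the declarative model at work.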
Terraform is a whole different kind of beast, to the point where it might not be accurate to put it in the same category as the others. Released in 2014, Terraform was built with a different set of priorities than Chef, Puppet, Ansible, and Saltstack, focusing on provisioning infrastructure rather than configuration management. These are two different jobs that people often lump together. Configuration management is needed to maintain pre-existing infrastructure and configure it into the state desired by the programmer. Provisioning is the process of setting up IT infrastructure in the first place, whether that’s a platform, a piece of software, or a whole server. Most configuration management tools can do a little provisioning, and Terraform can handle configuration management to a degree, but both function best when they stay in their respective lanes.
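To see the provisioning focus in action, here’s a sketch of a Terraform configuration (the provider, region, AMI, and instance type are all illustrative assumptions). Rather than configuring software on an existing server, it declares that a server should exist at all:

```hcl
# A Terraform sketch: declare that a server exists; `terraform apply`
# provisions it. All identifiers below are illustrative.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Notably, if you later change an attribute like the `ami`, Terraform plans a replacement, destroying the old instance and creating a new one, rather than editing it in place.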
The other key difference that separates Terraform from the rest of the pack is its immutable infrastructure. Saltstack, Chef, Puppet, and Ansible all use mutable infrastructure. The difference only comes into play when you want to update your infrastructure. Mutable infrastructure is updated by incorporating the new update into the existing infrastructure and making the changes necessary to bring it to the new desired state. It’s a pretty standard way of handling updates, but as you pile on update after update on the same infrastructure, you get a small but increasing chance of configuration drift, unique idiosyncrasies that make configuration bugs more difficult to diagnose and purge. Immutable infrastructure solves this problem by not actually updating its infrastructure at all. Instead, when a new update is called for, new infrastructure is provisioned with the update built-in, and then traffic is shifted from the old infrastructure to the new, creating a more standardized, duplicable infrastructure.
This can bring its own problems, including greater processing power needed when making an update, and the possibility of any local data on the initial infrastructure being lost when you bring up new infrastructure in its place. Terraform has found ways around the second issue with non-local data storage solutions, though that can require extra storage for a data hub that old and new infrastructure can both access. It should also be said that some of the other orchestration tools can be configured to an immutable approach as well, but it doesn’t come as naturally. Like Ansible, Terraform is agentless and masterless, making it easy to get up and running, and like Puppet and Saltstack, it uses declarative code. All in all, it’s very user-friendly, and if you are looking to provision infrastructure, it’s a great tool to work with.
Here at Azion technologies, we got interested in an orchestration solution to manage our extensive serverless edge network. None of the existing offerings quite fit our needs, so we built our own. Unlike the other products discussed here, the Azion Edge Orchestrator is designed with a focus on managing the serverless nodes that make up the edge. This means we’re not really looking to serve the same function as products like Ansible or Terraform, as we’re operating in a new and different space, but we’ve still tried to embrace and build upon the innovations that came before.
We were really impressed with the iterating speeds Saltstack achieved with its agented connection, but weren’t satisfied with the security issues inherent to the agent-master relationship. Rather than throwing out the model entirely, the way Ansible and Terraform have gone, we decided to make it our own. The agented system on our Edge Orchestrator is protected from external attacks by state-of-the-art end-to-end encryption and token-based security layers. Our agents give us the power of zero-touch provisioning, enabling the user to quickly and remotely provision and configure the edge nodes in their network. That zero-touch provisioning dramatically increases network scalability, facilitating the fast and uniform installation of edge applications and firewalls direct from Azion, or from third parties via Azion Marketplace.
The Edge Orchestrator uses mutable infrastructure, which means we have to contend with some of the same issues of configuration drift that products like Ansible and Puppet suffer from. We’ve installed rollback contingencies so nodes that experience idiosyncrasies can be reset to their old version in response. This works pretty well, but it’s not perfect. Ultimately, we prioritized the security and preservation of client data, and found that the mutable approach left no possibility of data loss.
Lastly, we’re big fans of the way Terraform focuses on ease of use for its clients, and if you can’t beat ‘em, join ‘em. The Edge Orchestrator has a similar focus on starting things off simply, defaulting to a declarative style when you’re first provisioning your edge nodes, and then gently opening things up to a more specific imperative approach as the programmer dives deeper into editing and customizing our Edge Services.
Orchestration technology is a super exciting efficiency tool, and it’s already a game changer in how businesses run their IT. Of course every company wants to say that their tool is the best, but what’s really great is that the flexibility of modern options means that whatever your unique needs, there’s a product out there for you. If you’re looking for quick and easy provisioning that will get your infrastructure online ASAP, maybe check out Terraform’s services. Fancy yourself a master coder and want to take your time designing your configuration management to your exact specifications? You might like taking Chef’s intricate code out for a spin. And if you’ve broken free of the constraints of old-school server systems and are ready to boost your performance at the network edge, then see if the Edge Orchestrator might be right for you.