Platform Automation and Infrastructure as Code (IaC) – 1

A Trip Down Memory Lane

Today there are many tools available to automate configuration management and the deployment of infrastructure components. For those of us who grew up in our professional roles working on UNIX and Linux, writing scripts to automate repetitive tasks and tasks that are susceptible to human error is second nature. Back in the day, when Perl was the duct tape of the internet, we used it quite extensively for this kind of work. Perl, Korn shell, and later, with the advent of Linux, the “Bourne-again shell” (bash), were the mainstays of this kind of activity.

In the days of yore, we would automate server builds using technologies like Ignite (HP-UX), Jumpstart (Solaris), and Kickstart (Linux), and, as “good” systems engineers, we had toolkits that we carried with us across gigs. These toolkits contained painstakingly developed scripts and tools that we used to automate tasks such as applying security parameters and configuring server software components such as logical volume managers (to build mirrored root disks and RAID volumes) to host a variety of application software.

There would be scripts to standardize and automate the tuning of various kernel parameters for security and/or performance optimization. These usually started as per-system configuration files which, at larger scale (on hundreds or thousands of nodes), became susceptible to random changes and typographic errors made by tired operators working all-nighters during progressively smaller and stricter change-control windows.

Virtualization and the consequences thereof

Virtualization happened, and eventually infrastructure components became more malleable: the physical systems became mere receptacles for software-defined servers and containers. Thus, many more paradigms for configuring, managing, and automating these components came into existence, in the form of yet more scripts and configuration files.

Suddenly we had to contend with hundreds or thousands of physical servers and many thousands of virtual machines and containers, which had to fit standardized profiles based on the workloads they were intended to run. Each virtualization vendor had its own set of tools and mechanisms to automate these: the VMware/vSphere CLI, the OpenStack CLI, and SDKs that could be used programmatically to automate infrastructure builds, configuration, and so on.

Tools like Chef and Puppet emerged to help us maintain configuration sanity across a variety of data center assets. They ensured that configurations were version-controlled and centrally managed, and they provided a methodology, a structure, for applying configuration changes without inadvertently causing performance impacts or full-blown outages.

The worlds of systems engineering and software development started to cross-pollinate, and eventually a hybrid emerged. The advent of cloud computing, of course, accelerated this process.

The public cloud vendors each have their own spin on serving up virtualized infrastructure, platforms, and software, along with their own API frameworks, CLI tools, and so on for spinning up, configuring, and tearing down those various components.

Modern Tools for the Modern World

The old way of automating systems engineering and operational tasks usually took the form of scripts, ranging from rudimentary checklist-type command sequences to comprehensive, flexible, configuration-driven tools developed to be usable in a variety of scenarios. This type of programming is called the “imperative style” of automation: explicit state-change instructions (commands) are provided in sequential order.
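
As a minimal sketch of the imperative style, consider a hypothetical hardening script of the kind we used to carry around in those toolkits; the file paths and parameter values here are illustrative, not recommendations:

```bash
#!/usr/bin/env bash
# Imperative automation: each command is an explicit state change,
# executed in order, with no model of the system's current state.
set -euo pipefail

# Tune a couple of kernel parameters (illustrative values).
echo "net.ipv4.ip_forward = 0" >> /etc/sysctl.d/99-hardening.conf
echo "kernel.randomize_va_space = 2" >> /etc/sysctl.d/99-hardening.conf
sysctl --system

# Enforce a stricter default umask for new logins.
echo "umask 027" > /etc/profile.d/umask.sh

# Run this script twice and the '>>' appends duplicate lines: the
# classic pitfall of imperative scripts, which are not idempotent
# unless the author deliberately makes them so.
```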

A tool like Puppet uses what is known as the “declarative style” of automation: the desired end state of a system or artifact is declared, and the tool produces the changes needed to get there. The underlying assumption is that the tool has the built-in intelligence to bring about the declared state. Puppet focuses predominantly on configuration management.
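
For contrast with the bash script above, here is a minimal declarative sketch, a hypothetical Puppet manifest managing an NTP service (the module path is a placeholder). It describes only the end state; Puppet works out which commands to run, if any:

```puppet
# Declarative automation: state what should be true, not how to do it.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',  # hypothetical module path
  require => Package['ntp'],
  notify  => Service['ntpd'],
}

service { 'ntpd':
  ensure => running,
  enable => true,
}
```

Applying this manifest repeatedly converges to the same state; the tool acts only when the actual state drifts from the declared one.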

There are other, more comprehensive tools, such as Ansible and Terraform, that also do declarative automation but lean more toward deployment automation. Ansible is a great tool that I’ve used, and most major software vendors operating in the open-source space publish “playbooks” that can be used to build complex infrastructure and software. In my role at Cloudera, we used Ansible to automate the deployment of Cloudera’s complex Hadoop ecosystem as one of the modes of delivery.
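
An Ansible playbook reads as a list of desired states applied to an inventory of hosts, typically over SSH. A minimal sketch, with a hypothetical inventory group and package name:

```yaml
# site.yml -- a hypothetical play; 'webservers' is an inventory group.
- name: Ensure the web tier is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and started
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running the same playbook twice is safe: tasks already in their desired state report “ok” and make no changes.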

Public cloud vendors have similar declarative automation tools native to their respective platforms, such as Azure Resource Manager (ARM) templates, AWS CloudFormation, and GCP’s Deployment Manager.

Then there is Terraform. It is a great tool that provides a common framework for deployment across various platforms: Azure, AWS, GCP, OpenStack, VMware, and so on. A “provider” is associated with each of these platforms, and the same Terraform framework is employed to automate complex infrastructure builds, with the platform-specific details supplied as variables.
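
To give a flavor of what that looks like, here is a minimal, hypothetical Terraform sketch using the AWS provider; the region, AMI ID, and instance type are illustrative placeholders:

```hcl
# main.tf -- declare the provider, then the desired resources.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}

variable "region" {
  default = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "iac-demo"
  }
}
```

The same terraform init, plan, and apply workflow then drives whichever platform the provider targets; swapping in the azurerm or google provider changes the resource details, not the workflow.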

In this blog series, the idea is to capture the process of using Terraform to deploy infrastructure assets in AWS and Azure, and to provide some high-level insight into how it works and how simple, yet versatile, this tool is.
