In the first post of this series, I alluded to walking through terraform and how it can be leveraged as a standard tool for deployment across a variety of environments. In this post, I will walk through the basic prerequisites, as I understand them, to start using terraform on both AWS and Azure.
How to run Terraform
The terraform tool itself is a binary that can be downloaded from https://www.terraform.io/downloads.html, with builds available for macOS, Linux, FreeBSD, OpenBSD, Solaris, and Windows, depending on where one wants to run the code from.
After the tool has been downloaded, be sure to add its location to the PATH variable on your OS, or move it to a location that is already on the PATH, such as “/usr/local/bin”.
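Once the binary is in place, running the version command is a quick way to confirm that the shell can find it:
$ terraform version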
The terraform code is written in a configuration definition language (format) called HashiCorp Configuration Language (HCL). Its objective is to define the various infrastructure components in a standardized manner so that terraform can read them and execute the necessary underlying code. The files have a “.tf” extension, and the configuration can be a single flat file for simple deployments, or a combination of sub-component files organized in directories that are sourced as modules from a “main” file (often named main.tf).
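As a minimal sketch of what HCL looks like, the snippet below declares an AWS provider and a single resource; the region, bucket name, and labels are illustrative placeholders rather than anything to copy verbatim.
# Tell terraform which provider to use for creating resources
provider "aws" {
  region = "us-east-1"   # illustrative region
}

# A single resource block: an S3 bucket (the bucket name is a
# placeholder and must be globally unique in a real deployment)
resource "aws_s3_bucket" "example" {
  bucket = "my-example-terraform-bucket"
}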
Once the terraform files are set up, the following three steps initialize the terraform configuration, validate it and compose an execution plan, and then execute that plan.
$ terraform init
$ terraform plan
$ terraform apply
If the deployment needs to be destroyed, we would run —
$ terraform destroy
Terraform Configuration
In order to create resources in a Cloud platform such as AWS or Azure, the terraform tool needs to be provided credentials. These credentials vary based on the underlying platform, but generally involve an ID and a secret. For this series, I’m only looking at AWS and Azure. On the AWS side, a user account with the necessary permissions needs to be available in IAM, and the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY parameters need to be set to that user’s access key values.
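One common way to supply these values is to export them as environment variables in the shell before running terraform; the values here are placeholders for the IAM user’s access key pair.
$ export AWS_ACCESS_KEY_ID="<your-access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"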
In Azure, the approach is to use a Service Principal with the Contributor role at the Subscription level, which has its own set of parameters that need to be populated (one way to set them is sketched after this list) —
ARM_SUBSCRIPTION_ID
ARM_CLIENT_SECRET
ARM_TENANT_ID
ARM_CLIENT_ID
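These can be exported as environment variables before running terraform, in the same way as the AWS keys above; the values below are placeholders for the Service Principal’s details.
$ export ARM_SUBSCRIPTION_ID="<subscription-id>"
$ export ARM_CLIENT_ID="<service-principal-app-id>"
$ export ARM_CLIENT_SECRET="<service-principal-password>"
$ export ARM_TENANT_ID="<tenant-id>"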
To use Terraform in a collaborative environment (where multiple users are using Terraform to build infrastructure), this blog post can be followed to set up a shared remote state store with locking capabilities, secrets management, and so on.
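As a rough illustration of what such a shared remote state store can look like in HCL, the sketch below configures an S3 backend with DynamoDB-based locking; the bucket and table names are placeholders, both resources would need to exist before running terraform init, and this is just one common arrangement rather than necessarily the one described in that post.
# Sketch of a remote state store on AWS: state is kept in an S3
# bucket and a DynamoDB table provides state locking. The bucket
# and table names below are placeholders and must already exist
# before "terraform init" is run.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "example/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}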
This Microsoft article shows how to use Key Vault to further protect access keys and other secrets on Azure. This other Terraform article shows how to do the same for AWS.
This article by Yevgeniy Brikman provides a comprehensive guide to managing secrets in Terraform code.
(to be continued)