Platform Automation and Infrastructure As Code – 3

In the first installment of this series, I covered a historical perspective on infrastructure deployment and delivery, and the imperative-style automation that used to be the norm. In the second installment, I went over the basics of Terraform, one of the most versatile and popular declarative IaC tools in the industry today.

In this post, I will explore delivering a simple enough architecture: a load-balancer-fronted VM Scale Set in Azure. I will cover a similar deployment in AWS in a subsequent post. There are plenty of resources on the web that cover the basics of Terraform (as well as the official Terraform documentation), so my intention is simply to share an approach to modular development of Infrastructure as Code (IaC).

Before a load-balanced web farm can be deployed, a few prerequisites need to be addressed:

  • There must be a valid cloud account and a deployment account with appropriate permissions to create, modify, or tear down resources. On Azure this involves:
    • A valid Subscription associated with an Azure AD tenant ID.
    • A Service Principal with appropriate permissions (the Contributor role at the subscription level); one way to set this up is sketched below.
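
A sketch of one way to create the Service Principal with the Azure CLI (the name "terraform-sp" is just an illustration). The azurerm provider picks up the ARM_* environment variables automatically:

$ az login
$ az account set --subscription "<subscription-id>"
$ az ad sp create-for-rbac --name "terraform-sp" --role Contributor \
    --scopes "/subscriptions/<subscription-id>"

$ export ARM_CLIENT_ID="<appId from the output above>"
$ export ARM_CLIENT_SECRET="<password from the output above>"
$ export ARM_SUBSCRIPTION_ID="<subscription-id>"
$ export ARM_TENANT_ID="<tenant from the output above>"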

Alongside the main and module components, there are two files: variables.tf, which parameterizes the infrastructure resource definitions, and outputs.tf, which prints specific values, such as the SSH private key, to STDOUT. It is a good idea to use an IDE that supports Terraform's HCL language to simplify the development process. I personally use PyCharm, which is a great IDE, and the community edition is free to download and use.

Once PyCharm is installed, the Terraform plugin must be added in order for it to support HCL. There is a bit of a learning curve in using an IDE like PyCharm, but the benefits far outweigh the initial effort: code auto-completion, syntax highlighting, formatting, syntax checking, and more. Of course, many other IDEs provide similar functionality. Being a hard-core shell and CLI person myself, the vi editor has long been my go-to tool for scripting and editing text files. For me, PyCharm was a revelation!

Terraform will create all the cloud resources and artifacts once executed. It is also worth keeping the code in a GitHub repository, so having a GitHub account helps.

The structure I’ve used for the code is as follows (see the tree below) —

$ tree
.
├── examples
│   ├── database.tf
│   ├── network.tf
│   ├── nsg.tf
│   ├── vm.tf
│   ├── vmss.tf
│   └── webapp.tf
├── main.tf
├── stubs
│   ├── base.tf
│   ├── compute.tf
│   ├── install_apache.sh
│   ├── lb.tf
│   ├── network.tf
│   ├── nsg.tf
│   ├── outputs.tf
│   ├── storage.tf
│   └── variables.tf
└── variables.tf

The HCL configuration syntax is built around two key components: arguments and blocks. An argument assigns a value to a particular name (such as vnet_name = "kt-vnet"). A block is a container for other content. Refer to the official Terraform documentation for more details.
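
As a quick illustration using values from this deployment: the resource line below opens a block (a container introduced by a type and two labels), and everything of the form name = value inside it is an argument.

resource "azurerm_virtual_network" "TFNet" {
  # Arguments: name = value assignments inside the block body
  name          = "kt-vnet"
  address_space = ["10.0.0.0/16"]
}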

The main.tf file is the “main program” (so to speak): it calls the modules organized under the “stubs” directory and passes in the variables declared in stubs/variables.tf. The stubs/outputs.tf file exposes the output (return) values of the various resources created by the modules.

Looking at the main.tf file, the first section declares the required_providers for the deployment and configures the “azurerm” provider, which will be used to deploy the Azure resources.

$ cat main.tf 
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # version = "=2.46.0"
    }
  }
}

provider "azurerm" {
  features {}
}
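
With the version argument commented out, terraform init fetches the newest azurerm release it can find, which may drift over time. For reproducible runs, a pessimistic version constraint pins it; a sketch, where the exact constraint is just an illustration:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Allow 2.46.x patch releases, but nothing newer
      version = "~> 2.46.0"
    }
  }
}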

The module block instantiates the module under the name “stubs” and identifies its source as the “stubs” directory located in the current working directory (“./stubs”). It also passes in the various variable values required by the modules that reside inside the “stubs” directory.


module "stubs" {
  source = "./stubs"
  vnet_name = "kt-vnet"
  subnet_name = "kt-subnet1"
  vnet_address_space = "10.0.0.0/16"
  subnet_address_prefix = "10.0.1.0/24"
  lb_name = "kt-lb"
  storage_account_name = "ktsa"
  storage_container_name = "kt-sa-bc"
  storage_blob_name = "kt-blob"
  storage_share_name = "kt-share"
  bootdiag_storage_account_name = "ktbootdiag"
  account_tier = "Standard"
  account_repl_type = "LRS"
  nsg_name = "kt-secure"
  resource_group_name = "kicktires-rg"
  resource_group_location = "Central US"
}

The code also defines an output block named “ssh_private_key”, which surfaces the corresponding output from the stubs module.

output "ssh_private_key" {
  value = module.stubs.ssh_private_key
}
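
One caveat worth noting: on newer Terraform releases (0.15 and later), an output derived from a sensitive attribute such as private_key_pem must itself be marked sensitive, and the raw value is then extracted on demand rather than printed. A sketch (terraform output -raw requires Terraform 0.14 or later):

output "ssh_private_key" {
  value     = module.stubs.ssh_private_key
  sensitive = true
}

$ terraform output -raw ssh_private_key > myprivkey.pem
$ chmod 600 myprivkey.pem

This is also one way to produce the myprivkey.pem file used for the SSH session at the end of this post.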

The stubs/variables.tf file defines the variables to be used by the modules within the stubs directory.

$ cat stubs/variables.tf 
variable "storage_account_name" {}
variable "storage_container_name" {}
variable "storage_blob_name" {}
variable "storage_share_name" {}
variable "bootdiag_storage_account_name" {}
variable "account_tier" {}
variable "account_repl_type" {}
variable "vnet_name" {}
variable "subnet_name" {}
variable "vnet_address_space" {}
variable "subnet_address_prefix" {}
variable "lb_name" {}
variable "nsg_name" {}
variable "resource_group_name" {}
variable "resource_group_location" {}

The modules themselves are named after the resource types they create. In Azure, all resources of a deployment are organized in a logical entity called a “resource group”, which is created by stubs/base.tf.

resource "azurerm_resource_group" "rg" {
  name = var.resource_group_name
  location = var.resource_group_location

  tags = {
    environment = "Terraform Kicktires"
    CreatedBy = "Admin"
  }
}

The next components required for any deployment are the network resources, which are created by stubs/network.tf

resource "azurerm_virtual_network" "TFNet" {
  name = var.vnet_name
  address_space = [var.vnet_address_space]
  location = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  tags = {
    environment = "Terraform VNET"
  }
}
# Create subnet
resource "azurerm_subnet" "tfsubnet" {
  name = var.subnet_name
  resource_group_name = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.TFNet.name
  address_prefix = var.subnet_address_prefix
}

# Deploy Public IPs
resource "azurerm_public_ip" "pip01" {
  name = "pubip1"
  location = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method = "Static"
  domain_name_label = azurerm_resource_group.rg.name
  sku = "Standard"
}
resource "azurerm_public_ip" "pip02" {
  name = "pubip2"
  location = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method = "Static"
  # domain_name_label = azurerm_resource_group.rg.name
  sku = "Standard"
}

output "pip01id" {
  value = azurerm_public_ip.pip01.id
}

output "pip02id" {
  value = azurerm_public_ip.pip02.id
}

output "subnetid" {
  value = azurerm_subnet.tfsubnet.id
}

output "vnetid" {
  value = azurerm_virtual_network.TFNet.id
}

In our example deployment, all compute resources will reside in the same VNet (analogous to a VPC in AWS) and the same subnet. There will also be two public IPs: one for a stand-alone VM and the other as a front-end IP for the load balancer.
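
To make the load balancer wiring concrete, here is a minimal sketch of what stubs/lb.tf might contain. The resource names (PublicIPAddress, BackEndAddressPool, the ssh NAT pool) match the plan and apply output below; which of the two public IPs fronts the load balancer, and the exact NAT port range, are assumptions:

resource "azurerm_lb" "lb" {
  name                = var.lb_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "PublicIPAddress"
    public_ip_address_id = azurerm_public_ip.pip01.id   # assumed: pip01 fronts the LB
  }
}

resource "azurerm_lb_backend_address_pool" "bpepool" {
  resource_group_name = azurerm_resource_group.rg.name
  loadbalancer_id     = azurerm_lb.lb.id
  name                = "BackEndAddressPool"
}

# Maps a per-instance front-end port range to port 22 on each scale set node
resource "azurerm_lb_nat_pool" "lbnatpool" {
  resource_group_name            = azurerm_resource_group.rg.name
  loadbalancer_id                = azurerm_lb.lb.id
  name                           = "ssh"
  protocol                       = "Tcp"
  frontend_port_start            = 50000   # assumed range
  frontend_port_end              = 50119
  backend_port                   = 22
  frontend_ip_configuration_name = "PublicIPAddress"
}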

The next module creates the compute resources, in this case a Linux VM Scale Set and a stand-alone Linux VM. Similar modules create the Network Security Group (an L3/L4 firewall), the load balancer, and so on. Outputs can be defined in the module files themselves or in a separate outputs.tf file in the modules directory (e.g., stubs/outputs.tf).
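
For flavor, a hedged sketch of the scale set portion of stubs/compute.tf. The scale set name, admin user, Ubuntu 16.04 image, and SSH key wiring are consistent with the apply log and SSH session shown below; the SKU, instance count, and custom_data bootstrap are assumptions:

resource "azurerm_linux_virtual_machine_scale_set" "vmss" {
  name                = "mytestscaleset-1"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Standard_B1s"   # assumed size
  instances           = 2                # assumed count
  admin_username      = "rsadmin"

  admin_ssh_key {
    username   = "rsadmin"
    public_key = tls_private_key.rsadmin_ssh.public_key_openssh
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadWrite"
  }

  network_interface {
    name    = "vmss-nic"
    primary = true

    ip_configuration {
      name                                   = "internal"
      primary                                = true
      subnet_id                              = azurerm_subnet.tfsubnet.id
      load_balancer_backend_address_pool_ids = [azurerm_lb_backend_address_pool.bpepool.id]
      load_balancer_inbound_nat_rules_ids    = [azurerm_lb_nat_pool.lbnatpool.id]
    }
  }

  # Bootstrap each node as a web server using the script in this module
  custom_data = base64encode(file("${path.module}/install_apache.sh"))
}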

In order to execute the deployment, run the usual sequence: terraform init, terraform plan, and terraform apply.

Terraform init

$ terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Reusing previous version of hashicorp/tls from the dependency lock file

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform plan

$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.stubs.azurerm_lb.lb will be created
  + resource "azurerm_lb" "lb" {
      + id                   = (known after apply)
      + location             = "centralus"
      + name                 = "kt-lb"
      + private_ip_address   = (known after apply)
      + private_ip_addresses = (known after apply)
      + resource_group_name  = "kicktires-rg"
      + sku                  = "Standard"

      + frontend_ip_configuration {
          + id                            = (known after apply)
          + inbound_nat_rules             = (known after apply)
          + load_balancer_rules           = (known after apply)
          + name                          = "PublicIPAddress"
          + outbound_rules                = (known after apply)
          + private_ip_address            = (known after apply)
          + private_ip_address_allocation = (known after apply)
          + private_ip_address_version    = "IPv4"
          + public_ip_address_id          = (known after apply)
          + public_ip_prefix_id           = (known after apply)
          + subnet_id                     = (known after apply)
        }
    }

  # module.stubs.azurerm_lb_backend_address_pool.bpepool will be created
  + resource "azurerm_lb_backend_address_pool" "bpepool" {
      + backend_ip_configurations = (known after apply)
      + id                        = (known after apply)
      + load_balancing_rules      = (known after apply)
      + loadbalancer_id           = (known after apply)
      + name                      = "BackEndAddressPool"
      + outbound_rules            = (known after apply)
      + resource_group_name       = "kicktires-rg"
    }
         ...
         ...
         ...
 <Output truncated for readability> 

  # module.stubs.tls_private_key.rsadmin_ssh will be created
  + resource "tls_private_key" "rsadmin_ssh" {
      + algorithm                  = "RSA"
      + ecdsa_curve                = "P224"
      + id                         = (known after apply)
      + private_key_pem            = (sensitive value)
      + public_key_fingerprint_md5 = (known after apply)
      + public_key_openssh         = (known after apply)
      + public_key_pem             = (known after apply)
      + rsa_bits                   = 4096
    }

Plan: 24 to add, 0 to change, 0 to destroy.

And then terraform apply

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.stubs.azurerm_lb.lb will be created
  + resource "azurerm_lb" "lb" {
      + id                   = (known after apply)
      + location             = "centralus"
      + name                 = "kt-lb"
      + private_ip_address   = (known after apply)
      + private_ip_addresses = (known after apply)
      + resource_group_name  = "kicktires-rg"
      + sku                  = "Standard"

      + frontend_ip_configuration {
          + id                            = (known after apply)
          + inbound_nat_rules             = (known after apply)
          + load_balancer_rules           = (known after apply)
          + name                          = "PublicIPAddress"
          + outbound_rules                = (known after apply)
          + private_ip_address            = (known after apply)
          + private_ip_address_allocation = (known after apply)
          + private_ip_address_version    = "IPv4"
          + public_ip_address_id          = (known after apply)
          + public_ip_prefix_id           = (known after apply)
          + subnet_id                     = (known after apply)
        }
    }

  
<output truncated for readability>

  

  # module.stubs.tls_private_key.rsadmin_ssh will be created
  + resource "tls_private_key" "rsadmin_ssh" {
      + algorithm                  = "RSA"
      + ecdsa_curve                = "P224"
      + id                         = (known after apply)
      + private_key_pem            = (sensitive value)
      + public_key_fingerprint_md5 = (known after apply)
      + public_key_openssh         = (known after apply)
      + public_key_pem             = (known after apply)
      + rsa_bits                   = 4096
    }

Plan: 24 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ssh_private_key = (known after apply)



Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.stubs.tls_private_key.rsadmin_ssh: Creating...
module.stubs.azurerm_resource_group.rg: Creating...
module.stubs.azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg]
module.stubs.azurerm_network_security_group.nsg: Creating...
module.stubs.azurerm_public_ip.pip02: Creating...
module.stubs.azurerm_virtual_network.TFNet: Creating...
module.stubs.azurerm_public_ip.pip01: Creating...
module.stubs.azurerm_storage_account.sa: Creating...
module.stubs.azurerm_storage_account.sabootdiag: Creating...
module.stubs.tls_private_key.rsadmin_ssh: Creation complete after 7s [id=2ace9f934f32374d007f4262182c618648c213df]
module.stubs.azurerm_network_security_group.nsg: Creation complete after 4s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/networkSecurityGroups/kt-secure]
module.stubs.azurerm_network_security_rule.allow_ib_ssh: Creating...
module.stubs.azurerm_network_security_rule.allow_ib_http: Creating...
module.stubs.azurerm_network_security_rule.allow_ob_http: Creating...
module.stubs.azurerm_public_ip.pip02: Creation complete after 4s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/publicIPAddresses/pubip2]
module.stubs.azurerm_virtual_network.TFNet: Creation complete after 4s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/virtualNetworks/kt-vnet]
module.stubs.azurerm_subnet.tfsubnet: Creating...
module.stubs.azurerm_lb.lb: Creating...
module.stubs.azurerm_lb.lb: Creation complete after 2s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/loadBalancers/kt-lb]
module.stubs.azurerm_lb_nat_pool.lbnatpool: Creating...
module.stubs.azurerm_lb_probe.lbprobe: Creating...
module.stubs.azurerm_lb_rule.http: Creating...
module.stubs.azurerm_lb_nat_pool.lbnatpool: Creation complete after 1s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/loadBalancers/kt-lb/inboundNatPools/ssh]
module.stubs.azurerm_lb_backend_address_pool.bpepool: Creating...
module.stubs.azurerm_lb_probe.lbprobe: Creation complete after 1s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/loadBalancers/kt-lb/probes/http-probe]
module.stubs.azurerm_public_ip.pip01: Creation complete after 7s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/publicIPAddresses/pubip1]
module.stubs.azurerm_lb_rule.http: Creation complete after 2s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/loadBalancers/kt-lb/loadBalancingRules/LBRule]
module.stubs.azurerm_subnet.tfsubnet: Creation complete after 4s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/virtualNetworks/kt-vnet/subnets/kt-subnet1]
module.stubs.azurerm_subnet_network_security_group_association.nsghook01: Creating...
module.stubs.azurerm_network_interface.nic01: Creating...
module.stubs.azurerm_storage_account.sa: Still creating... [10s elapsed]
module.stubs.azurerm_storage_account.sabootdiag: Still creating... [10s elapsed]
module.stubs.azurerm_network_interface.nic01: Creation complete after 2s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/networkInterfaces/nic01]
module.stubs.azurerm_network_security_rule.allow_ob_http: Still creating... [10s elapsed]
module.stubs.azurerm_network_security_rule.allow_ib_http: Still creating... [10s elapsed]
module.stubs.azurerm_network_security_rule.allow_ib_ssh: Still creating... [10s elapsed]
module.stubs.azurerm_network_security_rule.allow_ob_http: Creation complete after 11s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/networkSecurityGroups/kt-secure/securityRules/Web80Out]
module.stubs.azurerm_lb_backend_address_pool.bpepool: Still creating... [10s elapsed]
module.stubs.azurerm_subnet_network_security_group_association.nsghook01: Still creating... [10s elapsed]
module.stubs.azurerm_lb_backend_address_pool.bpepool: Creation complete after 11s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/loadBalancers/kt-lb/backendAddressPools/BackEndAddressPool]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Creating...
module.stubs.azurerm_storage_account.sabootdiag: Still creating... [20s elapsed]
module.stubs.azurerm_storage_account.sa: Still creating... [20s elapsed]
module.stubs.azurerm_storage_account.sabootdiag: Creation complete after 21s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Storage/storageAccounts/ktbootdiag]
module.stubs.azurerm_virtual_machine.vm01: Creating...
module.stubs.azurerm_storage_account.sa: Creation complete after 23s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Storage/storageAccounts/ktsa]
module.stubs.azurerm_storage_container.sacontainer: Creating...
module.stubs.azurerm_storage_share.fshare: Creating...
module.stubs.azurerm_storage_container.sacontainer: Creation complete after 0s [id=https://ktsa.blob.core.windows.net/kt-sa-bc]
module.stubs.azurerm_storage_blob.lab: Creating...
module.stubs.azurerm_storage_blob.lab: Creation complete after 0s [id=https://ktsa.blob.core.windows.net/kt-sa-bc/kt-blob]
module.stubs.azurerm_storage_share.fshare: Creation complete after 1s [id=https://ktsa.file.core.windows.net/kt-share]
module.stubs.azurerm_network_security_rule.allow_ib_http: Still creating... [20s elapsed]
module.stubs.azurerm_network_security_rule.allow_ib_ssh: Still creating... [20s elapsed]
module.stubs.azurerm_network_security_rule.allow_ib_ssh: Creation complete after 21s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/networkSecurityGroups/kt-secure/securityRules/SSH]
module.stubs.azurerm_subnet_network_security_group_association.nsghook01: Still creating... [20s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [10s elapsed]
module.stubs.azurerm_virtual_machine.vm01: Still creating... [10s elapsed]
module.stubs.azurerm_network_security_rule.allow_ib_http: Still creating... [30s elapsed]
module.stubs.azurerm_network_security_rule.allow_ib_http: Creation complete after 32s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/networkSecurityGroups/kt-secure/securityRules/Web80]
module.stubs.azurerm_subnet_network_security_group_association.nsghook01: Still creating... [30s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [20s elapsed]
module.stubs.azurerm_subnet_network_security_group_association.nsghook01: Creation complete after 32s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Network/virtualNetworks/kt-vnet/subnets/kt-subnet1]
module.stubs.azurerm_virtual_machine.vm01: Still creating... [20s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [30s elapsed]
module.stubs.azurerm_virtual_machine.vm01: Still creating... [30s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [40s elapsed]
module.stubs.azurerm_virtual_machine.vm01: Still creating... [40s elapsed]
module.stubs.azurerm_virtual_machine.vm01: Creation complete after 47s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Compute/virtualMachines/vm01]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [50s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [1m0s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Still creating... [1m10s elapsed]
module.stubs.azurerm_linux_virtual_machine_scale_set.vmss: Creation complete after 1m13s [id=/subscriptions/<subscription-id>/resourceGroups/kicktires-rg/providers/Microsoft.Compute/virtualMachineScaleSets/mytestscaleset-1]

Apply complete! Resources: 24 added, 0 changed, 0 destroyed.

Outputs:

ssh_private_key = <<EOT
-----BEGIN RSA PRIVATE KEY-----
A bunch of stuff goes here... not shared, for security
-----END RSA PRIVATE KEY-----

EOT

The infrastructure should now be visible in the Azure Portal.

All the resources we outlined have been created.

I can SSH into the scale set nodes using the front-end public IP of the load balancer and the ports defined in the inbound NAT rules:

$ ssh -i myprivkey.pem rsadmin@20.40.243.244 -p 50005
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1108-azure x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

0 packages can be updated.
0 of these updates are security updates.


Last login: Thu Mar 11 17:41:42 2021 from 73.110.193.124
rsadmin@mytestscaleset-1000005:~$ 
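
Once done kicking the tires, the entire deployment can be torn down just as easily. Running terraform plan -destroy first previews what would be removed:

$ terraform plan -destroy
$ terraform destroy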

The GitHub repository contains the relevant code — https://github.com/implicateorder/terraform.git

(to be continued)
