Spinning up basic lab env with basic Terraform & Ansible

·798 words·4 mins
IT Linux Terraform Ansible Study-Note

All files are located in the same project dir ~/lab-tf/

Cloud-init files

user-data.cfg file

#cloud-config
datasource: NoCloud # likely ignored here; the seed ISO already implies NoCloud
password: 12345
chpasswd:
  expire: False
ssh_authorized_keys:
    - <public-key>
ssh_pwauth: False
package_update: true # this may cause slow VM first boot
packages:
    - qemu-guest-agent

I don’t use cloud-init to add NICs to VMs because the config doesn’t seem to persist after a reboot, so there is no need to define network-config.cfg. Instead, I define the NIC in the Terraform main file. I may be wrong, but this will have to do for now.

main.tf file

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
      # version = "0.7.6"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

##############
# DEFINE POOLS 
##############

resource "libvirt_pool" "freeipa" {
  name = "freeipa"
  type = "dir"
  path = "/path/to/pool"
}

resource "libvirt_pool" "dns23" {
  name = "dns23"
  type = "dir"
  path = "/path/to/pool"
}

##############
# DEFINE VOLUMES IN POOLS 
##############

resource "libvirt_volume" "freeipa0-qcow2" {
  name = "freeipa0-qcow2"
  pool = libvirt_pool.freeipa.name
  format = "qcow2"
  source = "/path/to/vm/img/rhel-8.9-x86_64-kvm.qcow2"
}

resource "libvirt_volume" "freeipa1-qcow2" {
  name = "freeipa1-qcow2"
  pool = libvirt_pool.freeipa.name
  format = "qcow2"
  source = "/path/to/vm/img/rhel-8.9-x86_64-kvm.qcow2"
}

resource "libvirt_volume" "dns23-qcow2" {
  name = "dns23-qcow2"
  pool = libvirt_pool.dns23.name
  format = "qcow2"
  source = "/path/to/vm/img/rhel-8.9-x86_64-kvm.qcow2"
}

##############
# DEFINE CLOUD INIT
##############

data "template_file" "user_data" {
  template = file("${path.module}/user-data.cfg")
}
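Since user-data.cfg contains no Terraform interpolations, the template_file data source (which pulls in the separate, now-deprecated hashicorp/template provider) isn’t strictly needed; the built-in file() function can feed the disk directly. A minimal sketch of the first disk rewritten that way:

```hcl
# Sketch: read the user data with the built-in file() function,
# which makes the template_file data block above unnecessary
resource "libvirt_cloudinit_disk" "commoninit0" {
  name      = "commoninit0.iso"
  user_data = file("${path.module}/user-data.cfg")
  pool      = libvirt_pool.freeipa.name
}
```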

# use cloud-init to pass ssh-key to vm
resource "libvirt_cloudinit_disk" "commoninit0" {
  name           = "commoninit0.iso"
  user_data      = data.template_file.user_data.rendered
  pool           = libvirt_pool.freeipa.name
}

resource "libvirt_cloudinit_disk" "commoninit1" {
  name           = "commoninit1.iso"
  user_data      = data.template_file.user_data.rendered
  pool           = libvirt_pool.freeipa.name
}

resource "libvirt_cloudinit_disk" "commoninit2" {
  name           = "commoninit2.iso"
  user_data      = data.template_file.user_data.rendered
  pool           = libvirt_pool.dns23.name
}

##############
# CREATE NETWORK 
##############

resource "libvirt_network" "lab" {
   name = "lab"
   autostart = true
   addresses = ["192.168.100.0/24"]
   mode = "nat" # mode can be: "nat" (default), "none", "route", "open", "bridge"
   dhcp {
      enabled = true
   }
  # Enables usage of the host dns if no local records match
  dns {
    enabled = true
    local_only = false
  }
}

##############
# CREATE MACHINES (DOMAINS)
##############

resource "libvirt_domain" "freeipa0" {
  name   = "freeipa0"
  memory = "2048"
  vcpu   = 2
  qemu_agent = true
  autostart  = true
  cloudinit = libvirt_cloudinit_disk.commoninit0.id

  network_interface {
    network_name = "lab"
    mac = "52:54:00:f6:73:04"
    addresses = ["192.168.100.20"]
    hostname = "freeipa0"
  }
  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }
  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }
  disk {
    volume_id = libvirt_volume.freeipa0-qcow2.id
  }
  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

resource "libvirt_domain" "freeipa1" {
  name   = "freeipa1"
  memory = "2048"
  vcpu   = 2
  qemu_agent = true
  autostart  = true
  cloudinit = libvirt_cloudinit_disk.commoninit1.id

  network_interface {
    network_name = "lab"
    mac = "52:54:00:52:9e:fb"
    addresses = ["192.168.100.21"]
    hostname = "freeipa1"
  }
  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }
  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }
  disk {
    volume_id = libvirt_volume.freeipa1-qcow2.id
  }
  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

resource "libvirt_domain" "dns23" {
  name   = "dns23"
  memory = "2048"
  vcpu   = 2
  qemu_agent = true
  autostart  = true
  cloudinit = libvirt_cloudinit_disk.commoninit2.id

  network_interface {
    network_name = "lab"
    mac = "52:54:00:d6:32:24"
    addresses = ["192.168.100.11"]
    hostname = "dns23"
  }
  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }
  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }
  disk {
    volume_id = libvirt_volume.dns23-qcow2.id
  }
  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

One thing I realised is that when specifying a static IP address with Terraform, it actually adds a static DHCP lease to the virtual network (called “lab” in this case) instead of changing the VM’s NIC config itself.

Also, all VMs can share the same commoninit.iso file and the same pool (with different volumes), so a large part of the “Define cloud init” section above is redundant.
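Under that assumption, the three cloudinit disks collapse into one shared seed ISO. A hedged sketch (the commoninit resource name is mine, not from the config above):

```hcl
# Sketch: a single seed ISO shared by all three domains
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = data.template_file.user_data.rendered
  pool      = libvirt_pool.freeipa.name
}
```

Each libvirt_domain would then set `cloudinit = libvirt_cloudinit_disk.commoninit.id`.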

Ansible inventory file

Located in the same project dir and named “inventory”.

[foreman]
192.168.100.10

[dns]
192.168.100.11

[freeipa]
192.168.100.20
192.168.100.21

# Group all other groups of servers
[all:children]
freeipa
foreman
dns

# Var for all centos server
[all:vars]
ansible_ssh_user=centos
ansible_ssh_private_key_file=/absolute/path/to/key/file

Make consistent user password

Change the default user’s password (centos) just in case we lose SSH connectivity. Not the best security practice, but passable in a lab environment.

ansible -i inventory all -b -m shell -a "echo 12345 | passwd --stdin centos"

Then register all VMs. This can be done in the cloud-init user-data.cfg file as well.

ansible -i inventory all -b -a "subscription-manager register --username=<RHEL-user> --password=<password>"
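For the cloud-init route, the rh_subscription module can register the system on first boot — a sketch for user-data.cfg (credentials are placeholders, same as above):

```yaml
#cloud-config
rh_subscription:
  username: <RHEL-user>
  password: <password>
  auto-attach: true
```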

Create VM snapshots

Do this for each VM

sudo virsh snapshot-create-as --domain freeipa0 --name "init"

Ha! I just found a better way to do this with one nifty one-liner:

sudo virsh list --all | awk '{print $2}' | sed '/^$/d' | grep -v "Name" | xargs -I {} sudo virsh snapshot-create-as --domain {} --name "init"
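To see what the pipeline does, here it is run against sample `virsh list --all` output: awk pulls the second column, sed drops blank lines (including the one awk emits for the `----` separator row), grep drops the `Name` header, and xargs (omitted here) would feed each remaining name to snapshot-create-as:

```shell
# Sample "virsh list --all" output piped through the same filters
printf ' Id   Name       State\n--------------------------\n 1    freeipa0   running\n 2    freeipa1   running\n 3    dns23      running\n\n' \
  | awk '{print $2}' | sed '/^$/d' | grep -v "Name"
# prints:
# freeipa0
# freeipa1
# dns23
```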
