
Terraform and libvirt nodes


Finally, the nodes themselves will be defined in the code. A module will be used to define these nodes, as several of them need to be deployed with the same resource configuration.
Example:

module "host01-node" {
    source = "../modules/kvm-node"

    name         = "host01"
    dns_domain   = var.dns_domain
    memory       = 8192 # in MiB
    vcpu         = 4
    disk_size    = 40 # in GiB
    ip_address   = "10.1.2.10"
    gw_address   = var.gw_address
    user_pwd     = var.default_password

    install_image   = libvirt_volume.local_install_image.id
    libvirt_pool    = libvirt_pool.default
    libvirt_network = libvirt_network.my_network
}
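
Inside the module, these arguments need matching input variable declarations. They are not listed in this article, but a minimal variables.tf sketch, with the types assumed from the values passed above, could look like this:

# variables.tf of the kvm-node module (a sketch; types assumed from the call above)
variable "name"       { type = string }
variable "dns_domain" { type = string }
variable "memory"     { type = number } # MiB
variable "vcpu"       { type = number }
variable "disk_size"  { type = number } # GiB
variable "ip_address" { type = string }
variable "gw_address" { type = string }
variable "user_pwd"   { type = string }

variable "install_image" { type = string } # ID of the base image volume

# The pool and network are passed in as whole resource objects, so the
# module can reference their attributes (.name, .id) further down.
variable "libvirt_pool"    {}
variable "libvirt_network" {}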

The module will first define some storage resources which are required for the installation and post-installation steps. Post-installation steps are managed by “cloud-init”, which requires its own storage volume. This storage volume is built from two templates (hashicorp/template provider):

  • the cloud-init configuration file
  • the network configuration file
data "template_file" "user_data" {
  template = "${file("${path.module}/templates/cloud-init.cfg")}"
  vars = {
    hostname = var.name
    fqdn     = "${var.name}.${var.dns_domain}"
    password = var.user_pwd
  }
}

data "template_file" "network_config" {
  template = "${file("${path.module}/templates/network-config.cfg")}"
  vars = {
    domain     = var.dns_domain
    ip_address = var.ip_address
    gw_address = var.gw_address
  }
}

resource "libvirt_cloudinit_disk" "commoninit" {
    name = "${var.name}-commoninit.iso"
    pool = var.libvirt_pool.name
    user_data = "${data.template_file.user_data.rendered}"
    network_config = "${data.template_file.network_config.rendered}"
}
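
The template files themselves are not shown in this article (they are part of the linked repository). As an illustration only, a minimal templates/cloud-init.cfg consuming the three variables above could look like the following sketch, which assumes the Ubuntu cloud image's default "ubuntu" user:

#cloud-config
# ${hostname}, ${fqdn} and ${password} are rendered by the template provider
hostname: ${hostname}
fqdn: ${fqdn}
manage_etc_hosts: true

# Allow password logins and set the password of the default user
ssh_pwauth: true
chpasswd:
  list: |
    ubuntu:${password}
  expire: false

# Install and start the guest agent so "qemu_agent = true" works later on
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent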

The last storage volume defined is the virtual disk used by the virtual node:

resource "libvirt_volume" "node-disk-qcow2" {
  name   = "${var.name}-disk-ubuntu-focal.qcow2"
  pool   = var.libvirt_pool.name

  size   = 1024*1024*1024*var.disk_size # disk_size is in GiB, libvirt expects bytes

  base_volume_id   = var.install_image
  base_volume_pool = var.libvirt_pool.name

  format = "qcow2"
}

This volume is based on the “local_install_image” which was created previously.
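
For reference, a base image volume like “local_install_image” is typically created by pointing a libvirt_volume at a cloud image URL. Its exact definition appears earlier in the article; a sketch, with the Ubuntu Focal cloud image URL as an assumption, would be:

resource "libvirt_volume" "local_install_image" {
  name   = "focal-server-cloudimg-amd64.img"
  pool   = libvirt_pool.default.name
  source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
  format = "qcow2"
}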

Finally, with all supporting resources defined, the virtual node resource itself can be configured:

resource "libvirt_domain" "kvm_node" {
  name = var.name

  memory = var.memory
  vcpu   = var.vcpu

  qemu_agent = true
  autostart  = true

  cloudinit = libvirt_cloudinit_disk.commoninit.id

  network_interface {
    network_id     = var.libvirt_network.id
  }

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = libvirt_volume.node-disk-qcow2.id
  }
}

The network interface only needs to be attached to an existing network; the actual IP configuration will be done by cloud-init.
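
The templates/network-config.cfg file is not listed in this article either. cloud-init on Ubuntu Focal accepts a netplan-style (version 2) network configuration; a sketch, in which the interface name ens3, the /24 prefix and the use of the gateway as DNS server are all assumptions:

version: 2
ethernets:
  ens3:
    addresses:
      - ${ip_address}/24
    # "gateway4" is accepted by Focal's netplan; newer releases prefer "routes"
    gateway4: ${gw_address}
    nameservers:
      search: [${domain}]
      addresses: [${gw_address}]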

With all the configuration in place, Terraform needs to be initialized, and a plan of what will be deployed can be created (and saved to a file):

terraform init -upgrade
terraform plan -out myplan

Once the plan has been created successfully and the changes have been reviewed, apply the plan to roll out the changes:

terraform apply "myplan"

The code snippets in this article can be downloaded from: https://github.com/insani4c/terraform-libvirt
