For a long time, Terraform has been associated with deploying resources in the cloud. But what many people don’t know is that Terraform also has private and community-based providers that work perfectly in non-cloud environments. Today, we will discover how to deploy a compute VM on a KVM (Kernel-based Virtual Machine) host. Not only that, but we will also do it on top of VirtualBox, in a nested virtualization environment. As always, I will provide the vagrant build so you can launch the lab for a front-row experience. It is indeed the cheapest way to use Terraform on-prem, right on your laptop.
The libvirt provider is a community-based project built by Duncan Mac-Vicar. There is no difference between using Terraform on cloud platforms and using it with the libvirt provider. In this lab, I had to enable nested virtualization in my VirtualBox VM to make the demo easier to run. The resulting hypervisor is qemu-kvm, a non-bare-metal KVM environment also known as a type 2 hypervisor (virtual hardware emulation).
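If you are building your own box instead of using my vagrant build below, remember that nested virtualization must be switched on for the VirtualBox guest while it is powered off (VirtualBox 6.0+). A minimal sketch from the host side, where the VM name "kvm-host" is just a placeholder:
C:\Users\brokedba> VBoxManage modifyvm "kvm-host" --nested-hw-virt on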
No need to subscribe to a cloud free tier with a credit card to play with Terraform. You can start this lab right now on your laptop with my vagrant build. The environment comes with all the modules & packages needed to deploy VMs using Terraform.
Lab Content:
– KVM
– KCLI (wrapper tool for managing VMs)
– Terraform 1.0
– Libvirt Terraform provider
– Terraform configuration samples to get started (ubuntu.tf, kvm_compute.tf)
GitHub repo: https://github.com/brokedba/KVM-on-virtualbox
C:\Users\brokedba> git clone https://github.com/brokedba/KVM-on-virtualbox.git
C:\Users\brokedba> cd KVM-on-virtualbox
C:\Users\*\KVM-on-virtualbox> vagrant up
C:\Users\*\KVM-on-virtualbox> vagrant ssh --- access to the KVM host
You now have a new virtual machine shipped with KVM and Terraform, which is everything we need to complete the lab.
Note: Terraform files will be located under /root/projects/terraform/
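Once logged in, a quick sanity check never hurts; something along these lines should confirm the toolchain is in place (exact versions depend on when the box was built):
[root@localhost]# terraform version ## ---> should report v1.0.x
[root@localhost]# virsh net-list --all ## ---> the 'default' libvirt network should be active
[root@localhost]# ls /root/projects/terraform/ ## ---> the sample configs live here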
Up until Terraform version 0.12, HashiCorp didn’t officially recognize this libvirt provider, but you could still run config files as long as the plugin was in a local plugin folder (i.e. /root/.terraform.d/plugins/).
Starting with version 0.13, however, Terraform enforces Explicit Provider Source Locations. As a result, you’ll need a few tweaks to make the provider run. Everything is documented in GitHub issues 1 & 2, but I’ll summarize it below.
Here are the steps to run the libvirt provider in Terraform v1.0 (already done in my build):
– Download the binary (current version: 0.6.12). For my part, I used an older version built for Fedora (0.6.2)
[root@localhost]# wget URL
[root@localhost]# tar xvf terraform-provider-libvirt-**.tar.gz
– Add the plugin to a local registry
[root@localhost]# mkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
[root@localhost]# mv terraform-provider-libvirt ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
– Add the below code block to the main .tf file to map libvirt references to the actual provider:
[root@localhost]# vi libvirt.tf
...
terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}
... rest of the config
– Initialize and validate by running terraform init, which will detect the libvirt plugin in the local registry and install it
[root@localhost]# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding dmacvicar/libvirt versions matching "0.6.2"...
- Installing dmacvicar/libvirt v0.6.2...
- Installed dmacvicar/libvirt v0.6.2 (unauthenticated)
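Side note: instead of relying on the implied local mirror path used above, Terraform 0.13.2+ also accepts an explicit mirror declaration in the CLI config file (~/.terraformrc). This is an alternative sketch, not what my build uses:
provider_installation {
  filesystem_mirror {
    path    = "/root/.local/share/terraform/plugins"
    include = ["registry.terraform.io/dmacvicar/libvirt"]
  }
  direct {
    exclude = ["registry.terraform.io/dmacvicar/libvirt"]
  }
}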
Let’s first provision a simple Ubuntu VM on our KVM environment. Again, since we are in nested virtualization mode, we are using the hardware-emulated hypervisor QEMU, and this will require a small hack: setting a special environment variable. I will explain why further down; just bear with me for now.
[root@localhost]# export TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE="qemu"
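If you are curious whether your guest exposes hardware virtualization at all (the reason behind this variable), a quick check looks like this:
[root@localhost]# egrep -c '(vmx|svm)' /proc/cpuinfo ## ---> 0 means no VT-x/AMD-V visible in the guest
[root@localhost]# ls /dev/kvm ## ---> missing when hardware KVM is unavailable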
[root@/*/ubuntu/]# ls /root/projects/terraform/ubuntu/
ubuntu.tf
[root@/*/ubuntu/]# vi ubuntu.tf
provider "libvirt" {
uri = "qemu:///system"}
terraform {
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.6.2"
}
}
} ## 1. --------> Section that declares the provider in Terraform registry
# 2. ----> We fetch the smallest Ubuntu image from the cloud image repo
resource "libvirt_volume" "ubuntu-disk" {
  name   = "ubuntu-qcow2"
  pool   = "default" ## ---> This should be the same as your disk pool name
  source = "https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img"
  format = "qcow2"
}
# 3. -----> Create the compute VM
resource "libvirt_domain" "ubuntu-vm" {
  name   = "ubuntu-vm"
  memory = "512"
  vcpu   = 1
  network_interface {
    network_name = "default" ## ---> This should be the same as your network name
  }
  console { # ----> define a console for the domain
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }
  disk { volume_id = libvirt_volume.ubuntu-disk.id } # ----> map/attach the disk
  graphics { ## ---> graphics settings
    type        = "spice"
    listen_type = "address"
    autoport    = "true"
  }
}
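A side note on the provider block: the uri doesn’t have to be local. The provider also accepts remote libvirt connection URIs such as qemu+ssh://, handy if you ever want to drive a distant KVM host; a hedged sketch with a made-up address:
provider "libvirt" {
  uri = "qemu+ssh://root@10.0.0.5/system" ## ---> hypothetical remote KVM host reached over SSH
}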
[root@localhost]# terraform init
[root@localhost]# terraform plan
Terraform will perform the following actions:
# libvirt_domain.ubuntu-vm will be created
+ resource "libvirt_domain" "ubuntu-vm" {
+ arch = (known after apply)
+ disk = [
+ {
+ block_device = null
+ file = null
+ scsi = null
+ url = null
+ volume_id = (known after apply)
+ wwn = null
},
]
+ emulator = (known after apply)
+ fw_cfg_name = "opt/com.coreos/config"
+ id = (known after apply)
+ machine = (known after apply)
+ memory = 512
+ name = "ubuntu-vm"
+ qemu_agent = false
+ running = true
+ vcpu = 1
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "0"
+ target_type = "serial"
+ type = "pty"
}
+ graphics {
+ autoport = true
+ listen_address = "127.0.0.1"
+ listen_type = "address"
+ type = "spice"
}
+ network_interface {
+ addresses = (known after apply)
+ hostname = (known after apply)
+ mac = (known after apply)
+ network_id = (known after apply)
+ network_name = "default"
}
}
# libvirt_volume.ubuntu-disk will be created
+ resource "libvirt_volume" "ubuntu-disk" {
+ format = "qcow2"
+ id = (known after apply)
+ name = "ubuntu-qcow2"
+ pool = "default"
+ size = (known after apply)
+ source = "https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img"
}
Plan: 2 to add, 0 to change, 0 to destroy.
[root@localhost]# terraform apply -auto-approve
Plan: 2 to add, 0 to change, 0 to destroy.
libvirt_volume.ubuntu-disk: Creating...
libvirt_volume.ubuntu-disk: Creation complete after 17s [id=/u01/guest_images/ubuntu-qcow2]
libvirt_domain.ubuntu-vm: Creating...
libvirt_domain.ubuntu-vm: Creation complete after 0s [id=29735a37-ef91-4c26-b194-05887b1fb264]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
[root@localhost ubuntu]# kcli list vm
+-----------+--------+----------------+--------+------+---------+
| Name | Status | Ips | Source | Plan | Profile |
+-----------+--------+----------------+--------+------+---------+
| ubuntu-vm | up | 192.168.122.74 | | | |
+-----------+--------+----------------+--------+------+---------+
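If you prefer to fetch the IP from Terraform itself rather than kcli, you can add an output to ubuntu.tf. A small sketch, with the caveat that addresses only populates once the provider can read the DHCP lease (adding wait_for_lease = true to the network_interface block helps):
output "ubuntu_vm_ip" {
  value = libvirt_domain.ubuntu-vm.network_interface[0].addresses
}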
[root@localhost]# terraform destroy -auto-approve
Destroy complete! Resources: 2 destroyed.
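You can double-check on the hypervisor side that nothing was left behind:
[root@localhost]# virsh list --all ## ---> ubuntu-vm should be gone
[root@localhost]# virsh vol-list default ## ---> ubuntu-qcow2 should be gone too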
In the same way as with cloud VMs, we can also call startup scripts to do anything we want during bootstrap.
I chose CentOS in this example, where the cloud-init bootstrap actions were:
– Set a new password for the root user
– Add an SSH key to the root user
– Change the hostname
# cd ~/projects/terraform
[root@~/projects/terraform]# cat cloud_init.cfg
#cloud-config
disable_root: 0
users:
  - name: root
    ssh-authorized-keys: ### --> add a public SSH key
      - ${file("~/.ssh/id_rsa.pub")}
ssh_pwauth: True
chpasswd: ### --> change the password
  list: |
    root:unix1234
  expire: False
runcmd:
  - hostnamectl set-hostname terracentos
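Before wiring this file into Terraform, it’s worth validating the YAML. Recent cloud-init releases ship a schema checker (the subcommand moved around between versions, so treat this as a hint rather than gospel):
[root@localhost]# cloud-init devel schema --config-file cloud_init.cfg ## ---> plain 'cloud-init schema' on newer releases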
# cd ~/projects/terraform
[root@~/projects/terraform]# cat kvm_compute.tf
provider "libvirt" {
…
resource "libvirt_volume" "centos7-qcow2" {
…
## 1. ----> Instantiate cloudinit as a media drive to add our startup tasks
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default" ## ---> This should be same as your disk pool name
user_data = data.template_file.user_data.rendered
}
## 2. ----> Data source converting the cloudinit file into a userdata format
data "template_file" "user_data" { template = file("${path.module}/cloud_init.cfg")}
resource "libvirt_domain" "centovm" {
name = "centovm"
memory = "1024"
vcpu = 1
cloudinit = libvirt_cloudinit_disk.commoninit.id ## 3. ----> map CloudInit...---> Rest of the usual domain declaration
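Side note: on Terraform 0.12+ the template provider is optional, since the built-in templatefile() function can render the same file. A hedged sketch, assuming cloud_init.cfg is rewritten to reference a passed-in ${ssh_key} variable instead of calling file() itself:
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  pool      = "default"
  user_data = templatefile("${path.module}/cloud_init.cfg", {
    ssh_key = file(pathexpand("~/.ssh/id_rsa.pub")) ## ---> assumes '- ${ssh_key}' in the cfg
  })
}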
[root@~/projects/terraform]# terraform init
[root@~/projects/terraform]# terraform plan
... other resources declaration
# libvirt_cloudinit_disk.commoninit will be created
+ resource "libvirt_cloudinit_disk" "commoninit" {
+ id = (known after apply)
+ name = "commoninit.iso"
+ pool = "default"
+ user_data = <<-EOT
#cloud-config
disable_root: 0
users:
  - name: root
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ** root@localhost.localdomain
ssh_pwauth: True
chpasswd:
  list: |
    root:unix1234
  expire: False
runcmd:
  - hostnamectl set-hostname terracentos
EOT
}
... remaining declaration
[root@~/projects/terraform]# terraform apply -auto-approve
Plan: 3 to add, 0 to change, 0 to destroy.
libvirt_cloudinit_disk.commoninit: Creation complete after 1m22s [id=/u01/guest_images/commoninit.iso;61c50cfc-**]
...
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
[root@~/projects/terraform]# kcli list vm
+-----------+--------+----------------+--------+------+---------+
| Name | Status | Ips | Source | Plan | Profile |
+-----------+--------+----------------+--------+------+---------+
| centovm | up | 192.168.122.68 | | | |
+-----------+--------+----------------+--------+------+---------+
-- 1. SSH
[root@~/projects/terraform]# ssh -i ~/.ssh/id_rsa root@192.168.122.68
Warning: Permanently added '192.168.122.68' (RSA) to the list of known hosts.
[root@terracentos ~]# cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)
-- 2. Password
[root@~/projects/terraform]# virsh console centovm
Connected to domain centovm
Escape character is ^]
CentOS Linux 7 (Core)
Kernel 3.10.0-1127.el7.x86_64 on an x86_64
terracentos login: root
Password:
[root@terracentos ~]#
And here you go: your local Terraform VM was customized during startup using a simple config file, just like the ones on AWS ;).
I can now explain why we needed to set the environment variable to “qemu” for the deployment to work; without this trick the VM will never start. Let’s find out why.
I asked the maintainers to replace that variable with an attribute inside the Terraform code, but the bug is still there; see more in my issue.
--- Workaround for non bare-metal hosts (nested virtualization)
export TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE="qemu"
--- Below is the Go check where qemu gets selected (domain_def.go)
if v := os.Getenv("TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE"); v != "" {
    domainDef.Type = v
} else {
    domainDef.Type = "kvm"
}
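You can confirm which domain type was actually provisioned by dumping the libvirt XML:
[root@localhost]# virsh dumpxml centovm | grep '<domain' ## ---> should show type='qemu' with the workaround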
Thank you for reading!