
Notes On Learning OpenTofu

Provisioning & Deploying my personal projects.

Having worked in the middle of nowhere, I mostly dealt with on-prem hardware, so I often used Ansible for configuration management. The times I did work with the cloud, I generally deployed something to a single VPS instance, added a database instance, and then set up DNS and backups. This type of infrastructure I would set up once and modify rarely, and I automated the application deployments to the VPS via Ansible. So, at the time, I did not see the need to invest in learning Terraform or similar tools. But that has changed.

Now, I am working towards fully automating the deployment of my personal projects which means I want to fully declare my infrastructure in code.

Terraform vs OpenTofu

I remember hearing of the OpenTofu fork on a Fireship video.

Looking into it myself 18 months on, I find OpenTofu is not just surviving but building new features, slowly distancing itself from Terraform. In practice, though, I still find the two nearly interchangeable.

While researching how the online community felt about OpenTofu since its fork, I found a lot of initial “Why should I care about OpenTofu?” posts. However, more recent mentions of OpenTofu were noticeably more positive and supportive of its new feature releases. That was enough to convince me to begin using OpenTofu.

My previous track record of researching options and adopting a particular technology has worked well in the past (Vue/FastAPI/Astro) so I wasn’t so worried by this decision on admittedly loose criteria. It helps that if I am wrong later I should be able to transfer my knowledge to Terraform with minimal headache.

Pulumi???

I did look at other options, and Pulumi stood out, but I gather my simple use cases do not require its features. I may look at it in the future, but for now OpenTofu is sufficient.

Key Resources

I found the video “Complete Terraform Course” by DevOps Directive on YouTube, with an accompanying GitHub repo to follow along, to be a perfect starter. It had just the right learning curve for easing into working with Terraform (or OpenTofu!).

However, it is a bit opinionated and takes a straight path from a single-file configuration to one with modules and multiple environments. When I was working on my own configuration, I felt I was missing context on how much abstraction I needed and when a certain level of abstraction becomes necessary. That is where I found “Evolving Your Infrastructure with Terraform” from HashiCorp.

The talk demystified the progression from a single file to a progressively more modular configuration. It went further in complexity than I needed, which was great to see, and let me settle on the level of abstraction my own configuration required.

Design

As a reminder, my goal is to automate the complete provisioning and deployment of my personal projects. I can fit my projects on a single VPS, so I will design around that constraint. Following what I’ve learned, I believe a Terramod setup would be sufficient for me; however, I settled on a Terraservices pattern.

The Terraservices pattern allows me to write each project’s required infrastructure as an independent unit, so later I can easily add and remove projects from the shared infrastructure. I also learned that with this pattern I could keep each project’s IaC in a separate git repo, but as an individual owning all my own projects, that is unnecessary complexity.

For my environments I opted to forego using workspaces because, coming from a Python background, I prefer explicit over implicit. But more seriously, I just don’t like environments being hidden behind the tool and I don’t mind writing a bit extra while I’m learning to avoid over-engineering at this stage. This tracks well with my preference for WET code too.

Therefore, I’ll model both Ansible and OpenTofu with similar structures with regard to environments and roles or modules, respectively.

.
├── ansible
│   ├── environments
│   │   └── development
│   ├── playbooks
│   └── roles
├── keys
└── opentofu
    ├── environments
    │   └── development
    └── modules

OpenTofu

Following my Terraservices pattern I’ll define services in each environment. One global service for provisioning the shared VPS and then additional services for each project.

.
└── opentofu
    ├── environments
    │   ├── development
    │   │   ├── global
    │   │   └── proj_vmgd
    │   └── production
    └── modules
        ├── proj_vmgd
        ├── vps_firewall
        ├── vps_provision
        └── vps_setup

The Global Service

The global service does the most and at the moment is tightly coupled with its Ansible counterpart. When a VPS is provisioned I then use local-exec to configure the VPS with a user and include my SSH key. Then I follow up with the global service’s Ansible playbook which hardens the server. Lastly, I provision a Linode firewall for the VPS.

opentofu/environments/development/global/main.tf
locals {
  environment_name = "development"
}
 
module "vps_provision" {
  source = "../../../modules/vps_provision"
  token  = var.linode_token
 
  label                = "my-test-linode"
  image                = "linode/debian12"
  linode_region        = "us-east"
  linode_instance_type = "g6-nanode-1"
  root_ssh_pubkey      = var.root_ssh_pubkey
}
 
module "vps_setup" {
  source = "../../../modules/vps_setup"
 
  environment_name = local.environment_name
  host             = module.vps_provision.vps_public_ip
  username         = var.username
  root_ssh_privkey = var.root_ssh_privkey
  user_ssh_privkey = var.user_ssh_privkey
  user_ssh_pubkey  = var.user_ssh_pubkey
}
 
module "vps_firewall" {
  source = "../../../modules/vps_firewall"
  token  = var.linode_token
 
  label           = "${local.environment_name}-test-firewall"
  vps_instance_id = module.vps_provision.vps_instance_id
}
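The module calls above reference several input variables that must be declared somewhere in the global service. As a sketch (the types and the sensitive flag are my own assumptions, not shown in the original), its variables.tf might look like:

```hcl
# opentofu/environments/development/global/variables.tf (sketch)
variable "linode_token" {
  type      = string
  sensitive = true
}

variable "username" {
  type = string
}

# Paths to key files on the local machine, read later with file().
variable "root_ssh_pubkey" {
  type = string
}

variable "root_ssh_privkey" {
  type = string
}

variable "user_ssh_privkey" {
  type = string
}

variable "user_ssh_pubkey" {
  type = string
}
```

Values can then be supplied via a terraform.tfvars file or TF_VAR_-prefixed environment variables, keeping secrets out of the committed configuration.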
opentofu/modules/vps_provision/main.tf
resource "linode_instance" "instance" {
  label           = var.label
  image           = var.image
  region          = var.linode_region
  type            = var.linode_instance_type
  authorized_keys = [chomp(file(var.root_ssh_pubkey))]
}
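For the global service to wire the VPS’s IP and ID into the other modules, the vps_provision module has to export them. A minimal sketch of its outputs.tf, assuming the Linode provider’s ip_address and id attributes:

```hcl
# opentofu/modules/vps_provision/outputs.tf (sketch)
output "vps_public_ip" {
  value = linode_instance.instance.ip_address
}

output "vps_instance_id" {
  value = linode_instance.instance.id
}
```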
opentofu/modules/vps_setup/main.tf
...
 
resource "null_resource" "ansible_setup" {
  depends_on = [null_resource.setup]
 
  provisioner "local-exec" {
    command = "ansible-galaxy install -r ../../../../ansible/requirements.yml"
  }
 
  provisioner "local-exec" {
    command = "../../../generate-inventory.sh > ../../../../ansible/environments/${var.environment_name}/inventory"
 
    environment = {
      USERNAME    = var.username
      SSH_PRIVKEY = var.user_ssh_privkey
      HOST_IP     = var.host
    }
  }
 
}
 
resource "null_resource" "ansible_playbook" {
  depends_on = [null_resource.ansible_setup]
 
  provisioner "local-exec" {
    command = "cd ../../../../ansible && ansible-playbook -i ./environments/${var.environment_name}/inventory playbook.yml"
 
    environment = {
      ANSIBLE_HOST_KEY_CHECKING = "False"
    }
  }
}

Above you can see the ugly and brittle ../../../../ansible paths, which really need to be decoupled. But for now it is nice to have a VPS ready to SSH into from a single tofu apply command. Later I will consider decoupling Ansible.

The Project Services

The project services are simple. Each of my projects is fairly self-contained in its Docker Compose file, so I am mostly provisioning DNS records in these services.

Each begins by grabbing the outputs from the global service and then calling the respective project’s module. In this project I only need to set the DNS records to point to the global service’s VPS.

NOTE: I’m using a different provider for DNS instead of the Linode domain manager.

opentofu/environments/development/proj_vmgd/main.tf
data "terraform_remote_state" "global" {
  backend = "local"
  config = {
    path = "../global/terraform.tfstate"
  }
}
 
locals {
  environment_name = "development"
}
 
module "proj_vmgd" {
  source    = "../../../modules/proj_vmgd"
  token     = var.namecheap_token
  username  = var.namecheap_username
  client_ip = var.namecheap_client_ip
 
  environment_name = local.environment_name
  subdomain        = var.subdomain
  vps_public_ip    = data.terraform_remote_state.global.outputs.vps_public_ip
}
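The terraform_remote_state lookup only works if the global service re-exports the VPS IP at its root module. A minimal sketch of what its outputs.tf would need:

```hcl
# opentofu/environments/development/global/outputs.tf (sketch)
output "vps_public_ip" {
  value = module.vps_provision.vps_public_ip
}
```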
opentofu/modules/proj_vmgd/main.tf
locals {
  subsubdomain = var.environment_name == "production" ? "" : "${var.environment_name}."
}
 
resource "namecheap_domain_records" "vmgd-michaeltoohig-com" {
  domain = "michaeltoohig.com"
  mode   = "MERGE"
 
  record {
    hostname = "${local.subsubdomain}${var.subdomain}"
    type     = "A"
    address  = var.vps_public_ip
  }
}

Ansible

After the OpenTofu services are provisioned, I move to my ansible directory to configure and deploy each project. It’s immediately obvious there is a lot more going on here.

Each playbook configures or deploys a particular project. Since the projects share the VPS and several of them need Docker and Nginx, those projects have similar playbooks which call the common, docker, and nginx roles. However, when it comes to deploying each project, the playbooks diverge and use their own project templates and such.

.
└── ansible
    ├── environments
    │   └── development
    │       ├── group_vars
    │       └── vars
    ├── playbooks
    └── roles
        ├── common
        │   └── tasks
        ├── docker
        │   ├── defaults
        │   ├── tasks
        │   └── templates
        ├── nginx
        │   ├── tasks
        │   └── templates
        └── proj_vmgd
            ├── handlers
            ├── tasks
            └── templates

The Project Playbooks

Just like the OpenTofu services before, I make use of roles to create generic deployment playbooks for a project, then call each playbook with the appropriate environment to deploy it to the VPS as intended.

Environment-specific variables are stored in the ansible/environments/${environment_name}/vars directory.
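As an illustration, a vars file for this project might hold values like the following (the repo URL and branch are hypothetical placeholders; the variable names match those the role consumes):

```yaml
# ansible/environments/development/vars/proj_vmgd_vars.yml (sketch)
app_name: vmgd
git_repo_url: https://github.com/example/vmgd.git  # placeholder URL
git_branch: main
```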

ansible/playbooks/proj_vmgd.yml
- name: Run application role
  hosts: all
  become: true
  vars_files:
    - ../environments/{{ environment_name }}/vars/proj_vmgd_vars.yml
    - ../environments/{{ environment_name }}/vars/proj_vmgd_vault.yml
  vars:
    app_directory: "/opt/{{ app_name }}"
  roles:
    - ../roles/proj_vmgd
ansible/roles/proj_vmgd/tasks/main.yml
---
- name: Clone the repository
  ansible.builtin.git:
    repo: "{{ git_repo_url }}"
    dest: "{{ app_directory }}"
    version: "{{ git_branch }}"
  become: true
 
- name: Copy env file
  ansible.builtin.template:
    src: "../templates/env.template.j2"
    dest: "{{ app_directory }}/data/.env.{{ environment_name }}"
 
- name: Copy compose env file
  ansible.builtin.template:
    src: "../templates/compose.env.j2"
    dest: "{{ app_directory }}/data/.env.compose"
 
- name: Build and run Docker containers
  community.docker.docker_compose_v2:
    project_src: "{{ app_directory }}"
    env_files:
      - data/.env.compose
    files:
      - "docker-compose.{{ environment_name }}.yml"
    state: present
    recreate: always
  become: true
 
- name: Copy Nginx site config
  ansible.builtin.template:
    src: "../templates/nginx.conf.j2"
    dest: "/etc/nginx/sites-available/{{ app_name }}.conf"
 
- name: Enable Nginx site
  ansible.builtin.file:
    src: "/etc/nginx/sites-available/{{ app_name }}.conf"
    dest: "/etc/nginx/sites-enabled/{{ app_name }}.conf"
    state: link
 
- name: Restart Nginx
  ansible.builtin.systemd:
    name: nginx
    enabled: true
    state: restarted

This project is deployed via the following command which loads the appropriate environment.

ansible-playbook --vault-password-file ./vault-password -i environments/development playbooks/proj_vmgd.yml

And if you missed it earlier, I generate the Ansible inventory files in the global OpenTofu service.

generate-inventory.sh
#!/bin/bash
 
cat <<EOF
all:
  vars:
    username: $USERNAME
    ansible_ssh_user: $USERNAME
    ansible_private_key_file: $SSH_PRIVKEY
  children:
    webservers:
      hosts:
        www:
          ansible_host: $HOST_IP
EOF

Moving Forward

I do not like the coupling between OpenTofu and Ansible in the global service. I would consider using a script or Makefile at the root of the codebase to run OpenTofu followed by Ansible. It would be nice to have a single command to provision and deploy each project.
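A minimal sketch of such a Makefile, assuming the directory layout shown earlier (the target names and the ENV variable are my own invention):

```make
# Makefile (sketch) at the repository root
ENV ?= development

.PHONY: provision deploy all

provision:
	cd opentofu/environments/$(ENV)/global && tofu apply

deploy:
	cd ansible && ansible-playbook --vault-password-file ./vault-password \
		-i environments/$(ENV) playbooks/proj_vmgd.yml

all: provision deploy
```

Then `make all ENV=development` would chain provisioning and deployment into the one command I want.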

All in all, the learning experience has been good and I will be continuing to migrate my individual projects’ deployment scripts into this project structure.
