
IaC Deploy Generator — One Form, Terraform + Ansible, Five Clouds, Production-Hardened

The gap between "I've shipped something to production" and "I could ship another production instance in 15 minutes" is where most solo operators and small teams lose weeks every year. You provision a VM through the cloud dashboard, apt install some things, copy a Docker Compose file you cobbled together last time, hand-configure nginx, run certbot, realize you forgot fail2ban and install it, remember the SSH port needs closing, mean to set up unattended-upgrades, and wonder where the backup script went.

Three weeks later the same server configuration lives in seven notebooks and one Slack DM. Next deploy, you do the whole thing again. Each repetition drifts a little.

The IaC Deploy Generator generates the shortcut you always meant to write: a Terraform file that provisions the VM + firewall + reserved IP, plus an Ansible playbook that installs everything and configures the reverse proxy. One form, one output, one repeatable sequence. Next deploy, you change the hostname and run the same two commands.

What's in the output

Four files, four tabs:

main.tf, Terraform HCL for the cloud you picked. DigitalOcean droplet + firewall + reserved IP. AWS EC2 + security group + elastic IP + encrypted gp3 root volume. Hetzner server + firewall + IPv6. Vultr instance + firewall group. Linode instance + firewall. All use Ubuntu 24.04 LTS with SSH key auth pre-installed from your local public key. All expose only 22/80/443 inbound (plus UDP 443 for HTTP/3 where the cloud supports it).
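As a concrete sketch, the DigitalOcean variant looks roughly like this (resource names, region, size, and the SSH-key variable are illustrative, not the generator's exact output):

```hcl
variable "ssh_key_fingerprint" {}

resource "digitalocean_droplet" "web" {
  image    = "ubuntu-24-04-x64"
  name     = "web-1"
  region   = "nyc3"
  size     = "s-1vcpu-1gb"
  ssh_keys = [var.ssh_key_fingerprint]
}

resource "digitalocean_firewall" "web" {
  name        = "web-only-22-80-443"
  droplet_ids = [digitalocean_droplet.web.id]

  # Inbound: only SSH, HTTP, HTTPS.
  dynamic "inbound_rule" {
    for_each = ["22", "80", "443"]
    content {
      protocol         = "tcp"
      port_range       = inbound_rule.value
      source_addresses = ["0.0.0.0/0", "::/0"]
    }
  }

  # Outbound: open, so APT and Docker pulls work.
  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
  outbound_rule {
    protocol              = "udp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}

resource "digitalocean_reserved_ip" "web" {
  droplet_id = digitalocean_droplet.web.id
  region     = digitalocean_droplet.web.region
}
```

The other four providers follow the same shape with their own resource types.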

site.yml, Ansible playbook that:

  • Updates APT, installs baseline tools
  • Enables UFW with a deny-by-default policy and only 22/80/443 open
  • Disables SSH password auth (key-only)
  • Installs Docker CE + Compose plugin from Docker's official APT repo
  • Installs your reverse proxy (Caddy / nginx + certbot / Traefik) with a config file or container and security headers baked in
  • Installs and starts each service container you selected (static / Node / WordPress / Ghost / n8n / Ollama / Postgres / Redis / Prometheus + Grafana)
  • Sets up fail2ban for SSH brute-force defense (if checked)
  • Enables unattended-upgrades for automatic APT security patches (if checked)
  • Runs a backup sidecar that tars named volumes and uploads to S3 at 03:00 (if checked)
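A few of the hardening tasks, sketched in playbook form (task names, the handler name, and the loop are illustrative; the modules are the standard community.general and ansible.builtin ones):

```yaml
- name: Allow only SSH, HTTP, and HTTPS through UFW
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop: ["22", "80", "443"]

- name: Default-deny all other inbound traffic
  community.general.ufw:
    state: enabled
    policy: deny
    direction: incoming

- name: Disable SSH password authentication (key-only)
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: 'PasswordAuthentication no'
  notify: restart sshd
```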

inventory.ini, the Ansible inventory with a placeholder for the IP you'll paste in after Terraform outputs it.
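A filled-in inventory is one line per host; the IP and key path below are placeholders:

```ini
[web]
203.0.113.10  ansible_user=root  ansible_ssh_private_key_file=~/.ssh/id_ed25519
```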

README.md, the three-command deploy sequence: set credentials, terraform apply, ansible-playbook. Plus verification steps using the site's other audit tools.

Everything is idempotent. Run the playbook again and it converges to the same state. Change a service list, re-run, and you end up with exactly the services you asked for.

The reverse-proxy choice

The tool makes you pick Caddy, nginx + certbot, or Traefik. This is the most consequential choice for a small-to-medium deployment.

Caddy 2 is the default. Automatic HTTPS via Let's Encrypt (with ZeroSSL fallback configurable, see the ZeroSSL writeup). Two-line config per host. HTTP/3 by default. Best for solo operators, first-time Docker hosts, or anyone who'd rather ship than tune.
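The per-host config really is two lines. Assuming an app listening on local port 3000 (domain and port are placeholders), a Caddyfile entry looks like:

```caddyfile
example.com {
    reverse_proxy 127.0.0.1:3000
}
```

Caddy obtains and renews the certificate for example.com on its own; there is no separate renewal job to manage.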

nginx + certbot is the industry-standard pairing. More verbose config, with certificate renewal handled by a separate certbot timer rather than built into the proxy. Pick this when the team already knows nginx intimately or when you need an obscure module (custom Lua, specific rate-limiting algorithm) that isn't in Caddy's plugin list.
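For comparison, the minimal nginx equivalent of a two-line Caddy host, before certbot rewrites it for TLS (domain and upstream port are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```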

Traefik 3 reads Docker labels. When every service is in docker-compose, Traefik auto-discovers routes from traefik.http.routers.* labels on each container. Best for stacks that scale up to 10+ services where hand-writing route configs gets tedious.
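An illustrative Compose service carrying those labels (service name, image, entrypoint, and resolver names are placeholders following Traefik's common conventions):

```yaml
services:
  app:
    image: ghcr.io/example/app:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
      - "traefik.http.routers.app.tls.certresolver=letsencrypt"
```

Add a service, add four labels, and Traefik picks up the route without touching a central config file.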

The Ansible playbook emits the right config for whichever you pick. Switch between them by re-running the playbook (no two of them can bind 80/443 at once, so uninstall one before adding another).

The cloud choice

Five options, each with different cost + control trade-offs:

DigitalOcean, the Goldilocks pick for most small deployments. $6-$24/month for useful sizes. Clean API. Great docs. Reserved IPs (formerly called "Floating IPs") keep your public IP stable across droplet rebuilds. No real downside for standard web workloads.

AWS EC2, pick when the rest of your stack is on AWS. Most expensive of the five at small sizes (though t3.micro on the free tier is cheaper than any competitor for the first 12 months). Integrates with every other AWS service. Security groups + VPC give fine-grained network control that none of the others match.

Hetzner, cheapest if you're OK with EU data centers. ~$4-$10/month for tiers that cost double elsewhere. Excellent network performance within the EU. A US-East location (Ashburn) has been available since late 2021. The pick for content sites that don't care where the VM lives.

Vultr, strongest global region coverage. 32+ data centers including Lagos, Seoul, Warsaw, Johannesburg. Pick this for latency-sensitive workloads where you need a specific city.

Linode / Akamai, acquired by Akamai in 2022, still runs independently. Comparable to DigitalOcean on price/features, slightly better on network performance for customers near Akamai's edge. Solid long-term track record.

The Terraform output is idiomatic for each provider — digitalocean_droplet, aws_instance, hcloud_server, vultr_instance, linode_instance. Switching between providers means regenerating the Terraform and re-provisioning; the Ansible playbook is the same because every target is Ubuntu 24.04.

The deploy sequence

Three commands after a one-time credential setup:

# 1. Provision the VM + firewall + IP
terraform init
terraform apply
# Note the output "ip" — that's your new host.

# 2. Point DNS at the VM
# (Use /tools/dns-records-generator/ for the full record set.)
# Add an A record for your domain → <IP from step 1>.

# 3. Configure the host
# Paste the IP into inventory.ini, then:
ansible-galaxy collection install community.general community.docker
ansible-playbook -i inventory.ini site.yml

On a reasonable internet connection, steps 1 + 3 take ~10 minutes total. Most of the time is waiting for APT updates and Docker image pulls. The VM itself is provisioned in 60-90 seconds on most clouds.

Verification

After Ansible finishes, the site has three audit tools that tell you the deploy landed correctly:

  • Security Headers Audit, should grade A or B. If it grades lower, the reverse-proxy config didn't apply; check the Caddyfile / nginx config path in the playbook.
  • CWV Audit, should report fast TTFB and a sane content stack. If TTFB is >1s from your geographic region, you picked a far-away cloud region.
  • Sitemap Audit, if you deployed a static site, this confirms your sitemap loads and parses.

Finally, ssh root@<IP> and run docker ps; every service you selected should show as running.

Where this fits in the tool chain

  • Upstream: Docker Gen, pick services, get docker-compose.yml. The IaC Deploy Generator can use the same service list as its Ansible service targets.
  • Sister: DNS Records Generator, emits the A record to add for your new host plus the full email auth record set.
  • Downstream: Security Headers Audit + CWV Audit, verify the deploy is production-grade.

Four tools, one production deploy pipeline. No SaaS subscriptions beyond the cloud bill.

What it doesn't do (honest caveats)

This is a starting point, not a finished product. Specifically:

  • No state management for secrets. Secrets (database passwords, n8n encryption key, Grafana admin password) are generated by lookup('password', ...) on the Ansible control host. Good enough for a single-operator deploy. For a team, stand up HashiCorp Vault or use cloud-native secrets (AWS Secrets Manager, DO 1-Click secrets).
  • No blue-green / zero-downtime deploys. ansible-playbook runs against a single host and restarts containers in place. For real zero-downtime deploys you want a load balancer in front of two identical hosts.
  • No multi-region replication. Single VM, single region. For multi-region or multi-AZ, the Terraform needs to provision N hosts and you'd need a real orchestrator (Nomad, Kubernetes, or a managed platform).
  • Ansible playbook assumes Ubuntu 24.04. Works on 22.04 with minor tweaks. Different base distros (Rocky, Debian, Alpine) need playbook edits.
  • Backup sidecar ships to S3 only. Extend it to Backblaze B2 / Wasabi / any other S3-compatible storage by changing the endpoint environment variables; the backup container supports those targets, the playbook just doesn't wire up the variants.
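To make the first caveat concrete, here is roughly how a lookup('password', ...) secret behaves (file path and variable name are illustrative): the password is generated on the control host on the first run and re-read unchanged on every run after, which is what keeps the playbook idempotent.

```yaml
- name: Generate or reuse the Postgres password on the control host
  ansible.builtin.set_fact:
    postgres_password: >-
      {{ lookup('password',
                'credentials/postgres.txt length=32 chars=ascii_letters,digits') }}
```

The secret lives in plain text under credentials/ on whatever machine runs Ansible, which is exactly why a team should move it into Vault or a cloud secrets manager.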

For most content sites, newsletters, small SaaS backends, and homelab externalizations, the deploy sequence here is enough. If your requirements push past the caveats above, you're at the "please hire an SRE" tier and the playbook is still a fine starting point that you'd evolve from.

Related reading

If you're building an owned-infrastructure play as part of reducing your SaaS bill, the $100 Network walks through the multi-site, multi-service pattern at scale: The $100 Network.

