Introduction
Setting up InfluxDB locally every other day with a "docker run" or docker-compose becomes cumbersome. The additional buckets, users, and tokens that need to be created can't be declared up front. A workaround would be to write init scripts that take care of it, but those are susceptible to API changes and require maintenance.
An alternative is to set it up with Terraform. This lets you create buckets, users, and tokens with code, which can be shared with your fellow devs as well!
Note: The following applies to v2+ of InfluxDB
Existing Terraform providers for InfluxDB
Luckily, Terraform providers for InfluxDB exist, written and open-sourced by the folks over at Lancey Energy. You can find the provider documentation here. You may notice that there are 2 providers, one called influxdb-v2 and one called influxdb-v2-onboarding. This reflects the fact that getting InfluxDB running is a 2-step procedure: the instance first has to be onboarded (initial user, org, and bucket), and only then can resources be managed through its API. The 2 providers split along exactly that boundary.
Step 1: Initialize your Environment
Since this setup is primarily targeted towards local development, we won't need any TLS certificates or remote state.
mkdir my-influx-config
cd my-influx-config && terraform init
Create the following files
touch main.tf variables.tf outputs.tf versions.tf providers.tf buckets.tf
Edit the versions.tf file
terraform {
  required_version = "1.0.4"

  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.15.0"
    }
    influxdb-v2-onboarding = {
      source  = "lancey-energy-storage/influxdb-v2-onboarding"
      version = "0.4.1"
    }
    influxdb-v2 = {
      source  = "lancey-energy-storage/influxdb-v2"
      version = "0.4.1"
    }
  }
}
Re-init your workspace
terraform init
Copy the variables.tf and providers.tf from this repo
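The repo's variables.tf and providers.tf aren't reproduced in this post, but a minimal sketch consistent with the resources below might look like this (the default values, the local.influx_url helper, and the provider arguments are my assumptions, not the repo's exact contents):

```hcl
/* variables.tf — inputs referenced by main.tf and the modules
   (defaults here are purely illustrative) */
variable "influxdb_version" { default = "2.0" }
variable "expose_port" { default = 8086 }
variable "query_memory_bytes" { default = 0 }
variable "query_queue_size" { default = 10 }
variable "init_username" {}
variable "init_password" {}
variable "init_bucket" {}
variable "init_org" {}

/* providers.tf — the onboarding provider is aliased, matching the
   providers block in the setup_influxdb module call below */
locals {
  influx_url = "http://localhost:${var.expose_port}"
}

provider "docker" {}

provider "influxdb-v2-onboarding" {
  alias = "onboarding"
  url   = local.influx_url
}

/* The influxdb-v2 provider additionally needs the url and an API
   token; the exact wiring lives in the repo's providers.tf */
```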
Step 2: Defining the Docker Container Spec
Edit your main.tf file to contain definitions for the InfluxDB version number, volumes, and the container itself
/* Docker Image for InfluxDB v2 */
resource "docker_image" "influxdb_image" {
  name = "influxdb:${var.influxdb_version}"
}

/* Docker Volume for InfluxDB v2 */
resource "docker_volume" "influxdb_volume" {
  name = "influxdb-data"
}

/* Docker Container for InfluxDB v2 */
resource "docker_container" "influxdb_container" {
  name         = "influxdb"
  image        = docker_image.influxdb_image.latest
  network_mode = "host"

  volumes {
    volume_name    = docker_volume.influxdb_volume.name
    container_path = "/data"
    host_path      = "${path.root}/influxdb-data"
  }

  env = [
    "INFLUXD_BOLT_PATH=/data/bolt",
    "INFLUXD_ENGINE_PATH=/data/engine",
    "INFLUXD_HTTP_IDLE_TIMEOUT=0",
    "INFLUXD_QUERY_MEMORY_BYTES=${var.query_memory_bytes}",
    "INFLUXD_QUERY_QUEUE_SIZE=${var.query_queue_size}",
    "INFLUXD_SESSION_LENGTH=12000",
  ]

  ports {
    external = var.expose_port
    internal = 8086
  }
}
This creates a Docker container with the BoltDB and engine directories bound to a Docker volume at the root of the tf workspace. I've increased the session length so that I don't need to log in again a day later.
Step 3: Setup InfluxDB
Before we run the apply, we have to write some code that sets up the database for use (initial username, password, bucket name, etc.).
You might think we could have used the "DOCKER_*" env arguments to automate this process, but that approach does not return the org_id, which we need in order to create buckets in an organization. To make things easier, I've open sourced the modules.
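For reference, the env-var approach being ruled out here would have meant adding something like the following to the container's env list (these DOCKER_INFLUXDB_INIT_* variables are the influxdb image's documented setup knobs; note that nothing here hands the generated org_id back to Terraform):

```hcl
env = [
  "DOCKER_INFLUXDB_INIT_MODE=setup",
  "DOCKER_INFLUXDB_INIT_USERNAME=${var.init_username}",
  "DOCKER_INFLUXDB_INIT_PASSWORD=${var.init_password}",
  "DOCKER_INFLUXDB_INIT_ORG=${var.init_org}",
  "DOCKER_INFLUXDB_INIT_BUCKET=${var.init_bucket}",
]
```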
Add the following to your main.tf
data "external" "wait" {
  program = ["./wait_for_influx.sh", local.influx_url]

  depends_on = [
    docker_container.influxdb_container
  ]
}

/* Setup process for InfluxDB V2 */
module "setup_influxdb" {
  source        = "https://github.com/rymnc/terraform-influx.git//modules/setup"
  init_username = var.init_username
  init_password = var.init_password
  init_bucket   = var.init_bucket
  init_org      = var.init_org

  depends_on = [
    data.external.wait
  ]

  providers = {
    influxdb-v2-onboarding = influxdb-v2-onboarding.onboarding
  }
}
Since it takes the container a few seconds to be ready to receive requests, we use an external data source with a simple bash script that polls a URL using curl; as soon as the response is 200, it exits. The module that initializes InfluxDB has to run only after the instance is responsive, which is why we have the depends_on field in the module definition.
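The repo's wait_for_influx.sh isn't shown in this post; a sketch of what such a script can look like (polling InfluxDB v2's /health endpoint, and printing the JSON object that Terraform's external data source protocol requires on stdout — the function name and max_attempts knob are my own):

```shell
#!/usr/bin/env bash
# Sketch of wait_for_influx.sh: poll InfluxDB's /health endpoint until it
# answers 200, then print the JSON object the external data source expects.
# Usage: ./wait_for_influx.sh <url> [max_attempts]

wait_for_influx() {
  local url="$1"
  local max_attempts="${2:-60}"
  local attempt=0
  local code

  while true; do
    # curl prints only the HTTP status code; "000" means no connection yet
    code=$(curl -s -o /dev/null -w '%{http_code}' "${url}/health" || true)
    if [ "$code" = "200" ]; then
      break
    fi
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "InfluxDB at ${url} did not become ready" >&2
      return 1
    fi
    sleep 1
  done

  # Terraform's external data source requires a JSON object on stdout
  echo '{"status": "ready"}'
}

if [ "$#" -gt 0 ]; then
  wait_for_influx "$@"
fi
```

The JSON on stdout is not optional: the external data source protocol insists the program emit a valid JSON object, even if we never read its contents.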
Step 4: Create the required buckets
Add the following to the buckets.tf file
module "bucket_test" {
  source             = "https://github.com/rymnc/terraform-influx.git//modules/bucket"
  bucket_name        = "test"
  bucket_description = "my super cool test bucket"
  org_id             = module.setup_influxdb.influxdb_details.org_id
  retention_days     = 15

  depends_on = [
    data.external.wait
  ]
}
This just creates a test bucket. Since it depends on the external data source, which resolves only after InfluxDB is responsive, there won't be any failed requests to the URL.
You can create as many buckets as you'd like here, or use a for_each and iterate over a map of buckets defined as a variable.
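As a sketch of that for_each variant (the buckets variable, its shape, and its defaults are my assumptions, not part of the repo):

```hcl
variable "buckets" {
  type = map(object({
    description    = string
    retention_days = number
  }))
  default = {
    metrics = { description = "app metrics", retention_days = 30 }
    logs    = { description = "app logs", retention_days = 7 }
  }
}

module "buckets" {
  source   = "https://github.com/rymnc/terraform-influx.git//modules/bucket"
  for_each = var.buckets

  bucket_name        = each.key
  bucket_description = each.value.description
  org_id             = module.setup_influxdb.influxdb_details.org_id
  retention_days     = each.value.retention_days

  depends_on = [
    data.external.wait
  ]
}
```

Using the map key as the bucket name keeps each instance's Terraform address stable, so adding or removing one entry won't churn the others.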
Step 5: Launch!
After editing your terraform.tfvars file, run
terraform apply # Note that if you're using podman, sudo will be required
Give it a minute to pull the image, and then the InfluxDB instance will be ready for development! You can create access tokens as well, by modularizing the existing provider.
Step 6: Tear down
To destroy all resources
terraform destroy
Conclusion
Using Terraform with InfluxDB gives me reproducible setups across different machines, and saves time as well. Hope you liked it!