Setting Up a Grafana-InfluxDB-UnPoller Docker Stack in GCP

A complete overview of setting up this stack on Google Compute Engine. I'll update this sometime with the glorious data!

Alright, first blog post ever, so welcome! I'll be going over my setup and configuration of an Ubuntu 22.04 GCP VM with a full Grafana "stack" that will be used to monitor a UniFi instance. The projects used can be viewed at UnPoller, Grafana, and InfluxDB.

Let’s Start With the VM Config

Heading into VM Instances, we can create a new VM with some pretty basic specs since it's not going to be doing anything crazy. I chose E2-Small, which has enough RAM to deal with Influx moving large amounts of data; after running this for 2+ months, GCP's sizing recommendation agreed E2-Small was the right fit. Otherwise, feel free to name it and choose the region and zone however you want. To become better at the command line, I didn't enable "Display device". The boot disk option is the puzzling one, and I would recommend at least looking through these settings and reading about the differences. I decided to boot from a public image (Ubuntu 22.04 LTS) with an SSD persistent disk of 32 GB. Flip any other settings you need, too, as you read through them!

SSH In and Update + Install

Heading into our new VM instance, we can start it up in the top menu bar and click the "SSH" button after GCP notifies us that the system is running. We need to update our system and install Docker, so we do that with:

sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose

We are going to build this entire stack inside a single docker-compose file so that all containers can communicate with one another on the same Docker network, which makes addressing one container from another really simple.

Setting Up Grafana

Initial Setup of Filesystem

Now that we have our dependencies installed, we can start to set up the centerpiece of our entire stack: Grafana. First we are going to make our docker directory to hold the docker-compose.yaml file and any other related files we might need. Then we will also make our docker-compose.yaml file to get it off the ground.

mkdir main-docker/
nano main-docker/docker-compose.yaml

I personally like Nano as a command-line text editor so we’ll be using that through the rest of this. Feel free to read about it here: Nano.

Docker-Compose for Grafana

I put the following into the docker-compose.yaml file to set up my container the way I wanted.

version: "3"

services:
  grafana:
    container_name: up_grafana
    image: grafana/grafana
    restart: unless-stopped
    ports:
      - '3000:3000'
    volumes:
      - grafana-storage:/var/lib/grafana
      - grafana-etc:/etc/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_INSTALL_PLUGINS=grafana-clock-panel,natel-discrete-panel,grafana-piechart-panel

volumes:
  grafana-storage:
  grafana-etc:

All right, that's a lot of stuff right there, so let's break it down into readable amounts. We start by specifying the version of docker-compose that will be used. We then move into services; this is where all of the containers we will be running will go. At the moment, it's only populated with Grafana, but InfluxDB and UnPoller will be here once we get there.

The grafana container comes first, and under it are all the customization options that we have. I decided to name the container up_grafana, and we pull the image from the official hosting at Grafana. I specify the restart policy unless-stopped, which means the container is restarted automatically (after a crash or reboot) unless it was stopped manually by an admin (me in this case). There are a lot of options; Docker has a nice list located here.

Next, we move to the port layout of our container and what should be open. The mapping here is external (host) port to internal (container) port; this configuration can be changed, but 3000 is Grafana's default internal port.

My favorite thing that I learned is the use of volumes as persistent storage. By mapping the areas where Grafana keeps data (/var/lib/grafana and /etc/grafana), these areas will be preserved across a reboot of the VM. This also allows you to edit config files (such as grafana.ini) with changes that stick around after a container restarts.

The environment variables tell Grafana what plugins to install and what to name the admin user for logging in the first time. Lastly, we declare the volumes at the top level so Docker knows how to create them and load them into storage.

We can bring this docker-compose online by running the command:

sudo docker-compose up

This command builds and starts the containers in the foreground, printing all the logs to the screen. The other option is to run the command:

sudo docker-compose up -d

This runs docker-compose in detached mode, giving you your terminal back; combined with the unless-stopped restart policy, the containers will also launch again automatically if the server reboots.

Setting Up SSL for HTTPS Traffic

We all like the security of HTTPS sites, so let's configure our Grafana instance to work over HTTPS! This was not something I knew how to do overly well, but I was able to learn the gist from this article. This was my first time with LetsEncrypt, and I will probably write another post about how to automatically renew the certificates before they expire.

We start by installing certbot and running it in standalone mode. Note that standalone mode spins up its own temporary web server, so port 80 needs to be reachable through your GCP firewall.

sudo apt-get install certbot
sudo certbot certonly --standalone

From there, we will want to follow the steps on screen, including putting in your domain and other information to generate the certificates we need. After finishing this process, we will bring our Docker container offline and copy the files we need into the corresponding locations.

sudo docker-compose down
sudo cp /etc/letsencrypt/live/**YOUR-DOMAIN**/privkey.pem /var/lib/docker/volumes/main-docker_grafana-etc/_data/
sudo cp /etc/letsencrypt/live/**YOUR-DOMAIN**/fullchain.pem /var/lib/docker/volumes/main-docker_grafana-etc/_data/
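If you end up renewing certificates regularly, the two copies above are easy to script. Here's a hedged sketch that runs against demo directories so it works anywhere; on the VM you would point SRC at /etc/letsencrypt/live/<your domain> and DST at the grafana-etc volume's _data directory (DOMAIN, SRC, and DST are placeholders of mine, not part of the guide).

```shell
#!/bin/sh
# Sketch: copy LetsEncrypt certs into the Grafana volume in one step.
DOMAIN="example.com"
SRC="demo-letsencrypt/live/$DOMAIN"   # really /etc/letsencrypt/live/$DOMAIN
DST="demo-grafana-etc/_data"          # really /var/lib/docker/volumes/main-docker_grafana-etc/_data

# Demo scaffolding so this runs anywhere; remove these lines on the real VM.
mkdir -p "$SRC" "$DST"
printf 'demo-key\n'   > "$SRC/privkey.pem"
printf 'demo-chain\n' > "$SRC/fullchain.pem"

# The actual copy step from above, parameterized by domain.
cp "$SRC/privkey.pem" "$SRC/fullchain.pem" "$DST/"
ls "$DST"
```

On the real VM you would run this with sudo, since both the LetsEncrypt directory and the Docker volumes are root-owned.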

Unlike the article, since we are running this in a Docker container, we move the files into the persistent storage that we have mapped. We took the /etc/grafana folder and mapped it to the main-docker_grafana-etc volume in the Docker area. This means that when we put our files there, Grafana will find them where it thinks /etc/grafana/ is, even though that path is really just our mapped volume. By putting the certificates here, Grafana can reference them at any time, even after a reboot. Next we move on to the grafana.ini file, which contains the config for all of Grafana. We can edit the file with Nano.

nano /var/lib/docker/volumes/main-docker_grafana-etc/_data/grafana.ini

This file is pretty large, but we are going to scroll down to the [server] section, which has the fields we need to edit: protocol, domain, cert_file, and cert_key. We are going to update these fields to the following:

# Protocol (http, https, h2, socket)  
protocol = https
# The public facing domain name used to access grafana from a browser
domain = **YOUR-DOMAIN**
# https certs & key file  
cert_file = /etc/grafana/fullchain.pem  
cert_key = /etc/grafana/privkey.pem

These lines may have a ; in front of them, which needs to be removed for the lines to be read when the application loads. That is all we need for traffic to flow over HTTPS instead of plain HTTP. Just a quick note about learning Docker, too: we navigated to /var/lib/docker/volumes/main-docker_grafana-etc/, which is where we mapped our storage in the volumes section of docker-compose.yaml.
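If you would rather script those four edits than scroll through the file by hand, here is a hedged sed sketch. It runs against a miniature demo file so it can be tried anywhere; on the VM you would point INI at the real grafana.ini in the volume, and example.com stands in for your domain.

```shell
# Demo grafana.ini with the stock commented-out [server] fields.
# On the VM: INI=/var/lib/docker/volumes/main-docker_grafana-etc/_data/grafana.ini
INI="demo-grafana.ini"
printf '%s\n' '[server]' ';protocol = http' ';domain = localhost' \
  ';cert_file =' ';cert_key =' > "$INI"

# Uncomment each field (strip the leading ';') and set its value.
sed -i \
  -e 's|^;\{0,1\}protocol =.*|protocol = https|' \
  -e 's|^;\{0,1\}domain =.*|domain = example.com|' \
  -e 's|^;\{0,1\}cert_file =.*|cert_file = /etc/grafana/fullchain.pem|' \
  -e 's|^;\{0,1\}cert_key =.*|cert_key = /etc/grafana/privkey.pem|' \
  "$INI"

cat "$INI"
```

The same four sed expressions work on the real file since each field only appears once in the [server] section; still, keep a backup copy of grafana.ini before scripting edits against it.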

Oh No - Errors In Grafana…

Upon launching our container, I was immediately greeted with some permission errors that kept the container from starting correctly. The errors centered on permissions for the database and plugins. Some example error messages were:

up_grafana | logger=migrator t=2022-04-22T20:57:53.1+0000 lvl=eror msg="failed to determine the status of alerting engine. Enable either legacy or unified alerting explicitly and try again" err="failed to verify if the 'alert' table exists: unable to open database file: permission denied"

up_grafana | logger=server t=2022-04-22T21:00:54.28+0000 lvl=eror msg="Server shutdown" error="*api.HTTPServer run error: open /etc/grafana/privkey.pem: permission denied"

up_grafana | *api.HTTPServer run error: open /etc/grafana/privkey.pem: permission denied

Well, this was a huge rabbit hole, but let me share the solution I ended up figuring out. The /var/lib/docker/ directory is normally owned by root, so I had chowned it to myself to poke around and edit files. Well, that doesn't really make Docker containers happy: the Grafana container runs as its own user, not root. That user's UID/GID is typically 472, though you may have to dig a bit to confirm it on your install. Once you have it, you can run these commands to change the directories back.

sudo chown -R 472:472 /var/lib/docker/volumes/main-docker_grafana-storage/_data
sudo chown -R 472:472 /var/lib/docker/volumes/main-docker_grafana-etc/_data
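A quick way to diagnose this kind of error, before or after the chown, is to check the numeric owner of the volume directories. The sketch below runs against a scratch directory so it works anywhere; on the VM you would point DIR at the _data paths above (with sudo) and expect to see 472:472.

```shell
# Demo: report the numeric owner of a directory.
# On the VM: DIR=/var/lib/docker/volumes/main-docker_grafana-etc/_data (needs sudo)
DIR="demo-data"
mkdir -p "$DIR"
OWNER=$(stat -c '%u:%g' "$DIR")
echo "$DIR is owned by $OWNER"   # Grafana's container user wants 472:472
```

If the reported owner is 0:0 (root) or your own UID instead of 472:472, that's the same mismatch that produced the "permission denied" errors above.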

Launch Grafana for Real!

Alright, now that everything should be set and permission-happy, let's go ahead and bring our docker-compose back online and see if we can navigate to the page. In my case this was a success, and I was able to log in with the default admin config and start customizing Grafana like normal. The details about first login can be found here on Grafana's website. Yay, that's ⅓ of the stack all done, and luckily it only gets easier from here on out!

Setting Up UnPoller and InfluxDB

Docker-Compose for InfluxDB and UnPoller

So, we have to pull our containers offline to update our docker-compose.yaml. After taking them offline, we can nano in and add our other two applications. This is my final docker-compose.yaml, which we will break down for a full understanding.

version: "3"

services:
  grafana:
    container_name: up_grafana
    image: grafana/grafana
    restart: unless-stopped
    ports:
      - '3000:3000'
    volumes:
      - grafana-storage:/var/lib/grafana
      - grafana-etc:/etc/grafana
    depends_on:
      - influxdb
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_INSTALL_PLUGINS=grafana-clock-panel,natel-discrete-panel,grafana-piechart-panel

  influxdb:
    container_name: up_influxdb
    restart: unless-stopped
    image: influxdb:1.8
    ports:
      - '8086:8086'
    expose:
      - '8086'
    volumes:
      - influxdb-storage:/var/lib/influxdb
    environment:
      - INFLUXDB_DB=unifi

  unpoller:
    container_name: up_poller
    restart: unless-stopped
    image: golift/unifi-poller:latest
    ports:
      - '3190:3190'
    depends_on:
      - influxdb
      - grafana
    volumes:
      - unpoller-config:/config

volumes:
  grafana-storage:
  grafana-etc:
  influxdb-storage:
  unpoller-config:

Luckily, not much has changed other than adding the other containers. We do, however, see one new field: depends_on. This field makes the containers "boot" in a specific order. InfluxDB will boot first since it doesn't wait on anything. Grafana depends on InfluxDB because with no data, there won't be anything to display in the graphs. Lastly, we want UnPoller to come online last since it relies on communicating with both InfluxDB and Grafana. UnPoller only works with InfluxDB v1, so we manually pin the version (1.8) when we choose the InfluxDB image. Otherwise, we map the data storage to a persistent volume and define the environment variables for the database. For UnPoller, we simply define the normal fields and then map the config volume so it persists.

Configuring UnPoller

UnPoller has a config file that we will have to edit to specify our UniFi Controller and the details of our InfluxDB database. The only other setup we need is in our UniFi Controller: make a read-only admin user with any username and password (just make it secure). We can navigate to the persistent data in the VM and create our config file from the default one provided at this link. We will copy the file from GitHub and paste it into the window once we have the file loaded up in Nano. The file can be created at the following directory with this command:

nano /var/lib/docker/volumes/main-docker_unpoller-config/_data/up.conf

The main fields we will need to update are under [influxdb] (url, user, pass, db) and under [unifi.defaults] (url, user, pass). Do note, there is a lot of customization available, but the defaults were fine by me for most things in this installation. Most of these values were configured in our docker-compose.yaml, so we can copy and paste from there (this is mostly the InfluxDB fields). The relevant items are listed here for reference.

    [influxdb]
      disable = false
      # InfluxDB does not require auth by default, so the user/password are probably unimportant.
      url  = "http://up_influxdb:8086"
      user = "**YOUR-INFLUX-USER**"
      # Password for InfluxDB user (above).
      pass = "**YOUR-INFLUX-PASS**"
      # Be sure to create this database. See the InfluxDB Wiki page for more info.
      db   = "unifi"

    [unifi.defaults]
      # URL for the UniFi Controller. Do not add any paths after the host:port.
      # Do not use port 8443 if you have a UDM; just use "https://ip".
      url = "https://**YOUR-CONTROLLER**:8443"
      # Make a read-only user in the UniFi Admin Settings, allow it access to all sites.
      user = "**YOUR-UNIFI-USER**"
      # Password for UniFi controller user (above).
      pass = "**YOUR-UNIFI-PASS**"

Note: our InfluxDB URL is http://up_influxdb:8086; if you name your container something else, you would put that name here instead. This is actually something super neat I learned about Docker networks: you can reference a container by its name, and Docker's internal DNS will always resolve it, even if the container comes back with a new IP after a reboot.
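Before restarting the stack, it can save a round of log-reading to confirm your edits actually made it into up.conf. A hedged sketch, run here against a miniature demo file; on the VM, point CONF at the volume path from the nano command above.

```shell
# Demo up.conf with just the fields this guide edits.
# On the VM: CONF=/var/lib/docker/volumes/main-docker_unpoller-config/_data/up.conf
CONF="demo-up.conf"
printf '%s\n' \
  '[influxdb]' \
  '  url = "http://up_influxdb:8086"' \
  '  db = "unifi"' \
  '[unifi.defaults]' \
  '  url = "https://127.0.0.1:8443"' > "$CONF"

# Check that the InfluxDB fields point at the container name, not an IP.
grep -q 'url = "http://up_influxdb:8086"' "$CONF" && echo "influxdb url: OK"
grep -q 'db = "unifi"' "$CONF" && echo "influxdb db: OK"
```

If either grep comes back empty, the edit didn't stick (or went into the wrong file), which is much easier to spot here than in UnPoller's startup logs.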

Configuring InfluxDB

InfluxDB is much easier to configure and comes out of the box almost ready to rumble. We just need to enter the container and add a retention policy to the database so we don't store more data than our VM can hold. To do this, we bring our docker-compose online; it's pretty much all set. Once it is up and running, we want a shell inside our InfluxDB container so we can access the influx command-line tool. We can get bash access by running the following command.

sudo docker exec -it up_influxdb /bin/bash

Once inside the container, we need to enter the Influx shell, which we can do by running the influx command. Upon getting in, we choose the database we want to work on; in this example, it is the unifi database. After that, we create a retention policy on the database for 1 week and set it as the default retention policy for the unifi database. For more information on Influx, and specifically retention policies, check out their documentation here. The commands run here are pretty simple, so I grouped them into one code block.

USE "unifi"
CREATE RETENTION POLICY "one_week" ON "unifi" DURATION 1w REPLICATION 1 DEFAULT

That just about does it for the InfluxDB setup. Assuming all is well, we should see our retention policy in place (SHOW RETENTION POLICIES will list it) and can get out of the Docker shell by running the exit command twice.

Bringing It All Online and Putting Them Together

Alright, the hard part is done. Everything is set up and ready to rumble, with the last piece simply importing the InfluxDB data into Grafana and then making dashboards. We will start the docker-compose for real this time and let it run while we navigate to the Grafana homepage and log in.

Adding Our InfluxDB Data Source

Upon logging into Grafana, head into the left sidebar and click into the configuration menu. It will default to opening the data sources tab; we are going to add a new data source by clicking "Add Data Source". From the list, choose "InfluxDB" and head to the configuration page for our data source. The first field we need to fill out is "URL", where we put http://up_influxdb:8086. Scrolling down to the bottom, we will see fields for username, password, and database; in these fields, put the corresponding information from the environment variables in the InfluxDB section of our docker-compose.yaml. Once you are all set, hit the "Save and Test" button, and if it turns green you are all set to keep on rocking.

Adding Dashboards to Grafana

We head back over to the left sidebar, hover over the "+" option, and click the "Import" drop-down choice. All the premade dashboards can be found on the creator's page at this link. However, we only want the InfluxDB options, so we choose from USW (switches), UAP (access points), Sites, and Client Insights. Once you choose one (don't worry, you can do all of them), click the "Copy ID to Clipboard" button on the right-hand side, paste it into the "Import via" text box, and click "Load". Feel free to choose any name for it, then choose the InfluxDB data source that we previously created before clicking "Import".

That’s All Folks

Thanks for reading along. This is my first post and it's kinda huge, but I figured there's no better place to start than with something I really enjoyed learning about and building. Feel free to drop a comment with issues or suggestions! I'll see you all in the next one!