Here’s what happened
I was playing around with a system that my team was building at work. The system, when deployed, will have 3 components living on 3 different servers with their networks locked down. You can think of it like:
a — b — c
Server a can talk to server b, b to a and c, and c to b.
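In docker-compose terms, a locked-down topology like this could be sketched with two networks, so that a and c never share one. This is just an illustration — the service names, placeholder images, and network names are all made up:

```yaml
# Hypothetical sketch: two networks so a<->b and b<->c can talk,
# but a and c never share a network.
version: "3"
services:
  a:
    image: alpine:latest   # placeholder image
    networks: [ab]
  b:
    image: alpine:latest   # placeholder image
    networks: [ab, bc]
  c:
    image: alpine:latest   # placeholder image
    networks: [bc]
networks:
  ab:
  bc:
```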
To test these without deploying to the actual environment, I figured I may as well dockerize it! A staple of our stack at work is Seq, a logging platform. Our components write to a common Seq instance to help us with debugging.
Originally, I wasn’t going to include Seq in my Docker stack, mostly because I was lazy. However, I ran into an issue in my e2e tests that I couldn’t, for the life of me, debug. So instead of doing it right the first time, I had to go back and do it again: I had to add Seq to the stack.
Seq requires an API key for tools to log to it, but you cannot configure the Seq Docker image to have an API key by default. Since my setup is a ‘run and throw away’ setup, I wasn’t mounting any volumes, so I would lose any keys created in my Seq instance. I set out on a mission to find a better way.
I found a few solutions on the web. Most of them were derivatives of this forum post. The solution suggests loading up seqcli and running a command on startup. Since Seq takes a while to start, you need to wait for it to come up. The forum suggests fixing this with something like:
command: /bin/bash -c "sleep 10; /bin/seqcli/seqcli apikey create --title='newapikey' --token='123456' --server=http://seq"
For the life of me, I couldn’t get this to work.
I think it’s because the seqcli Docker image runs the seqcli command as its entrypoint, so any bash commands I tried to run just ended up being passed to seqcli as arguments, producing an error:
Usage: seqcli <command> [<args>]
Type `seqcli help` for available commands
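If the image’s entrypoint really is seqcli, one way to run a shell command anyway would be to override the entrypoint in compose. This is a sketch only — it assumes the image actually ships /bin/sh and has seqcli on the PATH, neither of which I have verified:

```yaml
# Sketch: override the entrypoint so the shell, not seqcli,
# receives the command. Assumes /bin/sh exists in the image
# and seqcli is on the PATH.
seqcli:
  image: datalust/seqcli:latest
  entrypoint: /bin/sh
  command: -c "sleep 10; seqcli apikey create -t newapikey --token 12345678901234567890 -s http://seq"
```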
After smacking my head against the wall, I deleted most of the start of the command and ended up with this:
command: apikey create -t newapikey --token 12345678901234567890 -s http://seq
It worked! Sort of. Since I wasn’t waiting for Seq to start up, I just kept getting the error:
seqcli_1 | The command failed: Connection refused
Which is expected, since the Seq container isn’t up yet.
After some mulling, I remembered that docker-compose has the awesome restart feature, which will… restart your container after a failure. Since the seqcli command was exiting with a failure, I set restart: on-failure on the seqcli service.
On my next docker-compose up, it worked! Below is a minimal docker-compose file to get Seq up and running:
version: "3"
services:
  seqcli:
    image: datalust/seqcli:latest
    command: apikey create -t newapikey --token 12345678901234567890 -s http://seq
    depends_on:
      - seq
    restart: on-failure
    networks:
      - seqnetwork
  seq:
    image: datalust/seq:latest
    environment:
      - ACCEPT_EULA=Y
    ports:
      - 8003:80
    networks:
      - seqnetwork
networks:
  seqnetwork:
Paste this into a docker-compose.yml file and run it. I’ve bound Seq to local port 8003 so I can see if it is working.
Head over to http://localhost:8003 to see the Seq dashboard. Go to Settings -> API Keys and you will see our very secret and not at all guessable key: 12345678901234567890. Now you can pre-configure your docker-compose files to hand an API key to all your services. Pretty decent, right?
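For example, a service in the same stack could be handed the key through an environment variable. A sketch only — myapp and the SEQ_URL/SEQ_API_KEY variable names are made up; use whatever your app actually reads its configuration from:

```yaml
# Hypothetical consumer service: hand the pre-created key over
# via environment variables (names are illustrative).
myapp:
  image: myorg/myapp:latest          # placeholder image
  environment:
    - SEQ_URL=http://seq             # Seq is reachable by its service name
    - SEQ_API_KEY=12345678901234567890
  depends_on:
    - seq
  networks:
    - seqnetwork
```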
Hopefully this helps anyone that was stuck hacking around like I was earlier this week. This solution seems to work! Let me know if you have any better ideas below.