Demonstrating the GCE Auth method for Vault I discussed in my previous blog post how I was trying to automate my Vault and GCE demo, so let's talk about that!
Understanding Vault As I’ve been working with customers and the community on the HashiCorp stack, I’ve begun to understand the core philosophies behind a lot of the products.
Mitchell gave a great presentation on Vault’s 0.10 release at the London HashiCorp User Group a few months ago, and there was a slide in there that really helped me understand how Vault works:
Writing and playing with custom Terraform Providers I’ve been digging deeper on Terraform. It’s something I’ve tinkered with in the past, but I’d never really sat down to use it in anger and tie a large project together.
So, I picked something I’d recently been doing manually: configuring a demo of Vault with the GCP backend. I had been doing most of the steps for that by hand, and I wanted to automate the entire process and have a fully reproducible demo environment created in GCP.
EDIT: So, this blog was linked by DevOps Weekly #379, and seems to have climbed the SEO ranking for Hiera and Vault, but I’ve learned a lot and my approach has changed somewhat since then.
I did a webinar on the subject, and I’ll be writing an updated version of how to use Vault and Hiera soon and linking it here.
Although I don’t work at Puppet anymore, it’s still my favourite config management software, and I use it for the management of machines under my control, including my home storage server and several MacBooks.
The main piece of tech that I maintain myself is this blog, so I usually find time to play with it and try new technology such as Docker.
Unsurprisingly, this has led it to be a little over-engineered, and I wanted something simpler. Why was I trying to figure out how to configure a database in a container for a blog that only I was maintaining? I didn’t need CMS-like functionality; if anything, I needed a static site.
Today is a very big day for me.
Today is my last day at Puppet.
Puppet has been a huge part of my life for almost 7 years. It all started way back in 2011, when I was working at Simply Business. We’d just done a big shift to continuous deployment. We had a fully working deployment pipeline, but our ops workflows were starting to creak a bit at the seams.
So, I looked at my blog and realised I didn’t blog at all in 2017.
Honestly, it was a pretty eventful year for me, so I think I have a good excuse:
- I switched roles at Puppet, changing from Pro Service Engineer to Technical Account Manager
- I got married
- I went on a two-week honeymoon
- And right at the end of the year, I had a baby
- I did a lot of travel, not quite as much as 2016, but a lot

So, these conditions are not exactly conducive to heavy blogging.
Well, here it is, Day 30 of the #vdm30in30.
Here’s some stats on the last 30 days of posts:
- 23,343 Words
- 213,374 Characters
- 238 Sentences
- 29 Paragraphs
- 1 hr 25 mins Reading Time
- 2 hrs 10 mins Speaking Time

Most used words:

- puppet 264 (4%)
- provider 116 (2%)
- puppetlabs 93 (1%)
- package 90 (1%)
- should 87 (1%)
- code 85 (1%)
- cockpit 85 (1%)
- opt 84 (1%)
- executed 80 (1%)
- run 79 (1%)

Not sure how accurate that is, because there are a lot of code snippets in there, but it seems about right: each post was roughly 750 words or so.
Day 29 in the #vDM30in30 Image from https://flic.kr/p/sqiJKP
Let’s talk about the lifecycle of a Puppet run.
- The Puppet agent process collects information about the host it is running on, including facts, which it passes to the server.
- The parser uses that system information and the Puppet modules on local disk to compile a configuration for that particular host, and returns it to the agent.
- The agent applies that configuration locally, thus affecting the local state of the host, and files the resulting report with the server, including the facts from the system.

Essentially, Puppet runs in an atomic fashion: the information it has is locked at the start of the run, and is not changed.
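To make the atomicity point concrete, here is a minimal sketch in Python (not Puppet's actual code, and all names here are illustrative) modelling the exchange described above: facts are snapshotted once at the start of the run, the catalog is compiled purely from that snapshot plus the modules, and the apply step never re-reads the system.

```python
def collect_facts():
    # Stand-in for Facter: a one-time snapshot of host state,
    # taken at the start of the run and never refreshed.
    return {"os": "linux", "hostname": "web01"}

def compile_catalog(facts, modules):
    # "Server" side: combine the locked fact snapshot with the
    # modules on disk to produce a host-specific catalog.
    return [rule(facts) for rule in modules]

def apply_catalog(catalog):
    # "Agent" side: enforce each resource and record the outcome,
    # which becomes the report filed back with the server.
    return [{"resource": r, "status": "applied"} for r in catalog]

# A hypothetical module: one rule producing one resource.
modules = [lambda f: f"package on {f['hostname']}"]

facts = collect_facts()                     # 1. facts locked at run start
catalog = compile_catalog(facts, modules)   # 2. compile for this host
report = apply_catalog(catalog)             # 3. apply, 4. report back
```

Nothing after `collect_facts()` touches the host's live state for input, which is the atomic behaviour the post describes.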
Day 28 in the #vDM30in30 Image source: https://flic.kr/p/y1DUPj
So previously I blogged about how to ensure a /var/run directory exists before a systemd service starts, using ExecStartPre steps to create it.
```
ExecStartPre=-/usr/bin/mkdir /run/jmxtrans/
ExecStartPre=/usr/bin/chown -R jmxtrans:jmxtrans /run/jmxtrans/
```

I took the idea from a blog post by Jari Turkia.
However, I made the rookie mistake of not checking the comments to see if things had changed and there was a better way, since the original post was written in 2013.
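For reference, the better way in modern systemd is the `RuntimeDirectory=` directive, which has systemd create the directory under /run, owned by the service's user and group, and clean it up when the service stops. A sketch of a unit fragment (the `ExecStart` path is illustrative, not from the original post):

```ini
[Service]
User=jmxtrans
Group=jmxtrans
# systemd creates /run/jmxtrans owned by jmxtrans:jmxtrans,
# replacing the mkdir/chown ExecStartPre dance above.
RuntimeDirectory=jmxtrans
RuntimeDirectoryMode=0755
ExecStart=/usr/share/jmxtrans/bin/jmxtrans
```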
Day 27 in the #vDM30in30 I started off as a Java developer, so I was used to Eclipse and IntelliJ, quite heavy IDEs with lots of assistance.
After moving away from Java and into Ruby, I noticed one of the other developers was using TextMate, and it seemed much nimbler and easier to get things done.
However, there were a few annoying things about TextMate at the time (circa 2012), such as it crashing when you looked at large files.