Terraform mass move

I need to rename a terraform resource created with a for_each over a map. I could just change the terraform and apply, which results in a destroy/create. Not terrible in this case, but I want to do better and achieve the rename just by changing state. Best of all, I want to script this.

Situation

My resource needs to go from:

resource "aws_ssm_parameter" "ecs_api_parameter" {
  for_each = var.ecs_api_parameters
  name     = each.key
  type     = "String"
  value    = each.value
}

To:

resource "aws_ssm_parameter" "ecs_parameter" {
  for_each = var.ecs_parameters
  name     = each.key
  type     = "String"
  value    = each.value
}

Terraform plan tells me it’ll be destroy/create –

$ terraform plan

# aws_ssm_parameter.ecs_api_parameter["one"] will be destroyed
# (because aws_ssm_parameter.ecs_api_parameter is not in configuration)
  - resource "aws_ssm_parameter" "ecs_api_parameter" {
<snip>
# aws_ssm_parameter.ecs_parameter["one"] will be created
  + resource "aws_ssm_parameter" "ecs_parameter" {
<snip>
Plan: 4 to add, 0 to change, 4 to destroy.

That would be okay in this case – these are just parameters, there’s no interesting history I want to keep, there’s no impact at all. There will be times you don’t want to do this – that DynamoDB table may well have a backup, but if you don’t have to restore it then life’s easier.

Instead we can just tell terraform the resources have changed their names, and we do this by working on the state file.

Changing resources in state

We can work out what needs to happen just by looking at the code, but inspecting the plan output is rather helpful. We can see that we have these resources:

aws_ssm_parameter.ecs_api_parameter["one"]
aws_ssm_parameter.ecs_api_parameter["two"]
aws_ssm_parameter.ecs_api_parameter["ninety-nine"]
aws_ssm_parameter.ecs_api_parameter["5"]

Which need to become these:

aws_ssm_parameter.ecs_parameter["one"]
aws_ssm_parameter.ecs_parameter["two"]
aws_ssm_parameter.ecs_parameter["ninety-nine"]
aws_ssm_parameter.ecs_parameter["5"]

The terraform state mv command lets us do this easily enough. The basic solution is going to be along the lines of:

terraform state mv aws_ssm_parameter.ecs_api_parameter["one"] aws_ssm_parameter.ecs_parameter["one"] 
terraform state mv aws_ssm_parameter.ecs_api_parameter["two"] aws_ssm_parameter.ecs_parameter["two"] 
terraform state mv aws_ssm_parameter.ecs_api_parameter["ninety-nine"] aws_ssm_parameter.ecs_parameter["ninety-nine"] 
terraform state mv aws_ssm_parameter.ecs_api_parameter["5"] aws_ssm_parameter.ecs_parameter["5"] 

Four resources, I can do some copy and paste, it’s easy. The thing is there could be an awful lot more than four, and manual stuff can easily go wrong. Okay, scripting can too, but bear with me here.

Scripting this

I start with a dead basic one-liner to iterate over the output of terraform state list, grepping for the resource type, saying what I’ve found:

$ for param in `terraform state list | grep aws_ssm_parameter.ecs_api_parameter` ; do echo $param ; done

aws_ssm_parameter.ecs_api_parameter["one"]
aws_ssm_parameter.ecs_api_parameter["two"]
aws_ssm_parameter.ecs_api_parameter["ninety-nine"]
aws_ssm_parameter.ecs_api_parameter["5"]

Next I want to get just that string so I can construct my terraform state mv command, I’ll use cut here to only give me what’s between the quotes:

$ for param in `terraform state list | grep aws_ssm_parameter.ecs_api_parameter | cut -d '"' -f 2` ; do echo $param ; done

one
two
ninety-nine
5

Now to test I’ll try a terraform state show – something totally non-destructive. I need to escape the quotes in the terraform command or it doesn’t work. Try it and see…

$ for param in `terraform state list | grep aws_ssm_parameter.ecs_api_parameter | cut -d '"' -f 2` ; do terraform state show aws_ssm_parameter.ecs_api_parameter[\"$param\"] ; done

# aws_ssm_parameter.ecs_api_parameter["one"]:
resource "aws_ssm_parameter" "ecs_api_parameter" {
arn = "arn:aws:ssm:eu-west-2:xxx:parameter/one"
data_type = "text"
id = "/one"
name = "/one"
<snip>

Then we can switch out the show with a mv to try the terraform state mv command – but I’m going to use the -dry-run option because I’m a coward:

$ for param in `terraform state list | grep aws_ssm_parameter.ecs_api_parameter | cut -d '"' -f 2` ; do terraform state mv -dry-run aws_ssm_parameter.ecs_api_parameter[\"$param\"] aws_ssm_parameter.ecs_parameter[\"$param\"]; done

Would move "aws_ssm_parameter.ecs_api_parameter[\"one\"]" to "aws_ssm_parameter.ecs_parameter[\"one\"]"
Would move "aws_ssm_parameter.ecs_api_parameter[\"two\"]" to "aws_ssm_parameter.ecs_parameter[\"two\"]"
Would move "aws_ssm_parameter.ecs_api_parameter[\"ninety-nine\"]" to "aws_ssm_parameter.ecs_parameter[\"ninety-nine\"]"
Would move "aws_ssm_parameter.ecs_api_parameter[\"5\"]" to "aws_ssm_parameter.ecs_parameter[\"5\"]"

If I were being a real coward I’d change my loop more to pick a single resource before going for it, but I’m happy with this.

$ for param in `terraform state list | grep aws_ssm_parameter.ecs_api_parameter |  cut -d '"' -f 2` ; do terraform state mv aws_ssm_parameter.ecs_api_parameter[\"$param\"] aws_ssm_parameter.ecs_parameter[\"$param\"]; done

Move "aws_ssm_parameter.ecs_api_parameter[\"one\"]" to "aws_ssm_parameter.ecs_parameter[\"one\"]"
Successfully moved 1 object(s).
Move "aws_ssm_parameter.ecs_api_parameter[\"two\"]" to "aws_ssm_parameter.ecs_parameter[\"two\"]"
Successfully moved 1 object(s).
Move "aws_ssm_parameter.ecs_api_parameter[\"ninety-nine\"]" to "aws_ssm_parameter.ecs_parameter[\"ninety-nine\"]"
Successfully moved 1 object(s).
Move "aws_ssm_parameter.ecs_api_parameter[\"5\"]" to "aws_ssm_parameter.ecs_parameter[\"5\"]"
Successfully moved 1 object(s).

And now if I try a plan?

$ terraform plan

aws_ssm_parameter.ecs_parameter["one"]: Refreshing state... [id=/one]
aws_ssm_parameter.ecs_parameter["two"]: Refreshing state... [id=/two]
aws_ssm_parameter.ecs_parameter["ninety-nine"]: Refreshing state... [id=/ninety-nine]
aws_ssm_parameter.ecs_parameter["5"]: Refreshing state... [id=/5]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

So yes, it worked!
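As a footnote: on Terraform 1.1 or later the same rename can be declared in the configuration with a moved block, which covers every for_each instance in one go and avoids touching state by hand. A minimal sketch:

# Declaring the rename in code – all instances of the for_each
# (["one"], ["two"], ...) are moved on the next plan/apply.
moved {
  from = aws_ssm_parameter.ecs_api_parameter
  to   = aws_ssm_parameter.ecs_parameter
}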

AWS Data Engineer Associate

Preamble

A while back I was lucky enough to work on a Data Hub solution for a global logistics firm. I’ll be honest, I knew very little about the subject at the beginning, but I think I picked stuff up fairly well and quickly and delivered some okay stuff.

While working on it I learned about Redshift, Glue, Lake Formation and more in-depth stuff on S3 and DynamoDB. I started to get the hang of the whole data pipeline idea and how things hang together, and I started on the AWS Data Analytics Specialty course (on ACloudGuru). This was initially to gain knowledge on Redshift, but I quickly found that some aspects of the course were a bit 101, and some parts (like Redshift partitioning) were out of scope for me as the infrastructure guy – the Data Engineers and Architects were on that stuff.

I turned instead to a few different AWS certs (SA Pro, Security, DBs), but I was always tempted to return to that course. After all, I’d covered some of the content through lectures, and some I knew from experience. Also (the best reason) I actually found it generally interesting. I was a little dismayed when I found the course was being retired – but what’s this? There’s an Assoc Data Engineer cert instead? Interesting…

AWS Data Engineer Assoc Courses

This cert was in beta, and due to come out in January 2024, but it makes sense to start prepping now. I found an exam prep section on the AWS SkillBuilder site and noted I’d taken the sample questions and scored 70% a while back, and this will have been without any pre-reading/watching. A nice surprise, but looking at the actual syllabus I realised there were an awful lot of things I didn’t know at all. I’m not all that interested in just passing a cert to get the badge, I do actually want it to have meaning (like, show I know stuff).

I looked on ACloudGuru (as my work gives access to this) but I didn’t see anything there. Not too displeased, there’s only so much “Hello Cloud Gurus” I can stand.

Instead, I noted a Maarek course on Udemy, on offer at a tenner. Bargain! I really like the Maarek courses, I seem to get on well with his presentation and accent (makes a difference to me). I was a little disappointed to find the course started with some other chap going through Data Engineering Fundamentals, but I really quickly warmed to him, and the AWS-specific topics seem to be mostly Maarek. Looking again, it’s not billed as just a Maarek course – the other chap is Frank Kane, and I reckon I’ll try his courses in future too.

The course is based on the beta syllabus, but they’ve made the point that it’ll be updated as AWS pushes out the final exams etc. Overall, while I know AWS’s own prep stuff will be good, and I’m sure there will be an ACG course some time soon, I’m very happy to have bought this course.

AWS Data Engineer Syllabus

Looking at the Udemy course contents, the syllabus seems to cover:

  • Data Engineering Fundamentals
  • Storage (S3, EFS, EBS, Backup)
  • DBs (mainly Redshift, a fair bit of DynamoDB, touches on RDS and a few others)
  • Migration (DMS, DataSync, Snow, Transfer family)
  • Compute (EC2, Lambda, SAM)
  • Containers (ECS, ECR, EKS – but not really heavy on this)
  • Analytics (Glue, Athena, EMR, Kinesis, OpenSearch)
  • Application Integration (SQS/SNS, Step Functions, EventBridge)
  • Security etc (IAM, KMS, Secrets, WAF – all those things)
  • Networking (R53 and Cloudfront really)
  • Management & Governance (CloudWatch&Trail, Parameter Store)
  • Machine Learning (Sagemaker)
  • Developer Tools (CDK, Cloud9, Code*, CLI)
  • “Everything Else” (Budgets & Cost Explorer, API Gateway)

I’m seeing a lot of these covered in other courses – things like storage, compute, networking, security, management, dev tools etc are all really common topics, and quite honestly bread and butter. I guess as an Assoc course these things need to be covered, not just assumed (as you might be able to with a Specialty cert). I’ll certainly go through all bar the most basic content in case there are nuances I need for this exam – there are certainly some S3 things I’m not sure on (access points) so it’s good for the refresher there anyway!

I’m really looking forward to this course. It’ll help refresh and update my knowledge in an interesting area, and I think this along with the SysOps Assoc I just passed will be a nice additional grounding for the DevOps Pro cert I want to take later in the year. I’ve not really been massively interested in ML before, but I’ll see if the Sagemaker aspects of this tickle my fancy – maybe the next Spec cert could be ML, instead of the Networking one I’ve been considering?

It’s all about learning and moving forwards. I’ve done staying still in the past, and it’s a daft sucky path to take.

Resources

Terraform modules & for_each

[WROTE THIS WITH A HEAVY COLD – I NEED TO CHECK IT]

I like for_each. I like modules that use for_each and create a bunch of resources based on an input map. But is this the best way?

What do I mean? A super basic module for parameter store might contain a single resource with a for_each iterating through the input map variable:

resource "aws_ssm_parameter" "store" {
  for_each = var.store

  name        = each.value.name
  description = each.value.description
  type        = each.value.type
  value       = each.value.value
}

And we’d call it as:

module "parameters" {
  source = "../..//modules/parameter-store"

  store = {
    beverley_hills_s3_bucket = {
      name        = "s3/buckets/beverley_hills_s3_bucket"
      description = "Bucket for 1990s TV show and more"
      type        = "String"
      value       = "s3a://bucket-90210/temp"
    }
    smokey_s3_bucket = {
      name        = "s3/buckets/smokey_s3_bucket"
      description = "An unusually mobile bucket"
      type        = "String"
      value       = "s3a://bucket-20252/temp"
    }
  }
}

And then we’d have a couple of parameters in SSM Parameter Store.
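For completeness, the module itself would declare store as a map of objects. A minimal sketch (the empty default is my own assumption):

variable "store" {
  description = "Map of SSM parameters to create, keyed by a friendly name"
  type = map(object({
    name        = string
    description = string
    type        = string
    value       = string
  }))
  default = {}
}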

Why do I like this? I don’t have to call the module once per group of allied resources, which means my code is a little DRYer. Why not a count through a list (instead of for_each through a map)? Because count really sucks.

What are the downsides though?

Simple modules are fine with this, but the more complex the set of resources, the larger and deeper the map has to get, and the muckier dynamic blocks can get. Instead, I’m looking at returning to the normal module method, assuming a single set of resources (which might include a map – for example, a map of ECS services and task definitions for the cluster that’s being managed).

The way forward will be to put the for_each on the module call itself, so our single-resource module goes back to a vanilla:

resource "aws_ssm_parameter" "store" {
  name        = var.name
  description = var.description
  type        = var.type
  value       = var.value
}

With the module call using a map and for_each like:

locals {
  store = {
    beverley_hills_s3_bucket = {
      name        = "s3/buckets/beverley_hills_s3_bucket"
      description = "Bucket for 1990s TV show and more"
      type        = "String"
      value       = "s3a://bucket-90210/temp"
    }
    smokey_s3_bucket = {
      name        = "s3/buckets/smokey_s3_bucket"
      description = "An unusually mobile bucket"
      type        = "String"
      value       = "s3a://bucket-20252/temp"
    }
  }
}

module "parameters" {
  source = "../..//modules/parameter-store"

  for_each = local.store

  name        = each.value.name
  description = each.value.description
  type        = each.value.type
  value       = each.value.value
}
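
One consequence of for_each on the module call is that each instance gets its own address, so outputs are referenced per key. Assuming the module exposed an arn output (that output is my own addition, not in the code above), it would look something like:

# In the parameter-store module (hypothetical output):
output "arn" {
  value = aws_ssm_parameter.store.arn
}

# In the calling code, individual instances are then addressed by map key:
locals {
  smokey_parameter_arn = module.parameters["smokey_s3_bucket"].arn
}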

AWS AZ Failure Simulation

Another not sexy but actually cool new feature – AZ failure simulation. What and how and why?

AWS’s Fault Injection Service has been around a while, allowing you to trigger system issues and failures. This is the sort of thing Netflix made famous with Chaos Monkey – we can randomly terminate instances, max out their CPU, interrupt connectivity.

Some new features were unveiled in re:Invent, to allow us to test resilience against entire AZ failures (e.g. power or network) and cross-region connectivity failures: https://aws.amazon.com/about-aws/whats-new/2023/11/aws-fault-injection-service-two-requested-scenarios/

Right now I’m more interested in AZ failure as most work on my current project is single region, but we do use multiple AZs as normal for HA. We assume that the HA will work as we expect, and we can test by terminating resources, but until now it’s not been simple to test the entire AZ (including networking) failing.

So how do we do this?

While a lot of simulations can be executed programmatically, there are scenarios which are purely console-based, and AZ failure is one of them. It looks relatively straightforward, but my lack of IAM permissions right now is stopping me getting far with the account I’m using. Basically we specify the ARNs/tags of resources we want hit by the various scenarios, press the button and watch the world burn. Or rather, watch the simulation of the AZ power failing for 30 minutes, followed by intermittent issues for the following 30.

The sad thing for me is that this does not cover Amazon ECS tasks running on AWS Fargate, as these are the workloads my current project is moving toward, and I’d love to prove their availability. At least I’ll be able to demonstrate the EC2 and RDS failover.

I’ve not tried AWS FIS at all before now, but now I really want to – it gives me a chance to have a bit of fun testing what happens when things go wrong (beyond just terminating an instance and watching the ASG bring up another in possibly the same AZ). Obviously not something I’ll try on a customer account on a whim, but I’ll propose investigating this in a future sprint.

S3 Bits – Express and Mount

A recent news item got me reading on new S3 features, and I discovered one I’d missed. Here we go.

S3 Express One Zone

Faster, single-AZ S3 with cheaper data retrieval. Why? Why not?
Firstly here’s the announcement: https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-zone-high-performance-storage-class/

Pros

Faster
This really is lower latency. If you’ve a whole bunch of systems needing to access shared storage this could do the trick. This lower latency isn’t just a nice thing, it can save money if it reduces the amount of compute time required. This isn’t conjecture (see: https://aws.amazon.com/blogs/storage/amazon-s3-express-one-zone-delivers-cost-and-performance-gains-for-chaossearch-customers/), but clearly you’ll need to be optimising heavily to actually benefit from a saving here.

Still, speed is nice, if you need it.

Cheaper data retrieval

The data retrieval cost is lower. From https://aws.amazon.com/s3/pricing/:

(per 1,000 requests)   PUT/POST etc   GET etc
Standard               $0.005         $0.0004
Express One Zone       $0.0025        $0.0002

I can see this being awesome for lots of requests for small objects. I used to work with HPC people, and they really just needed fast access to shared storage full of small files. They used NFS and then GPFS, but this could fit similar workloads, provided they were okay with object instead of file-based storage. More on that later…

Cons

More expensive storage

Yes, data retrieval is cheaper, but the actual storage cost is higher than S3 Standard.

Using the same pricing page as above, storage for Standard starts at $0.023 per GB (dropping slightly at higher volume tiers), while Express One Zone is $0.16 per GB. That’s a really significant hike, so this is clearly not going to be good for general storage.

Single AZ

Yes, this is in a single AZ – which I reckon is what makes the lower latency possible, as the data’s going to be in the same data centre as your compute. But having the data only in one AZ carries a risk. The normal S3 hardware resilience is built in (lots of 9s there), but that doesn’t cover an AZ-level fire, flood etc. If the data centre has a catastrophe the data might be gone for good. How much of a problem is that?

Given the cost and use case I’m seeing this as not being the primary location of this data – anything really needed would shift to Standard at some point.

Limited Regions

In time-honoured tradition, this is only available in a few regions right now, but it should be rolled out to others. For now, it’s N.Virginia, Oregon, Tokyo, Stockholm.

Conclusion

I’m seeing this really as an expensive fast local cache – it’s roughly 7x the storage price, but it’s closer to your compute and faster (and cheaper) to get the data out of. That it’s an object (not file) store can be overcome with the use of…

S3 Mount

A while back I worked on a project requiring S3 to be mounted in a Linux system – we used s3fs because that’s what there was, and it really worked fine. Since then, AWS has released Mountpoint for Amazon S3 – what’s this about?

Mountpoint also lets you mount S3 as a filesystem, but it has some pretty big differences. Basically, Mountpoint is more performant but has fewer features. If you need high-throughput reads from multiple clients (maybe using S3 Express One Zone?), it’s great. If you want to be able to rename a file/object, it can’t do that. s3fs is more general purpose and implements more POSIX features, but I don’t think it’ll be anywhere near as fast. For some workloads EFS or FSx might be the right fit instead – ymmv.

Slightly deeper details on Mountpoint semantics: https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md

Terraform Associate – Why, How, What Next?

I’ve just renewed my Terraform Associate certification. Why? One set of reasons relates to my employer – I understand it’s useful for our relationship with Hashicorp, it may look good to our customers, and it’s certainly good for my KPIs. So what?

Are there other reasons? Should other people bother with this? Is it really worth looking at as an employer? Yes, yes and yes. And here I begin, followed by some exam tips and next steps…

Terraform Associate value

The syllabus covers a load of domains, but basically if you cover it sufficiently through a course (or by following the exam guides) you’ll have knowledge of:

  • Basic IaC and how Terraform works – what we want (our config), what terraform thinks we have (state) and what’s in place in AWS (etc) – and how we keep this all together.
  • Terraform providers – how the core system does the basics, but plugins (providers) let you work with CSPs or… well, I’ve used it for GitHub repos and I found a not-quite-mature-enough plugin for LogicMonitor.
  • Terraform language, variables, functions, and how to use modules to make life simpler.
  • Terraform commands – so how to make it actually do something with the config you’ve written, also how to play games with the state file.
  • Terraform workspaces and Terraform Cloud – neither of which I’ve used, but you need these for the exam, and I’ve no doubt they’re good to know.

Terraform Associate value as a developer

So the syllabus covers the essentials of Terraform. So what? Well, if you work through the syllabus, coupled with experience or a decent course, then you’ll understand some key areas:

  • Basic use
    Seems obvious, but you do need to get to grips with the language and commands. Sadly I reckon it’d be possible to pass the exam without playing with code, but I think it’d be faster to learn by doing. Follow the syllabus and any tutorials and you’ll end up with a good grasp of the basics – this’ll help you pass the exam, but also give you confidence to use the tool in real life.
  • State
    Okay, I was petrified of playing with state for ages, because I first encountered Terraform in a very large and complex environment where there were sometimes state issues – in particular from removing a resource from the middle of a bunch provisioned through a list, and from only ever running Terraform through Jenkins, where a failed run would leave a stale lock I couldn’t undo. The Assoc cert doesn’t cover state in massive depth, but it does teach the basics of reading state and manipulating resources. Just make sure your state file is versioned 😉
  • for_each
    This is a god-send. Count still has its place of course, but I mainly expect this to be 0 or 1 now – it’s just for conditional creation of a resource as far as I’m concerned. What’s wrong with count? Creating resources based on a count results in an ordered list with an integer index. The integer index is only an issue in that it’s slightly harder to see what’s what in state (and plan/apply). The biggie is that ordered nature. If you remove an item from the middle of the list then all the items after it will need a destroy/create as they shift up. The alternative could be some funky state games which could be scripted, but are probably best avoided.
    for_each instead gives us a nicely named bunch of resources, and we can insert and remove as we wish – there’s a small sketch of the difference below.
  • Dynamic blocks
    I don’t think they were in the Assoc cert I passed in 2021. They confused me when I first encountered them, but now they’re just normal. The Assoc syllabus might not make you an expert, but it’ll be a good introduction, and this is something you want to get your head around.
  • Modules
    Love modules. They’re awesome. You can certainly use Terraform like you’d use Cloudformation (or how I’ve seen it in the one project that used it) – write everything out again and again and again, copying endless blocks of text, or you can embrace DRY and re-use code. Even better, you can re-use someone else’s code.

There’s lots more (workspaces, policies, Terraform Cloud), but not all places use these. Awareness of these will doubtless help get you started in a project using them, but I think you could probably skill-up quickly enough should it be needed.
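
To make that count-versus-for_each point concrete, here’s a minimal sketch (the bucket-name variable and values are made up for illustration):

# With count, instances are addressed by position: aws_s3_bucket.counted[0],
# [1], [2]. Remove "beta" from the middle of the list and "gamma" shifts
# from index 2 to index 1, so Terraform wants to destroy and recreate it.
variable "bucket_names" {
  type    = list(string)
  default = ["alpha", "beta", "gamma"]
}

resource "aws_s3_bucket" "counted" {
  count  = length(var.bucket_names)
  bucket = var.bucket_names[count.index]
}

# With for_each, instances are keyed by name: aws_s3_bucket.keyed["alpha"]
# etc. Removing "beta" only touches that one instance.
resource "aws_s3_bucket" "keyed" {
  for_each = toset(var.bucket_names)
  bucket   = each.value
}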

Terraform Associate value as an employer/customer

I work in a consultancy, so I have two bunches of people to think about – my employer and my prospective customers/projects.

As an employer you want to know there’s a certain level of expertise in your staff. I feel this cert does actually give that. When I first passed I still considered myself not very good, but the cert had at least drilled the basic good practices into me and added an understanding of state. It also taught me about Terraform Cloud and workspaces, neither of which are useful to me yet, but I get they *could* be.

Essentially it provides a baseline – if someone’s passed this then they do know some stuff. If they’ve passed it through playing with code, all the better, so it’s worth providing a playground (AWS labs, whatever) so they can create and destroy stuff without much harm. Just make sure the cost implications are understood!

As a customer you probably want to know that prospective team members will work together effectively, quickly. If they’ve all relatively recently passed this then you know there’s a common understanding of the Hashicorp best practices, such as:

  • That terraform fmt and validate are good things. These help enormously. Ideally these would just be “normal” in any team, built into workflows – this could be as an IDE plugin (VS Code can fmt as you go), through pre-commit hooks, or through CI/CD. Basically though, if all devs do these in some way from the beginning, life is better.
  • That secrets are stored in the right place. You really don’t want a rogue dev putting a secret somewhere silly. The cert covers this pretty well.
  • That modules are the way to go. They reduce repeated code and help people deploy standardised resources. You don’t really want to have to change every terraform file that mentions a resource just because there’s now a requirement to add a feature – put this into the module and go.

But a dev needn’t have passed the cert to be good – there are some truly stellar devs I know who’ve not taken it recently (or at all) and I’m in awe of their Terraform-fu – it’s just that if they have recently passed then there’s a baseline you know you can expect. The really awesome devs will just be obvious anyway.

Learning Terraform

Approaching the exam

This depends on your experience.

  • If you’re fairly new to Terraform then I think you’d be best served by working through a course. If your workplace has one available, ask others what they think of it. Otherwise you could get something on Udemy (other training providers are available) or schlep through the Hashicorp materials.
  • If you’ve been using this a lot then you know the language, so you’ll probably need to swot up on the things you don’t use every day. I used a general summary of the syllabus to find my weaknesses and dipped into the documentation/tutorials. For what it’s worth, I knew little about Terraform Cloud, workspaces, policies.

Take practice tests! These are worth buying, I used the Bryan Krausen set on Udemy, but you may have others available or recommended. I took four of the five question sets available, passing each one fairly well. I’d intended to take the last on the morning of the exam, but didn’t wake up in time.

Taking the exam

I really hated this. It’s online only, and I far prefer exam centres. It wasn’t helped by me waking up only an hour before my 1pm exam due to my stinking cold.

Here’s what you need to sort out:

  • Your laptop needs to have passed the basic checks beforehand. Work laptops can be an issue if they’re locked down, so do this way in advance so you can sort out a Plan B.
  • You need to install the Secure Browser, but you’re only able to do this 30 mins before the exam, which threw me. My laptop passed the checks but somehow the secure browser had issues with the web cam. When I then installed it on my work laptop it got past that stage, but told me the network connectivity in that back room wasn’t good enough.
  • Your test environment needs to satisfy their requirements. The proctor will ask you to show all parts of the room to make sure there’s no funny business going on (people to help, recording devices). Having had to move to a different room, this was a bit of a faff – I had to move things out of sight. The proctor asked me to show under the cushions next to me on the sofa, and round the side and back of the sofa. I really get why this is so; it was just unwelcome after the earlier palaver.

In the exam itself, do not speak. I uttered “Oh my gawd” (at feeling so rough with my cold) and my screen flashed up a warning about talking to myself.

The actual exam itself is multiple choice (single answer or pick two/three), and there was one question expecting me to type a single “thing” (being careful with the NDA here!). Normal rules and methods apply – if not sure then eliminate the stupid options and work out the most likely, and flag anything you have a doubt about. Sometimes a bit of info in one question will make you think twice (or give a hint) about another.

Next steps

Use the language. Build stuff and destroy it.

Things to work on or read up on:

  • Read and absorb Google’s Terraform best practices. I don’t care if you’re not using GCP, what it says is pretty good all round.
  • Now read it again.
  • Play with state – create resources manually and import them, get used to looking into the state file and breaking things.
  • Create modules, shove them in git and version them. Play with using different module versions for different environments (there’s a sketch of this just after the list).
  • Create the remote state file using Terraform. This feels chicken-and-egg – how would you approach it? Why? Are there tools that can help you?
  • Play with Terragrunt (which might help with both of the above).
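
For the module-versioning bullet above, a minimal sketch of what that can look like. The repo URL, tags and parameter values are all made up – the point is the git source with a ?ref= pin per environment:

# Hypothetical git-hosted module, pinned per environment via ?ref= –
# prod stays on a known-good tag while dev tries the newer one.
module "parameter_dev" {
  source = "git::https://github.com/example-org/terraform-modules.git//parameter-store?ref=v1.3.0"

  name        = "s3/buckets/dev_bucket"
  description = "Dev bucket parameter"
  type        = "String"
  value       = "s3a://dev-bucket/temp"
}

module "parameter_prod" {
  source = "git::https://github.com/example-org/terraform-modules.git//parameter-store?ref=v1.2.0"

  name        = "s3/buckets/prod_bucket"
  description = "Prod bucket parameter"
  type        = "String"
  value       = "s3a://prod-bucket/temp"
}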

Serverless email on AWS

This has been done before by other people, but I fancy a stab at it.

The idea is to use all-serverless tech on AWS to build an email service. Entirely personal, single user, and really I don’t care about HA etc. This is just to replace the existing email service I use (and pay for annually), so I want to have this working before renewal in September next year.

What are the building blocks going to be?

  • S3 for message storage.
  • DynamoDB for message index.
  • API gateway for endpoints.
  • SNS/SQS for handling.
  • Lambda for the actual work.
  • Route 53 for DNS (got that in there already, but not terraformed, sadly)

I mention Terraform because of course this all needs to be in IaC. In an ideal world it’d all be delivered using pipelines, but that’s another small game to add with a fresh account.

Couch to 5k

One massive downside to WFH is that it’s removed the vast majority of my exercise – I used to walk about 15k steps a day in my commute, and following WFH and lockdown this diminished to nearly nothing.

My midriff has increased somewhat, and I don’t think my diet has changed sufficiently to take the blame.

Last year I saw a note on the company messaging system about a Couch To 5k group, and I thought it worth a try. Well, it’s making a difference. Not to my figure as yet, but I can certainly run more than when I started, and I can imagine that I’ll continue to gain strength/endurance as time progresses.

The issue now is that I’ve a dodgy right knee – it’s not been quite right since a nastily sprained ankle years ago – and I don’t think I can run currently. I’ll try to make an effort to walk at least, just to get exercise and some air in my lungs.

But yes, C25k – it’s worth a try!

Attempt… three?

This blog was started in 2018, by the look of things. There’s a previous one hosted on WordPress itself which I started in 2014, which I’ve just imported into this one.

Am I really going to get this going again?
Possibly, possibly not.

Why use this one instead of a hosted one?
I really really like the name of the hosted one, but by the look of things I need to pay to use some features (CSS is the one I noticed), and really? Why pay?

Anyway – here I am, and here I stay. Maybe.

Internet Visibility

Being escorted from an interview (job, not police!), I had a small chat with my prospective manager. I mentioned security checks, and he replied that he has a background in security (computer, I assume) and had looked me over online.

It’s entirely reasonable to do this. Sensible, in fact. LinkedIn, Twitter and my CV page are all safe assumptions – I broadly link these together and slightly publicise them. I was left wondering how much else he’d seen, but thought better of asking at that stage. Perhaps I’ll leave this as a challenge should I end up working there.

So, did he find my Facebook profile? Nothing worrying there as I’d kept the public persona pretty safe. Mainly pictures, some pro-EU bits and a few notes on security. A few more personal notes, but all intended for general consumption. Could he have seen my private FB bits? Probably not as I’ve not accepted any friends in a while, but impersonation would certainly be a path in.

Now, this blog. Quite a bit less visible, but could be found through a WHOIS search (based on my professional-related domain). My Amazon wishlist would be tricky to find if you don’t know a little about me, but once in you’d be able to understand some of my interests, which may help with social engineering….

Other blogs/sites? I’m not even sure which ones exist, but they may at least be indexed on the Wayback Machine, and the same goes for the various fora to which I’ve contributed over the years….