How Big Things Get Done

Monju is an extreme case, but it’s not in a category by itself. Far from it. Nuclear power plants are one of the worst-performing project types in my database, with an average cost overrun of 120 percent in real terms and schedules running 65 percent longer than planned. Even worse, they are at risk of fat-tail extremes for both cost and schedule, meaning they may go not just 20 or 30 percent over budget, but 200 or 300 percent. Or 500 percent. Or more. There is almost no limit to how bad things can get, as Monju demonstrated so spectacularly.

How Big Things Get Done - Bent Flyvbjerg and Dan Gardner

What was so spectacular about Monju was that over the entire lifetime of the project, from achieving criticality in 1994 to decommissioning in 2016, a lifetime of only 22 years, it contributed to the grid for roughly one hour. For about 10 billion dollars. The table below gives a general timeline of the main events, but yes, it was a spectacular failure.

Year       Event
1985       Construction begins in Tsuruga, Fukui Prefecture
1991       Plant commissioning starts
Apr 1994   Achieves initial criticality (self-sustaining nuclear reaction)
Aug 1995   Connects to grid and generates first electricity
Dec 1995   Sodium coolant leak causes fire, forcing shutdown
2000-2005  Legal battles over reactor safety; Supreme Court approves restart
May 2010   Restarted after 14 years offline
Aug 2010   Shut down again after a fuel-handling accident
Dec 2016   Japanese government officially decides to decommission
2047       Planned completion of full decommissioning (30-year process)

How does the book propose we avoid these spectacular failures? The key issue with these

Read Full Post...
February 13, 2025 · 3 min

Shop Class as Soulcraft

When Henry Ford introduced the assembly line in 1913, workers simply walked out. One of Ford’s biographers wrote, “So great was labor’s distaste for the new machine system that toward the close of 1913 every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963.” https://www.thenewatlantis.com/publications/shop-class-as-soulcraft

When I think of the assembly line and the supposedly better working conditions brought in by Ford, I would have assumed workers were clamouring to get these well-paying, well-treated jobs. But the opposite must have been the case: Ford had to hire nearly ten times as many men as there were positions to fill.

I’ve read the article this is from but not yet the book it inspired. The basic premise is that work evolved from a craft, something to be proud of and based on skill, into assembly-line labour where no real skill was required.

The article also lays out how this new assembly-line work gave rise to consumer debt, likely welcomed by employers as a way to hook employees on regular payments too.

The habituation of workers to the assembly line was thus perhaps made easier by another innovation of the early twentieth century: consumer debt. As Jackson Lears has shown in a recent article, through the installment plan, previously unthinkable acquisitions became thinkable, and more than thinkable: it became normal to carry debt. The display of a new car bought on installment

Read Full Post...
February 12, 2025 · 2 min

How I made this blog

It was very much on a whim that I decided to make this. I’d wanted to do something like this for a while but could never get started on making the site. I know I’ll be more interested in something I completely control and customise, but it also needs to be easy to start and stick with. There’s no real use if it takes ages to get started, so above all else it needs to be easy to add to.

I already use Obsidian for general note-taking, so that’s what I’ll write in, and everything else must build off that. This means markdown, with bonus points if it’s easy to translate Obsidian-flavoured markdown to whatever standard the renderer uses. Markdown is great, but every tool seemingly has its own variants or additions to the language, so making sure compatibility is 100% across everything will be a challenge.

Astro

I started trying Astro because people have talked about it a bit recently. However, I’m not a frontend dev, so I decided very early on to rely on AI tools to generate, at the very least, an initial template I could build off. I found a theme I liked and set it up, but once I got to customising it I ran into roadblock after roadblock and soon gave up. I’m not interested in spending hours and hours getting started, so I’m not doing that now. Perhaps down the line I’ll revisit this and redo

Read Full Post...
February 9, 2025 · 4 min

Go chi router jwtauth

I’ve been using the Go chi router in one of my projects recently, and for the most part it’s been great. I’ve just got a simple router with a few routes, so I’ve not pushed it to any advanced use cases, but it seems simple and extensible enough that it should get a long way.

The latest thing I needed to do was add JWT authentication to two routes. This seemed simple and straightforward, as they provide a default JWT middleware implementation called github.com/go-chi/jwtauth. It’s easy to get going with, and I was able to get it accepting a single token relatively quickly.

However, as I said above, I needed to accept multiple tokens. Maybe the library could be modified in some way to do that, but it doesn’t make it easy, and to me it did not look worth the work it would require. This is especially unfortunate as JSON Web land has something called JSON Web Key Sets, which let you accept tokens signed by any of several keys. There is a PR to accept JWK Sets, but it has had no activity lately, so it’s not clear if it’ll happen any time soon: https://github.com/go-chi/jwtauth/pull/71

Instead, in the end I created my own middleware to read a JWK Set using the same underlying library as jwtauth: github.com/lestrrat-go/jwx. As I said above, chi makes it easy to create middleware, and the lestrrat-go library makes it easy to accept JWK Sets.
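As a rough, stdlib-only sketch of the middleware shape: the keySet map and hand-rolled HS256 helpers below are stand-ins for the parsed JWK Set and the jwx verification call, not real jwx API, and since chi’s Use accepts any func(http.Handler) http.Handler, nothing chi-specific is needed.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"net/http"
	"strings"
)

// keySet maps key IDs to HMAC secrets, standing in for a parsed JWK Set.
type keySet map[string][]byte

// verifyHS256 checks a compact JWS (header.payload.signature) against
// every key in the set, returning true if any key's signature matches.
func (ks keySet) verifyHS256(token string) bool {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return false
	}
	sig, err := base64.RawURLEncoding.DecodeString(parts[2])
	if err != nil {
		return false
	}
	signed := []byte(parts[0] + "." + parts[1])
	for _, secret := range ks {
		mac := hmac.New(sha256.New, secret)
		mac.Write(signed)
		if hmac.Equal(mac.Sum(nil), sig) {
			return true
		}
	}
	return false
}

// signHS256 builds a token for the demo below.
func signHS256(secret []byte, claims string) string {
	header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"HS256","typ":"JWT"}`))
	payload := base64.RawURLEncoding.EncodeToString([]byte(claims))
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(header + "." + payload))
	return header + "." + payload + "." + base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// Authenticator rejects requests whose bearer token no key in the set signed.
func Authenticator(ks keySet) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
			if !ks.verifyHS256(token) {
				http.Error(w, "unauthorized", http.StatusUnauthorized)
				return
			}
			next.ServeHTTP(w, r)
		})
	}
}

func main() {
	ks := keySet{"old": []byte("old-secret"), "new": []byte("new-secret")}
	token := signHS256([]byte("new-secret"), `{"sub":"demo"}`)
	fmt.Println(ks.verifyHS256(token)) // a token signed by either key verifies
}
```

In the real version the verify step is a single jwx call against the parsed set, but the request-wrapping shape is the same.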

This can be done in a two step process. The first is to parse

Read Full Post...
January 11, 2024 · 2 min

Rotating JWK

JSON Web Keys (JWKs) are the keys used to sign and verify JSON Web Tokens (JWTs). Here I will explain how best to rotate them.

If you’re coming from zero like me and building something that uses a JWT to verify that a request is what it should be, then the first thing you’ll do is use a single JWK to verify the JWT. Using the github.com/lestrrat-go/jwx library in Go, it can be done like this:

parsedKey, err := jwk.ParseKey(key) // parse the raw JWK; check err before use
parsedToken, err := jwt.ParseString(token, jwt.WithKey(jwa.HS256, parsedKey)) // parse the token and verify its signature with the key

However, now any time you change the key you’re going to run into problems, because it’s a hard break: the old token stops working straight away and only the new token is accepted, with no overlap period between them. This is a problem for APIs, as you’re going to have a whole load of failed requests before you can get the new token to each client. Sure, this may be okay if you think you can roll it out quickly enough, or you can add some extra code to the clients, but this extra complexity is not really what we want.

Instead it’s better to use JSON Web Key Sets (JWK Sets), which are arrays of JWKs like the example below. You can see we have two keys with different key (k) and key ID (kid) values to differentiate them. Any key in the set will be accepted, so when you’re rotating a key you can just update the set to hold both the new and old JWKs. Once you’re finished updating

Read Full Post...
January 11, 2024 · 2 min

Exponential Backoff and Jitter

If retries aren’t randomly distributed over the retry window, a small perturbation (e.g., a network blip) can cause retry ripples to schedule at the same time, which can then amplify themselves

Using exponential backoff is great because if you keep retrying over and over without limits you’re eventually going to break something. Exponential backoff increases the retry period exponentially until you eventually give up. Jitter takes this one step further by adding a random offset between retries, so that if all requests fail at once they are not all retried together after a short blip. Without jitter you may cause a ripple where all requests are retried at once, causing further issues. For example, a network being down for one second will cause all requests to pile up and be retried at once, then again and again, snowballing until a simple network error causes a wider server outage.

https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ https://sre.google/sre-book/addressing-cascading-failures/

January 5, 2024 · 1 min

PR Reviews

Characteristics of a good change

Reviews: As a reviewer, know the change isn’t going to be perfect, so don’t hold it to being 100% perfect. That means avoiding nitpicking as much as possible. Focus on moving faster and changing things continuously rather than getting it right straight away: the change is just one part of a process, and continuously improving the codebase will bring it up to a better standard. Be open to the author’s approach instead of expecting it to be done a certain way.

Comments should be treated as a learning opportunity where you share your knowledge about language features and the codebase. Avoid personal criticism in reviews as it’s easy for some comments to be taken personally.

Changes: Aim for small changes above all else. Prefer a series of smaller changes over one large one all at once; as a rule of thumb, each should be under 200 lines of code. Each change should maintain or improve the health of the codebase.

Outcome: Following these guidelines means the majority of changes should be small, need only one reviewer, and attract no comments. The majority (70%) should be committed less than 24 hours after review is requested.

Google Critique: one feature of Critique is that it runs static analysis tools that provide actionable feedback automatically. This avoids many of the nitpicking comments that hold up reviews. It also makes it obvious who is supposed to take the next action.

https://read.engineerscodex.com/i/139414745/googles-code-review-guidelines

January 5, 2024 · 2 min

Git Identities

Background

I have one laptop with both personal and work projects. I have organised code into two folders, work and personal, and want to use a different git identity for each:

  • /code/work
  • /code/personal

.gitconfig

The basis for the solution is that the main .gitconfig can conditionally include other config files, so we create a separate config file for each identity. For example, here is the main file:

[user]
    name = Mutable Comment
    email = [email protected]

[includeIf "gitdir:~/code/work/"]
    path = ~/.gitconfig-work

The .gitconfig-work file will look like this:

[user]
    name = Mutable Comment
    email = [email protected]

This setup allows me to use a different git identity per folder. For this particular setup, the default is my personal account but the /code/work folder will use the work email.

https://garrit.xyz/posts/2023-10-13-organizing-multiple-git-identities

December 19, 2023 · 1 min

Golang Error Handling

Tips for better error handling in go:

  • Wrap the error being returned using %w and fmt.Errorf()
  • Avoid words like failed or error - it is an error so we know something went wrong
  • Use present tense to describe what the code is trying to do
// good
fmt.Errorf("connecting to db: %w", err)
// bad
fmt.Errorf("could not connect to db: %w", err)

What makes Go’s error handling different is the opportunity it gives the programmer to tell a story. The trick to telling the story right is to add meaningful context to the error wherever possible. In Go, adding error context literally means expanding the message of the error you’ve just received with some explanatory text of what you were doing when the error occurred.

The error type in Go is a simple interface, exposing an Error() method returning a string. So, for practical reasons, all errors in Go can be equated to strings (although you can make them more complex if you want to).

December 19, 2023 · 1 min

F1 Expansion

I love watching Formula 1. I fell in love with it for the race between the engineers as much as the drivers, both pushing the limits of what can be done within the legal boundaries, and outside them too without getting caught. But in recent years F1 has become a victim of its own success: it has become so popular that it cannot put on enough races to satisfy demand. Races draw crowds of hundreds of thousands, with millions more worldwide watching on TV.

With this success Formula 1 has tried to expand the series by adding more races, but 25 seems to be about the upper limit that can fit into a calendar year, or at least not many more. Dragging the entire circus halfway across the world for a race, and then back again the following week, eventually catches up with the series. Heritage circuits like those in Europe that draw massive crowds are coming under pressure from tracks in the Middle East spending huge money to host Grands Prix, with Formula 1 potentially losing some of its history and appeal as those tracks fall off the calendar. Additionally, the number of teams cannot be expanded much more, even though plenty of parties have expressed interest in doing so. One more thing that could be improved is that some races are plain boring. Even the championship itself is boring some years: with six races left this year, the Drivers’ and Constructors’ Championships are already decided. Finally

Read Full Post...
October 10, 2023 · 8 min