Unit Testing fetch calls in a Cloudflare Worker

Once you have a Worker set up, you probably want to write tests for it. In this guide I will show how to write tests for outbound fetch() calls using Vitest and Miniflare. I originally used the following guide to get my tests working, but it leaves out some details that may not be obvious to first-time Worker developers.

https://miniflare.dev/testing/vitest

Setup

Install Vitest

$ npm install -D vitest-environment-miniflare vitest

If you are not using service bindings, you can use the following vitest.config.ts file:

import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "miniflare",
  },
});

Finally, add the following to your tsconfig.json to enable TypeScript support and type checking:

{
  "compilerOptions": {
    ...
    "types": [
      "@cloudflare/workers-types",
      "vitest-environment-miniflare/globals" // add this line
    ]
  }
}

Minimum Worker Example

To be able to test a Worker I had to either use the Service Worker format or have a handleRequest() function that manipulates the request and is called from the Module Worker's fetch handler. Here is an example worker:

export default {
	async fetch(request: Request): Promise<Response> {
		return await handleRequest(request);
	},
};

export async function handleRequest(request: Request): Promise<Response> {
	const response = await fetch(request);
	// Responses from fetch() have immutable headers, so copy into a new Response
	const newResponse = new Response(response.body, response);
	newResponse.headers.set("x-new-header", "hello world");
	return newResponse;
}

This simply makes a request to whatever URL was passed in, adds a new header, and returns the response.
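
The `new Response(response.body, response)` line is worth noting: headers on a `Response` returned by fetch() are immutable, so to modify them we construct a new `Response` from the old body and the old response as init. A minimal sketch of the pattern, runnable anywhere the `Response` global is available (Workers, or Node 18+):

```typescript
// Headers on a fetched Response are immutable; build a copy to modify them.
const original = new Response("hello", {
  status: 200,
  headers: { "content-type": "text/plain" },
});

// Passing the old Response as the init copies its status, statusText and headers.
const copy = new Response(original.body, original);
copy.headers.set("x-new-header", "hello world");

console.log(copy.status); // 200
console.log(copy.headers.get("content-type")); // text/plain
console.log(copy.headers.get("x-new-header")); // hello world
```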

Writing Tests

This can now be tested using the following:

import { expect, it } from "vitest";
import { handleRequest } from ".";

const describe = setupMiniflareIsolatedStorage();

describe("Worker", () => {
Read Full Post...
April 27, 2023 · 2 min

Vary Header

The vary header is a response header used to create different variations of an object in the cache. In this post I will show how the vary header works, how we can use it, and, more importantly, ways not to use it.

How the vary header works

The most common use case for the vary header is to vary on the Accept-Encoding header. Here, the response returned differs depending on which content encodings the browser supports. The response header sent by the origin would look something like the following:

vary: Accept-Encoding

To generate the response, the cache compares the Accept-Encoding request header value with the content-encoding returned by the origin:

# request header:
Accept-Encoding: gzip, deflate

# response header:
content-encoding: gzip

In this case the browser supports gzip and the resource stored in cache is gzip-encoded, so we can return it without further processing.
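
Conceptually, the cache lookup can be sketched as follows. This is only an illustration of the matching logic, not any real cache's implementation; the variant store and the header parsing are simplified assumptions (real caches also have to deal with q-values and the identity encoding):

```typescript
// A cached variant: the stored body plus the encoding it was stored with.
interface CachedVariant {
  contentEncoding: string; // e.g. "gzip"
  body: string;
}

// Return the first cached variant whose content-encoding the client accepts.
function pickVariant(
  acceptEncoding: string, // e.g. "gzip, deflate"
  variants: CachedVariant[],
): CachedVariant | undefined {
  const accepted = acceptEncoding.split(",").map((e) => e.trim());
  return variants.find((v) => accepted.includes(v.contentEncoding));
}

const variants = [{ contentEncoding: "gzip", body: "<compressed bytes>" }];
pickVariant("gzip, deflate", variants); // matches the gzip variant
pickVariant("br", variants); // undefined -> cache miss, go to origin
```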

Where the vary header does not work well

Issues with caches supporting the vary header come from more advanced use-cases. Here’s an example where we cache variations on the Accept-Language header on top of the previous Accept-Encoding header.

vary: Accept-Language, Accept-Encoding

This would allow us to serve different versions to, say, English and French speakers based on their Accept-Language header:

Accept-Language: en
Accept-Language: fr

The problems come from the fact that browsers have many different ways of saying the same thing. All of the examples below are ones where English should be returned:

en
en-us
en-US,en;q=0.5
en-US
en-US,en;q=0.8
en-US;q=0.8,en;q=0.6,fr-CA,fr;q=0.4

The last one is an example of one where the user speaks both English and

Read Full Post...
April 25, 2023 · 5 min

Friday Deploys

I used to be against Friday deploys. We all know the memes and wisdom that tell us Friday deploys are a bad thing. I never really stopped to consider otherwise. On a basic level, having a deploy freeze on a Friday makes sense for a number of reasons. For one, we don’t want to be stuck working all Friday evening or all weekend if things go bad. It’s easier to justify working a weekday evening than a Friday evening, when many have other plans. Another reason is that we’re highly likely to have mentally checked out on a Friday, especially late in the day. This causes us to rush things out without properly checking them and giving them a once over, which causes more issues. The superstitious side of me remembers the few Friday deploys I did that went wrong for one reason or another and uses that to justify never deploying on Fridays.

But is this thinking right? Back when deploys were big events requiring lots of manual intervention and monitoring it did make sense, but a core idea of modern software engineering and devops is to make deploys a non-event. Continuous deployment and the pipelines that come with it should take care of making sure a change works as expected, so we should have full confidence in our deploys.

FRIDAY DEPLOY FREEZES ARE EXACTLY LIKE MURDERING PUPPIES

This piece by Charity Majors helped shift my thinking on Friday deploys. Ignore the extreme headline for a minute and just read the piece. It goes into the many

Read Full Post...
March 31, 2023 · 3 min

Increasing resource quotas in GKE

Resource quotas in GKE managed clusters are hard limits on the number of resources that can be created in a namespace. In many cases you may never run into them, but the defaults are set low enough that a cluster with more workloads can hit them. The quotas increase as you add more nodes to the cluster, but for clusters with aggressive scale-to-zero or lots of small pods this increase may not happen. In this post, we will go through how to increase or even remove the quotas from the cluster.

How to see the resource quotas

There is a resourcequota resource created in each namespace when you create the cluster. The default limits are:

  • ingresses.extensions: 100
  • ingresses.networking.k8s.io: 100
  • jobs.batch: 5k
  • pods: 1500
  • services: 500

You can use kubectl to see the resourcequota for a particular namespace:
$ kubectl get resourcequotas
NAME                  AGE   REQUEST                                                                                        
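
For reference, the defaults listed above correspond to a ResourceQuota object along the following lines. This is an illustrative reconstruction using the standard ResourceQuota schema; in GKE the object is named gke-resource-quotas, and the exact keys may differ by cluster version:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gke-resource-quotas
  namespace: default
spec:
  hard:
    count/ingresses.extensions: "100"
    count/ingresses.networking.k8s.io: "100"
    count/jobs.batch: 5k
    pods: "1500"
    services: "500"
```
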
Read Full Post...
March 23, 2023 · 2 min

Pareto Principle in Software Engineering

I’ve recently read this piece by Casey Cobb about how we can use the Pareto Principle in software engineering. The Pareto Principle, or the 80/20 rule, says that, for example, 80% of car accidents are caused by 20% of drivers. In this piece Casey goes into how we can use this knowledge in software engineering to ship code faster.

https://projectricochet.com/blog/software-development-pareto-principle-and-80-solution

Applying the Pareto Principle to software, the following should hold:

  • 80% of features are covered by 20% of the codebase
  • 80% of complexity is in 20% of the codebase
  • 80% of engineering time is spent on 20% of the codebase
  • 80% of customers only care about 20% of the features

The key idea of the piece is that if all of the above are true then we should be able to develop 80% of the application in 20% of the time.

The Bad

My initial reaction was that this is a massive over-simplification of software engineering. It’s not as easy as just stripping out 20% of the code so everything becomes much quicker. For example, the deployment process may take time to set up and configure, along with other plumbing which is required for the app to run but cannot simply be removed. Similarly, auth and external APIs often take time depending on how good the docs are, but those can be core to the use of the entire app, not just individual features.

The Good

However after thinking about it some more I think the core idea of the piece is sound once proper communication

Read Full Post...
March 20, 2023 · 4 min

Book: Software Craftsmanship

Book: The Software Craftsman: Professionalism, Pragmatism, Pride
Author: Sandro Mancuso

As the title of the book says, we should treat coding as a craft rather than just another job. We should be deliberate about the code we write, but also about everything in our careers: the companies we work for, the learning we do, right through to when we should search for a new job. So many people, not just software engineers, go through life without fully considering these decisions ahead of time. This book aims to make you realise we should be deliberate about everything we do.

This book is mainly concerned with the general concepts of the craft rather than the nitty gritty of technical details. I feel a lot of what the book has to say has become fairly common knowledge across the industry, but it is still no harm to hear it said out loud. For example, he considers a separate QA phase a waste of time, since proper unit testing should leave no surprises at the end.

Two key ideas of the book focus on software craftsmanship and continuous learning, so I will outline those below.

Software Craftsmanship

Software Craftsmanship is about professionalism in software development.

This is about mindset as much as anything else. Instead of treating writing code as just another job, we should be deliberate about the decisions we make and follow best practices. Well-crafted software does not just work. It should be easy to understand and maintain. What it does should be predictable

Read Full Post...
March 15, 2023 · 6 min

Why the Dutch love cycling

Go almost anywhere in Amsterdam, or more widely in the Netherlands, and you’ll find that seemingly everyone cycles. Certainly more than in most other western nations. But why is this so?

Weather and landscape play a factor. Having nice weather, basically just not raining, and flat roads to cycle on play major roles, but I think the more important one is that the infrastructure is there. Look around and you see segregated cycle lanes that don’t just end randomly like they do in many other cities. They’ve even built an underwater bike shed instead of leaving it up to you to find somewhere to put your bike like many other cities do.

Socially too it is accepted or even expected to cycle.

So why is this so? If you went back to the post World War 2 days of the 50s and 60s, you’d probably find the popularity of cars rising at the same rate as in most other western European nations. Oil was cheap and the car was the ultimate expression of freedom. The continent was rebuilding and the economies were growing rapidly.

This all began to change in the early 1970s with a series of major oil shocks. Over the decade the price of a barrel of oil rose from about 3 dollars to about 12 dollars, a 300% increase. For comparison, today the price of oil sits around 80 dollars; a similar increase would move it to 320 dollars. Nowadays we are somewhat used to oil prices fluctuating daily but back then in the post World War years they

Read Full Post...
February 21, 2023 · 3 min

Cloudflare redirect

In this article I will walk through how to create page redirects in Cloudflare. The redirect I’m creating here goes from one page to another, for example from /blog to /posts. In many cases it may be preferable to set these redirects up in Cloudflare, as you avoid extra requests going to the site, which results in lower load and cheaper running costs. Cloudflare also serves these requests closer to wherever the visitor is coming from, so it leads to a faster experience for them too.

Cloudflare have recently updated their redirects feature which may cause confusion as most of the posts I’ve found online relate to the old way of doing this through page rules.

Example

The example I’m using here is to redirect the homepage (/) to the /posts page.

  1. First navigate to the redirect rules page which is under the rules section in the Cloudflare dashboard
  2. The incoming request should match whatever you choose; in my example it will be “URI Path” equals /. This is represented in the expression preview box as (http.request.uri.path eq "/")
  3. As this is a single path we can use a static URL redirect and set the URL to be /posts. I also want to keep the query string (e.g. /?foo=bar) so I will tick that box. Status code can be whatever you like, I’ve a section below with more detail. In the end it will look like the following screenshot:

Cloudflare Redirect!
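
To make the semantics concrete, the rule configured above behaves roughly like the following Worker-style sketch. This is only an illustration of what the rule does (path match, query string preservation, status code), not how Cloudflare implements it:

```typescript
// Sketch of the redirect rule: / -> /posts, keeping the query string.
function redirectRule(request: Request): Response | null {
  const url = new URL(request.url);
  if (url.pathname !== "/") return null; // rule does not match
  url.pathname = "/posts"; // url.search (e.g. ?foo=bar) is preserved
  return Response.redirect(url.toString(), 301);
}

const res = redirectRule(new Request("https://example.com/?foo=bar"));
console.log(res?.status); // 301
console.log(res?.headers.get("location")); // https://example.com/posts?foo=bar
```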

Testing it works

Here’s an example

Read Full Post...
February 7, 2023 · 3 min

State of the blog

At the start of this year, 2023, I set out the goal to write 100 blog posts by the end of the year. I knew this was an ambitious goal but it seemed vaguely achievable. One hundred seems a large enough number but twice per week shouldn’t take up too much of my time. It allows enough slack that some weeks can be less while others the volume increases to meet the goal.

However, things have not gone according to plan, to say the least. It’s now the end of January and I’ve completed a grand total of 3 posts. To be on track with my goal I should have about 8 completed by now, so I’ve fallen well short. This piece is my January retro to try to dig into why I’ve fallen short and what I can do going forward to attempt to salvage the original, and increasingly ambitious, goal.

Initially I wanted this blog to be something where I could write out my thoughts on programming and learn along the way. It was going to be a place where as I came across cool new tech or concepts I could write about them. They say to truly show you understand something you should teach it to someone so that’s what this was going to be about. I was focusing on software engineering because that’s where I spend most of my time anyway at my day job. Writing about this would allow me to think out loud on topics that I may only touch on so

Read Full Post...
January 31, 2023 · 4 min

Kubernetes Multi Cluster

Kubernetes itself is great for scheduling within a single cluster, but if we have many clusters spread across geographies, we start to need something above the cluster level. This is where Kubernetes Multi-Cluster comes in, allowing us to manage apps across clusters.

Ask nearly anyone who uses kubernetes whether they prefer treating applications like cattle or pets and nearly all will say cattle. Pets require special individual attention, while cattle are managed in herds, at least that’s what the metaphor explains. In kubernetes land it means using a deployment or daemonset to create your pods and letting kubernetes take care of the rest, with things like scheduling and restarting.

This is easy to manage for a single cluster but becomes increasingly difficult as the number of clusters scales up. Creating the same deployment on each cluster becomes tedious and error prone.

In this post I’m going to consider the situation where an internal platform team creates and manages clusters with some base level of services, for example logging, metrics and ingress, along with a few others. Other teams are then allowed to deploy their apps to these clusters.

For our own services and a low number of clusters the most straightforward way would be to hook up some CICD system like jenkins to be able to deploy to each cluster. There may be some unique config per cluster so each app would get templated out and then we could use something like helm to generate the config as we deploy. However this process becomes unwieldy as the

Read Full Post...
January 17, 2023 · 5 min