IoT p0wnership

The recent total war bombardment of Brian Krebs’ site, and the subsequent allegation that the traffic emanated from compromised home routers, cameras, baby monitors, doorbells, thermostats, and whatnot, got me thinking.

So DDoS is a thing, and as much as I enjoy the lampooning of IoT by everyone’s favourite wat account, @internetofshit, I wonder if the status quo of insecure consumer devices will have an unexpected knock-on effect.

Previously, DDoS traffic was assumed to come from compromised servers (waves hand in the approximate direction of the cloud) or malware infected PCs. For the former, cloud providers have gotten pretty good at rooting out insecure hosts and booting them off their networks, and organised crime have pretty much figured out that deleting stuff off people’s home computers is less profitable than encrypting said stuff and holding it for ransom. There’s always the chance of someone spotting unexpected outbound traffic from a box — that is one thing the host-based AV industry does seem to be good at — because there’s usually a human sitting in front of the device whenever it’s awake. But not so with the cable router or ADSL modem sitting under the hall table, or the IP connected baby monitor you installed in the nursery, or the hundreds of other IP connected whatevers produced for the lowest possible cost because that is what we, as consumers, demand; price, the ultimate arbiter of quality.

My home router. What’s it doing? I’ve got no idea.

Homes filled with tiny Linux boxes running weak software are a tempting target. Not just because owning them is easy, but because, as long as the devices continue to work as reliably as they did before compromise, nobody is going to suspect that their excess capacity is being soaked up under someone else’s control. Embedded devices have other attractive properties: they’re usually online 24/7, not sporadically like a laptop trying to conserve battery power, and they commonly enjoy a wired ethernet connection, not the whims of a rapidly changing WiFi network. Can any of you reading this post tell me that you know the provenance of every packet that leaves your home network?

But back to Krebs and the IP cameras. With the nose dive in desktop and laptop sales, it’s pretty clear that the botnet action has moved to the embedded space. Assuming the attribution is correct, DDoS has gone from being manageable to very not manageable, quickly. It’s the early 2000’s spam wars all over again, and companies that make real money on the internet are going to expect a solution. And when I say solution, I mean litigation.

Who’s to blame for shitty insecure consumer devices? Who’s going to receive the summons?

Will it be the local ISPs, or carriers? Unlikely. In the States, carriers enjoy common carrier status, which indemnifies them from crimes committed using their service. In Australia the situation is less clear, but the message isn’t: ISPs are not interested in policing the behaviour of the users of their service. If you’re Walmart, forced offline by 1Tbps of traffic during the holiday sales, you can forget about suing ISPs.

Ok, what about the device manufacturers themselves? I’ll be honest, I don’t have the stamina to read the EULA paperwork that comes with a device, so I cannot assert this as a fact, but I would be amazed if the liability for the damage the device did, if not expressly waived by opening the box, exceeded the purchase price. Looking towards other industries, car manufacturers are not liable for the damage their vehicles do in the case of misuse, which is why in many parts of the world licensing a vehicle to drive on a public road requires compulsory third party insurance.

Let’s cut to the chase: the reason IoT DDoS is a thing is because the security of the software inside those devices is laughable. You can debate why this is the way it is, but that does not change the fact that all the other members of this supply chain have deftly sidestepped the buck on this one, and so the liability for insecure software rests with us, the authors of said software. Because, as Robert C. Martin likes to remind us, software rules the world, so programmers rule the world.

Serious stuff.

If you look at the history of unsafe products (food, paint, hair spray, electrical goods), governments have forced manufacturers to improve with a combination of regulations and import controls. In Australia, for example, it is illegal to import a product that connects to the mains supply unless its plug has the correct shroud over the live and neutral pins. This is how governments work: they cut off a manufacturer’s air supply by forbidding them from importing their products into the country. That tends to effect change, smartly.

Will IoT be the tipping point that forces the software industry to adopt voluntary, or mandatory, regulation or procedural standardisation?


Go 1.8 performance improvements, one month in

Sunday September the 18th marks a month since the Go 1.8 cycle officially opened. I’m passionate about the performance of Go programs, and of the compiler itself. This post is a brief look at the state of play, roughly half way into the development cycle for Go 1.8¹.

Note: these results are of course preliminary and represent only a point in time, not the performance of the final Go 1.8 release.

Compile times

Nothing much to report here. Using the methodology from my previous Go 1.7 benchmarks, there is a 3.22%–5.11% improvement in full compile time compared to Go 1.7.

[Chart: full compile times for Go 1.4.3, Go 1.7, and Go tip]

Performance improvements

Intel amd64

Better code generation and small improvements to the runtime and standard library deliver modest gains for amd64², but really nothing to write home about yet.

name                       old time/op    new time/op  delta
BinaryTree17-4              3.07s ± 2%     3.06s ± 2%    ~      (p=0.661 n=10+9)
Fannkuch11-4                3.23s ± 1%     3.22s ± 0%  -0.43%   (p=0.008 n=9+10)
FmtFprintfEmpty-4          64.4ns ± 0%    61.8ns ± 4%  -4.17%   (p=0.005 n=9+10)
FmtFprintfString-4          162ns ± 0%     162ns ± 0%    ~      (p=0.065 n=10+9)
FmtFprintfInt-4             142ns ± 0%     142ns ± 0%    ~      (p=0.137 n=8+10)
FmtFprintfIntInt-4          220ns ± 0%     217ns ± 0%  -1.18%   (p=0.000 n=9+10)
FmtFprintfPrefixedInt-4     224ns ± 0%     224ns ± 1%    ~       (p=0.206 n=9+9)
FmtFprintfFloat-4           313ns ± 0%     312ns ± 0%  -0.26%   (p=0.001 n=10+9)
FmtManyArgs-4               906ns ± 0%     894ns ± 0%  -1.32%    (p=0.000 n=7+6)
GobDecode-4                8.88ms ± 1%    8.81ms ± 0%  -0.81%  (p=0.003 n=10+10)
GobEncode-4                7.93ms ± 1%    7.88ms ± 0%  -0.66%   (p=0.008 n=9+10)
Gzip-4                      272ms ± 1%     277ms ± 0%  +1.95%   (p=0.000 n=10+9)
Gunzip-4                   47.4ms ± 0%    47.4ms ± 0%    ~      (p=0.720 n=9+10)
HTTPClientServer-4          201µs ± 4%     202µs ± 2%    ~     (p=0.631 n=10+10)
JSONEncode-4               19.3ms ± 0%    19.3ms ± 0%    ~     (p=0.063 n=10+10)
JSONDecode-4               61.0ms ± 0%    61.2ms ± 0%  +0.33%   (p=0.000 n=10+8)
Mandelbrot200-4            5.20ms ± 0%    5.20ms ± 0%    ~      (p=0.475 n=10+7)
GoParse-4                  3.95ms ± 1%    3.97ms ± 1%  +0.65%    (p=0.003 n=9+9)
RegexpMatchEasy0_32-4      88.4ns ± 0%    88.7ns ± 0%  +0.34%   (p=0.001 n=10+9)
RegexpMatchEasy0_1K-4      1.14µs ± 0%    1.14µs ± 0%    ~       (p=0.369 n=9+6)
RegexpMatchEasy1_32-4      82.6ns ± 0%    82.0ns ± 0%  -0.70%   (p=0.000 n=9+10)
RegexpMatchEasy1_1K-4       469ns ± 0%     463ns ± 0%  -1.23%    (p=0.000 n=6+9)
RegexpMatchMedium_32-4      138ns ± 1%     136ns ± 0%  -1.38%   (p=0.000 n=10+9)
RegexpMatchMedium_1K-4     43.6µs ± 1%    42.0µs ± 0%  -3.74%    (p=0.000 n=9+9)
RegexpMatchHard_32-4       2.25µs ± 1%    2.23µs ± 0%  -0.57%    (p=0.000 n=8+8)
RegexpMatchHard_1K-4       68.8µs ± 0%    68.6µs ± 0%  -0.37%    (p=0.000 n=8+8)
Revcomp-4                   477ms ± 1%     472ms ± 0%  -1.03%    (p=0.000 n=8+8)
Template-4                 76.1ms ± 0%    76.4ms ± 0%  +0.35%    (p=0.000 n=9+9)
TimeParse-4                 367ns ± 0%     366ns ± 0%  -0.16%   (p=0.003 n=10+8)
TimeFormat-4                386ns ± 0%     384ns ± 0%  -0.58%    (p=0.000 n=9+9)

name                     old speed      new speed      delta
GobDecode-4              86.4MB/s ± 1%  87.1MB/s ± 0%  +0.81%  (p=0.003 n=10+10)
GobEncode-4              96.7MB/s ± 1%  97.4MB/s ± 0%  +0.66%   (p=0.007 n=9+10)
Gzip-4                   71.4MB/s ± 1%  70.0MB/s ± 0%  -1.91%   (p=0.000 n=10+9)
Gunzip-4                  409MB/s ± 0%   410MB/s ± 0%    ~      (p=0.703 n=9+10)
JSONEncode-4              101MB/s ± 0%   100MB/s ± 0%    ~     (p=0.084 n=10+10)
JSONDecode-4             31.8MB/s ± 0%  31.7MB/s ± 0%  -0.33%   (p=0.000 n=10+8)
GoParse-4                14.7MB/s ± 1%  14.6MB/s ± 1%  -0.67%    (p=0.002 n=9+9)
RegexpMatchEasy0_32-4     362MB/s ± 0%   361MB/s ± 0%  -0.36%   (p=0.000 n=10+9)
RegexpMatchEasy0_1K-4     898MB/s ± 0%   898MB/s ± 0%    ~       (p=0.762 n=9+8)
RegexpMatchEasy1_32-4     387MB/s ± 0%   390MB/s ± 0%  +0.70%   (p=0.000 n=9+10)
RegexpMatchEasy1_1K-4    2.18GB/s ± 0%  2.21GB/s ± 0%  +1.20%    (p=0.000 n=9+9)
RegexpMatchMedium_32-4   7.23MB/s ± 1%  7.32MB/s ± 0%  +1.19%   (p=0.000 n=10+9)
RegexpMatchMedium_1K-4   23.5MB/s ± 1%  24.4MB/s ± 0%  +3.88%    (p=0.000 n=9+9)
RegexpMatchHard_32-4     14.2MB/s ± 1%  14.3MB/s ± 0%  +0.58%    (p=0.000 n=8+8)
RegexpMatchHard_1K-4     14.9MB/s ± 0%  14.9MB/s ± 0%  +0.34%    (p=0.000 n=8+7)
Revcomp-4                 533MB/s ± 1%   539MB/s ± 0%  +1.04%    (p=0.000 n=8+8)
Template-4               25.5MB/s ± 0%  25.4MB/s ± 0%  -0.36%    (p=0.000 n=9+9)


The major improvement that landed recently in the development branch is the conversion of the remaining architecture backends to use the compiler’s SSA form. This has brought a substantial improvement in generated code for non-Intel architectures like ARM³.

name                       old time/op    new time/op    delta
BinaryTree17-4              33.8s ± 1%      27.7s ± 0%  -18.06%  (p=0.000 n=10+10)
Fannkuch11-4                42.0s ± 0%      19.3s ± 0%  -54.10%  (p=0.000 n=10+10)
FmtFprintfEmpty-4           670ns ± 1%      581ns ± 1%  -13.30%  (p=0.000 n=10+10)
FmtFprintfString-4         2.04µs ± 1%     1.65µs ± 0%  -19.09%  (p=0.000 n=10+10)
FmtFprintfInt-4            1.71µs ± 0%     1.21µs ± 0%  -29.39%   (p=0.000 n=10+9)
FmtFprintfIntInt-4         2.69µs ± 1%     1.94µs ± 0%  -27.77%  (p=0.000 n=10+10)
FmtFprintfPrefixedInt-4    2.70µs ± 0%     1.85µs ± 0%  -31.41%   (p=0.000 n=10+9)
FmtFprintfFloat-4          5.15µs ± 0%     3.65µs ± 0%  -29.01%   (p=0.000 n=9+10)
FmtManyArgs-4              11.3µs ± 0%      8.5µs ± 0%  -24.79%   (p=0.000 n=10+9)
GobDecode-4                 112ms ± 0%       77ms ± 1%  -31.04%    (p=0.000 n=9+9)
GobEncode-4                88.5ms ± 1%     77.2ms ± 1%  -12.78%  (p=0.000 n=10+10)
Gzip-4                      4.79s ± 0%      3.34s ± 0%  -30.18%    (p=0.000 n=9+9)
Gunzip-4                    702ms ± 0%      463ms ± 0%  -34.05%  (p=0.000 n=10+10)
HTTPClientServer-4          645µs ± 3%      571µs ± 3%  -11.45%  (p=0.000 n=10+10)
JSONEncode-4                227ms ± 0%      186ms ± 0%  -18.16%  (p=0.000 n=10+10)
JSONDecode-4                845ms ± 0%      618ms ± 0%  -26.81%  (p=0.000 n=10+10)
Mandelbrot200-4            59.3ms ± 0%     40.0ms ± 0%  -32.47%  (p=0.000 n=10+10)
GoParse-4                  45.0ms ± 0%     37.0ms ± 0%  -17.68%    (p=0.000 n=9+9)
RegexpMatchEasy0_32-4       974ns ± 0%      878ns ± 0%   -9.81%   (p=0.000 n=10+9)
RegexpMatchEasy0_1K-4      4.60µs ± 0%     4.48µs ± 0%   -2.57%  (p=0.000 n=10+10)
RegexpMatchEasy1_32-4      1.02µs ± 0%     0.94µs ± 0%   -8.08%   (p=0.000 n=8+10)
RegexpMatchEasy1_1K-4      6.92µs ± 0%     6.08µs ± 0%  -12.10%  (p=0.000 n=10+10)
RegexpMatchMedium_32-4     1.61µs ± 0%     1.27µs ± 0%  -20.98%    (p=0.000 n=9+6)
RegexpMatchMedium_1K-4      447µs ± 0%      317µs ± 0%  -29.05%   (p=0.000 n=10+9)
RegexpMatchHard_32-4       24.9µs ± 0%     18.4µs ± 0%  -25.89%  (p=0.000 n=10+10)
RegexpMatchHard_1K-4        740µs ± 0%      552µs ± 0%  -25.36%  (p=0.000 n=10+10)
Revcomp-4                  81.0ms ± 1%     65.2ms ± 0%  -19.53%    (p=0.000 n=9+9)
Template-4                  1.17s ± 0%      0.81s ± 0%  -31.28%    (p=0.000 n=9+9)
TimeParse-4                5.52µs ± 0%     3.79µs ± 0%  -31.42%   (p=0.000 n=10+9)
TimeFormat-4               10.6µs ± 0%      8.5µs ± 0%  -19.14%  (p=0.000 n=10+10)

name                     old speed      new speed        delta
GobDecode-4              6.86MB/s ± 0%   9.95MB/s ± 1%  +45.00%    (p=0.000 n=9+9)
GobEncode-4              8.67MB/s ± 1%   9.94MB/s ± 1%  +14.69%  (p=0.000 n=10+10)
Gzip-4                   4.05MB/s ± 0%   5.81MB/s ± 0%  +43.32%   (p=0.000 n=10+9)
Gunzip-4                 27.6MB/s ± 0%   41.9MB/s ± 0%  +51.63%  (p=0.000 n=10+10)
JSONEncode-4             8.53MB/s ± 0%  10.43MB/s ± 0%  +22.20%  (p=0.000 n=10+10)
JSONDecode-4             2.30MB/s ± 0%   3.14MB/s ± 0%  +36.39%   (p=0.000 n=9+10)
GoParse-4                1.29MB/s ± 0%   1.56MB/s ± 0%  +20.93%   (p=0.000 n=9+10)
RegexpMatchEasy0_32-4    32.8MB/s ± 0%   36.4MB/s ± 0%  +10.87%  (p=0.000 n=10+10)
RegexpMatchEasy0_1K-4     222MB/s ± 0%    228MB/s ± 0%   +2.64%  (p=0.000 n=10+10)
RegexpMatchEasy1_32-4    31.3MB/s ± 0%   34.0MB/s ± 0%   +8.75%   (p=0.000 n=9+10)
RegexpMatchEasy1_1K-4     148MB/s ± 0%    168MB/s ± 0%  +13.76%  (p=0.000 n=10+10)
RegexpMatchMedium_32-4    620kB/s ± 0%    790kB/s ± 0%  +27.42%   (p=0.000 n=10+8)
RegexpMatchMedium_1K-4   2.29MB/s ± 0%   3.23MB/s ± 0%  +41.05%  (p=0.000 n=10+10)
RegexpMatchHard_32-4     1.29MB/s ± 0%   1.74MB/s ± 0%  +34.88%   (p=0.000 n=9+10)
RegexpMatchHard_1K-4     1.38MB/s ± 0%   1.85MB/s ± 0%  +34.06%  (p=0.000 n=10+10)
Revcomp-4                31.4MB/s ± 1%   39.0MB/s ± 0%  +24.26%    (p=0.000 n=9+9)
Template-4               1.65MB/s ± 0%   2.41MB/s ± 0%  +45.71%   (p=0.000 n=10+9)


  1. Despite the Go 1.8 development cycle opening 18 days late, in order to keep to the 6 month cadence, the feature freeze for this cycle will still occur on the 1st of November.
  2. Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz, 3.13.0-95-generic #142-Ubuntu
  3. Freescale i.MX6, 3.14.77-1-ARCH

SOLID Go Design

This post is based on the text of my GolangUK keynote delivered on the 18th of August 2016.
A recording of the talk is available on YouTube.

This post has been translated into Traditional Chinese by Haohao Tian. Thanks Haohao!

How many Go programmers are there in the world? Think of a number and hold it in your head, we’ll come back to it at the end of this talk.

Code review

Who here does code review as part of their job? [the entire room raised their hand, which was encouraging]. Okay, why do you do code review? [someone shouted out “to stop bad code”]

If code review is there to catch bad code, then how do you know if the code you’re reviewing is good, or bad?

Now it’s fine to say “that code is ugly” or “wow that source code is beautiful”, just as you might say “this painting is beautiful” or “this room is beautiful” but these are subjective terms, and I’m looking for objective ways to talk about the properties of good or bad code.

Bad code

What are some of the properties of bad code that you might pick up on in code review?

  • Rigid. Is the code rigid? Does it have a straitjacket of overbearing types and parameters that make modification difficult?
  • Fragile. Is the code fragile? Does the slightest change ripple through the code base causing untold havoc?
  • Immobile. Is the code hard to refactor? Is it one keystroke away from an import loop?
  • Complex. Is there code for the sake of having code, are things over-engineered?
  • Verbose. Is it just exhausting to use the code? When you look at it, can you even tell what this code is trying to do?

Are these positive sounding words? Would you be pleased to see these words used in a review of your code?

Probably not.

Good design

But this is an improvement; now we can say things like “I don’t like this because it’s too hard to modify”, or “I don’t like this because I cannot tell what the code is trying to do”. But what about leading with the positive?

Wouldn’t it be great if there were some ways to describe the properties of good design, not just bad design, and to be able to do so in objective terms?


In 2002 Robert Martin published his book, Agile Software Development: Principles, Patterns, and Practices. In it he described five principles of reusable software design, which he called the SOLID principles, after the first letters in their names.

  • Single Responsibility Principle
  • Open / Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

This book is a little dated; the languages it discusses are the ones in use more than a decade ago. But perhaps there are some aspects of the SOLID principles that can give us a clue about how to talk about well designed Go programs.

So this is what I want to spend some time discussing with you this morning.

Single Responsibility Principle

The first principle of SOLID, the S, is the single responsibility principle.

A class should have one, and only one, reason to change.
–Robert C Martin

Now Go obviously doesn’t have classes—instead we have the far more powerful notion of composition—but if you can look past the use of the word class, I think there is some value here.

Why is it important that a piece of code should have only one reason for change? Well, as distressing as the idea that your own code may change is, it is far more distressing to discover that code your code depends on is changing under your feet. And when your code does have to change, it should do so in response to a direct stimulus; it shouldn’t be a victim of collateral damage.

Code that has a single responsibility therefore has the fewest reasons to change.

Coupling & Cohesion

Two words that describe how easy or difficult it is to change a piece of software are coupling and cohesion.

Coupling is simply a word that describes two things changing together–a movement in one induces a movement in another.

A related, but separate, notion is the idea of cohesion, a force of mutual attraction.

In the context of software, cohesion is the property of describing pieces of code that are naturally attracted to one another.

To describe the units of coupling and cohesion in a Go program, we might talk about functions and methods, as is very common when discussing SRP, but I believe it starts with Go’s package model.

Package names

In Go, all code lives inside a package, and a well designed package starts with its name. A package’s name is both a description of its purpose, and a name space prefix. Some examples of good packages from the Go standard library might be:

  • net/http, which provides http clients and servers.
  • os/exec, which runs external commands.
  • encoding/json, which implements encoding and decoding of JSON documents.

To use another package’s symbols inside your own, you use the `import` declaration, which establishes a source level coupling between the two packages. They now know about each other.
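
As a minimal sketch of that coupling (net/http here is just an example package):

package main

// Importing net/http establishes a source level coupling: this
// package now knows about net/http, and a change to net/http's
// exported API can ripple into this file.
import (
        "fmt"
        "net/http"
)

func main() {
        fmt.Println(http.StatusOK) // 200
}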

Bad package names

This focus on names is not just pedantry. A poorly named package misses the opportunity to enumerate its purpose, if indeed it ever had one.

What does package server provide? … well a server, hopefully, but which protocol?

What does package private provide? Things that I should not see? Should it have any public symbols?

And package common, just like its partner in crime, package utils, is often found close by these other offenders.

Catch all packages like these become a dumping ground for miscellany, and because they have many responsibilities they change frequently and without cause.

Go’s UNIX philosophy

In my view, no discussion about decoupled design would be complete without mentioning Doug McIlroy’s Unix philosophy; small, sharp tools which combine to solve larger tasks, oftentimes tasks which were not envisioned by the original authors.

I think that Go packages embody the spirit of the UNIX philosophy. In effect each Go package is itself a small Go program, a single unit of change, with a single responsibility.

Open / Closed Principle

The second principle, the O, is the open closed principle, coined by Bertrand Meyer, who in 1988 wrote:

Software entities should be open for extension, but closed for modification.
–Bertrand Meyer, Object-Oriented Software Construction

How does this advice apply to a language written 21 years later?

package main

import "fmt"

type A struct {
        year int
}

func (a A) Greet() { fmt.Println("Hello GolangUK", a.year) }

type B struct {
        A
}

func (b B) Greet() { fmt.Println("Welcome to GolangUK", b.year) }

func main() {
        var a A
        a.year = 2016
        var b B
        b.year = 2016
        a.Greet() // Hello GolangUK 2016
        b.Greet() // Welcome to GolangUK 2016
}


We have a type A, with a field year and a method Greet. We have a second type, B, which embeds an A. Callers see B’s methods overlaid on A’s because A is embedded, as a field, within B, and B can provide its own Greet method, obscuring that of A.

But embedding isn’t just for methods, it also provides access to an embedded type’s fields. As you see, because both A and B are defined in the same package, B can access A’s private year field as if it were declared inside B.

So embedding is a powerful tool which allows Go’s types to be open for extension.

package main

import "fmt"

type Cat struct {
        Name string
}

func (c Cat) Legs() int { return 4 }

func (c Cat) PrintLegs() {
        fmt.Printf("I have %d legs\n", c.Legs())
}

type OctoCat struct {
        Cat
}

func (o OctoCat) Legs() int { return 5 }

func main() {
        var octo OctoCat
        fmt.Println(octo.Legs()) // 5
        octo.PrintLegs()         // I have 4 legs
}


In this example we have a Cat type, which can count its number of legs with its Legs method. We embed this Cat type into a new type, an OctoCat, and declare that OctoCats have five legs. However, although OctoCat defines its own Legs method, which returns 5, when the PrintLegs method is invoked, it reports 4.

This is because PrintLegs is defined on the Cat type. It takes a Cat as its receiver, and so it dispatches to Cat’s Legs method. Cat has no knowledge of the type it has been embedded into, so its method set cannot be altered by embedding.

Thus, we can say that Go’s types, while being open for extension, are closed for modification.

In truth, methods in Go are little more than syntactic sugar around a function with a predeclared formal parameter, their receiver.

func (c Cat) PrintLegs() {
        fmt.Printf("I have %d legs\n", c.Legs())
}

func PrintLegs(c Cat) {
        fmt.Printf("I have %d legs\n", c.Legs())
}

The receiver is exactly what you pass into it, the first parameter of the function, and because Go does not support function overloading, OctoCats are not substitutable for regular Cats. Which brings me to the next principle.

Liskov Substitution Principle

Coined by Barbara Liskov, the Liskov substitution principle states, roughly, that two types are substitutable if they exhibit behaviour such that the caller is unable to tell the difference.

In a class based language, Liskov’s substitution principle is commonly interpreted as a specification for an abstract base class with various concrete subtypes. But Go does not have classes, or inheritance, so substitution cannot be implemented in terms of an abstract class hierarchy.


Instead, substitution is the purview of Go’s interfaces. In Go, types are not required to nominate that they implement a particular interface; instead, any type implements an interface simply by having methods whose signatures match the interface declaration.

We say that in Go, interfaces are satisfied implicitly, rather than explicitly, and this has a profound impact on how they are used within the language.
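
To illustrate, here is a small sketch of implicit satisfaction; the Greeter interface and Gopher type are invented for this example:

package main

import "fmt"

// Greeter is satisfied by any type with a Greet() string method.
type Greeter interface {
        Greet() string
}

// Gopher never declares that it implements Greeter; possessing a
// method matching the interface's signature is enough.
type Gopher struct{}

func (g Gopher) Greet() string { return "hello" }

func main() {
        var g Greeter = Gopher{} // compiles: Gopher satisfies Greeter implicitly
        fmt.Println(g.Greet())
}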

Well designed interfaces are more likely to be small interfaces; the prevailing idiom is that an interface contains only a single method. It follows logically that small interfaces lead to simple implementations, because it is hard to do otherwise. Which leads to packages comprised of simple implementations connected by common behaviour.


type Reader interface {
        // Read reads up to len(buf) bytes into buf.
        Read(buf []byte) (n int, err error)
}

Which brings me to io.Reader, easily my favourite Go interface.

The io.Reader interface is very simple; Read reads data into the supplied buffer, and returns to the caller the number of bytes that were read, and any error encountered during read. It seems simple but it’s very powerful.

Because io.Readers deal with anything that can be expressed as a stream of bytes, we can construct readers over just about anything; a constant string, a byte array, standard in, a network stream, a gzip’d tar file, the standard out of a command being executed remotely via ssh.

And all of these implementations are substitutable for one another because they fulfil the same simple contract.
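
As a small demonstration of that substitutability (the count helper is mine, not from the talk), the same function can drain a reader backed by a string or by a byte slice without knowing, or caring, which it was given:

package main

import (
        "bytes"
        "fmt"
        "io"
        "io/ioutil"
        "strings"
)

// count drains any io.Reader and reports how many bytes it produced.
func count(r io.Reader) int64 {
        n, _ := io.Copy(ioutil.Discard, r)
        return n
}

func main() {
        fmt.Println(count(strings.NewReader("hello")))       // 5
        fmt.Println(count(bytes.NewReader([]byte{1, 2, 3}))) // 3
}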

So the Liskov substitution principle, applied to Go, could be summarised by this lovely aphorism from the late Jim Weirich.

Require no more, promise no less.
–Jim Weirich

And this is a great segue into the fourth SOLID principle.

Interface Segregation Principle

The fourth principle is the interface segregation principle, which reads:

Clients should not be forced to depend on methods they do not use.
–Robert C. Martin

In Go, the application of the interface segregation principle can refer to a process of isolating the behaviour required for a function to do its job. As a concrete example, say I’ve been given a task to write a function that persists a Document structure to disk.

// Save writes the contents of doc to the file f.
func Save(f *os.File, doc *Document) error

I could define this function, let’s call it Save, which takes an *os.File as the destination to write the supplied Document. But this has a few problems.

The signature of Save precludes the option to write the data to a network location. Assuming that network storage is likely to become a requirement later, the signature of this function would have to change, impacting all its callers.

Because Save operates directly with files on disk, it is unpleasant to test. To verify its operation, the test would have to read the contents of the file after it was written. Additionally the test would have to ensure that f was written to a temporary location and was always removed afterwards.

*os.File also defines a lot of methods which are not relevant to Save, like reading directories and checking to see if a path is a symlink. It would be useful if the signature of our Save function could describe only the parts of *os.File that were relevant.

What can we do about these problems?

// Save writes the contents of doc to the supplied ReadWriteCloser.
func Save(rwc io.ReadWriteCloser, doc *Document) error

Using io.ReadWriteCloser we can apply the Interface Segregation Principle to redefine Save to take an interface that describes more general file-shaped things.

With this change, any type that implements the io.ReadWriteCloser interface can be substituted for the previous *os.File. This makes Save both broader in its application, and clarifies to the caller of Save which methods of the *os.File type are relevant to its operation.

As the author of Save I no longer have the option to call those unrelated methods on *os.File as it is hidden behind the io.ReadWriteCloser interface. But we can take the interface segregation principle a bit further.

Firstly, it is unlikely that if Save follows the single responsibility principle, it will read the file it just wrote to verify its contents–that should be the responsibility of another piece of code. So we can narrow the specification for the interface we pass to Save to just writing and closing.

// Save writes the contents of doc to the supplied WriteCloser.
func Save(wc io.WriteCloser, doc *Document) error

Secondly, by providing Save with a mechanism to close its stream, which we inherited in a desire to make it look like a file shaped thing, the question arises of under what circumstances wc will be closed. Possibly Save will call Close unconditionally, or perhaps Close will be called only in the case of success.

This presents a problem for the caller of Save as it may want to write additional data to the stream after the document is written.

type NopCloser struct {
        io.Writer
}

// Close has no effect on the underlying writer.
func (c *NopCloser) Close() error { return nil }

A crude solution would be to define a new type which embeds an io.Writer and overrides the Close method, preventing Save from closing the underlying stream.

But this would probably be a violation of the Liskov Substitution Principle, as NopCloser doesn’t actually close anything.

// Save writes the contents of doc to the supplied Writer.
func Save(w io.Writer, doc *Document) error

A better solution would be to redefine Save to take only an io.Writer, stripping it completely of the responsibility to do anything but write data to a stream.

By applying the interface segregation principle to our Save function, the result is simultaneously a function which is the most specific in terms of its requirements–it only needs a thing that is writable–and the most general in its function: we can now use Save to write our data to anything which implements io.Writer.
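
As a sketch of how the final form might be exercised (the Document type and its Body field are assumptions for illustration), any io.Writer, including an in-memory buffer in a test, is now an acceptable destination:

package main

import (
        "bytes"
        "fmt"
        "io"
)

type Document struct {
        Body string
}

// Save writes the contents of doc to the supplied Writer.
func Save(w io.Writer, doc *Document) error {
        _, err := io.WriteString(w, doc.Body)
        return err
}

func main() {
        var buf bytes.Buffer // any io.Writer will do; handy for tests
        if err := Save(&buf, &Document{Body: "hello"}); err != nil {
                fmt.Println("save failed:", err)
                return
        }
        fmt.Println(buf.String()) // hello
}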

A great rule of thumb for Go is accept interfaces, return structs.
–Jack Lindamood

Stepping back a few paces, this quote is an interesting meme that has been percolating in the Go zeitgeist over the last few years.

This tweet sized version lacks nuance, and this is not Jack’s fault, but I think it represents one of the first pieces of defensible Go design lore.

Dependency Inversion Principle

The final SOLID principle is the dependency inversion principle, which states:

High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
–Robert C. Martin

But what does dependency inversion mean, in practice, for Go programmers?

If you’ve applied all the principles we’ve talked about up to this point then your code should already be factored into discrete packages, each with a single well defined responsibility or purpose. Your code should describe its dependencies in terms of interfaces, and those interfaces should be factored to describe only the behaviour those functions require. In other words, there shouldn’t be much left to do.

So what I think Martin is talking about here, certainly in the context of Go, is the structure of your import graph.

In Go, your import graph must be acyclic. A failure to respect this acyclic requirement is grounds for a compilation failure, but more gravely represents a serious error in design.

All things being equal, the import graph of a well designed Go program should be wide and relatively flat, rather than tall and narrow. If you have a package whose functions cannot operate without enlisting the aid of another package, that is perhaps a sign that code is not well factored along package boundaries.

The dependency inversion principle encourages you to push the responsibility for the specifics, as high as possible up the import graph, to your main package or top level handler, leaving the lower level code to deal with abstractions–interfaces.
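
A compact sketch of that shape: the low-level report function below deals only in the io.Writer abstraction, while main, at the top of the import graph, supplies the concrete detail (the report function is invented for this example):

package main

import (
        "fmt"
        "io"
        "os"
)

// report knows nothing about files, sockets, or buffers; it
// depends only on the io.Writer abstraction.
func report(w io.Writer, healthy bool) error {
        _, err := fmt.Fprintf(w, "healthy: %v\n", healthy)
        return err
}

func main() {
        // The concrete detail, os.Stdout, lives at the top of the program.
        if err := report(os.Stdout, true); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
        }
}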

SOLID Go Design

To recap, when applied to Go, each of the SOLID principles is a powerful statement about design, but taken together they have a central theme.

The Single Responsibility Principle encourages you to structure the functions, types, and methods into packages that exhibit natural cohesion; the types belong together, the functions serve a single purpose.

The Open / Closed Principle encourages you to compose simple types into more complex ones using embedding.

The Liskov Substitution Principle encourages you to express the dependencies between your packages in terms of interfaces, not concrete types. By defining small interfaces, we can be more confident that implementations will faithfully satisfy their contract.

The Interface Segregation Principle takes that idea further and encourages you to define functions and methods that depend only on the behaviour that they need. If your function only requires a parameter of an interface type with a single method, then it is more likely that this function has only one responsibility.

The Dependency Inversion Principle encourages you to move the knowledge of the things your package depends on from compile time–in Go we see this with a reduction in the number of import statements used by a particular package–to run time.

If you were to summarise this talk it would probably be; interfaces let you apply the SOLID principles to Go programs.

Because interfaces let Go programmers describe what their package provides–not how it does it. This is all just another way of saying “decoupling”, which is indeed the goal, because software that is loosely coupled is software that is easier to change.

As Sandi Metz notes:

Design is the art of arranging code that needs to work today, and to be easy to change forever.
–Sandi Metz

Because if Go is going to be a language that companies invest in for the long term, the maintenance of Go programs, the ease with which they can change, will be a key factor in their decision.


In closing, let’s return to the question I opened this talk with; How many Go programmers are there in the world? This is my guess:

By 2020, there will be 500,000 Go developers.

What will half a million Go programmers do with their time? Well, obviously, they’ll write a lot of Go code and, if we’re being honest, not all of it will be good, and some will be quite bad.

Please understand that I do not say this to be cruel, but, every one of you in this room with experience with development in other languages–the languages you came from, to Go–knows from your own experience that there is an element of truth to this prediction.

Within C++, there is a much smaller and cleaner language struggling to get out.
–Bjarne Stroustrup, The Design and Evolution of C++

The opportunity for all Go programmers to make our language a success hinges directly on our collective ability to not make such a mess of things that people start to talk about Go the way that they joke about C++ today.

The narrative that derides other languages for being bloated, verbose, and overcomplicated, could one day well be turned upon Go, and I don’t want to see this happen, so I have a request.

Go programmers need to start talking less about frameworks, and start talking more about design. We need to stop focusing on performance at all cost, and focus instead on reuse at all cost.

What I want to see is people talking about how to use the language we have today, whatever its choices and limitations, to design solutions and to solve real problems.

What I want to hear is people talking about how to design Go programs in a way that is well engineered, decoupled, reusable, and above all responsive to change.

… one more thing

Now, it’s great that so many of you are here today to hear from the great lineup of speakers, but the reality is that no matter how large this conference grows, compared to the number of people who will use Go during its lifetime, we’re just a tiny fraction.

So we need to tell the rest of the world how good software should be written. Good software, composable software, software that is amenable to change, and show them how to do it, using Go. And this starts with you.

I want you to start talking about design, maybe use some of the ideas I presented here, hopefully you’ll do your own research, and apply those ideas to your projects. Then I want you to:

  • Write a blog post about it.
  • Teach a workshop about what you did.
  • Write a book about what you learnt.
  • And come back to this conference next year and give a talk about what you achieved.

Because by doing these things we can build a culture of Go developers who care about programs that are designed to last.

Thank you.

Transistor logic fundamentals

Long time readers of this blog will know that when I’m not shilling for the Go language, my hobbies include electronics and retro computing. For me, projects like James Newman’s Megaprocessor, a computer built entirely from discrete components, are about as good as it gets.

James has recently finished construction of the Megaprocessor and has started to document it on YouTube; you should totally check it out. But this post isn’t about the Megaprocessor.

When I subscribed to the Megaprocessor channel on YouTube I discovered James has produced another series of videos focused on the fundamentals of implementing digital logic with transistors. In the three videos embedded below, James lays out the foundations of digital logic.

The first video describes (in James’ wonderfully understated manner) the operation of the simplest digital logic circuit; a voltage controlled inverter built with one transistor¹.

In the second video, James adds a second transistor in series with the first and demonstrates the implementation of the NAND (Not AND) function².

In the third video, by reorganising the transistors in parallel, James shows the circuit now implements the logical NOR (Not OR) function.

… and that’s it. There are more videos in James’ Stepping Stones video series, but with these three operations, NOT (inversion), NAND, and NOR, any combination of digital logic of any size can be created, as the Megaprocessor shows³.
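
To see why those three operations suffice, here is a small Go sketch (mine, not James’) composing AND and OR from nothing but NAND:

package main

import "fmt"

// nand is the only primitive; every other gate below is built from it.
func nand(a, b bool) bool { return !(a && b) }

func not(a bool) bool    { return nand(a, a) }
func and(a, b bool) bool { return not(nand(a, b)) }
func or(a, b bool) bool  { return nand(not(a), not(b)) }

func main() {
        fmt.Println(not(true))        // false
        fmt.Println(and(true, false)) // false
        fmt.Println(or(true, false))  // true
}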

Why is this important?

The circuits described in this set of videos feature far fewer transistors than you would find in a real processor, but they are not simplified. Circuits like these were used in mainframe computers in the 1960’s and formed the basis for the integrated microprocessors of the 1970’s.

In these three videos James describes the entire foundation for contemporary computation. No matter how many layers of operating systems, networks, and source code abstraction you build on top, the fundamentals of computation and digital logic remain as simple as these three videos.

Notes and further reading

If you’re interested in learning more, here are a few suggestions for your own research.

  1. If you have no background in electronics a simple analogy for the relationships between voltage, current, and resistance is water flowing through a pipe. In this analogy, voltage represents water pressure, pushing water through the pipe. Current is the water itself. The volume of water in the pipe is a property of both the diameter of the pipe, and any resistance which may cause segments of the pipe to be less than full. Resistance, the final property, is any constriction or obstruction of the pipe. The higher the resistance, the more the pipe is constricted, reducing the amount of water (current) flowing through it.
  2. James’ tutorials use discrete TTL logic. TTL stands for Transistor to Transistor Logic, introduced in the early 1960’s by Sylvania (yes, the lightbulb makers). Before TTL there were at least two other forms of digital logic, what were they, and why did they succumb to TTL?
  3. James’ tutorials, and the Megaprocessor itself, use NPN transistors. If the Megaprocessor was shrunk down to a single integrated circuit it would most likely be implemented using NMOS logic. NMOS was very popular in the 70’s and early 80’s but has since given way to CMOS logic. What are the differences between NMOS and CMOS and why would James have chosen NMOS to implement the Megaprocessor?

Automatically fetch your project’s dependencies with gb

gb has been in development for just over a year now. Since the announcement in May 2015 the project has received over 1,600 stars, produced 16 releases, and attracted 41 contributors.

Thanks to a committed band of early adopters, gb has grown to be a usable day to day replacement for the go tool. But, there is one area where gb has not lived up to my hopes, and that is dependency management.

gb’s $PROJECT/vendor/ directory was the inspiration for the go tool’s vendor/ directory (although their implementations differ greatly) and has delivered on its goal of reproducible builds for Go projects. However, the success of gb’s project based model, and vendoring code in general, has a few problems. Specifically, wholesale copying (or forking if you prefer) of one code base into another continues to sidestep the issue of adoption of a proper release and versioning culture amongst Go developers.

To be fair, for Go developers using the tools they have access to today–including gb–there is no incentive to release their code. As a Go package author, you get no points for doing proper versioned releases if your build tool just pulls from HEAD anyway. There is similarly limited value in adopting a version numbering policy like SemVer if your tools only memorise the git revision you last copied your code at.

A second problem, equally poorly served by gb or the vendor/ support in the go tool, is developers and projects who cannot, usually for legal reasons, or do not wish to, copy code wholesale into their project. Suggestions of using git submodules have been soundly dismissed as unworkable.

With the release of gb 0.4.3, there is a new way to manage dependencies with gb. This new method does not replace gb vendor or $PROJECT/vendor as the recommended method for achieving reproducible builds, but it does acknowledge that vendoring is not appropriate for all use cases.

To be clear, this new mode of managing dependencies does not supersede or deprecate the existing mechanisms of cloning source code into $PROJECT/vendor. The automatic download feature is optional and is activated by the project author creating a file in their project’s root called $PROJECT/depfile (an illustrative example appears after the list below).

If you have a gb project that is currently vendoring code, or you’re using gb vendor restore to actively avoid cloning code into your project, you can try this feature today, with the following caveats:

  1. Currently only GitHub is supported. This is because the new facility uses the GitHub API to download release tarballs via https. Vanity URLs that redirect to GitHub are not supported yet, but will be soon.
  2. The repository must have made a release of its code, and that release must be tagged with a tag containing a valid SemVer 2.0.0 version number. The format of the tag is described in this proposal. If a dependency you want to consume in your gb project has not released their code, then please ask them to do so.
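
For illustration, a depfile maps import path prefixes to released versions, one per line. The entries below are hypothetical, and this sketch of the format is my reading of the proposal, so consult the gb documentation for the authoritative syntax:

github.com/pkg/errors version=0.6.0
github.com/you/yourlib version=1.2.1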

Polishing this feature will be the remainder of the 0.4.x development series. After this work is complete gb vendor will be getting some attention. Ultimately both gb vendor and $PROJECT/depfile do the same thing–one copies the source of your dependencies into your project, the other into your home directory.

Gophers, please tag your releases

What do we want? Version management for Go packages! When do we want it? Yesterday!

What does everyone want? We want our Go build tool of choice to fetch the latest stable version when we start using a package in our project. We want it to grab security updates and bug fixes automatically, but not upgrade to a version where the author deleted a method we were using.

But as it stands, today, in 2016, there is no way for a human, or a tool, to look at an arbitrary git (or mercurial, or bzr, etc) repository of Go code and ask questions like:

  • What versions of this project have been released?
  • What is the latest stable release of this software?
  • If I have version 1.2.3, is there a bugfix or security update that I should apply?

The reason for this is Go projects (repositories of Go packages) don’t have versions, at least not in the way that our friends in other languages use that word. Go projects do not have versions because there is no formalised release process.

But there’s vendor/ right?

Arguing about tools to manage your vendor/ directory, or which markup format a manifest file should be written in is eating the elephant from the wrong end.

Before you can argue about the format of a file that records the version of a package, you have to have some way of actually knowing what that version is. A version number has to be sortable, so you can ask, “is there a newer version available than the one you have on disk?” Ideally the version number should give you a clue to how large the jump between versions is, perhaps even give a clue to backwards or forwards compatibility between two versions.

SemVer is no one’s favourite, yet having one agreed format is in everyone’s interest.

I recommend that Go projects adopt SemVer 2.0.0. It’s a sound standard, it is well understood by many, not just Go programmers, and semantic versioning will let people write tools to build a dependency management ecosystem on top of a minimal release process.

Following the lead of the big three Go projects, Docker, Kubernetes, and CoreOS (and GitHub’s own releases page), the format of the tag must be:

v<SemVer>

That is, the letter v followed by a string which is SemVer 2.0.0 compliant. Here are some examples:

git tag -a v1.2.3
git tag -a v0.1.0
git tag -a v1.0.0-rc.1

Here are some incorrect examples:

git tag -a 1.2.3        // missing v prefix
git tag -a v1.0         // 1.0 is not SemVer compliant
git tag -a v2.0.0beta3  // also not SemVer compliant

Of course, if you’re using hg, bzr, or another version control system, please adjust as appropriate. This isn’t just for git or GitHub repos.

What do you get for this?

Imagine if godoc.org could show you the documentation for the version of the package you’re using, not just the latest from HEAD.

Now, imagine if godoc.org could not just show you the documentation, but also serve you a tarball or zip file of the source code of that version. Imagine not having to install mercurial just to go get that one dependency that is still on google code (rest in peace), or bitbucket in hg form.

Establishing a single release process for Go projects and adopting semantic versioning will let your favourite Go package management or vendoring tool provide you things like a real upgrade command. Instead of letting you figure out which revision to switch to, SemVer gives tool writers the ability to do things like upgrade a dependency to the latest patch release of version 1.2.

Build it and they will come

Tagging releases is pointless if people don’t write tools to consume the information. Just like writing tools that can, at the moment, only record git hashes is pointless.

Here’s the deal: if you release your Go projects with correctly formatted tags, then there are a host of developers working on dependency management tools for Go packages who want to consume this information.

How can I declare which versions of other packages my project depends on?

If you’ve read this far you are probably wondering how tagging releases in your own repository is going to help specify the versions of your Go project’s dependencies.

The Go import statement doesn’t contain this version information, all it has is the import path. But whether you’re in the camp that wants to add version information to the import statement, a comment inside the source file, or you would prefer to put that information in a metadata file, everyone needs version information, and that starts with tagging your release of your Go projects.

No version information, no tools, and the situation never improves. It’s that simple.

Automatically run your package’s tests with inotifywait

This is a short post to illustrate how I use the inotifywait command as a cheap and cheerful way to run my tests automatically on save.

Note: inotify is only available on Linux, sorry OS X users.

Step 1. Install inotify-tools

On Debian/Ubuntu, inotifywait and friends live in the inotify-tools package.

% sudo apt-get install inotify-tools

If you live in an RPM universe the package name will hopefully be similar.

Step 2. Create a helper function

Remembering the full inotifywait incantation can be taxing, so save yourself some effort and define a function in .bashrc (or your shell of choice’s startup script).

watch() { while inotifywait --exclude .swp -e modify -r .; do $@; done; }

If you use /usr/bin/watch frequently, you might want to pick another name for this function.

Step 3. Run a command on save

Using tmux (you do use tmux, right?), split the window and run

% watch go test .

Any time that a file in the current working directory is modified, inotifywait will return, which runs the command you provided, then loops back around.

watch will trigger on a modification to anything in the current working directory or below it. The command that runs when inotifywait detects a modification can be anything you like. For example you could be working in one package inside your project, and have watch rebuild all the commands any time you save, like this:

% cd $GOPATH/src/github.com/you/yourproject
% watch go install -v github.com/you/yourproject/cmd/...

Stack traces and the errors package

A few months ago I gave a presentation on my philosophy for error handling. In the talk I introduced a small errors package designed to support the ideas presented in the talk.

This post is an update to my previous blog post which reflects the changes in the errors package as I’ve put it into service in my own projects.

Wrapping and stack traces

In my April presentation I gave examples of using the Wrap function to produce an annotated error that could be unwrapped for inspection, yet mirrored the recommendations from Kernighan and Donovan’s book.

package main

import "fmt"
import "github.com/pkg/errors"

func main() {
        err := errors.New("error")
        err = errors.Wrap(err, "open failed")
        err = errors.Wrap(err, "read config failed")

        fmt.Println(err) // read config failed: open failed: error
}

Wrapping an error added context to the underlying error and recorded the file and line that the error occurred at. This file and line information could be retrieved via a helper function, Fprint, to give a trace of the execution path leading away from the error. More on that later.

However, when I came to integrate the errors package into my own projects, I found that using Wrap at each call site in the return path often felt redundant. For example:

func readconfig(file string) error {
        if err := openfile(file); err != nil {
                return errors.Wrap(err, "read config failed")
        }
        // ...
}

If openfile failed it would likely annotate the error it returned with open failed, and that error would also include the file and line of the openfile function. Similarly, readconfig’s wrapped error would be annotated with read config failed as well as the file and line of the call to errors.Wrap inside the readconfig function.

I realised that, at least in my own code, it is likely that the name of the function contains sufficient information to frequently make the additional context passed to Wrap redundant. But as Wrap requires a message, even if I had nothing useful to add, I’d still have to pass something:

if err != nil {
        return errors.Wrap(err, "") // ewww
}

I briefly considered making Wrap variadic–to make the second parameter optional–before realising that rather than forcing the user to manually annotate each stack frame in the return path, I can just record the entire stack trace at the point that an error is created by the errors package.

I believe that for 90% of the use cases, this natural stack trace–that is the trace collected at the point New or Errorf are called–is correct with respect to the information required to investigate the error’s cause. In the other cases, Wrap and Wrapf can be used to add context when needed.

This led to a large internal refactor of the package to collect and expose this natural stack trace.

Fprint and Print have been removed

As mentioned earlier, the mechanism for printing not just the err.Error() text of an error, but also its stack trace, has also changed with feedback from early users.

The first attempts were a pair of functions; Print(err error), which printed the detailed error to os.Stderr, and Fprint(w io.Writer, err error) which did the same but allowed the caller to control the destination. Neither were very popular.

Print was removed in version 0.4.0 because it was just a wrapper around Fprint(os.Stderr, err) and was hard to test, harder to write an example test for, and didn’t feel like its three lines paid their way. However, with Print gone, users were unhappy that Fprint required you to pass an io.Writer, usually a bytes.Buffer, just to retrieve a string form of the error’s trace.

So, Print and Fprint were the wrong API. They were too opinionated, without it being a useful opinion. Fprint was slowly gutted over the 0.5 and 0.6 releases, and has now been replaced with a much more powerful facility inspired by Chris Hines’ go-stack/stack package.

The errors package now leverages the powerful fmt.Formatter interface to allow it to customise its output when any error generated, or wrapped by this package, is passed to fmt.Printf. This extended format is activated by the %+v verb. For example,

func main() {
        err := parseArgs(os.Args[1:])
        fmt.Printf("%v\n", err)
}

Prints, as expected,

not enough arguments, expected at least 3, got 0

However if we change the formatting verb to %+v,

func main() {
        err := parseArgs(os.Args[1:])
        fmt.Printf("%+v\n", err)
}

the same error value now results in

not enough arguments, expected at least 3, got 0

followed by the stack trace recorded at the point the error was created, listing the function, file, and line of each frame.

For those that need more control, the Cause and StackTrace behaviours return values which have their own fmt.Formatter implementations. The latter is an alias for a slice of Frame values which represent each frame in a call stack. Again, Frame implements several fmt.Formatter verbs that allow its output to be customised as required.

Putting it all together

With the changes to the errors package, some guidelines on how to use the package are in order.

  • In your own code, use errors.New or errors.Errorf at the point an error occurs.
    func parseArgs(args []string) error {
            if len(args) < 3 {
                    return errors.Errorf("not enough arguments, expected at least 3, got %d", len(args))
            }
            // ...
    }
  • If you receive an error from another function, it is often sufficient to simply return it.
    if err != nil {
            return err
    }
  • If you interact with a package from another repository, consider using errors.Wrap or errors.Wrapf to establish a stack trace at that point. This advice also applies when interacting with the standard library.
    f, err := os.Open(path)
    if err != nil {
            return errors.Wrapf(err, "failed to open %q", path)
    }
  • Always return errors to their caller rather than logging them throughout your program.
  • At the top level of your program, or worker goroutine, use %+v to print the error with sufficient detail.
    func main() {
            err := app.Run()
            if err != nil {
                    fmt.Printf("FATAL: %+v\n", err)
                    os.Exit(1)
            }
    }
  • If you want to exclude some classes of error from printing, use errors.Cause to unwrap errors before inspecting them, as the sketch after this list shows.
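
For example, a minimal sketch of that unwrapping step; the temporary interface and IsTemporary helper are illustrative assumptions, not part of the errors package:

// temporary is an illustrative interface; any error that
// implements it can report whether it is transient.
type temporary interface {
        Temporary() bool
}

// IsTemporary unwraps err with errors.Cause before the type
// assertion, so annotated errors are inspected at their root.
func IsTemporary(err error) bool {
        te, ok := errors.Cause(err).(temporary)
        return ok && te.Temporary()
}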


The errors package, from the point of view of the four package level functions, New, Errorf, Wrap, and Wrapf, is done. Their API signatures are well tested, and now that this package has been integrated into over 100 other packages, they are unlikely to change at this point.

The extended stack trace format, %+v, is still very new and I encourage you to try it and leave feedback via an issue.

Test fixtures in Go

This is a quick post to describe how you can use test fixtures, data files on disk, with the Go testing package. Using fixtures with the Go testing package is quite straightforward because of two convenience features built into the go tool.

First, when you run go test, for each package in scope, the test binary will be executed with its working directory set to the source directory of the package under test. Consider this test in the example package:

package example

import (
        "os"
        "testing"
)

func TestWorkingDirectory(t *testing.T) {
        wd, _ := os.Getwd()
        t.Log(wd)
}

Running this from a random directory, and remembering that go test takes a path relative to your $GOPATH, results in:

% pwd
% go test -v
=== RUN   TestWorkingDirectory
--- PASS: TestWorkingDirectory (0.00s)
        example_test.go:10: /Users/dfc/src/
ok   0.013s

Second, the Go tool will ignore any directory in your $GOPATH that starts with a period, an underscore, or matches the word testdata.

Putting this together, locating a fixture from your test code is as simple as

f, err := os.Open("testdata/somefixture.json")

(technically this code should use filepath.Join but in these simple cases Windows copes fine with the forward slash). Here are some random examples from the standard library:

  1. debug/elf
  2. net/http
  3. image
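Putting both conveniences together, a complete fixture-driven test might look like the following minimal sketch; TestParseFixture and somefixture.json are invented names:

package example

import (
        "io/ioutil"
        "path/filepath"
        "testing"
)

func TestParseFixture(t *testing.T) {
        // go test sets the working directory to the package source
        // directory, so testdata/ resolves without any configuration.
        data, err := ioutil.ReadFile(filepath.Join("testdata", "somefixture.json"))
        if err != nil {
                t.Fatalf("could not read fixture: %v", err)
        }
        if len(data) == 0 {
                t.Fatal("fixture was empty")
        }
        // feed data to the code under test as required
}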

Happy testing!

Don’t just check errors, handle them gracefully

This post is an extract from my presentation at the recent GoCon spring conference in Tokyo, Japan.

Don't just check errors, handle them gracefully

Errors are just values

I’ve spent a lot of time thinking about the best way to handle errors in Go programs. I really wanted there to be a single way to do error handling, something that we could teach all Go programmers by rote, just as we might teach mathematics, or the alphabet.

However, I have concluded that there is no single way to handle errors. Instead, I believe Go’s error handling can be classified into three core strategies.

Sentinel errors

The first category of error handling is what I call sentinel errors.

if err == ErrSomething { … }

The name descends from the practice in computer programming of using a specific value to signify that no further processing is possible. So too with Go, we use specific values to signify an error.

Examples include values like io.EOF or low level errors like the constants in the syscall package, like syscall.ENOENT.

There are even sentinel errors that signify that an error did not occur, like go/build.NoGoError, and path/filepath.SkipDir from path/filepath.Walk.
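The mechanics of the pattern are worth a quick sketch; here is the conventional io.EOF check in a read loop, where r is any io.Reader and process stands in for whatever you do with the data:

buf := make([]byte, 4096)
for {
        n, err := r.Read(buf)
        // handle the data before the error; io.Reader permits
        // n > 0 together with err == io.EOF.
        process(buf[:n])
        if err == io.EOF {
                break // the conventional end of stream, not a failure
        }
        if err != nil {
                return err
        }
}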

Using sentinel values is the least flexible error handling strategy, as the caller must compare the result to a predeclared value using the equality operator. This presents a problem when you want to provide more context, as returning a different error would break the equality check.

Even something as well meaning as using fmt.Errorf to add some context to the error will defeat the caller’s equality test. Instead the caller will be forced to look at the output of the error‘s Error method to see if it matches a specific string.

Never inspect the output of error.Error

As an aside, I believe you should never inspect the output of the error.Error method. The Error method on the error interface exists for humans, not code.

The contents of that string belong in a log file, or on the screen. You shouldn’t try to change the behaviour of your program by inspecting it.

I know that sometimes this isn’t possible, and as someone pointed out on twitter, this advice doesn’t apply to writing tests. Nevertheless, comparing the string form of an error is, in my opinion, a code smell, and you should try to avoid it.

Sentinel errors become part of your public API

If your public function or method returns an error of a particular value then that value must be public, and of course documented. This adds to the surface area of your API.

If your API defines an interface which returns a specific error, all implementations of that interface will be restricted to returning only that error, even if they could provide a more descriptive error.

We see this with io.Reader. Functions like io.Copy require a reader implementation to return exactly io.EOF to signal to the caller that there is no more data, even though reaching the end of the stream isn’t really an error.

Sentinel errors create a dependency between two packages

By far the worst problem with sentinel error values is they create a source code dependency between two packages. As an example, to check if an error is equal to io.EOF, your code must import the io package.

This specific example does not sound so bad, because it is quite common, but imagine the coupling that exists when many packages in your project export error values, which other packages in your project must import to check for specific error conditions.

Having worked in a large project that toyed with this pattern, I can tell you that the spectre of bad design, in the form of an import loop, was never far from our minds.

Conclusion: avoid sentinel errors

So, my advice is to avoid using sentinel error values in the code you write. There are a few cases where they are used in the standard library, but this is not a pattern that you should emulate.

If someone asks you to export an error value from your package, you should politely decline and instead suggest an alternative method, such as the ones I will discuss next.

Error types

Error types are the second form of Go error handling I want to discuss.

if err, ok := err.(SomeType); ok { … }

An error type is a type that you create that implements the error interface. In this example, the MyError type tracks the file and line, as well as a message explaining what happened.

type MyError struct {
        Msg  string
        File string
        Line int
}

func (e *MyError) Error() string {
        return fmt.Sprintf("%s:%d: %s", e.File, e.Line, e.Msg)
}

return &MyError{"Something happened", "server.go", 42}

Because MyError is a type, callers can use a type assertion to extract the extra context from the error.

err := something()
switch err := err.(type) {
case nil:
        // call succeeded, nothing to do
case *MyError:
        fmt.Println("error occurred on line:", err.Line)
default:
        // unknown error
}

A big improvement of error types over error values is their ability to wrap an underlying error to provide more context.

An excellent example of this is the os.PathError type which annotates the underlying error with the operation it was trying to perform, and the file it was trying to use.

// PathError records an error and the operation
// and file path that caused it.
type PathError struct {
        Op   string
        Path string
        Err  error // the cause
}

func (e *PathError) Error() string
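A caller holding an error from a failed os call can unpack that context with a type assertion. A minimal sketch:

if perr, ok := err.(*os.PathError); ok {
        fmt.Println("op:", perr.Op)     // e.g. "open"
        fmt.Println("path:", perr.Path) // the file that caused the failure
        fmt.Println("cause:", perr.Err) // the underlying error
}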

Problems with error types

Because the caller needs to use a type assertion or type switch to extract the context, error types must be made public.

If your code implements an interface whose contract requires a specific error type, all implementors of that interface need to depend on the package that defines the error type.

This intimate knowledge of a package’s types creates a strong coupling with the caller, making for a brittle API.

Conclusion: avoid error types

While error types are better than sentinel error values, because they can capture more context about what went wrong, error types share many of the problems of error values.

So again my advice is to avoid error types, or at least, avoid making them part of your public API.

Opaque errors

Now we come to the third category of error handling. In my opinion this is the most flexible error handling strategy as it requires the least coupling between your code and caller.

I call this style opaque error handling, because while you know an error occurred, you don’t have the ability to see inside the error. As the caller, all you know about the result of the operation is that it worked, or it didn’t.

This is all there is to opaque error handling: just return the error without assuming anything about its contents. If you adopt this position, then error handling can become significantly more useful as a debugging aid.

import "" // the import path for package bar was elided in the original

func fn() error {
        x, err := bar.Foo()
        if err != nil {
                return err
        }
        // use x
        return nil
}

For example, Foo‘s contract makes no guarantees about what it will return in the context of an error. The author of Foo is now free to annotate errors that pass through it with additional context without breaking its contract with the caller.

Assert errors for behaviour, not type

In a small number of cases, this binary approach to error handling is not sufficient.

For example, interactions with the world outside your process, like network activity, require that the caller investigate the nature of the error to decide if it is reasonable to retry the operation.

In this case rather than asserting the error is a specific type or value, we can assert that the error implements a particular behaviour. Consider this example:

type temporary interface {
        Temporary() bool
}

// IsTemporary returns true if err is temporary.
func IsTemporary(err error) bool {
        te, ok := err.(temporary)
        return ok && te.Temporary()
}

We can pass any error to IsTemporary to determine if the error could be retried.

If the error does not implement the temporary interface, that is, it does not have a Temporary method, then the error is not temporary.

If the error does implement Temporary, then perhaps the caller can retry the operation if Temporary returns true.

The key here is that this logic can be implemented without importing the package that defines the error, or indeed knowing anything about err’s underlying type; we’re simply interested in its behaviour.
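Building on the IsTemporary helper above, a caller might drive a retry loop off this behaviour alone. A sketch, in which dial, addr, and maxRetries are invented for illustration:

// dialWithRetry retries dial until it succeeds, the error is not
// temporary, or maxRetries attempts have been made.
func dialWithRetry(addr string, maxRetries int) error {
        var err error
        for i := 0; i < maxRetries; i++ {
                err = dial(addr)
                if err == nil {
                        return nil
                }
                if !IsTemporary(err) {
                        return err // permanent failure, give up immediately
                }
                // temporary failure, try again
        }
        return err
}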

Don’t just check errors, handle them gracefully

This brings me to a second Go proverb that I want to talk about; don’t just check errors, handle them gracefully. Can you suggest some problems with the following piece of code?

func AuthenticateRequest(r *Request) error {
        err := authenticate(r.User)
        if err != nil {
                return err
        }
        return nil
}

An obvious suggestion is that the body of the function could be replaced with

return authenticate(r.User)

But this is the simple stuff that everyone should be catching in code review. More fundamentally, the problem with this code is that I cannot tell where the original error came from.

If authenticate returns an error, then AuthenticateRequest will return the error to its caller, who will probably do the same, and so on. At the top of the program the main body of the program will print the error to the screen or a log file, and all that will be printed is: No such file or directory.

There is no information about the file and line where the error was generated. There is no stack trace of the call stack leading up to the error. The author of this code will be forced into a long session of bisecting their code to discover which code path triggered the file not found error.

Donovan and Kernighan’s The Go Programming Language recommends that you add context to the error path using fmt.Errorf:

func AuthenticateRequest(r *Request) error {
        err := authenticate(r.User)
        if err != nil {
                return fmt.Errorf("authenticate failed: %v", err)
        }
        return nil
}

But as we saw earlier, this pattern is incompatible with the use of sentinel error values or type assertions, because converting the error value to a string, merging it with another string, then converting it back to an error with fmt.Errorf breaks equality and destroys any context in the original error.
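A two-line sketch makes the damage concrete:

err := fmt.Errorf("read failed: %v", io.EOF)
fmt.Println(err == io.EOF) // false: the sentinel identity has been destroyed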

Annotating errors

I’d like to suggest a method to add context to errors, and to do that I’m going to introduce a simple package. The code is online at github.com/pkg/errors. The errors package has two main functions:

// Wrap annotates cause with a message.
func Wrap(cause error, message string) error

The first function is Wrap, which takes an error, and a message and produces a new error.

// Cause unwraps an annotated error.
func Cause(err error) error

The second function is Cause, which takes an error that has possibly been wrapped, and unwraps it to recover the original error.
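A round trip through both functions, as a quick sketch:

err := errors.Wrap(io.EOF, "read failed")
fmt.Println(err)                          // read failed: EOF
fmt.Println(errors.Cause(err) == io.EOF)  // true: the original value is recovered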

Using these two functions, we can now annotate any error, and recover the underlying error if we need to inspect it. Consider this example of a function that reads the content of a file into memory.

func ReadFile(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
                return nil, errors.Wrap(err, "open failed")
        }
        defer f.Close()

        buf, err := ioutil.ReadAll(f)
        if err != nil {
                return nil, errors.Wrap(err, "read failed")
        }
        return buf, nil
}

We’ll use this function to write a function to read a config file, then call that from main.

func ReadConfig() ([]byte, error) {
        home := os.Getenv("HOME")
        config, err := ReadFile(filepath.Join(home, ".settings.xml"))
        return config, errors.Wrap(err, "could not read config")
}

func main() {
        _, err := ReadConfig()
        if err != nil {
                fmt.Println(err)
                os.Exit(1)
        }
}

If the ReadConfig code path fails, because we used errors.Wrap, we get a nicely annotated error in the K&D style.

could not read config: open failed: open /Users/dfc/.settings.xml: no such file or directory

Because errors.Wrap produces a stack of errors, we can inspect that stack for additional debugging information. This is the same example again, but this time we replace fmt.Println with errors.Print

func main() {
        _, err := ReadConfig()
        if err != nil {
                errors.Print(err)
                os.Exit(1)
        }
}

We’ll get something like this:

readfile.go:27: could not read config
readfile.go:14: open failed
open /Users/dfc/.settings.xml: no such file or directory

The first line comes from ReadConfig, the second comes from the os.Open part of ReadFile, and the remainder comes from the os package itself, which does not carry location information.

Now we’ve introduced the concept of wrapping errors to produce a stack, we need to talk about the reverse, unwrapping them. This is the domain of the errors.Cause function.

// IsTemporary returns true if err is temporary.
func IsTemporary(err error) bool {
        te, ok := errors.Cause(err).(temporary)
        return ok && te.Temporary()
}

In operation, whenever you need to check that an error matches a specific value or type, you should first recover the original error using the errors.Cause function.

Only handle errors once

Lastly, I want to mention that you should only handle errors once. Handling an error means inspecting the error value, and making a decision.

func Write(w io.Writer, buf []byte) {
        w.Write(buf)
}

If you make fewer than one decision, you’re ignoring the error. As we see here, the error from w.Write is being discarded.

But making more than one decision in response to a single error is also problematic.

func Write(w io.Writer, buf []byte) error {
        _, err := w.Write(buf)
        if err != nil {
                // annotated error goes to log file
                log.Println("unable to write:", err)

                // unannotated error returned to caller
                return err
        }
        return nil
}

In this example, if an error occurs during Write, a line will be written to a log file, noting the file and line where the error occurred, and the error is also returned to the caller, who will possibly log it, and return it, all the way back up to the top of the program.

So you get a stack of duplicate lines in your log file, but at the top of the program you get the original error without any context. Java anyone?

Instead, handle the error once, by annotating it and passing it up:

func Write(w io.Writer, buf []byte) error {
        _, err := w.Write(buf)
        // errors.Wrap returns nil when err is nil, so the
        // success path is unchanged.
        return errors.Wrap(err, "write failed")
}

Using the errors package gives you the ability to add context to error values, in a way that is inspectable by both a human and a machine.


In conclusion, errors are part of your package’s public API; treat them with as much care as you would any other part of your public API.

For maximum flexibility I recommend that you try to treat all errors as opaque. In the situations where you cannot do that, assert errors for behaviour, not type or value.

Minimise the number of sentinel error values in your program and convert errors to opaque errors by wrapping them with errors.Wrap as soon as they occur.

Finally, use errors.Cause to recover the underlying error if you need to inspect it.