Category Archives: Small ideas

Thinking about $GOPATH

This is a short blog post about my thoughts on using Go in anger through several workplaces, as a developer and an advocate.

What is $GOPATH?

Back when Go was first announced we used Makefiles to compile Go code. These Makefiles referenced some shared logic stored in the Go distribution. This is where $GOROOT comes from.

Back then, if you wrote Go code, you probably also used these Makefiles, and while you could check out your source code anywhere, most people kept their own Go code in what today we’d call $GOROOT/src. Because you had to compile Go from source, this directory was always going to be present.

Towards the 1.0 release goinstall, then go get, solidified the use of domain names in import paths to provide a globally unique namespace. These tools introduced a new location into which Go code would be fetched. This location was separate from $GOROOT to make clear the distinction between code provided by the Go project, and code written by the developer. By the time Go 1.1 was released in 2013, $GOROOT was removed as a fallback option.

Why does $GOPATH exist?

$GOPATH exists for two main reasons:

  1. In Go, the import declaration references a package via its fully qualified import path. $GOPATH exists so that from any directory inside $GOPATH/src the go tool can compute the absolute import path of the package in question.1
  2. It provides a location to store dependencies fetched by go get.

Having a per user $GOPATH environment variable also means developers could use the go tool from any directory on their system to build, test and install code, but I suspect only a minority utilise this feature.

What’s wrong with $GOPATH?

In my experience, many newcomers to Go are frustrated with the single workspace $GOPATH model. They are confused that $GOPATH doesn’t let them check out the source of a project in a directory of their choice, like they are used to with other languages. Additionally, $GOPATH does not let the developer have more than one copy of a project (or its dependencies) checked out at the same time without constantly updating $GOPATH.

I think it is important to recognise that these issues are legitimate points of confusion for many newcomers (including those on the Go team) and act as a drag on Go adoption. As we’re on the cusp of a blessed dependency management tool for Go, I think it’s equally important to continue to question the base assumptions that this new tool will build on, namely requiring a $GOPATH.

In my opinion, any Go build tool needs to provide (in addition to actually building and testing code) a way for Go code checked out in an arbitrary location on disk to recover its intended fully qualified import path; the path other code will import it as.

The $GOPATH model answers this question by subtracting the prefix of $GOPATH/src from the path to the directory of the current package; the remainder is the package’s fully qualified import path. This is why, if you check out a package outside a $GOPATH workspace, the go tool cannot figure out the package’s fully qualified import path and everything falls apart.
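To make the directory arithmetic concrete, here is a minimal sketch (my own illustration, not the go tool’s actual implementation) of how an import path falls out of a package’s location on disk:

package main

import (
        "fmt"
        "path/filepath"
        "strings"
)

// importPath derives a package's fully qualified import path by
// stripping the $GOPATH/src prefix from its location on disk.
func importPath(gopath, dir string) (string, error) {
        src := filepath.Join(gopath, "src") + string(filepath.Separator)
        if !strings.HasPrefix(dir, src) {
                return "", fmt.Errorf("%s is outside any $GOPATH workspace", dir)
        }
        return filepath.ToSlash(strings.TrimPrefix(dir, src)), nil
}

func main() {
        path, _ := importPath("/home/dev/go", "/home/dev/go/src/github.com/pkg/errors")
        fmt.Println(path) // github.com/pkg/errors
}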

What are some alternatives to $GOPATH?

I attempted to address both issues with gb, which gives developers the ability to check out a project anywhere they want, but has no solution for libraries, and gb projects are not go gettable. However, gb showed that writing a new build tool which did not wrap the go tool meant it was not forced to reorganise the world to fit the $GOPATH model, allowing gb users to include the source of all their dependencies in their project without the pitfalls of Go 1.6’s vendor/ directory.

Recently, on a suggestion from Bill Kennedy, I built an experimental build tool, Kang, that records the expected import prefix in a manifest file. That prefix, rather than one computed by $GOPATH directory arithmetic, is used to determine the fully qualified import path.
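As an illustration, a manifest in this spirit need record little more than the import prefix. The format and file name below are my own sketch, not Kang’s actual manifest:

package manifest

import (
        "encoding/json"
        "os"
        "path/filepath"
)

// Manifest records the fully qualified import prefix of the project,
// e.g. {"importpath": "github.com/you/project"}.
type Manifest struct {
        ImportPath string `json:"importpath"`
}

// Load reads the manifest stored at the root of the project.
func Load(root string) (*Manifest, error) {
        f, err := os.Open(filepath.Join(root, ".manifest"))
        if err != nil {
                return nil, err
        }
        defer f.Close()
        var m Manifest
        if err := json.NewDecoder(f).Decode(&m); err != nil {
                return nil, err
        }
        return &m, nil
}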

I’m working on a similar tool, Kodos (unfinished), based on a suggestion from Brad Fitzpatrick, that uses the .git directory as a sentinel to determine the root of the project and hopefully infer the full import path from the git remote configuration.
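Here is a sketch of the sentinel idea (again mine, not Kodos’s actual code): walk up the directory tree until a .git directory is found, and treat that directory as the project root.

package main

import (
        "errors"
        "fmt"
        "os"
        "path/filepath"
)

// findRoot returns the closest parent of dir that contains a .git
// directory, which we take to be the root of the project.
func findRoot(dir string) (string, error) {
        for {
                if fi, err := os.Stat(filepath.Join(dir, ".git")); err == nil && fi.IsDir() {
                        return dir, nil
                }
                parent := filepath.Dir(dir)
                if parent == dir {
                        return "", errors.New("no .git directory found")
                }
                dir = parent
        }
}

func main() {
        wd, _ := os.Getwd()
        root, err := findRoot(wd)
        if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
        }
        fmt.Println(root)
}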

While these experiments are unfinished, both demonstrate that you can avoid the $GOPATH restrictions and retain compatibility with the go get ecosystem; in the case of Kodos, potentially even avoid a manifest file.


Kang and Kodos use a lot of code forked from gb, which I hope to rectify over the new year’s break. If you are interested in contributing, or better yet, building your own Go tool to explore this problem space, Kang, Kodos, and gb are permissively licensed.


  1. This is notably different from the way imports work in scripting languages like Python and Ruby, which directly scan source code directories and insert them onto a global search path.

Do not fear first class functions

This is the text of my dotGo 2016 presentation. A recording and slide deck are also available.


Hello, welcome to dotGo.

Two years ago I stood on a stage, not unlike this one, and told you my opinion for how configuration options should be handled in Go. The cornerstone of my presentation was Rob Pike’s blog post, Self-referential functions and the design of options.

Since then it has been wonderful to watch this idea mature from Rob’s original blog post, to the gRPC project, who in my opinion have continued to evolve this design pattern into its best form to date.

But, when talking to Gophers at a conference in London a few months ago, several of them expressed a concern that while they understood the notion of a function that returns a function, the technique that powers functional options, they worried that other Go programmers—I suspect they meant less experienced Go programmers—wouldn’t be able to understand this style of programming.

And this made me a bit sad because I consider Go’s support of first class functions to be a gift, and something that we should all be able to take advantage of. So I’m here today to show you, that you do not need to fear first class functions.

Functional options recap

To begin, I’ll very quickly recap the functional options pattern.

type Config struct{ ... }

func WithReticulatedSplines(c *Config) { ... }

type Terrain struct {
        config Config
}

func NewTerrain(options ...func(*Config)) *Terrain {
        var t Terrain
        for _, option := range options {
                option(&t.config)
        }
        return &t
}

func main() {
        t := NewTerrain(WithReticulatedSplines)
        // [ simulation intensifies ]
}

We start with some options, expressed as functions which take a pointer to a structure to configure. We pass those functions to a constructor, and inside the body of that constructor each option function is invoked in order, passing in a reference to the Config value. Finally, we call NewTerrain with the options we want, and away we go.

Okay, everyone should be familiar with this pattern. Where I believe the confusion comes from is when you need an option function which takes a parameter. For example, we have WithCities, which lets us add a number of cities to our terrain model.

// WithCities adds n cities to the Terrain model
func WithCities(n int) func(*Config) { ... }

func main() {
        t := NewTerrain(WithCities(9))
        // ...
}

Because WithCities takes an argument, we cannot simply pass WithCities to NewTerrain; its signature does not match. Instead we evaluate WithCities, passing in the number of cities to create, and use the result as the value to pass to NewTerrain.

Functions as first class values

What’s going on here? Let’s break it down. Fundamentally, evaluating a function returns a value. We have functions that take two numbers and return a number.

package math

func Min(a, b float64) float64

We have functions that take a slice, and return a pointer to a structure.

package bytes

func NewReader(b []byte) *Reader

and now we have a function which returns a function.

func WithCities(n int) func(*Config)

The type of the value that is returned from WithCities is a function which takes a pointer to a Config. This ability to treat functions as regular values leads to their name: first class functions.


Another way to think about what is going on here is to try to rewrite the functional option pattern using an interface.

type Option interface {
        Apply(*Config)
}

Rather than a function type, we declare an interface; we’ll call it Option and give it a single method, Apply, which takes a pointer to a Config.

func NewTerrain(options ...Option) *Terrain {
        var config Config
        for _, option := range options {
                option.Apply(&config)
        }
        // ...
}

Whenever we call NewTerrain we pass in one or more values that implement the Option interface. Inside NewTerrain, just as before, we loop over the slice of options and call the Apply method on each.

This doesn’t look too different to the previous example. Rather than ranging over a slice of functions and calling them, we range over a slice of interface values and call a method on each. Let’s take a look at the other side, declaring the WithReticulatedSplines option.

type splines struct{}

func (s *splines) Apply(c *Config) { ... }

func WithReticulatedSplines() Option {
        return new(splines)
}

Because we’re passing around interface implementations, we need to declare a type to hold the Apply method. We also need to declare a constructor function to return our splines option implementation–you can already see that this is going to be more code.

To write WithCities using our Option interface we need to do a bit more work.

type cities struct {
        cities int
}

func (c *cities) Apply(conf *Config) { ... }

func WithCities(n int) Option {
        return &cities{
                cities: n,
        }
}
In the previous, functional, version the value of n, the number of cities to create, was captured lexically for us in the declaration of the anonymous function. Because we’re using an interface we need to declare a type to hold the count of cities and we need a constructor to assign the field during construction.

func main() {
        t := NewTerrain(WithReticulatedSplines(), WithCities(9))
        // ...
}

Putting it all together, we call NewTerrain with the results of evaluating WithReticulatedSplines and WithCities.

At GopherCon last year Tomás Senart spoke about the duality of a first class function and an interface with one method. You can see this duality play out in our example; an interface with one method and a function are equivalent.
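To see this duality in action, here is a small adapter in the style of net/http’s HandlerFunc; the adapter is my own sketch, not part of the talk.

// optionFunc adapts an ordinary function to the Option interface.
type optionFunc func(*Config)

// Apply calls f, so any func(*Config) can be used wherever an
// Option is expected, e.g. NewTerrain(optionFunc(someFunc)).
func (f optionFunc) Apply(c *Config) { f(c) }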

But, you can also see that using functions as first class values involves much less code.

Encapsulating behaviour

Let’s leave interfaces for a moment and talk about some other properties of first class functions.

When we invoke a function or a method, we do so passing around data. The job of that function is often to interpret that data and take some action. Function values allow you to pass behaviour to be executed, rather than data to be interpreted. In effect, passing a function value allows you to declare code that will execute later, perhaps in a different context.

To illustrate this, here is a simple calculator.

type Calculator struct {
        acc float64
}

const (
        OP_ADD = 1 << iota
        OP_SUB
        OP_MUL
)

It has a set of operations it understands.

func (c *Calculator) Do(op int, v float64) float64 {
        switch op {
        case OP_ADD:
                c.acc += v
        case OP_SUB:
                c.acc -= v
        case OP_MUL:
                c.acc *= v
        default:
                panic("unhandled operation")
        }
        return c.acc
}

It has one method, Do, which takes an operation and an operand, v. For convenience, Do also returns the value of the accumulator after the operation is applied.

func main() {
        var c Calculator
        fmt.Println(c.Do(OP_ADD, 100))     // 100
        fmt.Println(c.Do(OP_SUB, 50))      // 50
        fmt.Println(c.Do(OP_MUL, 2))       // 100
}

Our calculator only knows how to add, subtract, and multiply. If we wanted to implement division, we’d have to allocate an operation constant, then open up the Do method and add the code to implement division. Sounds reasonable, it’s only a few lines, but what if we wanted to add square root and exponentiation?

Each time we do this, Do grows longer and becomes harder to follow, because each time we add an operation we have to encode into Do the knowledge of how to interpret that operation.

Let’s rewrite our calculator a little.

type Calculator struct {
        acc float64
}

type opfunc func(float64, float64) float64

func (c *Calculator) Do(op opfunc, v float64) float64 {
        c.acc = op(c.acc, v)
        return c.acc
}

As before we have a Calculator, which manages its own accumulator. The Calculator has a Do method, which this time takes a function as the operation, and a value as the operand. Whenever Do is called, it calls the operation we pass in, using its own accumulator and the operand we provide.

So, how do we use this new Calculator? You guessed it, by writing our operations as functions.

func Add(a, b float64) float64 { return a + b }

This is the code for Add. What about the other operations? It turns out they aren’t too hard either.

func Sub(a, b float64) float64 { return a - b }
func Mul(a, b float64) float64 { return a * b }

func main() {
        var c Calculator
        fmt.Println(c.Do(Add, 5))       // 5
        fmt.Println(c.Do(Sub, 3))       // 2
        fmt.Println(c.Do(Mul, 8))       // 16
}

As before we construct a Calculator and call it passing operations and an operand.

Extending the calculator

Now that we can describe operations as functions, we can try to extend our calculator to handle square root.

func Sqrt(n, _ float64) float64 {
        return math.Sqrt(n)
}

But, it turns out there is a problem. math.Sqrt takes one argument, not two. However our Calculator’s Do method’s signature requires an operation function that takes two arguments.

func main() {
        var c Calculator
        c.Do(Add, 16)
        c.Do(Sqrt, 0) // operand ignored
}

Maybe we just cheat and ignore the operand. That’s a bit gross, I think we can do better.

Let’s redefine Add from a function that is called with two values and returns a third, to a function which returns a function that takes a value and returns a value.

func Add(n float64) func(float64) float64 {
        return func(acc float64) float64 {
                return acc + n
        }
}

func (c *Calculator) Do(op func(float64) float64) float64 {
        c.acc = op(c.acc)
        return c.acc
}

Do now invokes the operation function passing in its own accumulator and recording the result back in the accumulator.

func main() {
        var c Calculator
        c.Do(Add(10))   // 10
        c.Do(Add(20))   // 30
}

Now in main we call Do not with the Add function itself, but with the result of evaluating Add(10). The type of the result of evaluating Add(10) is a function which takes a value, and returns a value, matching the signature that Do requires.

func Sub(n float64) func(float64) float64 {
        return func(acc float64) float64 {
                return acc - n
        }
}

func Mul(n float64) func(float64) float64 {
        return func(acc float64) float64 {
                return acc * n
        }
}

Subtraction and multiplication are similarly easy to implement. But what about square root?

func Sqrt() func(float64) float64 {
        return func(n float64) float64 {
                return math.Sqrt(n)
        }
}

func main() {
        var c Calculator
        c.Do(Sqrt())   // 1.41421356237
}

This implementation of square root avoids the awkward syntax of the previous calculator’s operation function, as our revised calculator now operates on functions which take and return only one value.

Hopefully you’ve noticed that the signature of our Sqrt function is the same as math.Sqrt, so we can make this code smaller by reusing any function from the math package that takes a single argument.

func main() {
        var c Calculator
        c.Do(Add(2))      // 2
        c.Do(math.Sqrt)   // 1.41421356237
        c.Do(math.Cos)    // 0.99969539804
}

We started with a model of hard coded, interpreted logic. We moved to a more functional model, where we pass in the behaviour we want. Then, by taking it a step further, we generalised our calculator to work for operations regardless of their number of arguments.

Let’s talk about actors

Let’s change tracks a little and talk about why most of us are here at a Go conference: concurrency, specifically actors. To give due credit, the examples here are inspired by Bryan Boreham’s talk from GolangUK; you should check it out.

Suppose we’re building a chat server, we plan to be the next Hipchat or Slack, but we’ll start small for the moment.

type Mux struct {
        mu    sync.Mutex
        conns map[net.Addr]net.Conn
}

func (m *Mux) Add(conn net.Conn) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.conns[conn.RemoteAddr()] = conn
}

We have a way to register new connections.

func (m *Mux) Remove(addr net.Addr) {
        m.mu.Lock()
        defer m.mu.Unlock()
        delete(m.conns, addr)
}

Remove old connections.

func (m *Mux) SendMsg(msg string) error {
        m.mu.Lock()
        defer m.mu.Unlock()
        for _, conn := range m.conns {
                _, err := io.WriteString(conn, msg)
                if err != nil {
                        return err
                }
        }
        return nil
}

And a way to send a message to all the registered connections. Because this is a server, all of these methods will be called concurrently, so we need to use a mutex to protect the conns map and prevent data races. Is this what you’d call idiomatic Go code?

Don’t communicate by sharing memory, share memory by communicating.

Our first proverb–don’t mediate access to shared memory with locks and mutexes, instead share that memory by communicating. So let’s apply this advice to our chat server.

Rather than using a mutex to serialise access to the Mux‘s conns map, we can give that job to a goroutine, and communicate with that goroutine via channels.

type Mux struct {
        add     chan net.Conn
        remove  chan net.Addr
        sendMsg chan string
}

func (m *Mux) Add(conn net.Conn) {
        m.add <- conn
}

Add sends the connection to add to the add channel.

func (m *Mux) Remove(addr net.Addr) {
        m.remove <- addr
}

Remove sends the address of the connection to the remove channel.

func (m *Mux) SendMsg(msg string) error {
        m.sendMsg <- msg
        return nil
}

And SendMsg sends the message to be transmitted to each connection to the sendMsg channel.

func (m *Mux) loop() {
        conns := make(map[net.Addr]net.Conn)
        for {
                select {
                case conn := <-m.add:
                        conns[conn.RemoteAddr()] = conn
                case addr := <-m.remove:
                        delete(conns, addr)
                case msg := <-m.sendMsg:
                        for _, conn := range conns {
                                io.WriteString(conn, msg)
                        }
                }
        }
}

Rather than using a mutex to serialise access to the conns map, loop will wait until it receives an operation in the form of a value sent over one of the add, remove, or sendMsg channels and apply the relevant case. We don’t need a mutex anymore because the shared state, our conns map, is local to the loop function.

But, there’s still a lot of hard coded logic here. loop only knows how to do three things; add, remove and broadcast a message. As with the previous example, adding new features to our Mux type will involve:

  • creating a channel.
  • adding a helper to send the data over the channel.
  • extending the select logic inside loop to process that data.

Just like our Calculator example, we can rewrite our Mux to use first class functions to pass around behaviour we want to execute, not data to interpret. Now, each method sends an operation to be executed in the context of the loop function, using our single ops channel.

type Mux struct {
        ops chan func(map[net.Addr]net.Conn)
}

func (m *Mux) Add(conn net.Conn) {
        m.ops <- func(m map[net.Addr]net.Conn) {
                m[conn.RemoteAddr()] = conn
        }
}

In this case the signature of the operation is a function which takes a map of net.Addr’s to net.Conn’s. In a real program you’d probably have a much more complicated type to represent a client connection, but it’s sufficient for the purpose of this example.

func (m *Mux) Remove(addr net.Addr) {
        m.ops <- func(m map[net.Addr]net.Conn) {
                delete(m, addr)
        }
}

Remove is similar, we send a function that deletes its connection’s address from the supplied map.

func (m *Mux) SendMsg(msg string) error {
        m.ops <- func(m map[net.Addr]net.Conn) {
                for _, conn := range m {
                        io.WriteString(conn, msg)
                }
        }
        return nil
}

SendMsg is a function which iterates over all connections in the supplied map and calls io.WriteString to send each a copy of the message.

func (m *Mux) loop() {
        conns := make(map[net.Addr]net.Conn)
        for op := range m.ops {
                op(conns)
        }
}

You can see that we’ve moved the logic from the body of loop into anonymous functions created by our helpers. So the job of loop is now to create a conns map, wait for an operation to be provided on the ops channel, then invoke it, passing in its map of connections.

But there are a few problems still to fix. The most pressing is the lack of error handling in SendMsg; an error writing to a connection will not be communicated back to the caller. So let’s fix that now.

func (m *Mux) SendMsg(msg string) error {
        result := make(chan error, 1)
        m.ops <- func(m map[net.Addr]net.Conn) {
                for _, conn := range m {
                        _, err := io.WriteString(conn, msg)
                        if err != nil {
                                result <- err
                                return
                        }
                }
                result <- nil
        }
        return <-result
}

To handle the error being generated inside the anonymous function we pass to loop, we need to create a channel to communicate the result of the operation. This also creates a point of synchronisation; the last line of SendMsg blocks until the function we passed into loop has been executed.

func (m *Mux) loop() {
        conns := make(map[net.Addr]net.Conn)
        for op := range m.ops {
                op(conns)
        }
}

Note that we didn’t have to change the body of loop at all to incorporate this error handling. And now that we know how to do this, we can easily add a new function to Mux to send a private message to a single client.

func (m *Mux) PrivateMsg(addr net.Addr, msg string) error {
        result := make(chan net.Conn, 1)
        m.ops <- func(m map[net.Addr]net.Conn) {
                result <- m[addr]
        }
        conn := <-result
        if conn == nil {
                return errors.Errorf("client %v not registered", addr)
        }
        _, err := io.WriteString(conn, msg)
        return err
}

To do this we pass a “lookup function” to loop via the ops channel, which will look in the map provided to it—this is loop‘s conns map—and return the value for the address we want on the result channel.

In the rest of the function we check to see if the result was nil—the zero value from the map lookup implies that the client is not registered. Otherwise we now have a reference to the client and we can call io.WriteString to send them a message.

And just to reiterate, we did this all without changing the body of loop, or affecting any of the other operations.


In summary

  • First class functions bring you tremendous expressive power. They let you pass around behaviour, not just dead data that must be interpreted.
  • First class functions aren’t new or novel. Many older languages have offered them, even C. In fact, it was only somewhere along the line of removing pointers that programmers in the OO stream of languages lost access to first class functions. If you’re a Javascript programmer, you’ve probably spent the last 15 minutes wondering what the big deal is.
  • First class functions, like the other features Go offers, should be used with restraint. Just as it is possible to make an overcomplicated program with the overuse of channels, it’s possible to make an impenetrable program with an overuse of first class functions. But that does not mean you shouldn’t use them at all; just use them in moderation.
  • First class functions are something that I believe every Go programmer should have in their toolbox. First class functions aren’t unique to Go, and Go programmers shouldn’t be afraid of them.
  • If you can learn to use interfaces, you can learn to use first class functions. They aren’t hard, just a little unfamiliar, and unfamiliarity is something that I believe can be overcome with time and practice.

So next time you define an API that has just one method, ask yourself, shouldn’t it really just be a function?

Introducing Go 2.0

Just so we’re clear, this post is a thought experiment, not any form of commitment to deliver Go 2.0 in any time frame. While I personally believe there will be a Go 2.0 in the future, I’m in no position to influence its creation; hence, this post is mere speculation.

Why introduce a new major version of Go?

Go 1.0 was released over 4 years ago, and since then the Go 1 compatibility contract has been a boon to anyone investing in Go as the language to build their product.  So, why introduce a new version of Go?

By the time that Go 1.8 is released at the start of 2017, the standard library will have accumulated cruft and hacks for five years, and if you consider that Go started life in 2007, it’s closer to ten. An opportunity to address this cruft and remove some of the packages which are now understood to be a bad idea would make the standard library more consistent and approachable to newcomers.

It is possible the language itself could become smaller. Rob Pike noted in 2014 that there are too many ways to declare a variable in Go, and this could be rationalised. Similarly the incongruence between make and new might be resolved. Then there is the problem of non-Latin characters not being considered upper case. So, lots of little cleanups to do.
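To illustrate Pike’s point, here are three ways to declare a variable with the same type and value inside a function today:

package main

import "fmt"

func main() {
        var a int = 1 // explicit type and initial value
        var b = 1     // type inferred from the value
        c := 1        // short variable declaration
        fmt.Println(a, b, c)
}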

Obviously some kind of solution for templated types would have to be part of any Go 2.0 discussion and, as David Symonds pointed out several years ago, they would have to be used to rewrite the standard library, both causing, and justifying, the compatibility break.

Backward compatibility

Backwards compatibility is not about syntax or features, backwards compatibility is about investment. Investment in the language, both at a technical and a career level. Investment in libraries. Investment in backends that generate machine code. Investment in the mid part of the compiler that transforms and optimises code. Investment in build scripts and toolchains that embed one piece of compiled code into another.

Brian Goetz, the Java language architect, describes the commitment to backward compatibility as the “central park effect“. This is something our cousins in the hardware world have long understood–never let the customer unbolt your product from the rack, ‘cos they might take the opportunity to use that space for your competition.

The lessons of Python 3000 are prescient; ignore backward compatibility at your peril. No matter how compelling the new version of your language, if you make it incompatible with the investment in the previous version, you are launching a new product which is in direct competition with itself. And just to make it clear, I’m not picking on Python specifically; D 2.0 and Perl 6 also come to mind.

All of these examples show the danger of creating a new version of a language that requires its users to rewrite all the source of their program, including all their dependencies (which may be non trivial), before it will compile and run.

A plausible implementation

So, how to create a new Go 2.0 language, with a new syntax and a new standard library, without making it incompatible with every piece of Go code written to date? How could we avoid the all or nothing stand-off in which other languages place their users?

What if we could combine code written in Go 1.0 and a proposed Go 2.0 in one program using the package level as the boundary between language versions? Go 2.0 would be a new language, with a new standard library built upon a runtime shared between itself and Go 1.0, thereby allowing users to work outwards from their Go 2.0 main package to the limbs of their dependency graph, one package at a time.

A Go 2.0 package would be able to call down to Go 1.0, but not the other way around. Go 2.0 types would be able to interoperate with Go 1.0 types, but Go 1.0 types would be unaware of Go 2.0 constructed code. Perhaps calling from Go 2.0 to Go 1.0 looks conceptually like using cgo to call C code, except without the overhead as both languages would be compiled to the same intermediary form.

The key is that both language versions would be compiled to a single intermediate representation, one that can represent the superset of both syntaxes. This has been done before; in the first few versions of Go, C code and Go code were compiled to an intermediate representation, Ken Thompson’s universal assembly language, then converted to machine code at link time. Now with Keith Randall’s SSA compiler, there is a single low level intermediate representation (similar to gcc’s GIMPLE and LLVM’s IR) that describes all the things that make Go programs Go.1

There is a strong precedent for this; the ~Sun~ Oracle JVM. For more than a decade the JVM has hosted byte-code that was not compiled from .java source files. Combined with a version of gofix that could automate some of the effort in migrating a package to Go 2.0 syntax, this could be a plausible way to introduce a new version of Go without abrogating the investment in code written for Go 1.0.

  1. This also raises the possibility of developing other language front-ends using the Go toolchain. If you look at what LLVM has done for projects like Pony, Crystal, and Rust, think of what a portable, cross platform, optimising compiler, with user space concurrency built in, and written in Go, not C++, would mean for language experimentation.

IoT p0wnership

The recent total war bombardment of Brian Krebs’ site, and the subsequent allegation that the traffic emanated from compromised home routers, cameras, baby monitors, doorbells, thermostats, and whatnot, got me thinking.

So DDoS is a thing, and as much as I enjoy the lampooning of IoT by everyone’s favourite wat account, @internetofshit, I wonder if the status quo of insecure consumer devices will have an unexpected knock-on effect.

Previously, DDoS traffic was assumed to come from compromised servers (waves hand in the approximate direction of the cloud) or malware infected PCs. For the former, cloud providers have gotten pretty good at rooting out insecure hosts and booting them off their networks, and organised crime have pretty much figured out that deleting stuff off people’s home computers is less profitable than encrypting said stuff and holding it for ransom. There’s always the chance of someone spotting unexpected outbound traffic from a box — that is one thing the host based AV industry does seem to be good at — because there’s usually a human sitting in front of the device whenever it’s awake. But not so with the cable router or ADSL modem sitting under the hall table, or the IP connected baby monitor you installed in the nursery, or the hundreds of other IP connected whatevers produced for the lowest possible cost, because that is what we, as consumers, demand; price, the ultimate arbiter of quality.

My home router. What’s it doing? I’ve got no idea.

Homes filled with tiny linux boxes running weak software are a tempting target. Not just because owning them is easy, but as long as the devices continue to work as reliably as they did before compromise, nobody is going to suspect that their excess capacity is being soaked up under someone else’s control. Embedded devices have other attractive properties, they’re usually online 24/7, not sporadically like a laptop trying to conserve battery power, and commonly enjoy a wired ethernet connection, not the whims of a rapidly changing WiFi network. Can any of you reading this post tell me that you know the provenance of every packet that leaves your home network?

But back to Krebs and the IP cameras. With the nose dive in desktop and laptop sales, it’s pretty clear that the botnet action has moved to the embedded space. Assuming the attribution is correct, then DDoS has gone from being manageable to very not manageable, quickly. It’s the early 2000’s spam wars all over again, and companies that make real money on the internet are going to expect a solution. And when I say solution, I mean litigation.

Who’s to blame for shitty insecure consumer devices? Who’s going to receive the summons?

Will it be the local ISPs, or carriers? Unlikely. In the States, carriers enjoy common carrier status which indemnifies them from crimes committed using their service. In Australia the situation is less clear, but the message isn’t: ISPs are not interested in policing the behaviour of the users of their service. If you’re Walmart, forced offline by 1Tbps of traffic during the holiday sales, you can forget about suing ISPs.

Ok, what about the device manufacturers themselves? I’ll be honest, I don’t have the stamina to read the EULA paperwork that comes with a device so I cannot assert this as a fact, but I would be amazed if the liability for the damage the device did, if not expressly waived by opening the box, exceeded the purchase price. Looking towards other industries, car manufacturers are not liable for the damage their vehicles do in the case of misuse, which is why in many parts of the world licensing a vehicle to drive on a public road requires compulsory third party insurance.

Let’s cut to the chase, the reason IoT DDoS is a thing is because the security of the software inside those devices is laughable. You can debate why this is the way it is, but that does not change the fact that all the other members of this supply chain have deftly sidestepped the buck on this one, and so the liability for insecure software rests with us, the authors of said software. Because, as Robert C. Martin likes to remind us, software rules the world, so programmers rule the world.

Serious stuff.

If you look at the history of unsafe products; food, paint, hair spray, electrical goods, governments have forced manufacturers to improve with a combination of regulations and import controls. In Australia, for example, it is illegal to import a product that connects to the mains supply unless its plug has the correct shroud over the live and neutral pins. This is how governments work, they cut off a manufacturer’s air supply by forbidding them from importing their products into the country. That tends to effect change, smartly.

Will IoT be the tipping point that forces the software industry to adopt voluntary, or mandatory, regulation or procedural standardisation?


The value of TDD

What is the value of test driven development?

Is the value writing tests at the same time as you write the code? Sure, I like that property. It means that at any time you’re one control-Z away from your tests passing; either revert your test change, or fix the code so the tests pass. The nice property of this method is once you’ve implemented your feature, by definition, it’s already tested. Push that branch and lean in for the code review.

Another important property of TDD is it forces you to think about writing code that is testable, as a first class citizen. You don’t add testing after the fact, in the same way you don’t add performance or security after the code is “done” — right?

But for me, the most important property of TDD is it forces you to write your tests as a consumer of your own code, making you think about its API, continuously.

Many times people have said to me that they like the idea of TDD in principle, but have found they felt slower when they tried it. I understand completely. TDD does slow you down if you don’t have a design to work from. TDD doesn’t relieve you of the responsibility of designing your code first.

How much design you do is really up to you, but if you find yourself in the situation where you find TDD is slowing you down because you’re fighting the double whammy of changing the code and the tests at the same time, that’s a sure fire sign that you’ve run off the edge of your design map.

Robert Martin says you should not write a line of production code without a failing unit test–the key word is production code. It’s 100% OK to skip writing tests while you’re exploring the design space, just remember to budget time to rewrite this code in a TDD fashion. The good news is it won’t take you very long; you’ve already designed the code, and built one to throw away.

Threads are a strange abstraction

When you think about it, threads are a strange abstraction.

From the programmer’s point of view, threads are great. It’s as if you can create virtual CPUs, on the fly, and the operating system will take care of simulating these virtual CPUs on top of real ones.

But on an implementation level, what is a simple abstraction can lead you, the programmer, into a trap. You don’t have an infinite number of CPUs, and applying threaded abstractions in an unbounded manner invites you to overwhelm the real CPUs if you try to actually use all those virtual CPUs simultaneously, or to exhaust your address space, given the overhead needed to maintain the illusion, if they sit idle.

Careful tuning of the number of threads in use by the program, the amount of memory each thread is allocated, or both, is needed whenever threads are used in anger as a concurrency primitive. So much for abstraction.

Go’s goroutines sit right in the middle: the same programmer interface as threads, a nice imperative coding model, but an efficient implementation model based on coroutines.
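As a rough illustration of the difference in cost, this program starts one hundred thousand goroutines and waits for them to finish; attempting the same with one operating system thread per task would exhaust many machines:

package main

import (
        "fmt"
        "sync"
)

func main() {
        var wg sync.WaitGroup
        for i := 0; i < 100000; i++ {
                wg.Add(1)
                go func() {
                        // each goroutine starts with a few kilobytes of
                        // stack, grown and shrunk on demand by the runtime
                        defer wg.Done()
                }()
        }
        wg.Wait()
        fmt.Println("done")
}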

Must be willing to relocate

The must be willing to relocate to San Francisco meme has been doing the rounds on Twitter to great effect. The best jokes have a grain of truth to them. I think it is absurd to expect to draw on an infinite supply of debt burdened twenty somethings to relocate to the hottest real estate market on the planet.

A long time ago I worked for a company who hired globally and was willing to relocate people to work in Sydney on very generous terms and worked hard to make the process as painless as possible–and just to be clear, I’m not having a go at this company’s policies, I like living in Sydney, I’m sure you would too.

What I want to discuss is the potential this creates for a conflict of interest.

Say you’ve up and moved your family to Australia. Your partner has had to leave their job, your kids have left their school, everyone’s left their circle of extended family and friends. It’s a huge upheaval.

Your company is not just your sponsor, but your spouse’s and your children’s. You’ve got a strong incentive to keep your employer happy with you–your contract stipulates a 6 month probationary period. Not to mention the 12 month lease you just signed on a three bedroom apartment.

You believe in the company, you just moved your family half way around the planet to prove it, and you want to succeed at this job. But, do you really want to take big risks if it could mean finding yourself having to explain to your partner that you have to find another job in the next two weeks or you all have to leave the country?

Employers, in asking people to relocate and placing them in the position of having to make long term commitments to work at your chosen location, are you putting those employees in a position where they can speak freely and act in your best interests?

How will you be programming in a decade ?

What does the computing landscape look like in a decade ?

In a word, bifurcated.

At the individual level there will be a range of battery powered devices; watches, mobile phones, tablets with removable keyboards, and those without. They will be numerous, at a wide range of price points, allowing them to be dedicated to the individual. A personal computer, if you will.

Of course these devices will have to always be connected to a network and the eponymous cloud, and thus the other half of the puzzle. If you think you’re going to be able to walk downstairs in a decade and touch the hardware your software runs on — you’re in for a rude shock.

What happened to the middle ?

Well, Steve Jobs blew up the desktop market, and with it the outlook for PC shipments.

Desktop PC shipments, 2010-2019

But, but, I hear you say. You, the reader, might have a desktop computer for gaming, or enjoy software development on a workstation, rather than a laptop, or your phone.

That’s fine, nobody said you’re wrong, but you are increasingly a minority, and the economics of scale are not working in your favour.

What about us developers ?

Yongsan Electronics Market. Korea’s strategic reserve of whitebox desktop PCs stretches to the horizon.

So we know where the hardware is going, but a16z says software is eating the world. Who’s going to write all this software ? And if there are no desktop computers, how ?

Maybe companies like Koding are right, and we’ll all be using online tools. In which case tablets with all day battery life and WiFi are the ticket — the market is certainly betting on that.

But I think there are serious and persistent problems with the idea of always on that cannot be fixed with money.

Broadband cellular or WiFi data has an upper limit; sure, you can stack channels to make a single TCP flow go faster, but when everyone wants fast flows — and good upload bandwidth — will that scale to cities with tens of millions of individuals ? Will it scale to people who don’t want to live in said Megalopolis ?

Probably not.

The other outcome is the developer PC continues to exist, in an increasingly rarified (and expensive) form as workstations migrate to the economies of scale that drive server chip sets.

Video editing suite

This is what video editing looks like today: part PC, mostly custom packaged single use solution. Imagine what it would look like if this were what was required to produce software ?

How will you be programming in a decade ?

LISP Machine

Let’s talk about logging

This is a post inspired by a thread that Nate Finch started on the Go Forum. This post focuses on Go, but if you can see your way past that, I think the ideas presented here are widely applicable.

Why no love ?

Go’s log package doesn’t have leveled logs; you have to add prefixes like debug, info, warn, and error yourself. Also, Go’s logger type doesn’t have a way to turn these various levels on or off on a per package basis. By way of comparison, let’s look at a few replacements from third parties.

glog from Google provides the following levels:

  • Info
  • Warning
  • Error
  • Fatal (which terminates the program)

Another library, loggo, which we developed for Juju, provides the following levels:

  • Trace
  • Debug
  • Info
  • Warning
  • Error
  • Critical

Loggo also provides the ability to adjust the verbosity of logging on a per package basis.

So here are two examples, clearly influenced by other logging libraries in other languages. In fact their lineage can be traced back to syslog(3), maybe even earlier. And I think they are wrong.

I want to take a contradictory position. I think that all logging libraries are bad because they offer too many features; a bewildering array of choices that dazzles the programmer right at the point they must be thinking clearly about how to communicate with the reader from the future; the one who will be consuming their logs.

I posit that successful logging packages need far fewer features, and certainly fewer levels.

Let’s talk about warnings

Let’s start with the easiest one. Nobody needs a warning log level.

Nobody reads warnings, because by definition nothing went wrong. Maybe something might go wrong in the future, but that sounds like someone else’s problem.

Furthermore, if you’re using some kind of leveled logging then why would you set the level at warning ? You’d set the level at info, or error. Setting the level to warning is an admission that you’re probably logging errors at warning level.

Eliminate the warning level, it’s either an informational message, or an error condition.

Let’s talk about fatal

Fatal level is effectively logging the message, then calling os.Exit(1). In principle this means:

  • defer statements in other goroutines don’t run.
  • buffers aren’t flushed.
  • temporary files and directories aren’t removed.

In effect, log.Fatal is less verbose than, but semantically equivalent to, panic.

It is commonly accepted that libraries should not use panic1, but if calling log.Fatal2 has the same effect, surely this should also be outlawed.

Suggestions that this cleanup problem can be solved by registering shutdown handlers with the logging system introduce tight coupling between your logging system and every place where cleanup operations happen; it also violates the separation of concerns.

Don’t log at fatal level, prefer instead to return an error to the caller. If the error bubbles all the way up to main.main then that is the right place to handle any cleanup actions before exiting.
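Here is a sketch of that pattern; the file name is invented for the example:

package main

import (
        "fmt"
        "os"
)

func run() error {
        f, err := os.Open("config.json") // hypothetical input
        if err != nil {
                return err
        }
        // deferred cleanup runs, because we return rather than call os.Exit
        defer f.Close()
        // ... the real work of the program ...
        return nil
}

func main() {
        if err := run(); err != nil {
                fmt.Fprintln(os.Stderr, "fatal:", err)
                os.Exit(1)
        }
}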

Let’s talk about error

Error handling and logging are closely related, so on the face of it, logging at error level should be easily justifiable. I disagree.

In Go, if a function or method call returns an error value, realistically you have two options:

  • handle the error.
  • return the error to your caller. You may choose to gift wrap the error, but that is not important to this discussion.

If you choose to handle the error by logging it, by definition it’s not an error any more — you handled it. The act of logging an error handles the error, hence it is no longer appropriate to log it as an error.

Let me try to convince you with this code fragment:

err := somethingHard()
if err != nil {
        log.Error("oops, something was too hard", err)
        return err // what is this, Java ?
}

You should never be logging anything at error level because you should either handle the error, or pass it back to the caller.

To be clear, I am not saying you should not log that a condition occurred

if err := planA(); err != nil {
        log.Infof("couldn't open the foo file, continuing with plan b: %v", err)
}

but in effect log.Info and log.Error have the same purpose.

I am not saying DO NOT LOG ERRORS! Instead the question is, what is the smallest possible logging API ? And when it comes to errors, I believe that an overwhelming proportion of items logged at error level are simply done that way because they are related to an error. They are in fact just informational, hence we can remove logging at error level from our API.

What’s left ?

We’ve ruled out warnings, argued that nothing should be logged at error level, and shown that only the top level of the application should have some kind of log.Fatal behaviour. What’s left ?

I believe that there are only two things you should log:

  1. Things that developers care about when they are developing or debugging software.
  2. Things that users care about when using your software.

Obviously these are debug and info levels, respectively.

log.Info should simply write that line to the log output. There should not be an option to turn it off as the user should only be told things which are useful for them. If an error that cannot be handled occurs, it should bubble up to main.main where the program terminates. The minor inconvenience of having to insert the FATAL prefix in front of the final log message, or writing directly to os.Stderr with fmt.Fprintf, is not sufficient justification for a logging package growing a log.Fatal method.

log.Debug, is an entirely different matter. It is for the developer or support engineer to control. During development, debugging statements should be plentiful, without resorting to trace or debug2 (you know who you are) level. The log package should support fine grained control to enable or disable debug, and only debug, statements at the package or possibly even finer scope.
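As a sketch of how small such a package could be (the package name and API below are my own invention, not a concrete proposal):

package minlog

import (
        "fmt"
        "os"
        "sync"
)

var (
        mu      sync.Mutex
        debugOn = make(map[string]bool) // per-package debug toggles
)

// Info writes a message intended for the user; it cannot be turned off.
func Info(format string, args ...interface{}) {
        mu.Lock()
        defer mu.Unlock()
        fmt.Fprintf(os.Stdout, format+"\n", args...)
}

// EnableDebug turns on debug output for a single package.
func EnableDebug(pkg string) {
        mu.Lock()
        defer mu.Unlock()
        debugOn[pkg] = true
}

// Debug writes a message intended for the developer, if debugging
// has been enabled for pkg.
func Debug(pkg, format string, args ...interface{}) {
        mu.Lock()
        defer mu.Unlock()
        if debugOn[pkg] {
                fmt.Fprintf(os.Stderr, pkg+": "+format+"\n", args...)
        }
}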

Wrapping up

If this were a twitter poll, I’d ask you to choose between

  • logging is important
  • logging is hard

But the fact is, logging is both. The solution to this problem must be to deconstruct and ruthlessly pare down unnecessary distraction.

What do you think ? Is this just crazy enough to work, or just plain crazy ?


  1. Some libraries may use panic/recover as an internal control flow mechanism, but the overriding mantra is they must not let these control flow operations leak outside the package boundary.
  2. Ironically while it lacks a debug level output, the Go standard log package has both Fatal and Panic functions. In this package the number of functions that cause a program to exit abruptly outnumber those that do not.

Programming language markets

Last night at the Sydney Go Users’ meetup, Jason Buberel, product manager for the Go project, gave an excellent presentation on a product manager’s perspective on the Go project.

As part of his presentation, Buberel broke down the marketplace for a programming language into seven segments.

Programming language market for Go, courtesy Jason Buberel

As a thought experiment, I’ve taken Buberel’s market segments and applied them across a bunch of contemporary languages.

Disclaimer: I’m not a product manager, I’ve just seen one on stage.

Language                  | Embedded and IoT (1) | Systems and drivers (2) | Server and infrastructure (3) | Web and mobile (4) | Big data and scientific computing | Desktop applications (5) | Mobile applications
Go                        | 0      | 0 | 3      | 2     | 1 | 1 (6)     | 1
Rust                      | 1      | 1 | 0      | 2     | 0 | 2 (6, 11) | 0
Java                      | 2 (14) | 0 | 2      | 3     | 3 | 2 (7)     | 3
Python                    | 1      | 0 | 3 (12) | 3     | 3 | 2 (6, 10) | 0
Ruby                      | 0      | 0 | 3      | 3     | 0 | 1 (6)     | 0
Node.js (Javascript / v8) | 1 (13) | 0 | 0      | 2     | 0 | 0         | 2 (8)
Objective-C / Swift       | 0      | 3 | 2      | 2 (9) | 0 | 3         | 3
C/C++                     | 3      | 3 | 3      | 2     | 3 | 3         | 2

Is your favourite language missing ? Feel free to print this table out and draw in the missing row.

Scoring system: 0 – no presence, lack of interest or technical limitation. 1 – emerging presence or proof of concept. 2 – active competitor. 3 – market leader.


If there is a conclusion to be drawn from this rather unscientific study, every language is in competition to be the language of the backend. As for the other market segments, everyone competes with C and C++, even Java.


  1. The internet of things that are too small to run linux; microcontrollers, Arduino, ESP8266, etc.
  2. Can you write a kernel, kernel module, or operating system in it ?
  3. Monitoring systems, databases, configuration management systems, that sort of thing.
  4. Web application backends, REST APIs, microservices of all sorts.
  5. Desktop applications, including games, because the mobile applications category would certainly include games.
  6. OpenGL libraries or SDL bindings.
  7. Swing, ugh.
  8. Phonegap, React Native.
  9. Who remembers WebObjects ?
  10. Python is a popular scripting language for games.
  11. Servo, the browser rendering engine, is targeting Firefox.
  12. Openstack.
  13. Technically Lua pretending to be Javascript, but who’s counting.
  14. Thanks to @rakyll for reminding me about the Blu-ray drives, and J2ME running in everyone’s credit cards.