How to write a successful conference proposal

As an organiser of a large programming conference and a speaker who’s pitched talk ideas to many conferences, I’ve been on both sides of the selection process. Last month I published a piece on writing a proposal for GopherCon. I wanted to revisit that post in the form of more general advice to give some insight into the why, not just the how, of writing a good conference proposal.

Presentations and proposals are different things

Your talk and the proposal to give that talk are different because they target different audiences. The former is what you are going to present on stage, the latter is a pitch to the reviewers to let you give that presentation.

Writing a good conference proposal is a different skill than writing the presentation itself. This article is aimed at writing a good proposal with a focus on the reviewer of your proposal as the audience.

Focus on the audience

Speaking of audiences, good public speakers start planning a presentation by identifying the audience they want to address. Presenting at a conference is like teaching a class: you have to pitch the material at the level of the people in the room.

It’s not just a question of beginner, advanced, or expert; you also have to consider the kinds of people at the conference. If it’s a vendor conference, there are probably going to be lots of managers, (pre) sales people, and business decision makers in the audience. While they might also be competent engineers, they’re at that conference wearing their business leader hat. They want to hear a different story (reliability, ease of maintenance, or evidence of widespread adoption) than an audience of software engineers, who are more likely interested in things such as performance, orthogonality, and extensibility.

So, if a proposal is for the conference reviewers, not the audience, should you pitch the presentation at the reviewers? Well no, but you should focus on what the reviewers want.
Your goal in writing a proposal is to convince the reviewers that, as well as thinking about your idea, and how to present it, you’ve considered the people who will come to see your talk.

Who are the reviewers and what do they want?

For smaller conferences the reviewers will be the organiser, or organisers, of the conference you’re applying to. For larger conferences it will likely be a group of reviewers whom the organisers have invited to review proposals; this is the model that GopherCon follows. For really large conferences, such as OSCON, there will be a group of reviewers per track who funnel their recommendations up to a programme chair or a set of programme coordinators.

Regardless of the size of the conference, the reviewers are charged with recommending to the organisers a set of talks they think are interesting and appropriate for the audience of the conference.

Most review panels are confidential, so you shouldn’t know anything about the individual reviewers, although you can probably guess that they will be experienced in the subject of your conference.

Most proposals are reviewed anonymously, at least in the initial rounds. This means the reviewers must judge your proposal, and your ability to present it, using only the fields provided on the submission form.

It’s important to remember that, at least in part, all conferences are commercial enterprises. Venue owners have bills to pay just like the rest of us, and at a minimum speakers need to be compensated for their travel and lodging, otherwise the programme will be filled with people who are paid by their employer to speak.

To put it bluntly, reviewers are looking for talks that people will pay to see. This might sound capitalistic, but it turns out that this is what the audience want as well. At GopherCon we cover the travel and accommodation expenses of all our speakers. We think this is important because we want to hear what the speaker thinks, not their marketing department.

All of these are factors that reviewers will be considering when reading your proposal.

What to put in a proposal

Almost every conference call for proposals will ask for the following: title, abstract, and description. They may ask for other things, like a biography or questions about the AV requirements for your talk, but with respect to successful acceptance, these three items are key.

Title

A title is mandatory on almost every talk submission system I’ve seen. It’s your one line elevator pitch to entice the audience to come to your talk.

Keeping the title a little vague, or quixotic, is popular, but I tend to stay away from “11 things that will…” titles that make your proposal sound like a BuzzFeed article. I’m not saying never do that, but if you do, you’d better pack a heck of a proposal behind your braggadocio.

Abstract

Conference organisers usually ask you to provide a talk abstract, as they often don’t feel it is appropriate to summarise your proposal for you. This abstract will be printed in the programme or placed on the website so potential visitors to the event know what they’ll be seeing.

There are usually restrictions on the size of the abstract. One sentence that describes the topic that you’ll be talking about, and one sentence that describes what the audience will take away from listening to your talk or participating in your workshop, is all you need.
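
To make that concrete, here is a purely hypothetical abstract in that two sentence form: “Deploying Go services in production exposes failure modes you will never see on your laptop. In this talk you’ll learn three techniques for diagnosing these failures and how to apply them to your own services.”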

Together with the title, these are the two pieces of information the eventual conference audience will use to decide if they want to come to your session or not.

Talk description

This is where you sell your talk idea, and it is the place where, as a reviewer, I have seen so many good proposals with interesting ideas fail to make the cut because they simply didn’t include enough detail.

This is where my advice differs from others’ you’ll read on the web. Much of that advice encourages you to write less in your description, sometimes out of recognition that the organisers are busy and you don’t wish to burden them. I want to take some time to explain why I push every speaker to write more detail.

You are looking to do three things when writing a description of your talk:

  1. Make it clear to the reviewers that you know what you are talking about.
  2. Show that you have a plan to communicate what you know to the audience, and that you’ve thought about how to do this within the time limit of the speaking slot.
  3. Address all the selection criteria for the conference.

The first point is self explanatory, but you still need to make sure that you communicate it clearly to your reviewers. For example, if you’re talking about how to manage a large open source project, then make sure you mention that experience in the proposal: “as the maintainer of a large open source project”. If you plan to talk about a subject in the third person, then you should cite your sources: “for my PhD thesis I studied the day to day interactions of the top 10 projects on GitHub”. You don’t have to be an expert, but if your goal is to communicate something new to the audience, you should demonstrate that you know more about the topic than they do.

The second point relates to how likely you are to effectively communicate your ideas. The reviewers want to feel comfortable that you have a plan. It is all too common to see a proposal for an hour long session with only a sentence or two for the description. The less you write in a conference proposal, the more the reviewers are left to take it on faith that you’ll do a good job.

The opposite is also true. Occasionally I see a proposal for a talk that includes every possible aspect of a subject. Reviewers are generally wary of proposals where the speaker cannot cover all their material in the time available; few conferences can afford CppCon’s multi-part, multi-hour format.

A presenter who doesn’t manage their time, rambles without conclusion, or covers a lot of material that is common knowledge is going to waste the audience’s time. That’s not just unfair to the audience, but unfair to the speakers who follow, who must deal with a disgruntled audience.

One thing that I recommend to anyone considering submitting a proposal is to include an outline of your talk in the proposal. This can be literally the headings of your slides, or your ideas in bullet points. As a reviewer this makes it crystal clear that you’ve not only thought about your idea, but how to present it.

The last point, address all the selection criteria, I cannot emphasise enough. Review committees strive to be fair and often rate all proposals by a common standard. It is crucial to address the selection criteria clearly, as these are the ground rules by which every proposal is judged.

This point is probably the trickiest, as not all conferences publish their selection criteria. Sometimes conferences ask for talks along a particular theme, and these themes can be substituted for criteria in a pinch. If there are no criteria available, don’t guess; ask the organisers. If they don’t have any to share, which can happen with smaller conferences, then think about the audience and the wider ecosystem of the conference’s focus and ask yourself: “if I were thinking about coming to this conference, what would I like to hear about?”

If you take away one thing from this section it is this: proposals with less detail lose out to proposals that provide more, because they do not give the reviewers sufficient evidence to be confident in their recommendations.

Don’t sell snow to Eskimos

Before closing I want to highlight a very common mistake I see in both conference proposals, and conference presentations, which is a speaker selling their audience on a thing the audience already likes.

To give an example, you wouldn’t go to the JVM Language Summit and give a presentation about how great the JVM is and tell the audience they should use it. Instead, you’d show the audience the JVM is great by telling them about your project, which was only possible because you chose to base it on the JVM.

Don’t take my word for it

Finally, if you’ve read this far, I encourage you to read what others have written on the topic, especially where their advice differs.

Karolina Szczur recently wrote a great article on writing conference proposals and includes many references to similar articles for further reading.

Conclusion

Reviewers are looking to put together the best conference they can. They want to see your talk on stage, but you have to give them the evidence they need to feel confident in recommending you. Show the reviewers you’ve thought about the audience, and you’ll make their decision a lot simpler.

I’m speaking at GopherChina and GopherCon Singapore

In April and May I’ll be speaking at GopherChina and GopherCon Singapore, respectively. This post is a teaser for the talks that were selected by the organisers. If you’re in the area, I hope you’ll come and hear me speak.

GopherChina

GopherChina is the third event in this conference series and this year will return to Shanghai. I was lucky to attend the event in 2016 and am looking forward to 2017.

The hidden #pragmas of Go

Go isn’t like C. It doesn’t have a preprocessor, it doesn’t have macros, and it certainly doesn’t have #define, but Go does have pragmas.

What are pragmas? The name comes from the #pragma declaration that tells C compilers to alter their interpretation of a piece of code. Now, Go doesn’t have a #pragma directive, but it does have ways of altering the operation of the Go compiler via directive syntax hidden in comments.

This talk will explore the history of these directives, how and why they are used, and how you can, but probably shouldn’t, use them in your own code.
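
To give a flavour of what these directives look like, here is a minimal sketch using //go:noinline, one of the directives the gc compiler recognises; it tells the compiler not to inline the function that follows it:

package main

import "fmt"

//go:noinline
func add(a, b int) int {
        // the comment directly above prevents the compiler from
        // inlining this function into its callers
        return a + b
}

func main() {
        fmt.Println(add(1, 2))
}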

GopherCon Singapore

GopherCon Singapore is the latest in the GopherCon franchise, and as flight times go, relatively close to home. I’m delighted to have the opportunity to present at their inaugural conference in May.

Concurrency made Easy

In my experience, many people who come to Go do so because they have a problem where being able to run more than one task at a time in their program would be beneficial. Ruby and Python programmers come to Go because the concurrency story is much better; the same is true of Node programmers, whose event loop is inherently single threaded.

But, most programmers who stick with Go for a while tend to look back on their early efforts and say things like “wow, I really went overboard with channels” or “I went crazy with goroutines when I started writing Go. It was impossible to understand what the program did”. For people who learn Go formally from an instructor or a book, the concurrency section is always the last section they cover.

So there is a dichotomy here. Go’s headline feature is simple, lightweight concurrency. As a product the language sells itself on that feature alone. On the other hand, there is a narrative that concurrency isn’t actually that easy to use, otherwise people wouldn’t make it the last thing in their books or classes; or perhaps, more accurately, concurrency is not the solution to every problem.

With this as a background, I’d like to explore some strategies for using concurrency in Go without the pitfalls of convoluted code, the importance of memory ownership, and the best way to structure a Go program using goroutines.

Context is for cancelation

In my previous post I suggested that the best way to break the compile time coupling between the logger and the loggee was passing in a logger interface when constructing each major type in your program. The suggestion has been floated several times that logging is context specific, so maybe a logger can be passed around via a context.Context. I think this suggestion is flawed (as are most uses of context.Value, but that’s another story). This post explains why.

context.Value() is goroutine thread local storage

Using context.Context to pass a logger into a function is a poor design pattern. In effect context.Context is being used as a conduit to arbitrarily extend the API of any method that takes a context.Context value. It’s like Python’s **kwargs, or whatever the name is for that Ruby pattern of always passing a hash. Using context.Context in this way avoids an API break by smuggling data in the unstructured bag of values attached to the context. It’s thread local storage in a cheap suit.

It’s not just that values are boxed into an interface{} inside context.WithValue that I object to. The far more serious concern is that there is no schema to this data, so there is no way for a method that takes a context to ensure that it contains the specific key required to complete the operation. context.Value returns nil if the key is not found, which means any code doing the naïve

log := ctx.Value("logger").(log.Logger)
log.Warn("something you'll ignore later")

will blow up if the "logger" key is not present.

Sure, you can check that the assertion succeeded, but I feel pretty confident that if this pattern were to become popular then people would eschew the two arg form of type assertion and just expect that the key always returned a valid logger. This would be especially true as logging in error paths is rarely tested, so you’ll hit this when you need it the most.
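
For reference, the defensive version uses the two arg form of the type assertion; a sketch, assuming the same hypothetical "logger" key and log.Logger interface:

if logger, ok := ctx.Value("logger").(log.Logger); ok {
        logger.Warn("something you'll ignore later")
}
// if ok is false there is no logger, and the failure is silent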

In my opinion passing loggers inside context.Context would be the worst solution to the problem of decoupling loggers from implementations. We’d have gone from an explicit compile time dependency to an implicit run time dependency, one that could not be enforced by the compiler.

To quote @freeformz

Loggers should be injected into dependencies. Full stop.

It’s verbose, but it’s the only way to achieve decoupled design.

The package level logger anti pattern

This post is a spin-off from various conversations around improving (I’m trying not to say standardising, otherwise I’ll have to link to XKCD) the way logging is performed in Go projects.

Consider this familiar pattern for establishing a package level log variable.

package foo

import "mylogger"

var log = mylogger.GetLogger("github.com/project/foo")

What’s wrong with this pattern?

The first problem with declaring a package level log variable is the tight coupling between package foo and package mylogger. Package foo now depends directly on package mylogger at compile time.

The second problem is that the tight coupling between package foo and package mylogger is transitive. Any package that consumes package foo is itself dependent on mylogger at compile time.

This leads to a third problem: Go projects composed of packages using multiple logging libraries, or fiefdoms of projects that can only consume packages that use their particular logging library.

Avoid source level coupling

The solution to this anti pattern is to delay the binding between the type that does the logging, and the type that needs to log, until it is needed. That is, until the variable is declared.

package foo

import "github.com/pkg/log"

type T struct {
        logger log.Logger
        // other fields
}

Now the consumer of type T supplies a value of type log.Logger when constructing new T's, and the methods on T use the logger they were provided when they want to log.
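
A sketch of what that might look like; the NewT constructor and the Printf call are my assumptions, not part of the original example:

func NewT(l log.Logger) *T {
        // the logger is bound to the new T here, at construction
        return &T{logger: l}
}

func (t *T) frob() {
        t.logger.Printf("frobbed") // assumes log.Logger has a Printf method
}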

Interfaces to the rescue

The eagle eyed reader will note that the previous example removed the package level log variable, but the coupling between package foo and package log remains.

However, this can be remedied by the consumer of the logger type declaring its own interface for the behaviour it expects.

package foo

type logger interface {
        Printf(string, ...interface{})
}

type T struct {
        logger
        // other fields
}

As long as the type assigned to foo.T.logger implements foo.logger the decision for which specific type to use can be deferred until run time in the same way that io.Copy escapes any knowledge of the io.Reader and io.Writer implementations in use until it is invoked.
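
Conveniently, the standard library’s *log.Logger already has a Printf method with exactly this signature, so it satisfies foo.logger without an adapter. A minimal sketch, assuming package foo exports a constructor like NewT:

package main

import (
        "log"
        "os"

        "example.com/project/foo" // hypothetical import path
)

func main() {
        // *log.Logger provides Printf(string, ...interface{}),
        // so it satisfies foo's unexported logger interface
        t := foo.NewT(log.New(os.Stderr, "foo: ", log.LstdFlags))
        _ = t
}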

It’s not just logging

Logging is a cross cutting concern, but the anti patterns associated with it also apply to other common areas like metrics, telemetry, and auditing.

Get involved

The Go 1.9 development window is opening next month. If this topic is important to you, get involved.

Never start a goroutine without knowing how it will stop

In Go, goroutines are cheap to create and efficient to schedule. The Go runtime has been written for programs with tens of thousands of goroutines as the norm, hundreds of thousands are not unexpected. But goroutines do have a finite cost in terms of memory footprint; you cannot create an infinite number of them.

Every time you use the go keyword in your program to launch a goroutine, you must know how, and when, that goroutine will exit. If you don’t know the answer, that’s a potential memory leak.

Consider this trivial code snippet:

ch := somefunction()
go func() {
        for range ch { }
}()

This code obtains a channel from somefunction and starts a goroutine to drain it. When will this goroutine exit? It will only exit when ch is closed. When will that occur? It’s hard to say; ch is returned by somefunction. So, depending on the state of somefunction, ch might never be closed, causing the goroutine to quietly leak.

In your design, some goroutines may run until the program exits, for example a background goroutine watching a configuration file, or the main conn.Accept loop in your server. However, these goroutines are rare enough that I don’t consider them an exception to this rule.

Every time you write the statement go in a program, you should consider the question of how, and under what conditions, the goroutine you are about to start will end.
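
To make the answer explicit, one common idiom is to give the goroutine a signal to stop; a sketch, reusing the hypothetical somefunction from above:

done := make(chan struct{})
ch := somefunction()

go func() {
        for {
                select {
                case _, ok := <-ch:
                        if !ok {
                                return // ch was closed, our work is done
                        }
                case <-done:
                        return // the caller told us to stop
                }
        }
}()

// later, when the goroutine is no longer needed
close(done)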

Thinking about $GOPATH

This is a short blog post about my thoughts on using Go in anger through several workplaces, as a developer and an advocate.

What is $GOPATH?

Back when Go was first announced we used Makefiles to compile Go code. These Makefiles referenced some shared logic stored in the Go distribution. This is where $GOROOT comes from.

Back then, if you wrote Go code, you’d probably also have used these Makefiles, and while you could check out your source code anywhere, most people would put their own Go code in what today we’d call $GOROOT/src; as you must have compiled Go from source, this directory was always going to be present.

Towards the 1.0 release goinstall, then go get, solidified the use of domain names in import paths to provide a globally unique namespace. These tools introduced a new location into which Go code would be fetched. This location was separate from $GOROOT to make clear the distinction between code provided by the Go project, and code written by the developer. By the time Go 1.1 was released in 2013, $GOROOT was removed as a fallback option.

Why does $GOPATH exist?

$GOPATH exists for two main reasons:

  1. In Go, the import declaration references a package via its fully qualified import path. $GOPATH exists so that from any directory inside $GOPATH/src the go tool can compute the absolute import path of the package in question.1
  2. It provides a location to store dependencies fetched by go get.

Having a per user $GOPATH environment variable also means developers could use the go tool from any directory on their system to build, test and install code, but I suspect only a minority utilise this feature.

What’s wrong with $GOPATH?

In my experience, many newcomers to Go are frustrated with the single workspace $GOPATH model. They are confused that $GOPATH doesn’t let them check out the source of a project in a directory of their choice like they are used to with other languages. Additionally, $GOPATH does not let the developer have more than one copy of a project (or its dependencies) checked out at the same time without having to update $GOPATH constantly.

I think it is important to recognise that these issues are legitimate points of confusion for many newcomers (including those on the Go team) and act as a drag on Go adoption. As we’re on the cusp of a blessed dependency management tool for Go, I think it’s equally important to continue to question the base assumptions that this new tool will build on, namely requiring a $GOPATH.

In my opinion, any Go build tool needs to provide (in addition to actually building and testing code) a way for Go code checked out in an arbitrary location on disk to recover its intended fully qualified import path; the path other code will import it as.

The $GOPATH model answers this question by subtracting the prefix of $GOPATH/src from the path to the directory of the current package; the remainder is the package’s fully qualified import path. This is why if you check out a package outside a $GOPATH workspace the go tool cannot figure out the package’s fully qualified import path and everything falls apart.
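
In code, the $GOPATH arithmetic is simple prefix subtraction. Here is a minimal sketch of the idea, not the go tool’s actual implementation:

package main

import (
        "fmt"
        "path/filepath"
        "strings"
)

// importPath derives a package's fully qualified import path from
// its directory, assuming dir lies under gopath/src.
func importPath(gopath, dir string) (string, error) {
        src := filepath.Join(gopath, "src") + string(filepath.Separator)
        if !strings.HasPrefix(dir, src) {
                return "", fmt.Errorf("%s lies outside %s", dir, src)
        }
        return filepath.ToSlash(strings.TrimPrefix(dir, src)), nil
}

func main() {
        fmt.Println(importPath("/home/you/go", "/home/you/go/src/github.com/pkg/errors"))
        // github.com/pkg/errors <nil>
}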

What are some alternatives to $GOPATH?

I attempted to address both issues with gb, which gives developers the ability to check out a project anywhere they want, but gb has no solution for libraries, and gb projects are not go gettable. However, gb showed that a build tool which did not wrap the go tool was not forced to reorganise the world to fit into the $GOPATH model, allowing gb users to include the source of all their dependencies in their project without the pitfalls of Go 1.6’s vendor/ directory.

Recently, on a suggestion from Bill Kennedy, I built an experimental build tool, Kang, which records the expected import prefix in a manifest file. That prefix, rather than one computed by $GOPATH directory arithmetic, is used to determine the fully qualified import path.

I’m also working on a similar tool, Kodos (unfinished), based on a suggestion from Brad Fitzpatrick, which uses the .git directory as a sentinel to determine the root of the project and, hopefully, to infer the full import path from the git remote configuration.

While these experiments are unfinished, both demonstrate that you can avoid the $GOPATH restrictions and retain compatibility with the go get ecosystem. Potentially, in the case of Kodos, you can even avoid a manifest file.

Conclusion

Kang and Kodos use a lot of forked code from gb, which I hope to rectify over the new year’s break. If you are interested in contributing, or better yet building your own Go tool to explore this problem space, Kang, Kodos, and gb are permissively licensed.


Notes:

  1. This is notably different from the way imports work in scripting languages like Python and Ruby, which directly scan source code directories and insert them onto a global search path.

Declaration scopes in Go

This post is about declaration scopes and shadowing in Go.

package main

import "fmt"

func f(x int) {
	for x := 0; x < 10; x++ {
		fmt.Println(x)
	}
}

var x int

func main() {
	var x = 200
	f(x)
}

This program declares x four times. All four are different variables because they exist in different scopes.

package main

import "fmt"

func f() {
	x := 200
	fmt.Println("inside f: x =", x)
}

func main() {
	x := 100
	fmt.Println("inside main: x =", x)
	f()
	fmt.Println("inside main: x =", x)
}

In Go the scope of a declaration is bound to the closest pair of curly braces, { and }. In this example, we declare x to be 100 inside main, and 200 inside f.

What do you expect this program will print?

package main

import "fmt"

func main() {
	x := 100
	for i := 0; i < 5; i++ {
		x := i
		fmt.Println(x)
	}
	fmt.Println(x)
}

There are several scopes in a Go program: block scope, function scope, file scope, package scope, and universe scope. Each scope encompasses the previous. The program above prints 0, 1, 2, 3, 4, then 100, because the x declared inside the for loop shadows the x declared in main. What you are seeing is called shadowing.

var x = 100

func main() {
        var x = 200
        fmt.Println(x)
}

Most developers are comfortable with a function scoped variable shadowing a package scoped variable.

func f() {
        var x = 99
        if x > 90 {
                x := 60
                fmt.Println(x)
        }
}

But a block scoped variable shadowing a function scoped variable may be surprising.

The justification for allowing a declaration in one scope to shadow another is consistency; prohibiting only block scoped declarations from shadowing another scope would be inconsistent.
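
Shadowing is also behind a classic Go mistake: using := inside a block when you meant to assign to the variables in the enclosing scope. A minimal sketch:

package main

import (
        "fmt"
        "strconv"
)

func main() {
        var n int
        var err error
        if s := "42"; s != "" {
                // := declares a new n and err scoped to this block,
                // shadowing the pair declared above
                n, err := strconv.Atoi(s)
                _, _ = n, err
        }
        fmt.Println(n, err) // prints 0 <nil>, not 42
}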

Simulating minicomputers on microcontrollers

This is a short blog post to reference the slides from my builderscon 2016 presentation.

I had a great time at builderscon; the talks, from a wide selection of Japanese makers, were varied and engaging. I’m grateful to the builderscon organisers for accepting my talk and inviting me to present at the inaugural builderscon conference in Tokyo, Japan.


Go 1.8 toolchain improvements

This is a progress report on the Go toolchain improvements during the 1.8 development cycle.

Now we’re well into November, the 1.8 development window is closing fast on the few remaining in-flight change lists, with the remainder being told to wait until the 1.9 development season opens, when Go 1.8 ships in February 2017.

For more in this series, read my previous post on the Go 1.8 toolchain improvements from September, and my post on the improvements to the Go toolchain in the 1.7 development cycle.

Faster compilation

Since Go 1.5, released in August 2015, compile times have been significantly slower than Go 1.4. Work on addressing this slowdown started in earnest in the Go 1.7 cycle, and is still ongoing.

Robert Griesemer and Matthew Dempsky worked on rewriting the parser to make it faster and to remove many of the package level variables inherited from the previous yacc based parser. The new parser produces a new abstract syntax tree, while the rest of the compiler expects the previous yacc syntax tree, so for 1.8 the new parser must transform its output into the previous syntax tree for consumption by the rest of the compiler. Even with this extra transformation step the new parser is no slower than the previous version, and plans are being made to remove this transformation requirement in Go 1.9.

Compile time for full build relative to Go 1.4.3

The take away is that Go 1.8 is on target to improve compile times by an average of 15% over Go 1.7. Compared to the 3-5% improvement reported two months prior, it’s nice to know that there is still blood in this stone.

Note: The benchmark scripts for jujud, kube-controller-manager, and gogs are online. Please try them yourself and report your findings.

Code generation improvements

The big feature of the previous 1.7 cycle was the new SSA backend for 64 bit Intel. In Go 1.8 the SSA backend has been rolled out to all the other architectures that Go supports and the old backend code has been deleted.

amd64, by virtue of being the most popular production architecture, has always been the fastest. As I reported a few months ago, the results comparing Go 1.8 to Go 1.7 on Intel architectures show middling improvement, driven equally by better code generation, improved escape analysis, and optimisations to the standard library.

name                     old time/op    new time/op    delta
BinaryTree17-4              3.04s ± 1%     3.03s ± 0%     ~     (p=0.222 n=5+5)
Fannkuch11-4                3.27s ± 0%     3.39s ± 1%   +3.74%  (p=0.008 n=5+5)
FmtFprintfEmpty-4          60.0ns ± 3%    58.3ns ± 1%   -2.70%  (p=0.008 n=5+5)
FmtFprintfString-4          177ns ± 2%     164ns ± 2%   -7.47%  (p=0.008 n=5+5)
FmtFprintfInt-4             169ns ± 2%     157ns ± 1%   -7.22%  (p=0.008 n=5+5)
FmtFprintfIntInt-4          264ns ± 1%     243ns ± 1%   -8.10%  (p=0.008 n=5+5)
FmtFprintfPrefixedInt-4     254ns ± 2%     244ns ± 1%   -4.02%  (p=0.008 n=5+5)
FmtFprintfFloat-4           357ns ± 1%     348ns ± 2%   -2.35%  (p=0.032 n=5+5)
FmtManyArgs-4              1.10µs ± 1%    0.97µs ± 1%  -11.03%  (p=0.008 n=5+5)
GobDecode-4                9.85ms ± 1%    9.31ms ± 1%   -5.51%  (p=0.008 n=5+5)
GobEncode-4                8.75ms ± 1%    8.17ms ± 1%   -6.67%  (p=0.008 n=5+5)
Gzip-4                      282ms ± 0%     289ms ± 1%   +2.32%  (p=0.008 n=5+5)
Gunzip-4                   50.9ms ± 1%    51.7ms ± 0%   +1.67%  (p=0.008 n=5+5)
HTTPClientServer-4          195µs ± 1%     196µs ± 1%     ~     (p=0.095 n=5+5)
JSONEncode-4               21.6ms ± 6%    19.8ms ± 3%   -8.37%  (p=0.008 n=5+5)
JSONDecode-4               70.2ms ± 3%    71.0ms ± 1%     ~     (p=0.310 n=5+5)
Mandelbrot200-4            5.20ms ± 0%    4.73ms ± 1%   -9.05%  (p=0.008 n=5+5)
GoParse-4                  4.38ms ± 3%    4.28ms ± 2%     ~     (p=0.056 n=5+5)
RegexpMatchEasy0_32-4      96.7ns ± 2%    98.1ns ± 0%     ~     (p=0.127 n=5+5)
RegexpMatchEasy0_1K-4       311ns ± 1%     313ns ± 0%     ~     (p=0.214 n=5+5)
RegexpMatchEasy1_32-4      97.9ns ± 2%    89.8ns ± 2%   -8.33%  (p=0.008 n=5+5)
RegexpMatchEasy1_1K-4       519ns ± 0%     510ns ± 2%   -1.70%  (p=0.040 n=5+5)
RegexpMatchMedium_32-4      158ns ± 2%     146ns ± 0%   -7.71%  (p=0.016 n=5+4)
RegexpMatchMedium_1K-4     46.3µs ± 1%    47.8µs ± 2%   +3.12%  (p=0.008 n=5+5)
RegexpMatchHard_32-4       2.53µs ± 3%    2.46µs ± 0%   -2.91%  (p=0.008 n=5+5)
RegexpMatchHard_1K-4       76.1µs ± 0%    74.5µs ± 2%   -2.12%  (p=0.008 n=5+5)
Revcomp-4                   563ms ± 2%     531ms ± 1%   -5.78%  (p=0.008 n=5+5)
Template-4                 86.7ms ± 1%    82.2ms ± 1%   -5.16%  (p=0.008 n=5+5)
TimeParse-4                 433ns ± 3%     399ns ± 4%   -7.90%  (p=0.008 n=5+5)
TimeFormat-4                467ns ± 2%     430ns ± 1%   -7.76%  (p=0.008 n=5+5)

name                     old speed      new speed      delta
GobDecode-4              77.9MB/s ± 1%  82.5MB/s ± 1%   +5.84%  (p=0.008 n=5+5)
GobEncode-4              87.7MB/s ± 1%  94.0MB/s ± 1%   +7.15%  (p=0.008 n=5+5)
Gzip-4                   68.8MB/s ± 0%  67.2MB/s ± 1%   -2.27%  (p=0.008 n=5+5)
Gunzip-4                  381MB/s ± 1%   375MB/s ± 0%   -1.65%  (p=0.008 n=5+5)
JSONEncode-4             89.9MB/s ± 5%  98.1MB/s ± 3%   +9.11%  (p=0.008 n=5+5)
JSONDecode-4             27.6MB/s ± 3%  27.3MB/s ± 1%     ~     (p=0.310 n=5+5)
GoParse-4                13.2MB/s ± 3%  13.5MB/s ± 2%     ~     (p=0.056 n=5+5)
RegexpMatchEasy0_32-4     331MB/s ± 2%   326MB/s ± 0%     ~     (p=0.151 n=5+5)
RegexpMatchEasy0_1K-4    3.29GB/s ± 1%  3.27GB/s ± 0%     ~     (p=0.222 n=5+5)
RegexpMatchEasy1_32-4     327MB/s ± 2%   357MB/s ± 2%   +9.20%  (p=0.008 n=5+5)
RegexpMatchEasy1_1K-4    1.97GB/s ± 0%  2.01GB/s ± 2%   +1.76%  (p=0.032 n=5+5)
RegexpMatchMedium_32-4   6.31MB/s ± 2%  6.83MB/s ± 1%   +8.31%  (p=0.008 n=5+5)
RegexpMatchMedium_1K-4   22.1MB/s ± 1%  21.4MB/s ± 2%   -3.01%  (p=0.008 n=5+5)
RegexpMatchHard_32-4     12.6MB/s ± 3%  13.0MB/s ± 0%   +2.98%  (p=0.008 n=5+5)
RegexpMatchHard_1K-4     13.4MB/s ± 0%  13.7MB/s ± 2%   +2.19%  (p=0.008 n=5+5)
Revcomp-4                 451MB/s ± 2%   479MB/s ± 1%   +6.12%  (p=0.008 n=5+5)
Template-4               22.4MB/s ± 1%  23.6MB/s ± 1%   +5.43%  (p=0.008 n=5+5)

The big improvements from the switch to the SSA backend show up on non-Intel architectures. Here are the results for arm64:

name                     old time/op    new time/op     delta
BinaryTree17-8              10.6s ± 0%       8.1s ± 1%  -23.62%  (p=0.016 n=4+5)
Fannkuch11-8                9.19s ± 0%      5.95s ± 0%  -35.27%  (p=0.008 n=5+5)
FmtFprintfEmpty-8           136ns ± 0%      118ns ± 1%  -13.53%  (p=0.008 n=5+5)
FmtFprintfString-8          472ns ± 1%      331ns ± 1%  -29.82%  (p=0.008 n=5+5)
FmtFprintfInt-8             388ns ± 3%      273ns ± 0%  -29.61%  (p=0.008 n=5+5)
FmtFprintfIntInt-8          640ns ± 2%      438ns ± 0%  -31.61%  (p=0.008 n=5+5)
FmtFprintfPrefixedInt-8     580ns ± 0%      423ns ± 0%  -27.09%  (p=0.008 n=5+5)
FmtFprintfFloat-8           823ns ± 0%      613ns ± 1%  -25.57%  (p=0.008 n=5+5)
FmtManyArgs-8              2.69µs ± 0%     1.96µs ± 0%  -27.12%  (p=0.016 n=4+5)
GobDecode-8                24.4ms ± 0%     17.3ms ± 0%  -28.88%  (p=0.008 n=5+5)
GobEncode-8                18.6ms ± 0%     15.1ms ± 1%  -18.65%  (p=0.008 n=5+5)
Gzip-8                      1.20s ± 0%      0.74s ± 0%  -38.02%  (p=0.008 n=5+5)
Gunzip-8                    190ms ± 0%      130ms ± 0%  -31.73%  (p=0.008 n=5+5)
HTTPClientServer-8          205µs ± 1%      166µs ± 2%  -19.27%  (p=0.008 n=5+5)
JSONEncode-8               50.7ms ± 0%     41.5ms ± 0%  -18.10%  (p=0.008 n=5+5)
JSONDecode-8                201ms ± 0%      155ms ± 1%  -22.93%  (p=0.008 n=5+5)
Mandelbrot200-8            13.0ms ± 0%     10.1ms ± 0%  -22.78%  (p=0.008 n=5+5)
GoParse-8                  11.4ms ± 0%      8.5ms ± 0%  -24.80%  (p=0.008 n=5+5)
RegexpMatchEasy0_32-8       271ns ± 0%      225ns ± 0%  -16.97%  (p=0.008 n=5+5)
RegexpMatchEasy0_1K-8      1.69µs ± 0%     1.92µs ± 0%  +13.42%  (p=0.008 n=5+5)
RegexpMatchEasy1_32-8       292ns ± 0%      255ns ± 0%  -12.60%  (p=0.000 n=4+5)
RegexpMatchEasy1_1K-8      2.20µs ± 0%     2.38µs ± 0%   +8.38%  (p=0.008 n=5+5)
RegexpMatchMedium_32-8      411ns ± 0%      360ns ± 0%  -12.41%  (p=0.000 n=5+4)
RegexpMatchMedium_1K-8      118µs ± 0%      104µs ± 0%  -12.07%  (p=0.008 n=5+5)
RegexpMatchHard_32-8       6.83µs ± 0%     5.79µs ± 0%  -15.27%  (p=0.016 n=4+5)
RegexpMatchHard_1K-8        205µs ± 0%      176µs ± 0%  -14.19%  (p=0.008 n=5+5)
Revcomp-8                   2.01s ± 0%      1.43s ± 0%  -29.02%  (p=0.008 n=5+5)
Template-8                  259ms ± 0%      158ms ± 0%  -38.93%  (p=0.008 n=5+5)
TimeParse-8                 874ns ± 1%      733ns ± 1%  -16.16%  (p=0.008 n=5+5)
TimeFormat-8               1.00µs ± 1%     0.86µs ± 1%  -13.88%  (p=0.008 n=5+5)

name                     old speed      new speed       delta
GobDecode-8              31.5MB/s ± 0%   44.3MB/s ± 0%  +40.61%  (p=0.008 n=5+5)
GobEncode-8              41.3MB/s ± 0%   50.7MB/s ± 1%  +22.92%  (p=0.008 n=5+5)
Gzip-8                   16.2MB/s ± 0%   26.1MB/s ± 0%  +61.33%  (p=0.008 n=5+5)
Gunzip-8                  102MB/s ± 0%    150MB/s ± 0%  +46.45%  (p=0.016 n=4+5)
JSONEncode-8             38.3MB/s ± 0%   46.7MB/s ± 0%  +22.10%  (p=0.008 n=5+5)
JSONDecode-8             9.64MB/s ± 0%  12.49MB/s ± 0%  +29.54%  (p=0.016 n=5+4)
GoParse-8                5.09MB/s ± 0%   6.78MB/s ± 0%  +33.02%  (p=0.008 n=5+5)
RegexpMatchEasy0_32-8     118MB/s ± 0%    142MB/s ± 0%  +20.29%  (p=0.008 n=5+5)
RegexpMatchEasy0_1K-8     605MB/s ± 0%    534MB/s ± 0%  -11.85%  (p=0.016 n=5+4)
RegexpMatchEasy1_32-8     110MB/s ± 0%    125MB/s ± 0%  +14.23%  (p=0.029 n=4+4)
RegexpMatchEasy1_1K-8     465MB/s ± 0%    430MB/s ± 0%   -7.72%  (p=0.008 n=5+5)
RegexpMatchMedium_32-8   2.43MB/s ± 0%   2.77MB/s ± 0%  +13.99%  (p=0.016 n=5+4)
RegexpMatchMedium_1K-8   8.68MB/s ± 0%   9.87MB/s ± 0%  +13.71%  (p=0.008 n=5+5)
RegexpMatchHard_32-8     4.68MB/s ± 0%   5.53MB/s ± 0%  +18.08%  (p=0.016 n=4+5)
RegexpMatchHard_1K-8     5.00MB/s ± 0%   5.83MB/s ± 0%  +16.60%  (p=0.008 n=5+5)
Revcomp-8                 126MB/s ± 0%    178MB/s ± 0%  +40.88%  (p=0.008 n=5+5)
Template-8               7.48MB/s ± 0%  12.25MB/s ± 0%  +63.74%  (p=0.008 n=5+5)

These are pretty big improvements from just recompiling your binary.

Defer and cgo improvements

The question of whether defer can be used in hot code paths remains open, but during the 1.8 cycle Austin reduced the overhead of using defer by half, according to some benchmarks.

The runtime package benchmarks are a little less rosy.

name         old time/op  new time/op  delta
Defer-4       101ns ± 1%    66ns ± 0%  -34.73%  (p=0.000 n=20+20)
Defer10-4    93.2ns ± 1%  62.5ns ± 8%  -33.02%  (p=0.000 n=20+20)
DeferMany-4   148ns ± 3%   131ns ± 3%  -11.42%  (p=0.000 n=19+19)

According to them, defer improved by a third in the most common circumstance: where the statement closes over no more than a single variable.

Additionally, an optimisation by David Crawshaw reduced the overhead of defer in the cgo path by nearly half.

name       old time/op  new time/op  delta
CgoNoop-8  93.5ns ± 0%  51.1ns ± 1%  -45.34%  (p=0.016 n=4+5)

One more thing

Go 1.7 supported 64 bit mips platforms, thanks to the work of Minux and Cherry. However, the less powerful, but plentiful, 32 bit mips platforms were not supported. As a bonus, thanks to the work of Vladimir Stefanovic, Go 1.8 will ship with support for 32 bit mips.

% env GOARCH=mips go build -o godoc.mips golang.org/x/tools/cmd/godoc
% file godoc.mips 
godoc.mips: ELF 32-bit MSB  executable, MIPS, MIPS32 version 1 (SYSV), statically linked, not stripped

While 32 bit mips hosts are probably too small to compile Go programs natively, you can always cross compile from your development workstation for linux/mips.

Do not fear first class functions

This is the text of my dotGo 2016 presentation. A recording and slide deck are also available.



Hello, welcome to dotGo.

Two years ago I stood on a stage, not unlike this one, and told you my opinion for how configuration options should be handled in Go. The cornerstone of my presentation was Rob Pike’s blog post, Self-referential functions and the design of options.

Since then it has been wonderful to watch this idea mature from Rob’s original blog post, to the gRPC project, who in my opinion have continued to evolve this design pattern into its best form to date.

But, when talking to Gophers at a conference in London a few months ago, several of them expressed a concern that while they understood the notion of a function that returns a function, the technique that powers functional options, they worried that other Go programmers—I suspect they meant less experienced Go programmers—wouldn’t be able to understand this style of programming.

And this made me a bit sad because I consider Go’s support of first class functions to be a gift, and something that we should all be able to take advantage of. So I’m here today to show you, that you do not need to fear first class functions.

Functional options recap

To begin, I’ll very quickly recap the functional options pattern.

type Config struct{ ... }

func WithReticulatedSplines(c *Config) { ... }

type Terrain struct {
        config Config
}

func NewTerrain(options ...func(*Config)) *Terrain {
        var t Terrain
        for _, option := range options {
                option(&t.config)
        }
        return &t

}

func main() {
        t := NewTerrain(WithReticulatedSplines)
        // [ simulation intensifies ]
}

We start with some options, expressed as functions which take a pointer to a structure to configure. We pass those functions to a constructor, and inside the body of that constructor each option function is invoked in order, passing in a reference to the Config value. Finally, we call NewTerrain with the options we want, and away we go.

Okay, everyone should be familiar with this pattern. Where I believe the confusion comes from is when you need an option function which takes a parameter. For example, we have WithCities, which lets us add a number of cities to our terrain model.

// WithCities adds n cities to the Terrain model
func WithCities(n int) func(*Config) { ... }

func main() {
        t := NewTerrain(WithCities(9))
        // ...
}

Because WithCities takes an argument, we cannot simply pass WithCities to NewTerrain; its signature does not match. Instead we evaluate WithCities, passing in the number of cities to create, and use the result as the value to pass to NewTerrain.
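
For completeness, here is one possible body for WithCities; a sketch which assumes Config has a Cities field, elided in the example above:

// WithCities returns an option function that records n in the
// Config it is eventually applied to.
func WithCities(n int) func(*Config) {
        return func(c *Config) {
                c.Cities = n // Cities is an assumed field
        }
}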

Functions as first class values

What’s going on here? Let’s break it down. Fundamentally, evaluating a function returns a value. We have functions that take two numbers and return a number.

package math

func Min(a, b float64) float64

We have functions that take a slice, and return a pointer to a structure.

package bytes

func NewReader(b []byte) *Reader

and now we have a function which returns a function.

func WithCities(n int) func(*Config)

The type of the value that is returned from WithCities is a function which takes a pointer to a Config. This ability to treat functions as regular values leads to their name: first class functions.

interface.Apply

Another way to think about what is going on here is to try to rewrite the functional option pattern using an interface.

type Option interface {
        Apply(*Config)
}

Rather than a function type we declare an interface, we’ll call it Option, and give it a single method, Apply which takes a pointer to a Config.

func NewTerrain(options ...Option) *Terrain {
        var config Config
        for _, option := range options {
                option.Apply(&config)
        }
        // ...
}

Whenever we call NewTerrain we pass in one or more values that implement the Option interface. Inside NewTerrain, just as before, we loop over the slice of options and call the Apply method on each.

This doesn’t look too different to the previous example. Rather than ranging over a slice of functions and calling them, we range over a slice of interface values and call a method on each. Let’s take a look at the other side, declaring the WithReticulatedSplines option.

type splines struct{}

func (s *splines) Apply(c *Config) { ... }

func WithReticulatedSplines() Option {
        return new(splines)
}

Because we’re passing around interface implementations, we need to declare a type to hold the Apply method. We also need to declare a constructor function to return our splines option implementation. You can already see that this is going to be more code.

To write WithCities using our Option interface we need to do a bit more work.

type cities struct {
        cities int
}

func (c *cities) Apply(conf *Config) { ... }

func WithCities(n int) Option {
        return &cities{
                cities: n,
        }
}

In the previous, functional, version the value of n, the number of cities to create, was captured lexically for us in the declaration of the anonymous function. Because we’re using an interface we need to declare a type to hold the count of cities and we need a constructor to assign the field during construction.

func main() {
        t := NewTerrain(WithReticulatedSplines(), WithCities(9))
        // ...
}

Putting it all together, we call NewTerrain with the results of evaluating WithReticulatedSplines and WithCities.

At GopherCon last year Tomás Senart spoke about the duality of a first class function and an interface with one method. You can see this duality play out in our example; an interface with one method and a function are equivalent.

But, you can also see that using functions as first class values involves much less code.
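
The standard library leans on this same duality. http.HandlerFunc is a function type whose ServeHTTP method calls the function itself, which lets any plain function with the right signature satisfy the one method http.Handler interface:

package main

import (
        "fmt"
        "net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello")
}

func main() {
        // HandlerFunc adapts the function hello to the http.Handler interface
        var h http.Handler = http.HandlerFunc(hello)
        http.ListenAndServe(":8080", h)
}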

Encapsulating behaviour

Let’s leave interfaces for a moment and talk about some other properties of first class functions.

When we invoke a function or a method, we do so passing around data. The job of that function is often to interpret that data and take some action. Function values allow you to pass behaviour to be executed, rather than data to be interpreted. In effect, passing a function value allows you to declare code that will execute later, perhaps in a different context.

To illustrate this, here is a simple calculator.

type Calculator struct {
        acc float64
}

const (
        OP_ADD = 1 << iota
        OP_SUB
        OP_MUL
)

It has a set of operations it understands.

func (c *Calculator) Do(op int, v float64) float64 {
        switch op {
        case OP_ADD:
                c.acc += v
        case OP_SUB:
                c.acc -= v
        case OP_MUL:
                c.acc *= v
        default:
                panic("unhandled operation")
        }
        return c.acc
}

It has one method, Do, which takes an operation and an operand, v. For convenience, Do also returns the value of the accumulator after the operation is applied.

func main() {
        var c Calculator
        fmt.Println(c.Do(OP_ADD, 100))     // 100
        fmt.Println(c.Do(OP_SUB, 50))      // 50
        fmt.Println(c.Do(OP_MUL, 2))       // 100
}

Our calculator only knows how to add, subtract, and multiply. If we wanted to implement division, we’d have to allocate an operation constant, then open up the Do method and add the code to implement division. Sounds reasonable, it’s only a few lines, but what if we wanted to add square root and exponentiation?

Each time we do this, Do grows longer and becomes harder to follow, because each time we add an operation we have to encode into Do the knowledge of how to interpret that operation.

Let’s rewrite our calculator a little.

type Calculator struct {
        acc float64
}

type opfunc func(float64, float64) float64

func (c *Calculator) Do(op opfunc, v float64) float64 {
        c.acc = op(c.acc, v)
        return c.acc
}

As before we have a Calculator, which manages its own accumulator. The Calculator has a Do method, which this time takes a function as the operation, and a value as the operand. Whenever Do is called, it calls the operation we pass in, using its own accumulator and the operand we provide.

So, how do we use this new Calculator? You guessed it, by writing our operations as functions.

func Add(a, b float64) float64 { return a + b }

This is the code for Add. What about the other operations? It turns out they aren’t too hard either.

func Sub(a, b float64) float64 { return a - b }
func Mul(a, b float64) float64 { return a * b }

func main() {
        var c Calculator
        fmt.Println(c.Do(Add, 5))       // 5
        fmt.Println(c.Do(Sub, 3))       // 2
        fmt.Println(c.Do(Mul, 8))       // 16
}

As before we construct a Calculator and call it passing operations and an operand.

Extending the calculator

Now that we can describe operations as functions, we can try to extend our calculator to handle square root.

func Sqrt(n, _ float64) float64 {
        return math.Sqrt(n)
}

But, it turns out there is a problem. math.Sqrt takes one argument, not two. However our Calculator’s Do method’s signature requires an operation function that takes two arguments.

func main() {
        var c Calculator
        c.Do(Add, 16)
        c.Do(Sqrt, 0) // operand ignored
}

Maybe we just cheat and ignore the operand. That’s a bit gross; I think we can do better.

Let’s redefine Add from a function that is called with two values and returns a third, to a function which returns a function that takes a value and returns a value.

func Add(n float64) func(float64) float64 {
        return func(acc float64) float64 {
                return acc + n
        }
}

func (c *Calculator) Do(op func(float64) float64) float64 {
        c.acc = op(c.acc)
        return c.acc
}

Do now invokes the operation function passing in its own accumulator and recording the result back in the accumulator.

func main() {
        var c Calculator
        c.Do(Add(10))   // 10
        c.Do(Add(20))   // 30
}

Now in main we call Do not with the Add function itself, but with the result of evaluating Add(10). The type of the result of evaluating Add(10) is a function which takes a value, and returns a value, matching the signature that Do requires.

func Sub(n float64) func(float64) float64 {
        return func(acc float64) float64 {
                return acc - n
        }
}

func Mul(n float64) func(float64) float64 {
        return func(acc float64) float64 {
                return acc * n
        }
}

Subtraction and multiplication are similarly easy to implement. But what about square root?

func Sqrt() func(float64) float64 {
        return func(n float64) float64 {
                return math.Sqrt(n)
        }
}

func main() {
        var c Calculator
        c.Do(Add(2))
        c.Do(Sqrt())   // 1.41421356237
}

This implementation of square root avoids the awkward syntax of the previous calculator’s operation function, as our revised calculator now operates on functions which take and return only one value.

Hopefully you’ve noticed that the signature of our Sqrt function is the same as math.Sqrt, so we can make this code smaller by reusing any function from the math package that takes a single argument.

func main() {
        var c Calculator
        c.Do(Add(2))      // 2
        c.Do(math.Sqrt)   // 1.41421356237
        c.Do(math.Cos)    // 0.15594369
}

We started with a model of hard coded, interpreted logic. We moved to a more functional model, where we pass in the behaviour we want. Then, by taking it a step further, we generalised our calculator to work for operations regardless of their number of arguments.

Let’s talk about actors

Let’s change tracks a little and talk about why most of us are here at a Go conference: concurrency; specifically, actors. To give due credit, the examples here are inspired by Bryan Boreham’s talk from GolangUK; you should check it out.

Suppose we’re building a chat server, we plan to be the next Hipchat or Slack, but we’ll start small for the moment.

type Mux struct {
        mu    sync.Mutex
        conns map[net.Addr]net.Conn
}

func (m *Mux) Add(conn net.Conn) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.conns[conn.RemoteAddr()] = conn
}

We have a way to register new connections.

func (m *Mux) Remove(addr net.Addr) {
        m.mu.Lock()
        defer m.mu.Unlock()
        delete(m.conns, addr)
}

Remove old connections.

func (m *Mux) SendMsg(msg string) error {
        m.mu.Lock()
        defer m.mu.Unlock()
        for _, conn := range m.conns {
                err := io.WriteString(conn, msg)
                if err != nil {
                        return err
                }
        }
        return nil
}

And a way to send a message to all the registered connections. Because this is a server, all of these methods will be called concurrently, so we need to use a mutex to protect the conns map and prevent data races. Is this what you’d call idiomatic Go code?

Don’t communicate by sharing memory, share memory by communicating.

Our first proverb: don’t mediate access to shared memory with locks and mutexes; instead, share that memory by communicating. So let’s apply this advice to our chat server.

Rather than using a mutex to serialise access to the Mux‘s conns map, we can give that job to a goroutine, and communicate with that goroutine via channels.

type Mux struct {
        add     chan net.Conn
        remove  chan net.Addr
        sendMsg chan string
}

func (m *Mux) Add(conn net.Conn) {
        m.add <- conn
}

Add sends the connection to add to the add channel.

func (m *Mux) Remove(addr net.Addr) {
        m.remove <- addr
}

Remove sends the address of the connection to the remove channel.

func (m *Mux) SendMsg(msg string) error {
        m.sendMsg <- msg
        return nil
}

And send message sends the message to be transmitted to each connection to the sendMsg channel.

func (m *Mux) loop() {
        // conns is owned exclusively by loop; no mutex required
        conns := make(map[net.Addr]net.Conn)
        for {
                select {
                case conn := <-m.add:
                        conns[conn.RemoteAddr()] = conn
                case addr := <-m.remove:
                        delete(conns, addr)
                case msg := <-m.sendMsg:
                        for _, conn := range conns {
                                io.WriteString(conn, msg)
                        }
                }
        }
}

Rather than using a mutex to serialise access to the conns map, loop will wait until it receives an operation in the form of a value sent over one of the add, remove, or sendMsg channels and apply the relevant case. We don’t need a mutex anymore because the shared state, our conns map, is local to the loop function.

But, there’s still a lot of hard coded logic here. loop only knows how to do three things; add, remove and broadcast a message. As with the previous example, adding new features to our Mux type will involve:

  • creating a channel.
  • adding a helper to send the data over the channel.
  • extending the select logic inside loop to process that data.

Just like our Calculator example, we can rewrite our Mux to use first class functions to pass around behaviour we want executed, not data to interpret. Now, each method sends an operation to be executed in the context of the loop function, using our single ops channel.

type Mux struct {
        ops chan func(map[net.Addr]net.Conn)
}

func (m *Mux) Add(conn net.Conn) {
        m.ops <- func(m map[net.Addr]net.Conn) {
                m[conn.RemoteAddr()] = conn
        }
}

In this case the signature of the operation is a function which takes a map of net.Addr’s to net.Conn’s. In a real program you’d probably have a much more complicated type to represent a client connection, but it’s sufficient for the purpose of this example.

func (m *Mux) Remove(addr net.Addr) {
        m.ops <- func(m map[net.Addr]net.Conn) {
                delete(m, addr)
        }
}

Remove is similar, we send a function that deletes its connection’s address from the supplied map.

func (m *Mux) SendMsg(msg string) error {
        m.ops <- func(m map[net.Addr]net.Conn) {
                for _, conn := range m {
                        io.WriteString(conn, msg)
                }
        }
        return nil
}

SendMsg is a function which iterates over all connections in the supplied map and calls io.WriteString to send each a copy of the message.

func (m *Mux) loop() {

        conns := make(map[net.Addr]net.Conn)
        for op := range m.ops {
                op(conns)
        }
}

You can see that we’ve moved the logic from the body of loop into anonymous functions created by our helpers. So the job of loop is now to create a conns map, wait for an operation to be provided on the ops channel, then invoke it, passing in its map of connections.

But there are a few problems still to fix. The most pressing is the lack of error handling in SendMsg; an error writing to a connection will not be communicated back to the caller. So let’s fix that now.

func (m *Mux) SendMsg(msg string) error {
        result := make(chan error, 1)
        m.ops <- func(m map[net.Addr]net.Conn) {
                for _, conn := range m {
                        err := io.WriteString(conn, msg)
                        if err != nil {
                                result <- err
                                return
                        }
                }
                result <- nil
        }
        return <-result
}

To handle the error being generated inside the anonymous function we pass to loop, we need to create a channel to communicate the result of the operation. This also creates a point of synchronisation; the last line of SendMsg blocks until the function we passed into loop has been executed.

func (m *Mux) loop() {
        conns := make(map[net.Addr]net.Conn)
        for op := range m.ops {
                op(conns)
        }
}

Note that we didn’t have to change the body of loop at all to incorporate this error handling. And now that we know how to do this, we can easily add a new function to Mux to send a private message to a single client.

func (m *Mux) PrivateMsg(addr net.Addr, msg string) error {
        result := make(chan net.Conn, 1)
        m.ops <- func(m map[net.Addr]net.Conn) {
                result <- m[addr]
        }
        conn := <-result
        if conn == nil {
                return errors.Errorf("client %v not registered", addr)
        }
        return io.WriteString(conn, msg)
}

To do this we pass a “lookup function” to loop via the ops channel, which will look in the map provided to it—this is loop’s conns map—and return the value for the address we want on the result channel.

In the rest of the function we check to see if the result was nil—the zero value from the map lookup implies that the client is not registered. Otherwise we now have a reference to the client and we can call io.WriteString to send them a message.

And just to reiterate, we did this all without changing the body of loop, or affecting any of the other operations.

Conclusion

In summary

  • First class functions bring you tremendous expressive power. They let you pass around behaviour, not just dead data that must be interpreted.
  • First class functions aren’t new or novel. Many older languages have offered them, even C. In fact, it was only somewhere along the line of removing pointers that programmers in the OO stream of languages lost access to first class functions. If you’re a JavaScript programmer, you’ve probably spent the last 15 minutes wondering what the big deal is.
  • First class functions, like the other features Go offers, should be used with restraint. Just as it is possible to make an overcomplicated program with the overuse of channels, it’s possible to make an impenetrable program with an overuse of first class functions. But that does not mean you shouldn’t use them at all; just use them in moderation.
  • First class functions are something that I believe every Go programmer should have in their toolbox. First class functions aren’t unique to Go, and Go programmers shouldn’t be afraid of them.
  • If you can learn to use interfaces, you can learn to use first class functions. They aren’t hard, just a little unfamiliar, and unfamiliarity is something that I believe can be overcome with time and practice.

So next time you define an API that has just one method, ask yourself, shouldn’t it really just be a function?