Category Archives: Go

Go 1.1 on the Cubieboard 2

Thanks to Little Bird Electronics I just picked up the recently released Cubieboard 2.
Cubieboard 2
For less than 90 bucks Australian you get the case, 4GB of onboard NAND flash, a USB to serial adapter, a USB to power adapter (although you should use a real wall wart), and an adapter for the onboard SATA port which can drive the 5 volt rail of a 2.5″ drive directly from the board! This blows the old iMX.53 loco into the weeds.

cubie@Cubian:~/go/test/bench/go1$ go version
go version go1.1.1 linux/arm
cubie@Cubian:~/go/test/bench/go1$ go test -bench=.
testing: warning: no tests to run
PASS
BenchmarkBinaryTree17          1        48584450065 ns/op
BenchmarkFannkuch11            1        39106118893 ns/op
BenchmarkFmtFprintfEmpty         2000000               946 ns/op
BenchmarkFmtFprintfString        1000000              2901 ns/op
BenchmarkFmtFprintfInt   1000000              2111 ns/op
BenchmarkFmtFprintfIntInt         500000              3291 ns/op
BenchmarkFmtFprintfPrefixedInt    500000              3530 ns/op
BenchmarkFmtFprintfFloat          200000              7820 ns/op
BenchmarkFmtManyArgs      100000             13727 ns/op
BenchmarkGobDecode            10         132398137 ns/op           5.80 MB/s
BenchmarkGobEncode            50          62968325 ns/op          12.19 MB/s
BenchmarkGzip          1        5177324419 ns/op           3.75 MB/s
BenchmarkGunzip        5         900631458 ns/op          21.55 MB/s
BenchmarkHTTPClientServer           2000            793348 ns/op
BenchmarkJSONEncode            5         632428975 ns/op           3.07 MB/s
BenchmarkJSONDecode            1        1431043626 ns/op           1.36 MB/s
BenchmarkMandelbrot200        50          43100116 ns/op
BenchmarkGoParse              50          53644104 ns/op           1.08 MB/s
BenchmarkRegexpMatchEasy0_32     1000000              1186 ns/op  26.96 MB/s
BenchmarkRegexpMatchEasy0_1K      500000              4956 ns/op 206.61 MB/s
BenchmarkRegexpMatchEasy1_32     1000000              1205 ns/op  26.55 MB/s
BenchmarkRegexpMatchEasy1_1K      200000             11982 ns/op  85.46 MB/s
BenchmarkRegexpMatchMedium_32    1000000              2066 ns/op   0.48 MB/s
BenchmarkRegexpMatchMedium_1K       5000            680173 ns/op   1.51 MB/s
BenchmarkRegexpMatchHard_32        50000             36728 ns/op   0.87 MB/s
BenchmarkRegexpMatchHard_1K         2000           1115221 ns/op   0.92 MB/s
BenchmarkRevcomp              20         103053970 ns/op          24.66 MB/s
BenchmarkTemplate              1        1372074834 ns/op           1.41 MB/s
BenchmarkTimeParse        200000             11836 ns/op
BenchmarkTimeFormat       200000             10199 ns/op
ok      _/home/cubie/go/test/bench/go1  168.097s

If you compare that to my OMAP4 based Pandaboard, Go 1.1 performance is 5 to 10% better. Not bad.

An introduction to cross compilation with Go 1.1

This post is a complement to one I wrote in August of last year, updating it for Go 1.1. Since last year, tools such as goxc have appeared which go beyond a simple shell wrapper to provide a complete build and distribution solution.

Introduction

Go provides excellent support for producing binaries for foreign platforms without having to install Go on the target. This is extremely handy for testing packages that use build tags or where the target platform is not suitable for development.

Support for building a version of Go suitable for cross compilation is built into the Go build scripts; just set the GOOS, GOARCH, and possibly GOARM correctly and invoke ./make.bash in $GOROOT/src. Therefore, what follows is provided simply for convenience.
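For example, building the toolchain and standard library for linux/arm by hand looks something like this (a sketch; GOARM is discussed later in this post, and --no-clean avoids discarding the packages make.bash has already built):

% cd $GOROOT/src
% GOOS=linux GOARCH=arm GOARM=5 ./make.bash --no-clean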

Getting started

1. Install Go from source. The instructions are well documented on the Go website, golang.org/doc/install/source. A summary for those familiar with the process follows.

% hg clone https://code.google.com/p/go
% cd go/src
% ./all.bash

2. Checkout the support scripts from Github, github.com/davecheney/golang-crosscompile

% git clone git://github.com/davecheney/golang-crosscompile.git
% source golang-crosscompile/crosscompile.bash

3. Build Go for all supported platforms

% go-crosscompile-build-all
go-crosscompile-build darwin/386
go-crosscompile-build darwin/amd64
go-crosscompile-build freebsd/386
go-crosscompile-build freebsd/amd64
go-crosscompile-build linux/386
go-crosscompile-build linux/amd64
go-crosscompile-build linux/arm
go-crosscompile-build windows/386
go-crosscompile-build windows/amd64

This will compile the Go runtime and standard library for each platform. You can see these packages if you look in go/pkg.

% ls -1 go/pkg 
darwin_386
darwin_amd64
freebsd_386
freebsd_amd64
linux_386
linux_amd64
linux_arm
obj
tool
windows_386
windows_amd64

Using your cross compilation environment

Sourcing crosscompile.bash provides a go-$GOOS-$GOARCH function for each platform; you can use these as you would the standard go tool. For example, to compile a program to run on linux/arm:

% cd $GOPATH/src/github.com/davecheney/gmx/gmxc
% go-linux-arm build 
% file ./gmxc 
./gmxc: ELF 32-bit LSB executable, ARM, version 1 (SYSV), 
statically linked, not stripped

This file is not executable on the host system (darwin/amd64), but will work on linux/arm.

Some caveats

Cross compiled binaries, not a cross compiled Go installation

This post describes how to produce an environment that will build Go programs for your target platform; it will not, however, build a Go environment for your target. For that, you must build Go directly on the target platform. For most platforms this means installing from source, or using a version of Go provided by your operating system’s packaging system.

No cgo in cross platform builds

It is currently not possible to produce a cgo enabled binary when cross compiling from one operating system to another. This is because packages that use cgo invoke the C compiler directly as part of the build process to compile their C code and produce the C to Go trampoline functions. At the moment the name of the C compiler is hard coded to gcc, which assumes the system default gcc compiler even if a cross compiler is installed.

In Go 1.1 this restriction was reinforced further by making CGO_ENABLED default to 0 (off) when any cross compilation was attempted.
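If you want to confirm this, go env can report the effective value (a sketch; it assumes the go-linux-arm wrapper function passes its arguments through to the go tool, and that linux/arm is not your host platform):

% go-linux-arm env CGO_ENABLED
0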

GOARM flag needed for cross compiling to linux/arm

Because some ARM platforms lack a hardware floating point unit, the GOARM value is used to tell the linker whether to use hardware or software floating point code. Depending on the specifics of the target machine you are building for, you may need to supply this environment variable when building.

% GOARM=5 go-linux-arm build

As of e4b20018f797 you will at least get a nice error telling you which GOARM value to use.

$ ./gmxc 
runtime: this CPU has no floating point hardware, so it cannot 
run this GOARM=7 binary. Recompile using GOARM=5.

By default, Go assumes a hardware floating point unit if no GOARM value is supplied. You can read more about Go on linux/arm on the Go Language Community Wiki.

Introducing profile, super simple profiling for Go programs

This package has been superseded. Please read this blog post for more details.


Introduction

The Go runtime has built in support for several types of profiling that can be used to inspect the performance of your programs. A common way to leverage this support is via the testing package, but if you want to profile a full application it is sometimes complicated to configure the various profiling mechanisms correctly.

I wrote profile to scratch my own itch and create a simple way to profile an existing Go program without having to restructure it as a benchmark.

Installation

profile is go gettable, so installation is as simple as

go get github.com/davecheney/profile

Usage

Enabling profiling in your application is as simple as one line at the top of your main function

import "github.com/davecheney/profile"

func main() {
        defer profile.Start(profile.CPUProfile).Stop()
        ...
}

Options

What to profile is controlled by the *profile.Config value passed to profile.Start. A nil *profile.Config is the same as choosing all the defaults. By default no profiles are enabled.

In this more complicated example a *profile.Config is constructed by hand which enables memory profiling, but disables the shutdown hook.

import "github.com/davecheney/profile"

func main() {
        cfg := profile.Config {
                MemProfile: true,
                NoShutdownHook: true, // do not hook SIGINT
        }
        // p.Stop() must be called before the program exits to  
        // ensure profiling information is written to disk.
        p := profile.Start(&cfg)
        ...
}

Several convenience variables are provided for cpu, memory, and block (contention) profiling.
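For example, switching the first example to block profiling is just a matter of swapping the convenience variable (a sketch; it assumes profile.BlockProfile is exported alongside profile.CPUProfile and profile.MemProfile):

import "github.com/davecheney/profile"

func main() {
        // record a block (contention) profile and write it out on shutdown
        defer profile.Start(profile.BlockProfile).Stop()
        // ... the rest of your program ...
}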

For more complex options, consult the documentation on the profile.Config type. Enabling more than one profile may cause your results to be less reliable as profiling itself is not without overhead.

Example

To show profile in action, I modified cmd/godoc following the instructions in the first example.

% godoc -http=:8080
2013/07/07 15:29:11 profile: cpu profiling enabled, /tmp/profile002803/cpu.pprof

In another window I visited http://localhost:8080 a few times to have some profiling data to record, then stopped godoc.

^C2013/07/07 15:29:33 profile: caught interrupt, stopping profiles
% go tool pprof $(which godoc) /tmp/profile002803/cpu.pprof
Welcome to pprof!  For help, type 'help'.
(pprof) top10
Total: 15 samples
       2  13.3%  13.3%        2  13.3% go/scanner.(*Scanner).next
       2  13.3%  26.7%        2  13.3% path.Clean
       1   6.7%  33.3%        3  20.0% go/scanner.(*Scanner).Scan
       1   6.7%  40.0%        1   6.7% main.hasPathPrefix
       1   6.7%  46.7%        3  20.0% main.mountedFS.translate
       1   6.7%  53.3%        1   6.7% path.Dir
       1   6.7%  60.0%        1   6.7% path/filepath.(*lazybuf).append
       1   6.7%  66.7%        1   6.7% runtime.findfunc
       1   6.7%  73.3%        2  13.3% runtime.makeslice
       1   6.7%  80.0%        2  13.3% runtime.mallocgc

Note: in the example above we’re passing the godoc binary, and the profile produced by running that binary, to go tool pprof. When profiling your own code you must pass your binary and its profile to go tool pprof, otherwise the profile will not make sense.

Licence

profile is available under a BSD licence.

How to write benchmarks in Go

This post continues a series on the testing package I started a few weeks back. You can read the previous article on writing table driven tests here. You can find the code mentioned below in the https://github.com/davecheney/fib repository.

Introduction

The Go testing package contains a benchmarking facility that can be used to examine the performance of your Go code. This post explains how to use the testing package to write a simple benchmark.

You should also review the introductory paragraphs of Profiling Go programs, specifically the section on configuring power management on your machine. For better or worse, modern CPUs rely heavily on active thermal management which can add noise to benchmark results.

Writing a benchmark

We’ll reuse the Fib function from the previous article.

func Fib(n int) int {
        if n < 2 {
                return n
        }
        return Fib(n-1) + Fib(n-2)
}

Benchmarks are placed inside _test.go files and follow the rules of their Test counterparts. In this first example we’re going to benchmark the speed of computing the 10th number in the Fibonacci series.

// from fib_test.go
func BenchmarkFib10(b *testing.B) {
        // run the Fib function b.N times
        for n := 0; n < b.N; n++ {
                Fib(10)
        }
}

Writing a benchmark is very similar to writing a test as they share the infrastructure from the testing package. Some of the key differences are

  • Benchmark functions start with Benchmark not Test.
  • Benchmark functions are run several times by the testing package. The value of b.N will increase each time until the benchmark runner is satisfied with the stability of the benchmark. This has some important ramifications which we’ll investigate later in this article.
  • Each benchmark must execute the code under test b.N times. The for loop in BenchmarkFib10 will be present in every benchmark function.

Running benchmarks

Now that we have a benchmark function defined in the tests for the fib package, we can invoke it with go test -bench=.

% go test -bench=.
PASS
BenchmarkFib10   5000000               509 ns/op
ok      github.com/davecheney/fib       3.084s

Breaking down the text above, we pass the -bench flag to go test, supplying a regular expression matching everything. You must pass a valid regex to -bench; just passing -bench is a syntax error. You can use this property to run a subset of benchmarks.

The first line of the result, PASS, comes from the testing portion of the test driver; asking go test to run your benchmarks does not disable the tests in the package. If you want to skip the tests, you can do so by passing a regex to the -run flag that will not match anything. I usually use

go test -run=XXX -bench=.

The second line is the average run time of the function under test for the final value of b.N iterations. In this case, my laptop can execute Fib(10) in 509 nanoseconds. If there were additional Benchmark functions that matched the -bench filter, they would be listed here.
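As a rough sanity check (my arithmetic, not part of the output above): 5,000,000 iterations at 509 ns each is about 2.5 seconds of measured benchmark time, which fits inside the 3.084s total on the ok line once you allow for the earlier, shorter runs at smaller values of b.N and the usual test overhead.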

Benchmarking various inputs

As the original Fib function is the classic recursive implementation, we’d expect it to exhibit exponential behavior as the input grows. We can explore this by rewriting our benchmark slightly using a pattern that is very common in the Go standard library.

func benchmarkFib(i int, b *testing.B) {
        for n := 0; n < b.N; n++ {
                Fib(i)
        }
}

func BenchmarkFib1(b *testing.B)  { benchmarkFib(1, b) }
func BenchmarkFib2(b *testing.B)  { benchmarkFib(2, b) }
func BenchmarkFib3(b *testing.B)  { benchmarkFib(3, b) }
func BenchmarkFib10(b *testing.B) { benchmarkFib(10, b) }
func BenchmarkFib20(b *testing.B) { benchmarkFib(20, b) }
func BenchmarkFib40(b *testing.B) { benchmarkFib(40, b) }

Making benchmarkFib private avoids the testing driver trying to invoke it directly, which would fail as its signature does not match func(*testing.B). Running this new set of benchmarks gives these results on my machine.

BenchmarkFib1   1000000000               2.84 ns/op
BenchmarkFib2   500000000                7.92 ns/op
BenchmarkFib3   100000000               13.0 ns/op
BenchmarkFib10   5000000               447 ns/op
BenchmarkFib20     50000             55668 ns/op
BenchmarkFib40         2         942888676 ns/op

Apart from confirming the exponential behavior of our simplistic Fib function, there are some other things to observe in this benchmark run.

  • Each benchmark is run for a minimum of 1 second by default. If the second has not elapsed when the Benchmark function returns, the value of b.N is increased in the sequence 1, 2, 5, 10, 20, 50, … and the function run again.
  • The final BenchmarkFib40 only ran two times, with an average of just under a second for each run. As the testing package uses a simple average (total time to run the benchmark function divided by b.N), this result is statistically weak. You can increase the minimum benchmark time using the -benchtime flag to produce a more accurate result.
    % go test -bench=Fib40 -benchtime=20s
    PASS
    BenchmarkFib40        50         944501481 ns/op

Traps for young players

Above I mentioned the for loop is crucial to the operation of the benchmark driver. Here are two examples of a faulty Fib benchmark.

func BenchmarkFibWrong(b *testing.B) {
        for n := 0; n < b.N; n++ {
                Fib(n)
        }
}

func BenchmarkFibWrong2(b *testing.B) {
        Fib(b.N)
}

On my system BenchmarkFibWrong never completes. This is because the run time of the benchmark will increase as b.N grows, never converging on a stable value. BenchmarkFibWrong2 is similarly affected and never completes.

A note on compiler optimisations

Before concluding I wanted to highlight that to be completely accurate, any benchmark should be careful to avoid compiler optimisations eliminating the function under test and artificially lowering the run time of the benchmark.

var result int

func BenchmarkFibComplete(b *testing.B) {
        var r int
        for n := 0; n < b.N; n++ {
                // always record the result of Fib to prevent
                // the compiler eliminating the function call.
                r = Fib(10)
        }
        // always store the result to a package level variable
        // so the compiler cannot eliminate the Benchmark itself.
        result = r
}

Conclusion

The benchmarking facility in Go works well, and is widely accepted as a reliable standard for measuring the performance of Go code. Writing benchmarks in this manner is an excellent way of communicating a performance improvement, or a regression, in a reproducible way.

Stress test your Go packages

This is a short post on stress testing your Go packages. Concurrency or memory correctness errors are more likely to show up at higher concurrency levels (higher values of GOMAXPROCS). I use this script when testing my packages, or when reviewing code that goes into the standard library.

#!/usr/bin/env bash -e

go test -c
# comment above and uncomment below to enable the race builder
# go test -c -race
PKG=$(basename $(pwd))

while true ; do 
        export GOMAXPROCS=$[ 1 + $[ RANDOM % 128 ]]
        ./$PKG.test $@ 2>&1
done

I keep this script in $HOME/bin so usage is

$ cd $SOMEPACKAGE
$ stress.bash
PASS
PASS
PASS

You can pass additional arguments to your test binary on the command line,

stress.bash -test.v -test.run=ThisTestOnly

The goal is to be able to run the stress test for as long as you want without a test failure. Once you achieve that, uncomment go test -c -race and try again.

How to install Go 1.1 on CentOS 5.9

Important: Neither Go 1.0 nor Go 1.1 has ever supported RHEL5 or CentOS5. In fact, I don’t think Go will even compile on RHEL5/CentOS5 after version 1.3. Please do not interpret anything in this article as a statement that Go does support RHEL5 or CentOS5.


Introduction

Go has never supported RedHat 5 or CentOS 5. We’ve been pretty good at getting that message out, but it still catches a few people by surprise. The reason these old releases are not supported is that the Linux kernel that ships with them, a derivative of 2.6.18, does not provide three facilities required by the Go runtime.

These are

  1. Support for the O_CLOEXEC flag passed to open(2). We attempt to work around this in the os.OpenFile function, but not all kernels that lack support for this flag return an error telling us so. The result on RHEL5/CentOS5 systems is that file descriptors can leak into child processes; this isn’t a big problem, but it does cause test failures.
  2. Support for accept4(2). accept4(2) was introduced in kernel 2.6.28 to allow O_CLOEXEC to be set on newly accepted socket file descriptors. In the case that this syscall is not supported, we fall back to the older accept(2) syscall at a small performance hit.
  3. Support for high resolution vDSO clock_gettime(2). vDSO is a way of projecting a small part of the kernel into your process address space. This means you can call certain syscalls (known as vsyscalls) without the cost of a trap into kernel space or a context switch. Go uses clock_gettime(2) via the vDSO in preference to the older gettimeofday(2) syscall as it is both faster and higher precision.

Installing Go from source

As RHEL5/CentOS5 are not supported, there are no binary packages available on the golang.org website. To install Go you will need to use the source tarball; in this case we’re using the Go 1.1.1 release. I’m using a CentOS 5.9 amd64 image running in a VM.

Install prerequisites

The packages required to build Go on RedHat platforms are listed on the Go community wiki.

$ sudo yum install gcc glibc-devel

Download and unpack

We’re going to download the Go 1.1.1 source distribution and unpack it to $HOME/go.

$ curl https://go.googlecode.com/files/go1.1.1.src.tar.gz | tar xz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8833k  100 8833k    0     0   710k      0  0:00:12  0:00:12 --:--:--  974k
$ ls
Desktop  go

Build

$ cd go/src
$ ./make.bash
# Building C bootstrap tool.
cmd/dist

# Building compilers and Go bootstrap tool for host, linux/amd64.
lib9
libbio
...
Installed Go for linux/amd64 in /home/dfc/go
Installed commands in /home/dfc/go/bin

Add go to PATH

$ export PATH=$PATH:$HOME/go/bin
$ go version
go version go1.1.1 linux/amd64

Known issues

As described above, RHEL5/CentOS5 are not supported as their kernel is too old. Here are some of the known issues. As RHEL5/CentOS5 are unsupported, they will not be fixed.

Test failures

You’ll notice that to build Go above I ran make.bash, not the recommended all.bash, in order to skip the tests. Due to the lack of working O_CLOEXEC support, some tests will fail. This is a known issue and will not be fixed.

$ ./run.bash
...
--- FAIL: TestExtraFiles (0.05 seconds)
        exec_test.go:302: Something already leaked - closed fd 3
        exec_test.go:359: Run: exit status 1; stdout "leaked parent file. fd = 10; want 9\n", stderr ""
FAIL
FAIL    os/exec 0.489s

Segfaults and crashes due to missing vDSO support

At some point during the RHEL5 release cycle, support for vDSO vsyscalls was added to RedHat’s 2.6.18 kernel; however, that point appears to differ by point release. For example, on RedHat 5, kernel 2.6.18-238.el5 does not work, whereas 2.6.18-238.19.1.el5 does. Running CentOS 5.9 with kernel 2.6.18-348.el5 does work.

$ ./make.bash
...
cmd/go
./make.bash: line 141:  8269 Segmentation fault      "$GOTOOLDIR"/go_bootstrap clean -i std

In summary, if your Go programs crash or segfault on RHEL5/CentOS5, you should try upgrading to the latest kernel available for your point release. I’ll leave the comments on this article open for a while so people can contribute their known working kernel versions; perhaps I can build a (partial) table of known good configurations.

You don’t need to set GOROOT, really

Introduction

This is a short post to explain why it is not necessary to set $GOROOT when compiling or using Go.

TL;DR

In general [1] it is not necessary to set the $GOROOT environment variable when compiling or using Go 1.0 or later. In fact, setting $GOROOT can lead to hard-to-debug problems if you have multiple versions of Go present on your computer.

You still need to set $GOPATH. Since Go 1.0 setting $GOPATH has been highly recommended, and with the release of Go 1.1, it is considered mandatory.

Why isn’t GOROOT required anymore?

You’re still reading? Excellent. Now for some history.

The history of the GO* environment variables

Go old timers may remember when not only $GOROOT, but also $GOOS and $GOARCH, were required environment variables. These were required because the Makefile based build system relied on lots of includes which used $GOROOT as their base path.

By the time the go tool was introduced, prior to Go 1.0, $GOOS and $GOARCH were optional as the build scripts were able to detect the host’s operating system and cpu architecture. With the release of Go 1.0, and the introduction of the cmd/dist bootstrap build tool, $GOOS and $GOARCH became truly optional. They are now only used when cross compiling.

Go 1.0 also introduced $GOPATH based workspaces. If you’ve read this far, you probably know what a $GOPATH workspace is. But in case you don’t, this is documented on the golang.org website, and in this screencast.
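For reference, a minimal $GOPATH workspace looks something like this (the import path is just an example):

$GOPATH/
        bin/                                  # installed commands
        pkg/linux_amd64/                      # compiled package objects
        src/github.com/you/hello/hello.go     # source code, laid out by import path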

Okay, so I don’t need $GOOS or $GOARCH, but what about $GOROOT?

$GOROOT has always been defined as a pointer to the root of your Go installation. In the old Makefile based build system, it was used as the base path for including other Makefiles, and since Go 1.0 it is used by the go tool to find the compiler (stored in $GOROOT/pkg/tool/$GOOS_$GOARCH) and the standard library (also in $GOROOT/pkg/$GOOS_$GOARCH). If you are a Java user, $GOROOT is similar in effect to $JAVA_HOME.

When you compile Go from source, the value of $GOROOT is automatically discovered [2] (it is one directory up from the all.bash script) and then embedded into the go tool built from that source tree. You can see this when you run go env

% echo $GOROOT

% go env GOROOT
/home/dfc/go

The binary distributions you download from the golang.org website, or install from your operating system’s distribution, also have the correct $GOROOT value embedded into the go tool binary. Here is an example from an Ubuntu 12.04 system which ships with Go 1.0.

% dpkg -l golang-{go,src} | grep ^ii
ii  golang-go        2:1-5        Go programming language compiler
ii  golang-src       2:1-5        Go programming language compiler - source files
% which go
/usr/bin/go
% go env GOROOT
/usr/lib/go

You can see that the go tool is installed in /usr/bin/go and $GOROOT is embedded as /usr/lib/go.

So, why shouldn’t I set $GOROOT anymore?

You should not set $GOROOT because the correct value is already embedded in the go tool.

Setting $GOROOT will override the value stored in the go tool which could lead to the go tool from one version of Go pointing to the compiler and standard library from another version.

There are only two cases where you may have to set the $GOROOT environment variable. These are both described on the installation page of the golang.org website. For completeness I will recap them here:

  • You are a Linux, FreeBSD or OS X user using the zip or tarball binary downloads from the golang.org website. These binaries have a $GOROOT value of /usr/local/go embedded in them, and it is recommended you unpack them into that location. If you choose not to do this, then you must set $GOROOT to the location you chose.
  • You are a Windows user using the zip binary download from the golang.org website. These binaries have a $GOROOT value of C:\Go. If you place Go somewhere else on your system then you must set $GOROOT to the location you chose.

Super nerdy bonus detail

This post has explained how $GOROOT is automatically discovered when compiling from source. I’ve also shown that the build scripts can detect when that value doesn’t match the path that all.bash is invoked from. So, how do operating system distributions set $GOROOT when they normally compile Go in a temporary build directory or chroot? The answer is the $GOROOT_FINAL value, which is used to override the $GOROOT location stored in the go tool.

For example, the Debian/Ubuntu build process will supply a value for $GOROOT_FINAL of /usr/lib/go. This frees it to leave $GOROOT unset, making the build process happy. After the build, the build process will install the go tool in /usr/bin, and the compilers, sources and packages in /usr/lib/go.
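A sketch of what such a packaging build might run (using the Debian/Ubuntu path from above):

% cd go/src
% GOROOT_FINAL=/usr/lib/go ./make.bash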


  1. There are a few cases where it is required if using the binary distributions of Go, which are described in this post.
  2. That is, if you haven’t set $GOROOT. Although the build system will detect when the parent directory of all.bash does not match $GOROOT.

Writing table driven tests in Go

This article is intended as a short introduction to the mechanics and syntax of writing a table driven test in Go. Supporting this article is a small repository, https://github.com/davecheney/fib, which contains all the code mentioned below.

Introduction

I enjoy writing table driven tests in Go. While not unique to the language, table driven tests leverage several of its features, composite literals and anonymous structs among them, to allow you to write related tests in a compact form.

As an example, please consider the case of testing this overused function

package fib

// Fib returns the nth number in the Fibonacci series.
func Fib(n int) int {
        if n < 2 {
                return n
        }
        return Fib(n-1) + Fib(n-2)
}

The table structure

At the heart of all table driven tests is the table itself, which provides the inputs and expected results of the function under test. In most cases the table is a slice of anonymous structs, which allows the table to be written in a compact form.

var fibTests = []struct {
        n        int // input
        expected int // expected result
}{
        {1, 1},
        {2, 1},
        {3, 2},
        {4, 3},
        {5, 5},
        {6, 8},
        {7, 13},
}

If you wished, you could give the struct a name, in which case the table definition would look something like this

type fibTest struct {
        n        int
        expected int
}

var fibTests = []fibTest{
        {1, 1}, {2, 1}, {3, 2}, {4, 3}, {5, 5}, {6, 8}, {7, 13},
}

Hooking it up

Now that we have the table of inputs and results defined, we need to write a driver function to iterate through the inputs and compare the results to their expected value. Rather than one Test function per set of values, we can use the range clause to loop over each test case.

func TestFib(t *testing.T) {
        for _, tt := range fibTests {
                actual := Fib(tt.n)
                if actual != tt.expected {
                        t.Errorf("Fib(%d): expected %d, actual %d", tt.n, tt.expected, actual)
                }
        }
}

In this example we range over all the fibTests defined above, assigning their value in turn to tt. We then call Fib passing in the value of tt.n and compare the result, stored in actual, with the value of tt.expected.

The use of the names actual and expected shows my JUnit heritage; others may prefer names like want and got. You should choose something that works for you and gives a clear meaning in your test code.

The use of t.Errorf instead of t.Fatalf is a personal preference. As Fib is a pure function, it is safe to continue the loop after a failure. I find this generally reduces test whack-a-mole by reporting all the failures at once.

Conclusion

In my introduction I said that table driven tests are one of my favorite parts of the Go language. They allow you to write unit tests in a concise fashion, hopefully leading to greater test coverage at a lower line count. If done correctly, adding additional test cases is as simple as adding a new element to the test table.

This is certainly not the only way that tests could be written in Go, nor the only way to write table driven tests. The Go standard library contains many examples of this form of testing which are worth studying. In particular I suggest the tests for the math and time packages as an excellent starting point.

At the other end of the spectrum is this table driven test of the Juju status command which defines its own language of helpers to populate the table structure. Although this elevates the table driven test to ninja levels, it still contains the same components and concepts, and right at the bottom of the file you’ll find a simple function driving each test.

How Go uses Go to build itself

This post is based on a talk I gave to the Sydney Go users group in mid April 2013 describing the Go build process.


Frequently on the mailing list or in the IRC channel there are requests for documentation on the details of the Go compiler, runtime and internals. Currently the canonical source of documentation about Go’s internals is the source itself, which I encourage everyone to read. Having said that, the Go build process has been stable since the Go 1.0 release, so documenting it here will probably remain relevant for some time.

This post walks through the nine steps of the Go build process, starting with the source and ending with a fully tested Go installation. For simplicity, all paths mentioned are relative to the root of the source checkout, $GOROOT/src.

For background you should also read Installing Go from source on the golang.org website.

Step 1. all.bash

% cd $GOROOT/src
% ./all.bash

The first step is a bit anticlimactic as all.bash just calls two other shell scripts; make.bash and run.bash. If you’re using Windows or Plan 9 the process is the same, but the scripts end in .bat or .rc respectively. For the rest of this post, please substitute the extension appropriate for your operating system.

Step 2. make.bash

. ./make.bash --no-banner

make.bash is sourced from all.bash so that calls to exit will terminate the build process properly. make.bash has three main jobs; the first is to validate that the environment Go is being compiled in is sane. The sanity checks have been built up over the last few years and generally try to avoid building with known broken tools, or in environments where the build will fail.

Step 3. cmd/dist

gcc -O2 -Wall -Werror -ggdb -o cmd/dist/dist -Icmd/dist cmd/dist/*.c

Once the sanity checks are complete, make.bash compiles cmd/dist. cmd/dist replaces the Makefile based system which existed before Go 1 and manages the small amounts of code generation in pkg/runtime. cmd/dist is a C program which allows it to leverage the system C compiler and headers to handle most of the host platform detection issues. cmd/dist always detects your host’s operating system and architecture, $GOHOSTOS and $GOHOSTARCH. These may differ from any value of $GOOS and $GOARCH you may have set if you are cross compiling. In fact, the Go build process is always building a cross compiler, but in most cases the host and target platform are the same. Next, make.bash invokes cmd/dist with the bootstrap argument which compiles the supporting libraries, lib9, libbio and libmach, used by the compiler suite, then the compilers themselves. These tools are also written in C and are compiled by the system C compiler.

echo "# Building compilers and Go bootstrap tool for host, $GOHOSTOS/$GOHOSTARCH."
buildall="-a"
if [ "$1" = "--no-clean" ]; then
 buildall=""
fi
./cmd/dist/dist bootstrap $buildall -v # builds go_bootstrap

Using the compiler suite, cmd/dist then compiles a version of the go tool, go_bootstrap. The go_bootstrap tool is not the full go tool; for example, pkg/net is stubbed out, which avoids a dependency on cgo. The list of directories containing the packages and libraries to be compiled, and their dependencies, is encoded in the cmd/dist tool itself, so great care is taken to avoid introducing new build dependencies for cmd/go.

Step 4. go_bootstrap

Now that go_bootstrap is built, the final stage of make.bash is to use go_bootstrap to compile the complete Go standard library, including a replacement version of the full go tool.

echo "# Building packages and commands for $GOOS/$GOARCH."
"$GOTOOLDIR"/go_bootstrap install -gcflags "$GO_GCFLAGS" \
    -ldflags "$GO_LDFLAGS" -v std

Step 5. run.bash

Now that make.bash is complete, execution falls back to all.bash, which invokes run.bash. run.bash’s job is to compile and test the standard library, the runtime, and the language test suite.

bash run.bash --no-rebuild

The --no-rebuild flag is used because make.bash and run.bash can both invoke go install -a std, so to avoid duplicating the previous effort, --no-rebuild skips the second go install.

# allow all.bash to avoid double-build of everything
rebuild=true
if [ "$1" = "--no-rebuild" ]; then
 shift
else
 echo '# Building packages and commands.'
 time go install -a -v std
 echo
fi

Step 6. go test -a std

echo '# Testing packages.'
time go test std -short -timeout=$(expr 120 \* $timeout_scale)s
echo

The next job of run.bash is to run the unit tests for all the packages in the standard library, which are written using the testing package. Because code in $GOPATH and $GOROOT live in the same namespace, we cannot use go test ... as this would also test every package in $GOPATH, so an alias, std, was created to address the packages in the standard library. Because some tests take a long time, or consume a lot of memory, those tests exclude themselves when the -short flag is set.

Step 7. runtime and cgo tests

The next section of run.bash runs a set of tests for platforms that support cgo, runs a few benchmarks, and compiles miscellaneous programs that ship with the Go distribution. Over time this list of miscellaneous programs has grown as it was found that when they were not included in the build process, they would inevitably break silently.

Step 8. go run test

(xcd ../test
unset GOMAXPROCS
time go run run.go
) || exit $?

The penultimate stage of run.bash invokes the compiler and runtime tests in the test folder directly under $GOROOT. These are tests of the low level details of the compiler and runtime itself. While the tests exercise the specification of the language, the test/bugs and test/fixedbugs sub directories capture unique tests for issues which have been found and fixed. The test driver for all these tests is $GOROOT/test/run.go which is a small Go program that runs each .go file inside the test directory. Some .go files contain directives on the first line which instruct run.go to expect, for example, the program to fail, or to emit a certain output sequence.
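As a sketch of what one of these files looks like (the errorcheck directive and ERROR comment syntax are those used by run.go; the test itself is made up for illustration):

// errorcheck

// Verify that the compiler rejects assigning a string to an int.
// Does not compile.

package main

var x int = "hello" // ERROR "cannot use"

func main() {}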

Step 9. go tool api

echo '# Checking API compatibility.'
go tool api -c $GOROOT/api/go1.txt,$GOROOT/api/go1.1.txt \
    -next $GOROOT/api/next.txt -except $GOROOT/api/except.txt

The final step of run.bash is to invoke the api tool. The api tool’s job is to enforce the Go 1 contract: the exported symbols, constants, functions, variables, types and methods that made up the Go 1 API when it shipped in 2012. For Go 1 they are spelled out in api/go1.txt, and for Go 1.1 in api/go1.1.txt. An additional file, api/next.txt, identifies the symbols that make up the additions to the standard library and runtime since Go 1.1. Once Go 1.2 ships, this file will become the contract for Go 1.2, and there will be a new next.txt. There is also a small file, except.txt, which contains exceptions to the Go 1 contract which have been approved. Additions to that file are not expected to be taken lightly.

Additional tips and tricks

You’ve probably figured out that make.bash is useful for building Go without running the tests and, likewise, run.bash is useful for building and testing the Go runtime. This distinction is also useful: the former can be used when cross compiling Go, and the latter when you are working on the standard library.
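In practice that means something like this (a sketch of typical usage):

% cd $GOROOT/src
% ./make.bash          # build the toolchain and standard library, skip the tests
% ./run.bash           # build (again) and test the standard library and runtime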

Update: Thanks to Russ Cox and Andrew Gerrand for their feedback and suggestions.