
Microblog: TestMain can cause one to question reality

This morning a one line change had several of us tearing up the fabric of reality trying to understand why a failing test wasn’t failing, or, in fact, being run at all. Increasingly frantic efforts to upgrade/downgrade Go, run the tests on another machine, run the tests in CI, all served to only unnerve us further.

Can you spot the bug?

package gosh_darn_important_test

import (
    "testing"
    "go.uber.org/goleak"
)

func TestMain(m *testing.M) {
//	goleak.VerifyTestMain(m)
}

...
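For the record, the bug is the commented-out line. Defining TestMain replaces the testing package's default entry point, and it becomes TestMain's job to call m.Run. goleak.VerifyTestMain does that internally (before checking for leaked goroutines), so with it commented out m.Run is never called, no tests run, and the binary exits successfully. A minimal sketch of a TestMain that actually runs the tests (os.Exit is the classic idiom; since Go 1.15 simply returning after m.Run also works):

func TestMain(m *testing.M) {
        os.Exit(m.Run()) // m.Run executes the package's tests; nobody else will
}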

go test -v streaming output

The testing package is one of my favourite packages in the Go standard library, not just because of its low noise approach to unit testing, but because, over the lifetime of Go, it has received a steady stream of quality of life improvements driven by real world usage.

The most recent example of this is that, in Go 1.14, go test -v will stream t.Log output as it happens, rather than hoarding it until the end of the test run. Here's an example:

package main

import (
	"fmt"
	"testing"
	"time"
)

func TestLogStreaming(t *testing.T) {
	for i := 0; i < 5; i++ {
		time.Sleep(300 * time.Millisecond)
		fmt.Println("fmt.Println:", i)
		t.Log("t.Log:", i)
	}
}

Note: calling fmt.Println inside a test is generally considered a no-no, as it bypasses the testing package's output buffering irrespective of the -v flag. However, for this example, it's necessary to demonstrate the streaming t.Log change.

% go1.13 test -v tlog_test.go
=== RUN   TestLogStreaming
fmt.Println: 0
fmt.Println: 1
fmt.Println: 2
fmt.Println: 3
fmt.Println: 4
--- PASS: TestLogStreaming (1.52s)
    tlog_test.go:13: t.Log: 0
    tlog_test.go:13: t.Log: 1
    tlog_test.go:13: t.Log: 2
    tlog_test.go:13: t.Log: 3
    tlog_test.go:13: t.Log: 4
PASS
ok      command-line-arguments  1.971s

Under Go 1.13 and earlier, the fmt.Println lines are output immediately, while the t.Log lines are buffered and printed after the test completes.

% go1.14 test -v tlog_test.go
=== RUN   TestLogStreaming
fmt.Println: 0
    TestLogStreaming: tlog_test.go:13: t.Log: 0
fmt.Println: 1
    TestLogStreaming: tlog_test.go:13: t.Log: 1
fmt.Println: 2
    TestLogStreaming: tlog_test.go:13: t.Log: 2
fmt.Println: 3
    TestLogStreaming: tlog_test.go:13: t.Log: 3
fmt.Println: 4
    TestLogStreaming: tlog_test.go:13: t.Log: 4
--- PASS: TestLogStreaming (1.51s)
PASS
ok      command-line-arguments  1.809s

Under Go 1.14 the fmt.Println and t.Log lines are interleaved, rather than waiting for the test to complete, demonstrating that test output is streamed when go test -v is used.

This is a great quality of life improvement for integration style tests that often retry for long periods when the test is failing. Streaming t.Log output will help Gophers debug those test failures without having to wait until the entire test times out to receive their output.

Dynamically scoped variables in Go

This is a thought experiment in API design. It starts with the classic Go unit testing idiom:

func TestOpenFile(t *testing.T) {
        f, err := os.Open("notfound")
        if err != nil {
                t.Fatal(err)
        }

        // ...
}

What’s the problem with this code? The assertion. if err != nil { ... } is repetitive, and in the case where multiple conditions need to be checked, somewhat error prone if the author of the test uses t.Error rather than t.Fatal, e.g.:

        f, err := os.Open("notfound")
        if err != nil {
                t.Error(err)
        }
        f.Close() // boom!

What’s the solution? DRY it up, of course, by moving the repetitive assertion logic to a helper:

func TestOpenFile(t *testing.T) {
        f, err := os.Open("notfound")
        check(t, err)

        // ...
}
 
func check(t *testing.T, err error) {
        t.Helper()
        if err != nil {
                t.Fatal(err)
        }
}

Using the check helper the code is a little cleaner, and clearer: check the error, and hopefully the indecision between t.Error and t.Fatal has been resolved. The downside of abstracting the assertion to a helper function is that now you need to pass a testing.T into each and every invocation. Worse, you need to pass a *testing.T to everything that needs to call check, transitively, just in case.

This is ok, I guess, but I will make the observation that the t variable is only needed when the assertion fails. Even in a testing scenario, most of the time, most of the tests pass, so reading, and writing, all these t’s is a constant overhead for the relatively rare occasion that a test fails.

What about if we did something like this instead?

func TestOpenFile(t *testing.T) {
        f, err := os.Open("notfound")
        check(err)
 
        // ...
}
 
func check(err error) {
        if err != nil {
                panic(err.Error())
        }
}

Yeah, that’ll work, but it has a few problems:

% go test
--- FAIL: TestOpenFile (0.00s)
panic: open notfound: no such file or directory [recovered]
        panic: open notfound: no such file or directory

goroutine 22 [running]:
testing.tRunner.func1(0xc0000b4400)
        /Users/dfc/go/src/testing/testing.go:874 +0x3a3
panic(0x111b040, 0xc0000866f0)
        /Users/dfc/go/src/runtime/panic.go:679 +0x1b2
github.com/pkg/expect_test.check(...)
        /Users/dfc/src/github.com/pkg/expect/expect_test.go:18
github.com/pkg/expect_test.TestOpenFile(0xc0000b4400)
        /Users/dfc/src/github.com/pkg/expect/expect_test.go:10 +0xa1
testing.tRunner(0xc0000b4400, 0x115ac90)
        /Users/dfc/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
        /Users/dfc/go/src/testing/testing.go:960 +0x350
exit status 2

Let’s start with the good; we didn’t have to pass a testing.T every place we call check, the test fails immediately, and we get a nice message in the panic — albeit twice. But where the assertion failed is hard to see. It occurred on expect_test.go:11 but you’d be forgiven for not knowing that.

So panic isn’t really a good solution, but there’s something in this stack trace that is — can you see it? Here’s a hint, github.com/pkg/expect_test.TestOpenFile(0xc0000b4400).

TestOpenFile has a t value, it was passed to it by tRunner, so there’s a testing.T in memory at address 0xc0000b4400. What if we could get access to that t inside check? Then we could use it to call t.Helper and t.Fatal. Is that possible?

Dynamic scoping

What we want is to be able to access a variable whose declaration is neither global nor local to the function, but somewhere higher in the call stack. This is called dynamic scoping. Go doesn’t support dynamic scoping, but it turns out that, for restricted cases, we can fake it. I’ll cut to the chase:

// getT returns the address of the testing.T passed to testing.tRunner
// which called the function which called getT. If testing.tRunner cannot
// be located in the stack, say if getT is not called from the main test
// goroutine, getT returns nil.
func getT() *testing.T {
        var buf [8192]byte
        n := runtime.Stack(buf[:], false)
        sc := bufio.NewScanner(bytes.NewReader(buf[:n]))
        for sc.Scan() {
                var p uintptr
                n, _ := fmt.Sscanf(sc.Text(), "testing.tRunner(%v", &p)
                if n != 1 {
                        continue
                }
                return (*testing.T)(unsafe.Pointer(p))
        }
        return nil
}

We know that each Test is called by the testing package in its own goroutine (see the stack trace above). The testing package launches the test via a function called tRunner which takes a *testing.T and a func(*testing.T) to invoke. Thus we grab a stack trace of the current goroutine, scan through it for the line beginning with testing.tRunner — which can only be the testing package as tRunner is a private function — and parse the address of the first parameter, which is a pointer to a testing.T. With a little unsafe we convert the raw pointer back to a *testing.T and we’re done.

If the search fails then it is likely that getT wasn’t called from a Test. This is actually ok because the reason we needed the *testing.T was to call t.Fatal and the testing package already requires that t.Fatal be called from the main test goroutine.
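
For illustration, here is how a check helper might be built on top of getT; the name and the nil handling are my own, not from any published package:

func check(err error) {
        if err != nil {
                t := getT()
                if t == nil {
                        panic("check called outside the main test goroutine")
                }
                t.Helper()
                t.Fatal(err)
        }
}

Dressed up in a package, the same idea gives us: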

import "github.com/pkg/expect"

func TestOpenFile(t *testing.T) {
        f, err := os.Open("notfound")
        expect.Nil(err)
 
        // ...
}

Putting it all together, we’ve eliminated the assertion boilerplate and possibly made the expectation of the test a little clearer to read: after opening the file, err is expected to be nil.

Is this fine?

At this point you should be asking, is this fine? And the answer is, no, this is not fine. You should be screaming internally at this point. But it’s probably worth introspecting those feelings of revulsion.

Apart from the inherent fragility of scrabbling around in a goroutine’s call stack, there are some serious design issues:

  1. expect.Nil’s behaviour now depends on who called it. Provided with the same arguments it may behave differently depending on where it appears in the call stack; this is unexpected.
  2. Taken to the extreme, dynamic scoping effectively brings into the scope of a single function all the variables passed into any function that preceded it. It is a side channel for passing data in to and out of functions that is not explicitly documented in the function declaration.

Ironically these are precisely the critiques I have of context.Context. I’ll leave it to you to decide if they are justified.

A final word

This is a bad idea, no argument there. This is not a pattern you should ever use in production code. But, this isn’t production code, it’s a test, and perhaps there are different rules that apply to test code. After all, we use mocks, and stubs, and monkey patching, and type assertions, and reflection, and helper functions, and build flags, and global variables, all so we can test our code effectively. None of those, uh, hacks will ever show up in the production code path, so is it really the end of the world?

If you’ve read this far perhaps you’ll agree with me that as unconventional as this approach is, not having to pass a *testing.T into every function that could possibly need to assert something transitively, makes for clearer test code.

So maybe, in this case, the ends do justify the means.


If you’re interested, I’ve put together a small assertion library using this pattern. Caveat emptor.

Why bother writing tests at all?

In previous posts and presentations I talked about how to test, and when to test. To conclude this series, I’m going to ask the question: why test at all?

Even if you don’t, someone will test your software

I’m sure no-one reading this post thinks that software should be delivered without being tested first. Even if that were true, your customers are going to test it, or at least use it. If nothing else, it would be good to discover any issues with the code before your customers do. If not for the reputation of your company, at least for your professional pride.

So, if we agree that software should be tested, the question becomes: who should do that testing?

The majority of testing should be performed by development teams

I argue that the majority of the testing should be done by development groups. Moreover, testing should be automated, and thus the majority of these tests should be unit style tests.

To be clear, I am not saying you shouldn’t write integration, functional, or end to end tests. I’m also not saying that you shouldn’t have a QA group, or integration test engineers. However, at a recent software conference, in a room of over 1,000 engineers, nobody raised their hand when I asked if they considered themselves in a pure quality assurance role.

You might argue that the audience was self selecting, that QA engineers did not feel a software conference was relevant, or welcoming, to them. However, I think this proves my point: the days of one developer to one test engineer are gone and not coming back.

If development teams aren’t writing the majority of tests, who is?

Manual testing should not be the majority of your testing because manual testing is O(n)

Thus, if individual contributors are expected to test the software they write, why do we need to automate it? Why is a manual testing plan not good enough?

Manual testing of software or manual verification of a defect is not sufficient because it does not scale. As the number of manual tests grows, engineers are tempted to skip them or only execute the scenarios they think could be affected. Manual testing is expensive in terms of time, thus dollars, and it is boring. 99.9% of the tests that passed last time are expected to pass again. Manual testing is looking for a needle in a haystack, except you don’t stop when you find the first needle.

This means that your first response when given a bug to fix or a feature to implement should be to write a failing test. This doesn’t need to be a unit test, but it should be an automated test. Once you’ve fixed the bug, or added the feature, you now have the test case to prove it worked, and you can check them in together.

Tests are the critical component that ensure you can always ship your master branch

As a development team, you are judged on your ability to deliver working software to the business. No, seriously, the business couldn’t care less about OOP vs FP, CI/CD, table tennis or limited run La Croix.

Your superpower is that, at any time, anyone on the team should be confident that the master branch of your code is shippable. This means that at any time they can deliver a release of your software to the business, and the business can recoup its investment in your development R&D.

I cannot emphasise this enough. If you want the non technical parts of the business to believe you are heroes, you must never create a situation where you say “well, we can’t release right now because we’re in the middle of an important refactoring. It’ll be a few weeks. We hope.”

Again, I’m not saying you cannot refactor, but at every stage your product must be shippable. Your tests have to pass. It may not have all the desired features, but the features that are there should work as described on the tin.

Tests lock in behaviour

Your tests are the contract about what your software does and does not do. Unit tests should lock in the behaviour of the package’s API. Integration tests do the same for complex interactions. Tests describe, in code, what the program promises to do.

If there is a unit test for each input permutation, you have defined the contract for what the code will do in code, not documentation. This is a contract anyone on your team can assert by simply running the tests. At any stage you know with a high degree of confidence that the behaviour people relied on before your change continues to function after your change.
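
As a sketch of what locking in behaviour looks like in practice (Parse here is a hypothetical function, not from any particular package):

func TestParseEmptyInput(t *testing.T) {
        // This test is the contract: empty input must be rejected. Anyone
        // who refactors Parse is held to that promise by this test.
        _, err := Parse("")
        if err == nil {
                t.Fatal("expected an error for empty input, got nil")
        }
}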

Tests give you confidence to change someone else’s code

Lastly, and this is the biggest one for programmers working on a piece of code that has been through many hands: tests give you the confidence to make changes.

Even though we’ve never met, something I know about you, the reader, is you will eventually leave your current employer. Maybe you’ll be moving on to a new role, or perhaps a promotion, perhaps you’ll move cities, or follow your partner overseas. Whatever the reason, the succession of the maintenance of programs you write is key.

If people cannot maintain our code then, as you and I move from job to job, we’ll leave behind programs which cannot be maintained. This goes beyond advocacy for a language or tool. Programs which cannot be changed, programs which are too hard to onboard new developers onto, or programs which feel like a career digression to work on, will reach only one end state: they are a dead end. They represent a balance sheet loss for the business. They will be replaced.

If you worry about who will maintain your code after you’re gone, write good tests.

Prefer table driven tests

I’m a big fan of testing, specifically unit testing and TDD (done correctly, of course). A practice that has grown around Go projects is the idea of a table driven test. This post explores the how and why of writing a table driven test.

Let’s say we have a function that splits strings:

// Split slices s into all substrings separated by sep and
// returns a slice of the substrings between those separators.
func Split(s, sep string) []string {
    var result []string
    i := strings.Index(s, sep)
    for i > -1 {
        result = append(result, s[:i])
        s = s[i+len(sep):]
        i = strings.Index(s, sep)
    }
    return append(result, s)
}

In Go, unit tests are just regular Go functions (with a few rules) so we write a unit test for this function starting with a file in the same directory, with the same package name, split.

package split

import (
    "reflect"
    "testing"
)

func TestSplit(t *testing.T) {
    got := Split("a/b/c", "/")
    want := []string{"a", "b", "c"}
    if !reflect.DeepEqual(want, got) {
        t.Fatalf("expected: %v, got: %v", want, got)
    }
}

Tests are just regular Go functions with a few rules:

  1. The name of the test function must start with Test.
  2. The test function must take one argument of type *testing.T. A *testing.T is a type injected by the testing package itself, to provide ways to print, skip, and fail the test.

In our test we call Split with some inputs, then compare the result to what we expected.

Code coverage

The next question is, what is the coverage of this package? Luckily the go tool has built in statement coverage; we can invoke it like this:

% go test -coverprofile=c.out
PASS
coverage: 100.0% of statements
ok      split   0.010s

Which tells us we have 100% statement coverage, which isn’t really surprising; there’s only one branch in this code.

If we want to dig in to the coverage report the go tool has several options to print the coverage report. We can use go tool cover -func to break down the coverage per function:

% go tool cover -func=c.out
split/split.go:8: Split 100.0%
total: (statements) 100.0%

Which isn’t that exciting as we only have one function in this package, but I’m sure you’ll find more exciting packages to test.

Spray some .bashrc on that

This pair of commands is so useful for me that I have a shell function which runs the test coverage and the report in one command:

cover () {
    local t=$(mktemp -t cover)
    go test $COVERFLAGS -coverprofile=$t $@ \
        && go tool cover -func=$t \
        && unlink $t
}
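
With that in your .bashrc, running something like cover ./... from your project (any extra go test flags can go in $COVERFLAGS) produces the test run and the per function coverage report in one step.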

Going beyond 100% coverage

So, we wrote one test case and got 100% coverage, but this isn’t really the end of the story. We have good coverage, but we probably need to test some of the boundary conditions. For example, what happens if we try to split on a comma?

func TestSplitWrongSep(t *testing.T) {
    got := Split("a/b/c", ",")
    want := []string{"a/b/c"}
    if !reflect.DeepEqual(want, got) {
        t.Fatalf("expected: %v, got: %v", want, got)
    }
}

Or, what happens if there are no separators in the source string?

func TestSplitNoSep(t *testing.T) {
    got := Split("abc", "/")
    want := []string{"abc"}
    if !reflect.DeepEqual(want, got) {
        t.Fatalf("expected: %v, got: %v", want, got)
    }
}

We’re starting to build a set of test cases that exercise boundary conditions. This is good.

Introducing table driven tests

However, there is a lot of duplication in our tests. For each test case only the input, the expected output, and the name of the test case change. Everything else is boilerplate. What we’d like is to set up all the inputs and expected outputs and feed them to a single test harness. This is a great time to introduce table driven testing.

func TestSplit(t *testing.T) {
    type test struct {
        input string
        sep   string
        want  []string
    }

    tests := []test{
        {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        {input: "abc", sep: "/", want: []string{"abc"}},
    }

    for _, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("expected: %v, got: %v", tc.want, got)
        }
    }
}

We declare a structure to hold our test inputs and expected outputs. This is our table. The tests structure is usually a local declaration because we want to reuse this name for other tests in this package.

In fact, we don’t even need to give the type a name, we can use an anonymous struct literal to reduce the boilerplate like this:

func TestSplit(t *testing.T) {
    tests := []struct {
        input string
        sep   string
        want  []string
    }{
        {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        {input: "abc", sep: "/", want: []string{"abc"}},
    }

    for _, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("expected: %v, got: %v", tc.want, got)
        }
    }
}

Now, adding a new test is a straightforward matter: simply add another line to the tests structure. For example, what will happen if our input string has a trailing separator?

{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
{input: "abc", sep: "/", want: []string{"abc"}},
{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, // trailing sep

But, when we run go test, we get

% go test
--- FAIL: TestSplit (0.00s)
    split_test.go:24: expected: [a b c], got: [a b c ]

Putting aside the test failure, there are a few problems to talk about.

The first is by rewriting each test from a function to a row in a table we’ve lost the name of the failing test. We added a comment in the test file to call out this case, but we don’t have access to that comment in the go test output.

There are a few ways to resolve this. You’ll see a mix of styles in use in Go code bases because the table testing idiom is evolving as people continue to experiment with the form.

Enumerating test cases

As tests are stored in a slice we can print out the index of the test case in the failure message:

func TestSplit(t *testing.T) {
    tests := []struct {
        input string
        sep   string
        want  []string
    }{
        {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        {input: "abc", sep: "/", want: []string{"abc"}},
        {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for i, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("test %d: expected: %v, got: %v", i+1, tc.want, got)
        }
    }
}

Now when we run go test we get this

% go test
--- FAIL: TestSplit (0.00s)
    split_test.go:24: test 4: expected: [a b c], got: [a b c ]

Which is a little better. Now we know that the fourth test is failing, although we have to do a little bit of fudging because slice indexing (and range iteration) is zero based. This requires consistency across your test cases; if some use zero based reporting and others use one based, it’s going to be confusing. And, if the list of test cases is long, it could be difficult to count braces to figure out exactly which fixture constitutes test case number four.

Give your test cases names

Another common pattern is to include a name field in the test fixture.

func TestSplit(t *testing.T) {
    tests := []struct {
        name  string
        input string
        sep   string
        want  []string
    }{
        {name: "simple", input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        {name: "wrong sep", input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        {name: "no sep", input: "abc", sep: "/", want: []string{"abc"}},
        {name: "trailing sep", input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for _, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("%s: expected: %v, got: %v", tc.name, tc.want, got)
        }
    }
}

Now when the test fails we have a descriptive name for what the test was doing. We no longer have to try to figure it out from the output; we also now have a string we can search on.

% go test
--- FAIL: TestSplit (0.00s)
    split_test.go:25: trailing sep: expected: [a b c], got: [a b c ]

We can DRY this up even more using a map literal syntax:

func TestSplit(t *testing.T) {
    tests := map[string]struct {
        input string
        sep   string
        want  []string
    }{
        "simple":       {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        "wrong sep":    {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        "no sep":       {input: "abc", sep: "/", want: []string{"abc"}},
        "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for name, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("%s: expected: %v, got: %v", name, tc.want, got)
        }
    }
}

Using a map literal syntax we define our test cases not as a slice of structs, but as map of test names to test fixtures. There’s also a side benefit of using a map that is going to potentially improve the utility of our tests.

Map iteration order is undefined. This means each time we run go test, our tests are going to be potentially run in a different order.

This is super useful for spotting conditions where tests pass when run in statement order, but not otherwise. If you find that happens you probably have some global state that is being mutated by one test, with subsequent tests depending on that modification.
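
Here is a contrived sketch of the kind of coupling a randomised order flushes out; counter is package level state shared between the cases, so the second case only passes when the first happened to run before it:

var counter int // package level state, mutated by every case

func TestCounter(t *testing.T) {
    tests := map[string]struct {
        delta int
        want  int
    }{
        "first increment":  {delta: 1, want: 1},
        "second increment": {delta: 1, want: 2}, // assumes "first increment" already ran
    }

    for name, tc := range tests {
        counter += tc.delta
        if counter != tc.want {
            t.Fatalf("%s: counter = %d, want %d", name, counter, tc.want)
        }
    }
}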

Introducing sub tests

Before we fix the failing test there are a few other issues to address in our table driven test harness.

The first is we’re calling t.Fatalf when one of the test cases fails. This means after the first failing test case we stop testing the other cases. Because test cases are run in an undefined order, if there is a test failure, it would be nice to know if it was the only failure or just the first.

The testing package would do this for us if we went to the effort to write out each test case as its own function, but that’s quite verbose. The good news is that since Go 1.7 a new feature lets us do this easily for table driven tests. They’re called sub tests.

func TestSplit(t *testing.T) {
    tests := map[string]struct {
        input string
        sep   string
        want  []string
    }{
        "simple":       {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        "wrong sep":    {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        "no sep":       {input: "abc", sep: "/", want: []string{"abc"}},
        "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for name, tc := range tests {
        t.Run(name, func(t *testing.T) {
            got := Split(tc.input, tc.sep)
            if !reflect.DeepEqual(tc.want, got) {
                t.Fatalf("expected: %v, got: %v", tc.want, got)
            }
        })
    }
}

As each sub test now has a name we get that name automatically printed out in any test runs.

% go test
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: [a b c], got: [a b c ]

Each subtest is its own anonymous function, therefore we can use t.Fatalf, t.Skipf, and all the other testing.T helpers, while retaining the compactness of a table driven test.

Individual sub test cases can be executed directly

Because sub tests have a name, you can run a selection of sub tests by name using the go test -run flag.

% go test -run=.*/trailing -v
=== RUN   TestSplit
=== RUN   TestSplit/trailing_sep
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: [a b c], got: [a b c ]

Comparing what we got with what we wanted

Now we’re ready to fix the test case. Let’s look at the error.

--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: [a b c], got: [a b c ]

Can you spot the problem? Clearly the slices are different, that’s what reflect.DeepEqual is upset about. But spotting the actual difference isn’t easy; you have to spot that extra space after c. This might look easy in this simple example, but it is anything but when you’re comparing two complicated, deeply nested gRPC structures.

We can improve the output if we switch to the %#v syntax to view the value as a Go(ish) declaration:

got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
    t.Fatalf("expected: %#v, got: %#v", tc.want, got)
}

Now when we run our test it’s clear that the problem is there is an extra blank element in the slice.

% go test
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: []string{"a", "b", "c"}, got: []string{"a", "b", "c", ""}

But before we go to fix our test failure I want to talk a little bit more about choosing the right way to present test failures. Our Split function is simple, it takes a primitive string and returns a slice of strings, but what if it worked with structs, or worse, pointers to structs?

Here is an example where %#v does not work as well:

package main

import "fmt"

func main() {
    type T struct {
        I int
    }
    x := []*T{{1}, {2}, {3}}
    y := []*T{{1}, {2}, {4}}
    fmt.Printf("%v %v\n", x, y)
    fmt.Printf("%#v %#v\n", x, y)
}

The first fmt.Printf prints the unhelpful, but expected, slice of addresses: [0xc000096000 0xc000096008 0xc000096010] [0xc000096018 0xc000096020 0xc000096028]. However our %#v version doesn’t fare any better, printing a slice of addresses cast to *main.T: []*main.T{(*main.T)(0xc000096000), (*main.T)(0xc000096008), (*main.T)(0xc000096010)} []*main.T{(*main.T)(0xc000096018), (*main.T)(0xc000096020), (*main.T)(0xc000096028)}

Because of the limitations in using any fmt.Printf verb, I want to introduce the go-cmp library from Google.

The goal of the cmp library is specifically to compare two values. This is similar to reflect.DeepEqual, but it has more capabilities. Using the cmp package you can, of course, write:

package main

import (
    "fmt"

    "github.com/google/go-cmp/cmp"
)

func main() {
    type T struct {
        I int
    }
    x := []*T{{1}, {2}, {3}}
    y := []*T{{1}, {2}, {4}}
    fmt.Println(cmp.Equal(x, y)) // false
}

But far more useful for us with our test function is the cmp.Diff function which will produce a textual description of what is different between the two values, recursively.

func main() {
    type T struct {
        I int
    }
    x := []*T{{1}, {2}, {3}}
    y := []*T{{1}, {2}, {4}}
    diff := cmp.Diff(x, y)
    fmt.Print(diff)
}

Which instead produces:

% go run
{[]*main.T}[2].I:
-: 3
+: 4

Telling us that at element 2 of the slice of Ts the I field was expected to be 3, but was actually 4.

Putting this all together we have our table driven go-cmp test:

func TestSplit(t *testing.T) {
    tests := map[string]struct {
        input string
        sep   string
        want  []string
    }{
        "simple":       {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        "wrong sep":    {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        "no sep":       {input: "abc", sep: "/", want: []string{"abc"}},
        "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for name, tc := range tests {
        t.Run(name, func(t *testing.T) {
            got := Split(tc.input, tc.sep)
            diff := cmp.Diff(tc.want, got)
            if diff != "" {
                t.Fatal(diff)
            }
        })
    }
}

Running this we get

% go test
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:27: {[]string}[?->3]:
            -: <non-existent>
            +: ""
FAIL
exit status 1
FAIL    split   0.006s

Using cmp.Diff our test harness isn’t just telling us that what we got and what we wanted were different. Our test is telling us that the slices are different lengths; the third index in the fixture shouldn’t exist, but the actual output contains an empty string, "". From here, fixing the test failure is straightforward.
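
What the fix looks like depends on which behaviour we want. If Split is supposed to mirror strings.Split, then the fixture is what’s wrong, and the want should be []string{"a", "b", "c", ""}. If instead we decide a trailing separator should be swallowed, a sketch of that version of Split might be:

func Split(s, sep string) []string {
    var result []string
    i := strings.Index(s, sep)
    for i > -1 {
        result = append(result, s[:i])
        s = s[i+len(sep):]
        i = strings.Index(s, sep)
    }
    if s == "" {
        // a trailing separator leaves nothing after it; don't emit an
        // empty final element
        return result
    }
    return append(result, s)
}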

Internets of Interest #7: Ian Cooper on Test Driven Development

As the tech lead on a non-SaaS product I spend a lot of my time worrying about testing. Specifically, we have tests that cover code, but what is covering the tests? Tests are important to give you certainty that what your product says on the tin is what it will do when people take it home and unwrap it, but what’s backstopping the tests? Testing lets you refactor with impunity, but what if you want to refactor your tests?

This presentation by Ian Cooper takes a little while to get going but is worth persisting with. Cooper’s observation that the unit of the unit test is not a type, or a class, but the API (in Go terms, the public API of a package) was revelatory for me.

Bonus: Michael Feathers’ YOW! 2016 presentation, Testing Patience.

Test fixtures in Go

This is a quick post to describe how you can use test fixtures, data files on disk, with the Go testing package. Using fixtures with the Go testing package is quite straight forward because of two convenience features built into the go tool.

First, when you run go test, for each package in scope, the test binary will be executed with its working directory set to the source directory of the package under test. Consider this test in the example package:

package example

import (
    "os"
    "testing"
)

func TestWorkingDirectory(t *testing.T) {
    wd, _ := os.Getwd()
    t.Log(wd)
}

Running this from a random directory, and remembering that go test takes an import path relative to your $GOPATH, results in:

% pwd
/tmp
% go test -v github.com/davecheney/example
=== RUN   TestWorkingDirectory
--- PASS: TestWorkingDirectory (0.00s)
    example_test.go:10: /Users/dfc/src/github.com/davecheney/example
PASS
ok      github.com/davecheney/example   0.013s

Second, the Go tool will ignore any directory in your $GOPATH that starts with a period, an underscore, or matches the word testdata.

Putting this together, locating a fixture from your test code is as simple as

f, err := os.Open("testdata/somefixture.json")

(technically this code should use filepath.Join, but in these simple cases Windows copes fine with the forward slash; there’s a fuller sketch after the examples below). Here are some random examples from the standard library:

  1. debug/elf
  2. net/http
  3. image
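
Putting both conveniences together, here is that fuller sketch: a complete fixture loading test (the fixture name is made up; os.ReadFile needs Go 1.16 or later, use ioutil.ReadFile on anything older):

func TestParseFixture(t *testing.T) {
    // The test binary runs with the package source directory as its
    // working directory, so this relative path just works.
    data, err := os.ReadFile(filepath.Join("testdata", "somefixture.json"))
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("loaded %d bytes of fixture data", len(data))
    // ... assert on data
}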

Happy testing!

The value of TDD

What is the value of test driven development?

Is the value writing tests at the same time as you write the code? Sure, I like that property. It means that at any time you’re one control-Z away from your tests passing; either revert your test change, or fix the code so the tests pass. The nice property of this method is that once you’ve implemented your feature, by definition, it’s already tested. Push that branch and lean in for the code review.

Another important property of TDD is it forces you to think about writing code that is testable, as a first class citizen. You don’t add testing after the fact, in the same way you don’t add performance or security after the code is “done” — right?

But for me, the most important property of TDD is it forces you to write your tests as a consumer of your own code, making you think about its API, continuously.

Many times people have said to me that they like the idea of TDD in principle, but have found they felt slower when they tried it. I understand completely. TDD does slow you down if you don’t have a design to work from. TDD doesn’t relieve you of the responsibility of designing your code first.

How much design you do is really up to you, but if you find yourself in the situation where TDD is slowing you down because you’re fighting the double whammy of changing the code and the tests at the same time, that’s a surefire sign that you’ve run off the edge of your design map.

Robert Martin says you should not write a line of production code without a failing unit test; the key word is production code. It’s 100% OK to skip writing tests while you’re exploring the design space, just remember to budget time to rewrite this code in a TDD fashion. The good news is it won’t take you very long; you’ve already designed the code, and built one to throw away.

Struct composition with Go

This is a quick Friday blog post to talk about a recent experience I had working on a piece of Juju code that needed to capture the data being sent over a net.Conn.

Most Gophers know that the net package provides a net.Pipe function which returns a pair of net.Conns representing an in memory network connection. net.Pipe is ideal for testing components that expect to talk over the network without all the mucking around of actually using the network.

The Go standard library also contains the super useful io.MultiWriter function which takes any number of io.Writers and returns another io.Writer that will send a copy of any data written to it to each of its underlying io.Writers. Now I had all the pieces I needed to create a net.Conn that could record the data written through it.

func main() {
        client, server := net.Pipe()
        var buf bytes.Buffer
        client = io.MultiWriter(client, &buf)

        // ...
}

Except this code does not compile.

# command-line-arguments
/tmp/sandbox866813815/main.go:13: cannot use io.MultiWriter(client, &buf) (type io.Writer) as type net.Conn in assignment:
	io.Writer does not implement net.Conn (missing Close method)

The value returned by io.MultiWriter is an implementation of io.Writer; it doesn’t have the rest of the methods necessary to fulfil the net.Conn interface. What I really need is the ability to replace the Write method of an existing net.Conn value. We can do this with embedding, by creating a structure that embeds both a net.Conn and an independent io.Writer as anonymous fields.

type recordingConn struct {
        net.Conn
        io.Writer
}

func main() {
        client, server := net.Pipe()
        var buf bytes.Buffer
        client = &recordingConn{
                Conn:   client,
                Writer: io.MultiWriter(client, &buf),
        }

        // ...
}

The recordingConn embeds a net.Conn, ensuring that recordingConn implements net.Conn. It also gives us a place to hang the io.MultiWriter so we can syphon off the data written by the client. There is only one small problem remaining.

# command-line-arguments
/tmp/sandbox439875759/main.go:24: recordingConn.Write is ambiguous

Because both fields in the structure are types that have a Write method, the compiler cannot decide which one should be called. To resolve this ambiguity we can add a Write method on the recordingConn type itself:

func (c *recordingConn) Write(buf []byte) (int, error) {
        return c.Writer.Write(buf)
}

With the ambiguity resolved, the recordingConn is now usable as a net.Conn implementation. You can see the full code here.
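
To close the loop, here is a quick usage sketch, assuming the types above (io.Discard needs Go 1.16; use ioutil.Discard on older versions). Bytes written by the client land both on the server side of the pipe and in buf:

func main() {
        client, server := net.Pipe()
        var buf bytes.Buffer
        conn := &recordingConn{
                Conn:   client,
                Writer: io.MultiWriter(client, &buf),
        }
        go func() {
                conn.Write([]byte("hello"))
                conn.Close() // unblocks the reader below with io.EOF
        }()
        io.Copy(io.Discard, server)                // drain everything the client wrote
        fmt.Printf("captured: %q\n", buf.String()) // captured: "hello"
}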

This is just a small example of the power of struct composition using Go. Can you think of other ways to do this?