Tag Archives: golang

Curious Channels

Channels are a signature feature of the Go programming language. Channels provide a powerful way to reason about the flow of data from one goroutine to another without the use of locks or critical sections.

Today I want to talk about two important properties of channels that make them useful for controlling not just data flow within your program, but the flow of control as well.

A closed channel never blocks

The first property I want to talk about is a closed channel. Once a channel has been closed, you cannot send a value on this channel, but you can still receive from the channel.

package main

import "fmt"

func main() {
        ch := make(chan bool, 2)
        ch <- true
        ch <- true
        close(ch)

        for i := 0; i < cap(ch)+1; i++ {
                v, ok := <- ch
                fmt.Println(v, ok)
        }
}

In this example we create a channel with a buffer of two, fill the buffer, then close it.

true true
true true
false false

Running the program shows we retrieve the first two values we sent on the channel, then on our third attempt the channel gives us the values false and false. The first false is the zero value for the channel’s element type, which is bool. The second false indicates the open state of the channel: it is now false because the channel has been closed. The channel will continue to report these values forever. As an experiment, alter this example to receive from the channel 100 times.

Being able to detect if your channel is closed is a useful property. It is used in the range over channel idiom to exit the loop once a channel has been drained.

package main

import "fmt"

func main() {
        ch := make(chan bool, 2)
        ch <- true
        ch <- true
        close(ch)

        for v := range ch {
                fmt.Println(v) // called twice
        }
}

This property really comes into its own when combined with select. Let’s start with this example.

package main

import (
        "fmt"
        "sync"
        "time"
)

func main() {
        finish := make(chan bool)
        var done sync.WaitGroup
        done.Add(1)
        go func() {
                select {
                case <-time.After(1 * time.Hour):
                case <-finish:
                }
                done.Done()
        }()
        t0 := time.Now()
        finish <- true // send the close signal
        done.Wait()    // wait for the goroutine to stop
        fmt.Printf("Waited %v for goroutine to stop\n", time.Since(t0))
}

Running the program on my system gives a low wait duration, so it is clear that the goroutine does not wait the full hour before calling done.Done().

Waited 129.607us for goroutine to stop

But there are a few problems with this program. The first is that the finish channel is not buffered, so the send to finish may block if the receiver forgot to add finish to their select statement. You could solve that problem by wrapping the send in a select block to make it non-blocking (a sketch follows below), or by making the finish channel buffered. However, if you had many goroutines listening on the finish channel, you would need to track them and remember to send on the finish channel the correct number of times. This might get tricky if you aren’t in control of creating these goroutines; they may be created in another part of your program, perhaps in response to incoming requests over the network.
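
To illustrate the first workaround, here is a minimal sketch (my own, not part of the original program) of a non-blocking send. If no goroutine is ready to receive on finish, the default case is taken and the send is skipped rather than blocking.

package main

import "fmt"

func main() {
        finish := make(chan bool) // unbuffered, as in the example above

        // Non-blocking send: if nothing is receiving on finish,
        // the default case runs and the program carries on.
        select {
        case finish <- true:
                fmt.Println("close signal delivered")
        default:
                fmt.Println("no receiver ready, skipping the signal")
        }
}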

A nice solution to this problem is to leverage the property that a closed channel is always ready to receive. Using this property we can rewrite the program, now including 100 goroutines, without having to keep track of the number of goroutines spawned, or to correctly size the finish channel.

package main

import (
        "fmt"
        "sync"
        "time"
)

func main() {
        const n = 100
        finish := make(chan bool)
        var done sync.WaitGroup
        for i := 0; i < n; i++ { 
                done.Add(1)
                go func() {
                        select {
                        case <-time.After(1 * time.Hour):
                        case <-finish:
                        }
                        done.Done()
                }()
        }
        t0 := time.Now()
        close(finish)    // closing finish makes it ready to receive
        done.Wait()      // wait for all goroutines to stop
        fmt.Printf("Waited %v for %d goroutines to stop\n", time.Since(t0), n)
}

On my system, this returns

Waited 231.385us for 100 goroutines to stop

So what is going on here? As soon as the finish channel is closed, it becomes ready to receive. As all the goroutines are waiting to receive either from their time.After channel or from finish, the select statement can now complete and each goroutine exits after calling done.Done() to decrement the WaitGroup counter. This powerful idiom allows you to use a channel to send a signal to an unknown number of goroutines, without having to know anything about them, or worry about deadlock.

Before moving on to the next topic, I want to mention a final simplification that is preferred by many Go programmers. If you look at the sample program above, you’ll note that we never send a value on the finish channel, and the receiver always discards any value received. Because of this it is quite common to see the program written like this:

package main

import (
        "fmt"
        "sync"
        "time"
)

func main() {
        finish := make(chan struct{})
        var done sync.WaitGroup
        done.Add(1)
        go func() {
                select {
                case <-time.After(1 * time.Hour):
                case <-finish:
                }
                done.Done()
        }()
        t0 := time.Now()
        close(finish)
        done.Wait()
        fmt.Printf("Waited %v for goroutine to stop\n", time.Since(t0))
}

As the behaviour of close(finish) relies on signalling the close of the channel, not the value sent or received, declaring finish to be of type chan struct{} says that the channel contains no value; we’re only interested in its closed property.

A nil channel always blocks

The second property I want to talk about is the polar opposite of the closed channel property. A nil channel, a channel value that has not been initialised or has been set to nil, will always block. For example

package main

func main() {
        var ch chan bool
        ch <- true // blocks forever
}

will deadlock as ch is nil and will never be ready to send. The same is true for receiving

package main

func main() {
        var ch chan bool
        <- ch // blocks forever
}

This might not seem important, but is a useful property when you want to use the closed channel idiom to wait for multiple channels to close. For example

// WaitMany waits for a and b to close.
func WaitMany(a, b chan bool) {
        var aclosed, bclosed bool
        for !aclosed || !bclosed {
                select {
                case <-a:
                        aclosed = true
                case <-b:
                        bclosed = true
                }
        }
}

WaitMany() looks like a good way to wait for channels a and b to close, but it has a problem. Let’s say that channel a is closed first; it will then always be ready to receive. Because bclosed is still false, the program can spin in a busy loop on that case, which may prevent channel b from ever being closed.

A safe way to solve the problem is to leverage the blocking properties of a nil channel and rewrite the program like this

package main

import (
        "fmt"
        "time"
)

func WaitMany(a, b chan bool) {
        for a != nil || b != nil {
                select {
                case <-a:
                        a = nil 
                case <-b:
                        b = nil
                }
        }
}

func main() {
        a, b := make(chan bool), make(chan bool)
        t0 := time.Now()
        go func() {
                close(a)
                close(b)
        }()
        WaitMany(a, b)
        fmt.Printf("waited %v for WaitMany\n", time.Since(t0))
}

In the rewritten WaitMany() we nil the reference to a or b once that channel has received a value. When a nil channel is part of a select statement it is effectively ignored, so niling a removes it from selection, leaving only b, which blocks until it is closed; the loop then exits without spinning.

Running this on my system gives

waited 54.912us for WaitMany

In conclusion, the simple properties of closed and nil channels are powerful building blocks that can be used to create highly concurrent programs that are simple to reason about.

Testing Go on the Raspberry Pi running FreeBSD

This afternoon Oleksandr Tymoshenko posted an update on the state of FreeBSD on ARMv6 devices. The takeaway for Raspberry Pi fans is that things are working out nicely. A few days ago a usable image was published, allowing me to do some serious testing of the Go freebsd/arm port.

So, what works? Pretty much everything

[root@raspberry-pi ~]# go run src/hello.go
Hello, 世界

For the moment cgo and hardware floating point are disabled. I disabled cgo support early in testing after some segfaults, but it shouldn’t be too hard to fix. The dist tool is currently failing to auto detect[1] support for any floating point hardware.

[root@raspberry-pi ~]# go tool dist env
GOROOT="/root/go"
GOBIN="/root/go/bin"
GOARCH="arm"
GOOS="freebsd"
GOHOSTARCH="arm"
GOHOSTOS="freebsd"
GOTOOLDIR="/root/go/pkg/tool/freebsd_arm"
GOCHAR="5"
GOARM="5"

This could be because the auto detection is broken on freebsd/arm, but possibly this kernel image does not enable the floating point unit. I’ll update this post when I’ve done some more testing.

At the moment performance is not great, even by Pi standards. The SD card runs in 1-bit 25MHz mode, and I believe the caches are disabled or set to write-through. The image has been stable for me, allowing me to compile Go, and the various ports required by the build scripts.

[root@raspberry-pi ~/go/test/bench/go1]# go test -bench=.
testing: warning: no tests to run
PASS
BenchmarkBinaryTree17    1        166473841000 ns/op
BenchmarkFannkuch11      1        83260837000 ns/op
BenchmarkGobDecode       5         518688800 ns/op           1.48 MB/s
BenchmarkGobEncode      10         225905200 ns/op           3.40 MB/s
BenchmarkGzip            1        16926476000 ns/op          1.15 MB/s
BenchmarkGunzip          1        2849252000 ns/op           6.81 MB/s
BenchmarkJSONEncode      1        3149797000 ns/op           0.62 MB/s
BenchmarkJSONDecode      1        6253162000 ns/op           0.31 MB/s
BenchmarkMandelbrot200   1        20880387000 ns/op
BenchmarkParse          10         250097600 ns/op           0.23 MB/s
BenchmarkRevcomp         5         279384200 ns/op           9.10 MB/s
BenchmarkTemplate        1        7347360000 ns/op           0.26 MB/s
ok      _/root/go/test/bench/go1        380.408s

If you are interested in experimenting with FreeBSD on your Pi, or testing Go on freebsd/arm, please get in touch with me.

Update: As of 6th Jan, 2013, benchmarks and IO have improved.

BenchmarkGobDecode             5         482796600 ns/op           1.59 MB/s
BenchmarkGobEncode            10         226637900 ns/op           3.39 MB/s
BenchmarkGzip          1        15986424000 ns/op          1.21 MB/s
BenchmarkGunzip        1        2553481000 ns/op           7.60 MB/s
BenchmarkJSONEncode            1        2967743000 ns/op           0.65 MB/s
BenchmarkJSONDecode            1        6014558000 ns/op           0.32 MB/s
BenchmarkMandelbrot200         1        19312855000 ns/op
BenchmarkParse        10         238778300 ns/op           0.24 MB/s
BenchmarkRevcomp               5         307852000 ns/op           8.26 MB/s
BenchmarkTemplate              1        6767514000 ns/op           0.29 MB/s

1. Did you know that Go automatically detects the floating point capabilities of the machine it is built on?

The Go Programming Language (2009)

This month, Go turns three, and this is the video that started it all for me.

As a testament to the skills of Pike, Thompson and Griesemer, the ideals presented in 2009 have survived virtually unaltered into the Go 1.0 release earlier this year. Rewatching this video recently, I was reminded that when changes were required they were almost always to the standard library, which underwent constant revision (aided by the gofix tool) until the 1.0 code freeze. From my notes, the important language-level changes were

  • By early 2010, semicolons had been removed from the written syntax (although still present implicitly inside the compiler).
  • The semantics of non-blocking send and receive and of channel closure were explored and altered a number of times before arriving at their final form.
  • Maps dropped the confusing map[key] = nil, false deletion form, replaced with the more regular delete(map, key), although some bemoaned the addition of another language builtin.
  • The constant literal syntax was improved to make it clearer when constructing large constant literal forms.
  • Lastly, the builtin error type replaced the original os.Error interface type.

Notes on exploring the compiler flags in the Go compiler suite

I’ve been doing some work improving the code generation of the 5g compiler, which is the Go compiler for arm. These notes also apply to the 6g and 8g compilers for amd64 and 386 respectively.

For this discussion we’ll use a very simple package.

package addr

func addr(s []int) *int {
        return &s[2]
}

To see the assembly produced by compiling this package we use the -S flag. -S can be passed directly to the compiler with go tool 5g -S addr.go, but it is simpler (and more portable) to use the -gcflags flag on the go tool itself.

% go build -gcflags=-S addr.go 
# command-line-arguments
--- prog list "addr" ---
0000 (/home/dfc/src/addr.go:3) TEXT addr+0(SB),$0-16
0001 (/home/dfc/src/addr.go:4) MOVW $s+0(FP),R0
0002 (/home/dfc/src/addr.go:4) MOVW 4(R0),R1
0003 (/home/dfc/src/addr.go:4) CMP $2,R1,
0004 (/home/dfc/src/addr.go:4) BHI ,6(APC)
0005 (/home/dfc/src/addr.go:4) BL ,runtime.panicindex+0(SB)
0006 (/home/dfc/src/addr.go:4) MOVW 0(R0),R0
0007 (/home/dfc/src/addr.go:4) ADD $8,R0
0008 (/home/dfc/src/addr.go:4) MOVW R0,.noname+12(FP)
0009 (/home/dfc/src/addr.go:4) RET ,

This is quite a lot of code for a one line function. One of the reasons for this is that s is a slice, whose length is not known at compile time, so the compiler must insert a bounds check. We can tell the compiler not to emit bounds checks with the -B flag.

% go build -gcflags=-SB addr.go
# command-line-arguments
--- prog list "addr" ---
0000 (/home/dfc/src/addr.go:3) TEXT addr+0(SB),$0-16
0001 (/home/dfc/src/addr.go:4) MOVW $s+0(FP),R0
0002 (/home/dfc/src/addr.go:4) MOVW 0(R0),R0
0003 (/home/dfc/src/addr.go:4) ADD $8,R0
0004 (/home/dfc/src/addr.go:4) MOVW R0,.noname+12(FP)
0005 (/home/dfc/src/addr.go:4) RET ,

It is important to note that -B is an unsupported flag. The goal of Go is a safe language, one where array subscripts are bounds checked when they are not provably safe. Go already elides bounds checks when you use range loops, and future compilers will improve this. It is also important to note that none of the builders test -B so it might even generate incorrect code. In summary, when the compiler improves, -B will go away, so don’t get too attached.

One other interesting flag is -N, which will disable the optimisation pass in the compiler

% go build -gcflags=-SN addr.go
# command-line-arguments
--- prog list "addr" ---
0000 (/home/dfc/src/addr.go:3) TEXT addr+0(SB),$0-16
0001 (/home/dfc/src/addr.go:4) MOVW $s+0(FP),R0
0002 (/home/dfc/src/addr.go:4) MOVW R0,R0
0003 (/home/dfc/src/addr.go:4) MOVW 4(R0),R1
0004 (/home/dfc/src/addr.go:4) CMP $2,R1,
0005 (/home/dfc/src/addr.go:4) BHI ,8(APC)
0006 (/home/dfc/src/addr.go:4) BL ,runtime.panicindex+0(SB)
0007 (/home/dfc/src/addr.go:4) UNDEF ,
0008 (/home/dfc/src/addr.go:4) MOVW 0(R0),R0
0009 (/home/dfc/src/addr.go:4) ADD $8,R0
0010 (/home/dfc/src/addr.go:4) MOVW R0,.noname+12(FP)
0011 (/home/dfc/src/addr.go:4) RET ,
0012 (/home/dfc/src/addr.go:5) BL ,runtime.throwreturn+0(SB)
0013 (/home/dfc/src/addr.go:5) RET ,

I think the only useful thing about this example is that it’s a good thing the optimiser is on by default, because there are some strange things going on here; for example line 0002, and the unreachable branch at line 0012.

The last thing to talk about is that the output of 5g is not the final code that is executed. Aside from the usual work of a linker, 5l performs several transformations on the code which are important to understand.

func addr(s[]int) *int {
10c00: e59a1000 ldr r1, [sl]
10c04: e15d0001 cmp sp, r1
10c08: 33a01004 movcc r1, #4
10c0c: 33a02010 movcc r2, #16
10c10: 31a0300e movcc r3, lr
10c14: 3b00668c blcc 2a64c
10c18: e52de004 push {lr} ; (str lr, [sp, #-4]!)
return &s[2]
10c1c: e28d0008 add r0, sp, #8
10c20: e5901004 ldr r1, [r0, #4]
10c24: e3510002 cmp r1, #2
10c28: 8a000000 bhi 10c30
10c2c: eb0035d5 bl 1e388
10c30: e5900000 ldr r0, [r0]
10c34: e2800008 add r0, r0, #8
10c38: e58d0014 str r0, [sp, #20]
10c3c: e49df004 pop {pc} ; (ldr pc, [sp], #4)

Here we use objdump -dS to dump the addr function as it is compiled into the executable. The first six instructions, starting at 10c00, are the function preamble that deals with segmented stacks; it is inserted automatically by 5l.

Taking it further

There are several other compiler flags which are useful when debugging or optimising your Go code.

  • -g will output the steps the compiler is taking at a very low level. The discussion of the output format is outside the scope of this article. Personally I find it easier to add a warn statement which will tell me the source line the compiler was working on at the time.
  • -l will disable inlining (but still retain other compiler optimisations). This is very useful if you are investigating small methods, but can’t find them in objdump.
  • -m is mainly a frontend switch and outputs details about escape analysis and inlining choices.
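
As a rough illustration (a sketch of my own, not from the original article; the exact messages differ between compiler versions), a small package like the one below is a handy playground for these flags. Building it with go build -gcflags=-m reports the inlining and escape analysis decisions, and adding -l (for example -gcflags='-m -l') suppresses the inlining.

package leak

// sum is small enough that the compiler will normally inline it;
// -gcflags=-m reports that decision, and -gcflags='-m -l' suppresses it.
func sum(a, b int) int {
        return a + b
}

// newInt returns the address of a local variable, so the variable
// escapes to the heap; -gcflags=-m reports this as well.
func newInt() *int {
        i := 42
        return &i
}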

Mikio Hara’s ipv4 package

Yesterday Mikio Hara committed a new package for ipv4 handling to the go.net repository. I wanted to recognise Mikio’s work for two reasons.

  1. The level of detail and control this package offers is, in my opinion, unmatched by any other language. If you’re using C or C++, then you have the raw power of setsockopt(2) and ioctl(2) available, but you also have no guide or safety rail when using them. Mikio’s package provides exquisite control over ipv4 minutia without sacrificing the safety that Go provides.
  2. The package is a fantastic example of using conditional compilation and embedding to build a very low level package that works across all the supported Go platforms. This package compiles cleanly on Linux, *BSD, OS X and Windows without resorting to the lowest common denominator or using a single ifdef. If you are looking for examples on how to structure your code using build tags, or looking for ways to work within the Go1 API, you should study this package.

Check it out for yourselves, go.pkgdoc.org/code.google.com/p/go.net/ipv4.

Installing Go on the Raspberry Pi

Introduction

The Raspberry Pi has captured the imagination of hackers and makers alike. While it certainly wasn’t the first ARM development board on the market, its bargain basement price tag and the charitable philosophy of its inventors have sparked a huge interest in this little ARM system. What could be more appropriate for a new generation of programmers than a modern, safe and efficient programming language for their projects, Google Go?

This post describes the steps for installing Google Go from source on the Raspberry Pi. At the time of this post, trunk contains many improvements for ARM processors, including full support for cgo, which are not available in Go 1.0.x. It is expected that these enhancements will be available when Go 1.1 ships next year. If you are reading this post in September 2013, then it’s likely you will want to use the version of Go shipping with your operating system distribution rather than these instructions.

Update May, 2013 If you want to save yourself some time, precompiled binary releases of Go 1.1 for linux/arm are available. Please see the Unofficial ARM tarballs link at the top of the page.

Setting up your Pi

Before you compile Go on your Pi you should follow these steps to ensure a successful compilation. Briefly summarised, they are:

  1. Install Raspbian
  2. Configure your memory split
  3. Add some swap

Install Raspbian

The downloads page on the Raspberry Pi website contains links to SD card images for various Linux distributions. This tutorial recommends the Raspbian wheezy flavour of Debian. Follow the instructions for creating an SD card image and continue to the next step once you have ssh’d into your Raspbian installation.

Configure your memory split

The Pi comes with 256MB of memory which is shared between the video subsystem and the main processor. Compiling and linking Go programs can consume over 100MB of RAM, so it is recommended that the memory split be adjusted in favour of the main processor, at least while working with Go code. The Raspbian distribution makes this very easy with the raspi-config utility

% sudo raspi-config


Then reboot your system

% sudo shutdown -r now

Add some swap

To run the full test suite you will need some swap. This can be accomplished in a variety of ways, using an external USB hard drive, or swapping over NFS. I’ll describe how to set up an NFS swap partition as this is the configuration I am using to generate this tutorial.

% sudo dd if=/dev/zero of=/import/nas/swap bs=1024 count=1048576
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 136.045 s, 7.9 MB/s
% sudo losetup /dev/loop0 /import/nas/swap
% sudo mkswap /dev/loop0
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=7ba9443d-c64c-416f-9931-39e3e2decf0f
% sudo swapon /dev/loop0
% free -m
             total       used       free     shared    buffers     cached
Mem:           232         78        153          0          0         24
-/+ buffers/cache:          52        179
Swap:         1123         15       1108

Installing the prerequisites

Raspbian comes with almost all the tools you need to compile Go already installed, but to be sure you should install the following packages, described on the golang.org website.

% sudo apt-get install -y mercurial gcc libc6-dev

Cloning the source

% hg clone -u default https://code.google.com/p/go $HOME/go
warning: code.google.com certificate with fingerprint 9f:af:b9:ce:b5:10:97:c0:5d:16:90:11:63:78:fa:2f:37:f4:96:79 not verified (check hostfingerprints or web.cacerts config setting)
destination directory: go
requesting all changes
adding changesets
adding manifests
adding file changes
added 14430 changesets with 52478 changes to 7406 files (+5 heads)
updating to branch default
3520 files updated, 0 files merged, 0 files removed, 0 files unresolved

Building Go

% cd $HOME/go/src
% ./all.bash

If all goes well, after about 90 minutes you should see

ALL TESTS PASSED

---
Installed Go for linux/arm in /home/dfc/go
Installed commands in /home/dfc/go/bin

If there was an error relating to out of memory, or you couldn’t configure an appropriate swap device, you can skip the test suite by executing

% cd $HOME/go/src
% ./make.bash

as an alternative to ./all.bash.

Adding the go command to your path

The go command should be added to your $PATH

% export PATH=$PATH:$HOME/go/bin
% go version
go version devel +cfbcf8176d26 Tue Sep 25 17:06:39 2012 +1000

Now, Go and make something awesome.


How the Go language improves expressiveness without sacrificing runtime performance

This week there was a discussion on the golang-nuts mailing list about an idiomatic way to update a slice of structs. For example, consider this struct representing a set of counters.

type E struct {
        A, B, C, D int
}

var e = make([]E, 1000)

Updating these counters may take the form

for i := range e {
        e[i].A += 1
        e[i].B += 2
        e[i].C += 3
        e[i].D += 4
}

Which is good idiomatic Go code. It’s pretty fast too

BenchmarkManual   500000              4642 ns/op

However there is a problem with this example. Each access to the ith element of e requires the compiler to insert an array bounds check. You can avoid 3 of these checks by referencing the ith element once per iteration.

for i := range e {
        v := &e[i]
        v.A += 1
        v.B += 2
        v.C += 3
        v.D += 4
}

By reducing the number of subscript checks, the code now runs considerably faster.

BenchmarkUnroll  1000000              2824 ns/op

If you are coding a tight loop, clearly this is the more efficient method, but it comes with a cost to readability, as well as a few gotchas. Someone else reading the code might be tempted to move the creation of v into the for declaration, or wonder why the address of e[i] is being taken. Both of these changes would cause an incorrect result. Obviously tests are there to catch this sort of thing, but I propose there is a better way to write this code, one that doesn’t sacrifice performance, and expresses the intent of the author more clearly.

func (e *E) update(a, b, c, d int) {
        e.A += a
        e.B += b
        e.C += c
        e.D += d
}

for i := range e {
        e[i].update(1, 2, 3, 4)
}

Because E is a named type, we can create an update() method on it. Because update is declared with a receiver of *E, the compiler automatically inserts the (&e[i]).update() conversion for us. Most importantly, because of the simplicity of the update() method itself, the inliner can roll it up into the body of the calling loop, negating the method call cost. The result is very similar to the hand unrolled version.

BenchmarkUpdate   500000              2996 ns/op

In conclusion, as Cliff Click observed, there are Lies, Damn Lies and Microbenchmarks. This post is an example of at least one of the three. My intent in writing was not to spark a language war about who can update a struct the fastest, but instead to argue that Go lets you write your code in a more expressive manner without having to trade off performance.

You can find the source code for the benchmarks presented in this post below.

package b

import "testing"

// SIZE=1000 results (core i5 late 2011 mac mini, 10.7.3)
// % go test -v -run='XXX' -bench='.'
// PASS
// BenchmarkUpdate 500000 2996 ns/op
// BenchmarkManual 500000 4642 ns/op
// BenchmarkUnroll 1000000 2824 ns/op

type E struct {
        A, B, C, D int
}

func (e *E) update(a, b, c, d int) {
        e.A += a
        e.B += b
        e.C += c
        e.D += d
}

var SIZE = 1000 // needed to make a valid testable package

func TestNothing(t *testing.T) {}

func assert(e []E, b *testing.B) {
        for _, v := range e {
                if v.A != b.N || v.B != b.N*2 || v.C != b.N*3 || v.D != b.N*4 {
                        b.Errorf("Expected: %d, %d, %d, %d; actual: %d, %d, %d, %d",
                                b.N, b.N*2, b.N*3, b.N*4, v.A, v.B, v.C, v.D)
                }
        }
}

func BenchmarkUpdate(b *testing.B) {
        var e = make([]E, SIZE)
        for j := 0; j < b.N; j++ {
                for i := range e {
                        e[i].update(1, 2, 3, 4)
                }
        }
        b.StopTimer()
        assert(e, b)
}

func BenchmarkManual(b *testing.B) {
        var e = make([]E, SIZE)
        for j := 0; j < b.N; j++ {
                for i := range e {
                        e[i].A += 1
                        e[i].B += 2
                        e[i].C += 3
                        e[i].D += 4
                }
        }
        b.StopTimer()
        assert(e, b)
}

func BenchmarkUnroll(b *testing.B) {
        var e = make([]E, SIZE)
        for j := 0; j < b.N; j++ {
                for i := range e {
                        v := &e[i]
                        v.A += 1
                        v.B += 2
                        v.C += 3
                        v.D += 4
                }
        }
        b.StopTimer()
        assert(e, b)
}

Introducing gmx, runtime instrumentation for Go applications

What is gmx ?

gmx is an experimental package for instrumenting Go applications. gmx is similar to Java’s jmx and provides a simple method of querying the internal state of your Go application by invoking anonymous functions bound to published keys. Here is an example using the included client, gmxc.

% ./gmxc -p 16378 runtime.version runtime.numcpu os.args 
os.args: [./godoc -v -http=:8080]
runtime.numcpu: 4
runtime.version: weekly.2012-01-27 11688+

How can I use gmx in my applications ?

In the example above, a stock godoc was instrumented by importing the github.com/davecheney/gmx package into main.

package main

import _ "github.com/davecheney/gmx"

The runtime and os instruments are provided by default by gmx.

How can I export my own values via gmx ?

You can publish gmx instruments at any point in your application. The most logical place is inside your package’s init function.

package foo

import "github.com/davecheney/gmx"

var c1 = 1
var c2 = 2

func init() {
        // publish c1 via a closure
        gmx.Publish("c1", func() interface{} {
                return c1
        })
        // publish c2 via a function
        gmx.Publish("c2", getC2)
}

func getC2() interface{} {
        return c2
}

Using gmxc the values can be queried at runtime:

./gmxc -p 16467 c1 c2
c2: 2
c1: 1

Who is gmx aimed at ?

If you are developing an application in Go, especially a daemon or some other long running process, then gmx may appeal to you. If you live on the other side of the DevOps table, gmx will hopefully allow you to gain a greater understanding of the internal workings of the applications you have been charged with caring for.

If you find gmx useful, please let me know. I plan to continue to develop the default instrumentation bundled with gmx and am always open to pull requests for additional functionality.

You can find the source to gmx on github, https://github.com/davecheney/gmx.


Why Go gets exceptions right

How does Go get exceptions right? Why, by not having them in the first place.

First, a little history.

Before my time, there was C, and errors were your problem. This was generally okay, because if you owned a ’70s vintage minicomputer, you probably had your share of problems. Because C was a single return value language, things got a bit complicated when you wanted to know the result of a function that could sometimes go wrong. IO is a perfect example of this, or sockets, but there are also more pernicious cases like converting a string to its integer value. A few idioms grew to handle this problem. For example, if you had a function that would mess around with the contents of a struct, you could pass a pointer to it, and the return code would indicate whether the fiddling was successful. There are other idioms, but I’m not a C programmer, and that isn’t the point of this article.

Next came C++, which looked at the error situation and tried to improve it. If you had a function which would do some work, it could return a value or it could throw an exception, which you were then responsible for catching and handling. Bam! Now C++ programmers can signal errors without having to conflate them with their single return value. Even better, exceptions can be handled anywhere in the call stack. If you don’t know how to handle that exception it’ll bubble up to someone who does. All the nastiness with errno and threads is solved. Achievement unlocked!

Sorta.

The downside of C++ exceptions is you can’t tell (without the source and the impetus to check) if any function you call may throw an exception. In addition to worrying about resource leaks and destructors, you have to worry about RAII and transactional semantics to ensure your methods are exception safe in case they are somewhere on the call stack when an exception is thrown. In solving one problem, C++ created another.

So the designers of Java sat down, stroked their beards and decided that the problem was not exceptions themselves, but the fact that they could be thrown without notice; hence Java has checked exceptions. You can’t throw an exception inside a method without annotating that method’s signature to indicate you may do so, and you can’t call a method that may throw an exception without wrapping it in code to handle the potential exception. Via the magic of compile time bondage and discipline the error problem is solved, right?

This is about the time I enter the story, the early millennium, circa Java 1.4. I agreed then, as I do now, that the Java way of checked exceptions was more civilised, safer, than the C++ way. I don’t think I was the only one. Because exceptions were now safe, developers started to explore their limits. There were coroutine systems built using exceptions, and at least one XML parsing library I know of used exceptions as a control flow technique. It’s commonplace for established Java webapps to disgorge screenfuls of exceptions, dutifully logged with their call stack, on startup. Java exceptions ceased to be exceptional at all; they became commonplace. They are used for everything from the benign to the catastrophic, and differentiating between the severity of exceptions falls to the caller of the function.

If that wasn’t bad enough, not all Java exceptions are checked; subclasses of java.lang.Error and java.lang.RuntimeException are unchecked. You don’t need to declare them, just throw them. This probably started out as a good idea, as null references and array subscript errors are now simple to implement in the runtime, but because every exception in Java extends java.lang.Exception any piece of code can catch it, even if it makes little sense to do so, leading to patterns like

catch (Exception e) { // ignore }

So, Java mostly solved the C++ unchecked exception problem, and introduced a whole slew of its own. However I argue Java didn’t solve the actual problem, the problem that C++ didn’t solve either: how to signal to the caller of your function that something went wrong.

Enter Go

Go solves the exception problem by not having exceptions. Instead Go allows functions to return an error type in addition to a result via its support for multiple return values. By declaring a return value of the interface type error you indicate to the caller that this method could go wrong. If a function returns a value and an error, then you can’t assume anything about the value until you’ve inspected the error. The only place where it may be acceptable to ignore the error value is when you don’t care about the other values returned.
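
To make the convention concrete, here is a minimal sketch (the function and its names are illustrative, not from the original article) of the multiple return value pattern: strconv.Atoi returns both a value and an error, and the caller inspects the error before trusting the value.

package main

import (
        "fmt"
        "strconv"
)

// atoiOrDefault wraps strconv.Atoi, which returns a value and an error.
// The caller must check err before trusting n.
func atoiOrDefault(s string, def int) int {
        n, err := strconv.Atoi(s)
        if err != nil {
                // something went wrong; n cannot be trusted
                return def
        }
        return n
}

func main() {
        fmt.Println(atoiOrDefault("21", 0))     // 21
        fmt.Println(atoiOrDefault("banana", 0)) // 0
}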

Go does have a facility called panic, and if you squint hard enough you might imagine that panic is the same as throw, but you’d be wrong. When you throw an exception you’re making it the caller’s problem

throw new SomeoneElsesProblem();

For example in C++ you might throw an exception when you can’t convert from an enum to its string equivalent, or in Java when parsing a date from a string. In an internet connected world, where every input from a network must be considered hostile, is the failure to parse a string into a date really exceptional? Of course not.

When you panic in Go, you’re freaking out; it’s not someone else’s problem, it’s game over, man.

panic("inconceivable")

panics are always fatal to your program. In panicking you never assume that your caller can solve the problem. Hence panic is only used in exceptional circumstances, ones where it is not possible for your code, or anyone integrating your code, to continue.

The decision to not include exceptions in Go is an example of its simplicity and orthogonality. Using multiple return values and a simple convention, Go solves the problem of letting programmers know when things have gone wrong and reserves panic for the truly exceptional.