
Why Go gets exceptions right

How does Go get exceptions right? Why, by not having them in the first place.

First, a little history.

Before my time, there was C, and errors were your problem. This was generally okay, because if you owned a ’70s-vintage minicomputer, you probably had your share of problems. Because C was a single-return language, things got a bit complicated when you wanted to know the result of a function that could sometimes go wrong. IO is a perfect example of this, or sockets, but there are also more pernicious cases like converting a string to its integer value. A few idioms grew up to handle this problem. For example, if you had a function that would mess around with the contents of a struct, you could pass it a pointer to the struct, and the return code would indicate whether the fiddling was successful. There are other idioms, but I’m not a C programmer, and that isn’t the point of this article.

Next came C++, which looked at the error situation and tried to improve it. If you had a function which would do some work, it could return a value or it could throw an exception, which you were then responsible for catching and handling. Bam! Now C++ programmers can signal errors without having to conflate them with their single return value. Even better, exceptions can be handled anywhere in the call stack. If you don’t know how to handle that exception, it’ll bubble up to someone who does. All the nastiness with errno and threads is solved. Achievement unlocked!

Sorta.

The downside of C++ exceptions is that you can’t tell (without the source and the impetus to check) whether any function you call may throw an exception. In addition to worrying about resource leaks and destructors, you have to worry about RAII and transactional semantics to ensure your methods are exception safe in case they are somewhere on the call stack when an exception is thrown. In solving one problem, C++ created another.

So the designers of Java sat down, stroked their beards, and decided that the problem was not exceptions themselves, but the fact that they could be thrown without notice; hence Java has checked exceptions. You can’t throw an exception inside a method without annotating that method’s signature to indicate you may do so, and you can’t call a method that may throw an exception without wrapping it in code to handle the potential exception. Via the magic of compile-time bondage and discipline, the error problem is solved, right?

This is about the time I enter the story, the early millennium, circa Java 1.4. I agreed then, as I do now, that the Java way of checked exceptions was more civilised, and safer, than the C++ way. I don’t think I was the only one. Because exceptions were now safe, developers started to explore their limits. There were coroutine systems built using exceptions, and at least one XML parsing library I know of used exceptions as a control flow technique. It’s commonplace for established Java webapps to disgorge screenfuls of exceptions, dutifully logged with their call stacks, on startup. Java exceptions ceased to be exceptional at all; they became commonplace. They are used for everything from the benign to the catastrophic, and differentiating between the severity of exceptions falls to the caller of the function.

If that wasn’t bad enough, not all Java exceptions are checked; subclasses of java.lang.Error and java.lang.RuntimeException are unchecked. You don’t need to declare them, just throw them. This probably started out as a good idea, as null references and array subscript errors are now simple to implement in the runtime, but because every exception in Java extends java.lang.Exception, any piece of code can catch it, even if it makes little sense to do so, leading to patterns like

catch (Exception e) { /* ignore */ }

So, Java mostly solved the C++ unchecked exception problem, and introduced a whole slew of its own. However, I argue Java didn’t solve the actual problem, the problem that C++ didn’t solve either: how to signal to the caller of your function that something went wrong.

Enter Go

Go solves the exception problem by not having exceptions. Instead, Go allows functions to return an error value in addition to a result via its support for multiple return values. By declaring a return value of the interface type error, you indicate to the caller that this method could go wrong. If a function returns a value and an error, then you can’t assume anything about the value until you’ve inspected the error. The only time it may be acceptable to ignore the error is when you don’t care about the other values returned.
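
Here’s a minimal sketch of that convention in practice. The atoi wrapper is mine, added only for illustration; it leans on the standard library’s strconv.Atoi, which follows the same pattern.

package main

import (
	"fmt"
	"strconv"
)

// atoi wraps strconv.Atoi purely for illustration; the error in its
// signature tells the caller that the conversion can go wrong.
func atoi(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("atoi: %v", err)
	}
	return n, nil
}

func main() {
	n, err := atoi("42")
	if err != nil {
		// n means nothing until err has been inspected
		fmt.Println("conversion failed:", err)
		return
	}
	fmt.Println("parsed:", n)
}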

Go does have a facility called panic, and if you squint hard enough, you might imagine that panic is the same as throw, but you’d be wrong. When you throw an exception you’re making it the caller’s problem

throw new SomeoneElsesProblem();

For example, in C++ you might throw an exception when you can’t convert from an enum to its string equivalent, or in Java when parsing a date from a string. In an internet-connected world, where every input from the network must be considered hostile, is the failure to parse a string into a date really exceptional? Of course not.
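
Here’s a sketch of how Go treats that same situation: a bad date from the network comes back from the standard library’s time.Parse as an ordinary error value, to be handled like any other.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Input from the network may well be garbage; in Go that’s just
	// an error value to handle, not an exception to catch.
	when, err := time.Parse("2006-01-02", "not-a-date")
	if err != nil {
		fmt.Println("bad input:", err)
		return
	}
	fmt.Println(when)
}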

When you panic in Go, you’re freaking out; it’s not someone else’s problem, it’s game over, man.

panic("inconceivable")

Panics are always fatal to your program. In panicking you never assume that your caller can solve the problem. Hence panic is only used in exceptional circumstances, ones where it is not possible for your code, or anyone integrating your code, to continue.
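
By way of contrast, here’s a minimal sketch of the kind of situation panic is meant for. mustPositive is a made-up helper, not a real API; the point is that a negative count is a bug in the caller, not a condition the caller can be expected to handle.

package main

import "fmt"

// mustPositive panics on a negative count, because that indicates a
// programming error in the caller rather than a recoverable condition.
func mustPositive(n int) {
	if n < 0 {
		panic(fmt.Sprintf("inconceivable: negative count %d", n))
	}
}

func main() {
	mustPositive(3)  // fine
	mustPositive(-1) // the program dies here with a stack trace
}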

The decision not to include exceptions in Go is an example of its simplicity and orthogonality. Using multiple return values and a simple convention, Go solves the problem of letting programmers know when things have gone wrong, and reserves panic for the truly exceptional.