Just so we’re clear, this post is a thought experiment, not any form of commitment to deliver Go 2.0 in any time frame. While I personally believe there will be a Go 2.0 in the future, I’m in no position to influence its creation; hence, this post is mere speculation.
Why introduce a new major version of Go?
Go 1.0 was released over four years ago, and since then the Go 1 compatibility contract has been a boon to anyone investing in Go as the language to build their product. So, why introduce a new version of Go at all?
By the time that Go 1.8 is released at the start of 2017, the standard library will have accumulated cruft and hacks for five years, and if you consider that Go started life in 2007, it’s closer to ten. An opportunity to address this cruft and remove some of the packages which are now understood to be a bad idea would make the standard library more consistent and approachable to newcomers.
It is possible the language itself could become smaller. Rob Pike noted in 2014 that there are too many ways to declare a variable in Go, and this could be rationalised. Similarly, the incongruence between make and new might be resolved. Then there is the problem that non-Latin characters are not considered upper case, so identifiers written in them can never be exported. So, lots of little cleanups to do.
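As a concrete illustration (my example, not Pike's), here are the overlapping declaration forms, alongside the make/new split:

    package main

    import "fmt"

    func main() {
        // Overlapping ways to declare a variable; a Go 2.0 cleanup
        // could plausibly collapse these into one or two forms.
        var a int     // explicit type, zero value
        var b = 1     // type inferred from the initialiser
        var c int = 1 // explicit type and initialiser
        d := 1        // short declaration, inside functions only

        // new(T) returns a *T pointing at zeroed storage, while make
        // initialises slices, maps, and channels and returns them by
        // value, not as pointers.
        p := new(map[string]int)  // *map[string]int, pointing at a nil map
        m := make(map[string]int) // map[string]int, ready for use

        fmt.Println(a, b, c, d, p, m)
    }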
Obviously some kind of solution for templated types would have to be part of any Go 2.0 discussion and, as David Symonds pointed out several years ago, they would have to be used to rewrite the standard library, both causing, and justifying, the compatibility break.
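For a flavour of what templated types might offer, here is a sketch; the square-bracket type-parameter syntax is my assumption for illustration, not anything that was proposed at the time:

    // Min is a hypothetical templated function; the [T ...] type
    // parameter syntax is invented purely for illustration.
    func Min[T int | int64 | float64](a, b T) T {
        if a < b {
            return a
        }
        return b
    }

A standard library rewritten with such types could collapse near-duplicates like sort.Ints, sort.Float64s, and sort.Strings into a single function, which is exactly the kind of rewrite that would force the compatibility break.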
Backward compatibility
Backwards compatibility is not about syntax or features; backwards compatibility is about investment. Investment in the language, at both a technical and a career level. Investment in libraries. Investment in backends that generate machine code. Investment in the middle of the compiler that transforms and optimises code. Investment in build scripts and toolchains that embed one piece of compiled code into another.
Brian Goetz, the Java language architect, describes the commitment to backward compatibility as the “central park effect”. This is something our cousins in the hardware world have long understood: never let the customer unbolt your product from the rack, because they might take the opportunity to give that space to your competition.
The lessons of Python 3000 are instructive: ignore backward compatibility at your peril. No matter how compelling the new version of your language, if you make it incompatible with the investment in the previous version, you are launching a new product which is in direct competition with itself. And just to make it clear, I’m not picking on Python specifically; there are plenty of other examples: D 2.0, Perl 6, and VB.NET also come to mind.
All of these examples show the danger of creating a new version of a language that requires its users to rewrite the entire source of their program, including all of its dependencies (which may be non-trivial), before it will compile and run.
A plausible implementation
So, how could we create a new Go 2.0 language, with a new syntax and a new standard library, without making it incompatible with every piece of Go code written to date? How could we avoid the all-or-nothing stand-off in which other languages have placed their users?
What if we could combine code written in Go 1.0 and a proposed Go 2.0 in one program, using the package level as the boundary between language versions? Go 2.0 would be a new language, with a new standard library built upon a runtime shared between itself and Go 1.0, thereby allowing users to work outwards from their Go 2.0 main package to the limbs of their dependency graph, one package at a time.
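To make that concrete, imagine something like the following, where the language version is declared per package; every pragma, import path, and identifier in this sketch is invented:

    //go:lang 2.0 // invented version pragma; nothing like it exists
    package main

    import (
        "fmt"                 // hypothetically resolved to the Go 2.0 standard library
        "example.com/go1/dep" // an unmodified Go 1.0 dependency, consumed as-is
    )

    func main() {
        fmt.Println(dep.Answer()) // Go 2.0 code calling down into a Go 1.0 package
    }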
A Go 2.0 package would be able to call down to Go 1.0, but not the other way around. Go 2.0 types would be able to interoperate with Go 1.0 types, but Go 1.0 types would be unaware of Go 2.0 constructed code. Perhaps calling from Go 2.0 to Go 1.0 would look conceptually like using cgo to call C code, except without the overhead, as both languages would be compiled to the same intermediate form.
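For comparison, this is what crossing the cgo boundary looks like today; a Go 2.0 to Go 1.0 call might feel conceptually similar, without cgo’s cost of switching between Go and C calling conventions:

    package main

    /*
    #include <stdlib.h>
    */
    import "C"

    import "fmt"

    func main() {
        // Each C.rand() call crosses from the Go world into the C
        // world, which is considerably more expensive than a plain
        // Go function call.
        fmt.Println("from C:", C.rand())
    }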
The key is that both language versions would be compiled to a single intermediate representation, one that can represent the superset of both syntaxes. This has been done before; in the first few versions of Go, C code and Go code were compiled to an intermediate representation, Ken Thompson’s universal assembly language, then converted to machine code at link time. Now, with Keith Randall’s SSA compiler, there is a single low-level intermediate representation (similar to gcc’s GIMPLE and LLVM’s IR) that describes all the things that make Go programs Go.
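You can inspect this SSA form yourself: since the SSA backend landed, building with the GOSSAFUNC environment variable set writes every pass over the named function to an HTML report:

    # Dump the SSA passes for the function named main to ssa.html
    # in the current directory.
    GOSSAFUNC=main go build main.go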
There is a strong precedent for this: the Oracle (née Sun) JVM. For more than a decade the JVM has hosted bytecode that was not compiled from a .java source file. Combined with a version of gofix that could automate some of the effort of migrating a package to Go 2.0 syntax, this could be a plausible way to introduce a new version of Go without abrogating the investment in code written for Go 1.0.
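gofix lives on today as go tool fix, and its command line already has the shape such a migration tool would need; the go2 fix name below is invented, though the -r and -diff flags are real:

    # Preview, without rewriting any files, the changes a
    # hypothetical "go2" fix would make to a package.
    go tool fix -r go2 -diff ./mypkg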
This also raises the possibility of developing other language front-ends using the Go toolchain. If you look at what LLVM has done for projects like Pony, Crystal, and Rust, think of what a portable, cross-platform, optimising compiler, with user-space concurrency built in, and written in Go, not C++, would mean for language experimentation.