Simplicity and collaboration

This is the text of my closing keynote from Gophercon India. It has been slightly altered for readability from my original speaking notes.

I am indebted to the organisers of Gophercon India for inviting me to speak, and to Canonical for giving me the time off to attend the conference.

If you want to see me stumble through the real thing, the video is now available.


Simplicity and collaboration

Closing Keynote, Gophercon India
21 Feb 2015

Introduction

I want to open my presentation with a proposition.

https://twitter.com/davecheney/status/539576755254611968

Being passionate about Go means being passionate about language advocacy, and a natural hazard of dwelling too long on the nature of programming results in statements like these.

But underlying the pithy form enforced by a tweet, I believe there is a gem of truth: I cannot think of a language introduced in my lifetime that didn’t purport to be simple. Each new language offers, as both a justification and an enticement, its inherent simplicity.

On the other hand, I cannot point to a language introduced in the same time frame with the rallying call of complexity; more complexity than its contemporaries—but many instead claim to be powerful.

The idea of proposing a language which offered higher levels of inherent complexity is clearly laughable, yet this is exactly what so many contemporary languages have become: complicated, baroque messes, parodies of the languages they sought to replace.

Clumsy syntax and non-orthogonality are justified by the difficulty of capturing nuanced corner cases of the language, many of them self-inflicted by years of careless feature creep.

So, every language starts out with simplicity as a goal, yet many of them fail to achieve this goal, eventually falling back on notions of expressiveness or power as justification for their failure to remain simple.

And why is this? Why do so many languages, launched with sincere, idealistic goals, fall afoul of their own self-inflicted complexity?

One reason, one major reason, I believe, is that to be thought successful, a language should somehow include all the popular features of its predecessors.

If you listen to language critics, they demand that any new language should push forward the boundaries of language theory.

In reality this appears to be a veiled request that your new language include all the bits they felt were important in their favourite old language, while still holding true to the promise of whatever it was that drew them to investigate your language in the first place.

I believe that this is a fundamentally incorrect view.

Why would a new language be proposed if not to address the limitations of its predecessors?

Why should a new language not aim to represent a refinement of the cornucopia of features presented in existing languages, learning from its predecessors rather than repeating their folly?

Language design is about trade-offs; you cannot have your cake and eat it too. So I challenge the notion that every mainstream language must be a super-set of those it seeks to replace.

Simplicity

This brings me back to my tweet.

Go is a language that chooses to be simple, and it does so by choosing to not include many features that other programming languages have accustomed their users to believing are essential.

So the subtext of this thesis would be: what makes Go successful is what has been left out of the language, just as much as what has been included.

Or as Rob Pike puts it “less is exponentially more”.

Simplicity cannot be added later

When raising a new building, engineers first sink long pillars, down to the bedrock to provide a stable foundation for the structure of the building.

To not do this, to just tamp the area flat, lay a concrete slab and start construction, would leave the building vulnerable to small disturbances from changes in the local area, at risk from rising damp or subsidence due to changes in environmental conditions.

As programmers, we can recognise this as the parable of the leaky abstraction. Just as tall buildings can only be successfully constructed by placing them on a firm foundation, large programs cannot be successful if they are placed upon a loose covering of dirt that masks decades of accumulated debris.

You cannot add simplicity after the fact. Simplicity is only gained by taking things away.

Simplicity is not easy

Simplicity does not mean easy, but it may mean straightforward or uncomplicated.

Something which is simple may take a little longer, it may be a little more verbose, but it will be more comprehensible.

Putting this into the context of programming languages, a simple programming language may choose to limit the number of semantic conveniences it offers to experienced programmers to avoid alienating newcomers.

Simplicity is not a synonym for easy, nor is achieving a design which is simple an easy task.

Don’t mistake simple for crude

Just because something may be simple, don’t mistake it for crude.

While lasers are fabulous technology used in manufacturing and medicine, a chef prefers a knife to prepare food.

Compared to the laser, a simple chef’s knife may appear unsophisticated, but in truth it represents generations of knowledge in metallurgy, manufacturing and usability.

When considering a programming language, don’t mistake a lack of the latest features for a lack of sophistication.

Simplicity is a goal, not a by-product

“nothing went in [to the language], until all three of us [Ken, Robert and myself], agreed that it was a good idea.” — Rob Pike, Gophercon 2014

You should design your programs with simplicity as a goal, not aim to be pleasantly surprised when your solution happens to be simple.

As Rob Pike noted at Gophercon last year, Go was not designed by committee. The language represents a distillation of the experiences of Robert Griesemer, Ken Thompson, and himself, and only once all three were convinced of a feature’s utility to the language was it included.

Choose simplicity over completeness

There is an exponential cost in completeness.

The 90% solution, a language that remains orthogonal while recognizing that some things are not possible, will be inherently less complex than a language attempting to offer 100% of its capabilities in every possible permutation, because, as we engineers know,

The last 10% costs another 90% of the effort.

Complexity

A lack of simplicity is, of course, complexity.

Complexity is friction, a force which acts against getting things done.

Complexity is debt, it robs you of capital to invest in the future.

Good programmers write simple programs

Good programmers write simple programs.

They bring their experience, their knowledge and their failures to new designs, to learn from and avoid mistakes in the future.

Simplicity, conclusion

To steal a quote from Rich Hickey

“Simplicity is the ultimate sophistication” — Leonardo da Vinci

Go is a language designed to be simple. It is a feature, not a by-product, or an accident.

This was the message that spoke to me when I first learned about the language in 2009, and is the message that has stayed with me to this day.

The desire for simplicity is woven through every aspect of the language.

My question for you, the audience: do you want to write simple programs, or will you settle for writing powerful programs?

Collaboration

I hope by now I have convinced you that the need for simplicity in programming languages is self evident, so I want to move to my second topic: collaboration.

Is programming an art or a science? Are we artists or engineers? This question is a debate in itself, but I hope you will humour me that as professionals, programming is a little of both; we are both software artists and software engineers, and as engineers we work in teams.

There is more to the success of Go than just being simple, and this is the realization that for a programming language to be successful, it must coexist inside a larger environment.

A language for collaboration

Large programs are written by large teams. I don’t believe this is a controversial statement.

The inverse is also true. Large teams of programmers, by their nature, produce large code bases.

Projects with large goals will necessitate large teams, and thus their output will be commensurate.

This is the nature of our work.

Big Problems

Go is a small language, but deliberately designed as a language for large teams of programmers.

Small annoyances such as a lack of warnings, a refusal to allow unused imports, or unused local variables, are all facets of choices designed to help Go work well for large teams.

This does not mean that Go is not suitable for the single developer working alone, or a small program written for a specific need, but speaks to the fact that a number of the choices within the language are aimed at the needs of growing software teams.

And if your project is successful, your team will grow, so you need to plan for it.

Programming languages as part of an environment

It may appear heretical to suggest this, as many of the metrics by which we as professional software developers are judged (lines of code written, the number of revisions committed to source control, and so on) are all accounted for character by character, line by line, file by file.

But, writing a program, or more accurately solving a problem; delivering a solution, has little to do with the final act of entering the program into the computer.

Programs are designed, written, debugged and distributed in an environment significantly larger than one programmer’s editor.

Go recognizes this; it is a language designed to work in this larger environment, not in spite of it.

Because ultimately Go is a language for the problems that exist in today’s commercial programming, not just language theory.

Code is written to be decoded

The author Peter Seibel suggests that programs are not read, but instead decoded. In hindsight this should have been obvious; after all, we call it source code, not source literature.

The source code of a program is an intermediary form, somewhere between our concept and the computer’s executable notation.

As with many transformations, this encoding of the source program is not lossless; some loss of fidelity, some ambiguity, some imprecision is present. This is why when reading source code, you must in fact decode it, to divine the original intention of the programmer.

Many of the choices relating to the way Go code is represented as source, speak to this impedance mismatch. The simplicity and regularity of the grammar, while providing few opportunities for individuality, in turn makes it easier for a new reader to decode a Go program and determine its function.

Because source code is written to be read.

How to build a Go community

Go is a language designed from the beginning to be transposed, transformed and processed at the source level. This has opened up new fields of analysis and code generation to the wider Go community. We’ve seen several examples of this demonstrated at this conference.

While these tools are impressive, I believe the greatest champion of the regular syntax of a Go source file is go fmt.

But what is so important about go fmt, and why is it important to go fmt your source code?

Part of the reason is, of course, to avoid needless debate. Large teams will, by their nature have a wide spectrum of views on many aspects of programming, and source code formatting is the most pernicious.

Go is a language for collaboration. So, in a large team, just as in a wider community, personal choices are moderated for a harmonious society.

The outcome is that nearly all Go code is gofmt formatted by convention. Adherence to this convention is an indicator of alignment with the values of Go.

This is important because it is a social convention leading to positive reinforcement, which is far more powerful than the negative reinforcement of a chafing edict from the compiler writer.

In fact, code that is not well formatted can be a first order indicator of the suitability of the package. Now, I’m not trying to say that poorly formatted code is buggy, but poorly formatted code may be an indication that the authors have not understood the design principles that underscore Go.

So while buggy code can be fixed, design issues or impedance mismatches can be much harder to address, especially after that code is integrated into your program.

Batteries included

As Go programmers we can pick up a piece of Go code written by anyone in the world and start to read it. This goes deeper than just formatting.

Go lacks heavy libraries like Boost. There are no Qt base classes, no GObject. There is no pre-processor to obfuscate. Domain specific languages rarely appear in Go code.

The inclusion of maps and slices in the language sidesteps the most basic interoperability issues when integrating packages from vendors, or other parts of your company. All Go code uses these same basic building blocks; maps, slices and channels, so all Go code is accessible to a reader who is versed in the language, not some quaint organization specific dialect.
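As a small illustration (mine, not from the talk), the same built-in types pass unchanged between any two pieces of Go code, with no library-specific container classes in sight:

```go
package main

import "fmt"

func main() {
	// A map and a slice, the built-in building blocks, need no
	// vendor-specific collection types to cross a package boundary.
	counts := map[string]int{"a": 1, "b": 2}

	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}

	// A channel carries those same values between goroutines.
	ch := make(chan string, len(keys))
	for _, k := range keys {
		ch <- k
	}
	close(ch)

	total := 0
	for k := range ch {
		total += counts[k]
	}
	fmt.Println(total) // 3
}
```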

Interfaces, the UNIX way

In 1964 Doug McIlroy postulated the power of pipes for composing programs. This was five years before the first version of Unix was written, mind you.

McIlroy’s observations became the foundation of the UNIX philosophy; small, sharp tools which can be combined to solve larger tasks, tasks which may not have even been envisioned by the original authors.

In the last few decades, I feel that programmers have lost the ability to compose programs, lost behind waves of run time dependencies, stifling frameworks, and brittle type hierarchies that degrade the ability to move quickly and experiment cheaply.

Go programs embody the spirit of the UNIX philosophy. Go packages interact with one another via interfaces. Programs are composed, just like the UNIX shell, by combining packages together.

I can use fmt.Fprintf to write formatted output to a network connection, or a zip file, or a writer which discards its input. Conversely I can create a gzip reader that consumes data from an HTTP connection, or a string constant, or a multireader composed of several sources.

All of these permutations are possible, in McIlroy’s vision, without any of the components having the slightest bit of knowledge about the other parts of this processing chain.

Small interfaces

Interfaces in Go are therefore a unifying force; they are the means of describing behaviour. Interfaces let programmers describe what their package provides, not how it does it.

Well designed interfaces are more likely to be small interfaces; the prevailing idiom here is that interfaces contain only a single method.

Compare this to other languages like Java or C++. In those languages interfaces are generally larger, in terms of the method count required to satisfy them, and more complex because of their entanglement with the inheritance based nature of those languages.

Interfaces in Go share none of those restrictions and so are simpler, yet at the same time more powerful and more composable. Critically for the narrative of collaboration, interfaces in Go are satisfied implicitly.

Any Go type, written at any time, in any package, by any programmer, can implement an interface by simply providing the methods necessary to satisfy the interface’s contract.

It follows logically that small interfaces lead to simple implementations, because it is hard to do otherwise, leading to packages composed of simple implementations connected by common interfaces.

Errors and interfaces

I’ve written a lot about the subject of Go’s error handling, so I’ll restrict my comments here to errors as they relate to collaboration.

The error interface is the key to Go’s composable error handling story.

If you’ve worked on some large Go projects you may have come across packages like Canonical’s errgo, which provide facilities to apply a stack trace to an error value. Perhaps the project has rolled their own implementation. Maybe you have developed something similar in house.

I want to be clear that I am remaining neutral on the relative goodness or badness of the idea of gift wrapping errors.

What I do want to highlight is that even though one piece of code you integrate uses fmt.Errorf, another uses a third party package, and your own package defines its own error handling type, from the point of view of the programmer consuming this work the error handling strategy always looks the same: if the error is nil, the call worked.

Compare this to the variety of error handling strategies that must be managed in other languages as programs grow through accretion of dependencies.

This is the key to a Go programmer’s ability to write an application at any size without sacrificing reliability. In the context of collaboration, it must be said that Go’s error handling strategy is the only form that makes sense.
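A sketch of this uniformity; pathError and the from* helpers below are hypothetical, but however the error was produced, the consuming code is identical:

```go
package main

import (
	"errors"
	"fmt"
)

// pathError is a hypothetical custom error type; callers treat it
// exactly like errors from fmt.Errorf or errors.New.
type pathError struct{ path string }

func (e *pathError) Error() string { return "open " + e.path + ": not found" }

func fromFmt() error    { return fmt.Errorf("fmt: %s", "failed") }
func fromErrors() error { return errors.New("errors: failed") }
func fromCustom() error { return &pathError{path: "/tmp/x"} }

func main() {
	// Three different producers, one consuming strategy:
	// if err is nil, the call worked.
	for _, f := range []func() error{fromFmt, fromErrors, fromCustom} {
		if err := f(); err != nil {
			fmt.Println(err)
		}
	}
}
```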

Simple build systems

Go’s lack of Makefiles is more than a convenience.

With other programming languages, when you integrate a piece of third party code, maybe something complex like v8, or something more mundane like a database driver from your vendor, you are integrating that code into your program; this part is obvious. But you are also integrating its build system.

This is a far less visible, and sometimes far more intractable, problem. You’ve not just introduced a dependency on that piece of code, but also a dependency on its build system, be it CMake, SCons, GNU autotools, or what have you.

Go simply doesn’t have this problem.

Putting aside the contentious issues of package versioning, once you have the source in your $GOPATH, integrating any piece of third party Go code into your program is just an import statement.

Go programs are built from just their source, which has everything you need to know to compile a Go program. I think this is a hugely important and equally under-appreciated part of Go’s collaboration story.

This is also the key to Go’s efficient compilation. The source indicates only those things that it depends on, and nothing else. Compiling your program will touch only the lines of source necessary.

Sans runtime

Does your heart sink when you want to try the hottest new project from Hacker News or Reddit, only to find it requires node.js, or some assortment of Ruby dependencies that aren’t available on your operating system? Or you have to look up the correct way to install a Python package this week. Is it pip, is it easy_install, does it need an egg, or are they wheels?

I can tell you mine does.

For Go programmers, dependency management remains an open wound; this is a fact, and one that I am not proud of. But for users of programs written in Go, life just got a whole lot easier: compile the program, scp it to the server, job done.

Go’s ability to produce stand-alone applications, and even to cross compile them directly from your workstation, means that programmers are rediscovering the lost art of shipping a program, a real compiled program, the exact same one they tested, to customers.

This one fact alone has allowed Go to establish a commanding position in the container orchestration market, a market which arguably would not exist in its current form if not for Go’s deployment story.

This story also illustrates how Go’s design decisions move beyond just thinking about how the programmer and the language will interact during the development phase, and extend right through the software life-cycle to the deployment phase.

Go’s choice of a single static binary is directly influenced by Google’s experiences deploying their own large complex applications, and I believe their advice should not be dismissed lightly.

Portability

C# isn’t portable, it is joined at the hip to a Windows host.

Swift and Objective-C are in the same boat; they live in the land of Apple-only programming languages. Popular? Yes. Portable? No.

Java, Scala, Groovy, and all the rest of the languages built atop the JVM may benefit from the architecture independence of the JVM bytecode format, until you realize that Oracle is only interested in supporting the JVM on its own hardware.

Java is tone deaf to the requirements of the machine it is executing on. The JVM is too sandboxed, too divorced from the reality of the environment it is working inside.

Ruby and Python are better citizens in this regard, but are hamstrung by their clumsy deployment strategies.

In the new world of metered cloud deployments in which we find ourselves, where you pay by the hour, the difference between a slow interpreted language, and a nimble compiled Go program is stark.

Go’s fresh take on portability, without the requirement to abstract yourself away from the machine your program runs on, is like no other language available today.

A command line renaissance

For the last few decades, since the rise of interpreted languages, or virtual machine run-times, programming has been less about writing small targeted tools, and more about managing the complexity of the environment those tools are deployed into.

Slow languages, or bloated deployments encourage programmers to pile additional functionality into one application to amortize the cost of installation and set up.

I believe that we are in the early stage of a command line renaissance, a renaissance driven by the rediscovery of languages which produce compiled, self contained, programs. Go is leading this charge.

A command line renaissance which enables developers to deliver simple programs that fit together, cross platform, in a way that suits the needs of the nascent cloud automation community, and reintroduces a generation of programmers to the art of writing tools which fit together, as Doug McIlroy described, “like segments in a garden hose”.

A key part of the renaissance is Go’s deployment story. I spoke earlier and many of my fellow speakers have praised Go for its pragmatic deployment story, focusing on server side deployments, but I believe there is more to this.

Over the last year we’ve seen a number of companies shift their client side tools from interpreted languages like Ruby and Python to Go. Cloud Foundry’s tools, GitHub’s hub, and MongoDB’s tool suite are the ones that spring to mind.

In every case their motivations were similar; while the existing tools worked well, the support load from customers who were not able to get the tool installed correctly on their machine was huge.

Go lets you write command line applications that enable developers to leverage the UNIX philosophy: small, sharp tools that work well together.

This is a command line renaissance that has been missing for a generation.

Conclusion

https://twitter.com/supermighty/status/548897982016663552

Go is a simple language; this was not an accident.

This was a deliberate decision, executed brilliantly by experienced designers who struck a chord with pragmatic developers.

Go is a language for programmers who want to get things done

Put simply, Go is a language for programmers who want to get things done.

“I just get things done instead of talking about getting them done.” — Henry Rollins

As Andrew Gerrand noted in the project’s fifth birthday announcement:

“Go arrived as the industry underwent a tectonic shift toward cloud computing, and we were thrilled to see it quickly become an important part of that movement.” — Andrew Gerrand

Go’s success is directly attributable to the factors that motivated its designers. As Rob Pike noted in his 2012 SPLASH paper:

“Go is a language designed by Google to help solve Google’s problems; and Google has big problems” — Rob Pike

And it turns out that Go’s design choices are applicable to the problems that an increasing number of professional programmers face today.

Go is growing

In November last year, Go turned five years old as a public project.

In those five years, less if you consider that the language only reached 1.0 in March of 2012, Go, as a language and a proposition to programmers and development teams, has been wildly successful.

In 2014 there were five international Go conferences. In 2015 there will be seven.

Plus

  • Ever growing engagement on social media: Twitter, Google+, etc.
  • An established Reddit community.
  • Real time discussion communities like the #go-nuts IRC channel, or the Gophers Slack group.
  • Go featured in the mainstream tech press; established companies are shipping Go APIs for their services.
  • Go training available in both professional and academic contexts.
  • Over 100 Go meetups around the world.
  • Sites like Damian Gryski’s Gophervids helping to disseminate the material produced by those meetups and conferences.
  • Community hackathon events like GopherGala and the Go Challenge.

Lastly, look around this room and see your peers, 350 experienced programmers, who have decided to invest in Go.

In closing

This talk describes the language we have today. A language built with care and moderation. The language is what it is because of deliberate decisions that were made at every step.

Language design is about trade-offs, so learn to appreciate the care with which Go’s features were chosen and the skill with which they were combined.

While the language strives for simplicity, and is easy to learn, it does not immediately follow that the language is trivial to master.

There is a lot to love in our language, don’t be in such a hurry to dismiss it before you have explored it fully.

Learn to love the language. Really learn the language. It’ll take longer than you would think.

Learn to appreciate the choices of the designers.

Because, and I truly mean this, Go will make you a better programmer.

Cross compilation just got a whole lot better in Go 1.5

Introduction

Cross compilation is one of Go’s headline features. I’ve written about it a few times, and others have taken this work and built better tooling around it.

This morning Russ Cox committed this change which resolved the last issue in making cross compilation simpler and more accessible for all Gophers. When Go 1.5 ships in August any Go programmer will be able to cross compile their program without having to go through a fussy set up phase.

Background

In the current version of Go, (if you’re from the future, read Go 1.4 and earlier) before you could cross compile a Go program, you needed to go through a set up phase to enhance your Go installation with the bits necessary to build for other platforms.

This worked ok if you had built Go from source, but if you were using one of the binary distributions or something from your operating system (brew, apt, etc), then the process would turn into a maze of twisty passages, all alike.

This was necessary because for successful cross compilation you would need

  • compilers for the target platform, if they differed from your host platform, i.e. you’re on darwin/amd64 (6g) and you want to compile for linux/arm (5g).
  • a standard library for the target platform, which included some files generated at the point your Go distribution was built.

With the plan to translate the Go compiler into Go coming to fruition in the 1.5 release, the first issue is now resolved. That just left the small amount of customisation to the runtime done at installation time, which has been whittled away as part of the compiler transition (most of the customisation was the generation of include files for the C parts of the runtime), and as of this morning, removed.

Try it out

If you can’t wait until August, you can try this out today by building the development version of Go.

Disclaimer: the development branch is changing rapidly due to the re-factoring of the compilers. If you’re using Go in production, the release version, Go 1.4.x, is always recommended.

Prerequisites

As mentioned above, to build the development version of Go you also need Go 1.4 installed to bootstrap it. Once Go 1.5 comes out this step will be unnecessary.

The following steps are a simple procedure for doing this which I would recommend:

  1. Uninstall any version of Go you have on your system, including any $PATH variables.
  2. Check out Go 1.4 and build it
    % git clone https://go.googlesource.com/go $HOME/go1.4
    % cd $HOME/go1.4/src
    % git checkout release-branch.go1.4
    % ./make.bash
  3. Check out the development branch and build it
    % git clone https://go.googlesource.com/go $HOME/go
    % cd $HOME/go/src
    % env GOROOT_BOOTSTRAP=$HOME/go1.4 ./all.bash
  4. Add $HOME/go/bin to your $PATH

Build something

Notice: if you are reading this from a future where Go 1.5 has been released, you can skip the previous step.

Now that you have the development version of Go 1.5 installed, cross compiling this simple program is trivial

package main

import "fmt"
import "runtime"

func main() {
        fmt.Printf("Hello %s/%s\n", runtime.GOOS, runtime.GOARCH)
}

Now build for darwin/386

% env GOOS=darwin GOARCH=386 go build hello.go
# scp to darwin host
$ ./hello
Hello darwin/386

Or build for linux/arm

% env GOOS=linux GOARCH=arm GOARM=7 go build hello.go
# scp to linux host
$ ./hello
Hello linux/arm

That’s it!

Cross compilation can’t get any simpler than that.

Technical mumbo-jumbo

So what is happening under the hood when we compile a program for a different platform? The -v flag gives us a clue.

% go build -v hello.go
command-line-arguments

% env GOOS=linux GOARCH=arm go build -v hello.go
runtime
errors
sync/atomic
math
unicode/utf8
sync
io
syscall
time
strconv
reflect
os
fmt
command-line-arguments

Comparing the two examples above, the first build is using the standard library that was built as part of the Go installation. That is to say, all the packages that fmt and runtime depend on are already built into $GOROOT/pkg/linux_amd64.

In the second example, the go tool detects that all the dependencies of this program need to be built for the target platform before hello.go can be compiled, and builds them, all the way back to the runtime package.

I have not included the output here because it is very verbose, but you can look at all the steps that are being performed if you pass the -x flag to go build.

Out of scope

Cross compilation while linking to libraries via cgo is the holy grail for some. Sadly the changes mentioned above do not change the situation with respect to cgo. In fact you may still need to rebuild your Go installation in the traditional way to pass environment variables like CC_FOR_TARGET. Please try it out.

Conclusion

In case you can’t tell, I’m over the moon about this improvement. Cross compilation is Go’s ace in the hole and Go 1.5 will make it even better.

Please try it out and if you find issues please let us know on the GitHub issue tracker.

Lost in translation

Over the last year I have had the privilege of travelling to meet Go communities in Japan, Korea and India. In every instance I have met experienced, passionate, pragmatic programmers ready to accept Go for what it can do for them.

At the same time the message from each of these communities was the same; where is the documentation, where are the examples, where are the tutorials? Travelling to these communities has been a humbling experience and has made me realise my privileged position as a native English speaker.

The documentation, and the tutorials, and the examples will come, slowly; this is open source after all. But what I can offer is the fact that all the content on this blog is licensed under a Creative Commons licence.

In short, I don’t have the skills, but if you do, you are welcome to translate any content on this site, and I’ll help you in any way I can.

 

Practical public speaking for Nerds

A friend recently asked me for some advice in preparing a talk for an upcoming conference. I ended up writing way more than they asked for.

If you are a Nerd like me, I hope you find some of this advice helpful.


Preparing the talk

Read before you write

Of course you should do your research before talking about something, but also (re)read writing that you enjoyed as inspiration, analyse its style, analyse what qualities you enjoyed unrelated to its topic.

Re-watch presentations that you enjoyed; you are going to be giving a presentation, after all. Analyse the style of the presentation, the manner of the speaker, the way they presented their argument, and the way they present themselves on stage.

In this respect, imitation is the most sincere form of flattery.

Avoid the Death Sentence

Don’t write in the passive voice, ever. If you don’t know what I mean, read Death Sentence by Don Watson.

The TL;DR of Watson's thesis is that the passive voice makes you sound like a CEO or a politician. Always use the first and second person pronouns, I and you, to assign ownership of an idea or responsibility to someone; don't leave it hanging out there with phrases like “we should”.

Start at the end

In order of importance, you need:

  1. a conclusion
  2. an introduction
  3. everything else

The conclusion is your position, your idea, your argument, your call to action, summarised in one slide.

Don’t start writing until you know how your talk ends.

The introduction should set the stage for the conclusion.

The rest of the talking points should flow logically from the proposition established in the introduction. They should be relevant and supportive of the conclusion. If a point does not relate to the introduction, or support the conclusion then either rewrite it, drop it, or in extreme cases reconsider your conclusion.

Length

Every presentation will have a minimum and maximum time limit; if you don’t know it, ask the organiser, don’t guess. All things being equal it is preferable to finish sooner than to run overtime. Here are some of the techniques I use for planning my talk.

The two key elements are word count and the number of slides.

Word count

I prefer to write my talks in full as a paper; you may choose to speak to bullet points, it's very much a personal choice. Either way you have a set number of words to work with for your presentation.

The average speaking pace for native English speakers is around 120 to 130 words per minute. You can use this to calculate the number of words a presentation will require.

Professional speakers will talk slower, around 100 words per minute. Don't assume that you will be able to do this unless you have had a lot of practice at public speaking. Plan for a higher word rate and write accordingly. It will be easier to speed up during your talk if you are short on time than to make yourself slow down if you are panicking.

A good rule of thumb is 2,000 to 2,500 words for a 20 minute presentation, 4,000 to 5,000 for a 45 minute slot.
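These rules of thumb are simple multiplication; here is a sketch (wordsFor is a made up helper for illustration, not part of any talk-writing tool):

```go
package main

import "fmt"

// wordsFor estimates the word budget for a talk, given the length of
// the slot in minutes and an assumed speaking pace in words per minute.
func wordsFor(minutes, wordsPerMinute int) int {
	return minutes * wordsPerMinute
}

func main() {
	// A 20 minute slot at a realistic 125 words per minute.
	fmt.Println(wordsFor(20, 125)) // 2500
	// A 45 minute slot, allowing a slightly slower pace.
	fmt.Println(wordsFor(45, 100)) // 4500
}
```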

Aim to finish ahead of time so people can prove they were listening by asking questions.

Slides

I budget on one supporting slide per minute. So 18 to 20 slides for a 20 minute presentation, 35 to 40 for a 45 minute presentation.

I’ve found this to be a pretty reliable rule of thumb regardless of the style of presentation or how I have prepared for it.

You may want to structure your slides with one bullet point per slide rather than one topic, but don’t think “this will take less time”. It might be true, but it risks any small delay in covering a slide multiplying through your deck and blowing your time limit.

If you are running short on time it is easier to summarize slides with multiple points on them to catch up without looking like you're rushing.

Practice

Read through your talk multiple times to check your material and time yourself – this is why I prefer to write my talks in full, it helps avoid the temptation to skip the rehearsal of parts that I think I know well.

On the day

Plan for the worst

With all respect to conference organisers, the presentation stage is the most hostile environment you will encounter. Every effort is made by the organisers to mitigate this, but the bottom line is, if things go to shit, you still have to do your talk, so plan for things to fail.

If your talk is formatted for wide screen, assume the projector is from the stone age and you’ll have to use someone else’s 4:3 laptop. If there is an opportunity to practice in the space, take it.

Assume the internet won’t work. Yup, it’s 2015 and internet is everywhere, except on stage. This is super important if you use tools like Google present or anything that uses web fonts.

Also assume that multi-monitor setups just won't work, so all that clever shit that Keynote does is useless; have a fallback.

Larger conferences will either request, or demand that you use their laptops, even to the point of asking for presentation material well in advance. The lowest common denominator here is PDF, so whatever tool you use, make sure it can emit a PDF.

Because of these difficulties, if you plan to use speaking notes, you should have a way to access those independent of your presentation software. I’ve found copying the text into a Google doc works well and lets me edit my speech after the presentation materials have been handed over to the organisers.

Beware that Google Docs is really pedantic about being online, so don't assume that just because the document was open on your laptop before lunch, it'll work on stage. If in doubt export your notes to a PDF.

Engaging the audience

Establishing contact with the audience is one of the hardest things for me. There are many aspects to this:

  • at larger conferences, you may be on a professional stage, so the stage lights make it hard to see anything except a sea of little apple logos in the audience.
  • different cultures show respect for the speaker in different ways. Some show they are interested by making positive grunts and noises, others sit in respectful silence.
  • humour is hard, unless you are a professional comedian. It’s really hard to pull off, so try not to make it a requirement of your talk, or necessary to support your conclusion.
  • most tech audiences are rude. People will check their email and tweet during your presentation; they're probably not uninterested, just insensitive; try to ignore them.
  • in other cultures it is appropriate to close your eyes during a presentation, or even snooze; don't take it personally.

Speak slowly

Duh, who doesn’t say that ? The fact is you will be nervous, or if not nervous, excited, so you will talk faster than you plan to.

The key is to take time for yourself.

Pause between bullet points

If you’re reading from a script, then start a new paragraph between points and remind yourself to take a deep breath.

Pause at the top of each new slide

It gives the audience time to read the material on the slide before you start to speak. This is important because while you know it backwards and forwards, this is probably the first time the audience is getting a chance to see your idea.

If you feel uncomfortable then fill that time by taking a drink of water or walking to a different part of the stage. The latter is my favourite because it gives you an excuse to take another pause to walk back.

Dry throat

Your throat will get dry during your talk, this is part of our fight or flight response to stressful situations; it’s the adrenaline. If it happens, don’t let it throw you, focus on pausing between points. Take a drink of water to insert a pause into your presentation, but don’t panic if taking a drink doesn’t fix the problem, that’ll just make it worse.

Don’t beat yourself up

Lastly, don’t beat yourself up afterwards.

Public speaking is a skill that needs practice, it’s not something that any of us are born with. This is why they make us practice public speaking in high school. But it’s probably been a long time since you and I were in high school, and we probably didn’t realise the importance of what we were being taught at the time.

So, don’t expect to be awesome every time, and don’t put yourself in a position where your talk must be awesome. This isn’t an interview, it’s not a binary thing, even if you were nervous, or talked too fast, or realised that you crapped up one point in your argument, it’s still ok, the audience will still get a lot from it.

Thanks Brainman

This is a short post to recognise the incredible contribution Alex Brainman has made to the Go project.

Alex was responsible for the port of Go to Windows way back before Go 1 was even released. Since that time he has virtually single-handedly supported Go and Go users on Windows. It’s no wonder that he is the 10th most active contributor to the project.

The Windows build is consistently the most popular download from the official Go site.

While I may not use Windows, and you may not use Windows, spare a thought for the large body of developers who do use Windows to develop Go programs and are able to do so because of Alex’s efforts.

Even if your entire business doesn’t use Windows, consider the moment when your product manager comes to you and asks “so, we’ve got a request from a big customer to port our product to Windows, that’s not going to be hard, right ?”. Your answer is directly attributable to Alex’s contributions.

Alex, every Go programmer owes you a huge debt of gratitude. So let me be the first to say it, thank you for everything you have done for Go. None of us would be as successful as we are today without your work.

Errors and Exceptions, redux

In my previous post, I doubled down on my claim that Go’s error handling strategy is, on balance, the best.

In this post, I wanted to take this a bit further, and prove that multiple returns and error values are the best.

When I say best, I obviously mean, of the set of choices available to programmers that write real world programs — because real world programs have to handle things going wrong.

The language we have

I am only going to use the Go language that we have today, not any version of the language which might be available in the future — it simply isn’t practical to hold my breath for that long. As I will show, additions to the language, like, dare I say, exceptions, would not change the outcome.

A simple problem

For this discussion, I’m going to start with a made up, but very simple function, which demonstrates the requirement for error handling.

package main

import "fmt"

// Positive returns true if the number is positive, false if it is negative.
func Positive(n int) bool {
        return n > -1
}

func Check(n int) {
        if Positive(n) {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

func main() {
	Check(1)
	Check(0)
	Check(-1)
}

If you run this code, you get the following output

1 is positive
0 is positive
-1 is negative

which is wrong.

How can this single line function be wrong ? It is wrong because zero is neither positive nor negative, and that cannot be accurately captured by the boolean return value from Positive.

This is a contrived example, but hopefully one that can be adapted to discuss the costs and benefits of the various methods of error handling.

Preconditions

No matter what solution is determined to be the best, a check will have to be added to Positive to test the non zero precondition. Here is an example with the precondition added

// Positive returns true if the number is positive, false if it is negative.
// The second return value indicates if the result is valid, which in the case
// of n == 0, is not valid.
func Positive(n int) (bool, bool) {
        if n == 0 {
                return false, false
        }
        return n > -1, true
}

func Check(n int) {
        pos, ok := Positive(n)
        if !ok {
                fmt.Println(n, "is neither")
                return
        }
        if pos {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

Running this program we see that the bug is fixed,

1 is positive
0 is neither
-1 is negative

albeit in an ungainly way. For those interested, I also tried a version using a switch which was harder to read for the saving of one line of code.

This then is the baseline to compare other solutions.

Error

Returning a boolean is uncommon; it's far more common to return an error value, even if the set of errors is fixed. For completeness, and because this simple example is supposed to hold up in more complex circumstances, here is an example using a value that conforms to the error interface.

// Positive returns true if the number is positive, false if it is negative.
func Positive(n int) (bool, error) {
        if n == 0 {
                return false, errors.New("undefined")
        }
        return n > -1, nil
}

func Check(n int) {
        pos, err := Positive(n)
        if err != nil {
                fmt.Println(n, err)
                return
        }
        if pos {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

The result is a function which performs the same, and the caller must check the result in a near identical way.

If anything, this underlines the flexibility of Go's errors are values methodology. When an error need only indicate success or failure (think of the two return value form of a map lookup), a boolean can be substituted for an interface value, which removes any confusion arising from typed nils and the nilness of interface values.
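To make the typed nil confusion concrete, here is a short sketch (my example, not from the original discussion) showing how a nil concrete error pointer stored in an interface value no longer compares equal to nil:

```go
package main

import "fmt"

type MyError struct{ msg string }

func (e *MyError) Error() string { return e.msg }

// badPositive returns a concrete *MyError, which is nil on success.
// Returning the concrete type is the mistake this sketch demonstrates.
func badPositive(n int) (bool, *MyError) {
	if n == 0 {
		return false, &MyError{"undefined"}
	}
	return n > -1, nil
}

func main() {
	_, e := badPositive(1)
	// Assigning the nil *MyError to an error interface produces an
	// interface value with a non-nil type and a nil pointer inside.
	var err error = e
	fmt.Println(e == nil)   // true
	fmt.Println(err == nil) // false
}
```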

More boolean

Here is an example which allows Positive to return three states: true, false, and nil. (Anyone with a background in set theory or SQL will be twitching at this point.)

// Positive returns nil if the number is zero. If the result is not
// nil, it is true if the number is positive, false if it is negative.
func Positive(n int) *bool {
        if n == 0 {
                return nil
        }
        r := n > -1
        return &r
}

func Check(n int) {
        pos := Positive(n)
        if pos == nil {
                fmt.Println(n, "is neither")
                return
        }
        if *pos {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

Positive has grown another line, because of the requirement to capture the address of the result of the comparison.

Worse, now before the return value can be used anywhere, it must be checked to make sure that it points to a valid address. This is the situation Java developers face constantly, and it leads to a deep seated hatred of null (with good reason). This clearly isn't a viable solution.

Let’s try panicking

For completeness, let’s look at a version of this code that tries to simulate exceptions using panic.

// Positive returns true if the number is positive, false if it is negative.
// In the case that n is 0, Positive will panic.
func Positive(n int) bool {
        if n == 0 {
                panic("undefined")
        }
        return n > -1
}

func Check(n int) {
        defer func() {
                if recover() != nil {
                        fmt.Println(n, "is neither")
                }
        }()
        if Positive(n) {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

… this is just getting worse.

Not exceptional

For the truly exceptional cases, the ones that represent either an unrecoverable programming mistake, like an index out of bounds, or an unrecoverable environmental problem, like running out of stack, we have panic.

For all of the remaining cases, any error condition that you will encounter in a Go program is, by definition, not exceptional: you expect it, because regardless of whether you return a boolean, return an error, or panic, it is the result of a test in your code.

Forgetting to check

I consider the argument that developers forget to check error codes is cancelled out by the counter argument that developers forget to handle exceptions. Either may be true, depending on the language you are basing your argument on, but neither commands a winning position.

With that said, you only need to check the error value if you care about the result.

Knowing the difference between which errors to ignore and which to check is why we’re paid as professionals.

Conclusion

I have shown in this article that multiple returns and error values are the simplest and most reliable form of error handling; easier to use than any other, including ones that do not even exist in Go as it stands today.

A challenge

So this is the best demonstration I can come up with, but I expect others can do better, particularly where the monadic style is used. I look forward to your feedback.

Building an atmega1284p prototype

This project was featured on Hackaday and the Atmel blog.

For the next step in my Apple 1 replica project I decided I wanted to replace the Arduino Mega board with a bare Atmega MPU with the goal of producing a two chip solution — just the Atmel and the 6502, no glue logic or external support chips.

I had been stockpiling parts for this phase of the project for a while now, so I sat down to lay out the board based on a small 5×7 cm perfboard.

Perfboard sketch

Perfboard sketch

The trickiest piece was fitting the crystal and load capacitors into the design without disrupting too many of the other traces. It worked out well, so I decided to add ICSP and FTDI headers and tried my hand at laying out the board using Fritzing.

Fritzing layout

Fritzing layout

The picture above is one of several designs I tried in Fritzing. I designed another that uses a flood fill ground plane to eliminate all the vias. We’ll see how that one turns out in a few weeks.

Rant: Cadsoft Eagle might be the industry standard, at least amongst open source hardware hackers, but it truly embodies the “worse is better” philosophy. Maybe one day my Altium Circuitmaker invitation will arrive (hint hint).

The finished product

The finished product

While I'm waiting for my PCBs to be delivered I decided to build a simplified version. The FTDI and ICSP headers have been left off as they are readily accessible from the headers on the left hand side.

Demo time


It worked, first time.

Bootloaders

The atmega1284p's I ordered were unprogrammed. Getting Optiboot installed on them is handled nicely by maniacbug's Mighty1284 Arduino support package. There are only two small issues of note.

  1. Due to cross talk between the XTAL1 and RX0 pins, serial communication may be unreliable. The solution is to configure the clock source to use Full Swing mode (rather than the default low power mode). This is done by setting the relevant fuse settings in boards.txt like so
    mighty_opt.bootloader.low_fuses=0xf7
  2. Mighty1284 only supports Arduino 1.0.x, not the newer 1.5.x betas. This might be an issue if you are a fan of the improvements in Arduino 1.5.x, as it doesn't look like Mighty1284 is being updated.

Next steps

I’m smitten with the 1284p. It feels like the right compromise between the pin starved 328 and the unfriendly 2540 series. The 1284p supports more SRAM than either of its counterparts and ships in a package large enough that you get a full 24 pins of IO.

This experiment gave me the skills and the confidence to continue to design my replica project around the 1284p. I had originally intended to build the replica in two boards, possibly adding a third with some SRAM. Routing the upper 6502 board will be harder than the lower 1284p board, so I may have to wait until my Fritzing samples return to judge the feasibility of that approach.

Resources

Make your own Apple 1 replica

Woot! This project was featured on Hackaday.

mega6502

mega6502, a big mess of wires

No Apple 1 under the tree on Christmas Day ? Never mind, with a 6502 and an Arduino Mega 2560 you can make your own.

The Apple 1 was essentially a 6502 computer with 4k of RAM and 256 bytes of ROM. The inclusion of a 6821 PIA and a Signetics video encoder meant that the Apple 1 shipped with its own 2400 baud dumb terminal built in. Just supply your own keyboard, composite monitor, and you were in business.

The good news is we can emulate the RAM, ROM, PIA, and all the glue logic with an Arduino.

The hardware

To validate the idea that an Arduino could provide a stable clock for the 6502, I started by breadboarding the project.

6502 strapped for $EA

6502 strapped for $EA

The result was a success: with a tight assembly loop I was able to generate a 1MHz clock with a roughly 50% duty cycle. So it was on to a prototype.

Prototype 6502 "sidecar"

Prototype 6502 “sidecar”

The protoshield has 0.1 inch connectors for the 40 pins on the 6502 and the 40-something pins on the Arduino Mega's expansion header, allowing me to jumper between the 6502 and the Arduino. The strange jumper block presents $EA on the data bus unconditionally; this is called free running mode.

Mega6502 prototype

Mega6502 prototype

Because I wanted to use an LCD panel for debugging and the patch wires on the protoshield would not fit under the LCD shield I mounted the shield backwards and upside down, which retained the same pin outs (including 5v on the top). I called this prototype design the “sidecar”.

Sidecar wiring in detail

The schematic for wiring the sidecar to the Arduino is detailed in the README file.

The software

At the moment the software is a simple Arduino IDE sketch; you can find it on GitHub.

Clock

The Arduino provides the ϕ0 clock signal as part of the main loop() function. The 6502 interacts with the outside world on the falling edge of this clock (actually a few ns after ϕ0, the falling edge of ϕ2). It produces the address and read/write signals on the rising edge of ϕ0.

Different 6502 models have different requirements for the minimum and maximum length of each phase of ϕ0. The original NMOS 6502 required a clock of at least 100kHz to avoid losing internal CPU state, which made single stepping more complicated. With the Rockwell 65c02 I am using, the ϕ0 low phase must not exceed 5μs, but the clock signal can remain high indefinitely (the fully static WDC 6502 removes any restriction on a minimum clock).

We can use this property to generate a stable ϕ0 low of around 500ns (the minimum instruction time on a 16MHz Atmega is 62.5ns), then raise ϕ0 and do our processing, even take an interrupt. Because I have the 4MHz 65c02 version, we can make the ϕ0 low period even shorter, allowing our high pulse to take longer in an effort to reach the 1MHz clock target.

Laughton Electronics has published a fantastic page if you want to learn more about the 6502 timings.

Ram

The Apple 1 divided the 6502’s address space into 16 4k banks which could be assigned to RAM, ROM, or I/O.

The 2560 includes 8kb of SRAM, so we dedicate 4k to be the bottom bank of ram starting at $0000, which is more than enough for a usable replica. For Apple 1’s with 8kb of ram, the second bank of ram was usually strapped to $E000 and used for BASIC. The nice property of this is we can replace the $E000 bank with a ROM image mapped to that location (BASIC did not expect to be able to write to memory at $E000) and achieve the same effect without providing another 4k of RAM.
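The memory map described above can be modelled in a few lines. This is an illustrative Go model of the bank decode, not the actual Arduino sketch; the function names are my own:

```go
package main

import "fmt"

// bank returns which 4k bank of the 6502's 64k address space an
// address falls in, 0 through 15.
func bank(addr uint16) int {
	return int(addr >> 12)
}

// region names the role each bank plays in this simplified memory map.
func region(addr uint16) string {
	switch bank(addr) {
	case 0x0:
		return "RAM" // $0000-$0FFF, 4k of SRAM
	case 0xD:
		return "PIA" // $D000-$DFFF, keyboard and display registers
	case 0xE:
		return "BASIC" // $E000-$EFFF, ROM image standing in for RAM
	case 0xF:
		return "ROM" // $F000-$FFFF, Woz monitor
	default:
		return "unmapped"
	}
}

func main() {
	fmt.Println(region(0x0200)) // RAM
	fmt.Println(region(0xFF00)) // ROM
}
```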

ROM

The original 256 byte Woz monitor ROM is provided at $FF00. For simplicity's sake the ROM is mirrored at every page in the $F000 address space.

I have tested a few of the popular ROM images like A1Assembler and Applesoft-lite, but only include the Woz monitor ROM in the source distribution. Enterprising readers should have little difficulty modifying the source code to include additional ROM images.

Input and output

The Apple 1 interfaces to the keyboard and screen via four registers from the 6821 PIA chip mapped into the address space at $D000.

When a key is pressed on the keyboard, the high bit of $D011 is latched high; this can be detected by the 6502 ROM monitor, which then reads $D010 to fetch the keycode, conveniently encoded as 7-bit ASCII.

Output is similar: the 6502 polls $D013 until the PIA reports that the video encoder is not busy, then the character to write to the screen is placed in $D012.

It is straightforward to map reads and writes at these addresses to the Arduino serial port. Again for simplicity, the PIA is mirrored to every page in $D000.
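As an illustration of that mapping, here is a simplified Go model of the four PIA registers; the type and its helpers are my own sketch, not code from the actual Arduino sketch:

```go
package main

import "fmt"

// pia models the four 6821 registers the Apple 1 used, with the
// serial port standing in for keyboard and display.
type pia struct {
	key byte // last key received, 7-bit ASCII
	rdy bool // true while a key is waiting to be read
}

// read handles 6502 reads in the $D000 page, decoding only the low
// bits so the registers mirror across the page.
func (p *pia) read(addr uint16) byte {
	switch addr & 0x13 {
	case 0x11: // $D011: high bit set while a key is waiting
		if p.rdy {
			return 0x80
		}
		return 0
	case 0x10: // $D010: reading the keycode clears the latch
		p.rdy = false
		return p.key | 0x80
	case 0x13: // $D013: display status, always ready in this model
		return 0
	}
	return 0
}

// write handles 6502 writes; $D012 sends a character to the terminal.
func (p *pia) write(addr uint16, b byte) {
	if addr&0x13 == 0x12 {
		fmt.Printf("%c", b&0x7f)
	}
}

func main() {
	p := &pia{key: 'A', rdy: true}
	fmt.Println(p.read(0xD011)) // 128: a key is waiting
	p.write(0xD012, 'H')        // prints H
}
```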

The speed

Like my previous projects, performance is always a problem. Assuming a 50% duty cycle for the ϕ0 clock, a 16MHz Atmel has 8 cycles to decode the address and read/write the data. This is basically impossible. However, as I am using a Rockwell 65c02 CPU, which is CMOS, and a higher speed grade than the original NMOS based 6502, we can cheat and shorten the ϕ2 low, trading that time for a longer ϕ2 high pulse.

Just shy of 300kHz

Just shy of 300kHz

Using my trusty BitScope Micro, I can probe the ϕ2 clock. You can see the asymmetry between the high and low phases. The high phase is currently 2.8μs, or around 45 cycles for the Arduino. This equates to a clock speed of just under 300kHz, which is very usable.

Demo time

Here is a short video showing the mega6502 running a short BASIC program in debug mode.

Here is a screen capture showing David Schmenk’s 30th birthday demo for the Apple 1.

Next steps

  • More tweaking of the decode logic to try to reduce the ϕ2 high period.
  • Implement a faux cassette interface possibly using the SD card for cassette storage
  • A new design using an Atmega1284P — a minimalistic two chip SBC 6502 solution, assuming I can find a bootloader that works.

Resources

If you liked this project, check out some other fantastic 6502 projects.

  • Project:65
  • Quinn Dunki’s fantastic Veronica. We are not worthy!
  • PDA6502, Paul has designed his own 6502 solution from scratch.

Inspecting errors

The common contract for functions which return a value of the interface type error is that the caller should not presume anything about the state of the other values returned from that call without first checking the error.

In the majority of cases, error values returned from functions should be opaque to the caller. That is to say, a test that error is nil indicates if the call succeeded or failed, and that’s all there is to it.

A small number of cases, generally revolving around interactions with the world outside your process, like network activity, require that the caller investigate the nature of the error to decide if it is reasonable to retry the operation.

A common request for package authors is to return errors of a known public type, so the caller can type assert and inspect them. I believe this practice leads to a number of undesirable outcomes:

  • Public error types increase the surface area of the package’s API.
  • New implementations must only return types specified in the interface’s declaration, even if they are a poor fit.
  • The error type cannot be changed or deprecated after introduction without breaking compatibility, making for a brittle API.

Callers should feel no more comfortable asserting an error is a particular type than they would be asserting the string returned from Error() matches a particular pattern.

Instead I present a suggestion that permits package authors and consumers to communicate about their intention, without having to overly couple their implementation to the caller.

Assert errors for behaviour, not type

Don’t assert an error value is a specific type, but rather assert that the value implements a particular behaviour.

This suggestion fits the has a nature of Go’s implicit interfaces, rather than the is a [subtype of] nature of inheritance based languages. Consider this example:

func isTimeout(err error) bool {
        type timeout interface {
                Timeout() bool
        }
        te, ok := err.(timeout)
        return ok && te.Timeout()
}

The caller can use isTimeout() to determine if the error is timeout related, via its implementation of the timeout interface, all without knowing anything about the type, or the original source, of the error value.

This method also enables gift wrapping errors, usually by libraries that annotate the error path, provided that the wrapping error types also implement the interfaces of the error they wrap.

This may seem like an insoluble problem, but in practice there are relatively few interface methods that are in common use, so Timeout() bool and Temporary() bool would cover a large set of the use cases.

In conclusion

Don’t assert errors for type, assert for behaviour.

For package authors, if your package generates errors of a temporary nature, ensure you return error types that implement the respective interface methods. If you wrap error values on the way out, ensure that your wrappers respect the interface(s) that the underlying error value implemented.

For package users, if you need to inspect an error, use interfaces to assert the behaviour you expect, not the error’s type. Don’t ask package authors for public error types; ask that they make their types conform to common interfaces by supplying Timeout() or Temporary() methods as appropriate.