Monthly Archives: June 2016

Transistor logic fundamentals

Long time readers of this blog will know that when I’m not shilling for the Go language, my hobbies include electronics and retro computing. For me, a project like James Newman’s Megaprocessor, a computer built entirely from discrete components, is about as good as it gets.

James has recently finished construction of the Megaprocessor and has started to document it on YouTube; you should totally check it out. But this post isn’t about the Megaprocessor.

When I subscribed to the Megaprocessor channel on YouTube I discovered James has produced another series of videos focused on the fundamentals of implementing digital logic with transistors. In the three videos embedded below, James lays out the foundations of digital logic.

The first video describes (in James’ wonderfully understated manner) the operation of the simplest digital logic circuit: a voltage controlled inverter built with a single transistor [1].

In the second video, James adds a second transistor in series with the first and demonstrates the implementation of the NAND (Not AND) function [2].

In the third video, by reorganising the transistors in parallel, James shows that the circuit now implements the logical NOR (Not OR) function.

… and that’s it. There are more videos in James’ Stepping Stones video series, but with these three operations, NOT (inversion), NAND, and NOR, digital logic of any size and complexity can be constructed, as the Megaprocessor shows [3].
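If you want to convince yourself of that claim, here is a small Go sketch (my illustration, not part of James’ series) that treats NAND as the primitive and derives NOT, AND, OR, and NOR by composition alone:

package main

import "fmt"

// nand is the primitive gate: the output is low only when both inputs are high.
func nand(a, b bool) bool { return !(a && b) }

// Every other gate can be composed from nand alone.
func not(a bool) bool    { return nand(a, a) }
func and(a, b bool) bool { return not(nand(a, b)) }
func or(a, b bool) bool  { return nand(not(a), not(b)) }
func nor(a, b bool) bool { return not(or(a, b)) }

func main() {
        // Print the truth table for the two derived functions from the videos.
        for _, in := range [][2]bool{{false, false}, {false, true}, {true, false}, {true, true}} {
                a, b := in[0], in[1]
                fmt.Printf("a=%5t b=%5t NAND=%5t NOR=%5t\n", a, b, nand(a, b), nor(a, b))
        }
}

Real hardware is composed the same way; only the primitive differs.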

Why is this important?

The circuits described in this set of videos feature far fewer transistors than you would find in a real processor, but they are not simplified. Circuits just like these were used in the mainframe computers of the 1960’s and formed the basis for the integrated microprocessors of the 1970’s.

In these three videos James describes the entire foundation for contemporary computation. No matter how many layers of operating systems, networks, and source code abstraction you build on top, the fundamentals of computation and digital logic remain as simple as these three videos.

Notes and further reading

If you’re interested in learning more, here are a few suggestions for your own research.

  1. If you have no background in electronics, a simple analogy for the relationships between voltage, current, and resistance is water flowing through a pipe. In this analogy, voltage represents water pressure, pushing water through the pipe. Current is the flow of water itself; how much can flow is a property of both the diameter of the pipe and any constriction along its length. Resistance, the final property, is that constriction or obstruction of the pipe. The higher the resistance, the more the pipe is constricted, reducing the amount of water (current) flowing through it. This relationship is captured by Ohm’s law: voltage equals current times resistance, or V = IR.
  2. James’ tutorials use discrete TTL logic. TTL stands for Transistor-Transistor Logic, introduced in the early 1960’s by Sylvania (yes, the lightbulb makers). Before TTL there were at least two other forms of digital logic; what were they, and why did they succumb to TTL?
  3. James’ tutorials, and the Megaprocessor itself, use NPN transistors. If the Megaprocessor was shrunk down to a single integrated circuit it would most likely be implemented using NMOS logic. NMOS was very popular in the 70’s and early 80’s but has since given way to CMOS logic. What are the differences between NMOS and CMOS and why would James have chosen NMOS to implement the Megaprocessor?

Automatically fetch your project’s dependencies with gb

gb has been in development for just over a year now. Since the announcement in May 2015 the project has received over 1,600 stars, produced 16 releases, and attracted 41 contributors.

Thanks to a committed band of early adopters, gb has grown to be a usable day to day replacement for the go tool. But, there is one area where gb has not lived up to my hopes, and that is dependency management.

gb’s $PROJECT/vendor/ directory was the inspiration for the go tool’s vendor/ directory (although their implementations differ greatly) and has delivered on its goal of reproducible builds for Go projects. However, the success of gb’s project-based model, and of vendoring code in general, has exposed a few problems. Specifically, wholesale copying (or forking, if you prefer) of one code base into another continues to sidestep the adoption of a proper release and versioning culture amongst Go developers.

To be fair, for Go developers using the tools they have access to today–including gb–there is no incentive to release their code. As a Go package author, you get no points for doing proper versioned releases if your build tool just pulls from HEAD anyway. There is similarly limited value in adopting a version numbering policy like SemVer if your tools only memorise the git revision at which you last copied the code.

A second problem, equally poorly served by gb and the vendor/ support in the go tool, is that some developers and projects cannot, usually for legal reasons, or simply do not wish to, copy code wholesale into their project. Suggestions of using git submodules have been soundly dismissed as unworkable.

With the release of gb 0.4.3, there is a new way to manage dependencies with gb. This new method does not replace gb vendor or $PROJECT/vendor as the recommended method for achieving reproducible builds, but it does acknowledge that vendoring is not appropriate for all use cases.

To be clear, this new mode of managing dependencies does not supersede or deprecate the existing mechanisms of cloning source code into $PROJECT/vendor. The automatic download feature is optional and is activated by the project author creating a file in their project’s root called $PROJECT/depfile.
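As a sketch only (the authoritative syntax is described in the gb documentation; the import path and version below are invented for illustration), a depfile is a plain text file listing each dependency and the release to fetch:

github.com/pkg/profile version=1.2.3

When you build, dependencies listed in the depfile are downloaded to a cache in your home directory rather than copied into $PROJECT/vendor/.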

If you have a gb project that is currently vendoring code, or you’re using gb vendor restore to actively avoid cloning code into your project, you can try this feature today, with the following caveats:

  1. Currently only GitHub is supported. This is because the new facility uses the GitHub API to download release tarballs via https. Vanity URLs that redirect to GitHub are also not supported yet, but will be soon.
  2. The repository must have made a release of its code, and that release must be tagged with a tag containing a valid SemVer 2.0.0 version number. The format of the tag is described in this proposal. If a dependency you want to consume in your gb project has not released its code, then please ask its maintainers to do so.

Polishing this feature will occupy the remainder of the 0.4.x development series. After this work is complete, gb vendor will be getting some attention. Ultimately both gb vendor and $PROJECT/depfile do the same thing; one copies the source of your dependencies into your project, the other into your home directory.

Gophers, please tag your releases

What do we want? Version management for Go packages! When do we want it? Yesterday!

What does everyone want? We want our Go build tool of choice to fetch the latest stable version of a package when we start using it in our project. We want our tools to grab security updates and bug fixes automatically, but not to upgrade to a version where the author deleted a method we were using.

But as it stands, today, in 2016, there is no way for a human, or a tool, to look at an arbitrary git (or mercurial, or bzr, etc) repository of Go code and ask questions like:

  • What versions of this project have been released?
  • What is the latest stable release of this software?
  • If I have version 1.2.3, is there a bugfix or security update that I should apply?

The reason for this is Go projects (repositories of Go packages) do not have versions, at least not in the way that our friends in other languages use that word. Go projects do not have versions because there is no formalised release process.

But there’s vendor/, right?

Arguing about tools to manage your vendor/ directory, or which markup format a manifest file should be written in is eating the elephant from the wrong end.

Before you can argue about the format of a file that records the version of a package, you have to have some way of actually knowing what that version is. A version number has to be sortable, so you can ask, “is there a newer version available than the one you have on disk?” Ideally the version number should give you a clue to how large the jump between versions is, perhaps even give a clue to backwards or forwards compatibility between two versions.

SemVer is no one’s favourite, yet it is the one format everyone can agree on.

I recommend that Go projects adopt SemVer 2.0.0. It’s a sound standard, it is well understood by many, not just Go programmers, and semantic versioning will let people write tools to build a dependency management ecosystem on top of a minimal release process.

Following the lead of the big three Go projects, Docker, Kubernetes, and CoreOS (and GitHub’s own releases page), the format of the tag must be:

v<SemVer>

That is, the letter v followed by a string which is SemVer 2.0.0 compliant. Here are some examples:

git tag -a v1.2.3
git tag -a v0.1.0
git tag -a v1.0.0-rc.1

Here are some incorrect examples:

git tag -a 1.2.3        // missing v prefix
git tag -a v1.0         // 1.0 is not SemVer compliant
git tag -a v2.0.0beta3  // also not SemVer compliant

Of course, if you’re using hg, bzr, or another version control system, please adjust as appropriate. This isn’t just for git or GitHub repos.
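To make “sortable” concrete, here is a minimal Go sketch, not part of any existing tool, that accepts only tags in the v<SemVer> form above and orders them numerically (a plain lexical sort would incorrectly place v1.2.10 before v1.2.3):

package main

import (
        "fmt"
        "regexp"
        "sort"
        "strconv"
)

// tagRE matches the v<SemVer> form. Full SemVer 2.0.0 also allows
// build metadata; this sketch covers only the common cases.
var tagRE = regexp.MustCompile(`^v(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$`)

type version struct {
        major, minor, patch int
        pre                 string // pre-release suffix, e.g. "rc.1"
}

func parse(tag string) (version, bool) {
        m := tagRE.FindStringSubmatch(tag)
        if m == nil {
                return version{}, false
        }
        major, _ := strconv.Atoi(m[1]) // Atoi cannot fail: the regexp guarantees digits
        minor, _ := strconv.Atoi(m[2])
        patch, _ := strconv.Atoi(m[3])
        return version{major, minor, patch, m[4]}, true
}

// less orders versions numerically; a pre-release sorts before its release.
func less(a, b version) bool {
        if a.major != b.major {
                return a.major < b.major
        }
        if a.minor != b.minor {
                return a.minor < b.minor
        }
        if a.patch != b.patch {
                return a.patch < b.patch
        }
        return a.pre != "" && b.pre == ""
}

func main() {
        tags := []string{"v1.2.10", "v1.2.3", "1.2.3", "v1.0", "v1.0.0-rc.1", "v1.0.0"}
        var vs []version
        for _, t := range tags {
                v, ok := parse(t)
                if !ok {
                        fmt.Printf("rejecting %q: not in v<SemVer> form\n", t)
                        continue
                }
                vs = append(vs, v)
        }
        sort.Slice(vs, func(i, j int) bool { return less(vs[i], vs[j]) })
        latest := vs[len(vs)-1]
        fmt.Printf("latest release: v%d.%d.%d\n", latest.major, latest.minor, latest.patch)
}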

What do you get for this?

Imagine if godoc.org could show you the documentation for the version of the package you’re using, not just the latest from HEAD.

Now, imagine if godoc.org could not just show you the documentation, but also serve you a tarball or zip file of the source code of that version. Imagine not having to install mercurial just to go get that one dependency that is still on google code (rest in peace), or bitbucket in hg form.

Establishing a single release process for Go projects and adopting semantic versioning will let your favourite Go package management or vendoring tool provide you things like a real upgrade command. Instead of letting you figure out which revision to switch to, SemVer gives tool writers the ability to do things like upgrade a dependency to the latest patch release of version 1.2.

Build it and they will come

Tagging releases is pointless if people don’t write tools to consume the information, just as writing tools that can, at the moment, only record git hashes is pointless.

Here’s the deal: if you release your Go projects with correctly formatted tags, then there is a host of developers working on dependency management tools for Go packages who want to consume this information.

How can I declare which versions of other packages my project depends on?

If you’ve read this far you are probably wondering how tagging releases in your own repository is going to help specify the versions of your Go project’s dependencies.

The Go import statement doesn’t contain this version information; all it has is the import path. But whether you’re in the camp that wants to add version information to the import statement, to a comment inside the source file, or to a separate metadata file, everyone needs version information, and that starts with tagging the releases of your Go projects.

No version information, no tools, and the situation never improves. It’s that simple.

Automatically run your package’s tests with inotifywait

This is a short post to illustrate how I use the inotifywait command as a cheap and cheerful way to run my tests automatically on save.

Note: inotify is only available on Linux, sorry OS X users.

Step 1. Install inotify-tools

On Debian/Ubuntu, inotifywait and friends live in the inotify-tools package.

% sudo apt-get install inotify-tools

If you live in an RPM universe the package name will hopefully be similar.

Step 2. Create a helper function

Remembering the full inotifywait incantation can be taxing, so save yourself some effort and define a function in .bashrc (or your shell of choice’s startup script).

watch() { while inotifywait --exclude .swp -e modify -r .; do "$@"; done; }

If you use /usr/bin/watch frequently, you might want to pick another name for this function.

Step 3. Run a command on save

Using tmux (you do use tmux, right?), split the window and run

% watch go test .

Any time a file in the current working directory is modified, inotifywait will return, the command you supplied will run, and the loop will start over.

watch will trigger on a modification to anything in the current working directory or below it. The command that runs when inotifywait detects a modification can be anything you like. For example you could be working in one package inside your project, and have watch rebuild all the commands any time you save, like this:

% cd $GOPATH/src/github.com/you/yourproject/pkg/pkg/pkg
% watch go install -v github.com/you/yourproject/cmd/...

Stack traces and the errors package

A few months ago I gave a presentation on my philosophy for error handling. In the talk I introduced a small errors package designed to support the ideas presented in the talk.

This post is an update to my previous blog post which reflects the changes in the errors package as I’ve put it into service in my own projects.

Wrapping and stack traces

In my April presentation I gave examples of using the Wrap function to produce an annotated error that could be unwrapped for inspection, yet mirrored the recommendations from Kernighan and Donovan’s book.

package main

import "fmt"
import "github.com/pkg/errors"

func main() {
        err := errors.New("error")
        err = errors.Wrap(err, "open failed")
        err = errors.Wrap(err, "read config failed")

        fmt.Println(err) // read config failed: open failed: error
}

Wrapping an error added context to the underlying error and recorded the file and line at which the error occurred. This file and line information could be retrieved via a helper function, Fprint, to give a trace of the execution path leading away from the error. More on that later.

However, when I came to integrate the errors package into my own projects, I found that using Wrap at each call site in the return path often felt redundant. For example:

func readconfig(file string) error {
        if err := openfile(file); err != nil {
                return errors.Wrap(err, "read config failed")
        }
        // ...
}

If openfile failed it would likely annotate the error it returned with open failed, and that error would also include the file and line of the openfile function. Similarly, readconfig’s wrapped error would be annotated with read config failed as well as the file and line of the call to errors.Wrap inside the readconfig function.

I realised that, at least in my own code, it is likely that the name of the function contains sufficient information to frequently make the additional context passed to Wrap redundant. But as Wrap requires a message, even if I had nothing useful to add, I’d still have to pass something:

if err != nil {
        return errors.Wrap(err, "") // ewww
}

I briefly considered making Wrap variadic–to make the second parameter optional–before realising that rather than forcing the user to manually annotate each stack frame in the return path, I can just record the entire stack trace at the point that an error is created by the errors package.

I believe that for 90% of the use cases, this natural stack trace–that is the trace collected at the point New or Errorf are called–is correct with respect to the information required to investigate the error’s cause. In the other cases, Wrap and Wrapf can be used to add context when needed.

This led to a large internal refactoring of the package to collect and expose this natural stack trace.

Fprint and Print have been removed

As mentioned earlier, the mechanism for printing not just the err.Error() text of an error, but also its stack trace, has also changed with feedback from early users.

The first attempts were a pair of functions: Print(err error), which printed the detailed error to os.Stderr, and Fprint(w io.Writer, err error), which did the same but allowed the caller to control the destination. Neither was very popular.

Print was removed in version 0.4.0 because it was just a wrapper around Fprint(os.Stderr, err) and was hard to test, harder to write an example test for, and didn’t feel like its three lines paid their way. However, with Print gone, users were unhappy that Fprint required you to pass an io.Writer, usually a bytes.Buffer, just to retrieve a string form of the error’s trace.

So, Print and Fprint were the wrong API. They were too opinionated, without being usefully opinionated. Fprint was slowly gutted over the 0.5 and 0.6 releases, and has now been replaced with a much more powerful facility inspired by Chris Hines’ go-stack/stack package.

The errors package now leverages the powerful fmt.Formatter interface to customise its output when any error generated or wrapped by this package is passed to fmt.Printf. This extended format is activated by the %+v verb. For example,

func main() {
        err := parseArgs(os.Args[1:])
        fmt.Printf("%v\n", err)
}

Prints, as expected,

not enough arguments, expected at least 3, got 0

However if we change the formatting verb to %+v,

func main() {
        err := parseArgs(os.Args[1:])
        fmt.Printf("%+v\n", err)
}

the same error value now results in

not enough arguments, expected at least 3, got 0
main.parseArgs
        /home/dfc/src/github.com/pkg/errors/_examples/wrap/main.go:12
main.main
        /home/dfc/src/github.com/pkg/errors/_examples/wrap/main.go:18
runtime.main
        /home/dfc/go/src/runtime/proc.go:183
runtime.goexit
        /home/dfc/go/src/runtime/asm_amd64.s:2059

For those who need more control, the Cause and StackTrace behaviours return values which have their own fmt.Formatter implementations. The latter is an alias for a slice of Frame values which represent each frame in a call stack. Again, Frame implements several fmt.Formatter verbs that allow its output to be customised as required.
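Concretely, here is a sketch of retrieving the trace yourself. The stackTracer interface is one you declare in your own code; the errors package deliberately does not export it, but documents it as part of its stable public interface:

package main

import (
        "fmt"

        "github.com/pkg/errors"
)

// stackTracer is satisfied by errors created by the errors package.
type stackTracer interface {
        StackTrace() errors.StackTrace
}

func main() {
        err := errors.New("whoops")
        if err, ok := err.(stackTracer); ok {
                st := err.StackTrace()
                fmt.Printf("%+v", st[0:2]) // print the top two frames of the trace
        }
}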

Putting it all together

With the changes to the errors package, some guidelines on how to use the package are in order.

  • In your own code, use errors.New or errors.Errorf at the point an error occurs.
    func parseArgs(args []string) error {
            if len(args) < 3 {
                    return errors.Errorf("not enough arguments, expected at least 3, got %d", len(args))
            }
            // ...
    }
  • If you receive an error from another function, it is often sufficient to simply return it.
    if err != nil {
           return err
    }
  • If you interact with a package from another repository, consider using errors.Wrap or errors.Wrapf to establish a stack trace at that point. This advice also applies when interacting with the standard library.
    f, err := os.Open(path)
    if err != nil {
            return errors.Wrapf(err, "failed to open %q", path)
    }
  • Always return errors to their caller rather than logging them throughout your program.
  • At the top level of your program, or worker goroutine, use %+v to print the error with sufficient detail.
    func main() {
            err := app.Run()
            if err != nil {
                    fmt.Printf("FATAL: %+v\n", err)
                    os.Exit(1)
            }
    }
  • If you want to exclude some classes of error from printing, use errors.Cause to unwrap errors before inspecting them.
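    For example, a minimal sketch, where MyError is a hypothetical error type defined elsewhere in your program:
    switch err := errors.Cause(err).(type) {
    case *MyError:
            // handle the specific error
    default:
            // unknown error, pass it up to the caller
    }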

Conclusion

The errors package, from the point of view of the four package level functions, New, Errorf, Wrap, and Wrapf, is done. Their API signatures are well tested and, now that this package has been integrated into over 100 other packages, are unlikely to change at this point.

The extended stack trace format, %+v, is still very new and I encourage you to try it and leave feedback via an issue.