Cross compilation just got a whole lot better in Go 1.5

Introduction

Cross compilation is one of Go’s headline features. I’ve written about it a few times, and others have taken this work and built better tooling around it.

This morning Russ Cox committed this change which resolved the last issue in making cross compilation simpler and more accessible for all Gophers. When Go 1.5 ships in August any Go programmer will be able to cross compile their program without having to go through a fussy set up phase.

Background

In the current version of Go (if you’re from the future, read: Go 1.4 and earlier), before you could cross compile a Go program you needed to go through a set up phase to enhance your Go installation with the bits necessary to build for other platforms.

This worked ok if you had built Go from source, but if you were using one of the binary distributions or something from your operating system (brew, apt, etc), then the process would turn into a maze of twisty passages, all alike.

This was necessary because for successful cross compilation you would need

  • compilers for the target platform, if it differed from your host platform, e.g. you’re on darwin/amd64 (6g) and you want to compile for linux/arm (5g).
  • a standard library for the target platform, which included some files generated at the point your Go distribution was built.

With the plan to translate the Go compiler into Go coming to fruition in the 1.5 release, the first issue is now resolved. That just left the small amount of customisation done to the runtime at installation time, which has been whittled away as part of the compiler transition (most of the customisation was the generation of include files for the C parts of the runtime) and, as of this morning, removed.

Try it out

If you can’t wait til August, you can try this out today by building the development version of Go.

Disclaimer: the development branch is changing rapidly due to the re-factoring of the compilers. If you’re using Go in production, the release version, Go 1.4.x, is always recommended.

Prerequisites

As mentioned above, to build the development version of Go you will also need Go 1.4 installed to bootstrap the build. Once Go 1.5 is released this step will be unnecessary.

These steps are a simple procedure for doing this, which I would recommend following:

  1. Uninstall any version of Go you have on your system, and remove any references to it from your $PATH.
  2. Check out Go 1.4 and build it
    % git clone https://go.googlesource.com/go $HOME/go1.4
    % cd $HOME/go1.4/src
    % git checkout release-branch.go1.4
    % ./make.bash
  3. Check out the development branch and build it
    % git clone https://go.googlesource.com/go $HOME/go
    % cd $HOME/go/src
    % env GOROOT_BOOTSTRAP=$HOME/go1.4 ./all.bash
  4. Add $HOME/go/bin to your $PATH

Build something

Notice: if you are reading this from a future where Go 1.5 has been released, you can skip the previous step.

Now that you have the development version of Go 1.5 installed, cross compiling this simple program is trivial

package main

import "fmt"
import "runtime"

func main() {
        fmt.Printf("Hello %s/%s\n", runtime.GOOS, runtime.GOARCH)
}

Now build for darwin/386

% env GOOS=darwin GOARCH=386 go build hello.go
# scp to darwin host
$ ./hello
Hello darwin/386

Or build for linux/arm

% env GOOS=linux GOARCH=arm GOARM=7 go build hello.go
# scp to linux host
$ ./hello
Hello linux/arm

That’s it!

Cross compilation can’t get any simpler than that.

Technical mumbo-jumbo

So what is happening under the hood when we compile a program for a different platform ? The -v flag gives us a clue.

% go build -v hello.go
command-line-arguments

% env GOOS=linux GOARCH=arm go build -v hello.go
runtime
errors
sync/atomic
math
unicode/utf8
sync
io
syscall
time
strconv
reflect
os
fmt
command-line-arguments

Comparing the two examples above, the first build is using the standard library that was built as part of the Go installation. That is to say, all the packages that fmt and runtime depend on are already built into $GOROOT/pkg/linux_amd64.

In the second example, the go tool detects that all the dependencies of this program need to be built for the target platform before hello.go can be compiled, and builds them, all the way back to the runtime package.

I have not included the output here because it is very verbose, but you can look at all the steps that are being performed if you pass the -x flag to go build.

Out of scope

Cross compilation while linking to libraries via cgo is the holy grail for some. Sadly the changes mentioned above do not change the situation with respect to cgo. In fact you may still need to rebuild your Go installation in the traditional way to pass environment variables like CC_FOR_TARGET. Please try it out.

Conclusion

In case you can’t tell, I’m over the moon about this improvement. Cross compilation is Go’s ace in the hole and Go 1.5 will make it even better.

Please try it out and if you find issues please let us know on the GitHub issue tracker.

Lost in translation

Over the last year I have had the privilege of travelling to meet Go communities in Japan, Korea and India. In every instance I have met experienced, passionate, pragmatic programmers ready to accept Go for what it can do for them.

At the same time the message from each of these communities was the same; where is the documentation, where are the examples, where are the tutorials ? Travelling to these communities has been a humbling experience and has made me realise my privileged position as a native English speaker.

The documentation, and the tutorials, and the examples will come, slowly; this is open source after all. But what I can offer is the fact that all the content on this blog is licensed under a Creative Commons licence.

In short, I don’t have the skills, but if you do, you are welcome to translate any content on this site, and I’ll help you in any way I can.


Practical public speaking for Nerds

A friend recently asked me for some advice in preparing a talk for an upcoming conference. I ended up writing way more than they asked for.

If you are a Nerd like me, I hope you find some of this advice helpful.


Preparing the talk

Read before you write

Of course you should do your research before talking about something, but also (re)read writing that you enjoyed as inspiration; analyse its style, and analyse which qualities you enjoyed that were unrelated to its topic.

Re-watch presentations that you enjoyed; you are going to be giving a presentation, after all. Analyse the style of the presentation, the manner of the speaker, the way they present their argument, and the way they present themselves on stage.

In this respect, imitation is the most sincere form of flattery.

Avoid the Death Sentence

Don’t write in the passive voice, ever. If you don’t know what I mean, read Death Sentence by Don Watson.

The TL;DR: the passive voice makes you sound like a CEO or a politician. Always use the first or second person pronouns, I and you, to assign ownership of an idea or responsibility to someone; don’t leave it hanging out there with phrases like “we should”.

Start at the end

In order of importance you need

  1. a conclusion
  2. an introduction
  3. everything else

The conclusion is your position, your idea, your argument, your call to action, summarised in one slide.

Don’t start writing until you know how your talk ends.

The introduction should set the stage for the conclusion.

The rest of the talking points should flow logically from the proposition established in the introduction. They should be relevant and supportive of the conclusion. If a point does not relate to the introduction, or support the conclusion then either rewrite it, drop it, or in extreme cases reconsider your conclusion.

Length

Every presentation will have a minimum and maximum time limit; if you don’t know it, ask the organiser, don’t guess. All things being equal it is preferable to finish sooner than to run overtime. Here are some of the techniques I use for planning my talk.

The two key elements are word count and the number of slides.

Word count

I prefer to write my talks out in full, as a paper; you may choose to speak to bullet points. It’s very much a personal choice. Either way, you have a set number of words to work with for your presentation.

The average speaking pace for native English speakers is around 120 to 130 words per minute. You can use this to calculate the number of words a presentation will require.

Professional speakers will talk more slowly, around 100 words per minute. Don’t assume that you will be able to do this unless you have had a lot of practice at public speaking. Plan for a higher word rate and write accordingly. It will be easier to speed up during your talk if you are short on time than to make yourself slow down if you are panicking.

A good rule of thumb is 2,000 to 2,500 words for a 20 minute presentation, 4,000 to 5,000 for a 45 minute slot.
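As a quick sanity check, 20 minutes at 100 to 125 words per minute works out to 2,000 to 2,500 words.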

Aim to finish ahead of time so people can prove they were listening by asking questions.

Slides

I budget on one supporting slide per minute. So 18 to 20 slides for a 20 minute presentation, 35 to 40 for a 45 minute presentation.

I’ve found this to be a pretty reliable rule of thumb regardless of the style of presentation or how I have prepared for it.

You may want to structure your slides with one bullet point per slide rather than one topic, but don’t think “this will take less time”. It might be true, but it risks any small delay in covering a slide multiplying through your deck and blowing your time limit.

If you are running short on time, it is easier to summarise slides with multiple points on them to catch up without looking like you’re rushing.

Practice

Read through your talk multiple times to check your material and time yourself – this is why I prefer to write my talks in full, it helps avoid the temptation to skip the rehearsal of parts that I think I know well.

On the day

Plan for the worst

With all respect to conference organisers, the presentation stage is the most hostile environment you will encounter. Every effort is made by the organisers to mitigate this, but the bottom line is, if things go to shit, you still have to do your talk, so plan for things to fail.

If your talk is formatted for wide screen, assume the projector is from the stone age and you’ll have to use someone else’s 4:3 laptop. If there is an opportunity to practice in the space, take it.

Assume the internet won’t work. Yup, it’s 2015 and internet is everywhere, except on stage. This is super important if you use tools like Google present or anything that uses web fonts.

Also assume that multi monitor set ups just won’t work, so all that clever shit that Keynote does is useless; have a fallback.

Larger conferences will either request or demand that you use their laptops, even to the point of asking for presentation material well in advance. The lowest common denominator here is PDF, so whatever tool you use, make sure it can emit a PDF.

Because of these difficulties, if you plan to use speaking notes, you should have a way to access those independent of your presentation software. I’ve found copying the text into a Google doc works well and lets me edit my speech after the presentation materials have been handed over to the organisers.

Beware that Google Docs is really pedantic about being online, so don’t assume that just because the document was open on your laptop before lunch, it’ll work on stage. If in doubt, export your notes to a PDF.

Engaging the audience

Establishing contact with the audience is one of the hardest things for me. There are many aspects to this:

  • at larger conferences, you may be on a professional stage, where the stage lights make it hard to see anything except a sea of little Apple logos in the audience.
  • different cultures show respect for the speaker in different ways. Some show they are interested by making positive grunts and noises, others sit in respectful silence.
  • humour is really hard to pull off unless you are a professional comedian, so try not to make it a requirement of your talk, or necessary to support your conclusion.
  • most tech audiences are rude. People will check their email and tweet during your presentation; they’re probably not uninterested, just insensitive; try to ignore them.
  • in other cultures it is appropriate to close your eyes during a presentation, or even snooze; don’t take it personally.

Speak slowly

Duh, who doesn’t say that ? The fact is you will be nervous, or if not nervous, excited, so you will talk faster than you plan to.

The key is to take time for yourself.

Pause between bullet points

If you’re reading from a script, then start a new paragraph between points and remind yourself to take a deep breath.

Pause at the top of each new slide

It gives the audience time to read the material on the slide before you start to speak. This is important because while you know it backwards and forwards, this is probably the first time the audience is getting a chance to see your idea.

If you feel uncomfortable then fill that time by taking a drink of water or walking to a different part of the stage. The latter is my favourite because it gives you an excuse to take another pause to walk back.

Dry throat

Your throat will get dry during your talk; this is part of our fight or flight response to stressful situations, it’s the adrenaline. If it happens, don’t let it throw you; focus on pausing between points. Take a drink of water to insert a pause into your presentation, but don’t panic if taking a drink doesn’t fix the problem; panicking will just make it worse.

Don’t beat yourself up

Lastly, don’t beat yourself up afterwards.

Public speaking is a skill that needs practice, it’s not something that any of us are born with. This is why they make us practice public speaking in high school. But it’s probably been a long time since you and I were in high school, and we probably didn’t realise the importance of what we were being taught at the time.

So, don’t expect to be awesome every time, and don’t put yourself in a position where your talk must be awesome. This isn’t an interview, it’s not a binary thing, even if you were nervous, or talked too fast, or realised that you crapped up one point in your argument, it’s still ok, the audience will still get a lot from it.

Thanks Brainman

This is a short post to recognise the incredible contribution Alex Brainman has made to the Go project.

Alex was responsible for the port of Go to Windows way back before Go 1 was even released. Since that time he has virtually single-handedly supported Go and Go users on Windows. It’s no wonder that he is the 10th most active contributor to the project.

The Windows build is consistently the most popular download from the official Go site.

While I may not use Windows, and you may not use Windows, spare a thought for the large body of developers who do use Windows to develop Go programs and are able to do so because of Alex’s efforts.

Even if your entire business doesn’t use Windows, consider the moment when your product manager comes to you and asks “so, we’ve got a request from a big customer to port our product to Windows, that’s not going to be hard, right ?”. Your answer is directly attributable to Alex’s contributions.

Alex, every Go programmer owes you a huge debt of gratitude. So let me be the first to say it, thank you for everything you have done for Go. None of us would be as successful as we are today without your work.

Errors and Exceptions, redux

In my previous post, I doubled down on my claim that Go’s error handling strategy is, on balance, the best.

In this post, I wanted to take this a bit further and prove that multiple returns and error values are the best.

When I say best, I obviously mean the best of the set of choices available to programmers who write real world programs, because real world programs have to handle things going wrong.

The language we have

I am only going to use the Go language that we have today, not any version of the language which might be available in the future; it simply isn’t practical to hold my breath for that long. As I will show, additions to the language, like, dare I say, exceptions, would not change the outcome.

A simple problem

For this discussion, I’m going to start with a made up, but very simple function, which demonstrates the requirement for error handling.

package main

import "fmt"

// Positive returns true if the number is positive, false if it is negative.
func Positive(n int) bool {
        return n > -1
}

func Check(n int) {
        if Positive(n) {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

func main() {
	Check(1)
	Check(0)
	Check(-1)
}

If you run this code, you get the following output

1 is positive
0 is positive
-1 is negative

which is wrong.

How can this single line function be wrong ? It is wrong because zero is neither positive nor negative, and that cannot be accurately captured by the boolean return value from Positive.

This is a contrived example, but hopefully one that can be adapted to discuss the costs and benefits of the various methods of error handling.

Preconditions

No matter what solution is determined to be the best, a check will have to be added to Positive to test the non-zero precondition. Here is an example with the precondition added

// Positive returns true if the number is positive, false if it is negative.
// The second return value indicates if the result is valid, which in the case
// of n == 0, is not valid.
func Positive(n int) (bool, bool) {
        if n == 0 {
                return false, false
        }
        return n > -1, true
}

func Check(n int) {
        pos, ok := Positive(n)
        if !ok {
                fmt.Println(n, "is neither")
                return
        }
        if pos {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

Running this program we see that the bug is fixed,

1 is positive
0 is neither
-1 is negative

albeit in an ungainly way. For those interested, I also tried a version using a switch which was harder to read for the saving of one line of code.
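
One possible switch based version of Check, as a sketch for comparison:

func Check(n int) {
        switch pos, ok := Positive(n); {
        case !ok:
                fmt.Println(n, "is neither")
        case pos:
                fmt.Println(n, "is positive")
        default:
                fmt.Println(n, "is negative")
        }
}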

This then is the baseline to compare other solutions.

Error

Returning a boolean is uncommon; it’s far more common to return an error value, even if the set of errors is fixed. For completeness, and because this simple example is supposed to hold up in more complex circumstances, here is an example using a value that conforms to the error interface.

// Positive returns true if the number is positive, false if it is negative.
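// In the case that n is 0, Positive returns false and a non-nil error.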
func Positive(n int) (bool, error) {
        if n == 0 {
                return false, errors.New("undefined")
        }
        return n > -1, nil
}

func Check(n int) {
        pos, err := Positive(n)
        if err != nil {
                fmt.Println(n, err)
                return
        }
        if pos {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

The result is a function which behaves the same, and the caller must check the result in a near identical way.

If anything, this underlines the flexibility of Go’s errors are values methodology. When an error needs to indicate only success or failure (think of the two result form of map lookup), a boolean can be substituted for an interface value, which removes any confusion arising from typed nils and the nilness of interface values.
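
For example, the two result form of map lookup reports presence with a plain boolean (the prices map below is only an illustration):

package main

import "fmt"

func main() {
        prices := map[string]int{"banana": 42}
        // The second return value reports whether the key was present;
        // there is no error value, and no nil interface, to worry about.
        price, ok := prices["apple"]
        if !ok {
                fmt.Println("apple: no such key")
                return
        }
        fmt.Println("apple costs", price)
}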

More boolean

Here is an example which allows Positive to return three states, true, false, and nil (Anyone with a background in set theory or SQL will be twitching at this point).

// If the result is not nil, the result is true if the number is
// positive, false if it is negative.
func Positive(n int) *bool {
        if n == 0 {
                return nil
        }
        r := n > -1
        return &r
}

func Check(n int) {
        pos := Positive(n)
        if pos == nil {
                fmt.Println(n, "is neither")
                return
        }
        if *pos {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

Positive has grown another line, because of the requirement to capture the address of the result of the comparison.

Worse, now before the return value can be used anywhere, it must be checked to make sure that it points to a valid address. This is the situation that Java developers face constantly and it leads to a deep-seated hatred of nil (with good reason). This clearly isn’t a viable solution.

Let’s try panicking

For completeness, let’s look at a version of this code that tries to simulate exceptions using panic.

// Positive returns true if the number is positive, false if it is negative.
// In the case that n is 0, Positive will panic.
func Positive(n int) bool {
        if n == 0 {
                panic("undefined")
        }
        return n > -1
}

func Check(n int) {
        defer func() {
                if recover() != nil {
                        fmt.Println(n, "is neither")
                }
        }()
        if Positive(n) {
                fmt.Println(n, "is positive")
        } else {
                fmt.Println(n, "is negative")
        }
}

… this is just getting worse.

Not exceptional

For the truly exceptional cases, the ones that represent either unrecoverable programming mistakes, like index out of bounds, or unrecoverable environmental problems, like running out of stack, we have panic.

For all of the remaining cases, any error conditions that you will encounter in a Go program are by definition not exceptional; you expect them because, regardless of whether you return a boolean, an error, or panic, the condition is the result of a test in your code.

Forgetting to check

I consider the argument that developers forget to check error codes to be cancelled out by the counter argument that developers forget to handle exceptions. Either may be true, depending on the language you are basing your argument on, but neither commands a winning position.

With that said, you only need to check the error value if you care about the result.

Knowing the difference between which errors to ignore and which to check is why we’re paid as professionals.

Conclusion

I have shown in this article that multiple returns and error values are the simplest and most reliable form of error handling to use. They are easier to use than any other form of error handling, including ones that do not even exist in Go as it stands today.

A challenge

So this is the best demonstration I can come up with, but I expect others can do better, particularly where the monadic style is used. I look forward to your feedback.

Building an atmega1284p prototype

This project was featured on Hackaday and the Atmel blog.

For the next step in my Apple 1 replica project I decided I wanted to replace the Arduino Mega board with a bare Atmega MPU with the goal of producing a two chip solution — just the Atmel and the 6502, no glue logic or external support chips.

I had been stockpiling parts for this phase of the project for a while now, so I sat down to lay out the board based on a small 5×7 cm perfboard.

Perfboard sketch

The trickiest piece was fitting the crystal and load capacitors into the design without disrupting too many of the other traces. It worked out well so I decided to add ICSP and FTDI headers and tried my hand at laying out the board using Fritzing.

Fritzing layout

The picture above is one of several designs I tried in Fritzing. I designed another that uses a flood fill ground plane to eliminate all the vias. We’ll see how that one turns out in a few weeks.

Rant: Cadsoft Eagle might be the industry standard, at least amongst open source hardware hackers, but it truly embodies the “worse is better” philosophy. Maybe one day my Altium Circuitmaker invitation will arrive (hint hint).

The finished product

While I’m waiting for my PCBs to be delivered I decided to build a simplified version. The FTDI and ICSP headers have been left off as they are readily accessible from the headers on the left hand side.

Demo time


It worked, first time.

Bootloaders

The atmega1284p chips I ordered were unprogrammed. Getting Optiboot installed on them is handled nicely by Manicbug’s Mighty1284 Arduino support package. There are only two small issues of note.

  1. Due to cross talk between the XTAL1 and RX0 pins, serial communication may be unreliable. The solution to this is to configure the clock source to use Full Swing mode (rather than the default low power mode). This is done by setting the relevant fuse settings in boards.txt like so
    mighty_opt.bootloader.low_fuses=0xf7
  2. Mighty1284 only supports Arduino 1.0.x, not the newer 1.5.x betas. This might be an issue if you are a fan of the improvements in Arduino 1.5.x, as it doesn’t look like Mighty1284 is being updated.

Next steps

I’m smitten with the 1284p. It feels like the right compromise between the pin starved 328 and the unfriendly 2560 series. The 1284p has more SRAM than either of its counterparts and ships in a package large enough that you get a full 24 pins of IO.

This experiment gave me the skills and the confidence to continue to design my replica project around the 1284p. I had originally intended to build the replica in two boards, possibly adding a third with some SRAM. Routing the upper 6502 board will be harder than the lower 1284p board, so I may have to wait until my Fritzing samples return to judge the feasibility of that approach.

Resources

Make your own Apple 1 replica

Woot! This project was featured on Hackaday.

mega6502, a big mess of wires

No Apple 1 under the tree on Christmas Day ? Never mind, with a 6502 and an Arduino Mega 2560 you can make your own.

The Apple 1 was essentially a 6502 computer with 4k of RAM and 256 bytes of ROM. The inclusion of a 6821 PIA and a Signetics video encoder meant that the Apple 1 shipped with its own 2400 baud dumb terminal built in. Just supply your own keyboard, composite monitor, and you were in business.

The good news is we can emulate the RAM, ROM, PIA, and all the glue logic with an Arduino.

The hardware

To validate the idea that an Arduino could provide a stable clock for the 6502, I started by breadboarding the project.

6502 strapped for $EA

The result was a success: with a tight assembly loop I was able to generate a 1MHz clock with a roughly 50% duty cycle. So it was on to a prototype.

Prototype 6502 “sidecar”

The protoshield has 0.1 inch connectors for the 40 pins on the 6502 and the 40 something pins on the Arduino Mega’s expansion header, allowing me to jumper between the 6502 and the Arduino. The strange jumper block presents $EA on the data bus unconditionally; this is called free running mode.

Mega6502 prototype

Because I wanted to use an LCD panel for debugging, and the patch wires on the protoshield would not fit under the LCD shield, I mounted the shield backwards and upside down, which retained the same pin outs (including 5V on the top). I called this prototype design the “sidecar”.

Sidecar wiring in detail

The schematic for wiring the sidecar to the Arduino is detailed in the README file.

The software

At the moment the software is a simple Arduino IDE sketch; you can find it on GitHub.

Clock

The Arduino provides the ϕ0 clock signal as part of the main loop() function. The 6502 interacts with the outside world on the falling edge of this clock (actually a few ns after ϕ0, the falling edge of ϕ2). It produces the address and read/write signals on the rising edge of ϕ0.

Different 6502 models have different requirements for the minimum and maximum length of each phase of ϕ0. The original NMOS 6502 required a clock of at least 100kHz to avoid losing internal CPU state, which made single stepping more complicated. With the Rockwell 65c02 I am using, the ϕ0 low phase must not exceed 5μs, but the clock signal can remain high indefinitely (the fully static WDC 6502 removes any restriction on a minimum clock).

We can use this property to generate a stable ϕ0 low around 500 ns (the minimum instruction time on a 16MHz Atmega is 62.5ns), then raise ϕ0 and do our processing, even take an interrupt. Because I have the 4MHz 65c02 version, we can even make the ϕ0 low period shorter, to allow our high pulse to take longer in an effort to reach the 1MHz clock target.

Laughton Electronics has published a fantastic page if you want to learn more about the 6502 timings.

Ram

The Apple 1 divided the 6502’s address space into 16 4k banks which could be assigned to RAM, ROM, or I/O.

The 2560 includes 8KB of SRAM, so we dedicate 4k to be the bottom bank of RAM starting at $0000, which is more than enough for a usable replica. For Apple 1s with 8KB of RAM, the second bank of RAM was usually strapped to $E000 and used for BASIC. The nice property of this is that we can replace the $E000 bank with a ROM image mapped to that location (BASIC did not expect to be able to write to memory at $E000) and achieve the same effect without providing another 4k of RAM.

ROM

The original 256 byte Woz monitor ROM is provided at $FF00. For simplicity’s sake the ROM is mirrored at every page in the $F000 address space.

I have tested a few of the popular ROM images like A1Assembler and Applesoft-lite but only include the Woz monitor rom in the source distribution. Enterprising readers should have little difficulty modifying the source code to include additional ROM images.

Input and output

The Apple 1 interfaces to the keyboard and screen via four registers from the 6821 PIA chip mapped into the address space at $D000.

When a key is pressed on the keyboard, the high bit of $D011 is latched high. This can be detected by the 6502 ROM monitor, which then reads $D010 to fetch the keycode, conveniently encoded as 7-bit ASCII.

Output is similar: the 6502 polls $D013 until the PIA reports that the video encoder is not busy, then the character to write to the screen is placed in $D012.

It is straightforward to map reads and writes of these addresses to the Arduino serial port. Again for simplicity, the PIA is mirrored to every page in $D000.
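
Putting those pieces together, the memory map the sketch presents to the 6502 looks roughly like this:

$0000           4k bottom bank of RAM, backed by the Arduino's SRAM
$D010-$D013     PIA keyboard and display registers, mirrored to every page in $D000
$E000           BASIC ROM image (or a second bank of RAM)
$FF00           256 byte Woz monitor ROM, mirrored at every page in $F000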

The speed

As with my previous projects, performance is always a problem. Assuming a 50% duty cycle for the ϕ0 clock, a 16MHz Atmel has 8 cycles to decode the address and read/write the data. This is basically impossible. However, as I am using a Rockwell 65c02 CPU, which is CMOS, and a higher speed grade than the original NMOS based 6502, we can cheat and shorten the ϕ2 low, trading that time for a longer ϕ2 high pulse.

Just shy of 300khz

Using my trusty Bitscope Micro, I can probe the ϕ2 clock. You can see the asymmetry between the high and low phases. The high phase is currently 2.8μs, or around 45 cycles for the Arduino. This equates to a clock speed of just under 300kHz, which is very usable.
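
To connect the numbers, taking the ϕ0 low phase as roughly the 500 ns described earlier:

62.5ns per Arduino cycle × 45 cycles ≈ 2.8μs for the high phase
period ≈ 2.8μs high + ~0.5μs low ≈ 3.3μs
frequency ≈ 1 / 3.3μs ≈ 300kHz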

Demo time

Here is a short video showing the mega6502 running a short BASIC program in debug mode.

Here is a screen capture showing David Schmenk’s 30th birthday demo for the Apple 1.

Next steps

  • More tweaking of the decode logic to try to reduce the ϕ2 high period.
  • Implement a faux cassette interface, possibly using the SD card for cassette storage.
  • A new design using an Atmega1284P — a minimalistic two chip SBC 6502 solution, assuming I can find a bootloader that works.

Resources

If you liked this project, check out some other fantastic 6502 projects.

  • Project:65
  • Quinn Dunki’s fantastic Veronica. We are not worthy!
  • PDA6502, Paul has designed his own 6502 solution from scratch.

Inspecting errors

The common contract for functions which return a value of the interface type error is that the caller should not presume anything about the state of the other values returned from that call without first checking the error.

In the majority of cases, error values returned from functions should be opaque to the caller. That is to say, a test that error is nil indicates if the call succeeded or failed, and that’s all there is to it.

A small number of cases, generally revolving around interactions with the world outside your process, like network activity, require that the caller investigate the nature of the error to decide if it is reasonable to retry the operation.

A common request for package authors is to return errors of a known public type, so the caller can type assert and inspect them. I believe this practice leads to a number of undesirable outcomes:

  • Public error types increase the surface area of the package’s API.
  • New implementations must only return types specified in the interface’s declaration, even if they are a poor fit.
  • The error type cannot be changed or deprecated after introduction without breaking compatibility, making for a brittle API.

Callers should feel no more comfortable asserting an error is a particular type than they would be asserting the string returned from Error() matches a particular pattern.

Instead I present a suggestion that permits package authors and consumers to communicate about their intention, without having to overly couple their implementation to the caller.

Assert errors for behaviour, not type

Don’t assert an error value is a specific type, but rather assert that the value implements a particular behaviour.

This suggestion fits the “has a” nature of Go’s implicit interfaces, rather than the “is a [subtype of]” nature of inheritance based languages. Consider this example:

func isTimeout(err error) bool {
        type timeout interface {
                Timeout() bool
        }
        if te, ok := err.(timeout); ok {
                return te.Timeout()
        }
        return false
}

The caller can use isTimeout() to determine if the error is related to a timeout, via its implementation of the timeout interface, all without knowing anything about the type, or the original source, of the error value.

This method also enables gift wrapping errors, usually by libraries that annotate the error path, provided that the wrapping error types also implement the interfaces of the error they wrap.
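
As a sketch of what that means (the annotatedError type below is only an illustration, not part of any real package), a wrapping type can forward the behaviour of the error it wraps:

// annotatedError wraps another error with some additional context.
type annotatedError struct {
        msg string
        err error
}

func (a *annotatedError) Error() string { return a.msg + ": " + a.err.Error() }

// Timeout delegates to the wrapped error, so isTimeout() gives the same
// answer for the wrapper as it would for the original error value.
func (a *annotatedError) Timeout() bool {
        type timeout interface {
                Timeout() bool
        }
        if te, ok := a.err.(timeout); ok {
                return te.Timeout()
        }
        return false
}

A Temporary() method can be forwarded in exactly the same way.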

This may seem like an insoluble problem, but in practice there are relatively few interface methods that are in common use, so Timeout() bool and Temporary() bool would cover a large set of the use cases.

In conclusion

Don’t assert errors for type, assert for behaviour.

For package authors, if your package generates errors of a temporary nature, ensure you return error types that implement the respective interface methods. If you wrap error values on the way out, ensure that your wrappers respect the interface(s) that the underlying error value implemented.

For package users, if you need to inspect an error, use interfaces to assert the behaviour you expect, not the error’s type. Don’t ask package authors for public error types; ask that they make their types conform to common interfaces by supplying Timeout() or Temporary() methods as appropriate.