Author Archives: Dave Cheney

About Dave Cheney

A chaotic neutral System Administrator with super cow powers. My weapons are fear, cynicism, and an almost fanatical devotion to the command line. twitter.com/davecheney

Go, frameworks, and Ludditry

A topic that has weighed on my mind recently is the dichotomy of frameworks vs. libraries in the Go community.

Is the prevailing stance against complex frameworks a rejection of this purported labour saving automation, or an enlightened position that has weighed the pros and cons and found that the costs of a framework based approach outweigh the benefits ?

A framework calls your code, you call a library.

If you want to call net/http a framework under that definition, go for it. You can win that argument, but don’t let it fool you into thinking that this is the exception that proves the rule.

Frameworks, if they communicate with your code in simple terms, are tone deaf. Unable to express the intent that they have been asked to pass on to you, they force you to glean meaning from their obscure requests via processes akin to psychic divination. Alternatively, frameworks subject you to an increasingly bureaucratic interaction of call-backs, configuration settings, and rigid parameters.

By comparison, the best libraries are the ones with loose coupling, powered by the pervasive nature of Go’s interfaces. These simple abstractions allow you to compose your own grammar specialised to the task at hand.
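As a rough illustration (my own sketch, not from the original post), standard library types that only know about io.Reader compose without any framework involvement: an os.File, a gzip.Reader and a bufio.Scanner combine here to count the lines of a compressed log file, and none of them knows the others exist.

package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"log"
	"os"
)

func main() {
	// Each layer accepts or provides an io.Reader, so they compose
	// without knowing anything about each other.
	f, err := os.Open("access.log.gz") // example file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	zr, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	defer zr.Close()

	lines := 0
	sc := bufio.NewScanner(zr)
	for sc.Scan() {
		lines++
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("lines:", lines)
}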

Writing programs is fundamentally about interpreting data. If you outsource that interpreter to someone else, you have artificially constrained the vocabulary with which to describe and interact with that data.

Given these two choices, I am happy to stand with the camp who desire simple composable types and packages.

Simple profiling package moved, updated

As part of preparing for my dotGo talk I updated a few of my packages to use the functional options pattern. I only had time to show one of those packages on stage, pkg/term. This is a post about one that was left on the cutting room floor.

Last year I wrote a simple package to help me automate profiling of my programs. It used the Config struct approach for configuration. Here is an example from the existing documentation.

package main

import "github.com/davecheney/profile"

func main() {
	config := profile.Config{
		CPUProfile:   true,
		MemProfile:   false,
		BlockProfile: false,
	}
	defer profile.Start(&config).Stop()
	// ...
}

Not long after I released the package, I started to get reports from users who had tried to turn on all the profiling options at the same time; they were finding the profiles interfered with each other.

I didn’t want to complicate the API with some sort of validation of the configuration, possibly returning an error. It was important to me to retain the single line usage of the package.

So, I settled for providing sample Config values that enabled just one profile option at a time, and documented that turning on multiple profiles at a time was a bad thing.

I also changed the signature of the function to take a *Config and arranged that if the caller passed nil they would get CPU profiling.
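A sketch of roughly what that looked like, reconstructed from the description above rather than copied from the package's source; error handling and the other profile types are elided.

package profile

import (
	"os"
	"runtime/pprof"
)

type Config struct {
	CPUProfile   bool
	MemProfile   bool
	BlockProfile bool
}

type profiler struct{ stop func() }

func (p *profiler) Stop() { p.stop() }

// Start begins profiling according to cfg. Passing nil selects CPU
// profiling, preserving the single line usage of the package.
func Start(cfg *Config) *profiler {
	if cfg == nil {
		cfg = &Config{CPUProfile: true}
	}
	if cfg.CPUProfile {
		f, _ := os.Create("cpu.pprof") // error handling elided
		pprof.StartCPUProfile(f)
		return &profiler{stop: func() {
			pprof.StopCPUProfile()
			f.Close()
		}}
	}
	// ... MemProfile and BlockProfile handled along the same lines.
	return &profiler{stop: func() {}}
}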

In preparing for the talk I decided to revisit my profiling package, and by applying functional options I think I have improved the API.

package main

import "github.com/pkg/profile"

func main() {
	// CPU profiling by default
	defer profile.Start().Stop()
	// ...
}

Now CPU profiling is enabled by default, unless you explicitly choose a different profile.

defer profile.Start(profile.MemProfile).Stop()

Previously, if you specified more than one profile you got a corrupted result, and specifying no profiles, i.e. passing an empty Config, produced no profile at all. Now that’s all fixed.

I was also delighted to discover that as the option functions are now real functions, you can hang examples off them. Take a look at the example of the NoShutdownHook option.
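As a sketch of what such an example looks like (see the package's godoc for the real one), an example function attached to the NoShutdownHook option lives in a _test.go file and simply shows the option in use.

func ExampleNoShutdownHook() {
	// Disable the signal handler the package normally installs,
	// leaving shutdown behaviour entirely to the calling program.
	defer profile.Start(profile.NoShutdownHook).Stop()
}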

Sourcegraph tells me that several projects are already using the old version of this package, so to avoid breaking those, I have pushed the updated version to a new repository, github.com/pkg/profile.

The old package should be considered deprecated, and you should update your code to use the new version; it’s bloody easy.

Functional options for friendly APIs

What follows is the text of my presentation, Functional options for friendly APIs that I gave at dotGo this year. It has been edited slightly for readability.

I want to thank Kelsey Hightower, Bill Kennedy, Jeremy Saenz, and Brian Ketelsen, for their assistance in preparing this talk.


[slide]

I want to begin my talk with a story.

It is late 2014, and your company is launching a revolutionary new distributed social network. Wisely, your team has chosen Go as the language for this product.

You have been tasked with writing the crucial server component. Possibly it looks a little like this.

[slide]

There are some unexported fields that need to be initialised, and a goroutine must be started to service incoming requests.

The package has a simple API; it is pretty easy to use.
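The slide itself isn't reproduced here, but reconstructed from the description it would have looked something along these lines; the package and helper names are my own.

package server

import "net"

type Server struct {
	listener net.Listener // unexported fields initialised by NewServer
}

// NewServer listens on addr and starts a goroutine to service
// incoming requests.
func NewServer(addr string) (*Server, error) {
	l, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}
	srv := Server{listener: l}
	go srv.serve()
	return &srv, nil
}

func (s *Server) serve() {
	for {
		conn, err := s.listener.Accept()
		if err != nil {
			return
		}
		go handle(conn)
	}
}

func handle(conn net.Conn) {
	// service the connection
	conn.Close()
}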

But, there is a problem. Soon after you announce your first beta release, the feature requests start to roll in.

[slide]

Mobile clients are often slow to respond, or stop responding altogether—you’ll need to add support for disconnecting these slow clients.

In this climate of heightened security awareness, your bug tracker starts to fill with demands to support secure connections.

Then, you get a report from a user who is running your server on a very small VPS. They need a way to limit the number of simultaneous clients.

Next is the request to rate limit concurrent connections from a group of users being targeted by a botnet.

… and on it goes.

Now, you need to change your API to incorporate all these feature requests.

[slide]

It’s kind of a sign that things are not going well when the function won’t easily fit on a slide.
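Continuing the sketch above, the constructor ends up looking something like this; the parameter list is a reconstruction of the kind of signature the slide poked fun at, not the slide itself.

import (
	"crypto/tls"
	"time"
)

// Every feature request grows the signature, and every existing
// caller breaks when it does.
func NewServer(addr string, clientTimeout time.Duration, maxConns, maxConcurrent int,
	cert *tls.Certificate) (*Server, error) {
	// ...
	return nil, nil
}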

Show of hands, who has used an API like this ?

Who has written an API like this ?

Who has had their code break while depending on an API like this ?

Obviously this solution is cumbersome and brittle. It also isn’t very discoverable.

Newcomers to your package have no idea which parameters are optional, and which are mandatory.

For example, if I want to create an instance of the Server for testing, do I need to provide a real TLS certificate ? If not, what do I provide instead ?

If I don’t care about maxconns, or maxconcurrent what value should I use ? Do I use zero ? Zero sounds reasonable, but depending on how the feature was implemented, that might limit you to zero total concurrent connections.

It appears to me that writing an API like this can be easy, as long as you make it the caller’s responsibility to use it correctly.

While this example could be considered an exaggeration, maliciously constructed and compounded by poor documentation, I believe that it demonstrates a real issue with ornate, brittle APIs such as this.

So now that I’ve defined the problem, let’s look at some solutions.

[slide]

Rather than trying to provide one single function which must cater for every permutation, a solution might be to create a set of functions.

With this approach, when callers need a secure server they can call the TLS variant.

When they need to establish a maximum duration for idle connections, they can use the variant that takes a timeout.
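As a sketch, continuing the hypothetical package above, the constructors multiply with each combination of features; the bodies are elided with panic placeholders because it is the number of names that matters here.

func NewServer(addr string) (*Server, error)                          { panic("elided") }
func NewTLSServer(addr string, cert tls.Certificate) (*Server, error) { panic("elided") }
func NewServerWithTimeout(addr string, timeout time.Duration) (*Server, error) {
	panic("elided")
}
func NewTLSServerWithTimeout(addr string, cert tls.Certificate, timeout time.Duration) (*Server, error) {
	panic("elided")
}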

Unfortunately, as you can see, providing every possible permutation can quickly become overwhelming.

Let’s move on to other ways of making your API configurable.

[slide]

A very common solution is to use a configuration struct.

This has some advantages.

Using this approach, the configuration struct can grow over time as new options are added, while the public API for creating a server itself remains unchanged.

This method can lead to better documentation.

What was once a massive comment block on the NewServer function, becomes a nicely documented struct.

Potentially it also enables the callers to use the zero value to signify that they want the default behaviour for a particular configuration option.
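A reconstruction of the shape of this approach, continuing the earlier sketch; the field names are illustrative.

import (
	"crypto/tls"
	"time"
)

type Config struct {
	Timeout       time.Duration    // zero means no timeout
	Cert          *tls.Certificate // nil means no TLS
	MaxConns      int
	MaxConcurrent int
	Port          int // when 0, NewServer listens on port 8080 (see below)
}

func NewServer(addr string, config Config) (*Server, error) {
	// apply config, then construct the Server as before
	// ...
	return nil, nil
}

func usage() {
	// Callers who only want the defaults must still construct a Config.
	srv, _ := NewServer("localhost", Config{})
	_ = srv
}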

[slide]

However, this pattern is not perfect.

It has trouble with defaults, especially if the zero value has a well understood meaning.

For example, in the config structure shown here, when Port is not provided, NewServer will return a *Server for listening on port 8080.

But this has the downside that you can no longer explicitly set Port to 0 and have the operating system automatically choose a free port, because that explicit 0 is indistinguishable from the field’s zero value.

[slide]

Most of the time, users of your API will be expecting to use its default behaviour.

Even though they do not intend to change any of the configuration parameters, those callers are still required to pass something for that second argument.

So, when people read your tests or your example code, trying to figure out how to use your package, they’ll see this magic empty value, and it’ll become enshrined in the collective unconsciousness.

[and] to me, this just feels wrong.

Why should users of your API be required to construct an empty value, simply to satisfy the signature of the function ?

[slide]

A common solution to this empty value problem is to pass a pointer to the value instead, thereby enabling callers to use nil rather than constructing an empty value.

In my opinion this pattern has all the problems of the previous example, and it adds a few more.

We still have to pass something for this function’s second argument, but now this value could be nil, and most of the time will be nil for those wanting the default behaviour.

It raises the question, is there a difference between passing nil, and passing a pointer to an empty value ?

More concerning to both the package’s author and its callers is that the Server and the caller now share a reference to the same configuration value, which gives rise to questions of what happens if this value is mutated after being passed to the NewServer function ?
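Sketched out, again continuing the hypothetical package rather than reproducing the slide, the pointer variant and its awkward questions look like this.

func NewServer(addr string, config *Config) (*Server, error) {
	// a nil *Config means "use the defaults"
	// ...
	return nil, nil
}

func usage() {
	srv, _ := NewServer("localhost", nil) // most callers pass nil

	cfg := Config{Timeout: 300 * time.Second}
	srv2, _ := NewServer("localhost", &cfg)
	cfg.Timeout = 10 * time.Second // does srv2 observe this change? the API doesn't say
	_, _ = srv, srv2
}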

I believe that well written APIs should not require callers to create dummy values to satisfy those rarer use cases.

I believe that we, as Go programmers, should work hard to ensure that nil is never a parameter that needs to be passed to any public function.

And when we do want to pass configuration information, it should be as self explanatory and as expressive as possible.

So now with these points in mind, I want to talk about what I believe are some better solutions.

[slide]

To remove the problem of that mandatory, yet frequently unused, configuration value, we can change the NewServer function to accept a variable number of arguments.

Instead of passing nil, or some zero value, as a signal that you want the defaults, the variadic nature of the function means you don’t need to pass anything at all.

And in my book this solves two big problems.

First, the invocation for the default behaviour becomes as concise as possible.

Secondly, NewServer now only accepts Config values, not pointers to config values, removing nil as a possible argument, and ensuring that the caller cannot retain a reference to the server’s internal configuration.
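Continuing the sketch, the variadic form looks like this; note the last call, which the compiler happily accepts.

func NewServer(addr string, config ...Config) (*Server, error) {
	// ...
	return nil, nil
}

func usage() {
	srv1, _ := NewServer("localhost") // defaults; nothing extra to pass
	srv2, _ := NewServer("localhost", Config{Timeout: 300 * time.Second})

	// The signature cannot prevent multiple, possibly contradictory,
	// Config values from being supplied.
	srv3, _ := NewServer("localhost", Config{MaxConns: 10}, Config{MaxConns: 20})
	_, _, _ = srv1, srv2, srv3
}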

I think this is a big improvement.

But if we’re being pedantic, it still has a few problems.

Obviously the expectation is for you to provide at most one Config value. But as the function signature is variadic, the implementation has to be written to cope with a caller passing multiple, possibly contradictory, configuration structs.

Is there a way to use a variadic function signature and improve the expressiveness of configuration parameters when needed ?

I think that there is.

[slide]

At this point I want to make it clear that the idea of functional options comes from a blog post titled Self-referential functions and the design of options, by Rob Pike, published in January this year. I encourage everyone here to read it.

The key difference from the previous example, and in fact all the examples so far, is that customisation of the Server is performed not with configuration parameters stored in a structure, but with functions which operate on the Server value itself.

As before, the variadic nature of the function’s signature gives us the compact behaviour for the default case.

When configuration is required, I pass to NewServer functions which operate on the Server value as an argument.

The timeout function simply changes the timeout field of any *Server value passed to it.

The tls function is a little more complicated. It takes a *Server value and wraps the original listener value inside a tls.Listener, thereby transforming it into a secure listener.
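Reconstructed from that description, and continuing the hypothetical server package, the two options look something like this; loadTLSConfig is a made-up helper returning a *tls.Config, and the unexported timeout field on Server is assumed for this sketch.

import (
	"crypto/tls"
	"time"
)

func NewServer(addr string, options ...func(*Server)) (*Server, error) {
	// the body is shown in the next sketch
	return nil, nil
}

func usage() {
	// timeout changes the timeout field of any *Server passed to it.
	timeout := func(srv *Server) {
		srv.timeout = 60 * time.Second
	}

	// tlsOpt wraps the original listener inside a tls.Listener,
	// transforming it into a secure listener.
	tlsOpt := func(srv *Server) {
		srv.listener = tls.NewListener(srv.listener, loadTLSConfig())
	}

	srv, _ := NewServer("localhost", timeout, tlsOpt)
	_ = srv
}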

[slide]

Inside NewServer, applying these options is straightforward.

After opening a net.Listener, we declare a Server instance using that listener.

Then, for each option function provided to NewServer, we call that function, passing in a pointer to the Server value that was just declared.

Obviously, if no option functions were provided, there is no work to do in this loop and so srv is unchanged.
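Put together, the body described above reads roughly like this (a reconstruction):

func NewServer(addr string, options ...func(*Server)) (*Server, error) {
	l, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}
	srv := Server{listener: l}
	for _, option := range options {
		// apply each option to the Server value we just declared
		option(&srv)
	}
	// if no options were provided, srv is simply unchanged
	return &srv, nil
}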

And that’s all there is to it.

Using this pattern we can make an API that:

  • has sensible defaults
  • is highly configurable
  • can grow over time
  • is self documenting
  • is safe for newcomers
  • and never requires nil or an empty value to keep the compiler happy

In the few minutes I have remaining I’d like to show you how I improved one of my own packages by converting it to use functional options.

[slide]

I’m an amateur hardware hacker, and many of the devices I work with use a USB serial interface. So a few months ago I wrote a terminal handling package.

In the prior version of this package, to open a serial device, change the speed and set the terminal to raw mode, you’d have to do each of these steps individually, checking the error at every stage.

Even though this package is trying to provide a friendlier interface on top of a much lower level one, it still left too many procedural warts for the user.
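Roughly, the pre-options workflow looked like this; the method names are illustrative, reconstructed from the description, rather than the package's exact old API.

t, err := term.Open("/dev/ttyUSB0")
if err != nil {
	log.Fatal(err)
}
if err := t.SetSpeed(57600); err != nil {
	log.Fatal(err)
}
if err := t.SetRaw(); err != nil {
	log.Fatal(err)
}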

Let’s take a look at the package after applying the functional options pattern.

[slide]

By converting the Open function to use a variadic parameter of function values, we get a much cleaner API.

In fact, it’s not just the Open API that improves, the grind of setting an option, checking an error, setting the next option, checking the error, that is gone as well.

The default case still takes just one argument, the name of the device.

For more complicated use cases, configuration functions, defined in the term package, are passed to the Open function and are applied in order before returning.
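In use, that looks roughly like this; the device path is an example.

// The default case: just the device name.
t, err := term.Open("/dev/ttyUSB0")

// For more complicated cases, option functions from the term package
// are passed to Open and applied in order before it returns.
t, err = term.Open("/dev/ttyUSB0", term.Speed(57600), term.RawMode)
if err != nil {
	log.Fatal(err)
}
defer t.Close()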

This is the same pattern we saw in the previous example; the only difference is that rather than being anonymous, these are public functions. In all other respects their operation is identical.

We’ll take a look at how Speed, RawMode, and Open, are implemented on the next slide.

[slide]

RawMode is the easiest to explain. It is just a function whose signature is compatible with Open.

Because RawMode is declared in the same package as Term, it can access the private fields and call private methods declared on the Term type, in this case calling the private setRawMode helper.

Speed is also just a regular function, however it does not match the signature Open requires. This is because Speed itself requires an argument; the baud rate.

Speed returns an anonymous function compatible with the Open function’s signature, one which closes over the baud rate parameter, capturing it for later, when the function is applied.

Inside the call to Open, we first open the terminal device with the openTerm helper.

Next, just as before, we range over the slice of option functions, calling each one in turn, passing in t, the pointer to our term.Term value.

If there is an error applying any function then we stop at that point, clean up and return the error to the caller.

Otherwise, we return from the function having created and configured a Term value to the caller’s specifications.
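Reconstructed from that description; the unexported helpers setRawMode and openTerm follow the prose, while setSpeed is an assumed name for the equivalent speed-setting helper.

// RawMode matches Open's option signature directly.
func RawMode(t *Term) error {
	return t.setRawMode()
}

// Speed needs an argument, so it returns a closure that captures the
// baud rate and matches Open's option signature.
func Speed(baud int) func(*Term) error {
	return func(t *Term) error {
		return t.setSpeed(baud)
	}
}

func Open(name string, options ...func(*Term) error) (*Term, error) {
	t, err := openTerm(name)
	if err != nil {
		return nil, err
	}
	for _, option := range options {
		if err := option(t); err != nil {
			t.Close() // clean up before returning the error
			return nil, err
		}
	}
	return t, nil
}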

[slide]

In summary

  • Functional options let you write APIs that can grow over time.
  • They enable the default use case to be the simplest.
  • They provide meaningful configuration parameters.
  • Finally they give you access to the entire power of the language to initialise complex values.

In this talk, I have presented many of the existing configuration patterns, those considered idiomatic and commonly in use today, and at every stage asked questions like:

  • Can this be made simpler ?
  • Is that parameter necessary ?
  • Does the signature of this function make it easy for it to be used safely ?
  • Does the API contain traps or confusing misdirection that will frustrate ?

I hope I have inspired you to do the same. To revisit code that you have written in the past and pose yourself these same questions and thereby improve it.

[slide]

Thank you.

That trailing comma

When initialising a variable with a composite literal, Go requires that each line of the composite literal end with a comma, even the last line of your declaration. This is the result of the semicolon rule.
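For example (the values here are illustrative), every element of this composite literal ends with a comma, including the last:

var packages = []string{
	"bufio",
	"bytes",
	"io", // the final element also ends with a comma
}

// Appending "net" to this list touches only one line; the "io" line
// does not need to be edited to gain a comma.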

Although possibly an unintended consequence, this means that when proposing a one line change, it really is a one line change.

[screenshot: a one line diff]

The semicolon rule, by enforcing that each line of a composite literal is terminated by a comma, ensures that your one line change doesn’t include an edit to the previous line to add a comma.

It’s the little things that make the difference.

Using // +build to switch between debug and release builds

Build tags are part of the conditional compilation system provided by the go tool. This is a quick post to discuss using build tags to selectively enable debug printing in a package.

This afternoon I merged a contribution to pkg/sftp which improved the packet encoding performance but introduced a bug where some packet types were incorrectly encoded.

% go test -integration
Unknown message 0
... oops

Turning on verbose output got a little closer.

% go test -integration -v
=== RUN TestUnmarshalAttrs
--- PASS: TestUnmarshalAttrs (0.00s)
=== RUN TestNewClient
--- PASS: TestNewClient (0.00s)
=== RUN TestClientLstat
Unknown message 0

But each integration test sends many different packets, which one was at fault?

Rather than reaching for fmt.Println I took a few minutes to add conditional debugging to this package.
debug.go

// +build debug

package sftp

import "log"

func debug(fmt string, args ...interface{}) {
	log.Printf(fmt, args...)
}

release.go

// +build !debug

package sftp

func debug(fmt string, args ...interface{}) {}

By adding a call to debug inside sendPacket, it was easy to figure out which packet was being incorrectly encoded.
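The call looked something like this; a sketch only, as the real sendPacket in pkg/sftp is more involved, and the format string here is reconstructed from the output below.

func sendPacket(w io.Writer, m encoding.BinaryMarshaler) error {
	bb, err := m.MarshalBinary()
	if err != nil {
		return err
	}
	debug("send packet %T, len: %d", m, len(bb))
	_, err = w.Write(bb)
	return err
}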

% go test -tags debug -integration -v -run=Lstat
2014/09/28 11:18:31 send packet sftp.sshFxInitPacket, len: 38
=== RUN TestClientLstat
2014/09/28 11:18:31 send packet sftp.sshFxInitPacket, len: 5
2014/09/28 11:18:31 send packet sftp.sshFxpLstatPacket, len: 62
Unknown message 0

Debugging is optional

When I committed the fix for this bug I didn’t have to spend any time removing the debug function calls inside the package.

When -tags debug is not present, the version from release.go, effectively a no-op, is used.

Extra credit

This package includes integration tests which are not run by default. How the -integration test flag works is left as an exercise to the reader.

go list, your Swiss army knife

During my day job I’ve been working on refactoring some of the internals of Juju to reverse the trend of a growing number of dependencies in our core packages.

In this post I’ll show how I used go list to help me in this task.

The basics

Let’s start with the basics of go list. The default invocation of go list returns the name of the import path that represents the directory you are currently in, or the package path you provide.

% cd $GOPATH/src/code.google.com/p/go.crypto/ssh
% go list
code.google.com/p/go.crypto/ssh
% go list github.com/juju/juju
github.com/juju/juju

By itself this doesn’t appear to be particularly notable, however this simple example belies the power of go list.

The secret of the -f flag

Tucked away at the top of the documentation for go help list is this short piece:

The -f flag specifies an alternate format for the list, using the syntax of package template. The default output is equivalent to -f '{{.ImportPath}}'.

Put simply, -f allows you to execute a Go template which has access to the internal data structures of the go tool, the same structures that are used when building, testing or getting code.

This example, using -f '{{ .ImportPath }}', is equivalent to the previous invocation and gives us a glimpse into the power of go list.

% go list -f '{{ .ImportPath }}' github.com/juju/juju
github.com/juju/juju

Going a step further with go list

The godoc for cmd/go lists the structures available to -f, so I won’t repeat them verbatim. Instead I’ll highlight some uses of go list.

Listing the test files that will be built

% go list -f '{{ .TestGoFiles }}' archive/tar
[reader_test.go tar_test.go writer_test.go]

Or the external test files of that package

% go list -f '{{ .XTestGoFiles }}' archive/tar
[example_test.go]

Or a summary for a set of packages

% go list -f '{{.Name}}: {{.Doc}}' code.google.com/p/go.net/ipv{4,6}
ipv4: Package ipv4 implements IP-level socket options for the Internet Protocol version 4.
ipv6: Package ipv6 implements IP-level socket options for the Internet Protocol version 6.

Conditional builds and build tags

An important part of writing Go programs is dealing with portability issues across various operating systems or processors. I’ve written about conditional builds before, so I’ll just show an example listing the files that will be compiled on various platforms.

% env GOOS=darwin go list -f '{{ .GoFiles }}' github.com/pkg/term
[term.go term_bsd.go]
% env GOOS=linux go list -f '{{ .GoFiles }}' github.com/pkg/term
[term.go term_linux.go]

Who depends on what?

go list can show both the packages that your package directly depends on, and its complete set of transitive dependencies.

% go list -f '{{ .Imports }}' github.com/davecheney/profile
[io/ioutil log os os/signal path/filepath runtime runtime/pprof]
% go list -f '{{ .Deps }}' github.com/davecheney/profile
[bufio bytes errors fmt io io/ioutil log math os os/signal path/filepath reflect runtime runtime/pprof sort strconv strings sync sync/atomic syscall text/tabwriter time unicode unicode/utf8 unsafe]

The first command lists only the packages that github.com/davecheney/profile depends on directly. This is the unique set of import statements in all its Go files, adjusted for build constraints. The second command returns the set of direct and transitive dependencies of github.com/davecheney/profile.

Fancy templating

The set of data structures available to the go list template is quite specialised, but don’t forget we have the whole power of the Go template package at our disposal.

In the previous examples dealing with slices of values, the default output format follows the fmt package. However it is probably more convenient for scripting applications to have one entry per line, which we can do easily because the go list template contains a function called join which delegates to strings.Join.

% go list -f '{{ join .Imports "\n" }}' github.com/davecheney/profile
io/ioutil
log
os
os/signal
path/filepath
runtime
runtime/pprof

Putting it together

With the task of trying to unpick the forest of dependencies in our core packages, I can use go list to define a helper like

deps() {
        go list -f '{{ join .Deps "\n" }}' . | grep juju
}

The usage is as simple as navigating to a particular package in my $GOPATH and checking its complete dependency list.

% deps
github.com/juju/cmd
github.com/juju/errgo
github.com/juju/errors
github.com/juju/gojsonpointer
github.com/juju/gojsonreference
github.com/juju/gojsonschema
github.com/juju/juju/api/agent
... // many more lines elided

Conclusion

This short post has barely scratched the surface of the flexibility that go list provides.

If your development, build, or CI tooling is using hand rolled scripts, grep, awk, etc., to introspect Go packages and their interdependencies, consider switching to go list.

How to install multiple versions of Go

Here is a short recipe I use for installing multiple versions of Go from source. In this example I’m going to install the release (currently Go 1.3.1) and trunk versions of Go.

Step 1. Checkout

Checkout two copies of the Go source code into independent paths.

% hg clone https://code.google.com/p/go -r release $HOME/go.release
% hg clone https://code.google.com/p/go $HOME/go.trunk

Step 2. Build

Build and run the tests for both versions.

% cd $HOME/go.release/src && ./all.bash
% cd $HOME/go.trunk/src && ./all.bash

Step 3. Done

That’s it, we’re done. You can now invoke whichever version of Go you want by invoking the go tool like so

% $HOME/go.release/bin/go test $YOURPACKAGE # test with the release version
% $HOME/go.trunk/bin/go test $YOURPACKAGE # test with the trunk version

If you want a particular version of Go to be your default, add that version’s bin directory to your $PATH

export PATH=$PATH:$HOME/go.release/bin:$GOPATH/bin

Look Ma, no $GOROOT!

You’ll notice that I didn’t set $GOROOT. You don’t need to set $GOROOT, ever [1].


  1. Unless you’re using Windows, and have decided to not follow the instructions from the golang.org site. Please refer to this helpful infographic for full details.

Go’s runtime C to Go rewrite, by the numbers

The start of September brings a close to the Go 1.4 merge window. From now until the release in December, only bug fixes and tweaks will be accepted.

A major piece of work that landed in the Go 1.4 cycle was the rewrite of parts of the Go runtime from C to Go. What may not be widely known is that the main Go distribution [1] includes a C compiler whose primary function is to compile the C code in the runtime package.

Rewriting parts of the runtime in this cycle provided the following benefits.

  • Currently if C code in the runtime is found on a Goroutine’s call stack the runtime will fall back to the old split stacks method if it needs to grow the stack. When all the parts of the runtime called from Go code are written in Go, the copying stack method can be used more effectively.
  • The translation of the Go compilers themselves from C [2] to Go is only planned to convert the Go compiler (5g, 6g, 8g), not the C compiler. Reducing the number of lines of C code in the runtime, possibly eliminating them entirely, will make the compiler rewrite simpler.

Here are the numbers.

The data captured here is not the total line count of C and Go in the runtime package, but the count of lines compiled as part of ./all.bash. In effect

go list -f "{{ range .GoFiles }}{{ . }} {{ end }}" | xargs wc -l
go list -f "{{ range .CFiles }}{{ . }} {{ end }}" | xargs wc -l

for each revision that included runtime: in the commit log.


Notes

  1. This is the golang.org distribution, commonly known as gc, not gccgo.
  2. This is a different C compiler; the gc toolchain is compiled with your system’s C compiler, the gc runtime is compiled with its C compiler.
  3. The increase in number of lines converted after August 20th is probably related to this request by khr.

Go has both make and new functions, what gives ?

This is a post about Go’s built in make and new functions.

As Rob Pike noted at Gophercon this year, Go has many ways of initialising variables. Among them is the ability to take the address of a struct literal, which leads to several ways to do the same thing.

s := &SomeStruct{}
v := SomeStruct{}
s := &v              // identical
s := new(SomeStruct) // also identical

It is fair that commenters point out this redundancy in the language and this sometimes leads them to search for other inconsistencies, most notably the redundancy between make and new.

On the surface it appears that make and new do very similar things, so what is the rationale for having both ?

Why can’t we use make for everything ?

Go does not have user defined generic types, but it does have several built in types that can operate as generic lists, maps, sets, and queues: slices, maps and channels.

Because make is designed to create these three built in generic types, it must be provided by the runtime as there is no way to express the function signature of make directly in Go.

Although make creates generic slice, map, and channel values, they are still just regular values; make does not return pointer values.
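For example, a quick illustration of the three forms:

s := make([]string, 0, 10) // a slice with length 0 and capacity 10
m := make(map[string]int)  // an empty, ready to use map
c := make(chan int, 4)     // a channel with a buffer of 4 elements
// s, m and c are ordinary values, not pointers.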

If new was removed in favour of make, how would you construct a pointer to an initialised value ?

var x1 *int
var x2 = new(int)

x1 and x2 have the same type, *int. x2 points to initialised memory and may be safely dereferenced; the same is not true for x1.

Why can’t we use new for everything ?

Although the use of new is rare, its behaviour is well specified.

new(T) always returns a *T pointing to an initialised T. As Go doesn’t have constructors, the value will be initialised to T’s zero value.

Using new to construct a pointer to a slice, map, or channel zero value works today and is consistent with the behaviour of new.

s := new([]string)
fmt.Println(len(*s))  // 0
fmt.Println(*s == nil) // true

m := new(map[string]int)
fmt.Println(m == nil) // false
fmt.Println(*m == nil) // true

c := new(chan int)
fmt.Println(c == nil) // false
fmt.Println(*c == nil) // true

Sure, but these are just rules, we can change them, right ?

For all the confusion they may cause, make and new are consistent; make only makes slices, maps, and channels, and new only returns pointers to initialised memory.

Yes, new could be extended to operate like make for slices, maps and channels, but that would introduce its own inconsistencies.

  1. new would have special behaviour if the type passed to new was a slice, map or channel. This is a rule that every Go programmer would have to remember.
  2. For slices and channels, new would have to become variadic, taking a possible length, buffer size, or capacity, as required. Again more special cases to have to remember, whereas before new took exactly one argument, the type.
  3. new always returns a *T for the T passed to it. That would mean code like
    func Read(buf []byte) []byte
    // assume new takes an optional length
    buf := Read(new([]byte, 4096))

    would no longer be possible, requiring more special cases in the grammar to permit *new([]byte, length).

In summary

make and new do different things.

If you are coming from another language, especially one that uses constructors, it may appear that new should be all you need, but Go is not those languages, nor does it have constructors.

My advice is to use new sparingly; there are almost always easier or cleaner ways to write your program without it.

As a code reviewer, I treat the use of new, like the use of named return arguments, as a signal that the code is trying to do something clever and that I need to pay special attention. It may be that the code really is clever, but more than likely, it can be rewritten to be clearer and more idiomatic.