- 26 Oct, 2016 18 commits
-
Austin Clements authored
Currently, markroot delays scanning mark worker stacks until mark termination by putting the mark worker G directly on the rescan list when it encounters one during the mark phase. Without this, since mark workers are non-preemptible, two mark workers that attempt to scan each other's stacks can deadlock.

However, this is annoyingly asymmetric and causes some real problems. First, markroot does not own the G at that point, so it's not technically safe to add it to the rescan list. I haven't been able to find a specific problem this could cause, but I suspect it's the root cause of issue #17099. Second, this will interfere with the hybrid barrier, since there is no stack rescanning during mark termination with the hybrid barrier.

This commit switches to a different approach. We move the mark worker's call to gcDrain to the system stack and set the mark worker's status to _Gwaiting for the duration of the drain to indicate that it's preemptible. This lets another mark worker scan its G stack while the drain is running on the system stack. We don't return to the G stack until we can switch back to _Grunning, which ensures we don't race with a stack scan. This lets us eliminate the special case for mark worker stack scans and scan them just like any other goroutine.

The only subtlety to this approach is that we have to disable stack shrinking for mark workers; they could be referring to captured variables from the G stack, so it's not safe to move their stacks.

Updates #17099 and #17503.

Change-Id: Ia5213949ec470af63e24dfce01df357c12adbbea Reviewed-on: https://go-review.googlesource.com/31820 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
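A sketch of the ordering the commit establishes (runtime internals are stubbed here so the fragment type-checks on its own; the real functions are the runtime's casgstatus, systemstack, and gcDrain, and the drain flags are elided):

    package sketch

    // Stubs standing in for runtime internals.
    const (
        _Grunning = iota
        _Gwaiting
    )

    type g struct{ status int }
    type gcWork struct{}

    func casgstatus(gp *g, from, to int) { gp.status = to }
    func systemstack(f func())           { f() }
    func gcDrain(w *gcWork, flags int)   {}

    // markWorkerDrain shows the ordering the commit establishes.
    func markWorkerDrain(gp *g, w *gcWork) {
        // Make the worker preemptible so other workers may scan its stack.
        casgstatus(gp, _Grunning, _Gwaiting)
        systemstack(func() {
            // Runs on the system stack; gp's G stack may be scanned
            // concurrently, so nothing here may touch it.
            gcDrain(w, 0)
        })
        // Only now is it safe to return to and use the G stack.
        casgstatus(gp, _Gwaiting, _Grunning)
    }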
-
David du Colombier authored
This issue has been fixed in CL 31271. Fixes #8908. Change-Id: I8015490e2d992e09c664560e42188315e0e0669e Reviewed-on: https://go-review.googlesource.com/32150 Run-TryBot: David du Colombier <0intro@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Austin Clements authored
Now that SSA's write barrier pass is generating calls to these, compile doesn't need to look them up. Change-Id: Ib50e5f2c67b247ca280d467c399e23877988bc12 Reviewed-on: https://go-review.googlesource.com/32170 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
David du Colombier authored
TestRemoveDevNull was added in CL 31657. However, this test was failing on Plan 9, because /dev/null was considered a regular file. On Plan 9, there is no special mode to distinguish between device files and regular files. However, files are served by different servers. For example, /dev/null is served by #c (devcons), while /bin/cat is served by #M (devmnt). We chose to consider only the files served by #M as regular files; files served by any other server are considered device files. Fixes #17598. Change-Id: Ibb1c3357d742cf2a7de15fc78c9e436dc31982bb Reviewed-on: https://go-review.googlesource.com/32152 Reviewed-by: Russ Cox <rsc@golang.org> Run-TryBot: David du Colombier <0intro@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
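A sketch of the check this implies (assuming Plan 9's syscall.Dir, whose Type field identifies the serving device; the helper name is hypothetical):

    // +build plan9

    package sketch

    import "syscall"

    // isRegular reports whether the stat result d describes a regular file
    // under the rule above: only files served by #M (devmnt) qualify.
    // Everything served elsewhere (e.g. #c for devcons) is a device file.
    func isRegular(d *syscall.Dir) bool {
        return d.Type == 'M'
    }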
-
Russ Cox authored
This allows callers to invoke f.Usage() themselves and get the default usage handler instead of a panic (from calling a nil function). Fixes #16955. Change-Id: Ie337fd9e1f85daf78c5eae7b5c41d5ad8c1f89bf Reviewed-on: https://go-review.googlesource.com/31576 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Rob Pike <r@golang.org>
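A hedged illustration of the behavior this enables (not taken from the CL's own tests):

    package main

    import (
        "flag"
        "os"
    )

    func main() {
        fs := flag.NewFlagSet("example", flag.ContinueOnError)
        fs.SetOutput(os.Stdout)
        fs.String("name", "", "a name to greet")
        fs.Usage() // was a nil-function panic; now prints the default usage
    }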
-
Rob Pike authored
Make the messages grammatically correct and consistent. Fixes #16844. Change-Id: I7c137b4dc25c0c875ed07b0c64c67ae984c39cbc Reviewed-on: https://go-review.googlesource.com/32112 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Rob Pike authored
For some reason git won't let me write doc/effective_go.html: reword confusing sentence or even doc/effective_go: reword confusing sentence as the subject line for this CL, but that's not important. The actual CL just rewrites one sentence and adds an option to grep in the associated example. Fixes #15875. Change-Id: Iee159ea751caf4b73eacf3dfc86e29032646373f Reviewed-on: https://go-review.googlesource.com/32110 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Austin Clements authored
Currently, if the number of stack barriers for a stack is 0, we'll create a zero-length slice that points just past the end of the stack allocation. This bad pointer causes GC panics. Fix this by creating a nil slice if the stack barrier count is 0. In practice, the only way this can happen is if GODEBUG=gcstackbarrieroff=1 is set because even the minimum size stack reserves space for two stack barriers. Change-Id: I3527c9a504c445b64b81170ee285a28594e7983d Reviewed-on: https://go-review.googlesource.com/31762 Reviewed-by: Rick Hudson <rlh@golang.org>
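The slice pitfall behind the bug can be demonstrated in ordinary Go (an illustration, not the runtime's code; note that ordinary heap objects tolerate one-past-the-end slice pointers, while the manually managed stack memory holding the barrier array does not, hence the nil-slice fix):

    package main

    import (
        "fmt"
        "reflect"
        "unsafe"
    )

    func main() {
        buf := make([]byte, 4)
        tail := buf[4:] // len 0, cap 0: no elements, yet not a nil slice

        base := (*reflect.SliceHeader)(unsafe.Pointer(&buf)).Data
        past := (*reflect.SliceHeader)(unsafe.Pointer(&tail)).Data
        fmt.Println(past == base+4) // true: tail points one past buf's end

        var none []byte // the fix: a nil slice carries no pointer at all
        fmt.Println(none == nil)
    }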
-
Austin Clements authored
This adds debug code enabled in gccheckmark mode that panics if we attempt to mark an unallocated object. This is a common issue with the hybrid barrier when we're manipulating uninitialized memory that contains stale pointers. This also tends to catch bugs that will lead to "sweep increased allocation count" crashes closer to the source of the bug. Change-Id: I443ead3eac6f316a46f50b106078b524cac317f4 Reviewed-on: https://go-review.googlesource.com/31761 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently reflectcall has a subtle dance with write barriers where the assembly code copies the result values from the stack to the in-heap argument frame without write barriers and then calls into the runtime after the fact to invoke the necessary write barriers.

For the hybrid barrier (and for ROC), we need to switch to a *pre*-write write barrier, which is very difficult to do with the current setup. We could tie ourselves in knots of subtle reasoning about why it's okay in this particular case to have a post-write write barrier, but this commit instead takes a different approach. Rather than making things more complex, this simplifies reflection calls so that the argument copy is done in Go using normal bulk write barriers.

The one difficulty with this approach is that calling into Go requires putting arguments on the stack, but the call* functions "donate" their entire stack frame to the called function. We can get away with this now because the copy avoids using the stack and has copied the results out before we clobber the stack frame to call into the write barrier. The solution in this CL is to call another function, passing arguments in registers instead of on the stack, and let that other function reserve more stack space and set up the arguments for the runtime.

This approach seemed to work out the best. I also tried making the call* functions reserve 32 extra bytes of frame for the write barrier arguments and adjust SP up by 32 bytes around the call. However, even with the necessary changes to the assembler to correct the spdelta table, the runtime was still having trouble with the frame layout (and the changes to the assembler caused many other things that do strange things with the SP to fail to assemble). The approach I took doesn't require any funny business with the SP.

Updates #17503.

Change-Id: Ie2bb0084b24d6cff38b5afb218b9e0534ad2119e Reviewed-on: https://go-review.googlesource.com/31655 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Russ Cox authored
The logic for saving the list of packages was not always preferring to keep error messages around correctly. The missed error led to an internal consistency failure later. Fixes #17119. Change-Id: I9723b5d2518c25e2cac5249e6a7b907be95b521c Reviewed-on: https://go-review.googlesource.com/31812 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Quentin Smith <quentin@golang.org>
-
Filippo Valsorda authored
Fixes #17430 Change-Id: Ia1c25363d64e3091455ce00644438715aff30a0d Reviewed-on: https://go-review.googlesource.com/31391 Run-TryBot: Adam Langley <agl@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Filippo Valsorda <hi@filippo.io>
-
Russ Cox authored
Fixes #17409. Change-Id: Ib49ff4a467431b5c1e6637e5144979cf0bfba489 Reviewed-on: https://go-review.googlesource.com/31817 Reviewed-by: Martin Möhrmann <martisch@uos.de> Reviewed-by: Quentin Smith <quentin@golang.org>
-
Russ Cox authored
If we leave it for compilation, sometimes the error appears first in derived vendor paths, without any indication where they came from. This is better.

$ go1.7 build canonical/d
cmd/go/testdata/src/canonical/a/a.go:3: non-canonical import path "canonical/a//vendor/c" (should be "canonical/a/vendor/c")
cmd/go/testdata/src/canonical/a/a.go:3: can't find import: "canonical/a//vendor/c"

$ go build canonical/d
package canonical/d
        imports canonical/b
        imports canonical/a/: non-canonical import path: "canonical/a/" should be "canonical/a"
$

Fixes #16954. Change-Id: I315ccec92a00d98a08c139b3dc4e17dbc640edd0 Reviewed-on: https://go-review.googlesource.com/31668 Reviewed-by: Quentin Smith <quentin@golang.org>
-
Russ Cox authored
This is not strictly illegal but it probably should be (too late) and doesn't mean what it looks like it means: the second key-value pair has the key ",xml". Fixes #14466. Change-Id: I174bccc23fd28affeb87f57f77c6591634ade641 Reviewed-on: https://go-review.googlesource.com/32031 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rob Pike <r@golang.org> Reviewed-by: Quentin Smith <quentin@golang.org>
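An illustration of the vetted mistake (hypothetical struct; by convention, struct tags are space-separated key:"value" pairs):

    package sketch

    type Item struct {
        // Wrong: tags are space-separated, so the comma makes the second
        // pair's key ",xml" and the xml package never sees it.
        A string `json:"a",xml:"a"`

        // Right: separate key:"value" pairs with a space.
        B string `json:"b" xml:"b"`
    }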
-
Michael Munday authored
This CL introduces some minor changes to match rules more closely to the instructions they are targeting. s390x logical-operation-with-immediate instructions typically leave some bits in the target register unchanged. This means, for example, that an XOR with -1 requires 2 instructions. It is better in cases such as this to create a constant and leave it visible to the compiler so that it can be reused, rather than hiding it in the assembler. This CL also tweaks the rules a bit to ensure that constants are folded when possible. Change-Id: I1c6dee31ece00fc3c5fdf6a24f1abbc91dd2db2a Reviewed-on: https://go-review.googlesource.com/31754 Reviewed-by: Keith Randall <khr@golang.org>
-
Alexander Morozov authored
Change-Id: I5c501f598f41241e6d7b21d98a126827a3c3ad9a Reviewed-on: https://go-review.googlesource.com/32018 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Francesc Campoy authored
Currently any warning will make dist fail because the text will be considered part of the package list. Change-Id: I09a14089cd0448c3779e2f767e9356fe3325d8d9 Reviewed-on: https://go-review.googlesource.com/32111 TryBot-Result: Gobot Gobot <gobot@golang.org> Run-TryBot: Andrew Gerrand <adg@golang.org> Reviewed-by: Andrew Gerrand <adg@golang.org>
-
- 25 Oct, 2016 22 commits
-
Matthew Dempsky authored
Use a single *struct{} type instance rather than reconstructing one for every declared/imported interface method. Minor allocations win:

name      old alloc/op   new alloc/op   delta
Template  41.8MB ± 0%    41.7MB ± 0%    -0.10%  (p=0.000 n=9+10)
Unicode   34.2MB ± 0%    34.2MB ± 0%    ~       (p=0.971 n=10+10)
GoTypes   123MB ± 0%     122MB ± 0%     -0.03%  (p=0.000 n=9+10)
Compiler  495MB ± 0%     495MB ± 0%     -0.01%  (p=0.000 n=10+10)

name      old allocs/op  new allocs/op  delta
Template  409k ± 0%      408k ± 0%      -0.13%  (p=0.000 n=10+10)
Unicode   354k ± 0%      354k ± 0%      ~       (p=0.516 n=10+10)
GoTypes   1.22M ± 0%     1.22M ± 0%     -0.03%  (p=0.009 n=10+10)
Compiler  4.43M ± 0%     4.43M ± 0%     -0.02%  (p=0.000 n=10+10)

Change-Id: Id3a4ca3dd09112bb96ccc982b06c9e79f661d31f Reviewed-on: https://go-review.googlesource.com/32051 Reviewed-by: Robert Griesemer <gri@golang.org>
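The optimization is plain memoization of one shared instance. A generic sketch with made-up types, not the compiler's code (the frontend is single-threaded at this point, so no locking is needed):

    package sketch

    type Type struct{ kind string }

    // fakeRecv is the single *struct{} receiver type shared by every
    // interface method signature, instead of one allocation per method.
    var fakeRecv *Type

    func fakeRecvType() *Type {
        if fakeRecv == nil {
            fakeRecv = &Type{kind: "*struct{}"}
        }
        return fakeRecv
    }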
-
Keith Randall authored
This reverts commit 7dd9c385. Reason for revert: Reverting the revert, which will re-enable the convI2E optimization. We originally reverted the convI2E optimization because it was making the builder fail, but the underlying cause was later determined to be unrelated. Original CL: https://go-review.googlesource.com/31260 Revert CL: https://go-review.googlesource.com/31310 Real bug: https://go-review.googlesource.com/c/25159 Real fix: https://go-review.googlesource.com/c/31316 Change-Id: I17237bb577a23a7675a5caab970ccda71a4124f2 Reviewed-on: https://go-review.googlesource.com/32023 Run-TryBot: Keith Randall <khr@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Joe Tsai authored
In the situation where the Client.Jar is set and the Request.Header has cookies manually inserted, the redirect logic needs to be able to apply changes to cookies from "Set-Cookie" headers to both the Jar and the manually inserted Header cookies. Since Header cookies lack information about the original domain and path, the logic in this CL simply removes cookies from the initial Header if any subsequent "Set-Cookie" matches. Thus, in the event of cookie conflicts, the logic preserves the behavior prior to change made in golang.org/cl/28930. Fixes #17494 Updates #4800 Change-Id: I645194d9f97ff4d95bd07ca36de1d6cdf2f32429 Reviewed-on: https://go-review.googlesource.com/31435Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
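The conflict rule reduces to roughly this (a sketch with names of my own; the real logic lives inside Client's redirect loop and also updates the Jar):

    package sketch

    import "net/http"

    // removeShadowed drops manually supplied request cookies whose names a
    // redirect response has since overridden via Set-Cookie headers.
    func removeShadowed(initial []*http.Cookie, resp *http.Response) []*http.Cookie {
        overridden := make(map[string]bool)
        for _, c := range resp.Cookies() {
            overridden[c.Name] = true
        }
        keep := initial[:0]
        for _, c := range initial {
            if !overridden[c.Name] {
                keep = append(keep, c)
            }
        }
        return keep
    }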
-
Matthew Dempsky authored
It's only necessary to wrap named OTYPE or OLITERAL nodes, because their line numbers reflect the line number of the declaration, rather than use. Saves a lot of wrapper nodes in composite-literal-heavy packages like Unicode.

name      old alloc/op   new alloc/op   delta
Template  41.8MB ± 0%    41.8MB ± 0%    -0.07%  (p=0.000 n=10+10)
Unicode   36.6MB ± 0%    34.2MB ± 0%    -6.55%  (p=0.000 n=10+10)
GoTypes   123MB ± 0%     123MB ± 0%     -0.02%  (p=0.004 n=10+10)
Compiler  495MB ± 0%     495MB ± 0%     -0.03%  (p=0.000 n=10+10)

name      old allocs/op  new allocs/op  delta
Template  409k ± 0%      409k ± 0%      -0.05%  (p=0.029 n=10+10)
Unicode   371k ± 0%      354k ± 0%      -4.48%  (p=0.000 n=10+9)
GoTypes   1.22M ± 0%     1.22M ± 0%     ~       (p=0.075 n=10+10)
Compiler  4.44M ± 0%     4.44M ± 0%     -0.02%  (p=0.000 n=10+10)

Change-Id: Id1183170835125c778fb41b7e76d06d5ecd4f7a1 Reviewed-on: https://go-review.googlesource.com/32021 Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Robert Griesemer <gri@golang.org>
-
Matthew Dempsky authored
Change-Id: I7306d28930dc4538a3bee31ff5d22f3f40681ec5 Reviewed-on: https://go-review.googlesource.com/32020 Run-TryBot: Matthew Dempsky <mdempsky@google.com> Reviewed-by: Robert Griesemer <gri@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Now that sweeping and span marking use the sweep list, there's no need for the work.spans snapshot of the allspans list. This change eliminates the few remaining uses of it, which are either dead code or can use allspans directly, and removes work.spans and its support functions. Change-Id: Id5388b42b1e68e8baee853d8eafb8bb4ff95bb43 Reviewed-on: https://go-review.googlesource.com/30537 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently markrootSpans iterates over all spans ever allocated to find the in-use spans. Since we now have a list of in-use spans, change it to iterate over that instead. This, combined with the previous change, fixes #9265. Before these two changes, blowing up the heap to 8GB and then shrinking it to a 0MB live set caused the small-heap portion of the test to run 60x slower than without the initial blowup. With these two changes, the time is indistinguishable. No significant effect on other benchmarks. Change-Id: I4a27e533efecfb5d18cba3a87c0181a81d0ddc1e Reviewed-on: https://go-review.googlesource.com/30536 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently sweeping walks the list of all spans, which means the work in sweeping is proportional to the maximum number of spans ever used. If the heap was once large but is now small, this causes an amortization failure: on a small heap, GCs happen frequently, but a full sweep still has to happen in each GC cycle, which means we spend a lot of time in sweeping.

Fix this by creating a separate list consisting of just the in-use spans to be swept, so sweeping is proportional to the number of in-use spans (which is proportional to the live heap). Specifically, we create two lists: a list of unswept in-use spans and a list of swept in-use spans. At the start of the sweep cycle, the swept list becomes the unswept list and the new swept list is empty. Allocating a new in-use span adds it to the swept list. Sweeping moves spans from the unswept list to the swept list.

This fixes the amortization problem because a shrinking heap moves spans off the unswept list without adding them to the swept list, reducing the time required by the next sweep cycle.

Updates #9265. This fix eliminates almost all of the time spent in sweepone; however, markrootSpans has essentially the same bug, so now the test program from this issue spends all of its time in markrootSpans.

No significant effect on other benchmarks.

Change-Id: Ib382e82790aad907da1c127e62b3ab45d7a4ac1e Reviewed-on: https://go-review.googlesource.com/30535 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
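A sketch of the two-list scheme with simplified types (the real lists are lock-protected mspan lists inside mheap):

    package sketch

    type span struct{ swept bool }

    type sweeper struct {
        unswept []*span // in-use spans not yet swept this cycle
        swept   []*span // in-use spans already swept, plus new allocations
    }

    // startCycle begins a sweep cycle: last cycle's swept spans become the
    // unswept list, and the swept list starts empty.
    func (s *sweeper) startCycle() {
        s.unswept, s.swept = s.swept, s.unswept[:0]
    }

    // sweepOne sweeps one span. Work is proportional to the in-use spans:
    // spans freed by a shrinking heap simply never reappear on either list.
    func (s *sweeper) sweepOne() bool {
        n := len(s.unswept)
        if n == 0 {
            return false
        }
        sp := s.unswept[n-1]
        s.unswept = s.unswept[:n-1]
        sp.swept = true
        s.swept = append(s.swept, sp)
        return true
    }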
-
Austin Clements authored
Change-Id: I4b8a6f5d9bc5aba16026d17f99f3512dacde8d2d Reviewed-on: https://go-review.googlesource.com/30534 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently we set the len and cap of h.spans to the full reserved region of the address space and track the actual mapped region separately in h.spans_mapped. Since we have both the len and cap at our disposal, change things so len(h.spans) tracks how much of the spans array is mapped and eliminate h.spans_mapped. This simplifies mheap and means we'll get nice "index out of bounds" exceptions if we do try to go off the end of the spans rather than a SIGSEGV. Change-Id: I8ed9a1a9a844d90e9fd2e269add4704623dbdfe6 Reviewed-on: https://go-review.googlesource.com/30533 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
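The same len/cap trick can be shown with an ordinary slice (a toy illustration; the runtime's backing array is the reserved spans region):

    package main

    import "fmt"

    func main() {
        backing := make([]int, 8) // stands in for the fully reserved region
        spans := backing[0:2:8]   // len = mapped portion, cap = reservation

        fmt.Println(spans[1]) // fine: inside the mapped portion
        // _ = spans[5]       // would panic: index out of range, not SIGSEGV

        spans = spans[:4] // "mapping" more of the region: grow len toward cap
        fmt.Println(len(spans), cap(spans))
    }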
-
Austin Clements authored
Like h_allspans and mheap_.allspans, these were two ways of referring to the spans array from when the runtime was split between C and Go. Clean this up by making mheap_.spans a slice and eliminating h_spans. Change-Id: I3aa7038d53c3a4252050aa33e468c48dfed0b70e Reviewed-on: https://go-review.googlesource.com/30532 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
This was necessary in the C days when allspans was an mspan**, but now that allspans is a Go slice, this is redundant with len(allspans) and we can use range loops over allspans. Change-Id: Ie1dc39611e574e29a896e01690582933f4c5be7e Reviewed-on: https://go-review.googlesource.com/30531 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
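The cleanup is the usual C-to-Go migration from an explicit count to a range loop (illustrative before/after, not the literal diff):

    package sketch

    type mspan struct{}

    func walk(allspans []*mspan, nspan uint32) {
        // Before: an explicit element count carried over from the C days.
        for i := uint32(0); i < nspan; i++ {
            _ = allspans[i]
        }
        // After: the slice knows its own length, so nspan is redundant.
        for _, s := range allspans {
            _ = s
        }
    }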
-
Austin Clements authored
These are two ways to refer to the allspans array that hark back to when the runtime was split between C and Go. Clean this up by making mheap_.allspans a slice and eliminating h_allspans. Change-Id: Ic9360d040cf3eb590b5dfbab0b82e8ace8525610 Reviewed-on: https://go-review.googlesource.com/30530 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Matthew Dempsky authored
Change-Id: I3c784986755cfbbe1b8eb8da4d64227bd109a3b0 Reviewed-on: https://go-review.googlesource.com/27203 Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Robert Griesemer <gri@golang.org>
-
David du Colombier authored
Generated from go vet. Change-Id: Ie775c29b505166e0bd511826ef20eeb153a0424c Reviewed-on: https://go-review.googlesource.com/32071 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
David du Colombier authored
Generated from go vet. Change-Id: I2620e5544be46485a876c7dce26b0592bf5a4101 Reviewed-on: https://go-review.googlesource.com/32070 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Cherry Zhang authored
When the compiler inserts write barriers, the frontend makes conservative decisions at an early stage. This may produce false positives that result in write barriers for stack writes. A new phase, writebarrier, is added to the SSA backend to delay the decision and eliminate false positives. The frontend still makes conservative decisions: when building SSA, instead of emitting runtime calls directly, it emits WB ops (StoreWB, MoveWB, etc.), which are expanded into branches and runtime calls in the writebarrier phase. Writes to static locations on the stack are detected and their write barriers removed. All write barriers of stack writes found by the script from issue #17330 are eliminated (except two false positives). Fixes #17330. Change-Id: I9bd66333da9d0ceb64dcaa3c6f33502798d1a0f8 Reviewed-on: https://go-review.googlesource.com/31131 Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: David Chase <drchase@google.com> Reviewed-by: Keith Randall <khr@golang.org>
-
Cherry Zhang authored
The runtime traceback code assumes a non-empty frame has the link register saved on LR architectures. Make sure this is so in the assembler. Also make sure that LR is stored before updating SP, so the traceback code will not see a half-updated stack frame if a signal arrives during the execution of the function prologue. Fixes #17381. Change-Id: I668b04501999b7f9b080275a2d1f8a57029cbbb3 Reviewed-on: https://go-review.googlesource.com/31760 Reviewed-by: Michael Munday <munday@ca.ibm.com>
-
Tom Bergan authored
This interface will be implemented by golang.org/x/net/http2 in https://go-review.googlesource.com/c/29439/. Updates golang/go#13443 Change-Id: Ib6bdd403b0878cfe36fa9875c07c2c7239232556 Reviewed-on: https://go-review.googlesource.com/32012 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
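Once this lands, handler code can use the interface roughly as follows (hedged: at this commit the interface is defined in golang.org/x/net/http2; shown here in the net/http form it was later exposed as, with placeholder certificate paths):

    package main

    import (
        "fmt"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        if p, ok := w.(http.Pusher); ok { // only satisfied over HTTP/2
            // Proactively push a resource the client will need.
            if err := p.Push("/static/app.css", nil); err != nil {
                fmt.Println("push failed:", err)
            }
        }
        fmt.Fprintln(w, "hello")
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil)
    }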
-
Lynn Boger authored
Update the ppc64x disassembly code for use by objdump from golang.org/x/arch/ppc64/ppc64asm commit fcea5ea. Enable the objdump testcase for external linking on ppc64le and make a minor fix to the expected output. Fixes #17447 Change-Id: I769cc7f8bfade594690a476dfe77ab33677ac03b Reviewed-on: https://go-review.googlesource.com/32015 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Russ Cox authored
Fixes #17148. Change-Id: I4c66aa0733c249ee6019d1c4e802a7e30457d4b6 Reviewed-on: https://go-review.googlesource.com/32030 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rob Pike <r@golang.org>
-
Ian Lance Taylor authored
The Dragonfly libc returns a non-zero value for malloc(-1). Fixes #17585. Change-Id: Icfe68011ccbc75c676273ee3c3efdf24a520a004 Reviewed-on: https://go-review.googlesource.com/32050 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-