- 20 Mar, 2015 11 commits
-
Russ Cox authored
The ProgInfo is loaded many times during each analysis pass. Load it once at the beginning (in Flowstart if using that, or explicitly, as in plive.go) and then refer to the cached copy. Removes many calls to proginfo. Makes Prog a little bigger, but the previous CL more than compensates.

Change-Id: If90a12fc6729878fdae10444f9c3bedc8d85026e
Reviewed-on: https://go-review.googlesource.com/7745
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
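A minimal sketch of the caching pattern described above, with illustrative names rather than the actual cmd/internal/obj definitions:

    package main

    // ProgInfo describes instruction properties that analysis passes consult
    // repeatedly (a stand-in for the real structure).
    type ProgInfo struct {
        Flags uint32
    }

    // Prog is one instruction in the compiler's representation; the cached
    // Info field is the addition this change describes.
    type Prog struct {
        As   int16    // opcode
        Info ProgInfo // filled in once per pass, read many times
    }

    // proginfo computes the properties of p (a stand-in for the table lookup).
    func proginfo(p *Prog) ProgInfo { return ProgInfo{Flags: uint32(p.As)} }

    // flowstart caches ProgInfo up front so the rest of the pass reads
    // p.Info instead of calling proginfo at every use.
    func flowstart(progs []*Prog) {
        for _, p := range progs {
            p.Info = proginfo(p)
        }
    }

    func main() {
        flowstart([]*Prog{{As: 1}, {As: 2}})
    }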
-
Russ Cox authored
An interface{} is more in the spirit of the original union. By my calculations, on 64-bit systems this reduces Addr from 120 to 80 bytes, and Prog from 592 to 424 bytes.

Change-Id: I0d7b0981513c2a3c94c9ac76bb4f8816485b5a3c
Reviewed-on: https://go-review.googlesource.com/7744
Reviewed-by: Rob Pike <r@golang.org>
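A toy illustration of where the saving comes from (invented types, not the real Addr): a union-style struct pays for every variant in every value, while an interface{} is a two-word header referring only to the variant in use.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Union style: every variant occupies space in every value.
    type addrUnion struct {
        Dval float64
        Sval string
        U    [16]byte
    }

    // Interface style: two words, pointing at whichever variant is set.
    type addrIface struct {
        Val interface{}
    }

    func main() {
        fmt.Println(unsafe.Sizeof(addrUnion{}), unsafe.Sizeof(addrIface{})) // 40 16 on 64-bit
    }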
-
Russ Cox authored
We're skating on thin ice, and things are finally starting to melt around here. (I want to avoid the debugging session that will happen when someone uses atomicand8 expecting it to be atomic with respect to other operations.)

Change-Id: I254f1582be4eb1f2d7fbba05335a91c6bf0c7f02
Reviewed-on: https://go-review.googlesource.com/7861
Reviewed-by: Minux Ma <minux@golang.org>
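For context, a hedged sketch of what a genuinely atomic and8 involves on machines without byte-wide atomic instructions: a compare-and-swap loop on the aligned 32-bit word containing the byte. Names are illustrative, not the runtime's, and little-endian layout is assumed.

    package main

    import (
        "fmt"
        "sync/atomic"
        "unsafe"
    )

    // atomicAnd8 performs *p &= v atomically with respect to all other
    // operations on the containing word, by CAS-ing the whole word.
    func atomicAnd8(p *uint8, v uint8) {
        // The single-expression conversion keeps the pointer arithmetic
        // valid under the unsafe.Pointer rules.
        word := (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(p)) &^ 3))
        shift := (uintptr(unsafe.Pointer(p)) & 3) * 8 // assumes little-endian
        // mask holds v at the target byte and all-ones elsewhere, so the
        // AND leaves the other three bytes untouched.
        mask := uint32(v)<<shift | ^(uint32(0xFF) << shift)
        for {
            old := atomic.LoadUint32(word)
            if atomic.CompareAndSwapUint32(word, old, old&mask) {
                return
            }
        }
    }

    func main() {
        var w uint32 = 0xFFFFFFFF
        b := (*[4]uint8)(unsafe.Pointer(&w))
        atomicAnd8(&b[1], 0x0F)
        fmt.Printf("%08x\n", w) // ffff0fff on little-endian
    }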
-
Russ Cox authored
Change-Id: I847bf32bd0be913fad277c5e657f44df147eee14
Reviewed-on: https://go-review.googlesource.com/7729
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Russ Cox authored
Change-Id: I99aee6dff97a4abcaf5a9cddb505ba90b65667ea
Reviewed-on: https://go-review.googlesource.com/7728
Reviewed-by: Rob Pike <r@golang.org>
-
Shenghou Ma authored
Fixes #9825.

Change-Id: Id7eeaa14c26201db34db0820371c92a63af485b0
Reviewed-on: https://go-review.googlesource.com/7604
Reviewed-by: Rob Pike <r@golang.org>
-
Shenghou Ma authored
Fixes #10171.

Change-Id: I1b2e30ebbb2b9d66680008674baa96e550efe1f2
Reviewed-on: https://go-review.googlesource.com/7603
Reviewed-by: Adam Langley <agl@golang.org>
Run-TryBot: Adam Langley <agl@golang.org>
-
Russ Cox authored
I think the file ended up in the order of the typedefs instead of the order of the actual struct definitions. You can see where some of the declarations were because some of the comments didn't move. Put things back in the original order.

Change-Id: I0e3703008278b084b632c917cfb73bc81bdd4f23
Reviewed-on: https://go-review.googlesource.com/7743
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
-
Russ Cox authored
This allows gins to let Naddr fill in p.From and p.To directly, avoiding the zeroing and copying of a temporary.

Change-Id: I96d120afe266e68f94d5e82b00886bf6bd458f85
Reviewed-on: https://go-review.googlesource.com/7742
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
-
Russ Cox authored
This way the error messages will show the original file name in addition to the bootstrap file name, so that you have some chance of making the correction in the original instead of the copy (which will be blown away).

Before:
/Users/rsc/g/go/pkg/bootstrap/src/bootstrap/5g/gsubr.go:863: undefined: a

After:
/Users/rsc/g/go/src/cmd/5g/gsubr.go:860[/Users/rsc/g/go/pkg/bootstrap/src/bootstrap/5g/gsubr.go:863]: undefined: a

Change-Id: I8d6006abd9499edb16d9f27fe8b7dc6cae143fca
Reviewed-on: https://go-review.googlesource.com/7741
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
To reduce lock contention in this mode, persistent allocation state is made per-P, which means at most 64 kB overhead x $GOMAXPROCS, which should be completely tolerable.

Change-Id: I34ca95e77d7e67130e30822e5a4aff6772b1a1c5
Reviewed-on: https://go-review.googlesource.com/7740
Reviewed-by: Rick Hudson <rlh@golang.org>
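The general technique, sketched outside the runtime (an illustrative sharded bump allocator, not the actual persistentalloc): give each processor its own allocation state so concurrent callers rarely contend on a shared lock.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // shard is a stand-in for per-P persistent allocation state: a bump
    // allocator handing out slices of a private 64 kB block.
    type shard struct {
        mu   sync.Mutex
        buf  []byte
        used int
    }

    // One shard per logical processor; each caller uses "its" shard, so most
    // allocations take an uncontended lock.
    var shards = make([]shard, runtime.GOMAXPROCS(0))

    // alloc returns n bytes from the caller's shard (n <= 64 kB assumed).
    func alloc(pid, n int) []byte {
        s := &shards[pid%len(shards)]
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.used+n > len(s.buf) {
            s.buf = make([]byte, 64<<10) // fresh block: the per-P overhead
            s.used = 0
        }
        b := s.buf[s.used : s.used+n]
        s.used += n
        return b
    }

    func main() {
        fmt.Println(len(alloc(0, 128)), len(alloc(1, 256)))
    }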
-
- 19 Mar, 2015 15 commits
-
Russ Cox authored
This reverts commit 42fcc6fe.

Change-Id: If860b7cbff5b5d288c1df1405c1765275dfba7cb
Reviewed-on: https://go-review.googlesource.com/7860
Reviewed-by: Russ Cox <rsc@golang.org>
-
Josh Bleecher Snyder authored
This is a follow-up to review comments on CL 7696. I believe that this includes the first regular Go test in the compiler. No functional changes. Passes toolstash -cmp.

Change-Id: Id45f51aa664c5d52ece2a61cd7d8417159ce3cf0
Reviewed-on: https://go-review.googlesource.com/7820
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Quoc-Viet Nguyen authored
The body tag in the pprof template was misplaced.

Change-Id: Icd7948b358f52df1acc7e033ab27a062990ef977
Reviewed-on: https://go-review.googlesource.com/7795
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
David Crawshaw authored
Accidentally turned on in golang.org/cl/7734.

Change-Id: I8d72c279150a0b93732a2ac41b82fbb3cd7bf9d3
Reviewed-on: https://go-review.googlesource.com/7737
Reviewed-by: Burcu Dogan <jbd@google.com>
-
David Crawshaw authored
This CL updates a TODO on a condition excluding a lot of tests on android, clarifying what needs to be done. Several of the tests should be turned off (for example, anything depending on the Go tool); others should be enabled. (See #8345, comment 3 for more details.) Also add iOS, which has the same set of restrictions. Tested manually on linux/amd64, darwin/amd64, android/arm, darwin/arm.

Updates #8345

Change-Id: I147f0a915426e0e0de9a73f9aea353766156609b
Reviewed-on: https://go-review.googlesource.com/7734
Reviewed-by: Burcu Dogan <jbd@google.com>
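A sketch of the kind of guard such tests use (helper name and skip message invented for this example; iOS at the time was reported as darwin/arm):

    package mypkg_test

    import (
        "runtime"
        "testing"
    )

    // skipIfNoGoTool skips tests that shell out to the go tool on platforms
    // where no go tool is available on the device.
    func skipIfNoGoTool(t *testing.T) {
        if runtime.GOOS == "android" || (runtime.GOOS == "darwin" && runtime.GOARCH == "arm") {
            t.Skipf("skipping on %s/%s: no go tool on device", runtime.GOOS, runtime.GOARCH)
        }
    }

    func TestNeedsGoTool(t *testing.T) {
        skipIfNoGoTool(t)
        // ... body that invokes the go tool ...
    }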
-
Robert Griesemer authored
Change-Id: Ifa71fb443a66eb8d7732f3b0c1408947b583c1f1
Reviewed-on: https://go-review.googlesource.com/7800
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
CL 7697 caused doasm failures on 386:

    runtime/append_test.go:1: doasm: notfound ft=2 tt=20 00112 (runtime/iface_test.go:207) CMPL $0, BX 2 20

I think that this should be fixed in liblink, but in the meantime, work around the problem by instead generating CMPL BX, $0.

Change-Id: I9c572f8f15fc159507132cf4ace8d7a328a3eb4a
Reviewed-on: https://go-review.googlesource.com/7810
Reviewed-by: Keith Randall <khr@golang.org>
-
Josh Bleecher Snyder authored
Some type assertions of the form _, ok := i.(T) allow efficient inlining. Such type assertions commonly show up in type switches. For example, with this optimization, using 6g, the length of encoding/binary's intDataSize function shrinks from 2224 to 1728 bytes (-22%).

benchmark                  old ns/op  new ns/op  delta
BenchmarkAssertI2E2Blank   4.67       0.82       -82.44%
BenchmarkAssertE2T2Blank   4.38       0.83       -81.05%
BenchmarkAssertE2E2Blank   3.88       0.83       -78.61%
BenchmarkAssertE2E2        14.2       14.4       +1.41%
BenchmarkAssertE2T2        10.3       10.4       +0.97%
BenchmarkAssertI2E2        13.4       13.3       -0.75%

Change-Id: Ie9798c3e85432bb8e0f2c723afc376e233639df7
Reviewed-on: https://go-review.googlesource.com/7697
Reviewed-by: Keith Randall <khr@golang.org>
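A hedged illustration of the pattern that benefits: a blank type assertion needs only the type check, not the value, so the compiler can inline it as a comparison of type descriptors. The types below are arbitrary examples.

    package main

    import "fmt"

    func describe(i interface{}) string {
        // The _, ok form uses only the boolean result, so no value copy is
        // needed and the check can be inlined.
        if _, ok := i.(fmt.Stringer); ok {
            return "Stringer"
        }
        // Type switches desugar into a sequence of similar checks.
        switch i.(type) {
        case int:
            return "int"
        case string:
            return "string"
        }
        return "unknown"
    }

    func main() {
        fmt.Println(describe(42), describe("hi"))
    }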
-
Josh Bleecher Snyder authored
This is preliminary cleanup for another change. No functional changes. Passes toolstash -cmp.

Change-Id: I11d562fbd6cba5c48d9636f3149e210e5f5308ad
Reviewed-on: https://go-review.googlesource.com/7696
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Austin Clements authored
The distinction between gcWorkProducer and gcWork (producer and consumer) is not serving us as originally intended, so merge these into just gcWork. The original intent was to replace the currentwbuf cache with a gcWorkProducer. However, with gchelpwork (aka mutator assists), mutators can both produce and consume work, so it will make more sense to cache a whole gcWork.

Change-Id: I6e633e96db7cb23a64fbadbfc4607e3ad32bcfb3
Reviewed-on: https://go-review.googlesource.com/7733
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently markroot fetches the wbuf to fill from the per-M wbuf cache. The wbuf cache is primarily meant for the write barrier because it produces very little work on each call. There's little point to using the cache in markroot, since each call to markroot is likely to produce a large amount of work (so the slight win on getting it from the cache instead of from the central wbuf lists doesn't matter), and markroot does not dispose the wbuf back to the cache (so most markroot calls won't get anything from the wbuf cache anyway). Instead, just get the wbuf from the central wbuf lists like other work producers. This will simplify later changes.

Change-Id: I07a18a4335a41e266a6d70aa3a0911a40babce23
Reviewed-on: https://go-review.googlesource.com/7732
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, the GC's concurrent mark phase runs on the system stack. There's no need to do this, and running it this way ties up the entire M and P running the GC by preventing the scheduler from preempting the GC even during concurrent mark. Fix this by running concurrent mark on the regular G stack. It's still non-preemptible because we also set preemptoff around the whole GC process, but this moves us closer to making it preemptible.

Change-Id: Ia9f1245e299b8c5c513a4b1e3ef13eaa35ac5e73
Reviewed-on: https://go-review.googlesource.com/7730
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
"Sync" is not very informative. What's being synchronized and with whom? Update this comment to explain what we're really doing: enabling write barriers. Change-Id: I4f0cbb8771988c7ba4606d566b77c26c64165f0f Reviewed-on: https://go-review.googlesource.com/7700Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently we harvestwbufs the moment we enter the mark phase, even before starting the world again. Since cached wbufs are only filled when we're in mark or mark termination, they should all be empty at this point, making the harvest pointless. Remove the harvest. We should, but do not currently, harvest at the end of the mark phase when we're running out of work to do.

Change-Id: I5f4ba874f14dd915b8dfbc4ee5bb526eecc2c0b4
Reviewed-on: https://go-review.googlesource.com/7669
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Change-Id: I0ad1a81a235c7c067fea2093bbeac4e06a233c10
Reviewed-on: https://go-review.googlesource.com/7661
Reviewed-by: Rick Hudson <rlh@golang.org>
-
- 18 Mar, 2015 10 commits
-
Josh Bleecher Snyder authored
Change-Id: I5a49f56518adf7d64ba8610b51ea1621ad888fc4
Reviewed-on: https://go-review.googlesource.com/7771
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Switch statements do a binary search on long runs of constants. Doing a less-than comparison on a string is much more expensive than on (say) an int. Use a two-part comparison for strings: first compare length, then the strings themselves. Benchmarks from issue 10000:

benchmark                 old ns/op  new ns/op  delta
BenchmarkIf0              3.36       3.35       -0.30%
BenchmarkIf1              4.45       4.47       +0.45%
BenchmarkIf2              5.22       5.26       +0.77%
BenchmarkIf3              5.56       5.58       +0.36%
BenchmarkIf4              10.5       10.6       +0.95%
BenchmarkIfNewStr0        5.26       5.30       +0.76%
BenchmarkIfNewStr1        7.19       7.15       -0.56%
BenchmarkIfNewStr2        7.23       7.16       -0.97%
BenchmarkIfNewStr3        7.47       7.43       -0.54%
BenchmarkIfNewStr4        12.4       12.2       -1.61%
BenchmarkSwitch0          9.56       4.24       -55.65%
BenchmarkSwitch1          8.64       5.58       -35.42%
BenchmarkSwitch2          9.38       10.1       +7.68%
BenchmarkSwitch3          8.66       5.00       -42.26%
BenchmarkSwitch4          7.99       8.18       +2.38%
BenchmarkSwitchNewStr0    11.3       6.12       -45.84%
BenchmarkSwitchNewStr1    11.1       8.33       -24.95%
BenchmarkSwitchNewStr2    11.0       11.1       +0.91%
BenchmarkSwitchNewStr3    10.3       6.93       -32.72%
BenchmarkSwitchNewStr4    11.0       11.2       +1.82%

Fixes #10000

Change-Id: Ia2fffc32e9843425374c274064f709ec7ee46d80
Reviewed-on: https://go-review.googlesource.com/7698
Reviewed-by: Keith Randall <khr@golang.org>
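A hedged sketch of the two-part comparison (hand-written stand-ins, not actual compiler output): both the equality and the ordering tests resolve on the cheap integer length compare before ever touching the string bytes.

    package main

    import "fmt"

    // eqGreen is the shape of the equality test a compiled case might use:
    // the length check short-circuits the expensive byte comparison.
    func eqGreen(s string) bool {
        return len(s) == 5 && s == "green"
    }

    // lessStr orders cases for the binary search by (length, bytes), so most
    // comparisons are settled by the integer compare alone.
    func lessStr(a, b string) bool {
        if len(a) != len(b) {
            return len(a) < len(b)
        }
        return a < b
    }

    func main() {
        fmt.Println(eqGreen("green"), eqGreen("red"), lessStr("red", "blue"))
    }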
-
Josh Bleecher Snyder authored
Change-Id: I79b7ed8f7e78e9d35b5e30ef70b98db64bc68a7b
Reviewed-on: https://go-review.googlesource.com/7720
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Comment changes only.

Change-Id: I56848814564c4aa0988b451df18bebdfc88d6d94
Reviewed-on: https://go-review.googlesource.com/7721
Reviewed-by: Rob Pike <r@golang.org>
-
Dmitry Vyukov authored
One of my earlier versions of finer-grained select locking failed on this test. If you just naively lock and check channels one-by-one, it is possible that you skip over ready channels. Consider that initially c1 is ready and c2 is not. Select checks c2. Then another goroutine makes c2 ready and c1 not ready (in that order). Then select checks c1, concludes that no channels are ready, and executes the default case. But there was no point in time when no channel was ready, so the default case must not be executed.

Change-Id: I3594bf1f36cfb120be65e2474794f0562aebcbbd
Reviewed-on: https://go-review.googlesource.com/7550
Reviewed-by: Russ Cox <rsc@golang.org>
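A simplified sketch of the invariant being tested (not the actual runtime test): the interfering goroutine readies c2 before draining c1, so at every instant at least one channel is ready and the default case must never fire.

    package main

    import "fmt"

    func main() {
        c1 := make(chan int, 1)
        c2 := make(chan int, 1)
        c1 <- 1 // initially c1 is ready, c2 is not

        go func() {
            c2 <- 2 // make c2 ready first...
            <-c1    // ...then make c1 not ready
        }()

        // Whatever interleaving occurs, some channel is always ready here.
        select {
        case v := <-c1:
            fmt.Println("got", v)
        case v := <-c2:
            fmt.Println("got", v)
        default:
            panic("select took default while a channel was ready")
        }
    }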
-
Aaron Jacobs authored
Change-Id: I216511a4bce431de0a468f618a7a7c4da79e2979
Reviewed-on: https://go-review.googlesource.com/7710
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Adam Langley authored
RC4 is frowned upon[1] at this point and major providers are disabling it by default[2]. Those who still need RC4 support in crypto/tls can enable it by specifying the CipherSuites slice in crypto/tls.Config explicitly.

Fixes #10094.

[1] https://tools.ietf.org/html/rfc7465
[2] https://blog.cloudflare.com/killing-rc4-the-long-goodbye/

Change-Id: Ia03a456f7e7a4362b706392b0e3c4cc93ce06f9f
Reviewed-on: https://go-review.googlesource.com/7647
Reviewed-by: Andrew Gerrand <adg@golang.org>
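The opt-in looks roughly like this (the particular suite selection is only an example):

    package main

    import "crypto/tls"

    func main() {
        // Setting CipherSuites explicitly overrides the RC4-free defaults,
        // re-enabling RC4 for clients or servers that still require it.
        cfg := &tls.Config{
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_RSA_WITH_AES_128_CBC_SHA,
                tls.TLS_RSA_WITH_RC4_128_SHA, // opting back in to RC4
            },
        }
        _ = cfg // pass to tls.Dial, tls.Client, tls.Server, etc.
    }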
-
Adam Langley authored
Just so that we notice in the future if another hash function is added without updating this utility function, make it panic when passed an unknown handshake hash function. (Which should never happen.)

Change-Id: I60a6fc01669441523d8c44e8fbe7ed435e7f04c8
Reviewed-on: https://go-review.googlesource.com/7646
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Joël Stemmer <stemmertech@gmail.com>
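The defensive pattern in question, sketched generically (function name and cases are illustrative, not crypto/tls internals; the identifiers shown follow the TLS hash-algorithm registry):

    package main

    import "fmt"

    // lookupHashName maps a handshake hash identifier to a name; the default
    // branch panics so that adding a new hash without updating this function
    // fails loudly during development instead of silently misbehaving.
    func lookupHashName(id uint8) string {
        switch id {
        case 2:
            return "SHA-1"
        case 4:
            return "SHA-256"
        case 5:
            return "SHA-384"
        default:
            panic(fmt.Sprintf("unknown handshake hash %d", id))
        }
    }

    func main() {
        fmt.Println(lookupHashName(4))
    }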
-
Adam Langley authored
crypto/rand.Reader doesn't ensure that short reads don't happen. This change contains a couple of fixups where io.ReadFull wasn't being used with it.

Change-Id: I3855b81f5890f2e703112eeea804aeba07b6a6b8
Reviewed-on: https://go-review.googlesource.com/7645
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
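The fix pattern, sketched with an illustrative buffer size: io.ReadFull keeps reading until the buffer is full or the source fails, so a short read becomes an explicit error instead of silently leaving bytes unset.

    package main

    import (
        "crypto/rand"
        "fmt"
        "io"
    )

    func main() {
        key := make([]byte, 32)

        // Fragile: a bare Read may in principle return n < len(key) with a
        // nil error, leaving part of the key zero.
        //   n, err := rand.Reader.Read(key)

        // Robust: io.ReadFull loops until len(key) bytes arrive or an error
        // occurs, so partial fills cannot pass unnoticed.
        if _, err := io.ReadFull(rand.Reader, key); err != nil {
            panic(err)
        }
        fmt.Printf("%x\n", key)
    }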
-
Ian Lance Taylor authored
For example, "GOARCH=sparc go build -compiler=gccgo" should not crash merely because the architecture character for sparc is not known. Change-Id: I18912c7f5d90ef8f586592235ec9d6e5053e4bef Reviewed-on: https://go-review.googlesource.com/7695Reviewed-by: Russ Cox <rsc@golang.org>
-
- 17 Mar, 2015 4 commits
-
Robert Griesemer authored
Change-Id: I72e8389ec080be8a0119f98df898de6f5510fa4d
Reviewed-on: https://go-review.googlesource.com/7693
Reviewed-by: Alan Donovan <adonovan@google.com>
-
David Chase authored
Change-Id: I19e6542e7d79d60e39d62339da51a827c5aa6d3b
Reviewed-on: https://go-review.googlesource.com/7668
Reviewed-by: Russ Cox <rsc@golang.org>
-
Russ Cox authored
The value in question is really a bit pattern (a pointer with extra bits thrown in), so treat it as a uintptr instead, avoiding the generation of a write barrier when there might not be a p. Also add the obligatory //go:nowritebarrier.

Change-Id: I4ea097945dd7093a140f4740bcadca3ce7191971
Reviewed-on: https://go-review.googlesource.com/7667
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
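A rough illustration of the idea (invented types; note that this kind of uintptr round-trip is only legitimate inside the runtime, which controls the GC, and is flagged by vet in ordinary code): a pointer with flag bits packed in is a bit pattern, not a pointer, so storing it as a uintptr emits no write barrier.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type node struct {
        // next holds *node | flags as a bit pattern; because the field is a
        // uintptr, writes to it generate no write barrier.
        next uintptr
    }

    const flagMask = 0x3 // the low bits of an aligned pointer are free

    func pack(p *node, flags uintptr) uintptr {
        return uintptr(unsafe.Pointer(p)) | (flags & flagMask)
    }

    func unpack(v uintptr) (*node, uintptr) {
        return (*node)(unsafe.Pointer(v &^ flagMask)), v & flagMask
    }

    func main() {
        n := &node{}
        v := pack(n, 1)
        p, f := unpack(v)
        fmt.Println(p == n, f) // true 1
    }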
-
Rick Hudson authored
The GC assumes that there will be no asynchronous write barriers when the world is stopped. This keeps the synchronization between write barriers and the GC simple. However, currently, there are a few places in runtime code where this assumption does not hold. The GC stops the world by collecting all Ps, which stops all user Go code, but small parts of the runtime can run without a P. For example, the code that releases a P must still deschedule its G onto a runnable queue before stopping. Similarly, when a G returns from a long-running syscall, it must run code to reacquire a P. Currently, this code can contain write barriers. This can lead to the GC collecting reachable objects if something like the following sequence of events happens:

1. GC stops the world by collecting all Ps.
2. G #1 returns from a syscall (for example), tries to install a pointer to object X, and calls greyobject on X.
3. greyobject on G #1 marks X, but does not yet add it to a write buffer. At this point, X is effectively black, not grey, even though it may point to white objects.
4. GC reaches X through some other path and calls greyobject on X, but greyobject does nothing because X is already marked.
5. GC completes.
6. greyobject on G #1 adds X to a work buffer, but it's too late.
7. Objects that were reachable only through X are incorrectly collected.

To fix this, we check the invariant that no asynchronous write barriers happen when the world is stopped by checking that write barriers always have a P, and modify all currently known sources of these writes to disable the write barrier. In all modified cases this is safe because the object in question will always be reachable via some other path.

Some of the trace code was turned off, in particular the code that traces returning from a syscall. The GC assumes that as far as the heap is concerned the thread is stopped when it is in a syscall. Upon returning, the trace code must not do any heap writes for the same reasons discussed above.

Fixes #10098
Fixes #9953
Fixes #9951
Fixes #9884
May relate to #9610 #9771

Change-Id: Ic2e70b7caffa053e56156838eb8d89503e3c0c8a
Reviewed-on: https://go-review.googlesource.com/7504
Reviewed-by: Austin Clements <austin@google.com>
-