- 22 Apr, 2015 15 commits
-
-
Michael Hudson-Doyle authored
To make the gcprog for global data containing variables of types defined in other shared libraries, we need to know a lot about those types. So read the value of any symbol with a name starting with "type.". If a type uses a mask, the name of the symbol defining the mask unfortunately cannot be predicted from the type name, so I have to keep track of the addresses of every such symbol and associate them with the type symbols after the fact.

I'm not very happy about this change, but something like this is needed and this is as pleasant as I know how to make it.

Change-Id: I408d831b08b3b31e0610688c41367b23998e975c
Reviewed-on: https://go-review.googlesource.com/8334
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
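A minimal sketch of the after-the-fact association described above; every identifier here is invented for illustration (the real code lives in the linker and uses its own symbol types):

    // sym stands in for a linker symbol; maskRef holds the address of
    // the symbol defining this type's GC mask, read from the shared
    // library's data.
    type sym struct {
        name    string
        maskRef uint64
        gcmask  []byte
    }

    // attachMasks runs after all symbols have been read: mask symbols
    // were recorded by address as they were encountered, so each
    // "type." symbol can now be given its mask.
    func attachMasks(typeSyms []*sym, masksByAddr map[uint64][]byte) {
        for _, s := range typeSyms {
            if m, ok := masksByAddr[s.maskRef]; ok {
                s.gcmask = m
            }
        }
    }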
-
Michael Hudson-Doyle authored
There were 10 implementations of the trivial bool2int function, 9 of which were the only thing in their file. Remove all of them in favor of one in cmd/internal/obj.

Change-Id: I9c51d30716239df51186860b9842a5e9b27264d3
Reviewed-on: https://go-review.googlesource.com/9230
Reviewed-by: Ian Lance Taylor <iant@golang.org>
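The consolidated helper is small enough to quote in full; a sketch of the obvious single implementation (placement in cmd/internal/obj per the message):

    // bool2int returns 1 if b is true and 0 otherwise.
    func bool2int(b bool) int {
        if b {
            return 1
        }
        return 0
    }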
-
Alan Donovan authored
since the "precision" parameter means constant arithmetic is not necessarily exact. As requested by gri, within go/types, the local import name 'exact' has been kept, to reduce the diff with the x/tools branch. This may be changed later. Since the go/types.bash script was already obsolete, I added a comment to this effect. Tested with all.bash. Change-Id: I45153688d9d8afa8384fb15229b0124c686059b4 Reviewed-on: https://go-review.googlesource.com/9242Reviewed-by: Rob Pike <r@golang.org>
-
Srdjan Petrovic authored
We initially added clone0 to handle the case when G or M don't exist, but it turns out that we could have just modified clone. (It also helps that the function we're invoking in clone0 no longer needs arguments.)

As a side-effect, newosproc0 is now supported on all linux archs.

Change-Id: Ie603af75d8f164310fc16446052d83743961f3ca
Reviewed-on: https://go-review.googlesource.com/9164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Robert Griesemer authored
Added a prec parameter to MakeFromLiteral (which currently must always be 0). This will permit go/types to provide an upper limit for the precision of constant values, eventually. Overflows can be returned with a special Overflow value (very much like the current Unknown values).

This is a minimal change that should prevent the need for future backward-incompatible API changes.

Change-Id: I6c9390d7cc4810375e26c53ed3bde5a383392330
Reviewed-on: https://go-review.googlesource.com/9168
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
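A usage sketch under the package's later name go/constant (see the rename above); the only visible change for callers is the new trailing parameter, which must be 0 for now:

    package main

    import (
        "fmt"
        "go/constant"
        "go/token"
    )

    func main() {
        // The final argument is the new prec parameter; passing
        // anything other than 0 is not yet permitted.
        v := constant.MakeFromLiteral("12345678901234567890", token.INT, 0)
        fmt.Println(v) // arbitrary precision: 12345678901234567890
    }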
-
Daniel Morsing authored
In the brief window between getConn and persistConn.roundTrip, a cancel could end up going missing. Fix by making it possible to inspect if a cancel function was cleared and checking if we were canceled before entering roundTrip.

Fixes #10511

Change-Id: If6513e63fbc2edb703e36d6356ccc95a1dc33144
Reviewed-on: https://go-review.googlesource.com/9181
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
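A schematic sketch of the pattern, not net/http's actual internals: a cancel function that was registered and has since been cleared signals that the request was canceled during the window.

    import "sync"

    // request stands in for *http.Request.
    type request struct{}

    type canceler struct {
        mu sync.Mutex
        m  map[*request]func() // registered cancel funcs
    }

    // wasCanceled reports whether req's cancel function was cleared
    // after registration, i.e. the request was canceled before
    // roundTrip got to run.
    func (c *canceler) wasCanceled(req *request) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        _, ok := c.m[req]
        return !ok
    }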
-
Brad Fitzpatrick authored
Previously all errors were 404 errors, even if the real error had nothing to do with a file being non-existent.

Fixes #10283

Change-Id: I5b08b471a9064c347510cfcf8557373704eef7c0
Reviewed-on: https://go-review.googlesource.com/9200
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
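A sketch of the kind of mapping involved, assuming the usual os error predicates; the function name and exact status texts here are illustrative:

    import (
        "net/http"
        "os"
    )

    // toHTTPError maps a file-system error to a sensible status code
    // instead of reporting everything as a 404.
    func toHTTPError(err error) (msg string, code int) {
        if os.IsNotExist(err) {
            return "404 page not found", http.StatusNotFound
        }
        if os.IsPermission(err) {
            return "403 Forbidden", http.StatusForbidden
        }
        // Default to a generic error to avoid leaking details.
        return "500 Internal Server Error", http.StatusInternalServerError
    }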
-
Brad Fitzpatrick authored
There used to be a small window where, if a server declared it would do a keep-alive connection but then actually closed the connection before the roundTrip goroutine was scheduled after being sent a response from the readLoop goroutine, the readLoop goroutine would loop around and block forever reading from a channel, because the numExpectedResponses accounting was done too late.

Fixes #10457

Change-Id: Icbae937ffe83c792c295b7f4fb929c6a24a4f759
Reviewed-on: https://go-review.googlesource.com/9169
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
-
Shenghou Ma authored
Change-Id: Ie8dfdb592ee0bfc736d08c92c3d8413a37b6ac03
Reviewed-on: https://go-review.googlesource.com/9241
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Keith Randall authored
Unlike linux arm32, linux arm64 does not set the condition codes to indicate whether a system call failed or not. We must check if the return value is in the error code range (the same as amd64 does).

Fixes runtime.TestBadOpen test.

Change-Id: I97a8b0a17b5f002a3215c535efa91d199cee3309
Reviewed-on: https://go-review.googlesource.com/9220
Reviewed-by: Russ Cox <rsc@golang.org>
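A Go rendering of the error-range convention the commit relies on (the real check is a couple of assembly instructions; the function name is illustrative and a 64-bit uintptr is assumed): the kernel returns -errno, so any result in [-4095, -1] is an error.

    // checkReturn converts a raw Linux syscall return value into a
    // result plus errno. Returns in [-4095, -1], viewed as signed
    // values, indicate errors.
    func checkReturn(r uintptr) (ret uintptr, errno int) {
        if r > ^uintptr(4095) { // signed value in [-4095, -1]
            return ^uintptr(0), int(-int64(r))
        }
        return r, 0
    }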
-
Josh Bleecher Snyder authored
Several naming changes and a real issue in asmcgocall_errno.

Change-Id: Ieb0a328a168819fe233d74e0397358384d7e71b3
Reviewed-on: https://go-review.googlesource.com/9212
Reviewed-by: Minux Ma <minux@golang.org>
-
Mikio Hara authored
This change replaces server tests with new ones that require features introduced after the go1 release, such as the runtime-integrated network poller, Dialer, etc.

Change-Id: Icf1f94f08f33caacd499cfccbe74cda8d05eed30
Reviewed-on: https://go-review.googlesource.com/9195
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Nigel Tao authored
Fixes #10413

Change-Id: I7a4ecd042c40f786ea7406c670d561b1c1179bf0
Reviewed-on: https://go-review.googlesource.com/8998
Reviewed-by: Rob Pike <r@golang.org>
-
Mikio Hara authored
This change deflakes zero byte read/write tests on datagram sockets, and enables them by default.

Change-Id: I52f1a76f8ff379d90f40a07bb352fae9343ea41a
Reviewed-on: https://go-review.googlesource.com/9194
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Mikio Hara authored
This change excludes the internal UDP header size from the number of bytes reported as written by WriteTo.

Change-Id: I847d57f7f195657b6f14efdf1b4cfab13d4490dd
Reviewed-on: https://go-review.googlesource.com/9196
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
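A simplified sketch of the idea on Plan 9, where each UDP datagram is written with a fixed-size pseudo-header prepended; everything except the header-then-payload layout is invented:

    import "io"

    // writeUDP prepends the kernel's UDP pseudo-header and reports
    // only payload bytes as written.
    func writeUDP(fd io.Writer, hdr, payload []byte) (int, error) {
        buf := make([]byte, len(hdr)+len(payload))
        copy(buf[copy(buf, hdr):], payload)
        n, err := fd.Write(buf)
        if err != nil {
            return 0, err
        }
        return n - len(hdr), nil // exclude the internal header
    }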
-
- 21 Apr, 2015 25 commits
-
-
Josh Bleecher Snyder authored
Fixes #10525.

Change-Id: I92dc87f5d6db396d8dde2220fc37b7093b772d81
Reviewed-on: https://go-review.googlesource.com/9210
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Ian Lance Taylor authored
The purpose of this test is to make sure that -buildmode=c-shared works even when the shared library can be built without invoking cgo.

Change-Id: Id6f95af755992b209aff770440ca9819b74113ab
Reviewed-on: https://go-review.googlesource.com/9166
Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Alan Donovan authored
This reverts commit 8d7d02f1. Reverted because it breaks go/build's "deps" test.

Change-Id: I61db6b2431b3ba0d2b3ece5bab7a04194239c34b
Reviewed-on: https://go-review.googlesource.com/9174
Reviewed-by: Alan Donovan <adonovan@google.com>
-
Alan Donovan authored
This is an upstream change to the tools repo: https://go-review.googlesource.com/#/c/8924/

Change-Id: I01fb1b2e9ec834354994c544f65c8ec8267c9626
Reviewed-on: https://go-review.googlesource.com/8954
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Ian Lance Taylor authored
In external linking mode, the linker automatically imports runtime/cgo. When the user uses non-standard compilation options, they have to know to run go install runtime/cgo. When the go tool adds non-standard compilation options itself, we can't force the user to do that. So add the dependency ourselves.

Bad news: we don't currently have a clean way to know whether we are going to use external linking mode. This CL duplicates logic split between cmd/6l and cmd/internal/ld.

Good news: adding an unnecessary dependency on runtime/cgo does no real harm. We aren't going to force the linker to pull it in, we're just going to build it so that it's available if the linker wants it.

Change-Id: Ide676339d4e8b1c3d9792884a2cea921abb281b7
Reviewed-on: https://go-review.googlesource.com/9115
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
-
Sebastien Binet authored
This change refactors reflect.Value to consistently use arrayAt when an element of an array of bytes is indexed. This effectively replaces:

    arr := unsafe.Pointer(...)
    arri := unsafe.Pointer(uintptr(arr) + uintptr(i)*elementSize)

with:

    arr := unsafe.Pointer(...)
    arri := arrayAt(arr, i, elementSize)

Change-Id: I53ffd0d6de693b43d5c10c0aa4cd6d4f5e95a1e3
Reviewed-on: https://go-review.googlesource.com/9183
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
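The helper itself is presumably the thin wrapper the replacement above suggests; a sketch (the real function is unexported inside package reflect):

    import "unsafe"

    // arrayAt returns the i-th element of p, an array whose elements
    // are eltSize bytes wide, as an unsafe.Pointer.
    func arrayAt(p unsafe.Pointer, i int, eltSize uintptr) unsafe.Pointer {
        return unsafe.Pointer(uintptr(p) + uintptr(i)*eltSize)
    }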
-
Josh Bleecher Snyder authored
This reduces the number of allocations in the compiler while building the stdlib by 15.66%. No functional changes. Passes toolstash -cmp.

Change-Id: Ia21b37134a8906a4e23d53fdc15235b4aa7bbb34
Reviewed-on: https://go-review.googlesource.com/9085
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Sebastien Binet authored
Change-Id: I89704249218d4fdba11463c239c69143f8ad0051
Reviewed-on: https://go-review.googlesource.com/9185
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Austin Clements authored
Currently, the GC controller computes the mutator assist ratio at the beginning of the cycle by estimating that the marked heap size this cycle will be the same as it was the previous cycle. It then uses that assist ratio for the rest of the cycle. However, this means that if the mutator is quickly growing its reachable heap, the heap size is likely to exceed the heap goal and currently there's no additional pressure on mutator assists when this happens. For example, 6g (with GOMAXPROCS=1) frequently exceeds the goal heap size by ~25% because of this.

This change makes GC revise its work estimate and the resulting assist ratio every 10ms during the concurrent mark. Instead of unconditionally using the marked heap size from the last cycle as an estimate for this cycle, it takes the minimum of the previously marked heap and the currently marked heap. As a result, as the cycle approaches or exceeds its heap goal, this will increase the assist ratio to put more pressure on the mutator assist to bring the cycle to an end. For 6g, this causes the GC to always finish within 5% and often within 1% of its heap goal.

Change-Id: I4333b92ad0878c704964be42c655c38a862b4224
Reviewed-on: https://go-review.googlesource.com/9070
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
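A back-of-the-envelope sketch of the quantity being revised; all names are invented and the runtime's controller tracks more state than this. The assist ratio is the scan work still owed divided by the heap headroom left before the goal, so it rises as the goal nears:

    import "math"

    // assistRatio returns the scan work the mutator must perform per
    // byte it allocates for workRemaining to finish before heapLive
    // reaches heapGoal.
    func assistRatio(workRemaining, heapLive, heapGoal uint64) float64 {
        if heapLive >= heapGoal {
            return math.MaxFloat64 // at or past the goal: maximum pressure
        }
        return float64(workRemaining) / float64(heapGoal-heapLive)
    }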
-
Austin Clements authored
Currently, in accordance with the GC pacing proposal, we schedule background marking with a goal of achieving 25% utilization *total* between mutator assists and background marking. This is stricter than was set out in the Go 1.5 proposal, which suggests that the garbage collector can use 25% just for itself and anything the mutator does to help out is on top of that. It also has several technical drawbacks.

Because mutator assist time is constantly changing and we can't have instantaneous information on background marking time, it effectively requires hitting a moving target based on out-of-date information. This works out in the long run, but works poorly for short GC cycles and on short time scales. Also, this requires time-multiplexing all Ps between the mutator and background GC since the goal utilization of background GC constantly fluctuates. This results in a complicated scheduling algorithm, poor affinity, and extra overheads from context switching.

This change modifies the way we schedule and run background marking so that background marking always consumes 25% of GOMAXPROCS and mutator assist is in addition to this. This enables a much more robust scheduling algorithm where we pre-determine the number of Ps we should dedicate to background marking as well as the utilization goal for a single floating "remainder" mark worker.

Change-Id: I187fa4c03ab6fe78012a84d95975167299eb9168
Reviewed-on: https://go-review.googlesource.com/9013
Reviewed-by: Rick Hudson <rlh@golang.org>
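A sketch of the pre-determined split described above (constant and names illustrative): whole Ps get dedicated mark workers and the remainder becomes the duty cycle of the single floating worker.

    const gcGoalUtilization = 0.25 // fixed background mark target

    // markWorkerSplit divides the utilization goal into a number of
    // fully dedicated worker Ps plus one fractional "remainder" worker.
    func markWorkerSplit(gomaxprocs int) (dedicated int, fractional float64) {
        target := gcGoalUtilization * float64(gomaxprocs)
        dedicated = int(target)
        fractional = target - float64(dedicated)
        return dedicated, fractional
    }

With GOMAXPROCS=4 this yields one dedicated worker and no fractional one; with GOMAXPROCS=1 it yields no dedicated workers and a single worker running 25% of the time.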
-
Austin Clements authored
Currently, the concurrent sweep follows a 1:1 rule: when allocation needs a span, it sweeps a span (likewise, when a large allocation needs N pages, it sweeps until it frees N pages). This rule worked well for the STW collector (especially when GOGC==100) because it did no more sweeping than necessary to keep the heap from growing, would generally finish sweeping just before GC, and ensured good temporal locality between sweeping a page and allocating from it.

It doesn't work well with concurrent GC. Since concurrent GC requires starting GC earlier (sometimes much earlier), the sweep often won't be done when GC starts. Unfortunately, the first thing GC has to do is finish the sweep. In the meantime, the mutator can continue allocating, pushing the heap size even closer to the goal size. This worked okay with the 7/8ths trigger, but it gets into a vicious cycle with the GC trigger controller: if the mutator is allocating quickly and driving the trigger lower, more and more sweep work will be left to GC; this both causes GC to take longer (allowing the mutator to allocate more during GC) and delays the start of the concurrent mark phase, which throws off the GC controller's statistics and generally causes it to push the trigger even lower.

As an example of a particularly bad case, the garbage benchmark with GOMAXPROCS=4 and -benchmem 512 (MB) spends the first 0.4-0.8 seconds of each GC cycle sweeping, during which the heap grows by between 109MB and 252MB.

To fix this, this change replaces the 1:1 sweep rule with a proportional sweep rule. At the end of GC, GC knows exactly how much heap allocation will occur before the next concurrent GC as well as how many span pages must be swept. This change computes this "sweep ratio" and, when mallocgc asks for a span, the mcentral sweeps enough spans to bring the swept span count into ratio with the allocated byte count.

On the benchmark from above, this entirely eliminates sweeping at the beginning of GC, which reduces the time between startGC readying the GC goroutine and GC stopping the world for sweep termination to ~100µs, during which the heap grows at most 134KB.

Change-Id: I35422d6bba0c2310d48bb1f8f30a72d29e98c1af
Reviewed-on: https://go-review.googlesource.com/8921
Reviewed-by: Rick Hudson <rlh@golang.org>
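A sketch of the pacing arithmetic (names invented): given the sweep ratio fixed at the end of the previous GC, each allocation sweeps just enough pages to keep swept pages proportional to allocated bytes.

    // sweepPagesOwed returns how many span pages must be swept now so
    // that total swept pages stay in ratio with bytes allocated since
    // the last GC.
    func sweepPagesOwed(sweepRatio float64, allocatedBytes, sweptPages uintptr) uintptr {
        target := uintptr(sweepRatio * float64(allocatedBytes))
        if target <= sweptPages {
            return 0 // sweeping is ahead of allocation; nothing owed
        }
        return target - sweptPages
    }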
-
Austin Clements authored
This field used to decrease with sweeps (and potentially go negative). Now it is always zero or positive, so change it to a uintptr so it meshes better with other memory stats.

Change-Id: I6a50a956ddc6077eeaf92011c51743cb69540a3c
Reviewed-on: https://go-review.googlesource.com/8899
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, concurrent GC triggers at a fixed 7/8*GOGC heap growth. For mutators that allocate slowly, this means GC will trigger too early and run too often, wasting CPU time on GC. For mutators that allocate quickly, this means GC will trigger too late, causing the program to exceed the GOGC heap growth goal and/or to exceed CPU goals because of a high mutator assist ratio.

This change adds a feedback control loop to dynamically adjust the GC trigger from cycle to cycle. By monitoring the heap growth and GC CPU utilization from cycle to cycle, this adjusts the Go garbage collector to target the GOGC heap growth goal and the 25% CPU utilization goal.

Change-Id: Ic82eef288c1fa122f73b69fe604d32cbb219e293
Reviewed-on: https://go-review.googlesource.com/8851
Reviewed-by: Rick Hudson <rlh@golang.org>
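An intentionally crude sketch of one feedback step; the actual controller in this change weighs heap and CPU error in its own way, and every constant and name here is invented:

    // nextTriggerRatio nudges the trigger (as a fraction of the GOGC
    // heap growth goal) based on how the last cycle actually went.
    func nextTriggerRatio(trigger, actualGrowth, goalGrowth, gcCPU, goalCPU float64) float64 {
        const gain = 0.5
        // errTerm > 0 when the last cycle came in under its goals, so
        // the trigger can move later (less frequent GC); errTerm < 0
        // moves it earlier.
        errTerm := (goalGrowth - actualGrowth) + (goalCPU - gcCPU)
        return trigger + gain*errTerm
    }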
-
Austin Clements authored
Currently, the concurrent mark phase is performed by the main GC goroutine. Prior to the previous commit enabling preemption, this caused marking to always consume 1/GOMAXPROCS of the available CPU time. If GOMAXPROCS=1, this meant background GC would consume 100% of the CPU (effectively a STW). If GOMAXPROCS>4, background GC would use less than the goal of 25%. If GOMAXPROCS=4, background GC would use the goal 25%, but if the mutator wasn't using the remaining 75%, background marking wouldn't take advantage of the idle time. Enabling preemption in the previous commit made GC miss CPU targets in completely different ways, but set us up to bring everything back in line.

This change replaces the fixed GC goroutine with per-P background mark goroutines. Once started, these goroutines don't go in the standard run queues; instead, they are scheduled specially such that the time spent in mutator assists and the background mark goroutines totals 25% of the CPU time available to the program. Furthermore, this lets background marking take advantage of idle Ps, which significantly boosts GC performance for applications that under-utilize the CPU.

This requires also changing how time is reported for gctrace, so this change splits the concurrent mark CPU time into assist/background/idle scanning.

This also requires increasing the size of the StackRecord slice used in a GoroutineProfile test.

Change-Id: I0936ff907d2cee6cb687a208f2df47e8988e3157
Reviewed-on: https://go-review.googlesource.com/8850
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, the entire GC process runs with g.m.preemptoff set. In the concurrent phases, the parts that actually need preemption disabled are run on a system stack and there's no overall need to stay on the same M or P during the concurrent phases. Hence, move the setting of g.m.preemptoff to when we start mark termination, at which point we really do need preemption disabled.

This dramatically changes the scheduling behavior of the concurrent mark phase. Currently, since this is non-preemptible, concurrent mark gets one dedicated P (so 1/GOMAXPROCS utilization). With this change, the GC goroutine is scheduled like any other goroutine during concurrent mark, so it gets 1/<runnable goroutines> utilization.

You might think it's not even necessary to set g.m.preemptoff at that point since the world is stopped, but stackalloc/stackfree use this as a signal that the per-P pools are not safe to access without synchronization.

Change-Id: I08aebe8179a7d304650fb8449ff36262b3771099
Reviewed-on: https://go-review.googlesource.com/8839
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
This time is tracked per P and periodically flushed to the global controller state. This will be used to compute mutator assist utilization in order to schedule background GC work.

Change-Id: Ib94f90903d426a02cf488bf0e2ef67a068eb3eec
Reviewed-on: https://go-review.googlesource.com/8837
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, mutator allocation periodically assists the garbage collector by performing a small, fixed amount of scanning work. However, to control heap growth, mutators need to perform scanning work *proportional* to their allocation rate. This change implements proportional mutator assists.

This uses the scan work estimate computed by the garbage collector at the beginning of each cycle to compute how much scan work must be performed per allocation byte to complete the estimated scan work by the time the heap reaches the goal size. When allocation triggers an assist, it uses this ratio and the amount allocated since the last assist to compute the assist work, then attempts to steal as much of this work as possible from the background collector's credit, and then performs any remaining scan work itself.

Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
Reviewed-on: https://go-review.googlesource.com/8836
Reviewed-by: Rick Hudson <rlh@golang.org>
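A schematic sketch of the assist path; stealBackgroundCredit and the gcDrainN signature below are stand-ins, not the runtime's real API:

    // Stand-ins for runtime internals; both hypothetical.
    var stealBackgroundCredit func(max int64) int64
    var gcDrainN func(scanWork int64)

    // assist is called on allocation: convert bytes allocated since
    // the last assist into scan-work debt, pay what we can from
    // background credit, and scan the rest ourselves.
    func assist(bytesAllocated uintptr, assistRatio float64) {
        debt := int64(assistRatio * float64(bytesAllocated))
        debt -= stealBackgroundCredit(debt)
        if debt > 0 {
            gcDrainN(debt) // perform the remaining scan work in-line
        }
    }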
-
Austin Clements authored
Currently, the "n" in gcDrainN is in terms of objects to scan. This is used by gchelpwork to perform a limited amount of work on allocation, but is a pretty arbitrary way to bound this amount of work since the number of objects has little relation to how long they take to scan. Modify gcDrainN to perform a fixed amount of scan work instead. For now, gchelpwork still performs a fairly arbitrary amount of scan work, but at least this is much more closely related to how long the work will take. Shortly, we'll use this to precisely control the scan work performed by mutator assists during allocation to achieve the heap size goal. Change-Id: I3cd07fe0516304298a0af188d0ccdf621d4651cc Reviewed-on: https://go-review.googlesource.com/8835Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
This tracks scan work done by background GC in a global pool. Mutator assists will draw on this credit to avoid doing work when background GC is staying ahead.

Unlike the other GC controller tracking variables, this will be both written and read throughout the cycle. Hence, we can't arbitrarily delay updates like we can for scan work and bytes marked. However, we still want to minimize contention, so this global credit pool is allowed some error from the "true" amount of credit. Background GC accumulates credit locally up to a limit and only then flushes to the global pool. Similarly, mutator assists will draw from the credit pool in batches.

Change-Id: I1aa4fc604b63bf53d1ee2a967694dffdfc3e255e
Reviewed-on: https://go-review.googlesource.com/8834
Reviewed-by: Rick Hudson <rlh@golang.org>
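A sketch of the batched flush discipline, using sync/atomic in place of the runtime's internal atomics; the threshold and names are invented:

    import "sync/atomic"

    var bgScanCredit int64 // global pool; some error from the true credit is tolerated

    const creditFlushThreshold = 1 << 11

    // addScanWork accumulates credit locally and publishes it to the
    // global pool only when the local batch is large enough.
    func addScanWork(localCredit *int64, n int64) {
        *localCredit += n
        if *localCredit >= creditFlushThreshold {
            atomic.AddInt64(&bgScanCredit, *localCredit)
            *localCredit = 0
        }
    }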
-
Austin Clements authored
This implements tracking the scan work ratio of a GC cycle and using this to estimate the scan work that will be required by the next GC cycle. Currently this estimate is unused; it will be used to drive mutator assists.

Change-Id: I8685b59d89cf1d83eddfc9b30d84da4e3a7f4b72
Reviewed-on: https://go-review.googlesource.com/8833
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
This tracks the amount of scan work in terms of scanned pointers during the concurrent mark phase. We'll use this information to estimate scan work for the next cycle.

Currently this aggregates the work counter in gcWork, and dispose atomically flushes it into a global work counter. dispose happens relatively infrequently, so the contention on the global counter should be low. If this turns out to be an issue, we can reduce the number of disposes, and if it's still a problem, we can switch to per-P counters.

Change-Id: Iac0364c466ee35fab781dbbbe7970a5f3c4e1fc1
Reviewed-on: https://go-review.googlesource.com/8832
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
These currently use portable implementations in terms of their uint64 counterparts.

Change-Id: Icba5f7134cfcf9d0429edabcdd73091d97e5e905
Reviewed-on: https://go-review.googlesource.com/8831
Reviewed-by: Rick Hudson <rlh@golang.org>
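Presumably along these lines; a sketch of one portable wrapper layered on the runtime's existing uint64 primitive (the reinterpretation cast is the whole trick):

    import "unsafe"

    //go:nosplit
    func xaddint64(ptr *int64, delta int64) int64 {
        // Reuse the uint64 primitive; two's complement makes the
        // reinterpretation safe for addition.
        return int64(xadd64((*uint64)(unsafe.Pointer(ptr)), delta))
    }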
-
Sebastien Binet authored
This change exposes reflect.ArrayOf to create new reflect.Type array types at runtime, when given a reflect.Type element.

- reflect: implement ArrayOf
- reflect: tests for ArrayOf
- runtime: document that typeAlg is used by reflect and must be kept in sync

Fixes #5996.

Change-Id: I5d07213364ca915c25612deea390507c19461758
Reviewed-on: https://go-review.googlesource.com/4111
Reviewed-by: Keith Randall <khr@golang.org>
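A usage example of the new API: reflect.ArrayOf takes a length and an element type and returns the corresponding array type.

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        t := reflect.ArrayOf(4, reflect.TypeOf(byte(0)))
        fmt.Println(t) // [4]uint8

        v := reflect.New(t).Elem() // a settable [4]byte value
        v.Index(2).SetUint(7)
        fmt.Println(v.Interface()) // [0 0 7 0]
    }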
-
Matthew Dempsky authored
Update #10512.

Change-Id: Ifdc59c3a5d8aba420b34ae4e37b3c2315dd7c783
Reviewed-on: https://go-review.googlesource.com/9162
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Mikio Hara authored
Fixes #10516.

Change-Id: Ia93f53d4e752bbcca6112bc75f6c3dbe30b90dac
Reviewed-on: https://go-review.googlesource.com/9192
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-