- 21 Apr, 2015 5 commits
-
-
Austin Clements authored
This time is tracked per P and periodically flushed to the global controller state. This will be used to compute mutator assist utilization in order to schedule background GC work.

Change-Id: Ib94f90903d426a02cf488bf0e2ef67a068eb3eec
Reviewed-on: https://go-review.googlesource.com/8837
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, mutator allocation periodically assists the garbage collector by performing a small, fixed amount of scanning work. However, to control heap growth, mutators need to perform scanning work *proportional* to their allocation rate. This change implements proportional mutator assists.

This uses the scan work estimate computed by the garbage collector at the beginning of each cycle to compute how much scan work must be performed per allocation byte to complete the estimated scan work by the time the heap reaches the goal size. When allocation triggers an assist, it uses this ratio and the amount allocated since the last assist to compute the assist work, then attempts to steal as much of this work as possible from the background collector's credit, and then performs any remaining scan work itself.

Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
Reviewed-on: https://go-review.googlesource.com/8836
Reviewed-by: Rick Hudson <rlh@golang.org>
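The pacing arithmetic this describes can be sketched in a few lines. The type and field names below (gcController, scanWorkExpected, bgScanCredit, and so on) are illustrative stand-ins, not the runtime's actual identifiers; this is a toy model of the idea, not the implementation.

```go
package main

import "fmt"

// gcController is a toy model of the pacing state described above.
// Field names are illustrative, not the runtime's identifiers.
type gcController struct {
	scanWorkExpected int64 // estimated scan work for this cycle
	heapLive         int64 // current live/allocated heap bytes
	heapGoal         int64 // heap size at which the cycle should finish
	bgScanCredit     int64 // scan work banked by the background collector
}

// assistRatio returns how much scan work a mutator owes per byte it
// allocates, so the expected scan work finishes by the time heapLive
// reaches heapGoal.
func (c *gcController) assistRatio() float64 {
	headroom := c.heapGoal - c.heapLive
	if headroom <= 0 {
		headroom = 1 // past the goal already: assist as hard as possible
	}
	return float64(c.scanWorkExpected) / float64(headroom)
}

// assist charges an allocation of allocBytes. It first steals banked
// background credit, then returns the scan work the mutator still owes
// and must perform itself.
func (c *gcController) assist(allocBytes int64) (selfScan int64) {
	debt := int64(c.assistRatio() * float64(allocBytes))
	steal := debt
	if steal > c.bgScanCredit {
		steal = c.bgScanCredit
	}
	c.bgScanCredit -= steal
	return debt - steal
}

func main() {
	c := &gcController{
		scanWorkExpected: 1 << 20,
		heapLive:         60 << 20,
		heapGoal:         64 << 20,
		bgScanCredit:     512,
	}
	fmt.Println("self-scan work owed for a 64 KiB allocation:", c.assist(64<<10))
}
```

The key property is that a mutator allocating faster accumulates debt faster, so it ends up doing proportionally more of the scan work.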
-
Austin Clements authored
This tracks scan work done by background GC in a global pool. Mutator assists will draw on this credit to avoid doing work when background GC is staying ahead.

Unlike the other GC controller tracking variables, this will be both written and read throughout the cycle. Hence, we can't arbitrarily delay updates like we can for scan work and bytes marked. However, we still want to minimize contention, so this global credit pool is allowed some error from the "true" amount of credit. Background GC accumulates credit locally up to a limit and only then flushes to the global pool. Similarly, mutator assists will draw from the credit pool in batches.

Change-Id: I1aa4fc604b63bf53d1ee2a967694dffdfc3e255e
Reviewed-on: https://go-review.googlesource.com/8834
Reviewed-by: Rick Hudson <rlh@golang.org>
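A minimal sketch of the batched-credit idea, assuming a package-level bgScanCredit counter and an arbitrary creditFlushLimit; neither the names nor the values are taken from the runtime.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Global credit pool, allowed to lag the "true" credit because workers
// flush in batches. The name and limit are illustrative.
var bgScanCredit int64

const creditFlushLimit = 1024 // flush local credit once it exceeds this

// worker models a background GC worker accumulating credit locally.
type worker struct{ localCredit int64 }

// didScanWork records n units of scan work locally and flushes to the
// global pool only when enough has accumulated, keeping contention low.
func (w *worker) didScanWork(n int64) {
	w.localCredit += n
	if w.localCredit > creditFlushLimit {
		atomic.AddInt64(&bgScanCredit, w.localCredit)
		w.localCredit = 0
	}
}

// stealCredit is what a mutator assist would call: take up to want
// credit from the global pool and return how much was actually taken.
func stealCredit(want int64) int64 {
	for {
		have := atomic.LoadInt64(&bgScanCredit)
		if have <= 0 {
			return 0
		}
		steal := want
		if steal > have {
			steal = have
		}
		if atomic.CompareAndSwapInt64(&bgScanCredit, have, have-steal) {
			return steal
		}
	}
}

func main() {
	w := &worker{}
	for i := 0; i < 5; i++ {
		w.didScanWork(300)
	}
	fmt.Println("stolen:", stealCredit(500), "remaining:", atomic.LoadInt64(&bgScanCredit))
}
```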
-
Austin Clements authored
This implements tracking the scan work ratio of a GC cycle and using this to estimate the scan work that will be required by the next GC cycle. Currently this estimate is unused; it will be used to drive mutator assists.

Change-Id: I8685b59d89cf1d83eddfc9b30d84da4e3a7f4b72
Reviewed-on: https://go-review.googlesource.com/8833
Reviewed-by: Rick Hudson <rlh@golang.org>
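As a rough illustration of the ratio-based estimate (estimateScanWork and its parameters are hypothetical names for this sketch, not runtime functions):

```go
package main

import "fmt"

// estimateScanWork sketches the proportional estimate: if the cycle
// just completed performed scanWork units of work while markedBytes of
// heap were marked, assume the same work-per-byte ratio for the heap
// the next cycle is expected to cover.
func estimateScanWork(scanWork, markedBytes, nextCycleHeap int64) int64 {
	if markedBytes == 0 {
		return 0
	}
	ratio := float64(scanWork) / float64(markedBytes)
	return int64(ratio * float64(nextCycleHeap))
}

func main() {
	fmt.Println("estimated scan work:", estimateScanWork(500_000, 64<<20, 80<<20))
}
```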
-
Austin Clements authored
This tracks the amount of scan work in terms of scanned pointers during the concurrent mark phase. We'll use this information to estimate scan work for the next cycle.

Currently this work is accumulated in a counter in gcWork, and dispose atomically aggregates it into a global work counter. dispose happens relatively infrequently, so the contention on the global counter should be low. If this turns out to be an issue, we can reduce the number of disposes, and if it's still a problem, we can switch to per-P counters.

Change-Id: Iac0364c466ee35fab781dbbbe7970a5f3c4e1fc1
Reviewed-on: https://go-review.googlesource.com/8832
Reviewed-by: Rick Hudson <rlh@golang.org>
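The accumulate-locally, flush-at-dispose pattern can be sketched as follows; gcWorkSketch and globalScanWork are illustrative names, and the real gcWork also carries the work buffers themselves.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Global counter, updated only at dispose time, as described above.
var globalScanWork int64

// gcWorkSketch stands in for the per-worker gcWork state; only the
// scan work counter is modeled here.
type gcWorkSketch struct{ scanWork int64 }

// scanObject pretends to scan an object with n pointers, accumulating
// the work locally so the hot path touches no shared memory.
func (w *gcWorkSketch) scanObject(n int64) { w.scanWork += n }

// dispose flushes the local counter into the global one. This happens
// rarely, so contention on the atomic add stays low.
func (w *gcWorkSketch) dispose() {
	atomic.AddInt64(&globalScanWork, w.scanWork)
	w.scanWork = 0
}

func main() {
	var w gcWorkSketch
	for i := 0; i < 1000; i++ {
		w.scanObject(8)
	}
	w.dispose()
	fmt.Println("total scan work:", atomic.LoadInt64(&globalScanWork))
}
```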
-
- 17 Apr, 2015 1 commit
-
-
Russ Cox authored
This memory is untyped and can't be used anymore. The next version of SWIG won't need it.

Change-Id: I592b287c5f5186975ee09a9b28d8efe3b57134e7
Reviewed-on: https://go-review.googlesource.com/8956
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 10 Apr, 2015 4 commits
-
-
Austin Clements authored
Currently, when allocation reaches the concurrent GC trigger size, we start the concurrent collector by ready'ing its G. This simply puts it on the end of the P's run queue, which means we may not actually start GC for some time as the current G continues to run and then the P drains other Gs already on its run queue. Since the mutator can continue to allocate, the heap can potentially be much larger than we intended by the time GC actually starts. Furthermore, how much larger is difficult to predict since it depends on the scheduler.

Fix this by preempting the current G and switching directly to the concurrent GC G as soon as we reach the trigger heap size.

On the garbage benchmark from the benchmarks subrepo with GOMAXPROCS=4, this reduces the time from triggering the GC to the beginning of sweep termination by 10 to 30 milliseconds, which reduces allocation after the trigger by up to 10MB (a large fraction of the 64MB live heap the benchmark tries to maintain).

One other known source of delay before we "really" start GC is the sweep finalization performed before sweep termination. This has similar negative effects on heap size and predictability, but is an orthogonal problem. This change adds a TODO for this.

Change-Id: I8bae98cb43685c1bf353ff55868e4647e3743c47
Reviewed-on: https://go-review.googlesource.com/8513
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
These were appropriate for STW GC, since it interrupted the allocating Goroutine, but don't apply to concurrent GC, which runs on its own Goroutine. Forced GC is still STW, but it makes sense to attribute the GC to the goroutine that called runtime.GC().

Change-Id: If12418ca66dc7e53b8b16025af4e03adb5d9577e
Reviewed-on: https://go-review.googlesource.com/8715
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Michael Hudson-Doyle authored
'themoduledata' doesn't really make sense now that we support multiple moduledata objects.

Change-Id: I8263045d8f62a42cb523502b37289b0fba054f62
Reviewed-on: https://go-review.googlesource.com/8521
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Michael Hudson-Doyle authored
This changes all the places that consult themoduledata to consult a linked list of moduledata objects, as will be necessary for -linkshared to work. Obviously, as there is as yet no way of adding moduledata objects to this list, all this change achieves right now is wasting a few instructions here and there.

Change-Id: I397af7f60d0849b76aaccedf72238fe664867051
Reviewed-on: https://go-review.googlesource.com/8231
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
- 09 Apr, 2015 1 commit
-
-
Austin Clements authored
Currently, the initial heap size reported in the gctrace line is the heap_live right before sweep termination. However, we triggered GC when heap_live reached next_gc, and there may have been significant allocation between that point and the beginning of sweep termination. Ideally these would be essentially the same, but currently there's scheduler delay when readying the GC goroutine as well as delay from background sweep finalization.

We should fix this delay, but in the mean time, to give the user a better idea of how much the heap grew during the whole of garbage collection, report the trigger rather than what the heap size happened to be after the garbage collector finished rolling out of bed. This will also be more useful for heap growth plots.

Change-Id: I08476b9fbcfb2de90592405e9c9f434dfb9eb1f8
Reviewed-on: https://go-review.googlesource.com/8512
Reviewed-by: Rick Hudson <rlh@golang.org>
-
- 06 Apr, 2015 4 commits
-
-
Austin Clements authored
When the gctrace GODEBUG option is enabled, it will now report three heap sizes: the heap size at the beginning of the GC cycle, the heap size at the end of the GC cycle before sweeping, and the marked heap size, which is the amount of heap that will be retained until the next GC cycle.

Change-Id: Ie13f8a6d5c609bc9cc47c7555960ab55b37b5f1c
Reviewed-on: https://go-review.googlesource.com/8430
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
In the STW collector, next_gc was both the heap size to trigger GC at as well as the goal heap size. Early in the concurrent collector's development, next_gc was the goal heap size, but was also used as the heap size to trigger GC at. This meant we always overshot the goal because of allocation during concurrent GC.

Currently, next_gc is still the goal heap size, but we trigger concurrent GC at 7/8*GOGC heap growth. This complicates shouldtriggergc, but was necessary because of the incremental maintenance of next_gc.

Now we simply compute next_gc for the next cycle during mark termination. Hence, it's now easy to take the simpler route and redefine next_gc as the heap size at which the next GC triggers. We can directly compute this with the 7/8 backoff during mark termination and shouldtriggergc can simply test if the live heap size has grown over the next_gc trigger. This will also simplify later changes once we start setting next_gc in more sophisticated ways.

Change-Id: I872be4ae06b4f7a0d7f7967360a054bd36b90eea
Reviewed-on: https://go-review.googlesource.com/8420
Reviewed-by: Russ Cox <rsc@golang.org>
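The trigger arithmetic, under the assumption that GOGC is a percentage and the 7/8 backoff applies to the GOGC growth as described above, might look like this sketch (nextGCTrigger and shouldTriggerGC are hypothetical helpers, not the runtime's functions):

```go
package main

import "fmt"

// nextGCTrigger sketches the arithmetic: the goal heap size is the
// marked heap grown by gogc percent, and the concurrent trigger backs
// off to 7/8 of that growth so the cycle can finish before the goal.
func nextGCTrigger(markedBytes, gogc int64) (trigger, goal int64) {
	growth := markedBytes * gogc / 100
	goal = markedBytes + growth
	trigger = markedBytes + growth*7/8
	return trigger, goal
}

// shouldTriggerGC then becomes a single comparison against the live heap.
func shouldTriggerGC(heapLive, trigger int64) bool { return heapLive >= trigger }

func main() {
	trigger, goal := nextGCTrigger(64<<20, 100)
	fmt.Printf("goal %d MiB, trigger %d MiB, trigger now: %v\n",
		goal>>20, trigger>>20, shouldTriggerGC(120<<20, trigger))
}
```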
-
Austin Clements authored
Currently there are two main consumers of memstats.heap_alloc: updatememstats (aka ReadMemStats) and shouldtriggergc. updatememstats recomputes heap_alloc from the ground up, so we don't need to keep heap_alloc up to date for it. shouldtriggergc wants to know how many bytes were marked by the previous GC plus how many bytes have been allocated since then, but this *isn't* what heap_alloc tracks. heap_alloc also includes objects that are not marked and haven't yet been swept.

Introduce a new memstat called heap_live that actually tracks what shouldtriggergc wants to know and stop keeping heap_alloc up to date.

Unlike heap_alloc, heap_live follows a simple sawtooth that drops during each mark termination and increases monotonically between GCs. heap_alloc, on the other hand, has much more complicated behavior: it may drop during sweep termination, slowly decreases from background sweeping between GCs, is roughly unaffected by allocation as long as there are unswept spans (because we sweep and allocate at the same rate), and may go up after background sweeping is done depending on the GC trigger.

heap_live simplifies computing next_gc and using it to figure out when to trigger garbage collection. Currently, we guess next_gc at the end of a cycle and update it as we sweep and get a better idea of how much heap was marked. Now, since we're directly tracking how much heap is marked, we can directly compute next_gc.

This also corrects bugs that could cause us to trigger GC early. Currently, in any case where sweep termination actually finds spans to sweep, heap_alloc is an overestimation of live heap, so we'll trigger GC too early. heap_live, on the other hand, is unaffected by sweeping.

Change-Id: I1f96807b6ed60d4156e8173a8e68745ffc742388
Reviewed-on: https://go-review.googlesource.com/8389
Reviewed-by: Russ Cox <rsc@golang.org>
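A toy model of the heap_live sawtooth described here; heapLiveSketch and its methods are invented for illustration and collapse a lot of detail (spans, sweeping) into just two update points.

```go
package main

import "fmt"

// heapLiveSketch models the sawtooth: heap_live only grows between GCs
// as memory is handed out for allocation, and drops back to the marked
// size at mark termination. Names and the trigger policy are
// simplifications, not runtime behavior.
type heapLiveSketch struct {
	heapLive int64 // bytes believed live or not yet reclaimed
	nextGC   int64 // heap_live value at which the next cycle triggers
}

// allocate accounts for newly allocated bytes and reports whether the
// trigger has been reached (the shouldtriggergc test becomes a single
// comparison).
func (h *heapLiveSketch) allocate(bytes int64) (triggerGC bool) {
	h.heapLive += bytes
	return h.heapLive >= h.nextGC
}

// markTermination resets the sawtooth: only the marked bytes remain
// live, and the next trigger is computed directly from them.
func (h *heapLiveSketch) markTermination(markedBytes, trigger int64) {
	h.heapLive = markedBytes
	h.nextGC = trigger
}

func main() {
	h := &heapLiveSketch{}
	h.markTermination(4<<20, 7<<20) // 4 MiB marked, trigger at 7 MiB
	fmt.Println("trigger after allocating 4 MiB more:", h.allocate(4<<20))
}
```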
-
Austin Clements authored
This tracks the number of heap bytes marked by a GC cycle. We'll use this information to precisely trigger the next GC cycle.

Currently this count is accumulated in gcWork, and dispose atomically aggregates it into a global counter. dispose happens relatively infrequently, so the contention on the global counter should be low. If this turns out to be an issue, we can reduce the number of disposes, and if it's still a problem, we can switch to per-P counters.

Change-Id: I1bc377cb2e802ef61c2968602b63146d52e7f5db
Reviewed-on: https://go-review.googlesource.com/8388
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 02 Apr, 2015 2 commits
-
-
Austin Clements authored
This tracks both total CPU time used by GC and the total time available to all Ps since the beginning of the program and uses this to derive a cumulative CPU usage percent for the gctrace line.

Change-Id: Ica85372b8dd45f7621909b325d5ac713a9b0d015
Reviewed-on: https://go-review.googlesource.com/8350
Reviewed-by: Russ Cox <rsc@golang.org>
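The percentage itself is simple arithmetic; this sketch assumes the available CPU time is wall-clock elapsed time multiplied by the number of Ps, per the description above (gcCPUFraction is a hypothetical helper, not a runtime function):

```go
package main

import (
	"fmt"
	"time"
)

// gcCPUFraction computes the cumulative fraction of available CPU time
// spent in GC: totalGCTime over elapsed wall time times the P count.
// Both inputs are assumptions for the sketch, not runtime fields.
func gcCPUFraction(totalGCTime, elapsed time.Duration, gomaxprocs int) float64 {
	available := elapsed * time.Duration(gomaxprocs)
	if available <= 0 {
		return 0
	}
	return float64(totalGCTime) / float64(available)
}

func main() {
	// e.g. 120ms of GC CPU over 10s of wall time on 4 Ps => 0.3%
	fmt.Printf("%.1f%%\n", 100*gcCPUFraction(120*time.Millisecond, 10*time.Second, 4))
}
```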
-
Austin Clements authored
GODEBUG=gctrace=1 turns on a per-GC cycle trace line. The current line is left over from the STW garbage collector and includes a lot of information that is no longer meaningful for the concurrent GC and doesn't include a lot of information that is important.

Replace this line with a new line designed for the new garbage collector. This new line is focused more on helping the user understand the impact of the garbage collector on their program and less on telling us, the runtime developers, everything that's happening inside GC. It's designed to fit in 80 columns and intentionally omit some potentially useful things that were in the old line. We might want a "verbose" mode that adds information for us.

We'll be able to further simplify the line once we eliminate the STW around enabling the write barrier. Then we'll have just one STW phase, one concurrent phase, and one more STW phase, so we'll be able to reduce the number of times from five to three.

Change-Id: Icc30939fe4576fb4491b4eac811649395727aa2a
Reviewed-on: https://go-review.googlesource.com/8208
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 31 Mar, 2015 1 commit
-
-
Michael Hudson-Doyle authored
In preparation for being able to run a go program that has code in several objects, this changes from having several linker symbols used by the runtime into having one linker symbol that points at a structure containing the needed data. Multiple object support will construct a linked list of such structures.

A follow up will initialize the slices in the themoduledata structure directly from the linker but I was aiming for a minimal diff for now.

Change-Id: I613cce35309801cf265a1d5ae5aaca8d689c5cbf
Reviewed-on: https://go-review.googlesource.com/7441
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 20 Mar, 2015 2 commits
-
-
Austin Clements authored
The barrier in gcDrain does not account for concurrent gcDrainNs happening in gchelpwork, so it can actually return while there is still work being done. It turns out this is okay, but for subtle reasons involving gcDrainN always being run on the system stack. Document these reasons.

Change-Id: Ib07b3753cc4e2b54533ab3081a359cbd1c3c08fb
Reviewed-on: https://go-review.googlesource.com/7736
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
We're skating on thin ice, and things are finally starting to melt around here. (I want to avoid the debugging session that will happen when someone uses atomicand8 expecting it to be atomic with respect to other operations.)

Change-Id: I254f1582be4eb1f2d7fbba05335a91c6bf0c7f02
Reviewed-on: https://go-review.googlesource.com/7861
Reviewed-by: Minux Ma <minux@golang.org>
-
- 19 Mar, 2015 3 commits
-
-
Austin Clements authored
Currently, the GC's concurrent mark phase runs on the system stack. There's no need to do this, and running it this way ties up the entire M and P running the GC by preventing the scheduler from preempting the GC even during concurrent mark.

Fix this by running concurrent mark on the regular G stack. It's still non-preemptible because we also set preemptoff around the whole GC process, but this moves us closer to making it preemptible.

Change-Id: Ia9f1245e299b8c5c513a4b1e3ef13eaa35ac5e73
Reviewed-on: https://go-review.googlesource.com/7730
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
"Sync" is not very informative. What's being synchronized and with whom? Update this comment to explain what we're really doing: enabling write barriers. Change-Id: I4f0cbb8771988c7ba4606d566b77c26c64165f0f Reviewed-on: https://go-review.googlesource.com/7700Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently we harvestwbufs the moment we enter the mark phase, even before starting the world again. Since cached wbufs are only filled when we're in mark or mark termination, they should all be empty at this point, making the harvest pointless. Remove the harvest.

We should, but do not currently, harvest at the end of the mark phase when we're running out of work to do.

Change-Id: I5f4ba874f14dd915b8dfbc4ee5bb526eecc2c0b4
Reviewed-on: https://go-review.googlesource.com/7669
Reviewed-by: Rick Hudson <rlh@golang.org>
-
- 10 Mar, 2015 2 commits
-
-
Rick Hudson authored
Even though the world is stopped the GC may do pointer writes that need to be protected by write barriers. This means that the write barrier must be on continuously from the time the mark phase starts until the mark termination phase ends. Checks were added to ensure that no allocation happens during a GC.

Hoist the logic that clears pools to the start of the GC so that the memory can be reclaimed during this GC cycle.

Change-Id: I9d1551ac5db9bac7bac0cb5370d5b2b19a9e6a52
Reviewed-on: https://go-review.googlesource.com/6990
Reviewed-by: Austin Clements <austin@google.com>
-
Dmitry Vyukov authored
Strip uninteresting bottom and top frames from trace stacks. This makes both binary and json trace files smaller, and also makes stacks shorter and more readable in the viewer.

Change-Id: Ib9c80ccc280504f0e235f867f53f1d2652c41583
Reviewed-on: https://go-review.googlesource.com/5523
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
-
- 05 Mar, 2015 1 commit
-
-
Russ Cox authored
Starting it lazily causes a memory allocation (for the goroutine) during GC. First use of channels for runtime implementation.

Change-Id: I9cd24dcadbbf0ee5070ee6d0ed7ea415504f316c
Reviewed-on: https://go-review.googlesource.com/6960
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
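The eager-start pattern can be sketched outside the runtime with an ordinary goroutine parked on a channel; bgsweep and sweepWake below are illustrative names, and the sweeping body is a placeholder.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// sweepWake is the channel the background sweeper parks on. A buffered
// channel of size 1 lets the end of a GC cycle wake it without blocking.
var sweepWake = make(chan struct{}, 1)

var wg sync.WaitGroup

// bgsweep is a placeholder for the background sweeper: created once, up
// front, so waking it later never allocates a new goroutine.
func bgsweep() {
	defer wg.Done()
	for range sweepWake { // parked here between GC cycles
		fmt.Println("sweeping in the background")
	}
}

func main() {
	wg.Add(1)
	go bgsweep() // started eagerly, before any GC cycle needs it

	// Later, mark termination wakes the sweeper; the non-blocking send
	// is harmless if it is already awake.
	select {
	case sweepWake <- struct{}{}:
	default:
	}

	time.Sleep(10 * time.Millisecond) // let the sweeper run (example only)
	close(sweepWake)                  // shut the example down cleanly
	wg.Wait()
}
```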
-
- 04 Mar, 2015 2 commits
-
-
Dmitry Vyukov authored
The unbounded list-based defer pool can grow infinitely. This can happen if a goroutine routinely allocates a defer; then blocks on one P; and is then unblocked, scheduled, and frees the defer on another P. The scenario was reported on the golang-nuts list.

We've been here several times. Any unbounded local caches are bad and grow to infinite size.

This change introduces a central defer pool; local pools become fixed-size with the only purpose of amortizing accesses to the central pool. Freedefer now executes on the system stack so as not to consume nosplit stack space.

Change-Id: I1a27695838409259d1586a0adfa9f92bccf7ceba
Reviewed-on: https://go-review.googlesource.com/3967
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
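A sketch of the two-level pool shape this describes: a fixed-size per-P cache backed by a mutex-protected central pool. All names, sizes, and the spill/refill batch of half the cache are assumptions for illustration, not the runtime's actual values.

```go
package main

import (
	"fmt"
	"sync"
)

// deferRecord stands in for the runtime's _defer record.
type deferRecord struct{ fn func() }

// Central pool shared by all Ps; the mutex models the runtime's lock.
var (
	centralMu   sync.Mutex
	centralPool []*deferRecord
)

const localCap = 32 // fixed-size local cache, as described above

// pCache models a per-P cache. When it fills, half of it is spilled to
// the central pool; when it is empty, a batch is refilled from it.
type pCache struct{ local []*deferRecord }

func (c *pCache) get() *deferRecord {
	if len(c.local) == 0 {
		centralMu.Lock()
		n := len(centralPool)
		if n > localCap/2 {
			n = localCap / 2
		}
		c.local = append(c.local, centralPool[len(centralPool)-n:]...)
		centralPool = centralPool[:len(centralPool)-n]
		centralMu.Unlock()
	}
	if len(c.local) == 0 {
		return &deferRecord{} // nothing cached anywhere: allocate
	}
	d := c.local[len(c.local)-1]
	c.local = c.local[:len(c.local)-1]
	return d
}

func (c *pCache) put(d *deferRecord) {
	if len(c.local) >= localCap {
		centralMu.Lock()
		centralPool = append(centralPool, c.local[localCap/2:]...)
		centralMu.Unlock()
		c.local = c.local[:localCap/2]
	}
	c.local = append(c.local, d)
}

func main() {
	var c pCache
	for i := 0; i < 100; i++ {
		c.put(&deferRecord{})
	}
	fmt.Println("local:", len(c.local), "central:", len(centralPool))
	_ = c.get()
}
```

The same shape applies to the sudog cache in the following commit: the local cache bounds the worst case while the central pool lets records migrate between Ps.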
-
Dmitry Vyukov authored
The unbounded list-based sudog cache can grow infinitely. This can happen if a goroutine is routinely blocked on one P and then unblocked and scheduled on another P. The scenario was reported on the golang-nuts list.

We've been here several times. Any unbounded local caches are bad and grow to infinite size.

This change introduces a central sudog cache; local caches become fixed-size with the only purpose of amortizing accesses to the central cache. The change required moving the sudog cache from mcache to P, because mcache is not scanned by GC.

Change-Id: I3bb7b14710354c026dcba28b3d3c8936a8db4e90
Reviewed-on: https://go-review.googlesource.com/3742
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
-
- 25 Feb, 2015 4 commits
-
-
Austin Clements authored
Since allglock is held in this function, there's no point to tip-toeing around allgs. Just use a for-range loop.

Change-Id: I1ee61c7e8cac8b8ebc8107c0c22f739db5db9840
Reviewed-on: https://go-review.googlesource.com/5882
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Previously, we had three loops in the garbage collector that all cleared the per-G GC flags. Consolidate these into one function.

This one function is designed to work in a concurrent setting. As a result, it's slightly more expensive than the loops it replaces during STW phases, but these happen at most twice per GC.

Change-Id: Id1ec0074fd58865eb0112b8a0547b267802d0df1
Reviewed-on: https://go-review.googlesource.com/5881
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
The loop in gcMark is redundant with the gcworkdone resetting performed by markroot, which is called a few lines later in gcMark.

Change-Id: Ie0a826a614ecfa79e6e6b866e8d1de40ba515856
Reviewed-on: https://go-review.googlesource.com/5880
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Rick Hudson authored
When GODEBUG=gctrace=2 two GCs are performed. During the first GC the stack scan sets the g's gcscanvalid and gcworkdone flags to true, indicating that the stacks have been scanned and do not need to be rescanned. These need to be reset to false for the second GC so the stacks are rescanned; otherwise, if the only pointer to an object is on the stack it will not be discovered and the object will be freed. Typically this will include the object that was just allocated in the mallocgc call that initiated the GC.

Change-Id: Ic25163f4689905fd810c90abfca777324005c02f
Reviewed-on: https://go-review.googlesource.com/5861
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 20 Feb, 2015 3 commits
-
-
Russ Cox authored
This is a nice split but more importantly it provides a better way to fit the checkmark phase into the sequencing.

Also factor out common span copying into gcSpanCopy.

Change-Id: Ia058644974e4ed4ac3cf4b017a3446eb2284d053
Reviewed-on: https://go-review.googlesource.com/5333
Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
The loop made more sense when gc_m was not its own function.

Change-Id: I71a7f21d777e69c1924e3b534c507476daa4dfdd
Reviewed-on: https://go-review.googlesource.com/5332
Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
Change-Id: I0da26e89ae73272e49e82c6549c774e5bc97f64c
Reviewed-on: https://go-review.googlesource.com/5331
Reviewed-by: Austin Clements <austin@google.com>
-
- 19 Feb, 2015 5 commits
-
-
Russ Cox authored
This is causing crashes.

Change-Id: I1832f33d114bc29894e491dd2baac45d7ab3a50d
Reviewed-on: https://go-review.googlesource.com/5330
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Russ Cox authored
That is, I accidentally dropped this change of Austin's when preparing my CL. I blame Git.

Change-Id: I9dd772c84edefad96c4b16785fdd2dea04a4a0d6
Reviewed-on: https://go-review.googlesource.com/5320
Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
Move code from malloc1.go, malloc2.go, mem.go, mgc0.go into appropriate locations.

Factor mgc.go into mgc.go, mgcmark.go, mgcsweep.go, mstats.go.

A lot of this code was in certain files because the right place was in a C file but it was written in Go, or vice versa. This is one step toward making things actually well-organized again.

Change-Id: I6741deb88a7cfb1c17ffe0bcca3989e10207968f
Reviewed-on: https://go-review.googlesource.com/5300
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
This converts the garbage collector from directly manipulating work buffers to using the new gcWork abstraction.

The previous management of work buffers was rather ad hoc. As a result, switching to the gcWork abstraction changes many details of work buffer management.

If greyobject fills a work buffer, it can now pull from work.partial in addition to work.empty.

Previously, gcDrain started with a partial or empty work buffer and fetched an empty work buffer if it filled its current buffer (in greyobject). Now, gcDrain starts with a full work buffer and fetches a partial or empty work buffer if it fills its current buffer (in greyobject). The original behavior was bad because gcDrain would immediately drop the empty work buffer returned by greyobject and fetch a full work buffer, which greyobject was likely to immediately overflow, fetching another empty work buffer, etc. The new behavior isn't great at the start because greyobject is likely to immediately overflow the full buffer, but the steady-state behavior should be more stable. Both before and after this change, gcDrain fetches a full work buffer if it drains its current buffer. Basically all of these choices are bad; the right answer is to use a dual work buffer scheme.

Previously, shade always fetched a work buffer (though usually from m.currentwbuf), even if the object was already marked. Now it only fetches a work buffer if it actually greys an object.

Change-Id: I8b880ed660eb63135236fa5d5678f0c1c041881f
Reviewed-on: https://go-review.googlesource.com/5232
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
This introduces a producer/consumer abstraction for GC work pointers that internally handles the details of filling, draining, and shuffling work buffers.

In addition to simplifying the GC code, this should make it easy for us to change how we use work buffers, including cleaning up how we use the work.partial queue, reintroducing a FIFO lookahead cache, adding prefetching, and using dual buffers to avoid flapping.

This commit doesn't change any existing code. The following commit will switch the garbage collector from explicit workbuf manipulation to gcWork.

Change-Id: Ifbfe5fff45bf0362d6d7c3cecb061f0c9874077d
Reviewed-on: https://go-review.googlesource.com/5231
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
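A simplified, single-threaded sketch of what such a producer/consumer abstraction looks like: put and tryGet hide when buffers are fetched from or handed back to the shared pools, and dispose publishes any cached work. Buffer size, names, and the slice-based pools are all illustrative (the real pools are lock-free lists and the real gcWork tracks more state).

```go
package main

import "fmt"

// workBuf is a fixed-size buffer of grey-object pointers; uintptr
// stands in for the real pointer representation.
type workBuf struct {
	obj [4]uintptr
	n   int
}

// Shared pools of full and empty buffers (not goroutine-safe here).
var fullBufs, emptyBufs []*workBuf

// gcWork hides buffer management behind put/tryGet, as described above.
type gcWork struct{ wbuf *workBuf }

func (w *gcWork) put(p uintptr) {
	if w.wbuf == nil {
		w.wbuf = getBuf(&emptyBufs)
	}
	if w.wbuf.n == len(w.wbuf.obj) { // full: hand it to the global pool
		fullBufs = append(fullBufs, w.wbuf)
		w.wbuf = getBuf(&emptyBufs)
	}
	w.wbuf.obj[w.wbuf.n] = p
	w.wbuf.n++
}

func (w *gcWork) tryGet() (uintptr, bool) {
	if w.wbuf == nil || w.wbuf.n == 0 {
		w.wbuf = getBuf(&fullBufs)
		if w.wbuf.n == 0 {
			return 0, false // no pending work anywhere
		}
	}
	w.wbuf.n--
	return w.wbuf.obj[w.wbuf.n], true
}

// dispose returns any cached buffer so other workers can see the work.
func (w *gcWork) dispose() {
	if w.wbuf != nil && w.wbuf.n > 0 {
		fullBufs = append(fullBufs, w.wbuf)
		w.wbuf = nil
	}
}

// getBuf pops a buffer from the given pool, allocating a fresh empty
// buffer if the pool has none.
func getBuf(pool *[]*workBuf) *workBuf {
	if len(*pool) == 0 {
		return &workBuf{}
	}
	b := (*pool)[len(*pool)-1]
	*pool = (*pool)[:len(*pool)-1]
	return b
}

func main() {
	var producer gcWork
	for i := uintptr(1); i <= 10; i++ {
		producer.put(i)
	}
	producer.dispose()

	var consumer gcWork
	for {
		p, ok := consumer.tryGet()
		if !ok {
			break
		}
		fmt.Println("scan object at", p)
	}
}
```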
-