  1. 21 Apr, 2015 5 commits
    • runtime: track time spent in mutator assists · 100da609
      Austin Clements authored
      This time is tracked per P and periodically flushed to the global
      controller state. This will be used to compute mutator assist
      utilization in order to schedule background GC work.
      
      Change-Id: Ib94f90903d426a02cf488bf0e2ef67a068eb3eec
      Reviewed-on: https://go-review.googlesource.com/8837
      Reviewed-by: Rick Hudson <rlh@golang.org>
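      A minimal sketch of the accumulate-and-flush pattern this describes;
      all names below are invented for illustration, not the runtime's own:

          package gcpacer

          import "sync/atomic"

          // perP models the per-P assist-time counter; only its owning P
          // writes it, so the fast path needs no synchronization.
          type perP struct {
              assistTime int64 // nanoseconds spent in mutator assists
          }

          // totalAssistTime stands in for the global controller state.
          var totalAssistTime int64

          // flush periodically moves a P's locally accumulated assist time
          // into the global total, so the controller sees an approximately
          // current value without per-assist contention.
          func (p *perP) flush() {
              atomic.AddInt64(&totalAssistTime, p.assistTime)
              p.assistTime = 0
          }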
    • runtime: proportional mutator assist · 4b2fde94
      Austin Clements authored
      Currently, mutator allocation periodically assists the garbage
      collector by performing a small, fixed amount of scanning work.
      However, to control heap growth, mutators need to perform scanning
      work *proportional* to their allocation rate.
      
      This change implements proportional mutator assists. This uses the
      scan work estimate computed by the garbage collector at the beginning
      of each cycle to compute how much scan work must be performed per
      allocation byte to complete the estimated scan work by the time the
      heap reaches the goal size. When allocation triggers an assist, it
      uses this ratio and the amount allocated since the last assist to
      compute the assist work, then attempts to steal as much of this work
      as possible from the background collector's credit, and then performs
      any remaining scan work itself.
      
      Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
      Reviewed-on: https://go-review.googlesource.com/8836
      Reviewed-by: Rick Hudson <rlh@golang.org>
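      The per-byte accounting described above can be sketched as follows.
      Names are hypothetical and the arithmetic is only the shape implied
      by the message, not the pacer's exact formula:

          package gcpacer

          // assistRatio is scan work owed per byte allocated, computed once
          // per cycle from the scan work estimate and the headroom between
          // the trigger and the heap goal.
          func assistRatio(estScanWork, heapGoal, heapTrigger int64) float64 {
              return float64(estScanWork) / float64(heapGoal-heapTrigger)
          }

          // assist charges a mutator for its allocation: steal what it can
          // from the background collector's credit, then scan the rest.
          func assist(allocatedSinceLast int64, ratio float64) {
              debt := int64(ratio * float64(allocatedSinceLast))
              debt -= stealCredit(debt) // may return less than asked
              if debt > 0 {
                  doScanWork(debt)
              }
          }

          func doScanWork(n int64)            { /* scan n units of heap */ }
          func stealCredit(want int64) int64  { return 0 } // stub; see the credit-pool sketch below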
    • runtime: track background scan work credit · 8e24283a
      Austin Clements authored
      This tracks scan work done by background GC in a global pool. Mutator
      assists will draw on this credit to avoid doing work when background
      GC is staying ahead.
      
      Unlike the other GC controller tracking variables, this will be both
      written and read throughout the cycle. Hence, we can't arbitrarily
      delay updates like we can for scan work and bytes marked. However, we
      still want to minimize contention, so this global credit pool is
      allowed some error from the "true" amount of credit. Background GC
      accumulates credit locally up to a limit and only then flushes to the
      global pool. Similarly, mutator assists will draw from the credit pool
      in batches.
      
      Change-Id: I1aa4fc604b63bf53d1ee2a967694dffdfc3e255e
      Reviewed-on: https://go-review.googlesource.com/8834
      Reviewed-by: Rick Hudson <rlh@golang.org>
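      Sketched in Go, the batched credit flow might look like this
      (hypothetical names; the batch limit is illustrative):

          package gcpacer

          import "sync/atomic"

          // bgScanCredit is the global pool. It is allowed to lag or even
          // briefly undershoot the true credit; that is the accepted error.
          var bgScanCredit int64

          const creditBatch = 1024 // local accumulation limit before flushing

          // flushCredit is called by background GC workers, which accumulate
          // credit locally and only flush once a batch is full.
          func flushCredit(local *int64) {
              if *local >= creditBatch {
                  atomic.AddInt64(&bgScanCredit, *local)
                  *local = 0
              }
          }

          // stealCredit lets a mutator assist draw up to want credit. The
          // load-then-subtract race can leave the pool slightly off, which
          // falls within the tolerated error.
          func stealCredit(want int64) int64 {
              have := atomic.LoadInt64(&bgScanCredit)
              if have <= 0 {
                  return 0
              }
              if have > want {
                  have = want
              }
              atomic.AddInt64(&bgScanCredit, -have)
              return have
          }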
    • runtime: implement GC scan work estimator · 4e9fc0df
      Austin Clements authored
      This implements tracking the scan work ratio of a GC cycle and using
      this to estimate the scan work that will be required by the next GC
      cycle. Currently this estimate is unused; it will be used to drive
      mutator assists.
      
      Change-Id: I8685b59d89cf1d83eddfc9b30d84da4e3a7f4b72
      Reviewed-on: https://go-review.googlesource.com/8833
      Reviewed-by: Rick Hudson <rlh@golang.org>
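      One plausible shape for such an estimator, assuming the ratio is
      taken against the marked heap (an assumption; the message does not
      name the estimator's exact inputs):

          package gcpacer

          // estimateScanWork projects the next cycle's scan work from the
          // ratio observed this cycle: work is assumed to scale with how
          // much heap is live when the next cycle starts.
          func estimateScanWork(scanWork, heapMarked, nextTriggerHeap int64) int64 {
              ratio := float64(scanWork) / float64(heapMarked)
              return int64(ratio * float64(nextTriggerHeap))
          }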
    • runtime: track scan work performed during concurrent mark · 571ebae6
      Austin Clements authored
      This tracks the amount of scan work in terms of scanned pointers
      during the concurrent mark phase. We'll use this information to
      estimate scan work for the next cycle.
      
      Currently the work counter is aggregated in gcWork, and dispose
      atomically adds it to a global work counter. dispose happens
      relatively infrequently, so the contention on the global counter
      should be low. If this turns out to be an issue, we can reduce the
      number of disposes, and if it's still a problem, we can switch to
      per-P counters.
      
      Change-Id: Iac0364c466ee35fab781dbbbe7970a5f3c4e1fc1
      Reviewed-on: https://go-review.googlesource.com/8832
      Reviewed-by: Rick Hudson <rlh@golang.org>
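      The counter plumbing described above, reduced to a sketch with
      invented names:

          package gcpacer

          import "sync/atomic"

          // worker models the per-worker counter carried in gcWork.
          type worker struct {
              scanWork int64 // pointers scanned since the last dispose
          }

          var globalScanWork int64 // cycle-wide total

          // dispose runs relatively infrequently, so a single atomic add
          // per dispose keeps contention on the global counter low.
          func (w *worker) dispose() {
              atomic.AddInt64(&globalScanWork, w.scanWork)
              w.scanWork = 0
          }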
  2. 10 Apr, 2015 4 commits
    • runtime: start concurrent GC promptly when we reach its trigger · 4b956ae3
      Austin Clements authored
      Currently, when allocation reaches the concurrent GC trigger size, we
      start the concurrent collector by ready'ing its G. This simply puts it
      on the end of the P's run queue, which means we may not actually start
      GC for some time as the current G continues to run and then the P
      drains other Gs already on its run queue. Since the mutator can
      continue to allocate, the heap can potentially be much larger than we
      intended by the time GC actually starts. Furthermore, how much larger
      is difficult to predict since it depends on the scheduler.
      
      Fix this by preempting the current G and switching directly to the
      concurrent GC G as soon as we reach the trigger heap size.
      
      On the garbage benchmark from the benchmarks subrepo with
      GOMAXPROCS=4, this reduces the time from triggering the GC to the
      beginning of sweep termination by 10 to 30 milliseconds, which reduces
      allocation after the trigger by up to 10MB (a large fraction of the
      64MB live heap the benchmark tries to maintain).
      
      One other known source of delay before we "really" start GC is the
      sweep finalization performed before sweep termination. This has
      similar negative effects on heap size and predictability, but is an
      orthogonal problem. This change adds a TODO for this.
      
      Change-Id: I8bae98cb43685c1bf353ff55868e4647e3743c47
      Reviewed-on: https://go-review.googlesource.com/8513
      Reviewed-by: Rick Hudson <rlh@golang.org>
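      The change is in the handoff, not the trigger test. A schematic of
      the allocation slow path, with hypothetical names:

          package gcpacer

          func checkTrigger(heapLive, triggerHeap uint64) {
              if heapLive >= triggerHeap {
                  // Before: ready'ing the GC goroutine queued it behind
                  // whatever was already on this P's run queue.
                  // After: preempt the current G and switch to the GC
                  // goroutine immediately, bounding allocation overshoot.
                  switchToGC()
              }
          }

          func switchToGC() { /* preempt self, run the GC G */ }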
    • runtime: remove GoSched/GoStart trace events around GC · 6afb5fa4
      Austin Clements authored
      These were appropriate for STW GC, since it interrupted the allocating
      goroutine, but don't apply to concurrent GC, which runs on its own
      goroutine. Forced GC is still STW, but it makes sense to attribute the
      GC to the goroutine that called runtime.GC().
      
      Change-Id: If12418ca66dc7e53b8b16025af4e03adb5d9577e
      Reviewed-on: https://go-review.googlesource.com/8715
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime, cmd/internal/ld: rename themoduledata to firstmoduledata · a1f57598
      Michael Hudson-Doyle authored
      'themoduledata' doesn't really make sense now that we support multiple moduledata
      objects.
      
      Change-Id: I8263045d8f62a42cb523502b37289b0fba054f62
      Reviewed-on: https://go-review.googlesource.com/8521
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • runtime, reflect: support multiple moduledata objects · fae4a128
      Michael Hudson-Doyle authored
      This changes all the places that consult themoduledata to consult a
      linked list of moduledata objects, as will be necessary for
      -linkshared to work.
      
      Obviously, as there is as yet no way of adding moduledata objects to
      this list, all this change achieves right now is wasting a few
      instructions here and there.
      
      Change-Id: I397af7f60d0849b76aaccedf72238fe664867051
      Reviewed-on: https://go-review.googlesource.com/8231
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
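      The walk that replaces direct uses of themoduledata looks roughly
      like this (struct pared down to two fields for illustration):

          package moduledata

          type moduledata struct {
              text, etext uintptr     // PC range covered by this module
              next        *moduledata // nil until more modules are added
          }

          var firstmoduledata moduledata

          // findModule shows the pattern: every consumer now iterates the
          // linked list instead of consulting a single global object.
          func findModule(pc uintptr) *moduledata {
              for datap := &firstmoduledata; datap != nil; datap = datap.next {
                  if datap.text <= pc && pc < datap.etext {
                      return datap
                  }
              }
              return nil
          }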
  3. 09 Apr, 2015 1 commit
    • runtime: report next_gc for initial heap size in gctrace · cb10ff1e
      Austin Clements authored
      Currently, the initial heap size reported in the gctrace line is the
      heap_live right before sweep termination. However, we triggered GC
      when heap_live reached next_gc, and there may have been significant
      allocation between that point and the beginning of sweep
      termination. Ideally these would be essentially the same, but
      currently there's scheduler delay when readying the GC goroutine as
      well as delay from background sweep finalization.
      
      We should fix this delay, but in the meantime, to give the user a
      better idea of how much the heap grew during the whole of garbage
      collection, report the trigger rather than what the heap size happened
      to be after the garbage collector finished rolling out of bed. This
      will also be more useful for heap growth plots.
      
      Change-Id: I08476b9fbcfb2de90592405e9c9f434dfb9eb1f8
      Reviewed-on: https://go-review.googlesource.com/8512
      Reviewed-by: Rick Hudson <rlh@golang.org>
  4. 06 Apr, 2015 4 commits
    • runtime: report marked heap size in gctrace · 8c3fc088
      Austin Clements authored
      When the gctrace GODEBUG option is enabled, it will now report three
      heap sizes: the heap size at the beginning of the GC cycle, the heap
      size at the end of the GC cycle before sweeping, and the marked heap size,
      which is the amount of heap that will be retained until the next GC
      cycle.
      
      Change-Id: Ie13f8a6d5c609bc9cc47c7555960ab55b37b5f1c
      Reviewed-on: https://go-review.googlesource.com/8430
      Reviewed-by: Rick Hudson <rlh@golang.org>
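      A small program to see the three sizes in practice; run it with
      GODEBUG=gctrace=1. The exact line format has changed across
      releases, but the start, end, and marked sizes read left to right:

          package main

          import "runtime"

          func main() {
              var sink [][]byte
              for i := 0; i < 64; i++ {
                  sink = append(sink, make([]byte, 1<<20)) // grow the heap
              }
              runtime.GC() // force a traced cycle
              _ = sink
          }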
    • runtime: make next_gc be heap size to trigger GC at · 6d12b178
      Austin Clements authored
      In the STW collector, next_gc was both the heap size to trigger GC at
      as well as the goal heap size.
      
      Early in the concurrent collector's development, next_gc was the goal
      heap size, but was also used as the heap size to trigger GC at. This
      meant we always overshot the goal because of allocation during
      concurrent GC.
      
      Currently, next_gc is still the goal heap size, but we trigger
      concurrent GC at 7/8*GOGC heap growth. This complicates
      shouldtriggergc, but was necessary because of the incremental
      maintenance of next_gc.
      
      Now we simply compute next_gc for the next cycle during mark
      termination. Hence, it's now easy to take the simpler route and
      redefine next_gc as the heap size at which the next GC triggers. We
      can directly compute this with the 7/8 backoff during mark termination
      and shouldtriggergc can simply test if the live heap size has grown
      over the next_gc trigger.
      
      This will also simplify later changes once we start setting next_gc in
      more sophisticated ways.
      
      Change-Id: I872be4ae06b4f7a0d7f7967360a054bd36b90eea
      Reviewed-on: https://go-review.googlesource.com/8420
      Reviewed-by: Russ Cox <rsc@golang.org>
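      A sketch of the redefinition; the exact placement of the 7/8 factor
      is an assumption based on the "7/8*GOGC heap growth" wording:

          package gcpacer

          // nextGC is computed once, at mark termination: the marked heap
          // plus 7/8 of the GOGC-proportional growth.
          func nextGC(heapMarked, gogc uint64) uint64 {
              return heapMarked + heapMarked*gogc/100*7/8
          }

          // shouldTriggerGC collapses to a single comparison against the
          // live heap size.
          func shouldTriggerGC(heapLive, trigger uint64) bool {
              return heapLive >= trigger
          }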
    • runtime: introduce heap_live; replace use of heap_alloc in GC · d7e0ad4b
      Austin Clements authored
      Currently there are two main consumers of memstats.heap_alloc:
      updatememstats (aka ReadMemStats) and shouldtriggergc.
      
      updatememstats recomputes heap_alloc from the ground up, so we don't
      need to keep heap_alloc up to date for it. shouldtriggergc wants to
      know how many bytes were marked by the previous GC plus how many bytes
      have been allocated since then, but this *isn't* what heap_alloc
      tracks. heap_alloc also includes objects that are not marked and
      haven't yet been swept.
      
      Introduce a new memstat called heap_live that actually tracks what
      shouldtriggergc wants to know and stop keeping heap_alloc up to date.
      
      Unlike heap_alloc, heap_live follows a simple sawtooth that drops
      during each mark termination and increases monotonically between GCs.
      heap_alloc, on the other hand, has much more complicated behavior: it
      may drop during sweep termination, slowly decreases from background
      sweeping between GCs, is roughly unaffected by allocation as long as
      there are unswept spans (because we sweep and allocate at the same
      rate), and may go up after background sweeping is done depending on
      the GC trigger.
      
      heap_live simplifies computing next_gc and using it to figure out when
      to trigger garbage collection. Currently, we guess next_gc at the end
      of a cycle and update it as we sweep and get a better idea of how much
      heap was marked. Now, since we're directly tracking how much heap is
      marked, we can directly compute next_gc.
      
      This also corrects bugs that could cause us to trigger GC early.
      Currently, in any case where sweep termination actually finds spans to
      sweep, heap_alloc is an overestimation of live heap, so we'll trigger
      GC too early. heap_live, on the other hand, is unaffected by sweeping.
      
      Change-Id: I1f96807b6ed60d4156e8173a8e68745ffc742388
      Reviewed-on: https://go-review.googlesource.com/8389
      Reviewed-by: Russ Cox <rsc@golang.org>
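      The sawtooth contract, stated as code. Updates are sketched as plain
      assignments; the runtime batches and synchronizes them:

          package gcpacer

          // heapLive models heap_live: bytes marked by the last GC plus
          // bytes allocated since.
          var heapLive uint64

          // On allocation: the value only ever grows between cycles.
          func noteAlloc(bytes uint64) { heapLive += bytes }

          // At mark termination: drop to exactly the marked heap, the one
          // point in the cycle where the value resets.
          func noteMarkTermination(heapMarked uint64) { heapLive = heapMarked }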
    • runtime: track heap bytes marked by GC · 50a66562
      Austin Clements authored
      This tracks the number of heap bytes marked by a GC cycle. We'll use
      this information to precisely trigger the next GC cycle.
      
      Currently the work counter is aggregated in gcWork, and dispose
      atomically adds it to a global work counter. dispose happens
      relatively infrequently, so the contention on the global counter
      should be low. If this turns out to be an issue, we can reduce the
      number of disposes, and if it's still a problem, we can switch to
      per-P counters.
      
      Change-Id: I1bc377cb2e802ef61c2968602b63146d52e7f5db
      Reviewed-on: https://go-review.googlesource.com/8388
      Reviewed-by: Russ Cox <rsc@golang.org>
  5. 02 Apr, 2015 2 commits
    • runtime: add cumulative GC CPU % to gctrace line · f244a147
      Austin Clements authored
      This tracks both total CPU time used by GC and the total time
      available to all Ps since the beginning of the program and uses this
      to derive a cumulative CPU usage percent for the gctrace line.
      
      Change-Id: Ica85372b8dd45f7621909b325d5ac713a9b0d015
      Reviewed-on: https://go-review.googlesource.com/8350
      Reviewed-by: Russ Cox <rsc@golang.org>
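      The derivation is a single ratio; a sketch with invented names:

          package gcpacer

          // gcCPUPercent is cumulative GC CPU time over the total CPU time
          // available to all Ps since the program began (wall time * procs).
          func gcCPUPercent(gcCPUNanos, elapsedNanos int64, procs int) float64 {
              return 100 * float64(gcCPUNanos) / (float64(elapsedNanos) * float64(procs))
          }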
    • runtime: update gctrace line for new garbage collector · 24ee9482
      Austin Clements authored
      GODEBUG=gctrace=1 turns on a per-GC cycle trace line. The current line
      is left over from the STW garbage collector: it includes a lot of
      information that is no longer meaningful for the concurrent GC and
      omits a lot of information that is now important.
      
      Replace this line with a new line designed for the new garbage
      collector.
      
      This new line is focused more on helping the user understand the
      impact of the garbage collector on their program and less on telling
      us, the runtime developers, everything that's happening inside
      GC. It's designed to fit in 80 columns and intentionally omit some
      potentially useful things that were in the old line. We might want a
      "verbose" mode that adds information for us.
      
      We'll be able to further simplify the line once we eliminate the STW
      around enabling the write barrier. Then we'll have just one STW phase,
      one concurrent phase, and one more STW phase, so we'll be able to
      reduce the number of times from five to three.
      
      Change-Id: Icc30939fe4576fb4491b4eac811649395727aa2a
      Reviewed-on: https://go-review.googlesource.com/8208
      Reviewed-by: Russ Cox <rsc@golang.org>
  6. 04 Mar, 2015 2 commits
    • runtime: bound defer pools (try 2) · b759e225
      Dmitry Vyukov authored
      The unbounded list-based defer pool can grow infinitely.
      This can happen if a goroutine routinely allocates a defer,
      blocks on one P, and is then unblocked, scheduled, and
      frees the defer on another P.
      The scenario was reported on the golang-nuts list.
      
      We've been here several times. Any unbounded local cache
      is bad, since it grows to infinite size. This change introduces
      a central defer pool; local pools become fixed-size,
      with the only purpose of amortizing accesses to the
      central pool.
      
      freedefer now executes on the system stack so that it does
      not consume nosplit stack space.
      
      Change-Id: I1a27695838409259d1586a0adfa9f92bccf7ceba
      Reviewed-on: https://go-review.googlesource.com/3967
      Reviewed-by: Keith Randall <khr@golang.org>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
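      The shape of the fix, for both this change and the sudog change
      below: a fixed-size local cache in front of a mutex-protected
      central pool. Names and the cache size are illustrative only:

          package pool

          import "sync"

          type item struct{ next *item }

          // local is per-P and fixed-size, so it can never grow without
          // bound; its only job is to cut trips to the central pool.
          type local struct {
              buf [16]*item
              n   int
          }

          var central struct {
              sync.Mutex
              head *item
          }

          func (c *local) put(it *item) {
              if c.n == len(c.buf) { // full: spill to the central pool
                  central.Lock()
                  it.next = central.head
                  central.head = it
                  central.Unlock()
                  return
              }
              c.buf[c.n] = it
              c.n++
          }

          func (c *local) get() *item {
              if c.n > 0 {
                  c.n--
                  return c.buf[c.n]
              }
              central.Lock() // empty: refill from the central pool
              it := central.head
              if it != nil {
                  central.head = it.next
              }
              central.Unlock()
              return it
          }

      (A fuller version would move items in batches rather than one at a
      time, which is what actually amortizes the central lock.)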
    • runtime: bound sudog cache · 5ef145c8
      Dmitry Vyukov authored
      The unbounded list-based sudog cache can grow infinitely.
      This can happen if a goroutine is routinely blocked on one P
      and then unblocked and scheduled on another P.
      The scenario was reported on the golang-nuts list.
      
      We've been here several times. Any unbounded local cache
      is bad, since it grows to infinite size. This change introduces
      a central sudog cache; local caches become fixed-size,
      with the only purpose of amortizing accesses to the
      central cache.
      
      The change required moving the sudog cache from mcache to P,
      because mcache is not scanned by the GC.
      
      Change-Id: I3bb7b14710354c026dcba28b3d3c8936a8db4e90
      Reviewed-on: https://go-review.googlesource.com/3742
      Reviewed-by: Keith Randall <khr@golang.org>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
  7. 19 Feb, 2015 5 commits
    • runtime: do not unmap work.spans until after checkmark phase · 5254b7e9
      Russ Cox authored
      This is causing crashes.
      
      Change-Id: I1832f33d114bc29894e491dd2baac45d7ab3a50d
      Reviewed-on: https://go-review.googlesource.com/5330
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: missed change from reorganization CL · 6c4b54f4
      Russ Cox authored
      That is, I accidentally dropped this change of Austin's
      when preparing my CL. I blame Git.
      
      Change-Id: I9dd772c84edefad96c4b16785fdd2dea04a4a0d6
      Reviewed-on: https://go-review.googlesource.com/5320
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: reorganize memory code · 484f801f
      Russ Cox authored
      Move code from malloc1.go, malloc2.go, mem.go, mgc0.go into
      appropriate locations.
      
      Factor mgc.go into mgc.go, mgcmark.go, mgcsweep.go, mstats.go.
      
      A lot of this code was in certain files because the right place was in
      a C file but it was written in Go, or vice versa. This is one step toward
      making things actually well-organized again.
      
      Change-Id: I6741deb88a7cfb1c17ffe0bcca3989e10207968f
      Reviewed-on: https://go-review.googlesource.com/5300
      Reviewed-by: Austin Clements <austin@google.com>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: switch to gcWork abstraction · 02dcdba7
      Austin Clements authored
      This converts the garbage collector from directly manipulating work
      buffers to using the new gcWork abstraction.
      
      The previous management of work buffers was rather ad hoc.  As a
      result, switching to the gcWork abstraction changes many details of
      work buffer management.
      
      If greyobject fills a work buffer, it can now pull from work.partial
      in addition to work.empty.
      
      Previously, gcDrain started with a partial or empty work buffer and
      fetched an empty work buffer if it filled its current buffer (in
      greyobject).  Now, gcDrain starts with a full work buffer and fetches
      a partial or empty work buffer if it fills its current buffer (in
      greyobject).  The original behavior was bad because gcDrain would
      immediately drop the empty work buffer returned by greyobject and
      fetch a full work buffer, which greyobject was likely to immediately
      overflow, fetching another empty work buffer, etc.  The new behavior
      isn't great at the start because greyobject is likely to immediately
      overflow the full buffer, but the steady-state behavior should be more
      stable.  Both before and after this change, gcDrain fetches a full
      work buffer if it drains its current buffer.  Basically all of these
      choices are bad; the right answer is to use a dual work buffer scheme.
      
      Previously, shade always fetched a work buffer (though usually from
      m.currentwbuf), even if the object was already marked.  Now it only
      fetches a work buffer if it actually greys an object.
      
      Change-Id: I8b880ed660eb63135236fa5d5678f0c1c041881f
      Reviewed-on: https://go-review.googlesource.com/5232
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: introduce higher-level GC work abstraction · b30d19de
      Austin Clements authored
      This introduces a producer/consumer abstraction for GC work pointers
      that internally handles the details of filling, draining, and
      shuffling work buffers.
      
      In addition to simplifying the GC code, this should make it easy for
      us to change how we use work buffers, including cleaning up how we use
      the work.partial queue, reintroducing a FIFO lookahead cache, adding
      prefetching, and using dual buffers to avoid flapping.
      
      This commit doesn't change any existing code.  The following commit
      will switch the garbage collector from explicit workbuf manipulation
      to gcWork.
      
      Change-Id: Ifbfe5fff45bf0362d6d7c3cecb061f0c9874077d
      Reviewed-on: https://go-review.googlesource.com/5231
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
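      The surface of the abstraction, reduced to a sketch. The real type
      has more methods and hides the empty/partial/full buffer lists; the
      helpers at the bottom are stubs standing in for that shuffling:

          package gcwork

          type workbuf struct {
              obj  [256]uintptr
              nobj int
          }

          // gcWork is a per-user producer/consumer view over the shared
          // buffer lists.
          type gcWork struct {
              wbuf *workbuf // current work buffer
          }

          // put enqueues a grey object, exchanging the buffer when full.
          func (w *gcWork) put(obj uintptr) {
              if w.wbuf == nil || w.wbuf.nobj == len(w.wbuf.obj) {
                  w.wbuf = exchangeForSpace(w.wbuf)
              }
              w.wbuf.obj[w.wbuf.nobj] = obj
              w.wbuf.nobj++
          }

          // tryGet dequeues an object, or returns 0 when no work remains.
          func (w *gcWork) tryGet() uintptr {
              if w.wbuf == nil || w.wbuf.nobj == 0 {
                  if w.wbuf = exchangeForWork(w.wbuf); w.wbuf == nil {
                      return 0
                  }
              }
              w.wbuf.nobj--
              return w.wbuf.obj[w.wbuf.nobj]
          }

          func exchangeForSpace(old *workbuf) *workbuf { return &workbuf{} }
          func exchangeForWork(old *workbuf) *workbuf  { return nil }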