  1. 21 Apr, 2015 9 commits
      runtime: track time spent in mutator assists · 100da609
      Austin Clements authored
      This time is tracked per P and periodically flushed to the global
      controller state. This will be used to compute mutator assist
      utilization in order to schedule background GC work.
      
      Change-Id: Ib94f90903d426a02cf488bf0e2ef67a068eb3eec
      Reviewed-on: https://go-review.googlesource.com/8837
      Reviewed-by: Rick Hudson <rlh@golang.org>
      runtime: proportional mutator assist · 4b2fde94
      Austin Clements authored
      Currently, mutator allocation periodically assists the garbage
      collector by performing a small, fixed amount of scanning work.
      However, to control heap growth, mutators need to perform scanning
      work *proportional* to their allocation rate.
      
      This change implements proportional mutator assists. This uses the
      scan work estimate computed by the garbage collector at the beginning
      of each cycle to compute how much scan work must be performed per
      allocation byte to complete the estimated scan work by the time the
      heap reaches the goal size. When allocation triggers an assist, it
      uses this ratio and the amount allocated since the last assist to
      compute the assist work, then attempts to steal as much of this work
      as possible from the background collector's credit, and then performs
      any remaining scan work itself.
      
      Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
      Reviewed-on: https://go-review.googlesource.com/8836
      Reviewed-by: Rick Hudson <rlh@golang.org>
      runtime: make gcDrainN in terms of scan work · 028f9728
      Austin Clements authored
      Currently, the "n" in gcDrainN is in terms of objects to scan. This is
      used by gchelpwork to perform a limited amount of work on allocation,
      but is a pretty arbitrary way to bound this amount of work since the
      number of objects has little relation to how long they take to scan.
      
      Modify gcDrainN to perform a fixed amount of scan work instead. For
      now, gchelpwork still performs a fairly arbitrary amount of scan work,
      but at least this is much more closely related to how long the work
      will take. Shortly, we'll use this to precisely control the scan work
      performed by mutator assists during allocation to achieve the heap
      size goal.
      
      Change-Id: I3cd07fe0516304298a0af188d0ccdf621d4651cc
      Reviewed-on: https://go-review.googlesource.com/8835
      Reviewed-by: Rick Hudson <rlh@golang.org>
      runtime: track background scan work credit · 8e24283a
      Austin Clements authored
      This tracks scan work done by background GC in a global pool. Mutator
      assists will draw on this credit to avoid doing work when background
      GC is staying ahead.
      
      Unlike the other GC controller tracking variables, this will be both
      written and read throughout the cycle. Hence, we can't arbitrarily
      delay updates like we can for scan work and bytes marked. However, we
      still want to minimize contention, so this global credit pool is
      allowed some error from the "true" amount of credit. Background GC
      accumulates credit locally up to a limit and only then flushes to the
      global pool. Similarly, mutator assists will draw from the credit pool
      in batches.
      
      Change-Id: I1aa4fc604b63bf53d1ee2a967694dffdfc3e255e
      Reviewed-on: https://go-review.googlesource.com/8834
      Reviewed-by: Rick Hudson <rlh@golang.org>
      runtime: implement GC scan work estimator · 4e9fc0df
      Austin Clements authored
      This implements tracking the scan work ratio of a GC cycle and using
      this to estimate the scan work that will be required by the next GC
      cycle. Currently this estimate is unused; it will be used to drive
      mutator assists.
      
      Change-Id: I8685b59d89cf1d83eddfc9b30d84da4e3a7f4b72
      Reviewed-on: https://go-review.googlesource.com/8833
      Reviewed-by: Rick Hudson <rlh@golang.org>
      runtime: track scan work performed during concurrent mark · 571ebae6
      Austin Clements authored
      This tracks the amount of scan work in terms of scanned pointers
      during the concurrent mark phase. We'll use this information to
      estimate scan work for the next cycle.
      
      Currently, the work counter is aggregated locally in gcWork, and
      dispose atomically folds it into a global work counter. dispose happens
      relatively infrequently, so the contention on the global counter
      should be low. If this turns out to be an issue, we can reduce the
      number of disposes, and if it's still a problem, we can switch to
      per-P counters.
      
      Change-Id: Iac0364c466ee35fab781dbbbe7970a5f3c4e1fc1
      Reviewed-on: https://go-review.googlesource.com/8832
      Reviewed-by: Rick Hudson <rlh@golang.org>
      runtime: atomic ops for int64 · fb9fd2bd
      Austin Clements authored
      These currently use portable implementations in terms of their uint64
      counterparts.
      
      Change-Id: Icba5f7134cfcf9d0429edabcdd73091d97e5e905
      Reviewed-on: https://go-review.googlesource.com/8831
      Reviewed-by: Rick Hudson <rlh@golang.org>
      reflect: implement ArrayOf · 918fdae3
      Sebastien Binet authored
      This change exposes reflect.ArrayOf to create new reflect.Type array
      types at runtime, when given a reflect.Type element.
      
      - reflect: implement ArrayOf
      - reflect: tests for ArrayOf
      - runtime: document that typeAlg is used by reflect and must be kept
        synchronized
      
      Fixes #5996.
      
      Change-Id: I5d07213364ca915c25612deea390507c19461758
      Reviewed-on: https://go-review.googlesource.com/4111
      Reviewed-by: Keith Randall <khr@golang.org>
      runtime/pprof: disable flaky TestTraceFutileWakeup on linux/ppc64le · c0fa9e3f
      Matthew Dempsky authored
      Update #10512.
      
      Change-Id: Ifdc59c3a5d8aba420b34ae4e37b3c2315dd7c783
      Reviewed-on: https://go-review.googlesource.com/9162
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
  2. 20 Apr, 2015 4 commits
      runtime: Speed up heapBitsForObject · 899a4ad4
      Rick Hudson authored
      Optimized heapBitsForObject by special casing
      objects whose size is a power of two. When a
      span holding such objects is initialized I
      added a mask that, when ANDed with an interior pointer,
      yields the base of the object. For the garbage
      benchmark this resulted in CPU_CLK_UNHALTED in
      heapBitsForObject going from 7.7% down to 5.9%
      of the total, INST_RETIRED went from 12.2 -> 8.7.
      
      Here are the benchmarks that changed by at least plus or minus 1%.
      
      benchmark                          old ns/op      new ns/op      delta
      BenchmarkFmtFprintfString          249            221            -11.24%
      BenchmarkFmtFprintfInt             247            223            -9.72%
      BenchmarkFmtFprintfEmpty           76.5           69.6           -9.02%
      BenchmarkBinaryTree17              4106631412     3744550160     -8.82%
      BenchmarkFmtFprintfFloat           424            399            -5.90%
      BenchmarkGoParse                   4484421        4242115        -5.40%
      BenchmarkGobEncode                 8803668        8449107        -4.03%
      BenchmarkFmtManyArgs               1494           1436           -3.88%
      BenchmarkGobDecode                 10431051       10032606       -3.82%
      BenchmarkFannkuch11                2591306713     2517400464     -2.85%
      BenchmarkTimeParse                 361            371            +2.77%
      BenchmarkJSONDecode                70620492       68830357       -2.53%
      BenchmarkRegexpMatchMedium_1K      54693          53343          -2.47%
      BenchmarkTemplate                  90008879       91929940       +2.13%
      BenchmarkTimeFormat                380            387            +1.84%
      BenchmarkRegexpMatchEasy1_32       111            113            +1.80%
      BenchmarkJSONEncode                21359159       21007583       -1.65%
      BenchmarkRegexpMatchEasy1_1K       603            613            +1.66%
      BenchmarkRegexpMatchEasy0_32       127            129            +1.57%
      BenchmarkFmtFprintfIntInt          399            393            -1.50%
      BenchmarkRegexpMatchEasy0_1K       373            378            +1.34%
      
      Change-Id: I78e297161026f8b5cc7507c965fd3e486f81ed29
      Reviewed-on: https://go-review.googlesource.com/8980
      Reviewed-by: Austin Clements <austin@google.com>
      runtime: replace func-based write barrier skipping with type-based · 181e26b9
      Russ Cox authored
      This CL revises CL 7504 to use explicit uintptr types for the
      struct fields that are going to be updated sometimes without
      write barriers. The result is that the fields are now updated *always*
      without write barriers.
      
      This approach has two important properties:
      
      1) Now the GC never looks at the field, so if the missing reference
      could cause a problem, it will do so all the time, not just when the
      write barrier is missed at just the right moment.
      
      2) Now a write barrier never happens for the field, avoiding the
      (correct) detection of inconsistent write barriers when GODEBUG=wbshadow=1.
      
      Change-Id: Iebd3962c727c0046495cc08914a8dc0808460e0e
      Reviewed-on: https://go-review.googlesource.com/9019
      Reviewed-by: Austin Clements <austin@google.com>
      Run-TryBot: Russ Cox <rsc@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      runtime: save registers in linux/{386,amd64} lib entry point · 357a0130
      Ian Lance Taylor authored
      The callee-saved registers must be saved because for the c-shared case
      this code is invoked from C code in the system library, and that code
      expects the registers to be saved.  The tests were passing because in
      the normal case the code calls a cgo function that naturally saves
      callee-saved registers anyhow.  However, it fails when the code takes
      the non-cgo path.
      
      Change-Id: I9c1f5e884f5a72db9614478049b1863641c8b2b9
      Reviewed-on: https://go-review.googlesource.com/9114
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
      runtime: no deadlock error if buildmode=c-archive or c-shared · 725aa345
      Ian Lance Taylor authored
      Change-Id: I4ee6dac32bd3759aabdfdc92b235282785fbcca9
      Reviewed-on: https://go-review.googlesource.com/9083
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
  3. 17 Apr, 2015 7 commits
  4. 16 Apr, 2015 5 commits
  5. 15 Apr, 2015 5 commits
  6. 14 Apr, 2015 5 commits
  7. 13 Apr, 2015 5 commits