  1. 09 Apr, 2019 1 commit
    • runtime: preempt a goroutine which calls a lot of short system calls · 4166ff42
      Andrei Vagin authored
      A goroutine should be preempted if it runs for 10ms without blocking.
      We found that this doesn't work for goroutines that make short system calls.

      For example, the following program can get stuck for seconds without this fix:
      
      $ cat main.go
      package main
      
      import (
      	"runtime"
      	"syscall"
      )
      
      func main() {
      	runtime.GOMAXPROCS(1)
      	c := make(chan int)
      	go func() {
      		c <- 1
      		for {
      			t := syscall.Timespec{
      				Nsec: 300,
      			}
      			if true {
      				syscall.Nanosleep(&t, nil)
      			}
      		}
      	}()
      	<-c
      }
      
      $ time go run main.go
      
      real	0m8.796s
      user	0m0.367s
      sys	0m0.893s
      
      Updates #10958
      
      Change-Id: Id3be54d3779cc28bfc8b33fe578f13778f1ae2a2
      Reviewed-on: https://go-review.googlesource.com/c/go/+/170138
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
  2. 21 Dec, 2018 1 commit
  3. 19 Dec, 2018 1 commit
    • runtime: don't clear lockedExt on locked M when G exits · d0f8a751
      Michael Anthony Knyszek authored
      When a locked M had its G exit without calling UnlockOSThread, its
      lockedExt was getting cleared. Unfortunately, this meant that during
      P handoff, if a new M was started, it might get forked (on most OSes
      besides Windows) from the locked M, which could have kernel state
      attached to it.
      
      To solve this, just don't clear lockedExt. At the point where the
      locked M has its G exit, it will also exit in accordance with the
      LockOSThread API. So, we can safely assume that its lockedExt state
      will no longer be used. For the case of the main thread, which just
      gets wedged instead of exiting, it's probably better for it to keep
      the locked marker since that more accurately represents its state.
      
      Fixes #28979.
      
      Change-Id: I7d3d71dd65bcb873e9758086d2cbcb9a06429b0f
      Reviewed-on: https://go-review.googlesource.com/c/153078
      Run-TryBot: Michael Knyszek <mknyszek@google.com>
      Reviewed-by: Austin Clements <austin@google.com>
  4. 30 Apr, 2018 1 commit
    • all: skip unsupported tests for js/wasm · e3c68477
      Richard Musiol authored
      The general policy for the current state of js/wasm is that it only
      has to support tests that are also supported by nacl.
      
      The test nilptr3.go makes assumptions about which nil checks can be
      removed. Since WebAssembly does not signal on reading a null pointer,
      all nil checks have to be explicit.
      
      Updates #18892
      
      Change-Id: I06a687860b8d22ae26b1c391499c0f5183e4c485
      Reviewed-on: https://go-review.googlesource.com/110096
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
  5. 21 Nov, 2017 2 commits
    • runtime: fix build on non-Linux platforms · 1e3f563b
      Brad Fitzpatrick authored
      CL 78538 was updated after running TryBots to depend on
      syscall.Nanosleep, which isn't available on all non-Linux platforms.
      
      Change-Id: I1fa615232b3920453431861310c108b208628441
      Reviewed-on: https://go-review.googlesource.com/79175
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: only sleep before stealing work from a running P · 868c8b37
      Jamie Liu authored
      The sleep in question does not make sense if the stolen-from P cannot
      run the stolen G. The usleep(3) call has been observed to delay execution
      of woken Gs by ~60us; skipping it reduces the wakeup-to-execution latency
      to ~7us in these cases, improving CPU utilization.
      
      Benchmarks added by this change:
      
      name                             old time/op  new time/op  delta
      WakeupParallelSpinning/0s-12     14.4µs ± 1%  14.3µs ± 1%     ~     (p=0.227 n=19+20)
      WakeupParallelSpinning/1µs-12    18.3µs ± 0%  18.3µs ± 1%     ~     (p=0.950 n=20+19)
      WakeupParallelSpinning/2µs-12    22.3µs ± 1%  22.3µs ± 1%     ~     (p=0.670 n=20+18)
      WakeupParallelSpinning/5µs-12    31.7µs ± 0%  31.7µs ± 0%     ~     (p=0.460 n=20+17)
      WakeupParallelSpinning/10µs-12   51.8µs ± 0%  51.8µs ± 0%     ~     (p=0.883 n=20+20)
      WakeupParallelSpinning/20µs-12   91.9µs ± 0%  91.9µs ± 0%     ~     (p=0.245 n=20+20)
      WakeupParallelSpinning/50µs-12    214µs ± 0%   214µs ± 0%     ~     (p=0.509 n=19+20)
      WakeupParallelSpinning/100µs-12   335µs ± 0%   335µs ± 0%   -0.05%  (p=0.006 n=17+15)
      WakeupParallelSyscall/0s-12       228µs ± 2%   129µs ± 1%  -43.32%  (p=0.000 n=20+19)
      WakeupParallelSyscall/1µs-12      232µs ± 1%   131µs ± 1%  -43.60%  (p=0.000 n=19+20)
      WakeupParallelSyscall/2µs-12      236µs ± 1%   133µs ± 1%  -43.44%  (p=0.000 n=18+19)
      WakeupParallelSyscall/5µs-12      248µs ± 2%   139µs ± 1%  -43.68%  (p=0.000 n=18+19)
      WakeupParallelSyscall/10µs-12     263µs ± 3%   150µs ± 2%  -42.97%  (p=0.000 n=18+20)
      WakeupParallelSyscall/20µs-12     281µs ± 2%   170µs ± 1%  -39.43%  (p=0.000 n=19+19)
      WakeupParallelSyscall/50µs-12     345µs ± 4%   246µs ± 7%  -28.85%  (p=0.000 n=20+20)
      WakeupParallelSyscall/100µs-12    460µs ± 5%   350µs ± 4%  -23.85%  (p=0.000 n=20+20)
      
      Benchmarks associated with the change that originally added this sleep
      (see https://golang.org/s/go15gomaxprocs):
      
      name        old time/op  new time/op  delta
      Chain       19.4µs ± 2%  19.3µs ± 1%    ~     (p=0.101 n=19+20)
      ChainBuf    19.5µs ± 2%  19.4µs ± 2%    ~     (p=0.840 n=19+19)
      Chain-2     19.9µs ± 1%  19.9µs ± 2%    ~     (p=0.734 n=19+19)
      ChainBuf-2  20.0µs ± 2%  20.0µs ± 2%    ~     (p=0.175 n=19+17)
      Chain-4     20.3µs ± 1%  20.1µs ± 1%  -0.62%  (p=0.010 n=19+18)
      ChainBuf-4  20.3µs ± 1%  20.2µs ± 1%  -0.52%  (p=0.023 n=19+19)
      Powser       2.09s ± 1%   2.10s ± 3%    ~     (p=0.908 n=19+19)
      Powser-2     2.21s ± 1%   2.20s ± 1%  -0.35%  (p=0.010 n=19+18)
      Powser-4     2.31s ± 2%   2.31s ± 2%    ~     (p=0.578 n=18+19)
      Sieve        13.6s ± 1%   13.6s ± 1%    ~     (p=0.909 n=17+18)
      Sieve-2      8.02s ±52%   7.28s ±15%    ~     (p=0.336 n=20+16)
      Sieve-4      4.00s ±35%   3.98s ±26%    ~     (p=0.654 n=20+18)
      
      Change-Id: I58edd8ce01075859d871e2348fc0833e9c01f70f
      Reviewed-on: https://go-review.googlesource.com/78538
      Reviewed-by: Austin Clements <austin@google.com>
  6. 11 Oct, 2017 1 commit
    • runtime: terminate locked OS thread if its goroutine exits · 4f34a529
      Austin Clements authored
      runtime.LockOSThread is sometimes used when the caller intends to put
      the OS thread into an unusual state. In this case, we never want to
      return this thread to the runtime thread pool. However, currently
      exiting the goroutine implicitly unlocks its OS thread.
      
      Fix this by terminating the locked OS thread when its goroutine exits,
      rather than simply returning it to the pool.
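
      As an illustration of the behavior described above, here is a minimal
      sketch (mine, not from the CL; the "unusual state" is hypothetical) of a
      goroutine that wires itself to an OS thread, dirties thread state, and
      exits without unlocking, relying on the runtime to retire that thread:

      package main

      import (
      	"runtime"
      	"time"
      )

      func main() {
      	go func() {
      		// Wire this goroutine to its OS thread.
      		runtime.LockOSThread()
      		// Imagine putting the thread into an unusual state here
      		// (e.g. switching namespaces or scheduling policy via syscalls).
      		// Returning without UnlockOSThread means the thread must not go
      		// back into the runtime's thread pool; with this change the
      		// runtime terminates it instead.
      	}()
      	time.Sleep(100 * time.Millisecond) // give the goroutine time to run and exit
      }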
      
      Fixes #20395.
      
      Change-Id: I3dcec63b200957709965f7240dc216fa84b62ad9
      Reviewed-on: https://go-review.googlesource.com/46038
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
  7. 05 Oct, 2017 1 commit
    • runtime: make LockOSThread/UnlockOSThread nested · c85b12b5
      Austin Clements authored
      Currently, there is a single bit for LockOSThread, so two calls to
      LockOSThread followed by one call to UnlockOSThread will unlock the
      thread. There's evidence (#20458) that this is almost never what
      people want or expect and it makes these APIs very hard to use
      correctly or reliably.
      
      Change this so LockOSThread/UnlockOSThread can be nested and the
      calling goroutine will not be unwired until UnlockOSThread has been
      called as many times as LockOSThread has. This should fix the vast
      majority of incorrect uses while having no effect on the vast majority
      of correct uses.
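
      A minimal sketch of the nested semantics described above (the
      LockOSThread/UnlockOSThread calls are the real runtime API; the program
      around them is only illustrative):

      package main

      import "runtime"

      func main() {
      	runtime.LockOSThread()
      	runtime.LockOSThread()   // nested lock: the calls are now counted
      	runtime.UnlockOSThread() // still wired to the OS thread
      	runtime.UnlockOSThread() // unwired only after matching the second lock
      }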
      
      Fixes #20458.
      
      Change-Id: I1464e5e9a0ea4208fbb83638ee9847f929a2bacb
      Reviewed-on: https://go-review.googlesource.com/45752
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
  8. 05 Jun, 2017 1 commit
  9. 25 Apr, 2017 1 commit
  10. 19 May, 2016 1 commit
    • runtime: fix goroutine priority elevation · 44497eba
      Austin Clements authored
      Currently it's possible for user code to exploit the high scheduler
      priority of the GC worker in conjunction with the runnext optimization
      to elevate a user goroutine to high priority so it will always run
      even if there are other runnable goroutines.
      
      For example, if a goroutine is in a tight allocation loop, the
      following can happen:
      
      1. Goroutine 1 allocates, triggering a GC.
      2. G 1 attempts an assist, but fails and blocks.
      3. The scheduler runs the GC worker, since it is high priority.
         Note that this also starts a new scheduler quantum.
      4. The GC worker does enough work to satisfy the assist.
      5. The GC worker readies G 1, putting it in runnext.
      6. GC finishes and the scheduler runs G 1 from runnext, giving it
         the rest of the GC worker's quantum.
      7. Go to 1.
      
      Even if there are other goroutines on the run queue, they never get a
      chance to run in the above sequence. This requires a confluence of
      circumstances that make it unlikely, though not impossible, that it
      would happen in "real" code. In the test added by this commit, we
      force this confluence by setting GOMAXPROCS to 1 and GOGC to 1 so it's
      easy for the test to repeatedly trigger GC and wake from a blocked
      assist.
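
      The sketch below is not the actual test added by this commit; it only
      shows, under the stated setup (GOMAXPROCS=1 and GOGC=1, here via
      debug.SetGCPercent), the shape of a tight allocation loop that repeatedly
      triggers GC next to another goroutine that should still get to run:

      package main

      import (
      	"runtime"
      	"runtime/debug"
      )

      func main() {
      	runtime.GOMAXPROCS(1)
      	debug.SetGCPercent(1) // programmatic equivalent of GOGC=1
      	done := make(chan struct{})
      	go func() { // the other runnable goroutine that must not starve
      		close(done)
      	}()
      	var sink []byte
      	for {
      		sink = make([]byte, 1<<10) // tight allocation loop driving GC and assists
      		select {
      		case <-done:
      			_ = sink
      			return
      		default:
      		}
      	}
      }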
      
      We fix this by making GC always put user goroutines at the end of the
      run queue, instead of in runnext. This makes it so user code can't
      piggy-back on the GC's high priority to make a user goroutine act like
      it has high priority. The only other situation where GC wakes user
      goroutines is waking all blocked assists at the end, but this uses the
      global run queue and hence doesn't have this problem.
      
      Fixes #15706.
      
      Change-Id: I1589dee4b7b7d0c9c8575ed3472226084dfce8bc
      Reviewed-on: https://go-review.googlesource.com/23172
      Reviewed-by: Rick Hudson <rlh@golang.org>
  11. 03 May, 2016 1 commit
    • runtime: fix CPU underutilization · fcd7c02c
      Dmitry Vyukov authored
      Runqempty is a critical predicate for the scheduler. If runqempty spuriously
      returns true, then the scheduler can fail to schedule an arbitrary number of
      runnable goroutines on idle Ps for an arbitrarily long time. With the addition
      of runnext, the runqempty predicate became broken (it can spuriously return
      true). Consider that runnext is not nil and the main array is empty. Runqempty
      observes that the array is empty, then it is descheduled for some time.
      Then the queue owner pushes another element to the queue, evicting runnext
      into the array. Then the queue owner pops runnext. Then runqempty resumes,
      observes that runnext is nil, and returns true. But there was no point
      in time when the queue was empty.

      Fix the runqempty predicate so it does not spuriously return true.
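
      One way to avoid the spurious result is to take a consistent snapshot:
      re-read the tail after reading runnext and retry if it moved. The sketch
      below uses a simplified stand-in queue type and is not the runtime's
      actual code:

      package main

      import (
      	"sync/atomic"
      	"unsafe"
      )

      // runq is a simplified stand-in for a P's local run queue.
      type runq struct {
      	head uint32
      	tail uint32
      	next unsafe.Pointer // next runnable G; nil when empty
      }

      // queueEmpty reports emptiness without spuriously returning true: if the
      // tail changed while we were reading, the snapshot is stale, so retry.
      func queueEmpty(q *runq) bool {
      	for {
      		head := atomic.LoadUint32(&q.head)
      		tail := atomic.LoadUint32(&q.tail)
      		next := atomic.LoadPointer(&q.next)
      		if tail == atomic.LoadUint32(&q.tail) {
      			return head == tail && next == nil
      		}
      	}
      }

      func main() {
      	println(queueEmpty(&runq{})) // true for a fresh, empty queue
      }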
      
      Change-Id: Ifb7d75a699101f3ff753c4ce7c983cf08befd31e
      Reviewed-on: https://go-review.googlesource.com/20858
      Reviewed-by: Austin Clements <austin@google.com>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
  12. 25 Mar, 2016 1 commit
    • runtime: improve randomized stealing logic · ea0386f8
      Dmitry Vyukov authored
      During random stealing we steal 4*GOMAXPROCS times from random procs.
      One would expect that most of the time we check all procs this way,
      but due to the low-quality PRNG we actually miss procs with frightening
      probability. Below are results of a modelling experiment with 1e6 tries:
      
      GOMAXPROCS = 2 : missed 1 procs 7944 times
      
      GOMAXPROCS = 3 : missed 1 procs 101620 times
      GOMAXPROCS = 3 : missed 2 procs 3571 times
      
      GOMAXPROCS = 4 : missed 1 procs 63916 times
      GOMAXPROCS = 4 : missed 2 procs 61 times
      GOMAXPROCS = 4 : missed 3 procs 16 times
      
      GOMAXPROCS = 5 : missed 1 procs 133136 times
      GOMAXPROCS = 5 : missed 2 procs 1025 times
      GOMAXPROCS = 5 : missed 3 procs 101 times
      GOMAXPROCS = 5 : missed 4 procs 15 times
      
      GOMAXPROCS = 8 : missed 1 procs 151765 times
      GOMAXPROCS = 8 : missed 2 procs 5057 times
      GOMAXPROCS = 8 : missed 3 procs 1726 times
      GOMAXPROCS = 8 : missed 4 procs 68 times
      
      GOMAXPROCS = 12 : missed 1 procs 199081 times
      GOMAXPROCS = 12 : missed 2 procs 27489 times
      GOMAXPROCS = 12 : missed 3 procs 3113 times
      GOMAXPROCS = 12 : missed 4 procs 233 times
      GOMAXPROCS = 12 : missed 5 procs 9 times
      
      GOMAXPROCS = 16 : missed 1 procs 237477 times
      GOMAXPROCS = 16 : missed 2 procs 30037 times
      GOMAXPROCS = 16 : missed 3 procs 9466 times
      GOMAXPROCS = 16 : missed 4 procs 1334 times
      GOMAXPROCS = 16 : missed 5 procs 192 times
      GOMAXPROCS = 16 : missed 6 procs 5 times
      GOMAXPROCS = 16 : missed 7 procs 1 times
      GOMAXPROCS = 16 : missed 8 procs 1 times
      
      A missed proc won't lead to underutilization because we check all procs
      again after dropping P. But it can lead to an unpleasant situation where
      we miss a proc, drop P, check all procs, discover work, acquire P,
      miss the proc again, and repeat.

      Improve the stealing logic to cover all procs.
      Also, don't enter spinning mode and try to steal when there is nobody around.
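
      A sketch of one standard way to visit every proc exactly once in a
      pseudo-random order: pick a random start and a stride coprime with the
      number of procs, so the walk is a full cycle. This illustrates the idea
      only and is not the CL's code:

      package main

      import (
      	"fmt"
      	"math/rand"
      )

      func gcd(a, b int) int {
      	for b != 0 {
      		a, b = b, a%b
      	}
      	return a
      }

      // stealOrder returns a permutation of 0..n-1: because the stride is
      // coprime with n, start+i*stride (mod n) hits every proc exactly once.
      func stealOrder(n int) []int {
      	start := rand.Intn(n)
      	stride := 1 + rand.Intn(n)
      	for gcd(stride, n) != 1 {
      		stride = 1 + rand.Intn(n)
      	}
      	order := make([]int, 0, n)
      	for i := 0; i < n; i++ {
      		order = append(order, (start+i*stride)%n)
      	}
      	return order
      }

      func main() {
      	fmt.Println(stealOrder(5)) // e.g. [3 1 4 2 0]: every proc appears once
      }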
      
      Change-Id: Ibb6b122cc7fb836991bad7d0639b77c807aab4c2
      Reviewed-on: https://go-review.googlesource.com/20836
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
  13. 18 Mar, 2016 1 commit
  14. 08 Mar, 2016 1 commit
  15. 27 Jan, 2016 2 commits
  16. 13 Jan, 2016 1 commit
  17. 08 Jan, 2016 2 commits
  18. 29 Dec, 2015 1 commit
  19. 11 Dec, 2015 1 commit
    • runtime: remove unnecessary wakeups of worker threads · fb6f8a96
      Dmitry Vyukov authored
      Currently we wake up new worker threads whenever we pass
      through the scheduler with nmspinning==0. This leads to
      lots of unnecessary thread wake-ups.
      Instead, let only spinning threads wake up new spinning threads.
      
      For the following program:
      
      package main
      import "runtime"
      func main() {
      	for i := 0; i < 1e7; i++ {
      		runtime.Gosched()
      	}
      }
      
      Before:
      $ time ./test
      real	0m4.278s
      user	0m7.634s
      sys	0m1.423s
      
      $ strace -c ./test
      % time     seconds  usecs/call     calls    errors syscall
       99.93    9.314936           3   2685009     17536 futex
      
      After:
      $ time ./test
      real	0m1.200s
      user	0m1.181s
      sys	0m0.024s
      
      $ strace -c ./test
      % time     seconds  usecs/call     calls    errors syscall
        3.11    0.000049          25         2           futex
      
      Fixes #13527
      
      Change-Id: Ia1f5bf8a896dcc25d8b04beb1f4317aa9ff16f74
      Reviewed-on: https://go-review.googlesource.com/17540
      Reviewed-by: Austin Clements <austin@google.com>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
  20. 22 Jul, 2015 1 commit
  21. 28 May, 2015 1 commit
  22. 27 Apr, 2015 1 commit
  23. 24 Apr, 2015 2 commits
    • runtime: yield time slice to most recently readied G · e870f06c
      Austin Clements authored
      Currently, when the runtime ready()s a G, it adds it to the end of the
      current P's run queue and continues running. If there are many other
      things in the run queue, this can result in a significant delay before
      the ready()d G actually runs and can hurt fairness when other Gs in
      the run queue are CPU hogs. For example, if there are three Gs sharing
      a P, one of which is a CPU hog that never voluntarily gives up the P
      and the other two of which are doing small amounts of work and
      communicating back and forth on an unbuffered channel, the two
      communicating Gs will get very little CPU time.
      
      Change this so that when G1 ready()s G2 and then blocks, the scheduler
      immediately hands off the remainder of G1's time slice to G2. In the
      above example, the two communicating Gs will now act as a unit and
      together get half of the CPU time, while the CPU hog gets the other
      half of the CPU time.
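
      A sketch of the scenario described above (only the shape of the workload;
      the real benchmark added in the previous commit is BenchmarkPingPongHog):
      one CPU hog plus two goroutines ping-ponging on unbuffered channels, all
      sharing a single P.

      package main

      import (
      	"runtime"
      	"sync/atomic"
      )

      func main() {
      	runtime.GOMAXPROCS(1)

      	var stop int32
      	go func() { // CPU hog that never voluntarily gives up the P
      		for atomic.LoadInt32(&stop) == 0 {
      		}
      	}()

      	ping, pong := make(chan int), make(chan int)
      	go func() { // one half of the communicating pair
      		for v := range ping {
      			pong <- v
      		}
      	}()

      	// The other half: with the handoff, each ready()d partner inherits the
      	// readier's remaining time slice instead of waiting behind the hog.
      	for i := 0; i < 1000; i++ {
      		ping <- i
      		<-pong
      	}
      	atomic.StoreInt32(&stop, 1)
      	close(ping)
      }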
      
      This fixes the problem demonstrated by the ping-pong benchmark added
      in the previous commit:
      
      benchmark                old ns/op     new ns/op     delta
      BenchmarkPingPongHog     684287        825           -99.88%
      
      On the x/benchmarks suite, this change improves the performance of
      garbage by ~6% (for GOMAXPROCS=1 and 4), and json by 28% and 36% for
      GOMAXPROCS=1 and 4. It has negligible effect on heap size.
      
      This has no effect on the go1 benchmark suite since those benchmarks
      are mostly single-threaded.
      
      Change-Id: I858a08eaa78f702ea98a5fac99d28a4ac91d339f
      Reviewed-on: https://go-review.googlesource.com/9289
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: benchmark for ping-pong in the presence of a CPU hog · da0e37fa
      Austin Clements authored
      This benchmark demonstrates a current problem with the scheduler where
      a set of frequently communicating goroutines get very little CPU time
      in the presence of another goroutine that hogs that CPU, even if one
      of those communicating goroutines is always runnable.
      
      Currently it takes about 0.5 milliseconds to switch between
      ping-ponging goroutines in the presence of a CPU hog:
      
      BenchmarkPingPongHog	    2000	    684287 ns/op
      
      Change-Id: I278848c84f778de32344921ae8a4a8056e4898b0
      Reviewed-on: https://go-review.googlesource.com/9288
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
  24. 13 Feb, 2015 1 commit
    • cmd/gc: transform closure calls to function calls · c4ee44b7
      Dmitry Vyukov authored
      Currently we always create context objects for closures that capture variables.
      However, this is completely unnecessary for direct calls of closures
      (whether it is func()(), defer func()(), or go func()()).
      This change transforms any OCALLFUNC(OCLOSURE) into a normal function call.
      Closed-over variables become function arguments.
      This transformation is especially beneficial for go func(),
      because we do not need to allocate a context object on the heap.
      But it makes direct closure calls a bit faster as well (see BenchmarkClosureCall).

      At the implementation level this required introducing yet another compiler pass.
      However, the pass iterates only over xtop, so it should not be an issue.
      The transformation consists of two parts: closure transformation and call-site
      transformation. We can't run these parts on different sides of escape analysis,
      because the tree state is inconsistent. We can't do both parts during typecheck,
      because there we don't yet know how to capture variables and don't have the call site.
      We can't do both parts during the walk of OCALLFUNC, because we may walk the
      OCLOSURE body earlier.
      So now the capturevars pass only decides how to capture variables
      (this info is required for escape analysis). The new transformclosure
      pass, which runs just before order/walk, does all transformations
      of a closure. And later, the walk of OCALLFUNC(OCLOSURE) transforms the call site.
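
      An illustrative before/after (hand-written, not compiler output) of the
      transformation described above: a directly called closure becomes an
      ordinary function whose closed-over variables are passed as arguments.

      package main

      import "time"

      // anon is the hand-written equivalent of the closure below after the
      // transformation: closed-over variables become parameters, so no
      // context object needs to be heap-allocated for the go statement.
      func anon(x, y int) {
      	println(x + y)
      }

      func main() {
      	x, y := 1, 2

      	go func() { // direct call of a closure: OCALLFUNC(OCLOSURE)
      		println(x + y)
      	}()
      	go anon(x, y) // equivalent call after the transformation

      	time.Sleep(10 * time.Millisecond) // crude wait so both goroutines run
      }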
      
      benchmark                            old ns/op     new ns/op     delta
      BenchmarkClosureCall                 4.89          3.09          -36.81%
      BenchmarkCreateGoroutinesCapture     1634          1294          -20.81%
      
      benchmark                            old allocs     new allocs     delta
      BenchmarkCreateGoroutinesCapture     6              2              -66.67%
      
      benchmark                            old bytes     new bytes     delta
      BenchmarkCreateGoroutinesCapture     176           48            -72.73%
      
      Change-Id: Ic85e1706e18c3235cc45b3c0c031a9c1cdb7a40e
      Reviewed-on: https://go-review.googlesource.com/4050
      Reviewed-by: Russ Cox <rsc@golang.org>
  25. 29 Jan, 2015 1 commit
    • cmd/gc: capture variables by value · 0e80b2e0
      Dmitry Vyukov authored
      The language specification says that variables are captured by reference,
      and that is what the gc compiler does. However, in lots of cases it is
      possible to capture variables by value under the hood without
      affecting the visible behavior of programs. For example, consider
      the following typical pattern:
      
      	func (o *Obj) requestMany(urls []string) []Result {
      		wg := new(sync.WaitGroup)
      		wg.Add(len(urls))
      		res := make([]Result, len(urls))
      		for i := range urls {
      			i := i
      			go func() {
      				res[i] = o.requestOne(urls[i])
      				wg.Done()
      			}()
      		}
      		wg.Wait()
      		return res
      	}
      
      Currently o, wg, res, and i are captured by reference, causing 3+len(urls)
      allocations (e.g. the PPARAM o is promoted to PPARAMREF and moved to the heap).
      But all of them can be captured by value without changing behavior.

      This change implements a simple strategy for capturing by value:
      if a captured variable is not addrtaken and is never assigned to,
      then it is captured by value (it is effectively const).
      This simple strategy turned out to be very effective:
      ~80% of all captures in the standard library are turned into value captures.
      The remaining 20% are mostly in defers and non-escaping closures,
      that is, they do not cause allocations anyway.
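
      A small illustration (mine, not from the CL) of the rule above: a
      variable that is never reassigned and whose address is not taken can be
      captured by value, while one the closure writes to must still be captured
      by reference.

      package main

      import "fmt"

      func main() {
      	x := 42 // never reassigned, address not taken: may be captured by value
      	y := 0  // assigned inside the closure: must stay a reference capture

      	done := make(chan struct{})
      	go func() {
      		fmt.Println(x) // x is effectively const here
      		y++            // writing y forces capture by reference
      		close(done)
      	}()
      	<-done
      	fmt.Println(y) // observes the closure's write: prints 1
      }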
      
      benchmark                                    old allocs     new allocs     delta
      BenchmarkCompressedZipGarbage                153            126            -17.65%
      BenchmarkEncodeDigitsSpeed1e4                91             69             -24.18%
      BenchmarkEncodeDigitsSpeed1e5                178            129            -27.53%
      BenchmarkEncodeDigitsSpeed1e6                1510           1051           -30.40%
      BenchmarkEncodeDigitsDefault1e4              100            75             -25.00%
      BenchmarkEncodeDigitsDefault1e5              193            139            -27.98%
      BenchmarkEncodeDigitsDefault1e6              1420           985            -30.63%
      BenchmarkEncodeDigitsCompress1e4             100            75             -25.00%
      BenchmarkEncodeDigitsCompress1e5             193            139            -27.98%
      BenchmarkEncodeDigitsCompress1e6             1420           985            -30.63%
      BenchmarkEncodeTwainSpeed1e4                 109            81             -25.69%
      BenchmarkEncodeTwainSpeed1e5                 211            151            -28.44%
      BenchmarkEncodeTwainSpeed1e6                 1588           1097           -30.92%
      BenchmarkEncodeTwainDefault1e4               103            77             -25.24%
      BenchmarkEncodeTwainDefault1e5               199            143            -28.14%
      BenchmarkEncodeTwainDefault1e6               1324           917            -30.74%
      BenchmarkEncodeTwainCompress1e4              103            77             -25.24%
      BenchmarkEncodeTwainCompress1e5              190            137            -27.89%
      BenchmarkEncodeTwainCompress1e6              1327           919            -30.75%
      BenchmarkConcurrentDBExec                    16223          16220          -0.02%
      BenchmarkConcurrentStmtQuery                 17687          16182          -8.51%
      BenchmarkConcurrentStmtExec                  5191           5186           -0.10%
      BenchmarkConcurrentTxQuery                   17665          17661          -0.02%
      BenchmarkConcurrentTxExec                    15154          15150          -0.03%
      BenchmarkConcurrentTxStmtQuery               17661          16157          -8.52%
      BenchmarkConcurrentTxStmtExec                3677           3673           -0.11%
      BenchmarkConcurrentRandom                    14000          13614          -2.76%
      BenchmarkManyConcurrentQueries               25             22             -12.00%
      BenchmarkDecodeComplex128Slice               318            252            -20.75%
      BenchmarkDecodeFloat64Slice                  318            252            -20.75%
      BenchmarkDecodeInt32Slice                    318            252            -20.75%
      BenchmarkDecodeStringSlice                   2318           2252           -2.85%
      BenchmarkDecode                              11             8              -27.27%
      BenchmarkEncodeGray                          64             56             -12.50%
      BenchmarkEncodeNRGBOpaque                    64             56             -12.50%
      BenchmarkEncodeNRGBA                         67             58             -13.43%
      BenchmarkEncodePaletted                      68             60             -11.76%
      BenchmarkEncodeRGBOpaque                     64             56             -12.50%
      BenchmarkGoLookupIP                          153            139            -9.15%
      BenchmarkGoLookupIPNoSuchHost                508            466            -8.27%
      BenchmarkGoLookupIPWithBrokenNameServer      245            226            -7.76%
      BenchmarkClientServer                        62             59             -4.84%
      BenchmarkClientServerParallel4               62             59             -4.84%
      BenchmarkClientServerParallel64              62             59             -4.84%
      BenchmarkClientServerParallelTLS4            79             76             -3.80%
      BenchmarkClientServerParallelTLS64           112            109            -2.68%
      BenchmarkCreateGoroutinesCapture             10             6              -40.00%
      BenchmarkAfterFunc                           1006           1005           -0.10%
      
      Fixes #6632.
      
      Change-Id: I0cd51e4d356331d7f3c5f447669080cd19b0d2ca
      Reviewed-on: https://go-review.googlesource.com/3166
      Reviewed-by: Russ Cox <rsc@golang.org>
  26. 08 Sep, 2014 1 commit
  27. 06 Sep, 2014 1 commit
  28. 15 Jul, 2014 1 commit
  29. 24 Feb, 2014 1 commit
  30. 21 Jan, 2014 2 commits
  31. 01 Aug, 2013 1 commit
    • runtime: make new tests shorter in short mode · d8bbbd25
      Dmitriy Vyukov authored
      We see timeouts in these tests on some platforms,
      but not on others. The hypothesis is that
      the problematic platforms are slow uniprocessors.
      Stack traces do not suggest that the process
      is completely hung, and it is able to schedule
      the alarm goroutine. And if it actually hangs,
      we will still be able to detect that.
      
      R=golang-dev, r
      CC=golang-dev
      https://golang.org/cl/12253043
  32. 30 Jul, 2013 2 commits
  33. 18 Jul, 2013 1 commit
  34. 15 Jul, 2013 1 commit