  1. 02 Nov, 2019 1 commit
  2. 15 Mar, 2019 1 commit
    • runtime: introduce and consistently use setNsec for timespec · 0a7bc8f4
      Ian Lance Taylor authored
      The general code for setting a timespec value sometimes used set_nsec
      and sometimes used a combination of set_sec and set_nsec. Standardize
      on a setNsec function that takes a number of nanoseconds and splits
      them up to set the tv_sec and tv_nsec fields. Consistently mark
      setNsec as go:nosplit, since it has to be that way on some systems
      including Darwin and GNU/Linux. Consistently use timediv on 32-bit
      systems to help stay within split-stack limits on processors that
      don't have a 64-bit division instruction.
      
      Change-Id: I6396bb7ddbef171a96876bdeaf7a1c585a6d725b
      Reviewed-on: https://go-review.googlesource.com/c/go/+/167389
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      0a7bc8f4
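      A minimal sketch of the second/nanosecond split described above. The timespec struct, field names, and types below are illustrative stand-ins rather than the runtime's definitions; the real helper also handles platform-specific field widths and calls timediv on 32-bit systems.

      package main

      import "fmt"

      // timespec stands in for the platform-specific struct; the runtime's
      // type mirrors the C struct timespec for each OS.
      type timespec struct {
          sec  int64
          nsec int32
      }

      // setNsec splits a total nanosecond count into whole seconds and the
      // remaining nanoseconds, which is the standardization this CL applies.
      func (ts *timespec) setNsec(ns int64) {
          ts.sec = ns / 1e9
          ts.nsec = int32(ns % 1e9)
      }

      func main() {
          var ts timespec
          ts.setNsec(2_500_000_000)    // 2.5 seconds
          fmt.Println(ts.sec, ts.nsec) // 2 500000000
      }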
  3. 27 Feb, 2019 1 commit
  4. 13 Feb, 2019 1 commit
    • runtime: scan gp._panic in stack scan · af8f4062
      Cherry Zhang authored
      In runtime.gopanic, the _panic object p is stack allocated and
      referenced from gp._panic. With stack objects, p on stack is dead
      at the point preprintpanics runs. gp._panic points to p, but
      stack scan doesn't look at gp. Heap scan of gp does look at
      gp._panic, but it stops and ignores the pointer as it points to
      the stack. So whatever p points to may be collected and clobbered.
      We need to scan gp._panic explicitly during stack scan.
      
      To test it reliably, we introduce a GODEBUG mode "clobberfree",
      which clobbers the memory content when the GC frees an object.
      
      Fixes #30150.
      
      Change-Id: I11128298f03a89f817faa221421a9d332b41dced
      Reviewed-on: https://go-review.googlesource.com/c/161778
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      af8f4062
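      A rough, hypothetical illustration of the window this CL protects; the program and the gcErr type are invented for illustration only. The panic value lives on the heap and is reachable only through gp._panic while its Error method runs. Running such a program with GODEBUG=clobberfree=1 makes the GC overwrite freed objects, so any use of prematurely freed memory surfaces as corrupted data instead of silent misbehaviour.

      package main

      import "runtime"

      // gcErr's Error method forces a GC while the runtime is preparing to
      // print the panic, which is roughly the window the fix covers.
      type gcErr struct{ msg *string }

      func (e *gcErr) Error() string {
          runtime.GC()
          return *e.msg
      }

      func main() {
          // Run as: GODEBUG=clobberfree=1 ./prog
          s := "heap-allocated message"
          panic(&gcErr{msg: &s})
      }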
  5. 02 Jan, 2019 1 commit
  6. 02 Oct, 2018 1 commit
    • runtime: remove GODEBUG=gcrescanstacks=1 mode · 198440cc
      Austin Clements authored
      Currently, setting GODEBUG=gcrescanstacks=1 enables a debugging mode
      where the garbage collector re-scans goroutine stacks during mark
      termination. This was introduced in Go 1.8 to debug the hybrid write
      barrier, but I don't think we ever used it.
      
      Now it's one of the last sources of mark work during mark termination.
      This CL removes it.
      
      Updates #26903. This is preparation for unifying STW GC and concurrent
      GC.
      
      Updates #17503.
      
      Change-Id: I6ae04d3738aa9c448e6e206e21857a33ecd12acf
      Reviewed-on: https://go-review.googlesource.com/c/134777
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      198440cc
  7. 12 Sep, 2018 1 commit
    • runtime: convert initial timediv quotient increments to bitsets · 178a609f
      Emmanuel T Odeke authored
      At the very beginning of timediv, inside a for loop,
      we reduce the base value by at most (1<<31)-1, while
      incrementing the quotient result by 1<<uint(bit).
      However, since the quotient value was 0 to begin with,
      we are essentially just doing bitsets.
      
      This change is in the hot path of various concurrency and
      scheduling operations that require sleeping, waiting
      on mutexes and futexes etc. On the following OSes:
      * Dragonfly
      * FreeBSD
      * Linux
      * NetBSD
      * OpenBSD
      * Plan9
      * Windows
      
      and paired with architectures that provide the BTS instruction, this
      change shaves off a couple of nanoseconds per invocation of timediv.
      
      Fixes #27529
      
      Change-Id: Ia2fea5022c1109e02d86d1f962a3b0bd70967aa6
      Reviewed-on: https://go-review.googlesource.com/134231
      Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
      178a609f
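      A sketch of the loop shape this CL touches (simplified, not the runtime's exact timediv): because the quotient starts at zero and each bit position is decided at most once, adding 1<<bit and OR-ing in 1<<bit give the same result, and the OR can compile down to a single bit-set instruction.

      package main

      import "fmt"

      // timedivSketch divides v by div, returning a 32-bit quotient and
      // storing the remainder through rem.
      func timedivSketch(v int64, div int32, rem *int32) int32 {
          res := int32(0)
          for bit := 30; bit >= 0; bit-- {
              if v >= int64(div)<<uint(bit) {
                  v -= int64(div) << uint(bit)
                  res |= 1 << uint(bit) // previously: res += 1 << uint(bit)
              }
          }
          if rem != nil {
              *rem = int32(v)
          }
          return res
      }

      func main() {
          var nsec int32
          sec := timedivSketch(2_500_000_000, 1e9, &nsec)
          fmt.Println(sec, nsec) // 2 500000000
      }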
  8. 22 Aug, 2018 1 commit
  9. 13 Apr, 2018 1 commit
    • runtime/traceback: support tracking goroutine ancestor tracebacks with GODEBUG="tracebackancestors=N" · d9b006a7
      Eric Daniels authored
      
      Currently, collecting a stack trace via runtime.Stack captures the stack for the
      immediately running goroutines. This change extends those tracebacks to include
      the tracebacks of their ancestors. This is done at a low memory cost and is
      only used when the debug option tracebackancestors is set to a value greater than 0.
      
      Resolves #22289
      
      Change-Id: I7edacc62b2ee3bd278600c4a21052c351f313f3a
      Reviewed-on: https://go-review.googlesource.com/70993
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      d9b006a7
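      A small, hedged demo of the new knob. Running the program below with GODEBUG=tracebackancestors=5 should append to the crash traceback the recorded creation stacks of up to five ancestors of the panicking goroutine; with the option unset, the output stops at the goroutines themselves.

      package main

      import "time"

      func main() {
          // Run as: GODEBUG=tracebackancestors=5 ./prog
          go func() { // child
              go func() { // grandchild: panics, and its ancestry is reported
                  panic("show my ancestry")
              }()
          }()
          time.Sleep(time.Second)
      }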
  10. 30 Oct, 2017 1 commit
    • runtime: buffered write barrier implementation · e9079a69
      Austin Clements authored
      This implements runtime support for buffered write barriers on amd64.
      The buffered write barrier has a fast path that simply enqueues
      pointers in a per-P buffer. Unlike the current write barrier, this
      fast path is *not* a normal Go call and does not require the compiler
      to spill general-purpose registers or put arguments on the stack. When
      the buffer fills up, the write barrier takes the slow path, which
      spills all general purpose registers and flushes the buffer. We don't
      allow safe-points or stack splits while this frame is active, so it
      doesn't matter that we have no type information for the spilled
      registers in this frame.
      
      One minor complication is cgocheck=2 mode, which uses the write
      barrier to detect Go pointers being written to non-Go memory. We
      obviously can't buffer this, so instead we set the buffer to its
      minimum size, forcing the write barrier into the slow path on every
      call. For this specific case, we pass additional information as
      arguments to the flush function. This also requires enabling the cgo
      write barrier slightly later during runtime initialization, after Ps
      (and the per-P write barrier buffers) have been initialized.
      
      The code in this CL is not yet active. The next CL will modify the
      compiler to generate calls to the new write barrier.
      
      This reduces the average cost of the write barrier by roughly a factor
      of 4, which will pay for the cost of having it enabled more of the
      time after we make the GC pacer less aggressive. (Benchmarks will be
      in the next CL.)
      
      Updates #14951.
      Updates #22460.
      
      Change-Id: I396b5b0e2c5e5c4acfd761a3235fd15abadc6cb1
      Reviewed-on: https://go-review.googlesource.com/73711
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      e9079a69
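      A toy model of the buffering scheme described above, not the runtime's wbBuf (which is filled from compiler-generated code without bounds checks). It only shows the shape of the design: the fast path appends a pointer to a per-P buffer, and a full buffer triggers the slow path that flushes the batch to the collector.

      package main

      import "fmt"

      // wbBufSketch stands in for a per-P write barrier buffer.
      type wbBufSketch struct {
          buf  []uintptr
          next int
      }

      // put is the fast path: record the pointer and bump the index.
      func (b *wbBufSketch) put(p uintptr) {
          b.buf[b.next] = p
          b.next++
          if b.next == len(b.buf) {
              b.flush() // slow path once the buffer is full
          }
      }

      // flush hands the batch to the collector; here it just reports it.
      func (b *wbBufSketch) flush() {
          fmt.Printf("flushing %d buffered pointers\n", b.next)
          b.next = 0
      }

      func main() {
          b := &wbBufSketch{buf: make([]uintptr, 4)}
          for i := uintptr(1); i <= 10; i++ {
              b.put(i)
          }
          b.flush() // drain the remainder
      }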
  11. 08 Aug, 2017 1 commit
    • runtime: remove unused prefetch functions · 7045e6f6
      Martin Möhrmann authored
      The only non-test user of the assembler prefetch functions is the
      heapBits.prefetch function, which is itself unused.
      
      The runtime prefetch functions have no functionality on most platforms
      and are not inlineable since they are written in assembler. The function
      call overhead eliminates the performance gains that could be achieved with
      prefetching and would degrade performance for platforms where the functions
      are no-ops.
      
      If prefetch functions are needed back again later they can be improved
      by avoiding the function call overhead and implementing them as intrinsics.
      
      Change-Id: I52c553cf3607ffe09f0441c6e7a0a818cb21117d
      Reviewed-on: https://go-review.googlesource.com/44370
      Run-TryBot: Martin Möhrmann <moehrmann@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      7045e6f6
  12. 21 Mar, 2017 1 commit
  13. 07 Mar, 2017 1 commit
  14. 14 Feb, 2017 2 commits
  15. 01 Nov, 2016 2 commits
    • runtime: access modules via a slice · 54ec7b07
      David Crawshaw authored
      The introduction of -buildmode=plugin means modules can be added to a
      Go program while it is running. This means there exists some time
      while the program is running with the module is on the moduledata
      linked list, but it has not been initialized to the satisfaction of
      other parts of the runtime. Notably, the GC.
      
      This CL adds a new way of accessing modules: an activeModules function.
      It returns a slice of modules that is built in the background and
      atomically swapped in. The parts of the runtime that need to wait on
      module initialization can use this slice instead of the linked list.
      
      Fixes #17455
      
      Change-Id: I04790fd07e40c7295beb47cea202eb439206d33d
      Reviewed-on: https://go-review.googlesource.com/32357
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      54ec7b07
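      The runtime uses its own internal atomics, but the publish pattern the CL describes looks roughly like the sketch below: build a complete new slice off to the side, then make it visible to readers in a single atomic store, so no reader ever observes a partially initialized module. The type and function names here are illustrative.

      package main

      import (
          "fmt"
          "sync/atomic"
      )

      // module stands in for the runtime's moduledata.
      type module struct{ name string }

      // active always holds a fully initialized []*module.
      var active atomic.Value

      func activeModules() []*module {
          mods, _ := active.Load().([]*module)
          return mods
      }

      // publishModule rebuilds the slice with the newly initialized module
      // and swaps it in atomically.
      func publishModule(m *module) {
          old := activeModules()
          fresh := make([]*module, len(old), len(old)+1)
          copy(fresh, old)
          fresh = append(fresh, m)
          active.Store(fresh)
      }

      func main() {
          publishModule(&module{name: "main"})
          publishModule(&module{name: "plugin1"})
          for _, m := range activeModules() {
              fmt.Println(m.name)
          }
      }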
    • runtime: improve atoi implementation · d7b34d5f
      Martin Möhrmann authored
      - Adds overflow checks
      - Adds parsing of negative integers
      - Adds boolean return value to signal parsing errors
      - Adds atoi32 for parsing of integers that fit in an int32
      - Adds tests
      
      Handling of errors to provide error messages
      at the call sites is left to future CLs.
      
      Updates #17718
      
      Change-Id: I3cacd0ab1230b9efc5404c68edae7304d39bcbc0
      Reviewed-on: https://go-review.googlesource.com/32390
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      d7b34d5f
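      A sketch of the parsing behaviour the list above describes, assuming the usual decimal rules: a leading '-' is accepted, and any non-digit or overflow makes the boolean result false. For simplicity this version caps the range at ±(1<<63 - 1) and omits the atoi32 variant; it is not the runtime's internal atoi.

      package main

      import "fmt"

      func atoiSketch(s string) (int64, bool) {
          if s == "" {
              return 0, false
          }
          neg := false
          if s[0] == '-' {
              neg = true
              s = s[1:]
              if s == "" {
                  return 0, false
              }
          }
          const maxInt64 = 1<<63 - 1
          var n int64
          for i := 0; i < len(s); i++ {
              c := s[i]
              if c < '0' || c > '9' {
                  return 0, false // not a digit
              }
              d := int64(c - '0')
              if n > maxInt64/10 || n*10 > maxInt64-d {
                  return 0, false // would overflow int64
              }
              n = n*10 + d
          }
          if neg {
              n = -n
          }
          return n, true
      }

      func main() {
          fmt.Println(atoiSketch("-42"))                 // -42 true
          fmt.Println(atoiSketch("9223372036854775808")) // 0 false (overflow)
          fmt.Println(atoiSketch("12a"))                 // 0 false (bad digit)
      }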
  16. 28 Oct, 2016 1 commit
    • runtime: disable stack rescanning by default · bd640c88
      Austin Clements authored
      With the hybrid barrier in place, we can now disable stack rescanning
      by default. This commit adds a "gcrescanstacks" GODEBUG variable that
      is off by default but can be set to re-enable STW stack rescanning.
      The plan is to leave this off but available in Go 1.8 for debugging
      and as a fallback.
      
      With this change, worst-case mark termination time at GOMAXPROCS=12
      *not* including time spent stopping the world (which is still
      unbounded) is reliably under 100 µs, with a 95%ile around 50 µs in
      every benchmark I tried (the go1 benchmarks, the x/benchmarks garbage
      benchmark, and the gcbench activegs and rpc benchmarks). Including
      time spent stopping the world usually adds about 20 µs to total STW
      time at GOMAXPROCS=12, but I've seen it add around 150 µs in these
      benchmarks when a goroutine takes time to reach a safe point (see
      issue #10958) or when stopping the world races with goroutine
      switches. At GOMAXPROCS=1, where this isn't an issue, worst case STW
      is typically 30 µs.
      
      The go-gcbench activegs benchmark is designed to stress large numbers
      of dirty stacks. This commit reduces 95%ile STW time for 500k dirty
      stacks by nearly three orders of magnitude, from 150ms to 195µs.
      
      This has little effect on the throughput of the go1 benchmarks or the
      x/benchmarks benchmarks.
      
      name         old time/op  new time/op  delta
      XGarbage-12  2.31ms ± 0%  2.32ms ± 1%  +0.28%  (p=0.001 n=17+16)
      XJSON-12     12.4ms ± 0%  12.4ms ± 0%  +0.41%  (p=0.000 n=18+18)
      XHTTP-12     11.8µs ± 0%  11.8µs ± 1%    ~     (p=0.492 n=20+18)
      
      It reduces the tail latency of the x/benchmarks HTTP benchmark:
      
      name      old p50-time  new p50-time  delta
      XHTTP-12    489µs ± 0%    491µs ± 1%  +0.54%  (p=0.000 n=20+18)
      
      name      old p95-time  new p95-time  delta
      XHTTP-12    957µs ± 1%    960µs ± 1%  +0.28%  (p=0.002 n=20+17)
      
      name      old p99-time  new p99-time  delta
      XHTTP-12   1.76ms ± 1%   1.64ms ± 1%  -7.20%  (p=0.000 n=20+18)
      
      Comparing to the beginning of the hybrid barrier implementation
      ("runtime: parallelize STW mcache flushing") shows that the hybrid
      barrier trades a small performance impact for much better STW latency,
      as expected. The magnitude of the performance impact is generally
      small:
      
      name                      old time/op    new time/op    delta
      BinaryTree17-12              2.37s ± 1%     2.42s ± 1%  +2.04%  (p=0.000 n=19+18)
      Fannkuch11-12                2.84s ± 0%     2.72s ± 0%  -4.00%  (p=0.000 n=19+19)
      FmtFprintfEmpty-12          44.2ns ± 1%    45.2ns ± 1%  +2.20%  (p=0.000 n=17+19)
      FmtFprintfString-12          130ns ± 1%     134ns ± 0%  +2.94%  (p=0.000 n=18+16)
      FmtFprintfInt-12             114ns ± 1%     117ns ± 0%  +3.01%  (p=0.000 n=19+15)
      FmtFprintfIntInt-12          176ns ± 1%     182ns ± 0%  +3.17%  (p=0.000 n=20+15)
      FmtFprintfPrefixedInt-12     186ns ± 1%     187ns ± 1%  +1.04%  (p=0.000 n=20+19)
      FmtFprintfFloat-12           251ns ± 1%     250ns ± 1%  -0.74%  (p=0.000 n=17+18)
      FmtManyArgs-12               746ns ± 1%     761ns ± 0%  +2.08%  (p=0.000 n=19+20)
      GobDecode-12                6.57ms ± 1%    6.65ms ± 1%  +1.11%  (p=0.000 n=19+20)
      GobEncode-12                5.59ms ± 1%    5.65ms ± 0%  +1.08%  (p=0.000 n=17+17)
      Gzip-12                      223ms ± 1%     223ms ± 1%  -0.31%  (p=0.006 n=20+20)
      Gunzip-12                   38.0ms ± 0%    37.9ms ± 1%  -0.25%  (p=0.009 n=19+20)
      HTTPClientServer-12         77.5µs ± 1%    78.9µs ± 2%  +1.89%  (p=0.000 n=20+20)
      JSONEncode-12               14.7ms ± 1%    14.9ms ± 0%  +0.75%  (p=0.000 n=20+20)
      JSONDecode-12               53.0ms ± 1%    55.9ms ± 1%  +5.54%  (p=0.000 n=19+19)
      Mandelbrot200-12            3.81ms ± 0%    3.81ms ± 1%  +0.20%  (p=0.023 n=17+19)
      GoParse-12                  3.17ms ± 1%    3.18ms ± 1%    ~     (p=0.057 n=20+19)
      RegexpMatchEasy0_32-12      71.7ns ± 1%    70.4ns ± 1%  -1.77%  (p=0.000 n=19+20)
      RegexpMatchEasy0_1K-12       946ns ± 0%     946ns ± 0%    ~     (p=0.405 n=18+18)
      RegexpMatchEasy1_32-12      67.2ns ± 2%    67.3ns ± 2%    ~     (p=0.732 n=20+20)
      RegexpMatchEasy1_1K-12       374ns ± 1%     378ns ± 1%  +1.14%  (p=0.000 n=18+19)
      RegexpMatchMedium_32-12      107ns ± 1%     107ns ± 1%    ~     (p=0.259 n=18+20)
      RegexpMatchMedium_1K-12     34.2µs ± 1%    34.5µs ± 1%  +1.03%  (p=0.000 n=18+18)
      RegexpMatchHard_32-12       1.77µs ± 1%    1.79µs ± 1%  +0.73%  (p=0.000 n=19+18)
      RegexpMatchHard_1K-12       53.6µs ± 1%    54.2µs ± 1%  +1.10%  (p=0.000 n=19+19)
      Template-12                 61.5ms ± 1%    63.9ms ± 0%  +3.96%  (p=0.000 n=18+18)
      TimeParse-12                 303ns ± 1%     300ns ± 1%  -1.08%  (p=0.000 n=19+20)
      TimeFormat-12                318ns ± 1%     320ns ± 0%  +0.79%  (p=0.000 n=19+19)
      Revcomp-12 (*)               509ms ± 3%     504ms ± 0%    ~     (p=0.967 n=7+12)
      [Geo mean]                  54.3µs         54.8µs       +0.88%
      
      (*) Revcomp is highly non-linear, so I only took samples with 2
      iterations.
      
      name         old time/op  new time/op  delta
      XGarbage-12  2.25ms ± 0%  2.32ms ± 1%  +2.74%  (p=0.000 n=16+16)
      XJSON-12     11.6ms ± 0%  12.4ms ± 0%  +6.81%  (p=0.000 n=18+18)
      XHTTP-12     11.6µs ± 1%  11.8µs ± 1%  +1.62%  (p=0.000 n=17+18)
      
      Updates #17503.
      
      Updates #17099, since you can't have a rescan list bug if there's no
      rescan list. I'm not marking it as fixed, since gcrescanstacks can
      still be set to re-enable the rescan lists.
      
      Change-Id: I6e926b4c2dbd4cd56721869d4f817bdbb330b851
      Reviewed-on: https://go-review.googlesource.com/31766
      Reviewed-by: Rick Hudson <rlh@golang.org>
      bd640c88
  17. 26 May, 2016 1 commit
  18. 05 May, 2016 1 commit
  19. 18 Apr, 2016 1 commit
  20. 13 Apr, 2016 1 commit
    • cmd/compile, etc: store method tables as offsets · 7d469179
      David Crawshaw authored
      This CL introduces the typeOff type and a lookup method of the same
      name that can turn a typeOff offset into an *rtype.
      
      In a typical Go binary (built with buildmode=exe, pie, c-archive, or
      c-shared), there is one moduledata and all typeOff values are offsets
      relative to firstmoduledata.types. This makes computing the pointer
      cheap in typical programs.
      
      With buildmode=shared (and one day, buildmode=plugin) there are
      multiple modules whose relative offset is determined at runtime.
      We identify a type in the general case by the pair of the original
      *rtype that references it and its typeOff value. We determine
      the module from the original pointer, and then use the typeOff from
      there to compute the final *rtype.
      
      To ensure there is only one *rtype representing each type, the
      runtime initializes a typemap for each module, using any identical
      type from an earlier module when resolving that offset. This means
      that types computed from an offset match the type mapped by the
      pointer dynamic relocations.
      
      A series of followup CLs will replace other *rtype values with typeOff
      (and name/*string with nameOff).
      
      For types created at runtime by reflect, type offsets are treated as
      global IDs and reference into a reflect offset map kept by the runtime.
      
      darwin/amd64:
      	cmd/go:  -57KB (0.6%)
      	jujud:  -557KB (0.8%)
      
      linux/amd64 PIE:
      	cmd/go: -361KB (3.0%)
      	jujud:  -3.5MB (4.2%)
      
      For #6853.
      
      Change-Id: Icf096fd884a0a0cb9f280f46f7a26c70a9006c96
      Reviewed-on: https://go-review.googlesource.com/21285
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: David Crawshaw <crawshaw@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      7d469179
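      The arithmetic behind the change is simple: a type reference becomes a 4-byte offset from the owning module's type data instead of an 8-byte pointer, and lookup is base plus offset (plus, in the multi-module case, a typemap consultation). The sketch below models only the single-module base-plus-offset step, with illustrative names rather than the runtime's.

      package main

      import (
          "encoding/binary"
          "fmt"
      )

      // typeOff is a 4-byte offset into a module's type section,
      // stored where a *rtype pointer used to be.
      type typeOff uint32

      type moduleSketch struct {
          types []byte // stand-in for the module's read-only type data
      }

      // resolve turns an offset back into the record it refers to.
      func (m *moduleSketch) resolve(off typeOff) []byte {
          return m.types[off:]
      }

      func main() {
          m := &moduleSketch{types: make([]byte, 32)}
          binary.LittleEndian.PutUint32(m.types[8:], 0xCAFE) // fake type record at offset 8

          off := typeOff(8) // this is all that needs to be stored and relocated
          fmt.Printf("%#x\n", binary.LittleEndian.Uint32(m.resolve(off))) // 0xcafe
      }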
  21. 12 Apr, 2016 1 commit
    • cmd/link, etc: store typelinks as offsets · f028b9f9
      David Crawshaw authored
      This is the first in a series of CLs to replace the use of pointers
      in binary read-only data with offsets.
      
      In standard Go binaries these CLs have a small effect, shrinking
      8-byte pointers to 4-bytes. In position-independent code, it also
      saves the dynamic relocation for the pointer. This has a significant
      effect on the binary size when building as PIE, c-archive, or
      c-shared.
      
      darwin/amd64:
      	cmd/go: -12KB (0.1%)
      	jujud:  -82KB (0.1%)
      
      linux/amd64 PIE:
      	cmd/go:  -86KB (0.7%)
      	jujud:  -569KB (0.7%)
      
      For #6853.
      
      Change-Id: Iad5625bbeba58dabfd4d334dbee3fcbfe04b2dcf
      Reviewed-on: https://go-review.googlesource.com/21284
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: David Crawshaw <crawshaw@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      f028b9f9
  22. 07 Mar, 2016 1 commit
  23. 02 Mar, 2016 1 commit
    • all: single space after period. · 5fea2ccc
      Brad Fitzpatrick authored
      The tree's pretty inconsistent about single space vs double space
      after a period in documentation. Make it consistently a single space,
      per earlier decisions. This means contributors won't be confused by
      misleading precedence.
      
      This CL doesn't use go/doc to parse. It only addresses // comments.
      It was generated with:
      
      $ perl -i -npe 's,^(\s*// .+[a-z]\.)  +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.)  +([A-Z])')
      $ go test go/doc -update
      
      Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
      Reviewed-on: https://go-review.googlesource.com/20022
      Reviewed-by: Rob Pike <r@golang.org>
      Reviewed-by: Dave Day <djd@golang.org>
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      5fea2ccc
  24. 24 Feb, 2016 1 commit
  25. 18 Dec, 2015 1 commit
  26. 25 Nov, 2015 1 commit
  27. 16 Nov, 2015 1 commit
    • runtime: add optional expensive check for invalid cgo pointer passing · be1ef467
      Ian Lance Taylor authored
      If you set GODEBUG=cgocheck=2 the runtime package will use the write
      barrier to detect cases where a Go program writes a Go pointer into
      non-Go memory.  In conjunction with the existing cgo checks, and the
      not-yet-implemented cgo check for exported functions, this should
      reliably detect all cases (that do not import the unsafe package) in
      which a Go pointer is incorrectly shared with C code.  This check is
      optional because it turns on the write barrier at all times, which is
      known to be expensive.
      
      Update #12416.
      
      Change-Id: I549d8b2956daa76eac853928e9280e615d6365f4
      Reviewed-on: https://go-review.googlesource.com/16899
      Reviewed-by: Russ Cox <rsc@golang.org>
      be1ef467
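      A hedged example of the kind of bug the mode is meant to catch: a Go pointer stored into C-allocated memory. With GODEBUG=cgocheck=2 the write barrier should flag the store below and crash with a diagnostic; without it the mistake goes unnoticed and can hide the Go object from the garbage collector.

      package main

      /*
      #include <stdlib.h>
      */
      import "C"

      import "unsafe"

      func main() {
          // Run as: GODEBUG=cgocheck=2 ./prog
          // Allocate non-Go memory and write a Go pointer into it,
          // which violates the cgo pointer passing rules.
          p := (*unsafe.Pointer)(C.malloc(C.size_t(unsafe.Sizeof(uintptr(0)))))
          goValue := new(int)
          *p = unsafe.Pointer(goValue)
          C.free(unsafe.Pointer(p))
      }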
  28. 15 Nov, 2015 1 commit
  29. 12 Nov, 2015 1 commit
  30. 10 Nov, 2015 2 commits
  31. 30 Oct, 2015 1 commit
    • runtime: introduce GOTRACEBACK=single, now the default · bf1de1b1
      Russ Cox authored
      Abandon (but still support) the old numbering system.
      
      GOTRACEBACK=none is old 0
      GOTRACEBACK=single is the new behavior
      GOTRACEBACK=all is old 1
      GOTRACEBACK=system is old 2
      GOTRACEBACK=crash is unchanged
      
      See doc comment change in runtime1.go for details.
      
      Filed #13107 to decide whether to change the default back to GOTRACEBACK=all for the Go 1.6 release.
      If you run into programs where printing only the current goroutine omits
      needed information, please add details in a comment on that issue.
      
      Fixes #12366.
      
      Change-Id: I82ca8b99b5d86dceb3f7102d38d2659d45dbe0db
      Reviewed-on: https://go-review.googlesource.com/16512
      Reviewed-by: Austin Clements <austin@google.com>
      bf1de1b1
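      A tiny program for comparing the levels listed above. Under the new default (GOTRACEBACK=single) the crash output should show only the panicking goroutine; GOTRACEBACK=all should also print the idle helper goroutine; system and crash add runtime-internal detail, with crash additionally aborting for a core dump where the OS supports it.

      package main

      import "time"

      func main() {
          go func() { // helper goroutine, idle when the crash happens
              time.Sleep(time.Hour)
          }()
          time.Sleep(10 * time.Millisecond)

          // Try: GOTRACEBACK=single ./prog   (default: current goroutine only)
          //      GOTRACEBACK=all ./prog      (also prints the helper above)
          panic("compare GOTRACEBACK levels")
      }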
  32. 23 Oct, 2015 1 commit
  33. 30 Aug, 2015 1 commit
  34. 29 Jul, 2015 1 commit
  35. 11 Jul, 2015 1 commit
    • runtime: abort on fatal errors and panics in c-shared and c-archive modes · b3a8b057
      Elias Naur authored
      The default behaviour for fatal errors and runtime panics is to dump
      the goroutine stack traces and exit with code 2. However, when the process is
      owned by foreign code, it is surprising and inappropriate to suddenly exit
      the whole process, even on fatal errors. Instead, re-use the crash behaviour
      from GOTRACEBACK=crash and abort.
      
      The motivating use case is issue #11382, where an Android crash reporter
      is confused by an exiting process, but I believe the aborting behaviour
      is appropriate for all cases where Go does not own the process.
      
      The change is simple and contained and will enable reliable crash reporting
      for Android apps in Go 1.5, but I'll leave it to others to judge whether it
      is too late for Go 1.5.
      
      Fixes #11382
      
      Change-Id: I477328e1092f483591c99da1fbb8bc4411911785
      Reviewed-on: https://go-review.googlesource.com/12032
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      b3a8b057
  36. 15 Jun, 2015 1 commit
  37. 07 May, 2015 1 commit