1. 21 Jul, 2016 1 commit
  2. 20 Jul, 2016 4 commits
    • runtime: support smaller physical pages than PhysPageSize · f407ca92
      Austin Clements authored
      Most operations need an upper bound on the physical page size, which
      is what sys.PhysPageSize is for (this is checked at runtime init on
      Linux). However, a few operations need a *lower* bound on the physical
      page size. Introduce a "minPhysPageSize" constant to act as this lower
      bound and use it where it makes sense:
      
      1) In addrspace_free, we have to query each page in the given range.
         Currently we increment by the upper bound on the physical page
         size, which means we may skip over pages if the true size is
         smaller. Worse, we currently pass a result buffer that only has
         enough room for one page. If there are actually multiple pages in
         the range passed to mincore, the kernel will overflow this buffer.
   Fix these problems by incrementing by the lower bound on the
         physical page size and by passing "1" for the length, which the
         kernel will round up to the true physical page size.
      
      2) In the write barrier, the bad pointer check tests for pointers to
         the first physical page, which are presumably small integers
         masquerading as pointers. However, if physical pages are smaller
         than we think, we may have legitimate pointers below
         sys.PhysPageSize. Hence, use minPhysPageSize for this test since
         pointers should never fall below that.
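      As a concrete illustration of item 1, here is a minimal, hypothetical Go
      sketch (not the actual runtime code; the constants and helper are
      assumptions for illustration) of why the probe stride matters: stepping
      through a range by the upper-bound page size visits fewer addresses than
      stepping by the lower bound, so a loop striding by sys.PhysPageSize can
      skip pages when the kernel's true page size is smaller.

      ```go
      package main

      import "fmt"

      const (
      	physPageSize    = 64 << 10 // upper bound assumed by the runtime (e.g. 64kB on ARM64)
      	minPhysPageSize = 4 << 10  // lower bound introduced by this change (e.g. 4kB)
      )

      // pagesProbed counts how many addresses a loop over [start, start+length)
      // visits at the given stride. Each visit stands in for one mincore query;
      // the real fix also passes length 1 to mincore so the kernel rounds it up
      // to the true physical page size.
      func pagesProbed(start, length, stride uintptr) int {
      	n := 0
      	for addr := start; addr < start+length; addr += stride {
      		n++
      	}
      	return n
      }

      func main() {
      	const length = 256 << 10 // probe a 256kB range
      	// Striding by the upper bound probes only 4 addresses, missing pages
      	// if the true page size is 4kB:
      	fmt.Println("probes at PhysPageSize stride:", pagesProbed(0, length, physPageSize))
      	// Striding by the lower bound probes 64 addresses, covering every
      	// possible 4kB page in the range:
      	fmt.Println("probes at minPhysPageSize stride:", pagesProbed(0, length, minPhysPageSize))
      }
      ```

      With a 4kB kernel page size, the old loop would consult only one page in
      every sixteen, and its single-page result buffer could be overflowed by
      the kernel for any query spanning multiple true pages.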
      
      In particular, this applies to ARM64 and MIPS. The runtime is
      configured to use 64kB pages on ARM64, but by default Linux uses 4kB
      pages. Similarly, the runtime assumes 16kB pages on MIPS, but both 4kB
      and 16kB kernel configurations are common. This also applies to ARM on
      systems where the runtime is recompiled to deal with a larger page
      size. It is also a step toward making the runtime use only a
      dynamically-queried page size.
      
      Change-Id: I1fdfd18f6e7cbca170cc100354b9faa22fde8a69
      Reviewed-on: https://go-review.googlesource.com/25020
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      Run-TryBot: Austin Clements <austin@google.com>
    • runtime/race: fix memory leak · d73ca5f4
      Dmitry Vyukov authored
      The leak was reported internally on a server canary that runs for days:
      after one day the server consumed 5.6GB; after six days, 12.2GB.
      The leak is exposed by the added benchmark.
      The leak is fixed upstream in:
      http://llvm.org/viewvc/llvm-project/compiler-rt/trunk/lib/tsan/rtl/tsan_rtl_thread.cc?view=diff&r1=276102&r2=276103&pathrev=276103
      
      Fixes #16441
      
      Change-Id: I9d4f0adef48ca6cf2cd781b9a6990ad4661ba49b
      Reviewed-on: https://go-review.googlesource.com/25091
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
    • runtime: add as many extra M's as needed · 50048a4e
      Ian Lance Taylor authored
      When a non-Go thread calls into Go, the runtime needs an M to run the Go
      code. The runtime keeps a list of extra M's available. When the last
      extra M is allocated, the needextram field is set to tell it to allocate
      a new extra M as soon as it is running in Go. This ensures that an extra
      M will always be available for the next thread.
      
      However, if many threads need an extra M at the same time, this
      serializes them all. One thread will get an extra M with the needextram
      field set. All the other threads will see that there is no M available
      and will go to sleep. The one thread that succeeded will create a new
      extra M. One lucky thread will get it. All the other threads will see
      that there is no M available and will go to sleep. The effect is a
      thundering herd, as all the threads looking for an extra M go through
      the process one by one. This seems to have a particularly bad effect on
      the FreeBSD scheduler for some reason.
      
      With this change, we track the number of threads waiting for an M, and
      create all of them as soon as one thread gets through. This still means
      that all the threads will fight for the lock to pick up the next M. But
      at least each thread that gets the lock will succeed, instead of going
      to sleep only to fight again.
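      The batching idea can be sketched outside the runtime as a counter of
      waiters drained in one step. This is a hypothetical illustration, not the
      runtime's actual needextram machinery; the type and method names are
      invented for the example.

      ```go
      package main

      import (
      	"fmt"
      	"sync"
      )

      // extraMPool models the list of extra M's: a count of free M's plus a
      // count of threads that found the pool empty.
      type extraMPool struct {
      	mu      sync.Mutex
      	free    int // extra M's currently available
      	waiters int // threads waiting for an extra M
      }

      // take hands out an extra M if one is free; otherwise it records the
      // demand instead of leaving each waiter to trigger one allocation at a
      // time.
      func (p *extraMPool) take() bool {
      	p.mu.Lock()
      	defer p.mu.Unlock()
      	if p.free > 0 {
      		p.free--
      		return true
      	}
      	p.waiters++
      	return false
      }

      // replenish creates one M for every recorded waiter in a single batch,
      // so each waiter that reacquires the lock finds an M waiting instead of
      // going back to sleep to fight again.
      func (p *extraMPool) replenish() int {
      	p.mu.Lock()
      	defer p.mu.Unlock()
      	n := p.waiters
      	p.free += n
      	p.waiters = 0
      	return n
      }

      func main() {
      	p := &extraMPool{free: 1}
      	p.take() // the last extra M is taken
      	for i := 0; i < 3; i++ {
      		p.take() // three more callers find the pool empty
      	}
      	fmt.Println("created in one batch:", p.replenish())
      }
      ```

      The waiters still contend for the lock when they wake, but each one that
      acquires it succeeds, which is the smoothing effect described above.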
      
      This smooths out the performance greatly on FreeBSD, reducing the
      average wall time of `testprogcgo CgoCallbackGC` by 74%.  On GNU/Linux
      the average wall time goes down by 9%.
      
      Fixes #13926
      Fixes #16396
      
      Change-Id: I6dc42a4156085a7ed4e5334c60b39db8f8ef8fea
      Reviewed-on: https://go-review.googlesource.com/25047
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • net/smtp: document that the smtp package is frozen · 883e128f
      Brad Fitzpatrick authored
      This copies the frozen wording from the log/syslog package.
      
      Fixes #16436
      
      Change-Id: If5d478023328925299399f228d8aaf7fb117c1b4
      Reviewed-on: https://go-review.googlesource.com/25080
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Andrew Gerrand <adg@golang.org>
  3. 18 Jul, 2016 6 commits
  4. 17 Jul, 2016 1 commit
  5. 16 Jul, 2016 1 commit
  6. 15 Jul, 2016 1 commit
  7. 14 Jul, 2016 1 commit
  8. 13 Jul, 2016 5 commits
  9. 12 Jul, 2016 4 commits
  10. 11 Jul, 2016 5 commits
  11. 08 Jul, 2016 5 commits
  12. 07 Jul, 2016 3 commits
  13. 06 Jul, 2016 3 commits