  1. 08 Oct, 2019 6 commits
  2. 03 Oct, 2019 1 commit
    • bpo-38106: Fix race in pthread PyThread_release_lock() (GH-16047) · c5abd63e
      Kirill Smelkov authored
      Fix a race in PyThread_release_lock() that was leading to memory corruption
      and deadlocks. The fix applies to POSIX systems where Python locks are
      implemented with a mutex and a condition variable because POSIX semaphores
      are either not provided or are known to be broken. One particular example of
      such a system is macOS.
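
      For reference, a Python-level lock on such systems looks roughly like the
      sketch below (illustrative only, not the exact thread_pthread.h source; the
      field names are chosen to match the sema.* trace further down):

        #include <pthread.h>

        /* A Python lock built from a flag protected by a mutex, plus a
           condition variable used to wait for the lock to be released. */
        typedef struct {
            int             locked;         /* 0 = unlocked, 1 = locked */
            pthread_mutex_t mut;            /* protects 'locked' */
            pthread_cond_t  lock_released;  /* signalled on release */
        } pylock;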
      
      On Darwin, even though it is considered a POSIX system, Python uses a
      mutex plus condition variable to implement its lock, and, as of 2019-08-28,
      the Py2.7 implementation still contains this synchronization bug even though
      a similar issue was fixed for Py3 in 2012: the condition is signalled after
      the mutex is unlocked, while the correct protocol is to signal the condition
      while still holding the mutex:
      
        https://github.com/python/cpython/blob/v2.7.16-127-g0229b56d8c0/Python/thread_pthread.h#L486-L506
        https://github.com/python/cpython/commit/187aa545165d (py3 fix)
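
      Concretely, the buggy release path in the 2.7 code linked above does roughly
      the following (a simplified sketch with error checking omitted, reusing the
      illustrative pylock layout from the sketch above, repeated here so the
      snippet stands on its own):

        #include <pthread.h>

        typedef struct {                    /* same illustrative layout as above */
            int             locked;
            pthread_mutex_t mut;
            pthread_cond_t  lock_released;
        } pylock;

        /* buggy release: the condition is signalled only after the mutex has
           been unlocked, i.e. outside of the critical section */
        static void release_buggy(pylock *l)
        {
            pthread_mutex_lock(&l->mut);
            l->locked = 0;
            pthread_mutex_unlock(&l->mut);
            /* at this point another thread may already have re-acquired and
               even freed the lock, so this signal can touch freed memory
               (see the scenario below) */
            pthread_cond_signal(&l->lock_released);
        }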
      
      PyPy has the same bug for both pypy2 and pypy3:
      
        https://bitbucket.org/pypy/pypy/src/578667b3fef9/rpython/translator/c/src/thread_pthread.c#lines-443:465
        https://bitbucket.org/pypy/pypy/src/5b42890d48c3/rpython/translator/c/src/thread_pthread.c#lines-443:465
      
      Signalling the condition outside of the corresponding mutex is considered OK
      by POSIX, but in the Python context it can lead to at least memory corruption
      once we consider the whole lifetime of a Python-level lock. For example, the
      following logical scenario:
      
            T1                                          T2
      
        sema = Lock()
        sema.acquire()
      
                                                    sema.release()
      
        sema.acquire()
        free(sema)
      
        ...
      
      can translate to the following C-level calls:
      
            T1                                          T2
      
        # sema = Lock()
        sema = malloc(...)
        sema.locked = 0
        pthread_mutex_init(&sema.mut)
        pthread_cond_init (&sema.lock_released)
      
        # sema.acquire()
        pthread_mutex_lock(&sema.mut)
        # sees sema.locked == 0
        sema.locked = 1
        pthread_mutex_unlock(&sema.mut)
      
                                                    # sema.release()
                                                    pthread_mutex_lock(&sema.mut)
                                                    sema.locked = 0
                                                    pthread_mutex_unlock(&sema.mut)
      
                            # OS scheduler gets in and relinquishes control from T2
                            # to another process
                                                    ...
      
        # second sema.acquire()
        pthread_mutex_lock(&sema.mut)
        # sees sema.locked == 0
        sema.locked = 1
        pthread_mutex_unlock(&sema.mut)
      
        # free(sema)
        pthread_mutex_destroy(&sema.mut)
        pthread_cond_destroy (&sema.lock_released)
        free(sema)
      
        # ...
        e.g. malloc() which returns memory where sema was
      
                                                    ...
                            # OS scheduler returns control to T2
                            # sema.release() continues
                            #
                            # BUT sema was already freed and writing to anywhere
                            # inside sema block CORRUPTS MEMORY. In particular if
                            # _another_ python-level lock was allocated where sema
                            # block was, writing into the memory can have effect on
                            # further synchronization correctness and in particular
                            # lead to deadlock on lock that was next allocated.
                                                    pthread_cond_signal(&sema.lock_released)
      
      Note that T2's pthread_cond_signal(&sema.lock_released) CORRUPTS MEMORY, as
      it is called after the sema memory has already been freed and potentially
      reallocated for another object.
      
      The fix is to move pthread_cond_signal so that it is called under the corresponding mutex:
      
        # sema.release()
        pthread_mutex_lock(&sema.mut)
        sema.locked = 0
        pthread_cond_signal(&sema.lock_released)
        pthread_mutex_unlock(&sema.mut)
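
      A complete, self-contained sketch of the corrected protocol is below. It is
      illustrative only (names like demo_lock are made up for this example, and it
      is not the actual CPython implementation), but it shows both points of the
      Py3 commit cherry-picked below: the condition is signalled while the mutex is
      held, and the condition variable is destroyed before its mutex. It follows
      the same T1/T2 shape as the scenario above and compiles with cc -pthread:

        #include <pthread.h>
        #include <stdio.h>

        typedef struct {
            int             locked;         /* 0 = unlocked, 1 = locked */
            pthread_mutex_t mut;
            pthread_cond_t  lock_released;
        } demo_lock;

        static void demo_lock_init(demo_lock *l)
        {
            l->locked = 0;
            pthread_mutex_init(&l->mut, NULL);
            pthread_cond_init(&l->lock_released, NULL);
        }

        static void demo_lock_acquire(demo_lock *l)
        {
            pthread_mutex_lock(&l->mut);
            while (l->locked)
                pthread_cond_wait(&l->lock_released, &l->mut);
            l->locked = 1;
            pthread_mutex_unlock(&l->mut);
        }

        static void demo_lock_release(demo_lock *l)
        {
            pthread_mutex_lock(&l->mut);
            l->locked = 0;
            /* signal while the mutex is still held - the point of the fix */
            pthread_cond_signal(&l->lock_released);
            pthread_mutex_unlock(&l->mut);
        }

        static void demo_lock_destroy(demo_lock *l)
        {
            /* destroy the condition variable before its mutex */
            pthread_cond_destroy(&l->lock_released);
            pthread_mutex_destroy(&l->mut);
        }

        static void *t2_release(void *arg)
        {
            demo_lock_release((demo_lock *)arg);           /* T2: sema.release() */
            return NULL;
        }

        int main(void)
        {
            demo_lock sema;
            pthread_t  t2;

            demo_lock_init(&sema);
            demo_lock_acquire(&sema);                      /* T1: sema.acquire() */
            pthread_create(&t2, NULL, t2_release, &sema);  /* start T2 */
            demo_lock_acquire(&sema);                      /* T1: second acquire */
            pthread_join(t2, NULL);
            demo_lock_destroy(&sema);                      /* T1: free(sema) */
            printf("done\n");
            return 0;
        }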
      
      To do so, this patch cherry-picks the thread_pthread.h part of the following 3.2 commit:
      
      commit 187aa545
      Author: Kristján Valur Jónsson <kristjan@ccpgames.com>
      Date:   Tue Jun 5 22:17:42 2012 +0000
      
          Signal condition variables with the mutex held.  Destroy condition variables
          before their mutexes.
      
       Python/ceval_gil.h      |  9 +++++----
       Python/thread_pthread.h | 15 +++++++++------
       2 files changed, 14 insertions(+), 10 deletions(-)
      
      (ceval_gil.h is Python3-specific and does not apply to Python2.7)
      
      The bug has been there since 1994 - since at least [1]. It was discussed in
      2001 with the original code author [2], but the code was still considered to
      be race-free. In 2010 the question of where pthread_cond_signal should go -
      before or after pthread_mutex_unlock - was discussed with the rationale of
      avoiding thread bouncing [3,4,5], and in 2012 pthread_cond_signal was moved
      to be called under the mutex, but only for CPython3 [6,7].
      
      In 2019 the bug was (re-)discovered while testing Pygolang [8] on macOS with
      CPython2, PyPy2 and PyPy3.
      
      [1] https://github.com/python/cpython/commit/2c8cb9f3d240
      [2] https://bugs.python.org/issue433625
      [3] https://bugs.python.org/issue8299#msg103224
      [4] https://bugs.python.org/issue8410#msg103313
      [5] https://bugs.python.org/issue8411#msg113301
      [6] https://bugs.python.org/issue15038#msg163187
      [7] https://github.com/python/cpython/commit/187aa545165d
      [8] https://pypi.org/project/pygolang
      
      (cherry picked from commit 187aa545)
      Co-Authored-By: Kristján Valur Jónsson <kristjan@ccpgames.com>
  3. 02 Oct, 2019 1 commit
  4. 01 Oct, 2019 1 commit
  5. 28 Sep, 2019 1 commit
  6. 26 Sep, 2019 2 commits
  7. 24 Sep, 2019 1 commit
  8. 18 Sep, 2019 1 commit
  9. 17 Sep, 2019 1 commit
  10. 16 Sep, 2019 4 commits
  11. 14 Sep, 2019 1 commit
  12. 11 Sep, 2019 4 commits
  13. 09 Sep, 2019 2 commits
  14. 28 Aug, 2019 1 commit
  15. 23 Aug, 2019 2 commits
  16. 21 Aug, 2019 2 commits
  17. 05 Aug, 2019 1 commit
  18. 24 Jul, 2019 1 commit
  19. 11 Jul, 2019 2 commits
  20. 07 Jul, 2019 1 commit
  21. 06 Jul, 2019 2 commits
  22. 02 Jul, 2019 1 commit
  23. 01 Jul, 2019 1 commit
    • bpo-19960: Fix building of zlib on macOS without installed headers (GH-14257) · da7f6dbb
      Ned Deily authored
      When building 2.7 on macOS without system header files installed in
      ``/usr/include``, a few extension modules dependent on system-supplied
      third-party libraries were not being built, most notably zlib.
      This situation arose in the past when building without the Command
      Line Tools and without the option to install header files in the
      traditional system locations (like /usr/include).  As of macOS 10.14,
      the header files are only available in an SDK, so the problem addressed
      here affects most 2.7 builds.