1. 13 Jun, 2018 (7 commits)
    • drm: set FMODE_UNSIGNED_OFFSET for drm files · 12958d0f
      Dave Airlie authored
      commit 76ef6b28 upstream.
      
      Since the TTM and GEM VMA managers use a subset of the file
      address space for objects, and these offsets start at
      0x100000000, they will overflow the new mmap checks.
      
      I've checked all the mmap routines I could see for any bad
      behaviour, but overall most drivers use the GEM/TTM VMA
      managers; even the legacy drivers have a hashtable.
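
      For illustration, a minimal sketch of the change (the hook point
      is assumed to be the DRM file-open path; this is not the verbatim
      patch):

        /* GEM/TTM fake offsets start at 0x100000000, beyond the new
         * default byte-offset limit, so mark every DRM file as safe
         * for the full unsigned 64-bit offset space.
         */
        static int drm_open_helper(struct file *filp, struct drm_minor *minor)
        {
                filp->f_mode |= FMODE_UNSIGNED_OFFSET;
                /* ... rest of the open path is unchanged ... */
                return 0;
        }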
      
      Reported-and-Tested-by: Arthur Marsh (amarsh04 on #radeon)
      Fixes: be83bbf8 ("mmap: introduce sane default mmap limits")
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xfs: fix incorrect log_flushed on fsync · 66824bdf
      Amir Goldstein authored
      commit 47c7d0b1 upstream.
      
      When calling into _xfs_log_force{,_lsn}() with a pointer
      to log_flushed variable, log_flushed will be set to 1 if:
      1. xlog_sync() is called to flush the active log buffer
      AND/OR
      2. xlog_wait() is called to wait on a syncing log buffer
      
      xfs_file_fsync() checks the value of log_flushed after
      _xfs_log_force_lsn() call to optimize away an explicit
      PREFLUSH request to the data block device after writing
      out all the file's pages to disk.
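
      In outline, the optimization looks like this (a condensed sketch
      of the fsync-side logic in fs/xfs/xfs_file.c, not the verbatim
      source):

        /* If the log force reports that it flushed the log, skip the
         * explicit cache flush to the data device, on the assumption
         * that the PREFLUSH issued by the log covered the data writes.
         */
        error = _xfs_log_force_lsn(mp, lsn, XFS_LOG_SYNC, &log_flushed);
        if (!log_flushed)
                xfs_blkdev_issue_flush(mp->m_ddev_targp);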
      
      This optimization is incorrect in the following sequence of events:
      
       Task A                    Task B
       -------------------------------------------------------
       xfs_file_fsync()
         _xfs_log_force_lsn()
           xlog_sync()
              [submit PREFLUSH]
                                 xfs_file_fsync()
                                   file_write_and_wait_range()
                                     [submit WRITE X]
                                     [endio  WRITE X]
                                   _xfs_log_force_lsn()
                                     xlog_wait()
              [endio  PREFLUSH]
      
      Write X is not guaranteed to be on persistent storage when the
      PREFLUSH request completes, because write X was submitted after
      the PREFLUSH request, but xfs_file_fsync() of task A will be
      notified of log_flushed=1 and will skip the explicit flush.
      
      If the system crashes after fsync of task A, write X may not be
      present on disk after reboot.
      
      This bug was discovered and demonstrated using Josef Bacik's
      dm-log-writes target, which can be used to record block io operations
      and then replay a subset of these operations onto the target device.
      The test goes something like this:
      - Use fsx to execute ops on a file and record the ops on the log device
      - Every now and then fsync the file, store md5 of file and mark
        the location in the log
      - Then replay log onto device for each mark, mount fs and compare
        md5 of file to stored value
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Amir Goldstein <amir73il@gmail.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • kconfig: Avoid format overflow warning from GCC 8.1 · 31658909
      Nathan Chancellor authored
      commit 2ae89c7a upstream.
      
      In file included from scripts/kconfig/zconf.tab.c:2485:
      scripts/kconfig/confdata.c: In function ‘conf_write’:
      scripts/kconfig/confdata.c:773:22: warning: ‘%s’ directive writing likely 7 or more bytes into a region of size between 1 and 4097 [-Wformat-overflow=]
        sprintf(newname, "%s%s", dirname, basename);
                            ^~
      scripts/kconfig/confdata.c:773:19: note: assuming directive output of 7 bytes
        sprintf(newname, "%s%s", dirname, basename);
                         ^~~~~~
      scripts/kconfig/confdata.c:773:2: note: ‘sprintf’ output 1 or more bytes (assuming 4104) into a destination of size 4097
        sprintf(newname, "%s%s", dirname, basename);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      scripts/kconfig/confdata.c:776:23: warning: ‘.tmpconfig.’ directive writing 11 bytes into a region of size between 1 and 4097 [-Wformat-overflow=]
         sprintf(tmpname, "%s.tmpconfig.%d", dirname, (int)getpid());
                             ^~~~~~~~~~~
      scripts/kconfig/confdata.c:776:3: note: ‘sprintf’ output between 13 and 4119 bytes into a destination of size 4097
         sprintf(tmpname, "%s.tmpconfig.%d", dirname, (int)getpid());
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      Increase the size of tmpname and newname to make GCC happy.
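
      A sketch of the shape of the fix (the exact array sizes here are
      illustrative; the upstream patch may differ slightly):

        /* conf_write() builds "<dirname><basename>" and
         * "<dirname>.tmpconfig.<pid>" into fixed buffers. dirname alone
         * can be up to PATH_MAX bytes, so give the destinations
         * headroom for the appended suffix instead of sizing them at
         * PATH_MAX exactly.
         */
        char dirname[PATH_MAX + 1], tmpname[PATH_MAX + 22], newname[PATH_MAX + 8];

        sprintf(newname, "%s%s", dirname, basename);
        sprintf(tmpname, "%s.tmpconfig.%d", dirname, (int)getpid());
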
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mmap: relax file size limit for regular files · 6ea1dc96
      Linus Torvalds authored
      commit 423913ad upstream.
      
      Commit be83bbf8 ("mmap: introduce sane default mmap limits") was
      introduced to catch problems in various ad-hoc character device drivers
      doing mmap and getting the size limits wrong.  In the process, it used
      "known good" limits for the normal cases of mapping regular files and
      block device drivers.
      
      It turns out that the "s_maxbytes" limit was less "known good" than I
      thought.  In particular, /proc doesn't set it, but exposes one regular
      file to mmap: /proc/vmcore.  As a result, that file got limited to the
      default MAX_INT s_maxbytes value.
      
      This went unnoticed for a while, because apparently the only thing that
      needs it is the s390 kernel zfcpdump, but there might be other tools
      that use this too.
      
      Vasily suggested just changing s_maxbytes for all of /proc, which isn't
      wrong, but makes me nervous at this stage.  So instead, just make the
      new mmap limit always be MAX_LFS_FILESIZE for regular files, which won't
      affect anything else.  It wasn't the regular file case I was worried
      about.
      
      I'd really prefer for maxsize to have been per-inode, but that is not
      how things are today.
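
      A hedged sketch of the resulting per-type limit (the name follows
      the upstream helper in mm/mmap.c, but the body is condensed and
      the non-file cases are simplified):

        static inline u64 file_mmap_size_max(struct file *file, struct inode *inode)
        {
                if (S_ISREG(inode->i_mode))
                        return MAX_LFS_FILESIZE;  /* was inode->i_sb->s_maxbytes */
                if (S_ISBLK(inode->i_mode))
                        return MAX_LFS_FILESIZE;
                if (file->f_mode & FMODE_UNSIGNED_OFFSET)
                        return U64_MAX;           /* driver declared itself 64-bit clean */
                return ULONG_MAX;                 /* default: must fit in unsigned long */
        }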
      
      Fixes: be83bbf8 ("mmap: introduce sane default mmap limits")
      Reported-by: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mmap: introduce sane default mmap limits · bd2f9ce5
      Linus Torvalds authored
      commit be83bbf8 upstream.
      
      The internal VM "mmap()" interfaces are based on the mmap target doing
      everything using page indexes rather than byte offsets, because
      traditionally (ie 32-bit) we had the situation that the byte offset
      didn't fit in a register.  So while the mmap virtual address was limited
      by the word size of the architecture, the backing store was not.
      
      So we're basically passing "pgoff" around as a page index, in order to
      be able to describe backing store locations that are much bigger than
      the word size (think files larger than 4GB etc).
      
      But while this all makes a ton of sense conceptually, we've been dogged
      by various drivers that don't really understand this, and internally
      work with byte offsets, and then try to work with the page index by
      turning it into a byte offset with "pgoff << PAGE_SHIFT".
      
      Which obviously can overflow.
      
      Adding the size of the mapping to it to get the byte offset of the end
      of the backing store just exacerbates the problem, and if you then use
      this overflow-prone value to check various limits of your device driver
      mmap capability, you're just setting yourself up for problems.
      
      The correct thing for drivers to do is to do their limit math in page
      indices, the way the interface is designed.  Because the generic mmap
      code _does_ test that the index doesn't overflow, since that's what the
      mmap code really cares about.
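
      To make the failure mode concrete, a hedged sketch of the two
      patterns (dev->size is a hypothetical per-device limit, not a
      real field):

        /* Overflow-prone pattern seen in ad-hoc drivers: on 32-bit both
         * the shift and the addition can wrap, so the check can pass
         * for offsets far beyond the backing store.
         */
        unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
        if (off + (vma->vm_end - vma->vm_start) > dev->size)
                return -EINVAL;

        /* Overflow-safe pattern: compare in page units, the way the
         * interface is designed.
         */
        if (vma->vm_pgoff + vma_pages(vma) > dev->size >> PAGE_SHIFT)
                return -EINVAL;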
      
      HOWEVER.
      
      Finding and fixing various random drivers is a sisyphean task, so let's
      just see if we can just make the core mmap() code do the limiting for
      us.  Realistically, the only "big" backing stores we need to care about
      are regular files and block devices, both of which are known to do this
      properly, and which have nice well-defined limits for how much data they
      can access.
      
      So let's special-case just those two known cases, and then limit other
      random mmap users to a backing store that still fits in "unsigned long".
      Realistically, that's not much of a limit at all on 64-bit, and on
      32-bit architectures the only worry might be the GPU drivers, which can
      have big physical address spaces.
      
      To make it possible for drivers like that to say that they are 64-bit
      clean, this patch does repurpose the "FMODE_UNSIGNED_OFFSET" bit in the
      file flags to allow drivers to mark their file descriptors as safe in
      the full 64-bit mmap address space.
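
      Condensed, the core-side check added here has this shape (a
      sketch that follows the upstream naming, not the exact body):

        static inline bool file_mmap_ok(struct file *file, struct inode *inode,
                                        unsigned long pgoff, unsigned long len)
        {
                u64 maxsize = file_mmap_size_max(file, inode);

                if (len > maxsize)
                        return false;
                /* remaining room, compared in page units so nothing wraps */
                return pgoff <= (maxsize - len) >> PAGE_SHIFT;
        }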
      
      [ The timing for doing this is less than optimal, and this should really
        go in a merge window. But realistically, this needs wide testing more
        than it needs anything else, and being main-line is the only way to do
        that.
      
        So the earlier the better, even if it's outside the proper development
        cycle - Linus ]
      
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tpm: self test failure should not cause suspend to fail · 459e0c3b
      Chris Chiu authored
      commit 0803d7be upstream.
      
      The Acer Veriton X4110G has a TPM device detected as:
        tpm_tis 00:0b: 1.2 TPM (device-id 0xFE, rev-id 71)
      
      After the first S3 suspend, the following error appears during resume:
        tpm tpm0: A TPM error(38) occurred continue selftest
      
      Any following S3 suspend attempts will now fail with this error:
        tpm tpm0: Error (38) sending savestate before suspend
        PM: Device 00:0b failed to suspend: error 38
      
      Error 38 is TPM_ERR_INVALID_POSTINIT which means the TPM is
      not in the correct state. This indicates that the platform BIOS
      is not sending the usual TPM_Startup command during S3 resume.
      From this point onwards, all TPM commands will fail.
      
      The same issue was previously reported on Foxconn 6150BK8MC and
      Sony Vaio TX3.
      
      The platform behaviour seems broken here, but we should not break
      suspend/resume because of this.
      
      When the unexpected TPM state is encountered, set a flag to skip the
      affected TPM_SaveState command on later suspends.
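
      A sketch of the shape of the fix (the flag name and the
      tpm_savestate() helper below are hypothetical, chosen for
      illustration; the upstream patch may track this differently):

        static int tpm_pm_suspend(struct device *dev)
        {
                struct tpm_chip *chip = dev_get_drvdata(dev);
                int rc;

                /* TPM never left post-init state on an earlier resume */
                if (chip->flags & TPM_CHIP_FLAG_SKIP_SAVESTATE)
                        return 0;

                rc = tpm_savestate(chip);  /* hypothetical TPM_SaveState wrapper */
                if (rc == TPM_ERR_INVALID_POSTINIT) {
                        /* BIOS did not reissue TPM_Startup on resume; stop
                         * sending TPM_SaveState instead of failing suspend.
                         */
                        chip->flags |= TPM_CHIP_FLAG_SKIP_SAVESTATE;
                        rc = 0;
                }
                return rc;
        }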
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Chris Chiu <chiu@endlessm.com>
      Signed-off-by: Daniel Drake <drake@endlessm.com>
      Link: http://lkml.kernel.org/r/CAB4CAwfSCvj1cudi+MWaB5g2Z67d9DwY1o475YOZD64ma23UiQ@mail.gmail.com
      Link: https://lkml.org/lkml/2011/3/28/192
      Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=591031
      Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tpm: do not suspend/resume if power stays on · c7d58182
      Enric Balletbo i Serra authored
      commit b5d0ebc9 upstream.
      
      The suspend/resume behavior of the TPM can be controlled by
      setting "powered-while-suspended" in the DTS. This is useful for
      cases where the hardware does not power off the TPM.
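
      A sketch of how the property is likely consumed on the kernel
      side (of_property_read_bool() and TPM_CHIP_FLAG_ALWAYS_POWERED
      match mainline naming, but the hook point and the np node pointer
      are assumed):

        /* If the DTS says the TPM keeps power across suspend, record
         * that on the chip so the PM callbacks can skip the
         * TPM_SaveState / TPM_Startup dance entirely.
         */
        if (of_property_read_bool(np, "powered-while-suspended"))
                chip->flags |= TPM_CHIP_FLAG_ALWAYS_POWERED;

        /* ...and at the top of the suspend/resume callbacks: */
        if (chip->flags & TPM_CHIP_FLAG_ALWAYS_POWERED)
                return 0;
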
      Signed-off-by: Sonny Rao <sonnyrao@chromium.org>
      Signed-off-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
      Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Signed-off-by: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 06 Jun, 2018 (33 commits)