1. 07 Feb, 2019 18 commits
    • gpu: host1x: Continue CDMA execution starting with a next job · 79930baf
      Dmitry Osipenko authored
      Currently the gathers of a hung job are NOP'ed and the restarted CDMA then
      executes those NOP'ed gathers. There is no reason not to restart CDMA
      execution with the next job instead, which avoids the unnecessary churn of
      NOP'ing gathers.
      Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
      Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Don't complete a completed job · 5d6f0436
      Dmitry Osipenko authored
      There is a chance that the last job has already been completed by the time
      the CDMA timeout handler is invoked. In that case there is no need to
      complete it again.
      Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
      Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Cancel only job that actually got stuck · e8bad659
      Dmitry Osipenko authored
      Host1x doesn't have information about inter-job dependencies; that will
      only become available once host1x gets a proper job scheduler
      implementation. Currently a hung job causes other, unrelated jobs to be
      canceled as well, which is a relic of the downstream driver and irrelevant
      upstream. Cancel only the hanging job and leave the other jobs in the
      queue untouched.
      Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
      Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • drm/tegra: sor: Support device tree crossbar configuration · 6d6c815d
      Thierry Reding authored
      The crossbar configuration is usually the same across all designs for a
      given SoC generation. But sometimes there are designs that require some
      other configuration.
      
      Implement support for parsing the crossbar configuration from a device
      tree. If the crossbar configuration is not present in the device tree,
      fall back to the default crossbar configuration.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • dt-bindings: display: tegra: Support SOR crossbar configuration · 6c2b3881
      Thierry Reding authored
      The SOR has a crossbar that can map each lane of the SOR to each of the
      SOR pads. The mapping is usually the same across designs for a specific
      SoC generation, but every now and then there's a design that deviates.
      
      Allow the crossbar configuration to be specified in device tree to make
      it possible to support these designs.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • drm/tegra: vic: Support stream ID register programming · f3779cb1
      Thierry Reding authored
      The version of VIC found in Tegra186 and later incorporates improvements
      with regard to context isolation. As part of those improvements, stream
      ID registers were added that allow specifying separate stream IDs for
      the Falcon microcontroller and the VIC memory interface.
      
      While it is possible to also set the stream ID dynamically at runtime to
      allow userspace contexts to be completely separated, this commit doesn't
      implement that yet. Instead, the static VIC stream ID is programmed when
      the Falcon is booted. This ensures that memory accesses by the Falcon or
      the VIC are properly translated via the SMMU.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • drm/tegra: vic: Do not clear driver data · 3ff41673
      Thierry Reding authored
      Upon driver failure, the driver core will take care of clearing the
      driver data, so there's no need to do so explicitly in the driver.
      Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • drm/tegra: Restrict IOVA space to DMA mask · 02be8e4f
      Thierry Reding authored
      On Tegra186 and later, the ARM SMMU provides an input address space that
      is 48 bits wide. However, memory clients can only address up to 40 bits.
      If the geometry is used as-is, allocations of IOVA space can end up in a
      region that cannot be addressed by the memory clients.
      
      To fix this, restrict the IOVA space to the DMA mask of the host1x
      device. Note that, technically, the IOVA space needs to be restricted to
      the intersection of the DMA masks for all clients that are attached to
      the IOMMU domain. In practice using the DMA mask of the host1x device is
      sufficient because all host1x clients share the same DMA mask.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • drm/tegra: Setup shared IOMMU domain after initialization · b9f8b09c
      Thierry Reding authored
      Move initialization of the shared IOMMU domain after the host1x device
      has been initialized. At this point all the Tegra DRM clients have been
      attached to the shared IOMMU domain.
      
      This is important because Tegra186 and later use an ARM SMMU, for which
      the driver defers setting up the geometry for a domain until a device is
      attached to it. This is to ensure that the domain is properly set up for
      a specific ARM SMMU instance, which is unknown at allocation time.
      Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • drm/tegra: vic: Load firmware on demand · 77a0b09d
      Thierry Reding authored
      Loading the firmware requires an allocation of IOVA space to make sure
      that the VIC's Falcon microcontroller can read the firmware if address
      translation via the SMMU is enabled.
      
      However, the allocation currently happens at a time where the geometry
      of an IOMMU domain may not have been initialized yet. This happens for
      example on Tegra186 and later where an ARM SMMU is used. Domains which
      are created by the ARM SMMU driver postpone the geometry setup until a
      device is attached to the domain. This is because IOMMU domains aren't
      attached to a specific IOMMU instance at allocation time and hence the
      input address space, which defines the geometry, is not known yet.
      
      Work around this by postponing the firmware load until it is needed at
      the time where a channel is opened to the VIC. At this time the shared
      IOMMU domain's geometry has been properly initialized.
      
      As a byproduct this allows the Tegra DRM to be created in the absence
      of VIC firmware, since the VIC initialization no longer fails if the
      firmware can't be found.
      
      Based on an earlier patch by Dmitry Osipenko <digetx@gmail.com>.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    • drm/tegra: Store parent pointer in Tegra DRM clients · 8e5d19c6
      Thierry Reding authored
      Tegra DRM clients need access to their parent, so store a pointer to it
      upon registration. It's technically possible to get at this by going via
      the host1x client's parent and getting the driver data, but that's quite
      complicated and not very transparent. It's much more straightforward and
      natural to let the children know about their parent.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    • gpu: host1x: Optimize CDMA push buffer memory usage · e1f338c0
      Thierry Reding authored
      The host1x CDMA push buffer is terminated by a special opcode (RESTART)
      that tells the CDMA to wrap around to the beginning of the push buffer.
      To accommodate the RESTART opcode, an extra 4 bytes are allocated on top
      of the 512 * 8 = 4096 bytes needed for the 512 slots (1 slot = 2 words)
      that are used for other commands passed to CDMA. This requires that two
      memory pages are allocated, but most of the second page (4092 bytes) is
      never used.
      
      Decrease the number of slots to 511 so that the RESTART opcode fits
      within the page. Adjust the push buffer wraparound code to take into
      account push buffer sizes that are not a power of two.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Use correct semantics for HOST1X_CHANNEL_DMAEND · 0e43b8da
      Thierry Reding authored
      The HOST1X_CHANNEL_DMAEND is an offset relative to the value written to
      the HOST1X_CHANNEL_DMASTART register, but it is currently treated as an
      absolute address. This can cause SMMU faults if the CDMA fetches past a
      pushbuffer's IOMMU mapping.
      
      Properly setting the DMAEND prevents the CDMA from fetching beyond that
      address and avoids such issues. This is currently not observed because a
      whole (almost) page of essentially scratch space absorbs any excessive
      prefetching by CDMA. However, changing the number of slots in the push
      buffer can trigger these SMMU faults.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Support 40-bit addressing on Tegra186 · 8de896eb
      Thierry Reding authored
      The host1x and clients instantiated on Tegra186 support addressing 40
      bits of memory.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Restrict IOVA space to DMA mask · 38fabcc9
      Thierry Reding authored
      On Tegra186 and later, the ARM SMMU provides an input address space that
      is 48 bits wide. However, memory clients can only address up to 40 bits.
      If the geometry is used as-is, allocations of IOVA space can end up in a
      region that is not addressable by the memory clients.
      
      To fix this, restrict the IOVA space to the DMA mask of the host1x
      device.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Support 40-bit addressing · 67a82dbc
      Thierry Reding authored
      Tegra186 and later support 40 bits of address space. Additional
      registers need to be programmed to store the full 40 bits of push
      buffer addresses.
      
      Since command stream gathers can also reside in buffers in a 40-bit
      address space, a new variant of the GATHER opcode is also introduced.
      It takes two parameters: the first parameter contains the lower 32
      bits of the address and the second parameter contains bits 32 to 39.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Introduce support for wide opcodes · 5a5fccbd
      Thierry Reding authored
      The CDMA push buffer can currently only handle opcodes that take a
      single word parameter. However, the host1x implementation on Tegra186
      and later supports opcodes that require multiple words as parameters.
      
      Unfortunately the way the push buffer is structured, these wide opcodes
      cannot simply be composed of two regular opcodes because that could
      result in the wide opcode being split across the end of the push buffer
      and the final RESTART opcode required to wrap the push buffer around
      would break the wide opcode.
      
      One way to fix this would be to remove the concept of slots to simplify
      push buffer operations. However, that's not entirely trivial and should
      be done in a separate patch. For now, simply use a different function
      to push four-word opcodes into the push buffer. Technically only three
      words are pushed, with the fourth word used as padding to preserve the
      2-word alignment required by the slots abstraction. The fourth word is
      always a NOP opcode.
      
      Additional care must be taken when the end of the push buffer is
      reached. If a four-word opcode doesn't fit into the push buffer without
      being split by the boundary, NOP opcodes will be introduced and the new
      wide opcode placed at the beginning of the push buffer.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
    • gpu: host1x: Program the channel stream ID · de5469c2
      Thierry Reding authored
      When processing command streams, make sure the host1x's stream ID is
      programmed for the channel so that addresses are properly translated
      through the SMMU.
      Signed-off-by: Thierry Reding <treding@nvidia.com>
  2. 04 Feb, 2019 3 commits
  3. 16 Jan, 2019 5 commits
  4. 07 Jan, 2019 3 commits
    • Linux 5.0-rc1 · bfeffd15
      Linus Torvalds authored
    • Merge tag 'kbuild-v4.21-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild · 85e1ffbd
      Linus Torvalds authored
      Pull more Kbuild updates from Masahiro Yamada:
      
       - improve boolinit.cocci and use_after_iter.cocci semantic patches
      
       - fix alignment for kallsyms
      
       - move 'asm goto' compiler test to Kconfig and clean up jump_label
         CONFIG option
      
       - generate asm-generic wrappers automatically if arch does not
         implement mandatory UAPI headers
      
       - remove redundant generic-y defines
      
       - misc cleanups
      
      * tag 'kbuild-v4.21-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
        kconfig: rename generated .*conf-cfg to *conf-cfg
        kbuild: remove unnecessary stubs for archheader and archscripts
        kbuild: use assignment instead of define ... endef for filechk_* rules
        arch: remove redundant UAPI generic-y defines
        kbuild: generate asm-generic wrappers if mandatory headers are missing
        arch: remove stale comments "UAPI Header export list"
        riscv: remove redundant kernel-space generic-y
        kbuild: change filechk to surround the given command with { }
        kbuild: remove redundant target cleaning on failure
        kbuild: clean up rule_dtc_dt_yaml
        kbuild: remove UIMAGE_IN and UIMAGE_OUT
        jump_label: move 'asm goto' support test to Kconfig
        kallsyms: lower alignment on ARM
        scripts: coccinelle: boolinit: drop warnings on named constants
        scripts: coccinelle: check for redeclaration
        kconfig: remove unused "file" field of yylval union
        nds32: remove redundant kernel-space generic-y
        nios2: remove unneeded HAS_DMA define
    • Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · ac5eed2b
      Linus Torvalds authored
      Pull perf tooling updates from Ingo Molnar:
       "A final batch of perf tooling changes: mostly fixes and small
        improvements"
      
      * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (29 commits)
        perf session: Add comment for perf_session__register_idle_thread()
        perf thread-stack: Fix thread stack processing for the idle task
        perf thread-stack: Allocate an array of thread stacks
        perf thread-stack: Factor out thread_stack__init()
        perf thread-stack: Allow for a thread stack array
        perf thread-stack: Avoid direct reference to the thread's stack
        perf thread-stack: Tidy thread_stack__bottom() usage
        perf thread-stack: Simplify some code in thread_stack__process()
        tools gpio: Allow overriding CFLAGS
        tools power turbostat: Override CFLAGS assignments and add LDFLAGS to build command
        tools thermal tmon: Allow overriding CFLAGS assignments
        tools power x86_energy_perf_policy: Override CFLAGS assignments and add LDFLAGS to build command
        perf c2c: Increase the HITM ratio limit for displayed cachelines
        perf c2c: Change the default coalesce setup
        perf trace beauty ioctl: Beautify USBDEVFS_ commands
        perf trace beauty: Export function to get the files for a thread
        perf trace: Wire up ioctl's USBDEBFS_ cmd table generator
        perf beauty ioctl: Add generator for USBDEVFS_ ioctl commands
        tools headers uapi: Grab a copy of usbdevice_fs.h
        perf trace: Store the major number for a file when storing its pathname
        ...
  5. 06 Jan, 2019 11 commits
    • Change mincore() to count "mapped" pages rather than "cached" pages · 574823bf
      Linus Torvalds authored
      The semantics of what "in core" means for the mincore() system call are
      somewhat unclear, but Linux has always (since 2.3.52, which is when
      mincore() was initially done) treated it as "page is available in page
      cache" rather than "page is mapped in the mapping".
      
      The problem with that traditional semantic is that it exposes a lot of
      system cache state that it really probably shouldn't, and that users
      shouldn't really even care about.
      
      So let's try to avoid that information leak by simply changing the
      semantics to be that mincore() counts actual mapped pages, not pages
      that might be cheaply mapped if they were faulted (note the "might be"
      part of the old semantics: being in the cache doesn't actually guarantee
      that you can access them without IO anyway, since things like network
      filesystems may have to revalidate the cache before use).
      
      In many ways the old semantics were somewhat insane even aside from the
      information leak issue.  From the very beginning (and that beginning is
      a long time ago: 2.3.52 was released in March 2000, I think), the code
      had a comment saying
      
        Later we can get more picky about what "in core" means precisely.
      
      and this is that "later".  Admittedly it is much later than is really
      comfortable.
      
      NOTE! This is a real semantic change, and it is for example known to
      change the output of "fincore", since that program literally does an
      mmap without populating it, and then does "mincore()" on that mapping,
      which doesn't actually have any pages in it.
      
      I'm hoping that nobody actually has any workflow that cares, and the
      info leak is real.
      
      We may have to do something different if it turns out that people have
      valid reasons to want the old semantics, and if we can limit the
      information leak sanely.
      
      Cc: Kevin Easton <kevin@guarana.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Masatake YAMATO <yamato@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Fix 'acccess_ok()' on alpha and SH · 94bd8a05
      Linus Torvalds authored
      Commit 594cc251 ("make 'user_access_begin()' do 'access_ok()'")
      broke both alpha and SH booting in qemu, as noticed by Guenter Roeck.
      
      It turns out that the bug wasn't actually in that commit itself (which
      would have been surprising: it was mostly a no-op), but in how the
      addition of access_ok() to the strncpy_from_user() and strnlen_user()
      functions now triggered the case where those functions would test the
      access of the very last byte of the user address space.
      
      The string functions actually did that user range test before too, but
      they did it manually by just comparing against user_addr_max().  But
      with user_access_begin() doing the check (using "access_ok()"), it now
      exposed problems in the architecture implementations of that function.
      
      For example, on alpha, the access_ok() helper macro looked like this:
      
        #define __access_ok(addr, size) \
              ((get_fs().seg & (addr | size | (addr+size))) == 0)
      
      and what it basically tests is whether any of the high bits get set (the
      USER_DS masking value is 0xfffffc0000000000).
      
      And that's completely wrong for the "addr+size" check.  Because it's
      off-by-one for the case where we check to the very end of the user
      address space, which is exactly what the strn*_user() functions do.
      
      Why? Because "addr+size" will be exactly the size of the address space,
      so trying to access the last byte of the user address space will fail
      the __access_ok() check, even though it shouldn't.  As a result, the
      user string accessor functions failed consistently - because they
      literally don't know how long the string is going to be, and the max
      access is going to be that last byte of the user address space.
      
      Side note: that alpha macro is buggy for another reason too - it re-uses
      the arguments twice.
      
      And SH has another version of almost the exact same bug:
      
        #define __addr_ok(addr) \
              ((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg)
      
      so far so good: yes, a user address must be below the limit.  But then:
      
        #define __access_ok(addr, size)         \
              (__addr_ok((addr) + (size)))
      
      is wrong with the exact same off-by-one case: the case when "addr+size"
      is exactly _equal_ to the limit is actually perfectly fine (think "one
      byte access at the last address of the user address space")
      
      The SH version is actually seriously buggy in another way: it doesn't
      actually check for overflow, even though it did copy the _comment_ that
      talks about overflow.
      
      So it turns out that both SH and alpha actually have completely buggy
      implementations of access_ok(), but they happened to work in practice
      (although the SH overflow one is a serious serious security bug, not
      that anybody likely cares about SH security).
      
      This fixes the problems by using a similar macro on both alpha and SH.
      It isn't trying to be clever, the end address is based on this logic:
      
              unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;
      
      which basically says "add start and length, and then subtract one unless
      the length was zero".  We can't subtract one for a zero length, or we'd
      just hit an underflow instead.
      
      For a lot of access_ok() users the length is a constant, so this isn't
      actually as expensive as it initially looks.
      Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge tag 'fscrypt_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/fscrypt · baa67073
      Linus Torvalds authored
      Pull fscrypt updates from Ted Ts'o:
       "Add Adiantum support for fscrypt"
      
      * tag 'fscrypt_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/fscrypt:
        fscrypt: add Adiantum support
    • Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 · 21524046
      Linus Torvalds authored
      Pull ext4 bug fixes from Ted Ts'o:
       "Fix a number of ext4 bugs"
      
      * tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
        ext4: fix special inode number checks in __ext4_iget()
        ext4: track writeback errors using the generic tracking infrastructure
        ext4: use ext4_write_inode() when fsyncing w/o a journal
        ext4: avoid kernel warning when writing the superblock to a dead device
        ext4: fix a potential fiemap/page fault deadlock w/ inline_data
        ext4: make sure enough credits are reserved for dioread_nolock writes
    • Merge tag 'dma-mapping-4.21-1' of git://git.infradead.org/users/hch/dma-mapping · e2b745f4
      Linus Torvalds authored
      Pull dma-mapping fixes from Christoph Hellwig:
       "Fix various regressions introduced in this cycles:
      
         - fix dma-debug tracking for the map_page / map_single
           consolidatation
      
         - properly stub out DMA mapping symbols for !HAS_DMA builds to avoid
           link failures
      
         - fix AMD Gart direct mappings
      
         - setup the dma address for no kernel mappings using the remap
           allocator"
      
      * tag 'dma-mapping-4.21-1' of git://git.infradead.org/users/hch/dma-mapping:
        dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING for remapped allocations
        x86/amd_gart: fix unmapping of non-GART mappings
        dma-mapping: remove a few unused exports
        dma-mapping: properly stub out the DMA API for !CONFIG_HAS_DMA
        dma-mapping: remove dmam_{declare,release}_coherent_memory
        dma-mapping: implement dmam_alloc_coherent using dmam_alloc_attrs
        dma-mapping: implement dma_map_single_attrs using dma_map_page_attrs
    • Merge tag 'tag-chrome-platform-for-v4.21' of... · 12133258
      Linus Torvalds authored
      Merge tag 'tag-chrome-platform-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/bleung/chrome-platform
      
      Pull chrome platform updates from Benson Leung:
      
       - Changes for EC_MKBP_EVENT_SENSOR_FIFO handling.
      
       - Also, maintainership changes. Olofj out, Enric balletbo in.
      
      * tag 'tag-chrome-platform-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/bleung/chrome-platform:
        MAINTAINERS: add maintainers for ChromeOS EC sub-drivers
        MAINTAINERS: platform/chrome: Add Enric as a maintainer
        MAINTAINERS: platform/chrome: remove myself as maintainer
        platform/chrome: don't report EC_MKBP_EVENT_SENSOR_FIFO as wakeup
        platform/chrome: straighten out cros_ec_get_{next,host}_event() error codes
    • Merge tag 'hwlock-v4.21' of git://github.com/andersson/remoteproc · 66e012f6
      Linus Torvalds authored
      Pull hwspinlock updates from Bjorn Andersson:
       "This adds support for the hardware semaphores found in STM32MP1"
      
      * tag 'hwlock-v4.21' of git://github.com/andersson/remoteproc:
        hwspinlock: fix return value check in stm32_hwspinlock_probe()
        hwspinlock: add STM32 hwspinlock device
        dt-bindings: hwlock: Document STM32 hwspinlock bindings
    • fscrypt: add Adiantum support · 8094c3ce
      Eric Biggers authored
      Add support for the Adiantum encryption mode to fscrypt.  Adiantum is a
      tweakable, length-preserving encryption mode with security provably
      reducible to that of XChaCha12 and AES-256, subject to a security bound.
      It's also a true wide-block mode, unlike XTS.  See the paper
      "Adiantum: length-preserving encryption for entry-level processors"
      (https://eprint.iacr.org/2018/720.pdf) for more details.  Also see
      commit 059c2a4d ("crypto: adiantum - add Adiantum support").
      
      On sufficiently long messages, Adiantum's bottlenecks are XChaCha12 and
      the NH hash function.  These algorithms are fast even on processors
      without dedicated crypto instructions.  Adiantum makes it feasible to
      enable storage encryption on low-end mobile devices that lack AES
      instructions; currently such devices are unencrypted.  On ARM Cortex-A7,
      on 4096-byte messages Adiantum encryption is about 4 times faster than
      AES-256-XTS encryption; decryption is about 5 times faster.
      
      In fscrypt, Adiantum is suitable for encrypting both file contents and
      names.  With filenames, it fixes a known weakness: when two filenames in
      a directory share a common prefix of >= 16 bytes, with CTS-CBC their
      encrypted filenames share a common prefix too, leaking information.
      Adiantum does not have this problem.
      
      Since Adiantum also accepts long tweaks (IVs), it's also safe to use the
      master key directly for Adiantum encryption rather than deriving
      per-file keys, provided that the per-file nonce is included in the IVs
      and the master key isn't used for any other encryption mode.  This
      configuration saves memory and improves performance.  A new fscrypt
      policy flag is added to allow users to opt-in to this configuration.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    • Merge tag 'docs-5.0-fixes' of git://git.lwn.net/linux · b5aef86e
      Linus Torvalds authored
      Pull documentation fixes from Jonathan Corbet:
       "A handful of late-arriving documentation fixes"
      
      * tag 'docs-5.0-fixes' of git://git.lwn.net/linux:
        doc: filesystems: fix bad references to nonexistent ext4.rst file
        Documentation/admin-guide: update URL of LKML information link
        Docs/kernel-api.rst: Remove blk-tag.c reference
    • Merge tag 'firewire-update' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394 · 15b215e5
      Linus Torvalds authored
      Pull firewire fixlet from Stefan Richter:
       "Remove an explicit dependency in Kconfig which is implied by another
        dependency"
      
      * tag 'firewire-update' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394:
        firewire: Remove depends on HAS_DMA in case of platform dependency
    • Merge tag 'for-linus-20190104' of git://git.kernel.dk/linux-block · d7252d0d
      Linus Torvalds authored
      Pull block updates and fixes from Jens Axboe:
      
       - Pulled in MD changes that Shaohua had queued up for 4.21.
      
         Unfortunately we lost Shaohua late 2018, I'm sending these in on his
         behalf.
      
       - In conjunction with the above, I added a CREDITS entry for Shaohua.
      
       - sunvdc queue restart fix (Ming)
      
      * tag 'for-linus-20190104' of git://git.kernel.dk/linux-block:
        Add CREDITS entry for Shaohua Li
        block: sunvdc: don't run hw queue synchronously from irq context
        md: fix raid10 hang issue caused by barrier
        raid10: refactor common wait code from regular read/write request
        md: remvoe redundant condition check
        lib/raid6: add option to skip algo benchmarking
        lib/raid6: sort algos in rough performance order
        lib/raid6: check for assembler SSSE3 support
        lib/raid6: avoid __attribute_const__ redefinition
        lib/raid6: add missing include for raid6test
        md: remove set but not used variable 'bi_rdev'