1. 22 Jun, 2017 4 commits
    • arm64: dump cpu_hwcaps at panic time · 8effeaaf
      Mark Rutland authored
      When debugging a kernel panic(), it can be useful to know which CPU
      features have been detected by the kernel, as some code paths can depend
      on these (and may have been patched at runtime).
      
      This patch adds a notifier to dump the detected CPU caps (as a hex
      string) at panic(), when we log other information useful for debugging.
      On a Juno R1 system running v4.12-rc5, this looks like:
      
      [  615.431249] Kernel panic - not syncing: Fatal exception in interrupt
      [  615.437609] SMP: stopping secondary CPUs
      [  615.441872] Kernel Offset: disabled
      [  615.445372] CPU features: 0x02086
      [  615.448522] Memory Limit: none
      
      A developer can decode this by looking at the corresponding
      <asm/cpucaps.h> bits. For example, the above decodes as:
      
      * bit  1: ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
      * bit  2: ARM64_WORKAROUND_845719
      * bit  7: ARM64_WORKAROUND_834220
      * bit 13: ARM64_HAS_32BIT_EL0
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: ptrace: Flush user-RW TLS reg to thread_struct before reading · 936eb65c
      Dave Martin authored
      When reading current's user-writable TLS register (which occurs
      when dumping core for native tasks), it is possible that userspace
      has modified it since the time the task was last scheduled out.
      The new TLS register value is not guaranteed to have been written
      immediately back to thread_struct in this case.
      
      As a result, a coredump can capture stale data for this register.
      Reading the register for a stopped task via ptrace is unaffected.
      
      For native tasks, this patch explicitly flushes the TPIDR_EL0
      register back to thread_struct before dumping when operating on
      current, thus ensuring that coredump contents are up to date.  For
      compat tasks, the TLS register is not user-writable and so cannot
      be out of sync, so no flush is required in compat_tls_get().
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: ptrace: Flush FPSIMD regs back to thread_struct before reading · e1d5a8fb
      Dave Martin authored
      When reading the FPSIMD state of current (which occurs when dumping
      core), it is possible that userspace has modified the FPSIMD
      registers since the time the task was last scheduled out.  Such
      changes are not guaranteed to be reflected immediately in
      thread_struct.
      
      As a result, a coredump can contain stale values for these
      registers.  Reading the registers of a stopped task via ptrace is
      unaffected.
      
      This patch explicitly flushes the CPU state back to thread_struct
      before dumping when operating on current, thus ensuring that
      coredump contents are up to date.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: ptrace: Fix VFP register dumping in compat coredumps · af66b2d8
      Dave Martin authored
      Currently, VFP registers are omitted from coredumps for compat
      processes, due to a bug in the REGSET_COMPAT_VFP regset
      implementation.
      
      compat_vfp_get() needs to transfer non-contiguous data from
      thread_struct.fpsimd_state, and uses put_user() to handle the
      offending trailing word (FPSCR).  This fails when copying to a
      kernel address (i.e., kbuf && !ubuf), which is what happens when
      dumping core.  As a result, the ELF coredump core code silently
      omits the NT_ARM_VFP note from the dump.
      
      It would be possible to work around this with additional special
      case code for the put_user(), but since user_regset_copyout() is
      explicitly designed to handle this scenario, it is cleaner to convert
      the put_user() to a user_regset_copyout() call, which this patch
      does.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 20 Jun, 2017 8 commits
    • arm64: pass machine size to sparse · f5d28490
      Luc Van Oostenryck authored
      When using sparse on the arm64 tree we get many thousands of
      warnings like 'constant ... is so big it is unsigned long long'
      or 'shift too big (32) for type unsigned long'. This happens
      because by default sparse models the machine as 32-bit and
      defines the size of the types accordingly.
      
      Fix this by passing the '-m64' flag to sparse so that it
      correctly defines longs as being 64-bit.
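
      The fix presumably amounts to a one-line addition in arch/arm64/Makefile along these lines (the exact placement within the Makefile is an assumption):

```make
# sparse models a 32-bit machine unless told otherwise; arm64 is 64-bit
CHECKFLAGS += -m64
```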
      
      CC: Catalin Marinas <catalin.marinas@arm.com>
      CC: Will Deacon <will.deacon@arm.com>
      CC: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: signal: factor out signal frame record allocation · bb4322f7
      Dave Martin authored
      This patch factors out the allocator for signal frame optional
      records into a separate function, to ensure consistency and
      facilitate later expansion.
      
      No overrun checking is currently done, because the allocation is in
      user memory and anyway the kernel never tries to allocate enough
      space in the signal frame yet for an overrun to occur.  This
      behaviour will be refined in future patches.
      
      The approach taken in this patch to allocation of the terminator
      record is not very clean: this will also be replaced in subsequent
      patches.
      
      For future extension, a comment is added in sigcontext.h
      documenting the current static allocations in __reserved[].  This
      will be important for determining under what circumstances
      userspace may or may not see an expanded signal frame.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: signal: factor frame layout and population into separate passes · bb4891a6
      Dave Martin authored
      In preparation for expanding the signal frame, this patch refactors
      the signal frame setup code in setup_sigframe() into two separate
      passes.
      
      The first pass, setup_sigframe_layout(), determines the size of the
      signal frame and its internal layout, including the presence and
      location of optional records.  The resulting knowledge is used to
      allocate and locate the user stack space required for the signal
      frame and to determine which optional records to include.
      
      The second pass, setup_sigframe(), is called once the stack frame
      is allocated in order to populate it with the necessary context
      information.
      
      As a result of these changes, it becomes more natural to represent
      locations in the signal frame by a base pointer and an offset,
      since the absolute address of each location is not known during the
      layout pass.  To be more consistent with this logic,
      parse_user_sigframe() is refactored to describe signal frame
      locations in a similar way.
      
      This change has no effect on the signal ABI, but will make it
      easier to expand the signal frame in future patches.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: signal: Refactor sigcontext parsing in rt_sigreturn · 47ccb028
      Dave Martin authored
      Currently, rt_sigreturn does very limited checking on the
      sigcontext coming from userspace.
      
      Future additions to the sigcontext data will increase the potential
      for surprises.  Also, it is not clear whether the sigcontext
      extension records are supposed to occur in a particular order.
      
      To allow the parsing code to be extended more easily, this patch
      factors out the sigcontext parsing into a separate function, and
      adds extra checks to validate the well-formedness of the sigcontext
      structure.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: signal: split frame link record from sigcontext structure · 20987de3
      Dave Martin authored
      In order to be able to increase the amount of the data currently
      written to the __reserved[] array in the signal frame, it is
      necessary to overwrite the locations currently occupied by the
      {fp,lr} frame link record pushed at the top of the signal stack.
      
      In order for this to work, this patch detaches the frame link
      record from struct rt_sigframe and places it separately at the top
      of the signal stack.  This will allow subsequent patches to insert
      data between it and __reserved[].
      
      This change relies on the non-ABI status of the placement of the
      frame record with respect to struct sigframe: this status is
      undocumented, but the placement is not declared or described in the
      user headers, and known unwinder implementations (libgcc,
      libunwind, gdb) appear not to rely on it.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: mm: select CONFIG_ARCH_PROC_KCORE_TEXT · 8f360948
      Ard Biesheuvel authored
      To avoid issues with the /proc/kcore code getting confused about the
      kernel's block mappings in the VMALLOC region, enable the existing
      facility that describes the [_text, _end) interval as a separate
      KCORE_TEXT region, which supersedes the KCORE_VMALLOC region that
      it intersects with on arm64.
      Reported-by: Tan Xiaojun <tanxiaojun@huawei.com>
      Tested-by: Tan Xiaojun <tanxiaojun@huawei.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Laura Abbott <labbott@redhat.com>
      Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • fs/proc: kcore: use kcore_list type to check for vmalloc/module address · 737326aa
      Ard Biesheuvel authored
      Instead of passing each start address into is_vmalloc_or_module_addr()
      to decide whether it falls into either the VMALLOC or the MODULES region,
      we can simply check the type field of the current kcore_list entry, since
      it will be set to KCORE_VMALLOC based on exactly the same conditions.
      
      As a bonus, when reading the KCORE_TEXT region on architectures that have
      one, this will avoid using vread() on the region if it happens to intersect
      with a KCORE_VMALLOC region. This is due to the fact that the KCORE_TEXT
      region is the first one to be added to the kcore region list.
      Reported-by: Tan Xiaojun <tanxiaojun@huawei.com>
      Tested-by: Tan Xiaojun <tanxiaojun@huawei.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Laura Abbott <labbott@redhat.com>
      Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • drivers/char: kmem: disable on arm64 · 06c35ef1
      Ard Biesheuvel authored
      As it turns out, arm64 deviates from other architectures in the way it
      maps the VMALLOC region: on most (all?) other architectures, it resides
      strictly above the kernel's direct mapping of DRAM, but on arm64, this
      is the other way around. For instance, for a 48-bit VA configuration,
      we have
      
        modules : 0xffff000000000000 - 0xffff000008000000   (   128 MB)
        vmalloc : 0xffff000008000000 - 0xffff7dffbfff0000   (129022 GB)
        ...
        vmemmap : 0xffff7e0000000000 - 0xffff800000000000   (  2048 GB maximum)
                  0xffff7e0000000000 - 0xffff7e0003ff0000   (    63 MB actual)
        memory  : 0xffff800000000000 - 0xffff8000ffc00000   (  4092 MB)
      
      This has mostly gone unnoticed until now, but it does appear that it
      breaks an assumption in the kmem read/write code, which does something
      like
      
        if (p < (unsigned long) high_memory) {
          ... use straight copy_[to|from]_user() using p as virtual address ...
        }
        ...
        if (count > 0) {
          ... use vread/vwrite for accesses past high_memory ...
        }
      
      The first condition will inadvertently hold for the VMALLOC region if
      VMALLOC_START < PAGE_OFFSET [which is the case on arm64], but the read
      or write will subsequently fail the virt_addr_valid() check, resulting
      in a -ENXIO return value.
      
      Given how kmem seems to be living on borrowed time anyway, and given
      the fact that nobody noticed that the read/write interface is broken
      on arm64 in the first place, let's not bother trying to fix it, but
      simply disable the /dev/kmem interface entirely for arm64.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 15 Jun, 2017 4 commits
  4. 12 Jun, 2017 9 commits
  5. 07 Jun, 2017 2 commits
    • arm64: ftrace: add support for far branches to dynamic ftrace · e71a4e1b
      Ard Biesheuvel authored
      Currently, dynamic ftrace support in the arm64 kernel assumes that all
      core kernel code is within range of ordinary branch instructions that
      occur in module code, which is usually the case, but is no longer
      guaranteed now that we have support for module PLTs and address space
      randomization.
      
      Since on arm64, all patching of branch instructions involves function
      calls to the same entry point [ftrace_caller()], we can emit the modules
      with a trampoline that has unlimited range, and patch both the trampoline
      itself and the branch instruction to redirect the call via the trampoline.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: minor clarification to smp_wmb() comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: ftrace: don't validate branch via PLT in ftrace_make_nop() · f8af0b36
      Ard Biesheuvel authored
      When turning branch instructions into NOPs, we attempt to validate the
      action by comparing the old value at the call site with the opcode of
      a direct relative branch instruction pointing at the old target.
      
      However, these call sites are statically initialized to call _mcount(),
      and may be redirected via a PLT entry if the module is loaded far away
      from the kernel text, leading to false negatives and spurious errors.
      
      So skip the validation if CONFIG_ARM64_MODULE_PLTS is configured.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 06 Jun, 2017 1 commit
  7. 05 Jun, 2017 1 commit
  8. 01 Jun, 2017 1 commit
    • arm64: kernel: restrict /dev/mem read() calls to linear region · 1151f838
      Ard Biesheuvel authored
      When running lscpu on an AArch64 system that has SMBIOS version 2.0
      tables, it will segfault in the following way:
      
        Unable to handle kernel paging request at virtual address ffff8000bfff0000
        pgd = ffff8000f9615000
        [ffff8000bfff0000] *pgd=0000000000000000
        Internal error: Oops: 96000007 [#1] PREEMPT SMP
        Modules linked in:
        CPU: 0 PID: 1284 Comm: lscpu Not tainted 4.11.0-rc3+ #103
        Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
        task: ffff8000fa78e800 task.stack: ffff8000f9780000
        PC is at __arch_copy_to_user+0x90/0x220
        LR is at read_mem+0xcc/0x140
      
      This is caused by the fact that lscpu issues a read() on /dev/mem at the
      offset where it expects to find the SMBIOS structure array. However, this
      region is classified as EFI_RUNTIME_SERVICE_DATA (as per the UEFI spec),
      and so it is omitted from the linear mapping.
      
      So let's restrict /dev/mem read/write access to those areas that are
      covered by the linear region.
      Reported-by: Alexander Graf <agraf@suse.de>
      Fixes: 4dffbfc4 ("arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  9. 30 May, 2017 8 commits
    • ARM64/PCI: Set root bus NUMA node on ACPI systems · db46a72b
      Lorenzo Pieralisi authored
      PCI core requires the NUMA node for the struct pci_host_bridge.dev to
      be set by using the pcibus_to_node(struct pci_bus*) API, that on ARM64
      systems relies on the struct pci_host_bridge->bus.dev NUMA node.
      
      The struct pci_host_bridge.dev NUMA node is then propagated through
      the PCI device hierarchy as PCI devices (and bridges) are enumerated
      under it.
      
      Therefore, in order to set-up the PCI NUMA hierarchy appropriately, the
      struct pci_host_bridge->bus.dev NUMA node must be set before core
      code calls pcibus_to_node(struct pci_bus*) on it so that PCI core can
      retrieve the NUMA node for the struct pci_host_bridge.dev device and can
      propagate it through the PCI bus tree.
      
      On ARM64 ACPI based systems the struct pci_host_bridge->bus.dev NUMA
      node can be set-up in pcibios_root_bridge_prepare() by parsing the root
      bridge ACPI device firmware binding.
      
      Add code to pcibios_root_bridge_prepare() that, when booting with
      ACPI, parses the root bridge ACPI device companion NUMA binding and
      sets the corresponding struct pci_host_bridge->bus.dev NUMA node
      appropriately.
      
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Robert Richter <rrichter@cavium.com>
      Tested-by: Robert Richter <rrichter@cavium.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: futex: Fix undefined behaviour with FUTEX_OP_OPARG_SHIFT usage · 5f16a046
      Will Deacon authored
      FUTEX_OP_OPARG_SHIFT instructs the futex code to treat the 12-bit oparg
      field as a shift value, potentially leading to a left shift value that
      is negative or with an absolute value that is significantly larger than
      the size of the type. UBSAN chokes with:
      
      ================================================================================
      UBSAN: Undefined behaviour in ./arch/arm64/include/asm/futex.h:60:13
      shift exponent -1 is negative
      CPU: 1 PID: 1449 Comm: syz-executor0 Not tainted 4.11.0-rc4-00005-g977eb52-dirty #11
      Hardware name: linux,dummy-virt (DT)
      Call trace:
      [<ffff200008094778>] dump_backtrace+0x0/0x538 arch/arm64/kernel/traps.c:73
      [<ffff200008094cd0>] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
      [<ffff200008c194a8>] __dump_stack lib/dump_stack.c:16 [inline]
      [<ffff200008c194a8>] dump_stack+0x120/0x188 lib/dump_stack.c:52
      [<ffff200008cc24b8>] ubsan_epilogue+0x18/0x98 lib/ubsan.c:164
      [<ffff200008cc3098>] __ubsan_handle_shift_out_of_bounds+0x250/0x294 lib/ubsan.c:421
      [<ffff20000832002c>] futex_atomic_op_inuser arch/arm64/include/asm/futex.h:60 [inline]
      [<ffff20000832002c>] futex_wake_op kernel/futex.c:1489 [inline]
      [<ffff20000832002c>] do_futex+0x137c/0x1740 kernel/futex.c:3231
      [<ffff200008320504>] SYSC_futex kernel/futex.c:3281 [inline]
      [<ffff200008320504>] SyS_futex+0x114/0x268 kernel/futex.c:3249
      [<ffff200008084770>] el0_svc_naked+0x24/0x28
      ================================================================================
      syz-executor1 uses obsolete (PF_INET,SOCK_PACKET)
      sock: process `syz-executor0' is using obsolete setsockopt SO_BSDCOMPAT
      
      This patch attempts to fix some of this by:
      
        * Making encoded_op an unsigned type, so we can shift it left even if
          the top bit is set.
      
        * Casting to signed prior to shifting right when extracting oparg
          and cmparg.
      
        * Considering only the bottom 5 bits of oparg when using it as a
          left-shift value.
      
      Whilst I think this catches all of the issues, I'd much prefer to remove
      this stuff, as I think it's unused and the bugs are copy-pasted between
      a bunch of architectures.
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: check return value of of_flat_dt_get_machine_name · 690e95dd
      Kefeng Wang authored
      It's pointless to print the machine name and to set up arch-specific
      system identifiers if of_flat_dt_get_machine_name() returns NULL,
      which is notably the case when booting via ACPI.
      Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: cpufeature: Don't dump useless backtrace on CPU_OUT_OF_SPEC · 3fde2999
      Will Deacon authored
      Unfortunately, it turns out that mismatched CPU features in big.LITTLE
      systems are starting to appear in the wild. Whilst we should continue to
      taint the kernel with CPU_OUT_OF_SPEC for features that differ in ways
      that we can't fix up, dumping a useless backtrace out of the cpufeature
      code is pointless and irritating.
      
      This patch removes the backtrace from the taint.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: mm: explicity include linux/vmalloc.h · 6efd8499
      Tobias Klauser authored
      arm64's mm/mmu.c uses vm_area_add_early, struct vm_struct and other
      definitions but relies on implicit inclusion of linux/vmalloc.h, which
      means that changes in other headers could break the build. Thus, add an
      explicit include.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Add dump_backtrace() in show_regs · 1149aad1
      Kefeng Wang authored
      Generic code expects show_regs() to dump the stack, but arm64's
      show_regs() does not. This makes it hard to debug softlockups and
      other issues that result in show_regs() being called.
      
      This patch updates arm64's show_regs() to dump the stack, as common
      code expects.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      [will: folded in bug_handler fix from mrutland]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Call __show_regs directly · c07ab957
      Kefeng Wang authored
      Generic code expects show_regs() to also dump the stack, but arm64's
      show_regs() does not do this. Some arm64 callers of show_regs() *only*
      want the registers dumped, without the stack.
      
      To enable generic code to work as expected, we need to make
      show_regs() dump the stack. Where we only want the registers dumped,
      we must use __show_regs().
      
      This patch updates code to use __show_regs() where only registers are
      desired. A subsequent patch will modify show_regs().
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Preventing READ_IMPLIES_EXEC propagation · 48f99c8e
      Dong Bo authored
      Like arch/arm/, we inherit the READ_IMPLIES_EXEC personality flag across
      fork(). This is undesirable for a number of reasons:
      
        * ELF files that don't require executable stack can end up with it
          anyway
      
        * We end up performing unnecessary I-cache maintenance when mapping
          what should be non-executable pages
      
        * Restricting what is executable is generally desirable when defending
          against overflow attacks
      
      This patch clears the personality flag when setting up the personality
      for newly spawned native tasks. Given that semi-recent AArch64
      toolchains emit a non-executable PT_GNU_STACK header, userspace
      applications cannot rely on READ_IMPLIES_EXEC anyway, and so shouldn't
      be adversely affected by this change.
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Dong Bo <dongbo4@huawei.com>
      [will: added comment to compat code, rewrote commit message]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 29 May, 2017 1 commit
  11. 28 May, 2017 1 commit