  1. 22 Jul, 2018 1 commit
  2. 05 Jun, 2018 1 commit
  3. 20 Sep, 2016 1 commit
    • x86/unwind: Add new unwind interface and implementations · 7c7900f8
      Josh Poimboeuf authored
      The x86 stack dump code is a bit of a mess.  dump_trace() uses
      callbacks, and each user of it seems to have slightly different
      requirements, so there are several slightly different callbacks floating
      around.
      
      Also there are some upcoming features which will need more changes to
      the stack dump code, including the printing of stack pt_regs, reliable
      stack detection for live patching, and a DWARF unwinder.  Each of those
      features would at least need more callbacks and/or callback interfaces,
      resulting in a much bigger mess than what we have today.
      
      Before doing all that, we should try to clean things up and replace
      dump_trace() with something cleaner and more flexible.
      
      The new unwinder is a simple state machine which was heavily inspired by
      a suggestion from Andy Lutomirski:
      
        https://lkml.kernel.org/r/CALCETrUbNTqaM2LRyXGRx=kVLRPeY5A3Pc6k4TtQxF320rUT=w@mail.gmail.com
      
      It's also similar to the libunwind API:
      
        http://www.nongnu.org/libunwind/man/libunwind(3).html
      
      Some of its advantages:
      
      - Simplicity: no more callback sprawl and less code duplication.
      
      - Flexibility: it allows the caller to stop and inspect the stack state
        at each step in the unwinding process.
      
      - Modularity: the unwinder code, console stack dump code, and stack
        metadata analysis code are all better separated so that changing one
        of them shouldn't have much of an impact on any of the others.
      
      Two implementations are added which conform to the new unwind interface:
      
      - The frame pointer unwinder which is used for CONFIG_FRAME_POINTER=y.
      
      - The "guess" unwinder which is used for CONFIG_FRAME_POINTER=n.  This
        isn't an "unwinder" per se.  All it does is scan the stack for kernel
        text addresses.  But with no frame pointers, guesses are better than
        nothing in most cases.
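      
      As a rough usage sketch (hedged: the helper names below follow the new
      interface as described above, but treat this as illustrative rather than
      the exact in-tree code), a caller can step through the frames like this:
      
        #include <linux/printk.h>
        #include <linux/sched.h>
        #include <asm/unwind.h>
        
        /* Sketch: walk a task's stack with the new unwind interface. */
        static void print_return_addresses(struct task_struct *task,
                                           struct pt_regs *regs)
        {
                struct unwind_state state;
        
                for (unwind_start(&state, task, regs, NULL);
                     !unwind_done(&state);
                     unwind_next_frame(&state)) {
                        unsigned long addr = unwind_get_return_address(&state);
        
                        if (addr)
                                printk("  %pS\n", (void *)addr);
                }
        }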
      Suggested-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/6dc2f909c47533d213d0505f0a113e64585bec82.1474045023.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 18 Aug, 2016 1 commit
    • livepatch/x86: apply alternatives and paravirt patches after relocations · d4c3e6e1
      Jessica Yu authored
      Implement arch_klp_init_object_loaded() for x86, which applies
      alternatives/paravirt patches. This fixes the order in which relocations
      and alternatives/paravirt patches are applied.
      
      Previously, if a patch module had alternatives or paravirt patches,
      these were applied first by the module loader, before livepatch could
      apply per-object relocations. The (buggy) sequence of events was:
      
      (1) Load patch module
      (2) Apply alternatives and paravirt patches to patch module
          * Note that these are applied to the new functions in the patch module
      (3) Apply per-object relocations to patch module when target module loads.
          * This clobbers what was written in step 2
      
      This led to crashes and corruption in general, since livepatch would
      overwrite or step on previously applied alternative/paravirt patches.
      The correct sequence of events should be:
      
      (1) Load patch module
      (2) Apply per-object relocations to patch module
      (3) Apply alternatives and paravirt patches to patch module
      
      This is fixed by delaying paravirt/alternatives patching until after
      relocations are applied. Any .altinstructions or .parainstructions
      sections are prefixed with ".klp.arch.${objname}" and applied in
      arch_klp_init_object_loaded().
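      
      A hedged sketch of the resulting x86 hook (the real patch walks the patch
      module's ELF section headers; klp_find_arch_section() below is a
      hypothetical helper used only to keep the example short):
      
        #include <linux/livepatch.h>
        #include <asm/alternative.h>
        
        void arch_klp_init_object_loaded(struct klp_patch *patch,
                                         struct klp_object *obj)
        {
                void *alt_start, *alt_end, *para_start, *para_end;
        
                /* ".klp.arch.<objname>.altinstructions", applied after relocations */
                if (klp_find_arch_section(patch, obj, "altinstructions",
                                          &alt_start, &alt_end))
                        apply_alternatives(alt_start, alt_end);
        
                /* ".klp.arch.<objname>.parainstructions" */
                if (klp_find_arch_section(patch, obj, "parainstructions",
                                          &para_start, &para_end))
                        apply_paravirt(para_start, para_end);
        }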
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
  5. 08 Aug, 2016 1 commit
  6. 22 Apr, 2016 2 commits
    • x86/init: Rename EBDA code file · f2d85299
      Luis R. Rodriguez authored
      This makes it clearer what this is.
      Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: andrew.cooper3@citrix.com
      Cc: andriy.shevchenko@linux.intel.com
      Cc: bigeasy@linutronix.de
      Cc: boris.ostrovsky@oracle.com
      Cc: david.vrabel@citrix.com
      Cc: ffainelli@freebox.fr
      Cc: george.dunlap@citrix.com
      Cc: glin@suse.com
      Cc: jgross@suse.com
      Cc: jlee@suse.com
      Cc: josh@joshtriplett.org
      Cc: julien.grall@linaro.org
      Cc: konrad.wilk@oracle.com
      Cc: kozerkov@parallels.com
      Cc: lenb@kernel.org
      Cc: lguest@lists.ozlabs.org
      Cc: linux-acpi@vger.kernel.org
      Cc: lv.zheng@intel.com
      Cc: matt@codeblueprint.co.uk
      Cc: mbizon@freebox.fr
      Cc: rjw@rjwysocki.net
      Cc: robert.moore@intel.com
      Cc: rusty@rustcorp.com.au
      Cc: tiwai@suse.de
      Cc: toshi.kani@hp.com
      Cc: xen-devel@lists.xensource.com
      Link: http://lkml.kernel.org/r/1460592286-300-14-git-send-email-mcgrof@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/rtc: Replace paravirt rtc check with platform legacy quirk · 8d152e7a
      Luis R. Rodriguez authored
      We have 4 types of x86 platforms that disable RTC:
      
        * Intel MID
        * Lguest - uses paravirt
        * Xen dom-U - uses paravirt
        * x86 on legacy systems annotated with an ACPI legacy flag
      
      We can consolidate all of these into a platform specific legacy
      quirk set early in boot through i386_start_kernel() and through
      x86_64_start_reservations(). This deals with the RTC quirks which
      we can rely on through the hardware subarch; the ACPI check can
      be dealt with separately.
      
      For Xen things are a bit more complex, given that the @X86_SUBARCH_XEN
      x86_hardware_subarch is shared by Xen, which uses the PV path for
      both domU and dom0. Since the semantics for differentiating between
      the two are Xen specific, we provide a platform helper to override the
      default legacy features -- x86_platform.set_legacy_features(). Use
      of this helper is highly discouraged; its only purpose should be
      to account for the lack of semantics available within your given
      x86_hardware_subarch.
      
      As per 0-day, this bumps the vmlinux size using i386-tinyconfig as
      follows:
      
      TOTAL   TEXT   init.text    x86_early_init_platform_quirks()
      +70     +62    +62          +43
      
      Only 8 bytes overhead total, as the main increase in size is
      all removed via __init.
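      
      For illustration, the consumer side of the quirk then reduces to an early
      check along these lines (a sketch; treat the exact field name as an
      assumption):
      
        #include <linux/init.h>
        #include <asm/x86_init.h>
        
        /* Sketch: skip legacy RTC setup when the platform quirk disables it. */
        static __init int add_rtc_cmos(void)
        {
                if (!x86_platform.legacy.rtc)
                        return -ENODEV;
        
                /* ... register the rtc_cmos platform device as before ... */
                return 0;
        }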
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: andrew.cooper3@citrix.com
      Cc: andriy.shevchenko@linux.intel.com
      Cc: bigeasy@linutronix.de
      Cc: boris.ostrovsky@oracle.com
      Cc: david.vrabel@citrix.com
      Cc: ffainelli@freebox.fr
      Cc: george.dunlap@citrix.com
      Cc: glin@suse.com
      Cc: jlee@suse.com
      Cc: josh@joshtriplett.org
      Cc: julien.grall@linaro.org
      Cc: konrad.wilk@oracle.com
      Cc: kozerkov@parallels.com
      Cc: lenb@kernel.org
      Cc: lguest@lists.ozlabs.org
      Cc: linux-acpi@vger.kernel.org
      Cc: lv.zheng@intel.com
      Cc: matt@codeblueprint.co.uk
      Cc: mbizon@freebox.fr
      Cc: rjw@rjwysocki.net
      Cc: robert.moore@intel.com
      Cc: rusty@rustcorp.com.au
      Cc: tiwai@suse.de
      Cc: toshi.kani@hp.com
      Cc: xen-devel@lists.xensource.com
      Link: http://lkml.kernel.org/r/1460592286-300-5-git-send-email-mcgrof@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 01 Apr, 2016 1 commit
    • livepatch: reuse module loader code to write relocations · 425595a7
      Jessica Yu authored
      Reuse module loader code to write relocations, thereby eliminating the need
      for architecture specific relocation code in livepatch. Specifically, reuse
      the apply_relocate_add() function in the module loader to write relocations
      instead of duplicating functionality in livepatch's arch-dependent
      klp_write_module_reloc() function.
      
      In order to accomplish this, livepatch modules manage their own relocation
      sections (marked with the SHF_RELA_LIVEPATCH section flag) and
      livepatch-specific symbols (marked with SHN_LIVEPATCH symbol section
      index). To apply livepatch relocation sections, livepatch symbols
      referenced by relocs are resolved and then apply_relocate_add() is called
      to apply those relocations.
      
      In addition, remove the x86 livepatch relocation code and the s390
      klp_write_module_reloc() function stub. They are no longer needed since
      the relocation work has been offloaded to the module loader.
      
      Lastly, mark the module as a livepatch module so that the module loader
      can appropriately identify and initialize it.
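      
      Conceptually, applying one such section now boils down to the following
      sketch (klp_resolve_symbols() is shown as an illustrative helper; the
      module-loader call is the same apply_relocate_add() used for normal
      modules):
      
        #include <linux/module.h>
        #include <linux/moduleloader.h>
        
        static int klp_apply_reloc_section(struct module *pmod, unsigned int relsec)
        {
                int ret;
        
                /* Resolve the SHN_LIVEPATCH symbols this section references. */
                ret = klp_resolve_symbols(pmod, relsec);
                if (ret)
                        return ret;
        
                /* Let the module loader write the relocations. */
                return apply_relocate_add(pmod->klp_info->sechdrs,
                                          pmod->core_kallsyms.strtab,
                                          pmod->klp_info->symndx, relsec, pmod);
        }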
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>   # for s390 changes
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
  8. 25 Mar, 2016 1 commit
    • mm, kasan: stackdepot implementation. Enable stackdepot for SLAB · cd11016e
      Alexander Potapenko authored
      Implement the stack depot and provide CONFIG_STACKDEPOT.  The stack depot
      allows KASAN to store allocation/deallocation stack traces for memory
      chunks.  The stack traces are stored in a hash table and referenced by
      handles which reside in the kasan_alloc_meta and kasan_free_meta
      structures in the allocated memory chunks.
      
      IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
      duplication.
      
      Right now stackdepot support is only enabled in the SLAB allocator.  Once
      KASAN features in SLAB are on par with those in SLUB we can switch SLUB
      to stackdepot as well, thus removing the dependency on SLUB stack
      bookkeeping, which wastes a lot of memory.
      
      This patch is based on the "mm: kasan: stack depots" patch originally
      prepared by Dmitry Chernenkov.
      
      Joonsoo has said that he plans to reuse the stackdepot code for the
      mm/page_owner.c debugging facility.
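      
      A small usage sketch, assuming the depot API is depot_save_stack() /
      depot_fetch_stack() over struct stack_trace as introduced here:
      
        #include <linux/kernel.h>
        #include <linux/gfp.h>
        #include <linux/stackdepot.h>
        #include <linux/stacktrace.h>
        
        /* Sketch: capture the current stack, keep only a compact handle to it. */
        static depot_stack_handle_t save_current_stack(gfp_t flags)
        {
                unsigned long entries[16];
                struct stack_trace trace = {
                        .entries     = entries,
                        .max_entries = ARRAY_SIZE(entries),
                        .skip        = 2,   /* drop the save helpers themselves */
                };
        
                save_stack_trace(&trace);
                return depot_save_stack(&trace, flags);
        }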
      
      [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
      [aryabinin@virtuozzo.com: comment style fixes]
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 22 Mar, 2016 1 commit
    • kernel: add kcov code coverage · 5c9a8750
      Dmitry Vyukov authored
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall inputs.
      To achieve this goal it does not collect coverage in soft/hard
      interrupts, and instrumentation of some inherently non-deterministic or
      non-interesting parts of the kernel is disabled (e.g.  scheduler, locking).
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The
      complementary compiler support was added in gcc revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.  A
      typical coverage can be just a dozen basic blocks (e.g.  an invalid
      input).  In such a context gcov becomes prohibitively expensive, as the
      reset/collect coverage steps depend on the total number of basic
      blocks/edges in the program (in the case of the kernel it is about 2M).
      The cost of kcov depends only on the number of executed basic
      blocks/edges.  On top of that, the kernel requires per-thread coverage
      because there are always background threads and unrelated processes
      that also produce coverage.  With inlined gcov instrumentation
      per-thread coverage is not possible.
      
      kcov exposes kernel PCs and control flow to user-space which is
      insecure.  But debugfs should not be mapped as user accessible.
      
      Based on a patch by Quentin Casasnovas.
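      
      For reference, user-space usage looks roughly like the sketch below
      (error handling omitted; the ioctl numbers are as assumed here, see the
      kcov documentation added by this patch for the authoritative example):
      
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        
        #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
        #define KCOV_ENABLE     _IO('c', 100)
        #define KCOV_DISABLE    _IO('c', 101)
        #define COVER_SIZE      (64 << 10)
        
        int main(void)
        {
                int fd = open("/sys/kernel/debug/kcov", O_RDWR);
                unsigned long *cover, n, i;
        
                ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
                cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
                ioctl(fd, KCOV_ENABLE, 0);
                read(-1, NULL, 0);              /* the syscall to be traced */
                n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
                for (i = 0; i < n; i++)
                        printf("0x%lx\n", cover[i + 1]);
                ioctl(fd, KCOV_DISABLE, 0);
                close(fd);
                return 0;
        }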
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 29 Feb, 2016 1 commit
    • objtool: Mark non-standard object files and directories · c0dd6716
      Josh Poimboeuf authored
      Code which runs outside the kernel's normal mode of operation often does
      unusual things which can cause a static analysis tool like objtool to
      emit false positive warnings:
      
       - boot image
       - vdso image
       - relocation
       - realmode
       - efi
       - head
       - purgatory
       - modpost
      
      Set OBJECT_FILES_NON_STANDARD for their related files and directories,
      which will tell objtool to skip checking them.  It's ok to skip them
      because they don't affect runtime stack traces.
      
      Also skip the following code which does the right thing with respect to
      frame pointers, but is too "special" to be validated by a tool:
      
       - entry
       - mcount
      
      Also skip the test_nx module because it modifies its exception handling
      table at runtime, which objtool can't understand.  Fortunately it's
      just a test module so it doesn't matter much.
      
      Currently objtool is the only user of OBJECT_FILES_NON_STANDARD, but it
      might eventually be useful for other tools.
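      
      In kbuild terms this boils down to Makefile lines of the following form
      (the per-file object name below is illustrative):
      
        OBJECT_FILES_NON_STANDARD           := y   # whole directory
        OBJECT_FILES_NON_STANDARD_test_nx.o := y   # a single object file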
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/366c080e3844e8a5b6a0327dc7e8c2b90ca3baeb.1456719558.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 10 Sep, 2015 1 commit
    • kexec: split kexec_load syscall from kexec core code · 2965faa5
      Dave Young authored
      There are two kexec load syscalls: kexec_load and kexec_file_load.
      kexec_file_load has been split out into kernel/kexec_file.c.  In this
      patch I split the kexec_load syscall code out into kernel/kexec.c.
      
      Also add a new kconfig option KEXEC_CORE, so we can disable kexec_load
      and use kexec_file_load only, or vice versa.
      
      The original requirement is from Ted Ts'o: he wants the kexec kernel
      signature to be checked with CONFIG_KEXEC_VERIFY_SIG enabled.  But
      kexec-tools can bypass that check by using the kexec_load syscall.
      
      Vivek Goyal proposed to create a common kconfig option so users can
      compile in only one syscall for loading a kexec kernel.  KEXEC/KEXEC_FILE
      selects KEXEC_CORE so that old config files still work.
      
      Because there is generic code that needs CONFIG_KEXEC_CORE, I updated all
      the architecture Kconfig files with the new option KEXEC_CORE, and let
      KEXEC select KEXEC_CORE in the arch Kconfig.  The generic kernel code was
      also updated for the split-out kexec_load syscall.
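      
      In Kconfig terms the arrangement is roughly (a sketch, not the exact hunk):
      
        config KEXEC_CORE
                bool
        
        config KEXEC
                bool "kexec system call"
                select KEXEC_CORE
        
        config KEXEC_FILE
                bool "kexec file based system call"
                select KEXEC_CORE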
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Dave Young <dyoung@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Josh Boyer <jwboyer@fedoraproject.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 19 Aug, 2015 1 commit
    • libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option · 7a67832c
      Dan Williams authored
      We currently register a platform device for e820 type-12 memory and
      register a nvdimm bus beneath it.  Registering the platform device
      triggers the device-core machinery to probe for a driver, but that
      search currently comes up empty.  Building the nvdimm-bus registration
      into the e820_pmem platform device registration in this way forces
      libnvdimm to be built-in.  Instead, convert the built-in portion of
      CONFIG_X86_PMEM_LEGACY to simply register a platform device and move the
      rest of the logic to the driver for e820_pmem, for the following
      reasons:
      
      1/ Letting e820_pmem support be a module allows building and testing
         libnvdimm.ko changes without rebooting
      
      2/ All the normal policy around modules can be applied to e820_pmem
         (unbind to disable and/or blacklisting the module from loading by
         default)
      
      3/ Moving the driver to a generic location and converting it to scan
         "iomem_resource" rather than "e820.map" means any other architecture can
         take advantage of this simple nvdimm resource discovery mechanism by
         registering a resource named "Persistent Memory (legacy)"
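      
      As a sketch, the modular half then looks like an ordinary platform driver
      bound to the device registered by the built-in stub (details are
      illustrative, not the exact driver):
      
        #include <linux/module.h>
        #include <linux/platform_device.h>
        
        static int e820_pmem_probe(struct platform_device *pdev)
        {
                /*
                 * Walk iomem_resource for "Persistent Memory (legacy)" ranges
                 * and register an nvdimm bus / pmem region for each one.
                 */
                return 0;
        }
        
        static struct platform_driver e820_pmem_driver = {
                .probe = e820_pmem_probe,
                .driver = {
                        .name = "e820_pmem",
                },
        };
        module_platform_driver(e820_pmem_driver);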
      
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  13. 31 Jul, 2015 1 commit
    • x86/ldt: Make modify_ldt() optional · a5b9e5a2
      Andy Lutomirski authored
      The modify_ldt syscall exposes a large attack surface and is
      unnecessary for modern userspace.  Make it optional.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Cooper <andrew.cooper3@citrix.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jan Beulich <jbeulich@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: security@kernel.org <security@kernel.org>
      Cc: xen-devel <xen-devel@lists.xen.org>
      Link: http://lkml.kernel.org/r/a605166a771c343fd64802dece77a903507333bd.1438291540.git.luto@kernel.org
      [ Made MATH_EMULATION dependent on MODIFY_LDT_SYSCALL. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  14. 16 Jul, 2015 1 commit
  15. 06 Jul, 2015 2 commits
  16. 04 Jun, 2015 1 commit
    • x86/asm/entry: Move the vsyscall code to arch/x86/entry/vsyscall/ · 00398a00
      Ingo Molnar authored
      The vsyscall code is entry code too, so move it to arch/x86/entry/vsyscall/.
      
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  17. 03 Jun, 2015 1 commit
    • x86/asm/entry: Move entry_64.S and entry_32.S to arch/x86/entry/ · 905a36a2
      Ingo Molnar authored
      Create a new directory hierarchy for the low level x86 entry code:
      
          arch/x86/entry/*
      
      This will host all the low level glue that is currently scattered
      all across arch/x86/.
      
      Start with entry_64.S and entry_32.S.
      
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  18. 19 May, 2015 1 commit
    • x86/fpu: Move i387.c and xsave.c to arch/x86/kernel/fpu/ · ce4c4c26
      Ingo Molnar authored
      Create a new subdirectory for the FPU support code in arch/x86/kernel/fpu/.
      
      Rename 'i387.c' to 'core.c' - as this really collects the core FPU support
      code, nothing i387 specific.
      
      We'll better organize this directory in later patches.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  19. 01 Apr, 2015 1 commit
  20. 04 Mar, 2015 1 commit
  21. 14 Feb, 2015 2 commits
    • kasan: enable stack instrumentation · c420f167
      Andrey Ryabinin authored
      Stack instrumentation allows detection of out-of-bounds memory accesses to
      variables allocated on the stack.  The compiler adds redzones around every
      variable on the stack and poisons the redzones in the function prologue.
      
      This approach significantly increases stack usage, so the size of all
      in-kernel stacks was doubled.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86_64: add KASan support · ef7f0d6a
      Andrey Ryabinin authored
      This patch adds arch specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for shadow memory.  It's located in
      the range [ffffec0000000000 - fffffc0000000000] between vmemmap and the
      %esp fixup stacks.
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped into the direct mapping address range, we
      unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
      and allocate and map real shadow memory, reusing the vmemmap_populate()
      function.
      
      Also replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be called
      before the shadow area is initialized.
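      
      The address-to-shadow mapping that carves out this 16TB region follows the
      usual KASan scheme of one shadow byte per 8 bytes of memory, roughly:
      
        /* Sketch: 1 shadow byte covers 8 bytes; the offset comes from Kconfig. */
        #define KASAN_SHADOW_SCALE_SHIFT 3
        
        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET;
        }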
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 04 Feb, 2015 1 commit
  23. 22 Dec, 2014 1 commit
  24. 03 Nov, 2014 1 commit
  25. 24 Sep, 2014 1 commit
  26. 29 Aug, 2014 1 commit
    • kexec: create a new config option CONFIG_KEXEC_FILE for new syscall · 74ca317c
      Vivek Goyal authored
      Currently the new system call kexec_file_load() and all the associated
      code compile if CONFIG_KEXEC=y.  But the new syscall also compiles the
      purgatory code, which currently uses the gcc option -mcmodel=large.  This
      option seems to be available only from gcc 4.4 onwards.
      
      Hiding the new functionality behind a new config option will not break
      existing users of old gcc.  Those who wish to enable the new functionality
      will require a new gcc.  Having said that, I am trying to figure out how
      I can move away from using -mcmodel=large, but that can take a while.
      
      I think there are other advantages to introducing this new config
      option.  As this option will be enabled only on x86_64, other arches
      don't have to compile generic kexec code which will never be used.  This
      new code selects CRYPTO=y and CRYPTO_SHA256=y, and all other arches had
      to do this for CONFIG_KEXEC.  Now with the introduction of the new config
      option, we can remove the crypto dependency from the other arches.
      
      Now CONFIG_KEXEC_FILE is available only on x86_64.  So wherever I had
      CONFIG_X86_64 defined, I got rid of that.
      
      For CONFIG_KEXEC_FILE, instead of doing select CRYPTO=y, I changed it to
      "depends on CRYPTO=y".  This should be safer as "select" is not
      recursive.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Tested-by: Shaun Ruffell <sruffell@digium.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  27. 08 Aug, 2014 1 commit
  28. 25 Jul, 2014 1 commit
  29. 14 May, 2014 1 commit
  30. 04 May, 2014 1 commit
  31. 30 Apr, 2014 1 commit
    • x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack · 3891a04a
      H. Peter Anvin authored
      The IRET instruction, when returning to a 16-bit segment, only
      restores the bottom 16 bits of the user space stack pointer.  This
      causes some 16-bit software to break, but it also leaks kernel state
      to user space.  We have a software workaround for that ("espfix") for
      the 32-bit kernel, but it relies on a nonzero stack segment base which
      is not available in 64-bit mode.
      
      In checkin:
      
          b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
      
      we "solved" this by forbidding 16-bit segments on 64-bit kernels, with
      the logic that 16-bit support is crippled on 64-bit kernels anyway (no
      V86 support), but it turns out that people are doing stuff like
      running old Win16 binaries under Wine and expect it to work.
      
      This patch works around the problem by creating percpu "ministacks",
      each of which is mapped 2^16 times, 64K apart.  When we detect that
      the return SS is
      on the LDT, we copy the IRET frame to the ministack and use the
      relevant alias to return to userspace.  The ministacks are mapped
      readonly, so if IRET faults we promote #GP to #DF which is an IST
      vector and thus has its own stack; we then do the fixup in the #DF
      handler.
      
      (Making #GP an IST exception would make the msr_safe functions unsafe
      in NMI/MC context, and quite possibly have other effects.)
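      
      The trigger condition itself is cheap to test on the IRET path, since an
      LDT selector is identified by bit 2 (the table-indicator bit) of the
      saved SS; in C-like form (a sketch of the check, not the actual assembly):
      
        #include <linux/types.h>
        #include <asm/ptrace.h>
        #include <asm/segment.h>
        
        /* Sketch: does this return need the espfix ministack path? */
        static bool iret_needs_espfix(struct pt_regs *regs)
        {
                /* Bit 2 of a segment selector set means "look in the LDT". */
                return (regs->ss & SEGMENT_TI_MASK) == SEGMENT_LDT;
        }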
      
      Special thanks to:
      
      - Andy Lutomirski, for the suggestion of using very small stack slots
        and copy (as opposed to map) the IRET frame there, and for the
        suggestion to mark them readonly and let the fault promote to #DF.
      - Konrad Wilk for paravirt fixup and testing.
      - Borislav Petkov for testing help and useful comments.
      Reported-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andrew Lutomriski <amluto@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Dirk Hohndel <dirk@hohndel.org>
      Cc: Arjan van de Ven <arjan.van.de.ven@intel.com>
      Cc: comex <comexk@gmail.com>
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: <stable@vger.kernel.org> # consider after upstream merge
  32. 18 Mar, 2014 1 commit
  33. 16 Jan, 2014 1 commit
  34. 13 Jan, 2014 1 commit
  35. 08 Jan, 2014 1 commit
    • arch: x86: New MailBox support driver for Intel SOC's · 46184415
      David E. Box authored
      Current Intel SOC cores use a MailBox Interface (MBI) to provide access to
      configuration registers on devices (called units) connected to the system
      fabric. This is a support driver that implements access to this interface on
      those platforms that can enumerate the device using PCI. Initial support is for
      BayTrail, for which port definitions are provided. This is a requirement for
      implementing platform specific features (e.g. the RAPL driver requires this to
      perform platform specific power management using the registers in the PUNIT).
      Dependent modules should select IOSF_MBI in their respective Kconfig
      configuration. Serialized access is handled by all exported routines with
      spinlocks.
      
      The API includes 3 functions for access to unit registers:
      
      int iosf_mbi_read(u8 port, u8 opcode, u32 offset, u32 *mdr)
      int iosf_mbi_write(u8 port, u8 opcode, u32 offset, u32 mdr)
      int iosf_mbi_modify(u8 port, u8 opcode, u32 offset, u32 mdr, u32 mask)
      
      port:	indicating the unit being accessed
      opcode:	the read or write port specific opcode
      offset:	the register offset within the port
      mdr:	the register data to be read, written, or modified
      mask:	bit locations in mdr to change
      
      Returns nonzero on error
      
      Note: GPU code handles access to the GFX unit. Therefore access to that unit
      with this driver is disallowed to avoid conflicts.
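      
      A hedged usage example (the port and opcode values below are placeholders,
      not real BayTrail definitions):
      
        #include <linux/bitops.h>
        #include <asm/iosf_mbi.h>
        
        /* Sketch: read-modify-write one bit of a unit register over the MBI. */
        static int example_set_unit_bit(void)
        {
                u32 mdr;
                int ret;
        
                ret = iosf_mbi_read(0x04 /* placeholder port */,
                                    0x10 /* placeholder read opcode */,
                                    0x30 /* register offset */, &mdr);
                if (ret)
                        return ret;
        
                return iosf_mbi_modify(0x04, 0x11 /* placeholder write opcode */,
                                       0x30, BIT(0), BIT(0));
        }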
      Signed-off-by: David E. Box <david.e.box@linux.intel.com>
      Link: http://lkml.kernel.org/r/1389216471-734-1-git-send-email-david.e.box@linux.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
  36. 29 Dec, 2013 1 commit
    • x86: Export x86 boot_params to sysfs · 5039e316
      Dave Young authored
      kexec-tools uses boot_params to get the first kernel's hardware_subarch,
      and the kexec kernel EFI runtime support also needs to read the old
      efi_info from boot_params. Currently this lives in debugfs, which is not
      a good place for such information. Per HPA, we should avoid exploiting
      debugfs for this.
      
      In this patch /sys/kernel/boot_params is exported, and the setup_data is
      exported as a subdirectory. kexec-tools has been using debugfs for
      hardware_subarch for a long time now, so we're not removing it yet.
      
      Structure is like below:
      
      /sys/kernel/boot_params
      |__ data                /* boot_params in binary*/
      |__ setup_data
      |   |__ 0               /* the first setup_data node */
      |   |   |__ data        /* setup_data node 0 in binary*/
      |   |   |__ type        /* setup_data type of setup_data node 0, hex string */
      [snip]
      |__ version             /* boot protocol version (in hex, "0x" prefixed) */
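      
      So, for example, user space can inspect the exported data with nothing
      more than (illustrative):
      
        cat /sys/kernel/boot_params/version          # boot protocol version
        hexdump -C /sys/kernel/boot_params/data      # raw boot_params blob
        cat /sys/kernel/boot_params/setup_data/0/type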
      Signed-off-by: Dave Young <dyoung@redhat.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Tested-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  37. 25 Sep, 2013 1 commit