Commit aefcf2f4 authored by Linus Torvalds

Merge branch 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull kernel lockdown mode from James Morris:
 "This is the latest iteration of the kernel lockdown patchset, from
  Matthew Garrett, David Howells and others.

  From the original description:

    This patchset introduces an optional kernel lockdown feature,
    intended to strengthen the boundary between UID 0 and the kernel.
    When enabled, various pieces of kernel functionality are restricted.
    Applications that rely on low-level access to either hardware or the
    kernel may cease working as a result - therefore this should not be
    enabled without appropriate evaluation beforehand.

    The majority of mainstream distributions have been carrying variants
    of this patchset for many years now, so there's value in providing a
    mainline implementation. It doesn't meet every distribution
    requirement, but gets us much closer to not requiring external
    patches.

  There are two major changes since this was last proposed for mainline:

   - Separating lockdown from EFI secure boot. Background discussion is
     covered here: https://lwn.net/Articles/751061/

   - Implementation as an LSM, with a default stackable lockdown LSM
     module. This allows the lockdown feature to be policy-driven,
     rather than encoding an implicit policy within the mechanism.

  The new locked_down LSM hook is provided to allow LSMs to make a
  policy decision around whether kernel functionality that would allow
  tampering with or examining the runtime state of the kernel should be
  permitted.
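
  An LSM wires into this by supplying a locked_down callback. As a rough
  sketch of the shape this takes (simplified from the real lockdown LSM;
  the policy variable and sysfs plumbing are elided, and details may
  differ from the upstream code):

    static enum lockdown_reason kernel_locked_down;

    /* Deny the operation if the current policy covers this reason. */
    static int lockdown_is_locked_down(enum lockdown_reason what)
    {
            if (what != LOCKDOWN_NONE && kernel_locked_down >= what)
                    return -EPERM;
            return 0;
    }

    static struct security_hook_list lockdown_hooks[] __lsm_ro_after_init = {
            LSM_HOOK_INIT(locked_down, lockdown_is_locked_down),
    };

    static int __init lockdown_lsm_init(void)
    {
            security_add_hooks(lockdown_hooks, ARRAY_SIZE(lockdown_hooks),
                               "lockdown");
            return 0;
    }

    DEFINE_EARLY_LSM(lockdown) = {
            .name = "lockdown",
            .init = lockdown_lsm_init,
    };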

  The included lockdown LSM provides an implementation with a simple
  policy intended for general purpose use. This policy provides a coarse
  level of granularity, controllable via the kernel command line:

    lockdown={integrity|confidentiality}

  Enable the kernel lockdown feature. If set to integrity, kernel features
  that allow userland to modify the running kernel are disabled. If set to
  confidentiality, kernel features that allow userland to extract
  confidential information from the kernel are also disabled.

  This may also be controlled via /sys/kernel/security/lockdown and
overridden by kernel configuration.

  New or existing LSMs may implement finer-grained controls of the
  lockdown features. Refer to the lockdown_reason documentation in
  include/linux/security.h for details.
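
  On the caller side, subsystems gate sensitive operations by passing the
  relevant lockdown_reason to security_locked_down() and failing the
  operation if it returns an error, as the diffs below do for /dev/mem,
  MSRs, PCI config space and others. A minimal, hypothetical example of
  the pattern (the function name here is illustrative, not from the
  kernel):

    #include <linux/capability.h>
    #include <linux/fs.h>
    #include <linux/security.h>

    static int example_raw_open(struct inode *inode, struct file *filp)
    {
            /* Traditional privilege check still applies. */
            if (!capable(CAP_SYS_RAWIO))
                    return -EPERM;

            /*
             * Ask the active LSMs whether this class of access is locked
             * down; a non-zero return (typically -EPERM) denies the open.
             */
            return security_locked_down(LOCKDOWN_DEV_MEM);
    }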

  The lockdown feature has had significant design feedback and review
  across many subsystems. This code has been in linux-next for some
  weeks, with a few fixes applied along the way.

  Stephen Rothwell noted that commit 9d1f8be5 ("bpf: Restrict bpf
  when kernel lockdown is in confidentiality mode") is missing a
  Signed-off-by from its author. Matthew responded that he is providing
  this under category (c) of the DCO"

* 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits)
  kexec: Fix file verification on S390
  security: constify some arrays in lockdown LSM
  lockdown: Print current->comm in restriction messages
  efi: Restrict efivar_ssdt_load when the kernel is locked down
  tracefs: Restrict tracefs when the kernel is locked down
  debugfs: Restrict debugfs when the kernel is locked down
  kexec: Allow kexec_file() with appropriate IMA policy when locked down
  lockdown: Lock down perf when in confidentiality mode
  bpf: Restrict bpf when kernel lockdown is in confidentiality mode
  lockdown: Lock down tracing and perf kprobes when in confidentiality mode
  lockdown: Lock down /proc/kcore
  x86/mmiotrace: Lock down the testmmiotrace module
  lockdown: Lock down module params that specify hardware parameters (eg. ioport)
  lockdown: Lock down TIOCSSERIAL
  lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down
  acpi: Disable ACPI table override if the kernel is locked down
  acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down
  ACPI: Limit access to custom_method when the kernel is locked down
  x86/msr: Restrict MSR access when the kernel is locked down
  x86: Lock down IO port access when the kernel is locked down
  ...
parents f1f2f614 45893a0a
@@ -2276,6 +2276,15 @@
 	lockd.nlm_udpport=M	[NFS] Assign UDP port.
 			Format: <integer>

+	lockdown=	[SECURITY]
+			{ integrity | confidentiality }
+			Enable the kernel lockdown feature. If set to
+			integrity, kernel features that allow userland to
+			modify the running kernel are disabled. If set to
+			confidentiality, kernel features that allow userland
+			to extract confidential information from the kernel
+			are also disabled.
+
 	locktorture.nreaders_stress= [KNL]
 			Set the number of locking read-acquisition kthreads.
 			Defaults to being automatically set based on the
...
@@ -982,7 +982,7 @@ config KEXEC_FILE
 	  for kernel and initramfs as opposed to list of segments as
 	  accepted by previous system call.

-config KEXEC_VERIFY_SIG
+config KEXEC_SIG
 	bool "Verify kernel signature during kexec_file_load() syscall"
 	depends on KEXEC_FILE
 	help
@@ -997,13 +997,13 @@ config KEXEC_VERIFY_SIG
 config KEXEC_IMAGE_VERIFY_SIG
 	bool "Enable Image signature verification support"
 	default y
-	depends on KEXEC_VERIFY_SIG
+	depends on KEXEC_SIG
 	depends on EFI && SIGNED_PE_FILE_VERIFICATION
 	help
 	  Enable Image signature verification support.

 comment "Support for PE file signature verification disabled"
-	depends on KEXEC_VERIFY_SIG
+	depends on KEXEC_SIG
 	depends on !EFI || !SIGNED_PE_FILE_VERIFICATION

 config CRASH_DUMP
...
@@ -554,7 +554,7 @@ config ARCH_HAS_KEXEC_PURGATORY
 	def_bool y
 	depends on KEXEC_FILE

-config KEXEC_VERIFY_SIG
+config KEXEC_SIG
 	bool "Verify kernel signature during kexec_file_load() syscall"
 	depends on KEXEC_FILE && MODULE_SIG_FORMAT
 	help
...
@@ -130,7 +130,7 @@ static int s390_elf_probe(const char *buf, unsigned long len)
 const struct kexec_file_ops s390_kexec_elf_ops = {
 	.probe = s390_elf_probe,
 	.load = s390_elf_load,
-#ifdef CONFIG_KEXEC_VERIFY_SIG
+#ifdef CONFIG_KEXEC_SIG
 	.verify_sig = s390_verify_sig,
-#endif /* CONFIG_KEXEC_VERIFY_SIG */
+#endif /* CONFIG_KEXEC_SIG */
 };
@@ -59,7 +59,7 @@ static int s390_image_probe(const char *buf, unsigned long len)
 const struct kexec_file_ops s390_kexec_image_ops = {
 	.probe = s390_image_probe,
 	.load = s390_image_load,
-#ifdef CONFIG_KEXEC_VERIFY_SIG
+#ifdef CONFIG_KEXEC_SIG
 	.verify_sig = s390_verify_sig,
-#endif /* CONFIG_KEXEC_VERIFY_SIG */
+#endif /* CONFIG_KEXEC_SIG */
 };
@@ -22,7 +22,7 @@ const struct kexec_file_ops * const kexec_file_loaders[] = {
 	NULL,
 };

-#ifdef CONFIG_KEXEC_VERIFY_SIG
+#ifdef CONFIG_KEXEC_SIG
 int s390_verify_sig(const char *kernel, unsigned long kernel_len)
 {
 	const unsigned long marker_len = sizeof(MODULE_SIG_STRING) - 1;
@@ -68,7 +68,7 @@ int s390_verify_sig(const char *kernel, unsigned long kernel_len)
 				      VERIFYING_MODULE_SIGNATURE,
 				      NULL, NULL);
 }
-#endif /* CONFIG_KEXEC_VERIFY_SIG */
+#endif /* CONFIG_KEXEC_SIG */

 static int kexec_file_update_purgatory(struct kimage *image,
 				       struct s390_load_data *data)
...
@@ -2031,20 +2031,30 @@ config KEXEC_FILE
 config ARCH_HAS_KEXEC_PURGATORY
 	def_bool KEXEC_FILE

-config KEXEC_VERIFY_SIG
+config KEXEC_SIG
 	bool "Verify kernel signature during kexec_file_load() syscall"
 	depends on KEXEC_FILE
 	---help---
-	  This option makes kernel signature verification mandatory for
-	  the kexec_file_load() syscall.
-
-	  In addition to that option, you need to enable signature
+	  This option makes the kexec_file_load() syscall check for a valid
+	  signature of the kernel image. The image can still be loaded without
+	  a valid signature unless you also enable KEXEC_SIG_FORCE, though if
+	  there's a signature that we can check, then it must be valid.
+
+	  In addition to this option, you need to enable signature
 	  verification for the corresponding kernel image type being
 	  loaded in order for this to work.

+config KEXEC_SIG_FORCE
+	bool "Require a valid signature in kexec_file_load() syscall"
+	depends on KEXEC_SIG
+	---help---
+	  This option makes kernel signature verification mandatory for
+	  the kexec_file_load() syscall.
+
 config KEXEC_BZIMAGE_VERIFY_SIG
 	bool "Enable bzImage signature verification support"
-	depends on KEXEC_VERIFY_SIG
+	depends on KEXEC_SIG
 	depends on SIGNED_PE_FILE_VERIFICATION
 	select SYSTEM_TRUSTED_KEYRING
 	---help---
...
@@ -26,7 +26,7 @@ struct mem_vector immovable_mem[MAX_NUMNODES*2];
  */
 #define MAX_ADDR_LEN 19

-static acpi_physical_address get_acpi_rsdp(void)
+static acpi_physical_address get_cmdline_acpi_rsdp(void)
 {
 	acpi_physical_address addr = 0;

@@ -278,10 +278,7 @@ acpi_physical_address get_rsdp_addr(void)
 {
 	acpi_physical_address pa;

-	pa = get_acpi_rsdp();
-
-	if (!pa)
-		pa = boot_params->acpi_rsdp_addr;
+	pa = boot_params->acpi_rsdp_addr;

 	/*
 	 * Try to get EFI data from setup_data. This can happen when we're a
@@ -311,7 +308,17 @@ static unsigned long get_acpi_srat_table(void)
 	char arg[10];
 	u8 *entry;

-	rsdp = (struct acpi_table_rsdp *)(long)boot_params->acpi_rsdp_addr;
+	/*
+	 * Check whether we were given an RSDP on the command line. We don't
+	 * stash this in boot params because the kernel itself may have
+	 * different ideas about whether to trust a command-line parameter.
+	 */
+	rsdp = (struct acpi_table_rsdp *)get_cmdline_acpi_rsdp();
+	if (!rsdp)
+		rsdp = (struct acpi_table_rsdp *)(long)
+			boot_params->acpi_rsdp_addr;
+
 	if (!rsdp)
 		return 0;
...
@@ -117,6 +117,12 @@ static inline bool acpi_has_cpu_in_madt(void)
 	return !!acpi_lapic;
 }

+#define ACPI_HAVE_ARCH_SET_ROOT_POINTER
+static inline void acpi_arch_set_root_pointer(u64 addr)
+{
+	x86_init.acpi.set_root_pointer(addr);
+}
+
 #define ACPI_HAVE_ARCH_GET_ROOT_POINTER
 static inline u64 acpi_arch_get_root_pointer(void)
 {
@@ -125,6 +131,7 @@ static inline u64 acpi_arch_get_root_pointer(void)

 void acpi_generic_reduced_hw_init(void);

+void x86_default_set_root_pointer(u64 addr);
 u64 x86_default_get_root_pointer(void);

 #else /* !CONFIG_ACPI */
@@ -138,6 +145,8 @@ static inline void disable_acpi(void) { }

 static inline void acpi_generic_reduced_hw_init(void) { }

+static inline void x86_default_set_root_pointer(u64 addr) { }
+
 static inline u64 x86_default_get_root_pointer(void)
 {
 	return 0;
...
@@ -134,10 +134,12 @@ struct x86_hyper_init {

 /**
  * struct x86_init_acpi - x86 ACPI init functions
+ * @set_root_poitner:		set RSDP address
  * @get_root_pointer:		get RSDP address
  * @reduced_hw_early_init:	hardware reduced platform early init
  */
 struct x86_init_acpi {
+	void (*set_root_pointer)(u64 addr);
 	u64 (*get_root_pointer)(void);
 	void (*reduced_hw_early_init)(void);
 };
...
@@ -1760,6 +1760,11 @@ void __init arch_reserve_mem_area(acpi_physical_address addr, size_t size)
 	e820__update_table_print();
 }

+void x86_default_set_root_pointer(u64 addr)
+{
+	boot_params.acpi_rsdp_addr = addr;
+}
+
 u64 x86_default_get_root_pointer(void)
 {
 	return boot_params.acpi_rsdp_addr;
...
@@ -74,9 +74,9 @@ bool arch_ima_get_secureboot(void)

 /* secureboot arch rules */
 static const char * const sb_arch_rules[] = {
-#if !IS_ENABLED(CONFIG_KEXEC_VERIFY_SIG)
+#if !IS_ENABLED(CONFIG_KEXEC_SIG)
 	"appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig",
-#endif /* CONFIG_KEXEC_VERIFY_SIG */
+#endif /* CONFIG_KEXEC_SIG */
 	"measure func=KEXEC_KERNEL_CHECK",
 #if !IS_ENABLED(CONFIG_MODULE_SIG)
 	"appraise func=MODULE_CHECK appraise_type=imasig",
...
@@ -11,6 +11,7 @@
 #include <linux/errno.h>
 #include <linux/types.h>
 #include <linux/ioport.h>
+#include <linux/security.h>
 #include <linux/smp.h>
 #include <linux/stddef.h>
 #include <linux/slab.h>
@@ -31,7 +32,8 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on)

 	if ((from + num <= from) || (from + num > IO_BITMAP_BITS))
 		return -EINVAL;
-	if (turn_on && !capable(CAP_SYS_RAWIO))
+	if (turn_on && (!capable(CAP_SYS_RAWIO) ||
+			security_locked_down(LOCKDOWN_IOPORT)))
 		return -EPERM;

 	/*
@@ -126,7 +128,8 @@ SYSCALL_DEFINE1(iopl, unsigned int, level)
 		return -EINVAL;
 	/* Trying to gain more privileges? */
 	if (level > old) {
-		if (!capable(CAP_SYS_RAWIO))
+		if (!capable(CAP_SYS_RAWIO) ||
+		    security_locked_down(LOCKDOWN_IOPORT))
 			return -EPERM;
 	}
 	regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) |
...
@@ -180,6 +180,7 @@ setup_efi_state(struct boot_params *params, unsigned long params_load_addr,
 	if (efi_enabled(EFI_OLD_MEMMAP))
 		return 0;

+	params->secure_boot = boot_params.secure_boot;
 	ei->efi_loader_signature = current_ei->efi_loader_signature;
 	ei->efi_systab = current_ei->efi_systab;
 	ei->efi_systab_hi = current_ei->efi_systab_hi;
...
@@ -34,6 +34,7 @@
 #include <linux/notifier.h>
 #include <linux/uaccess.h>
 #include <linux/gfp.h>
+#include <linux/security.h>

 #include <asm/cpufeature.h>
 #include <asm/msr.h>
@@ -79,6 +80,10 @@ static ssize_t msr_write(struct file *file, const char __user *buf,
 	int err = 0;
 	ssize_t bytes = 0;

+	err = security_locked_down(LOCKDOWN_MSR);
+	if (err)
+		return err;
+
 	if (count % 8)
 		return -EINVAL;	/* Invalid chunk size */

@@ -130,6 +135,9 @@ static long msr_ioctl(struct file *file, unsigned int ioc, unsigned long arg)
 			err = -EFAULT;
 			break;
 		}
+		err = security_locked_down(LOCKDOWN_MSR);
+		if (err)
+			break;
 		err = wrmsr_safe_regs_on_cpu(cpu, regs);
 		if (err)
 			break;
...
@@ -95,6 +95,7 @@ struct x86_init_ops x86_init __initdata = {
 	},

 	.acpi = {
+		.set_root_pointer	= x86_default_set_root_pointer,
 		.get_root_pointer	= x86_default_get_root_pointer,
 		.reduced_hw_early_init	= acpi_generic_reduced_hw_init,
 	},
...
@@ -8,6 +8,7 @@
 #include <linux/module.h>
 #include <linux/io.h>
 #include <linux/mmiotrace.h>
+#include <linux/security.h>

 static unsigned long mmio_address;
 module_param_hw(mmio_address, ulong, iomem, 0);
@@ -115,6 +116,10 @@ static void do_test_bulk_ioremapping(void)
 static int __init init(void)
 {
 	unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
+	int ret = security_locked_down(LOCKDOWN_MMIOTRACE);
+
+	if (ret)
+		return ret;

 	if (mmio_address == 0) {
 		pr_err("you have to use the module argument mmio_address.\n");
...
@@ -96,7 +96,7 @@ static int pefile_parse_binary(const void *pebuf, unsigned int pelen,

 	if (!ddir->certs.virtual_address || !ddir->certs.size) {
 		pr_debug("Unsigned PE binary\n");
-		return -EKEYREJECTED;
+		return -ENODATA;
 	}

 	chkaddr(ctx->header_size, ddir->certs.virtual_address,
@@ -403,6 +403,8 @@ static int pefile_digest_pe(const void *pebuf, unsigned int pelen,
  *  (*) 0 if at least one signature chain intersects with the keys in the trust
  *	keyring, or:
  *
+ *  (*) -ENODATA if there is no signature present.
+ *
  *  (*) -ENOPKG if a suitable crypto module couldn't be found for a check on a
  *	chain.
  *
...
@@ -9,6 +9,7 @@
 #include <linux/uaccess.h>
 #include <linux/debugfs.h>
 #include <linux/acpi.h>
+#include <linux/security.h>

 #include "internal.h"

@@ -29,6 +30,11 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
 	struct acpi_table_header table;
 	acpi_status status;
+	int ret;
+
+	ret = security_locked_down(LOCKDOWN_ACPI_TABLES);
+	if (ret)
+		return ret;

 	if (!(*ppos)) {
 		/* parse the table header to get the table length */
...
@@ -27,6 +27,7 @@
 #include <linux/list.h>
 #include <linux/jiffies.h>
 #include <linux/semaphore.h>
+#include <linux/security.h>

 #include <asm/io.h>
 #include <linux/uaccess.h>
@@ -182,8 +183,19 @@ acpi_physical_address __init acpi_os_get_root_pointer(void)
 	acpi_physical_address pa;

 #ifdef CONFIG_KEXEC
-	if (acpi_rsdp)
+	/*
+	 * We may have been provided with an RSDP on the command line,
+	 * but if a malicious user has done so they may be pointing us
+	 * at modified ACPI tables that could alter kernel behaviour -
+	 * so, we check the lockdown status before making use of
+	 * it. If we trust it then also stash it in an architecture
+	 * specific location (if appropriate) so it can be carried
+	 * over further kexec()s.
+	 */
+	if (acpi_rsdp && !security_locked_down(LOCKDOWN_ACPI_TABLES)) {
+		acpi_arch_set_root_pointer(acpi_rsdp);
 		return acpi_rsdp;
+	}
 #endif
 	pa = acpi_arch_get_root_pointer();
 	if (pa)
...
@@ -20,6 +20,7 @@
 #include <linux/memblock.h>
 #include <linux/earlycpio.h>
 #include <linux/initrd.h>
+#include <linux/security.h>
 #include "internal.h"

 #ifdef CONFIG_ACPI_CUSTOM_DSDT
@@ -578,6 +579,11 @@ void __init acpi_table_upgrade(void)
 	if (table_nr == 0)
 		return;

+	if (security_locked_down(LOCKDOWN_ACPI_TABLES)) {
+		pr_notice("kernel is locked down, ignoring table override\n");
+		return;
+	}
+
 	acpi_tables_addr =
 		memblock_find_in_range(0, ACPI_TABLE_UPGRADE_MAX_PHYS,
 				       all_tables_size, PAGE_SIZE);
...
@@ -29,8 +29,8 @@
 #include <linux/export.h>
 #include <linux/io.h>
 #include <linux/uio.h>
-
 #include <linux/uaccess.h>
+#include <linux/security.h>

 #ifdef CONFIG_IA64
 # include <linux/efi.h>
@@ -807,7 +807,10 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)

 static int open_port(struct inode *inode, struct file *filp)
 {
-	return capable(CAP_SYS_RAWIO) ? 0 : -EPERM;
+	if (!capable(CAP_SYS_RAWIO))
+		return -EPERM;
+
+	return security_locked_down(LOCKDOWN_DEV_MEM);
 }

 #define zero_lseek	null_lseek
...
@@ -30,6 +30,7 @@
 #include <linux/acpi.h>
 #include <linux/ucs2_string.h>
 #include <linux/memblock.h>
+#include <linux/security.h>

 #include <asm/early_ioremap.h>

@@ -221,6 +222,11 @@ static void generic_ops_unregister(void)
 static char efivar_ssdt[EFIVAR_SSDT_NAME_MAX] __initdata;
 static int __init efivar_ssdt_setup(char *str)
 {
+	int ret = security_locked_down(LOCKDOWN_ACPI_TABLES);
+
+	if (ret)
+		return ret;
+
 	if (strlen(str) < sizeof(efivar_ssdt))
 		memcpy(efivar_ssdt, str, strlen(str));
 	else
...
@@ -755,6 +755,11 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj,
 	unsigned int size = count;
 	loff_t init_off = off;
 	u8 *data = (u8 *) buf;
+	int ret;
+
+	ret = security_locked_down(LOCKDOWN_PCI_ACCESS);
+	if (ret)
+		return ret;

 	if (off > dev->cfg_size)
 		return 0;
@@ -1016,6 +1021,11 @@ static int pci_mmap_resource(struct kobject *kobj, struct bin_attribute *attr,
 	int bar = (unsigned long)attr->private;
 	enum pci_mmap_state mmap_type;
 	struct resource *res = &pdev->resource[bar];
+	int ret;
+
+	ret = security_locked_down(LOCKDOWN_PCI_ACCESS);
+	if (ret)
+		return ret;

 	if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start))
 		return -EINVAL;
@@ -1092,6 +1102,12 @@ static ssize_t pci_write_resource_io(struct file *filp, struct kobject *kobj,
 				     struct bin_attribute *attr, char *buf,
 				     loff_t off, size_t count)
 {
+	int ret;
+
+	ret = security_locked_down(LOCKDOWN_PCI_ACCESS);
+	if (ret)
+		return ret;
+
 	return pci_resource_io(filp, kobj, attr, buf, off, count, true);
 }
...
@@ -13,6 +13,7 @@
 #include <linux/seq_file.h>
 #include <linux/capability.h>
 #include <linux/uaccess.h>
+#include <linux/security.h>
 #include <asm/byteorder.h>
 #include "pci.h"

@@ -115,7 +116,11 @@ static ssize_t proc_bus_pci_write(struct file *file, const char __user *buf,
 	struct pci_dev *dev = PDE_DATA(ino);
 	int pos = *ppos;
 	int size = dev->cfg_size;
-	int cnt;
+	int cnt, ret;
+
+	ret = security_locked_down(LOCKDOWN_PCI_ACCESS);
+	if (ret)
+		return ret;

 	if (pos >= size)
 		return 0;
@@ -196,6 +201,10 @@ static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
 #endif /* HAVE_PCI_MMAP */
 	int ret = 0;

+	ret = security_locked_down(LOCKDOWN_PCI_ACCESS);
+	if (ret)
+		return ret;
+
 	switch (cmd) {
 	case PCIIOC_CONTROLLER:
 		ret = pci_domain_nr(dev->bus);
@@ -238,7 +247,8 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
 	struct pci_filp_private *fpriv = file->private_data;
 	int i, ret, write_combine = 0, res_bit = IORESOURCE_MEM;

-	if (!capable(CAP_SYS_RAWIO))
+	if (!capable(CAP_SYS_RAWIO) ||
+	    security_locked_down(LOCKDOWN_PCI_ACCESS))
 		return -EPERM;

 	if (fpriv->mmap_state == pci_mmap_io) {
...
@@ -7,6 +7,7 @@

 #include <linux/errno.h>
 #include <linux/pci.h>
+#include <linux/security.h>
 #include <linux/syscalls.h>
 #include <linux/uaccess.h>
 #include "pci.h"
@@ -90,7 +91,8 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
 	u32 dword;
 	int err = 0;

-	if (!capable(CAP_SYS_ADMIN))
+	if (!capable(CAP_SYS_ADMIN) ||
+	    security_locked_down(LOCKDOWN_PCI_ACCESS))
 		return -EPERM;

 	dev = pci_get_domain_bus_and_slot(0, bus, dfn);
...
@@ -21,6 +21,7 @@
 #include <linux/pci.h>
 #include <linux/ioport.h>
 #include <linux/io.h>
+#include <linux/security.h>
 #include <asm/byteorder.h>
 #include <asm/unaligned.h>

@@ -1575,6 +1576,10 @@ static ssize_t pccard_store_cis(struct file *filp, struct kobject *kobj,
 	struct pcmcia_socket *s;
 	int error;

+	error = security_locked_down(LOCKDOWN_PCMCIA_CIS);
+	if (error)
+		return error;
+
 	s = to_socket(container_of(kobj, struct device, kobj));

 	if (off)
...
@@ -22,6 +22,7 @@
 #include <linux/serial_core.h>
 #include <linux/delay.h>
 #include <linux/mutex.h>
+#include <linux/security.h>

 #include <linux/irq.h>
 #include <linux/uaccess.h>
@@ -862,6 +863,10 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
 		goto check_and_exit;
 	}

+	retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
+	if (retval && (change_irq || change_port))
+		goto exit;
+
 	/*
 	 * Ask the low level driver to verify the settings.
 	 */
...
@@ -19,6 +19,7 @@
 #include <linux/atomic.h>
 #include <linux/device.h>
 #include <linux/poll.h>
+#include <linux/security.h>

 #include "internal.h"

@@ -136,6 +137,25 @@ void debugfs_file_put(struct dentry *dentry)
 }
 EXPORT_SYMBOL_GPL(debugfs_file_put);

+/*
+ * Only permit access to world-readable files when the kernel is locked down.
+ * We also need to exclude any file that has ways to write or alter it as root
+ * can bypass the permissions check.
+ */
+static bool debugfs_is_locked_down(struct inode *inode,
+				   struct file *filp,
+				   const struct file_operations *real_fops)
+{
+	if ((inode->i_mode & 07777) == 0444 &&
+	    !(filp->f_mode & FMODE_WRITE) &&
+	    !real_fops->unlocked_ioctl &&
+	    !real_fops->compat_ioctl &&
+	    !real_fops->mmap)
+		return false;
+
+	return security_locked_down(LOCKDOWN_DEBUGFS);
+}
+
 static int open_proxy_open(struct inode *inode, struct file *filp)
 {
 	struct dentry *dentry = F_DENTRY(filp);
@@ -147,6 +167,11 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
 		return r == -EIO ? -ENOENT : r;

 	real_fops = debugfs_real_fops(filp);
+
+	r = debugfs_is_locked_down(inode, filp, real_fops);
+	if (r)
+		goto out;
+
 	real_fops = fops_get(real_fops);
 	if (!real_fops) {
 		/* Huh? Module did not clean up after itself at exit? */
@@ -272,6 +297,11 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
 		return r == -EIO ? -ENOENT : r;

 	real_fops = debugfs_real_fops(filp);
+
+	r = debugfs_is_locked_down(inode, filp, real_fops);
+	if (r)
+		goto out;
+
 	real_fops = fops_get(real_fops);
 	if (!real_fops) {
 		/* Huh? Module did not cleanup after itself at exit? */
...
@@ -26,6 +26,7 @@
 #include <linux/parser.h>
 #include <linux/magic.h>
 #include <linux/slab.h>
+#include <linux/security.h>

 #include "internal.h"

@@ -35,6 +36,32 @@ static struct vfsmount *debugfs_mount;
 static int debugfs_mount_count;
 static bool debugfs_registered;

+/*
+ * Don't allow access attributes to be changed whilst the kernel is locked down
+ * so that we can use the file mode as part of a heuristic to determine whether
+ * to lock down individual files.
+ */
+static int debugfs_setattr(struct dentry *dentry, struct iattr *ia)
+{
+	int ret = security_locked_down(LOCKDOWN_DEBUGFS);
+
+	if (ret && (ia->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID)))
+		return ret;
+	return simple_setattr(dentry, ia);
+}
+
+static const struct inode_operations debugfs_file_inode_operations = {
+	.setattr	= debugfs_setattr,
+};
+
+static const struct inode_operations debugfs_dir_inode_operations = {
+	.lookup		= simple_lookup,
+	.setattr	= debugfs_setattr,
+};
+
+static const struct inode_operations debugfs_symlink_inode_operations = {
+	.get_link	= simple_get_link,
+	.setattr	= debugfs_setattr,
+};
+
 static struct inode *debugfs_get_inode(struct super_block *sb)
 {
 	struct inode *inode = new_inode(sb);
@@ -369,6 +396,7 @@ static struct dentry *__debugfs_create_file(const char *name, umode_t mode,
 	inode->i_mode = mode;
 	inode->i_private = data;

+	inode->i_op = &debugfs_file_inode_operations;
 	inode->i_fop = proxy_fops;
 	dentry->d_fsdata = (void *)((unsigned long)real_fops |
 				DEBUGFS_FSDATA_IS_REAL_FOPS_BIT);
@@ -532,7 +560,7 @@ struct dentry *debugfs_create_dir(const char *name, struct dentry *parent)
 	}

 	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO;
-	inode->i_op = &simple_dir_inode_operations;
+	inode->i_op = &debugfs_dir_inode_operations;
 	inode->i_fop = &simple_dir_operations;

 	/* directory inodes start off with i_nlink == 2 (for "." entry) */
@@ -632,7 +660,7 @@ struct dentry *debugfs_create_symlink(const char *name, struct dentry *parent,
 		return failed_creating(dentry);
 	}
 	inode->i_mode = S_IFLNK | S_IRWXUGO;
-	inode->i_op = &simple_symlink_inode_operations;
+	inode->i_op = &debugfs_symlink_inode_operations;
 	inode->i_link = link;
 	d_instantiate(dentry, inode);
 	return end_creating(dentry);
...
@@ -31,6 +31,7 @@
 #include <linux/ioport.h>
 #include <linux/memory.h>
 #include <linux/sched/task.h>
+#include <linux/security.h>
 #include <asm/sections.h>
 #include "internal.h"

@@ -545,9 +546,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)

 static int open_kcore(struct inode *inode, struct file *filp)
 {
+	int ret = security_locked_down(LOCKDOWN_KCORE);
+
 	if (!capable(CAP_SYS_RAWIO))
 		return -EPERM;

+	if (ret)
+		return ret;
+
 	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
 	if (!filp->private_data)
 		return -ENOMEM;
...
@@ -20,6 +20,7 @@
 #include <linux/parser.h>
 #include <linux/magic.h>
 #include <linux/slab.h>
+#include <linux/security.h>

 #define TRACEFS_DEFAULT_MODE	0700

@@ -27,6 +28,25 @@ static struct vfsmount *tracefs_mount;
 static int tracefs_mount_count;
 static bool tracefs_registered;

+static int default_open_file(struct inode *inode, struct file *filp)
+{
+	struct dentry *dentry = filp->f_path.dentry;
+	struct file_operations *real_fops;
+	int ret;
+
+	if (!dentry)
+		return -EINVAL;
+
+	ret = security_locked_down(LOCKDOWN_TRACEFS);
+	if (ret)
+		return ret;
+
+	real_fops = dentry->d_fsdata;
+	if (!real_fops->open)
+		return 0;
+	return real_fops->open(inode, filp);
+}
+
 static ssize_t default_read_file(struct file *file, char __user *buf,
 				 size_t count, loff_t *ppos)
 {
@@ -221,6 +241,12 @@ static int tracefs_apply_options(struct super_block *sb)
 	return 0;
 }

+static void tracefs_destroy_inode(struct inode *inode)
+{
+	if (S_ISREG(inode->i_mode))
+		kfree(inode->i_fop);
+}
+
 static int tracefs_remount(struct super_block *sb, int *flags, char *data)
 {
 	int err;
@@ -257,6 +283,7 @@ static int tracefs_show_options(struct seq_file *m, struct dentry *root)
 static const struct super_operations tracefs_super_operations = {
 	.statfs		= simple_statfs,
 	.remount_fs	= tracefs_remount,
+	.destroy_inode	= tracefs_destroy_inode,
 	.show_options	= tracefs_show_options,
 };

@@ -387,6 +414,7 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
 				   struct dentry *parent, void *data,
 				   const struct file_operations *fops)
 {
+	struct file_operations *proxy_fops;
 	struct dentry *dentry;
 	struct inode *inode;

@@ -402,8 +430,20 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
 	if (unlikely(!inode))
 		return failed_creating(dentry);

+	proxy_fops = kzalloc(sizeof(struct file_operations), GFP_KERNEL);
+	if (unlikely(!proxy_fops)) {
+		iput(inode);
+		return failed_creating(dentry);
+	}
+
+	if (!fops)
+		fops = &tracefs_file_operations;
+
+	dentry->d_fsdata = (void *)fops;
+	memcpy(proxy_fops, fops, sizeof(*proxy_fops));
+	proxy_fops->open = default_open_file;
 	inode->i_mode = mode;
-	inode->i_fop = fops ? fops : &tracefs_file_operations;
+	inode->i_fop = proxy_fops;
 	inode->i_private = data;
 	d_instantiate(dentry, inode);
 	fsnotify_create(dentry->d_parent->d_inode, dentry);
...
@@ -215,8 +215,13 @@
 		__start_lsm_info = .;				\
 		KEEP(*(.lsm_info.init))				\
 		__end_lsm_info = .;
+#define EARLY_LSM_TABLE() . = ALIGN(8);				\
+		__start_early_lsm_info = .;			\
+		KEEP(*(.early_lsm_info.init))			\
+		__end_early_lsm_info = .;
 #else
 #define LSM_TABLE()
+#define EARLY_LSM_TABLE()
 #endif

 #define ___OF_TABLE(cfg, name)	_OF_TABLE_##cfg(name)
@@ -627,7 +632,8 @@
 	ACPI_PROBE_TABLE(timer)					\
 	THERMAL_TABLE(governor)					\
 	EARLYCON_TABLE()					\
-	LSM_TABLE()
+	LSM_TABLE()						\
+	EARLY_LSM_TABLE()

 #define INIT_TEXT						\
 	*(.init.text .init.text.*)				\
...
@@ -643,6 +643,12 @@ bool acpi_gtdt_c3stop(int type);
 int acpi_arch_timer_mem_init(struct arch_timer_mem *timer_mem, int *timer_count);
 #endif

+#ifndef ACPI_HAVE_ARCH_SET_ROOT_POINTER
+static inline void acpi_arch_set_root_pointer(u64 addr)
+{
+}
+#endif
+
 #ifndef ACPI_HAVE_ARCH_GET_ROOT_POINTER
 static inline u64 acpi_arch_get_root_pointer(void)
 {
...
@@ -131,4 +131,13 @@ static inline int ima_inode_removexattr(struct dentry *dentry,
 	return 0;
 }
 #endif /* CONFIG_IMA_APPRAISE */
+
+#if defined(CONFIG_IMA_APPRAISE) && defined(CONFIG_INTEGRITY_TRUSTED_KEYRING)
+extern bool ima_appraise_signature(enum kernel_read_file_id func);
+#else
+static inline bool ima_appraise_signature(enum kernel_read_file_id func)
+{
+	return false;
+}
+#endif /* CONFIG_IMA_APPRAISE && CONFIG_INTEGRITY_TRUSTED_KEYRING */
 #endif /* _LINUX_IMA_H */
@@ -125,7 +125,7 @@ typedef void *(kexec_load_t)(struct kimage *image, char *kernel_buf,
 			     unsigned long cmdline_len);
 typedef int (kexec_cleanup_t)(void *loader_data);

-#ifdef CONFIG_KEXEC_VERIFY_SIG
+#ifdef CONFIG_KEXEC_SIG
 typedef int (kexec_verify_sig_t)(const char *kernel_buf,
 				 unsigned long kernel_len);
 #endif
@@ -134,7 +134,7 @@ struct kexec_file_ops {
 	kexec_probe_t *probe;
 	kexec_load_t *load;
 	kexec_cleanup_t *cleanup;
-#ifdef CONFIG_KEXEC_VERIFY_SIG
+#ifdef CONFIG_KEXEC_SIG
 	kexec_verify_sig_t *verify_sig;
 #endif
 };
...
@@ -1449,6 +1449,11 @@
  * @bpf_prog_free_security:
  *	Clean up the security information stored inside bpf prog.
  *
+ * @locked_down
+ *	Determine whether a kernel feature that potentially enables arbitrary
+ *	code execution in kernel space should be permitted.
+ *
+ *	@what: kernel feature being accessed
  */
 union security_list_options {
 	int (*binder_set_context_mgr)(struct task_struct *mgr);
@@ -1812,6 +1817,7 @@ union security_list_options {
 	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
 	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
 #endif /* CONFIG_BPF_SYSCALL */
+	int (*locked_down)(enum lockdown_reason what);
 };

 struct security_hook_heads {
@@ -2053,6 +2059,7 @@ struct security_hook_heads {
 	struct hlist_head bpf_prog_alloc_security;
 	struct hlist_head bpf_prog_free_security;
 #endif /* CONFIG_BPF_SYSCALL */
+	struct hlist_head locked_down;
 } __randomize_layout;

 /*
@@ -2111,12 +2118,18 @@ struct lsm_info {
 };

 extern struct lsm_info __start_lsm_info[], __end_lsm_info[];
+extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[];

 #define DEFINE_LSM(lsm)						\
 	static struct lsm_info __lsm_##lsm			\
 		__used __section(.lsm_info.init)		\
 		__aligned(sizeof(unsigned long))

+#define DEFINE_EARLY_LSM(lsm)					\
+	static struct lsm_info __early_lsm_##lsm		\
+		__used __section(.early_lsm_info.init)		\
+		__aligned(sizeof(unsigned long))
+
 #ifdef CONFIG_SECURITY_SELINUX_DISABLE
 /*
  * Assuring the safety of deleting a security module is up to
...
@@ -77,6 +77,54 @@ enum lsm_event {
 	LSM_POLICY_CHANGE,
 };

+/*
+ * These are reasons that can be passed to the security_locked_down()
+ * LSM hook. Lockdown reasons that protect kernel integrity (ie, the
+ * ability for userland to modify kernel code) are placed before
+ * LOCKDOWN_INTEGRITY_MAX. Lockdown reasons that protect kernel
+ * confidentiality (ie, the ability for userland to extract
+ * information from the running kernel that would otherwise be
+ * restricted) are placed before LOCKDOWN_CONFIDENTIALITY_MAX.
+ *
+ * LSM authors should note that the semantics of any given lockdown
+ * reason are not guaranteed to be stable - the same reason may block
+ * one set of features in one kernel release, and a slightly different
+ * set of features in a later kernel release. LSMs that seek to expose
+ * lockdown policy at any level of granularity other than "none",
+ * "integrity" or "confidentiality" are responsible for either
+ * ensuring that they expose a consistent level of functionality to
+ * userland, or ensuring that userland is aware that this is
+ * potentially a moving target. It is easy to misuse this information
+ * in a way that could break userspace. Please be careful not to do
+ * so.
+ *
+ * If you add to this, remember to extend lockdown_reasons in
+ * security/lockdown/lockdown.c.
+ */
+enum lockdown_reason {
+	LOCKDOWN_NONE,
+	LOCKDOWN_MODULE_SIGNATURE,
+	LOCKDOWN_DEV_MEM,
+	LOCKDOWN_KEXEC,
+	LOCKDOWN_HIBERNATION,
+	LOCKDOWN_PCI_ACCESS,
+	LOCKDOWN_IOPORT,
+	LOCKDOWN_MSR,
+	LOCKDOWN_ACPI_TABLES,
+	LOCKDOWN_PCMCIA_CIS,
+	LOCKDOWN_TIOCSSERIAL,
+	LOCKDOWN_MODULE_PARAMETERS,
+	LOCKDOWN_MMIOTRACE,
+	LOCKDOWN_DEBUGFS,
+	LOCKDOWN_INTEGRITY_MAX,
+	LOCKDOWN_KCORE,
+	LOCKDOWN_KPROBES,
+	LOCKDOWN_BPF_READ,
+	LOCKDOWN_PERF,
+	LOCKDOWN_TRACEFS,
+	LOCKDOWN_CONFIDENTIALITY_MAX,
+};
+
 /* These functions are in security/commoncap.c */
 extern int cap_capable(const struct cred *cred, struct user_namespace *ns,
 		       int cap, unsigned int opts);
@@ -195,6 +243,7 @@ int unregister_blocking_lsm_notifier(struct notifier_block *nb);

 /* prototypes */
 extern int security_init(void);
+extern int early_security_init(void);

 /* Security operations */
 int security_binder_set_context_mgr(struct task_struct *mgr);
@@ -392,6 +441,7 @@ void security_inode_invalidate_secctx(struct inode *inode);
 int security_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen);
 int security_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen);
 int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen);
+int security_locked_down(enum lockdown_reason what);
 #else /* CONFIG_SECURITY */

 static inline int call_blocking_lsm_notifier(enum lsm_event event, void *data)
@@ -423,6 +473,11 @@ static inline int security_init(void)
 	return 0;
 }

+static inline int early_security_init(void)
+{
+	return 0;
+}
+
 static inline int security_binder_set_context_mgr(struct task_struct *mgr)
 {
 	return 0;
@@ -1210,6 +1265,10 @@ static inline int security_inode_getsecctx(struct inode *inode, void **ctx, u32
 {
 	return -EOPNOTSUPP;
 }
+static inline int security_locked_down(enum lockdown_reason what)
+{
+	return 0;
+}
 #endif	/* CONFIG_SECURITY */

 #ifdef CONFIG_SECURITY_NETWORK
...
@@ -2061,6 +2061,11 @@ config MODULE_SIG
 	  kernel build dependency so that the signing tool can use its crypto
 	  library.

+	  You should enable this option if you wish to use either
+	  CONFIG_SECURITY_LOCKDOWN_LSM or lockdown functionality imposed via
+	  another LSM - otherwise unsigned modules will be loadable regardless
+	  of the lockdown policy.
+
 	  !!!WARNING!!!  If you enable this option, you MUST make sure that the
 	  module DOES NOT get stripped after being signed. This includes the
 	  debuginfo strip done by some packagers (such as rpmbuild) and
...
@@ -593,6 +593,7 @@ asmlinkage __visible void __init start_kernel(void)
 	boot_cpu_init();
 	page_address_init();
 	pr_notice("%s", linux_banner);
+	early_security_init();
 	setup_arch(&command_line);
 	setup_command_line(command_line);
 	setup_nr_cpu_ids();
...
@@ -10917,6 +10917,13 @@ SYSCALL_DEFINE5(perf_event_open,
 	    perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
 		return -EACCES;

+	err = security_locked_down(LOCKDOWN_PERF);
+	if (err && (attr.sample_type & PERF_SAMPLE_REGS_INTR))
+		/* REGS_INTR can leak data, lockdown must prevent this */
+		return err;
+
+	err = 0;
+
 	/*
 	 * In cgroup mode, the pid argument is used to pass the fd
 	 * opened to the cgroup directory in cgroupfs. The cpu argument
...
@@ -205,6 +205,14 @@ static inline int kexec_load_check(unsigned long nr_segments,
 	if (result < 0)
 		return result;

+	/*
+	 * kexec can be used to circumvent module loading restrictions, so
+	 * prevent loading in that case
+	 */
+	result = security_locked_down(LOCKDOWN_KEXEC);
+	if (result)
+		return result;
+
 	/*
 	 * Verify we have a legal set of flags
 	 * This leaves us room for future extensions.
...
...@@ -88,7 +88,7 @@ int __weak arch_kimage_file_post_load_cleanup(struct kimage *image) ...@@ -88,7 +88,7 @@ int __weak arch_kimage_file_post_load_cleanup(struct kimage *image)
return kexec_image_post_load_cleanup_default(image); return kexec_image_post_load_cleanup_default(image);
} }
#ifdef CONFIG_KEXEC_VERIFY_SIG #ifdef CONFIG_KEXEC_SIG
static int kexec_image_verify_sig_default(struct kimage *image, void *buf, static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
unsigned long buf_len) unsigned long buf_len)
{ {
...@@ -177,6 +177,59 @@ void kimage_file_post_load_cleanup(struct kimage *image) ...@@ -177,6 +177,59 @@ void kimage_file_post_load_cleanup(struct kimage *image)
image->image_loader_data = NULL; image->image_loader_data = NULL;
} }
#ifdef CONFIG_KEXEC_SIG
static int
kimage_validate_signature(struct kimage *image)
{
const char *reason;
int ret;
ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf,
image->kernel_buf_len);
switch (ret) {
case 0:
break;
/* Certain verification errors are non-fatal if we're not
* checking errors, provided we aren't mandating that there
* must be a valid signature.
*/
case -ENODATA:
reason = "kexec of unsigned image";
goto decide;
case -ENOPKG:
reason = "kexec of image with unsupported crypto";
goto decide;
case -ENOKEY:
reason = "kexec of image with unavailable key";
decide:
if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE)) {
pr_notice("%s rejected\n", reason);
return ret;
}
/* If IMA is guaranteed to appraise a signature on the kexec
* image, permit it even if the kernel is otherwise locked
* down.
*/
if (!ima_appraise_signature(READING_KEXEC_IMAGE) &&
security_locked_down(LOCKDOWN_KEXEC))
return -EPERM;
return 0;
/* All other errors are fatal, including nomem, unparseable
* signatures and signature check failures - even if signatures
* aren't required.
*/
default:
pr_notice("kernel signature verification failed (%d).\n", ret);
}
return ret;
}
#endif
/*
* In file mode list of segments is prepared by kernel. Copy relevant
* data from user space, do error checking, prepare segment list
@@ -186,7 +239,7 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
const char __user *cmdline_ptr,
unsigned long cmdline_len, unsigned flags)
{
-int ret = 0;
int ret;
void *ldata;
loff_t size;
@@ -202,14 +255,11 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
if (ret)
goto out;
-#ifdef CONFIG_KEXEC_VERIFY_SIG
#ifdef CONFIG_KEXEC_SIG
-ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf,
-image->kernel_buf_len);
ret = kimage_validate_signature(image);
-if (ret) {
if (ret)
-pr_debug("kernel signature verification failed.\n");
goto out;
-}
-pr_debug("kernel signature verification successful.\n");
#endif
/* It is possible that there no initramfs is being loaded */
if (!(flags & KEXEC_FILE_NO_INITRAMFS)) {
...
@@ -2839,8 +2839,9 @@ static inline void kmemleak_load_module(const struct module *mod,
#ifdef CONFIG_MODULE_SIG
static int module_sig_check(struct load_info *info, int flags)
{
-int err = -ENOKEY;
int err = -ENODATA;
const unsigned long markerlen = sizeof(MODULE_SIG_STRING) - 1;
const char *reason;
const void *mod = info->hdr;
/*
@@ -2855,16 +2856,38 @@ static int module_sig_check(struct load_info *info, int flags)
err = mod_verify_sig(mod, info);
}
-if (!err) {
switch (err) {
case 0:
info->sig_ok = true;
return 0;
-}
-/* Not having a signature is only an error if we're strict. */
-if (err == -ENOKEY && !is_module_sig_enforced())
-err = 0;
/* We don't permit modules to be loaded into trusted kernels
* without a valid signature on them, but if we're not
* enforcing, certain errors are non-fatal.
*/
case -ENODATA:
reason = "Loading of unsigned module";
goto decide;
case -ENOPKG:
reason = "Loading of module with unsupported crypto";
goto decide;
case -ENOKEY:
reason = "Loading of module with unavailable key";
decide:
if (is_module_sig_enforced()) {
pr_notice("%s is rejected\n", reason);
return -EKEYREJECTED;
}
-return err;
return security_locked_down(LOCKDOWN_MODULE_SIGNATURE);
/* All other errors are fatal, including nomem, unparseable
* signatures and signature check failures - even if signatures
* aren't required.
*/
default:
return err;
}
}
#else /* !CONFIG_MODULE_SIG */
static int module_sig_check(struct load_info *info, int flags)
...
@@ -12,6 +12,7 @@
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/ctype.h>
#include <linux/security.h>
#ifdef CONFIG_SYSFS
/* Protects all built-in parameters, modules use their own param_lock */
@@ -96,13 +97,19 @@ bool parameq(const char *a, const char *b)
return parameqn(a, b, strlen(a)+1);
}
-static void param_check_unsafe(const struct kernel_param *kp)
static bool param_check_unsafe(const struct kernel_param *kp)
{
if (kp->flags & KERNEL_PARAM_FL_HWPARAM &&
security_locked_down(LOCKDOWN_MODULE_PARAMETERS))
return false;
if (kp->flags & KERNEL_PARAM_FL_UNSAFE) {
pr_notice("Setting dangerous option %s - tainting kernel\n",
kp->name);
add_taint(TAINT_USER, LOCKDEP_STILL_OK);
}
return true;
}
static int parse_one(char *param,
@@ -132,8 +139,10 @@ static int parse_one(char *param,
pr_debug("handling %s with %p\n", param,
params[i].ops->set);
kernel_param_lock(params[i].mod);
-param_check_unsafe(&params[i]);
if (param_check_unsafe(&params[i]))
err = params[i].ops->set(val, &params[i]);
else
err = -EPERM;
kernel_param_unlock(params[i].mod);
return err;
}
@@ -553,8 +562,10 @@ static ssize_t param_attr_store(struct module_attribute *mattr,
return -EPERM;
kernel_param_lock(mk->mod);
-param_check_unsafe(attribute->param);
if (param_check_unsafe(attribute->param))
err = attribute->param->ops->set(buf, attribute->param);
else
err = -EPERM;
kernel_param_unlock(mk->mod);
if (!err)
return len;
...
@@ -30,6 +30,7 @@
#include <linux/ctype.h>
#include <linux/genhd.h>
#include <linux/ktime.h>
#include <linux/security.h>
#include <trace/events/power.h>
#include "power.h"
@@ -68,7 +69,7 @@ static const struct platform_hibernation_ops *hibernation_ops;
bool hibernation_available(void)
{
-return (nohibernate == 0);
return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
}
/**
...
@@ -142,8 +142,13 @@ BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
{
int ret;
ret = security_locked_down(LOCKDOWN_BPF_READ);
if (ret < 0)
goto out;
ret = probe_kernel_read(dst, unsafe_ptr, size);
if (unlikely(ret < 0))
out:
memset(dst, 0, size);
return ret;
@@ -569,6 +574,10 @@ BPF_CALL_3(bpf_probe_read_str, void *, dst, u32, size,
{
int ret;
ret = security_locked_down(LOCKDOWN_BPF_READ);
if (ret < 0)
goto out;
/*
* The strncpy_from_unsafe() call will likely not fill the entire
* buffer, but that's okay in this circumstance as we're probing
@@ -580,6 +589,7 @@ BPF_CALL_3(bpf_probe_read_str, void *, dst, u32, size,
*/
ret = strncpy_from_unsafe(dst, unsafe_ptr, size);
if (unlikely(ret < 0))
out:
memset(dst, 0, size);
return ret;
...
@@ -11,6 +11,7 @@
#include <linux/uaccess.h>
#include <linux/rculist.h>
#include <linux/error-injection.h>
#include <linux/security.h>
#include <asm/setup.h> /* for COMMAND_LINE_SIZE */
@@ -460,6 +461,10 @@ static int __register_trace_kprobe(struct trace_kprobe *tk)
{
int i, ret;
ret = security_locked_down(LOCKDOWN_KPROBES);
if (ret)
return ret;
if (trace_kprobe_is_registered(tk))
return -EINVAL;
...
@@ -237,6 +237,7 @@ source "security/apparmor/Kconfig"
source "security/loadpin/Kconfig"
source "security/yama/Kconfig"
source "security/safesetid/Kconfig"
source "security/lockdown/Kconfig"
source "security/integrity/Kconfig" source "security/integrity/Kconfig"
...@@ -276,11 +277,11 @@ endchoice ...@@ -276,11 +277,11 @@ endchoice
config LSM config LSM
string "Ordered list of enabled LSMs" string "Ordered list of enabled LSMs"
default "yama,loadpin,safesetid,integrity,smack,selinux,tomoyo,apparmor" if DEFAULT_SECURITY_SMACK default "lockdown,yama,loadpin,safesetid,integrity,smack,selinux,tomoyo,apparmor" if DEFAULT_SECURITY_SMACK
default "yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo" if DEFAULT_SECURITY_APPARMOR default "lockdown,yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo" if DEFAULT_SECURITY_APPARMOR
default "yama,loadpin,safesetid,integrity,tomoyo" if DEFAULT_SECURITY_TOMOYO default "lockdown,yama,loadpin,safesetid,integrity,tomoyo" if DEFAULT_SECURITY_TOMOYO
default "yama,loadpin,safesetid,integrity" if DEFAULT_SECURITY_DAC default "lockdown,yama,loadpin,safesetid,integrity" if DEFAULT_SECURITY_DAC
default "yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor" default "lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
help help
A comma-separated list of LSMs, in initialization order. A comma-separated list of LSMs, in initialization order.
Any LSMs left off this list will be ignored. This can be Any LSMs left off this list will be ignored. This can be
...
@@ -11,6 +11,7 @@ subdir-$(CONFIG_SECURITY_APPARMOR) += apparmor
subdir-$(CONFIG_SECURITY_YAMA) += yama
subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin
subdir-$(CONFIG_SECURITY_SAFESETID) += safesetid
subdir-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown
# always enable default capabilities
obj-y += commoncap.o
@@ -27,6 +28,7 @@ obj-$(CONFIG_SECURITY_APPARMOR) += apparmor/
obj-$(CONFIG_SECURITY_YAMA) += yama/
obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/
obj-$(CONFIG_SECURITY_SAFESETID) += safesetid/
obj-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown/
obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o
# Object integrity file lists
...
@@ -160,7 +160,7 @@ config IMA_APPRAISE
config IMA_ARCH_POLICY
bool "Enable loading an IMA architecture specific policy"
-depends on (KEXEC_VERIFY_SIG && IMA) || IMA_APPRAISE \
depends on (KEXEC_SIG && IMA) || IMA_APPRAISE \
&& INTEGRITY_ASYMMETRIC_KEYS
default n
help
...
@@ -114,6 +114,8 @@ struct ima_kexec_hdr {
u64 count;
};
extern const int read_idmap[];
#ifdef CONFIG_HAVE_IMA_KEXEC
void ima_load_kexec_buffer(void);
#else
...
@@ -518,7 +518,7 @@ int ima_read_file(struct file *file, enum kernel_read_file_id read_id)
return 0;
}
-static const int read_idmap[READING_MAX_ID] = {
const int read_idmap[READING_MAX_ID] = {
[READING_FIRMWARE] = FIRMWARE_CHECK,
[READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
[READING_MODULE] = MODULE_CHECK,
@@ -590,7 +590,7 @@ int ima_load_data(enum kernel_load_data_id id)
switch (id) {
case LOADING_KEXEC_IMAGE:
-if (IS_ENABLED(CONFIG_KEXEC_VERIFY_SIG)
if (IS_ENABLED(CONFIG_KEXEC_SIG)
&& arch_ima_get_secureboot()) {
pr_err("impossible to appraise a kernel image without a file descriptor; try using kexec_file_load syscall.\n");
return -EACCES;
...
@@ -1507,3 +1507,53 @@ int ima_policy_show(struct seq_file *m, void *v)
return 0;
}
#endif /* CONFIG_IMA_READ_POLICY */
#if defined(CONFIG_IMA_APPRAISE) && defined(CONFIG_INTEGRITY_TRUSTED_KEYRING)
/*
* ima_appraise_signature: whether IMA will appraise a given function using
* an IMA digital signature. This is restricted to cases where the kernel
* has a set of built-in trusted keys in order to avoid an attacker simply
* loading additional keys.
*/
bool ima_appraise_signature(enum kernel_read_file_id id)
{
struct ima_rule_entry *entry;
bool found = false;
enum ima_hooks func;
if (id >= READING_MAX_ID)
return false;
func = read_idmap[id] ?: FILE_CHECK;
rcu_read_lock();
list_for_each_entry_rcu(entry, ima_rules, list) {
if (entry->action != APPRAISE)
continue;
/*
* A generic entry will match, but otherwise require that it
* match the func we're looking for
*/
if (entry->func && entry->func != func)
continue;
/*
* We require this to be a digital signature, not a raw IMA
* hash.
*/
if (entry->flags & IMA_DIGSIG_REQUIRED)
found = true;
/*
* We've found a rule that matches, so break now even if it
* didn't require a digital signature - a later rule that does
* won't override it, so would be a false positive.
*/
break;
}
rcu_read_unlock();
return found;
}
#endif /* CONFIG_IMA_APPRAISE && CONFIG_INTEGRITY_TRUSTED_KEYRING */
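In practice, an IMA policy rule along the lines of "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig" (assuming the usual IMA policy syntax; no such rule is part of this diff) is what makes ima_appraise_signature() return true for kexec images, which in turn lets kimage_validate_signature() above defer to IMA appraisal instead of refusing an unsigned image under lockdown.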
config SECURITY_LOCKDOWN_LSM
bool "Basic module for enforcing kernel lockdown"
depends on SECURITY
select MODULE_SIG if MODULES
help
Build support for an LSM that enforces a coarse kernel lockdown
behaviour.
config SECURITY_LOCKDOWN_LSM_EARLY
bool "Enable lockdown LSM early in init"
depends on SECURITY_LOCKDOWN_LSM
help
Enable the lockdown LSM early in boot. This is necessary in order
to ensure that lockdown enforcement can be carried out on kernel
boot parameters that are otherwise parsed before the security
subsystem is fully initialised. If enabled, lockdown will
unconditionally be called before any other LSMs.
choice
prompt "Kernel default lockdown mode"
default LOCK_DOWN_KERNEL_FORCE_NONE
depends on SECURITY_LOCKDOWN_LSM
help
The kernel can be configured to default to differing levels of
lockdown.
config LOCK_DOWN_KERNEL_FORCE_NONE
bool "None"
help
No lockdown functionality is enabled by default. Lockdown may be
enabled via the kernel commandline or /sys/kernel/security/lockdown.
config LOCK_DOWN_KERNEL_FORCE_INTEGRITY
bool "Integrity"
help
The kernel runs in integrity mode by default. Features that allow
the kernel to be modified at runtime are disabled.
config LOCK_DOWN_KERNEL_FORCE_CONFIDENTIALITY
bool "Confidentiality"
help
The kernel runs in confidentiality mode by default. Features that
allow the kernel to be modified at runtime or that permit userland
code to read confidential material held inside the kernel are
disabled.
endchoice
obj-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown.o
// SPDX-License-Identifier: GPL-2.0
/* Lock down the kernel
*
* Copyright (C) 2016 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#include <linux/security.h>
#include <linux/export.h>
#include <linux/lsm_hooks.h>
static enum lockdown_reason kernel_locked_down;
static const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
[LOCKDOWN_NONE] = "none",
[LOCKDOWN_MODULE_SIGNATURE] = "unsigned module loading",
[LOCKDOWN_DEV_MEM] = "/dev/mem,kmem,port",
[LOCKDOWN_KEXEC] = "kexec of unsigned images",
[LOCKDOWN_HIBERNATION] = "hibernation",
[LOCKDOWN_PCI_ACCESS] = "direct PCI access",
[LOCKDOWN_IOPORT] = "raw io port access",
[LOCKDOWN_MSR] = "raw MSR access",
[LOCKDOWN_ACPI_TABLES] = "modifying ACPI tables",
[LOCKDOWN_PCMCIA_CIS] = "direct PCMCIA CIS storage",
[LOCKDOWN_TIOCSSERIAL] = "reconfiguration of serial port IO",
[LOCKDOWN_MODULE_PARAMETERS] = "unsafe module parameters",
[LOCKDOWN_MMIOTRACE] = "unsafe mmio",
[LOCKDOWN_DEBUGFS] = "debugfs access",
[LOCKDOWN_INTEGRITY_MAX] = "integrity",
[LOCKDOWN_KCORE] = "/proc/kcore access",
[LOCKDOWN_KPROBES] = "use of kprobes",
[LOCKDOWN_BPF_READ] = "use of bpf to read kernel RAM",
[LOCKDOWN_PERF] = "unsafe use of perf",
[LOCKDOWN_TRACEFS] = "use of tracefs",
[LOCKDOWN_CONFIDENTIALITY_MAX] = "confidentiality",
};
static const enum lockdown_reason lockdown_levels[] = {LOCKDOWN_NONE,
LOCKDOWN_INTEGRITY_MAX,
LOCKDOWN_CONFIDENTIALITY_MAX};
/*
* Put the kernel into lock-down mode.
*/
static int lock_kernel_down(const char *where, enum lockdown_reason level)
{
if (kernel_locked_down >= level)
return -EPERM;
kernel_locked_down = level;
pr_notice("Kernel is locked down from %s; see man kernel_lockdown.7\n",
where);
return 0;
}
static int __init lockdown_param(char *level)
{
if (!level)
return -EINVAL;
if (strcmp(level, "integrity") == 0)
lock_kernel_down("command line", LOCKDOWN_INTEGRITY_MAX);
else if (strcmp(level, "confidentiality") == 0)
lock_kernel_down("command line", LOCKDOWN_CONFIDENTIALITY_MAX);
else
return -EINVAL;
return 0;
}
early_param("lockdown", lockdown_param);
/**
* lockdown_is_locked_down - Find out if the kernel is locked down
* @what: Tag to use in notice generated if lockdown is in effect
*/
static int lockdown_is_locked_down(enum lockdown_reason what)
{
if (WARN(what >= LOCKDOWN_CONFIDENTIALITY_MAX,
"Invalid lockdown reason"))
return -EPERM;
if (kernel_locked_down >= what) {
if (lockdown_reasons[what])
pr_notice("Lockdown: %s: %s is restricted; see man kernel_lockdown.7\n",
current->comm, lockdown_reasons[what]);
return -EPERM;
}
return 0;
}
static struct security_hook_list lockdown_hooks[] __lsm_ro_after_init = {
LSM_HOOK_INIT(locked_down, lockdown_is_locked_down),
};
static int __init lockdown_lsm_init(void)
{
#if defined(CONFIG_LOCK_DOWN_KERNEL_FORCE_INTEGRITY)
lock_kernel_down("Kernel configuration", LOCKDOWN_INTEGRITY_MAX);
#elif defined(CONFIG_LOCK_DOWN_KERNEL_FORCE_CONFIDENTIALITY)
lock_kernel_down("Kernel configuration", LOCKDOWN_CONFIDENTIALITY_MAX);
#endif
security_add_hooks(lockdown_hooks, ARRAY_SIZE(lockdown_hooks),
"lockdown");
return 0;
}
static ssize_t lockdown_read(struct file *filp, char __user *buf, size_t count,
loff_t *ppos)
{
char temp[80];
int i, offset = 0;
for (i = 0; i < ARRAY_SIZE(lockdown_levels); i++) {
enum lockdown_reason level = lockdown_levels[i];
if (lockdown_reasons[level]) {
const char *label = lockdown_reasons[level];
if (kernel_locked_down == level)
offset += sprintf(temp+offset, "[%s] ", label);
else
offset += sprintf(temp+offset, "%s ", label);
}
}
/* Convert the last space to a newline if needed. */
if (offset > 0)
temp[offset-1] = '\n';
return simple_read_from_buffer(buf, count, ppos, temp, strlen(temp));
}
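Reading /sys/kernel/security/lockdown therefore lists every supported level and brackets the one currently in force, so the output should look something like "none [integrity] confidentiality".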
static ssize_t lockdown_write(struct file *file, const char __user *buf,
size_t n, loff_t *ppos)
{
char *state;
int i, len, err = -EINVAL;
state = memdup_user_nul(buf, n);
if (IS_ERR(state))
return PTR_ERR(state);
len = strlen(state);
if (len && state[len-1] == '\n') {
state[len-1] = '\0';
len--;
}
for (i = 0; i < ARRAY_SIZE(lockdown_levels); i++) {
enum lockdown_reason level = lockdown_levels[i];
const char *label = lockdown_reasons[level];
if (label && !strcmp(state, label))
err = lock_kernel_down("securityfs", level);
}
kfree(state);
return err ? err : n;
}
static const struct file_operations lockdown_ops = {
.read = lockdown_read,
.write = lockdown_write,
};
static int __init lockdown_secfs_init(void)
{
struct dentry *dentry;
dentry = securityfs_create_file("lockdown", 0600, NULL, NULL,
&lockdown_ops);
return PTR_ERR_OR_ZERO(dentry);
}
core_initcall(lockdown_secfs_init);
#ifdef CONFIG_SECURITY_LOCKDOWN_LSM_EARLY
DEFINE_EARLY_LSM(lockdown) = {
#else
DEFINE_LSM(lockdown) = {
#endif
.name = "lockdown",
.init = lockdown_lsm_init,
};
@@ -33,6 +33,7 @@
/* How many LSMs were built into the kernel? */
#define LSM_COUNT (__end_lsm_info - __start_lsm_info)
#define EARLY_LSM_COUNT (__end_early_lsm_info - __start_early_lsm_info)
struct security_hook_heads security_hook_heads __lsm_ro_after_init;
static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain);
@@ -277,6 +278,8 @@ static void __init ordered_lsm_parse(const char *order, const char *origin)
static void __init lsm_early_cred(struct cred *cred);
static void __init lsm_early_task(struct task_struct *task);
static int lsm_append(const char *new, char **result);
static void __init ordered_lsm_init(void)
{
struct lsm_info **lsm;
@@ -323,6 +326,26 @@ static void __init ordered_lsm_init(void)
kfree(ordered_lsms);
}
int __init early_security_init(void)
{
int i;
struct hlist_head *list = (struct hlist_head *) &security_hook_heads;
struct lsm_info *lsm;
for (i = 0; i < sizeof(security_hook_heads) / sizeof(struct hlist_head);
i++)
INIT_HLIST_HEAD(&list[i]);
for (lsm = __start_early_lsm_info; lsm < __end_early_lsm_info; lsm++) {
if (!lsm->enabled)
lsm->enabled = &lsm_enabled_true;
prepare_lsm(lsm);
initialize_lsm(lsm);
}
return 0;
}
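Because early_security_init() runs before the slab allocator is up, the early LSMs' names cannot be appended to lsm_names at this point; security_init() below takes care of that once kmalloc() is available, and security_add_hooks() correspondingly skips the append while slab_is_available() is false.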
/**
* security_init - initializes the security framework
*
@@ -330,14 +353,18 @@ static void __init ordered_lsm_init(void)
*/
int __init security_init(void)
{
-int i;
-struct hlist_head *list = (struct hlist_head *) &security_hook_heads;
struct lsm_info *lsm;
pr_info("Security Framework initializing\n");
-for (i = 0; i < sizeof(security_hook_heads) / sizeof(struct hlist_head);
-i++)
-INIT_HLIST_HEAD(&list[i]);
/*
* Append the names of the early LSM modules now that kmalloc() is
* available
*/
for (lsm = __start_early_lsm_info; lsm < __end_early_lsm_info; lsm++) {
if (lsm->enabled)
lsm_append(lsm->name, &lsm_names);
}
/* Load LSMs in specified order. */
ordered_lsm_init();
@@ -384,7 +411,7 @@ static bool match_last_lsm(const char *list, const char *lsm)
return !strcmp(last, lsm);
}
-static int lsm_append(char *new, char **result)
static int lsm_append(const char *new, char **result)
{
char *cp;
@@ -422,8 +449,15 @@ void __init security_add_hooks(struct security_hook_list *hooks, int count,
hooks[i].lsm = lsm;
hlist_add_tail_rcu(&hooks[i].list, hooks[i].head);
}
-if (lsm_append(lsm, &lsm_names) < 0)
-panic("%s - Cannot get early memory.\n", __func__);
/*
* Don't try to append during early_security_init(), we'll come back
* and fix this up afterwards.
*/
if (slab_is_available()) {
if (lsm_append(lsm, &lsm_names) < 0)
panic("%s - Cannot get early memory.\n", __func__);
}
}
int call_blocking_lsm_notifier(enum lsm_event event, void *data)
@@ -2364,3 +2398,9 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux)
call_void_hook(bpf_prog_free_security, aux);
}
#endif /* CONFIG_BPF_SYSCALL */
int security_locked_down(enum lockdown_reason what)
{
return call_int_hook(locked_down, 0, what);
}
EXPORT_SYMBOL(security_locked_down);
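The hook dispatch itself stays deliberately small: security_locked_down() asks every LSM that registered a locked_down hook whether the operation described by what may proceed, and call_int_hook() returns the first non-zero result, so a single refusal is enough to block the operation. As a purely illustrative sketch, not part of this series and with all "example" names hypothetical, a finer-grained LSM could register its own hook and veto only the reasons it cares about:

/* Hypothetical sketch only - not part of this patchset. */
#include <linux/lsm_hooks.h>
#include <linux/security.h>

static int example_locked_down(enum lockdown_reason what)
{
	switch (what) {
	case LOCKDOWN_PERF:
	case LOCKDOWN_BPF_READ:
		/* Refuse just these two confidentiality-sensitive reasons. */
		return -EPERM;
	default:
		/* Everything else is permitted by this policy. */
		return 0;
	}
}

static struct security_hook_list example_hooks[] __lsm_ro_after_init = {
	LSM_HOOK_INIT(locked_down, example_locked_down),
};

static int __init example_lsm_init(void)
{
	security_add_hooks(example_hooks, ARRAY_SIZE(example_hooks), "example");
	return 0;
}

DEFINE_LSM(example) = {
	.name = "example",
	.init = example_lsm_init,
};

The lockdown LSM added above follows exactly this registration pattern with a level-based policy, which is what keeps the mechanism (the hook) separate from the policy (which reasons get refused).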