1. 22 Sep, 2015 8 commits
  2. 21 Sep, 2015 32 commits
• David S. Miller
      sparc64: Fix userspace FPU register corruptions. · 31de7bfa
      David S. Miller authored
      [ Upstream commit 44922150 ]
      
If we have a series of events from userspace, with %fprs=FPRS_FEF,
      like follows:
      
      ETRAP
      	ETRAP
      		VIS_ENTRY(fprs=0x4)
      		VIS_EXIT
      		RTRAP (kernel FPU restore with fpu_saved=0x4)
      	RTRAP
      
      We will not restore the user registers that were clobbered by the FPU
      using kernel code in the inner-most trap.
      
      Traps allocate FPU save slots in the thread struct, and FPU using
      sequences save the "dirty" FPU registers only.
      
      This works at the initial trap level because all of the registers
      get recorded into the top-level FPU save area, and we'll return
      to userspace with the FPU disabled so that any FPU use by the user
      will take an FPU disabled trap wherein we'll load the registers
      back up properly.
      
      But this is not how trap returns from kernel to kernel operate.
      
      The simplest fix for this bug is to always save all FPU register state
      for anything other than the top-most FPU save area.
      
      Getting rid of the optimized inner-slot FPU saving code ends up
      making VISEntryHalf degenerate into plain VISEntry.
      
      Longer term we need to do something smarter to reinstate the partial
save optimizations.  Perhaps the fundamental error is having trap entry
      and exit allocate FPU save slots and restore register state.  Instead,
      the VISEntry et al. calls should be doing that work.
      
      This bug is about two decades old.
Reported-by: James Y Knight <jyknight@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
31de7bfa
• Eric Dumazet
      udp: fix dst races with multicast early demux · 9625566c
      Eric Dumazet authored
      commit 10e2eb87 upstream.
      
      Multicast dst are not cached. They carry DST_NOCACHE.
      
      As mentioned in commit f8864972 ("ipv4: fix dst race in
      sk_dst_get()"), these dst need special care before caching them
      into a socket.
      
Caching them is allowed only if their refcnt was not 0, i.e. we
must use atomic_inc_not_zero().
      
      Also, we must use READ_ONCE() to fetch sk->sk_rx_dst, as mentioned
      in commit d0c294c5 ("tcp: prevent fetching dst twice in early demux
      code")
      
      Fixes: 421b3885 ("udp: ipv4: Add udp early demux")
Tested-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz>
Reported-by: Alex Gartrell <agartrell@fb.com>
Cc: Michal Kubeček <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
[ luis: backported to 3.16: used davem's backport to 3.14 ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
9625566c
• Dan Carpenter
      rds: fix an integer overflow test in rds_info_getsockopt() · 950a7a69
      Dan Carpenter authored
      commit 468b732b upstream.
      
      "len" is a signed integer.  We check that len is not negative, so it
      goes from zero to INT_MAX.  PAGE_SIZE is unsigned long so the comparison
is type promoted to unsigned long.  ULONG_MAX - 4095 is higher than
      INT_MAX so the condition can never be true.
      
      I don't know if this is harmful but it seems safe to limit "len" to
      INT_MAX - 4095.
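
As a quick stand-alone illustration of the type promotion (this is not the
rds code, just a hypothetical demo):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
            int len = INT_MAX;      /* largest non-negative "len" */

            /* "len" is promoted to unsigned long, so this is never true */
            if (len > ULONG_MAX - 4095)
                    puts("old-style check fires");
            else
                    puts("old-style check is dead code");

            /* a limit that itself fits in an int behaves as intended */
            if (len > INT_MAX - 4095)
                    puts("INT_MAX - 4095 limit fires");
            return 0;
    }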
      
      Fixes: a8c879a7 ('RDS: Info and stats')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
950a7a69
• Herbert Xu
      net: Fix skb_set_peeked use-after-free bug · 1f61c92c
      Herbert Xu authored
      commit a0a2a660 upstream.
      
      The commit 738ac1eb ("net: Clone
      skb before setting peeked flag") introduced a use-after-free bug
      in skb_recv_datagram.  This is because skb_set_peeked may create
      a new skb and free the existing one.  As it stands the caller will
      continue to use the old freed skb.
      
      This patch fixes it by making skb_set_peeked return the new skb
      (or the old one if unchanged).
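
A minimal sketch of the resulting calling convention (simplified; the error
label and surrounding locking are assumptions, not the exact diff):

    skb = skb_set_peeked(skb);      /* may clone and free the old skb */
    if (IS_ERR(skb)) {
            error = PTR_ERR(skb);
            goto unlock_err;        /* hypothetical error label */
    }
    /* from here on, only the returned skb may be used */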
      
      Fixes: 738ac1eb ("net: Clone skb before setting peeked flag")
Reported-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Brenden Blanco <bblanco@plumgrid.com>
Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
1f61c92c
• David Ahern
      net: Fix RCU splat in af_key · 815c0610
      David Ahern authored
      commit ba51b6be upstream.
      
      Hit the following splat testing VRF change for ipsec:
      
      [  113.475692] ===============================
      [  113.476194] [ INFO: suspicious RCU usage. ]
      [  113.476667] 4.2.0-rc6-1+deb7u2+clUNRELEASED #3.2.65-1+deb7u2+clUNRELEASED Not tainted
      [  113.477545] -------------------------------
      [  113.478013] /work/monster-14/dsa/kernel.git/include/linux/rcupdate.h:568 Illegal context switch in RCU read-side critical section!
      [  113.479288]
      [  113.479288] other info that might help us debug this:
      [  113.479288]
      [  113.480207]
      [  113.480207] rcu_scheduler_active = 1, debug_locks = 1
      [  113.480931] 2 locks held by setkey/6829:
      [  113.481371]  #0:  (&net->xfrm.xfrm_cfg_mutex){+.+.+.}, at: [<ffffffff814e9887>] pfkey_sendmsg+0xfb/0x213
      [  113.482509]  #1:  (rcu_read_lock){......}, at: [<ffffffff814e767f>] rcu_read_lock+0x0/0x6e
      [  113.483509]
      [  113.483509] stack backtrace:
      [  113.484041] CPU: 0 PID: 6829 Comm: setkey Not tainted 4.2.0-rc6-1+deb7u2+clUNRELEASED #3.2.65-1+deb7u2+clUNRELEASED
      [  113.485422] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5.1-0-g8936dbb-20141113_115728-nilsson.home.kraxel.org 04/01/2014
      [  113.486845]  0000000000000001 ffff88001d4c7a98 ffffffff81518af2 ffffffff81086962
      [  113.487732]  ffff88001d538480 ffff88001d4c7ac8 ffffffff8107ae75 ffffffff8180a154
      [  113.488628]  0000000000000b30 0000000000000000 00000000000000d0 ffff88001d4c7ad8
      [  113.489525] Call Trace:
      [  113.489813]  [<ffffffff81518af2>] dump_stack+0x4c/0x65
      [  113.490389]  [<ffffffff81086962>] ? console_unlock+0x3d6/0x405
      [  113.491039]  [<ffffffff8107ae75>] lockdep_rcu_suspicious+0xfa/0x103
      [  113.491735]  [<ffffffff81064032>] rcu_preempt_sleep_check+0x45/0x47
      [  113.492442]  [<ffffffff8106404d>] ___might_sleep+0x19/0x1c8
      [  113.493077]  [<ffffffff81064268>] __might_sleep+0x6c/0x82
      [  113.493681]  [<ffffffff81133190>] cache_alloc_debugcheck_before.isra.50+0x1d/0x24
      [  113.494508]  [<ffffffff81134876>] kmem_cache_alloc+0x31/0x18f
      [  113.495149]  [<ffffffff814012b5>] skb_clone+0x64/0x80
      [  113.495712]  [<ffffffff814e6f71>] pfkey_broadcast_one+0x3d/0xff
      [  113.496380]  [<ffffffff814e7b84>] pfkey_broadcast+0xb5/0x11e
      [  113.497024]  [<ffffffff814e82d1>] pfkey_register+0x191/0x1b1
      [  113.497653]  [<ffffffff814e9770>] pfkey_process+0x162/0x17e
      [  113.498274]  [<ffffffff814e9895>] pfkey_sendmsg+0x109/0x213
      
      In pfkey_sendmsg the net mutex is taken and then pfkey_broadcast takes
      the RCU lock.
      
Since pfkey_broadcast takes the RCU lock, the allocation argument is
pointless: GFP_ATOMIC must be used between rcu_read_{,un}lock.
The one call outside of RCU can be done with GFP_KERNEL.
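
A minimal sketch of that rule (illustrative, not the actual
pfkey_broadcast_one() body):

    rcu_read_lock();
    /* sleeping allocations are illegal inside the read-side section */
    skb2 = skb_clone(skb, GFP_ATOMIC);
    rcu_read_unlock();

    /* ...while the single caller running outside RCU may use GFP_KERNEL */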
      
      Fixes: 7f6b9dbd ("af_key: locking change")
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
815c0610
• huaibin Wang
      ip6_gre: release cached dst on tunnel removal · 794674db
      huaibin Wang authored
      commit d4257295 upstream.
      
      When a tunnel is deleted, the cached dst entry should be released.
      
      This problem may prevent the removal of a netns (seen with a x-netns IPv6
      gre tunnel):
        unregister_netdevice: waiting for lo to become free. Usage count = 3
      
      CC: Dmitry Kozlov <xeb@mail.ru>
      Fixes: c12b395a ("gre: Support GRE over IPv6")
Signed-off-by: huaibin Wang <huaibin.wang@6wind.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[ kamal: backport to 3.13-stable ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
794674db
• Marek Lindner
      batman-adv: protect tt_local_entry from concurrent delete events · ccc1ebfa
      Marek Lindner authored
      commit ef72706a upstream.
      
      The tt_local_entry deletion performed in batadv_tt_local_remove() was neither
      protecting against simultaneous deletes nor checking whether the element was
      still part of the list before calling hlist_del_rcu().
      
      Replacing the hlist_del_rcu() call with batadv_hash_remove() provides adequate
      protection via hash spinlocks as well as an is-element-still-in-hash check to
      avoid 'blind' hash removal.
      
      Fixes: 068ee6e2 ("batman-adv: roaming handling mechanism redesign")
      Reported-by: alfonsname@web.de
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
ccc1ebfa
• Marc Zyngier
      arm64: KVM: Fix host crash when injecting a fault into a 32bit guest · a1631ba4
      Marc Zyngier authored
      commit 126c69a0 upstream.
      
      When injecting a fault into a misbehaving 32bit guest, it seems
      rather idiotic to also inject a 64bit fault that is only going
      to corrupt the guest state. This leads to a situation where we
      perform an illegal exception return at EL2 causing the host
      to crash instead of killing the guest.
      
      Just fix the stupid bug that has been there from day 1.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
a1631ba4
• Guillermo A. Amaral
      Add factory recertified Crucial M500s to blacklist · 1e4d0268
      Guillermo A. Amaral authored
      commit 7a7184b0 upstream.
      
      The Crucial M500 is known to have issues with queued TRIM commands, the
      factory recertified SSDs use a different model number naming convention
      which causes them to get ignored by the blacklist.
      
      The new naming convention boils down to: s/Crucial_/FC/
Signed-off-by: Guillermo A. Amaral <g@maral.me>
Signed-off-by: Tejun Heo <tj@kernel.org>
[ luis: backported to 3.16:
  - dropped ATA_HORKAGE_ZERO_AFTER_TRIM flag
  - adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
1e4d0268
• Manfred Spraul
      ipc/sem.c: update/correct memory barriers · da6c1a2f
      Manfred Spraul authored
      commit 3ed1f8a9 upstream.
      
      sem_lock() did not properly pair memory barriers:
      
      !spin_is_locked() and spin_unlock_wait() are both only control barriers.
      The code needs an acquire barrier, otherwise the cpu might perform read
      operations before the lock test.
      
      As no primitive exists inside <include/spinlock.h> and since it seems
no one wants another primitive, the code creates a local primitive within
      ipc/sem.c.
      
      With regards to -stable:
      
      The change of sem_wait_array() is a bugfix, the change to sem_lock() is a
      nop (just a preprocessor redefinition to improve the readability).  The
      bugfix is necessary for all kernels that use sem_wait_array() (i.e.:
      starting from 3.10).
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Kirill Tkhai <ktkhai@parallels.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
da6c1a2f
• Manfred Spraul
      ipc/sem.c: change memory barrier in sem_lock() to smp_rmb() · 355b0f74
      Manfred Spraul authored
      commit 2e094abf upstream.
      
      When I fixed bugs in the sem_lock() logic, I was more conservative than
      necessary.  Therefore it is safe to replace the smp_mb() with smp_rmb().
      And: With smp_rmb(), semop() syscalls are up to 10% faster.
      
      The race we must protect against is:
      
      	sem->lock is free
      	sma->complex_count = 0
      	sma->sem_perm.lock held by thread B
      
      thread A:
      
      A: spin_lock(&sem->lock)
      
      			B: sma->complex_count++; (now 1)
      			B: spin_unlock(&sma->sem_perm.lock);
      
      A: spin_is_locked(&sma->sem_perm.lock);
      A: XXXXX memory barrier
      A: if (sma->complex_count == 0)
      
      Thread A must read the increased complex_count value, i.e. the read must
      not be reordered with the read of sem_perm.lock done by spin_is_locked().
      
      Since it's about ordering of reads, smp_rmb() is sufficient.
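
Sketched against the sem_lock() fast path (simplified, not the exact diff),
the ordering looks like:

    if (!spin_is_locked(&sma->sem_perm.lock)) {
            /* Only reads are involved, so smp_rmb() is enough to keep the
             * complex_count load from being speculated ahead of the
             * sem_perm.lock load done by spin_is_locked(). */
            smp_rmb();
            if (sma->complex_count == 0) {
                    /* fast path: the per-semaphore spinlock suffices */
            }
    }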
      
      [akpm@linux-foundation.org: update sem_lock() comment, from Davidlohr]
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ luis: 3.16 prereq for:
  3ed1f8a9 "ipc/sem.c: update/correct memory barriers" ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
355b0f74
• Herton R. Krzesinski
      ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits · a99220f8
      Herton R. Krzesinski authored
      commit 602b8593 upstream.
      
      The current semaphore code allows a potential use after free: in
      exit_sem we may free the task's sem_undo_list while there is still
      another task looping through the same semaphore set and cleaning the
      sem_undo list at freeary function (the task called IPC_RMID for the same
      semaphore set).
      
      For example, with a test program [1] running which keeps forking a lot
      of processes (which then do a semop call with SEM_UNDO flag), and with
      the parent right after removing the semaphore set with IPC_RMID, and a
      kernel built with CONFIG_SLAB, CONFIG_SLAB_DEBUG and
      CONFIG_DEBUG_SPINLOCK, you can easily see something like the following
      in the kernel log:
      
         Slab corruption (Not tainted): kmalloc-64 start=ffff88003b45c1c0, len=64
         000: 6b 6b 6b 6b 6b 6b 6b 6b 00 6b 6b 6b 6b 6b 6b 6b  kkkkkkkk.kkkkkkk
         010: ff ff ff ff 6b 6b 6b 6b ff ff ff ff ff ff ff ff  ....kkkk........
         Prev obj: start=ffff88003b45c180, len=64
         000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
         010: ff ff ff ff ff ff ff ff c0 fb 01 37 00 88 ff ff  ...........7....
         Next obj: start=ffff88003b45c200, len=64
         000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
         010: ff ff ff ff ff ff ff ff 68 29 a7 3c 00 88 ff ff  ........h).<....
         BUG: spinlock wrong CPU on CPU#2, test/18028
         general protection fault: 0000 [#1] SMP
         Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
         CPU: 2 PID: 18028 Comm: test Not tainted 4.2.0-rc5+ #1
         Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
         RIP: spin_dump+0x53/0xc0
         Call Trace:
           spin_bug+0x30/0x40
           do_raw_spin_unlock+0x71/0xa0
           _raw_spin_unlock+0xe/0x10
           freeary+0x82/0x2a0
           ? _raw_spin_lock+0xe/0x10
           semctl_down.clone.0+0xce/0x160
           ? __do_page_fault+0x19a/0x430
           ? __audit_syscall_entry+0xa8/0x100
           SyS_semctl+0x236/0x2c0
           ? syscall_trace_leave+0xde/0x130
           entry_SYSCALL_64_fastpath+0x12/0x71
         Code: 8b 80 88 03 00 00 48 8d 88 60 05 00 00 48 c7 c7 a0 2c a4 81 31 c0 65 8b 15 eb 40 f3 7e e8 08 31 68 00 4d 85 e4 44 8b 4b 08 74 5e <45> 8b 84 24 88 03 00 00 49 8d 8c 24 60 05 00 00 8b 53 04 48 89
         RIP  [<ffffffff810d6053>] spin_dump+0x53/0xc0
          RSP <ffff88003750fd68>
         ---[ end trace 783ebb76612867a0 ]---
         NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [test:18053]
         Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
         CPU: 3 PID: 18053 Comm: test Tainted: G      D         4.2.0-rc5+ #1
         Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
         RIP: native_read_tsc+0x0/0x20
         Call Trace:
           ? delay_tsc+0x40/0x70
           __delay+0xf/0x20
           do_raw_spin_lock+0x96/0x140
           _raw_spin_lock+0xe/0x10
           sem_lock_and_putref+0x11/0x70
           SYSC_semtimedop+0x7bf/0x960
           ? handle_mm_fault+0xbf6/0x1880
           ? dequeue_task_fair+0x79/0x4a0
           ? __do_page_fault+0x19a/0x430
           ? kfree_debugcheck+0x16/0x40
           ? __do_page_fault+0x19a/0x430
           ? __audit_syscall_entry+0xa8/0x100
           ? do_audit_syscall_entry+0x66/0x70
           ? syscall_trace_enter_phase1+0x139/0x160
           SyS_semtimedop+0xe/0x10
           SyS_semop+0x10/0x20
           entry_SYSCALL_64_fastpath+0x12/0x71
         Code: 47 10 83 e8 01 85 c0 89 47 10 75 08 65 48 89 3d 1f 74 ff 7e c9 c3 0f 1f 44 00 00 55 48 89 e5 e8 87 17 04 00 66 90 c9 c3 0f 1f 00 <55> 48 89 e5 0f 31 89 c1 48 89 d0 48 c1 e0 20 89 c9 48 09 c8 c9
         Kernel panic - not syncing: softlockup: hung tasks
      
      I wasn't able to trigger any badness on a recent kernel without the
      proper config debugs enabled, however I have softlockup reports on some
      kernel versions, in the semaphore code, which are similar as above (the
      scenario is seen on some servers running IBM DB2 which uses semaphore
      syscalls).
      
      The patch here fixes the race against freeary, by acquiring or waiting
      on the sem_undo_list lock as necessary (exit_sem can race with freeary,
      while freeary sets un->semid to -1 and removes the same sem_undo from
      list_proc or when it removes the last sem_undo).
      
      After the patch I'm unable to reproduce the problem using the test case
      [1].
      
      [1] Test case used below:
      
          #include <stdio.h>
          #include <sys/types.h>
          #include <sys/ipc.h>
          #include <sys/sem.h>
          #include <sys/wait.h>
          #include <stdlib.h>
          #include <time.h>
          #include <unistd.h>
          #include <errno.h>
      
          #define NSEM 1
          #define NSET 5
      
          int sid[NSET];
      
          void thread()
          {
                  struct sembuf op;
                  int s;
                  uid_t pid = getuid();
      
                  s = rand() % NSET;
                  op.sem_num = pid % NSEM;
                  op.sem_op = 1;
                  op.sem_flg = SEM_UNDO;
      
                  semop(sid[s], &op, 1);
                  exit(EXIT_SUCCESS);
          }
      
          void create_set()
          {
                  int i, j;
                  pid_t p;
                  union {
                          int val;
                          struct semid_ds *buf;
                          unsigned short int *array;
                          struct seminfo *__buf;
                  } un;
      
                  /* Create and initialize semaphore set */
                  for (i = 0; i < NSET; i++) {
                          sid[i] = semget(IPC_PRIVATE , NSEM, 0644 | IPC_CREAT);
                          if (sid[i] < 0) {
                                  perror("semget");
                                  exit(EXIT_FAILURE);
                          }
                  }
                  un.val = 0;
                  for (i = 0; i < NSET; i++) {
                          for (j = 0; j < NSEM; j++) {
                                  if (semctl(sid[i], j, SETVAL, un) < 0)
                                          perror("semctl");
                          }
                  }
      
                  /* Launch threads that operate on semaphore set */
                  for (i = 0; i < NSEM * NSET * NSET; i++) {
                          p = fork();
                          if (p < 0)
                                  perror("fork");
                          if (p == 0)
                                  thread();
                  }
      
                  /* Free semaphore set */
                  for (i = 0; i < NSET; i++) {
                          if (semctl(sid[i], NSEM, IPC_RMID))
                                  perror("IPC_RMID");
                  }
      
                  /* Wait for forked processes to exit */
                  while (wait(NULL)) {
                          if (errno == ECHILD)
                                  break;
                  };
          }
      
          int main(int argc, char **argv)
          {
                  pid_t p;
      
                  srand(time(NULL));
      
                  while (1) {
                          p = fork();
                          if (p < 0) {
                                  perror("fork");
                                  exit(EXIT_FAILURE);
                          }
                          if (p == 0) {
                                  create_set();
                                  goto end;
                          }
      
                          /* Wait for forked processes to exit */
                          while (wait(NULL)) {
                                  if (errno == ECHILD)
                                          break;
                          };
                  }
          end:
                  return 0;
          }
      
      [akpm@linux-foundation.org: use normal comment layout]
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Rafael Aquini <aquini@redhat.com>
CC: Aristeu Rozanski <aris@redhat.com>
Cc: David Jeffery <djeffery@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
a99220f8
• Wanpeng Li
      mm/hwpoison: fix page refcount of unknown non LRU page · 0cc1cdb1
      Wanpeng Li authored
      commit 4f32be67 upstream.
      
      After trying to drain pages from pagevec/pageset, we try to get reference
      count of the page again, however, the reference count of the page is not
      reduced if the page is still not on LRU list.
      
      Fix it by adding the put_page() to drop the page reference which is from
      __get_any_page().
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
0cc1cdb1
• Horia Geantă
      crypto: caam - fix memory corruption in ahash_final_ctx · 2e1b7eb3
Horia Geantă authored
      commit b310c178 upstream.
      
      When doing pointer operation for accessing the HW S/G table,
      a value representing number of entries (and not number of bytes)
      must be used.
      
      Fixes: 045e3678 ("crypto: caam - ahash hmac support")
Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
2e1b7eb3
• Michael Walle
      EDAC, ppc4xx: Access mci->csrows array elements properly · 8370c49f
      Michael Walle authored
      commit 5c16179b upstream.
      
      The commit
      
        de3910eb ("edac: change the mem allocation scheme to
      		 make Documentation/kobject.txt happy")
      
      changed the memory allocation for the csrows member. But ppc4xx_edac was
      forgotten in the patch. Fix it.
Signed-off-by: Michael Walle <michael@walle.cc>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Link: http://lkml.kernel.org/r/1437469253-8611-1-git-send-email-michael@walle.cc
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
8370c49f
• Bart Van Assche
      libfc: Fix fc_fcp_cleanup_each_cmd() · f791fa15
      Bart Van Assche authored
      commit 8f2777f5 upstream.
      
      Since fc_fcp_cleanup_cmd() can sleep this function must not
      be called while holding a spinlock. This patch avoids that
      fc_fcp_cleanup_each_cmd() triggers the following bug:
      
      BUG: scheduling while atomic: sg_reset/1512/0x00000202
      1 lock held by sg_reset/1512:
       #0:  (&(&fsp->scsi_pkt_lock)->rlock){+.-...}, at: [<ffffffffc0225cd5>] fc_fcp_cleanup_each_cmd.isra.21+0xa5/0x150 [libfc]
      Preemption disabled at:[<ffffffffc0225cd5>] fc_fcp_cleanup_each_cmd.isra.21+0xa5/0x150 [libfc]
      Call Trace:
       [<ffffffff816c612c>] dump_stack+0x4f/0x7b
       [<ffffffff810828bc>] __schedule_bug+0x6c/0xd0
       [<ffffffff816c87aa>] __schedule+0x71a/0xa10
       [<ffffffff816c8ad2>] schedule+0x32/0x80
       [<ffffffffc0217eac>] fc_seq_set_resp+0xac/0x100 [libfc]
       [<ffffffffc0218b11>] fc_exch_done+0x41/0x60 [libfc]
       [<ffffffffc0225cff>] fc_fcp_cleanup_each_cmd.isra.21+0xcf/0x150 [libfc]
       [<ffffffffc0225f43>] fc_eh_device_reset+0x1c3/0x270 [libfc]
       [<ffffffff814a2cc9>] scsi_try_bus_device_reset+0x29/0x60
       [<ffffffff814a3908>] scsi_ioctl_reset+0x258/0x2d0
       [<ffffffff814a2650>] scsi_ioctl+0x150/0x440
       [<ffffffff814b3a9d>] sd_ioctl+0xad/0x120
       [<ffffffff8132f266>] blkdev_ioctl+0x1b6/0x810
       [<ffffffff811da608>] block_ioctl+0x38/0x40
       [<ffffffff811b4e08>] do_vfs_ioctl+0x2f8/0x530
       [<ffffffff811b50c1>] SyS_ioctl+0x81/0xa0
       [<ffffffff816cf8b2>] system_call_fastpath+0x16/0x7a
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
f791fa15
• Bart Van Assche
      libfc: Fix fc_exch_recv_req() error path · 98769e1a
      Bart Van Assche authored
      commit f6979ade upstream.
      
      Due to patch "libfc: Do not invoke the response handler after
      fc_exch_done()" (commit ID 7030fd62) the lport_recv() call
      in fc_exch_recv_req() is passed a dangling pointer. Avoid this
      by moving the fc_frame_free() call from fc_invoke_resp() to its
      callers. This patch fixes the following crash:
      
      general protection fault: 0000 [#3] PREEMPT SMP
      RIP: fc_lport_recv_req+0x72/0x280 [libfc]
      Call Trace:
       fc_exch_recv+0x642/0xde0 [libfc]
       fcoe_percpu_receive_thread+0x46a/0x5ed [fcoe]
       kthread+0x10a/0x120
       ret_from_fork+0x42/0x70
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
98769e1a
• John Soni Jose
      libiscsi: Fix host busy blocking during connection teardown · 205b26c1
      John Soni Jose authored
      commit 660d0831 upstream.
      
In case of hw iscsi offload, a host can have N-number of active
      connections. There can be IO's running on some connections which
      make host->host_busy always TRUE. Now if logout from a connection
      is tried then the code gets into an infinite loop as host->host_busy
      is always TRUE.
      
       iscsi_conn_teardown(....)
       {
         .........
          /*
           * Block until all in-progress commands for this connection
           * time out or fail.
           */
           for (;;) {
            spin_lock_irqsave(session->host->host_lock, flags);
            if (!atomic_read(&session->host->host_busy)) { /* OK for ERL == 0 */
      	      spin_unlock_irqrestore(session->host->host_lock, flags);
                    break;
            }
           spin_unlock_irqrestore(session->host->host_lock, flags);
           msleep_interruptible(500);
           iscsi_conn_printk(KERN_INFO, conn, "iscsi conn_destroy(): "
                       "host_busy %d host_failed %d\n",
      	          atomic_read(&session->host->host_busy),
      	          session->host->host_failed);
      
      	................
      	...............
           }
        }
      
      This is not an issue with software-iscsi/iser as each cxn is a separate
      host.
      
      Fix:
      Acquiring eh_mutex in iscsi_conn_teardown() before setting
      session->state = ISCSI_STATE_TERMINATE.
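
Roughly (a sketch of the intent only, not the exact diff; surrounding
cleanup omitted):

    mutex_lock(&session->eh_mutex);
    /* ... */
    session->state = ISCSI_STATE_TERMINATE;
    /* ... */
    mutex_unlock(&session->eh_mutex);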
Signed-off-by: John Soni Jose <sony.john@avagotech.com>
Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
Reviewed-by: Chris Leech <cleech@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
[ luis: backported to 3.16: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
205b26c1
• Alex Deucher
      drm/radeon: add new OLAND pci id · 0c5590e3
      Alex Deucher authored
      commit e037239e upstream.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
0c5590e3
• Joe Thornber
      dm btree: add ref counting ops for the leaves of top level btrees · 9644ee54
      Joe Thornber authored
      commit b0dc3c8b upstream.
      
      When using nested btrees, the top leaves of the top levels contain
      block addresses for the root of the next tree down.  If we shadow a
      shared leaf node the leaf values (sub tree roots) should be incremented
      accordingly.
      
      This is only an issue if there is metadata sharing in the top levels.
      Which only occurs if metadata snapshots are being used (as is possible
      with dm-thinp).  And could result in a block from the thinp metadata
      snap being reused early, thus corrupting the thinp metadata snap.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
[ luis: backported to 3.16:
  - dropped changes to remove_one() as suggested by Mike Snitzer ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
9644ee54
• Joe Thornber
      dm thin metadata: delete btrees when releasing metadata snapshot · f953fcd3
      Joe Thornber authored
      commit 7f518ad0 upstream.
      
      The device details and mapping trees were just being decremented
      before.  Now btree_del() is called to do a deep delete.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
f953fcd3
• Richard Weinberger
      localmodconfig: Use Kbuild files too · cbc7b6a0
      Richard Weinberger authored
      commit c0ddc8c7 upstream.
      
      In kbuild it is allowed to define objects in files named "Makefile"
      and "Kbuild".
      Currently localmodconfig reads objects only from "Makefile"s and misses
      modules like nouveau.
      
Link: http://lkml.kernel.org/r/1437948415-16290-1-git-send-email-richard@nod.at
Reported-and-tested-by: Leonidas Spyropoulos <artafinde@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
cbc7b6a0
• Haozhong Zhang
      KVM: x86: Use adjustment in guest cycles when handling MSR_IA32_TSC_ADJUST · 2252d02b
      Haozhong Zhang authored
      commit d7add054 upstream.
      
      When kvm_set_msr_common() handles a guest's write to
MSR_IA32_TSC_ADJUST, it will calculate an adjustment based on the data
      written by guest and then use it to adjust TSC offset by calling a
      call-back adjust_tsc_offset(). The 3rd parameter of adjust_tsc_offset()
      indicates whether the adjustment is in host TSC cycles or in guest TSC
      cycles. If SVM TSC scaling is enabled, adjust_tsc_offset()
      [i.e. svm_adjust_tsc_offset()] will first scale the adjustment;
      otherwise, it will just use the unscaled one. As the MSR write here
      comes from the guest, the adjustment is in guest TSC cycles. However,
      the current kvm_set_msr_common() uses it as a value in host TSC
      cycles (by using true as the 3rd parameter of adjust_tsc_offset()),
      which can result in an incorrect adjustment of TSC offset if SVM TSC
      scaling is enabled. This patch fixes this problem.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
2252d02b
• Jan Kara
      fsnotify: fix oops in fsnotify_clear_marks_by_group_flags() · a4910b81
      Jan Kara authored
      commit 8f2f3eb5 upstream.
      
      fsnotify_clear_marks_by_group_flags() can race with
      fsnotify_destroy_marks() so that when fsnotify_destroy_mark_locked()
      drops mark_mutex, a mark from the list iterated by
      fsnotify_clear_marks_by_group_flags() can be freed and thus the next
      entry pointer we have cached may become stale and we dereference free
      memory.
      
Fix the problem by first moving the marks to be freed to a special private
list and then always freeing the first entry in that list.  This method
      is safe even when entries from the list can disappear once we drop the
      lock.
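
A rough sketch of that private-list scheme (member and helper names are
assumptions, not the upstream diff):

    LIST_HEAD(free_list);

    /* move every mark we intend to free onto a list only we can see */
    list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list)
            if (mark->flags & flags)
                    list_move(&mark->g_list, &free_list);

    /* always take the current head, so a concurrent free elsewhere can no
     * longer invalidate a cached "next" pointer */
    while (!list_empty(&free_list)) {
            mark = list_first_entry(&free_list, struct fsnotify_mark, g_list);
            list_del_init(&mark->g_list);
            fsnotify_destroy_mark_locked(mark, group); /* may drop mark_mutex */
    }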
Signed-off-by: Jan Kara <jack@suse.com>
Reported-by: Ashish Sangwan <a.sangwan@samsung.com>
Reviewed-by: Ashish Sangwan <a.sangwan@samsung.com>
Cc: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
a4910b81
• Joseph Qi
      ocfs2: fix BUG in ocfs2_downconvert_thread_do_work() · 9f2b5b3e
      Joseph Qi authored
      commit 209f7512 upstream.
      
      The "BUG_ON(list_empty(&osb->blocked_lock_list))" in
      ocfs2_downconvert_thread_do_work can be triggered in the following case:
      
ocfs2dc has firstly saved osb->blocked_lock_count to a local variable
      processed, and then processes the dentry lockres.  During the dentry
      put, it calls iput and then deletes rw, inode and open lockres from
      blocked list in ocfs2_mark_lockres_freeing.  And this causes the
      variable `processed' to not reflect the number of blocked lockres to be
      processed, which triggers the BUG.
Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
9f2b5b3e
• Marcus Gelderie
      ipc: modify message queue accounting to not take kernel data structures into account · 02fd58e5
      Marcus Gelderie authored
      commit de54b9ac upstream.
      
      A while back, the message queue implementation in the kernel was
      improved to use btrees to speed up retrieval of messages, in commit
      d6629859 ("ipc/mqueue: improve performance of send/recv").
      
      That patch introducing the improved kernel handling of message queues
      (using btrees) has, as a by-product, changed the meaning of the QSIZE
      field in the pseudo-file created for the queue.  Before, this field
      reflected the size of the user-data in the queue.  Since, it also takes
      kernel data structures into account.  For example, if 13 bytes of user
      data are in the queue, on my machine the file reports a size of 61
      bytes.
      
      There was some discussion on this topic before (for example
https://lkml.org/lkml/2014/10/1/115).  Commenting on a thread on lkml, Michael
      Kerrisk gave the following background
      (https://lkml.org/lkml/2015/6/16/74):
      
          The pseudofiles in the mqueue filesystem (usually mounted at
          /dev/mqueue) expose fields with metadata describing a message
          queue. One of these fields, QSIZE, as originally implemented,
          showed the total number of bytes of user data in all messages in
          the message queue, and this feature was documented from the
          beginning in the mq_overview(7) page. In 3.5, some other (useful)
          work happened to break the user-space API in a couple of places,
          including the value exposed via QSIZE, which now includes a measure
          of kernel overhead bytes for the queue, a figure that renders QSIZE
          useless for its original purpose, since there's no way to deduce
          the number of overhead bytes consumed by the implementation.
          (The other user-space breakage was subsequently fixed.)
      
      This patch removes the accounting of kernel data structures in the
      queue.  Reporting the size of these data-structures in the QSIZE field
      was a breaking change (see Michael's comment above).  Without the QSIZE
      field reporting the total size of user-data in the queue, there is no
      way to deduce this number.
      
      It should be noted that the resource limit RLIMIT_MSGQUEUE is counted
      against the worst-case size of the queue (in both the old and the new
      implementation).  Therefore, the kernel overhead accounting in QSIZE is
      not necessary to help the user understand the limitations RLIMIT imposes
      on the processes.
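
The change is easy to observe from user space; a hypothetical check (queue
name and message are illustrative) could look like:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void)
    {
            struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
            mqd_t q = mq_open("/qsize_demo", O_CREAT | O_RDWR, 0600, &attr);

            if (q == (mqd_t)-1) {
                    perror("mq_open");
                    return 1;
            }
            mq_send(q, "hello, queue!", 13, 0);
            /* With this patch, "cat /dev/mqueue/qsize_demo" reports QSIZE:13
             * (user data only) rather than a larger number that also counts
             * kernel bookkeeping. */
            mq_close(q);
            mq_unlink("/qsize_demo");
            return 0;
    }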
Signed-off-by: Marcus Gelderie <redmnic@gmail.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Acked-by: Michael Kerrisk <mtk.manpages@gmail.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: David Howells <dhowells@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: John Duffy <jb_duffy@btinternet.com>
Cc: Arto Bendiken <arto@bendiken.net>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
02fd58e5
• David Daney
      MIPS: Make set_pte() SMP safe. · 9952b2a2
      David Daney authored
      commit 46011e6e upstream.
      
      On MIPS the GLOBAL bit of the PTE must have the same value in any
      aligned pair of PTEs.  These pairs of PTEs are referred to as
      "buddies".  In a SMP system is is possible for two CPUs to be calling
      set_pte() on adjacent PTEs at the same time.  There is a race between
      setting the PTE and a different CPU setting the GLOBAL bit in its
      buddy PTE.
      
      This race can be observed when multiple CPUs are executing
      vmap()/vfree() at the same time.
      
      Make setting the buddy PTE's GLOBAL bit an atomic operation to close
      the race condition.
      
      The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not*
      handled.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10835/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
9952b2a2
• Michal Hocko
      mm, vmscan: Do not wait for page writeback for GFP_NOFS allocations · 54e33b21
      Michal Hocko authored
      commit ecf5fc6e upstream.
      
      Nikolay has reported a hang when a memcg reclaim got stuck with the
      following backtrace:
      
      PID: 18308  TASK: ffff883d7c9b0a30  CPU: 1   COMMAND: "rsync"
        #0 __schedule at ffffffff815ab152
        #1 schedule at ffffffff815ab76e
        #2 schedule_timeout at ffffffff815ae5e5
        #3 io_schedule_timeout at ffffffff815aad6a
        #4 bit_wait_io at ffffffff815abfc6
        #5 __wait_on_bit at ffffffff815abda5
        #6 wait_on_page_bit at ffffffff8111fd4f
        #7 shrink_page_list at ffffffff81135445
        #8 shrink_inactive_list at ffffffff81135845
        #9 shrink_lruvec at ffffffff81135ead
       #10 shrink_zone at ffffffff811360c3
       #11 shrink_zones at ffffffff81136eff
       #12 do_try_to_free_pages at ffffffff8113712f
       #13 try_to_free_mem_cgroup_pages at ffffffff811372be
       #14 try_charge at ffffffff81189423
       #15 mem_cgroup_try_charge at ffffffff8118c6f5
       #16 __add_to_page_cache_locked at ffffffff8112137d
       #17 add_to_page_cache_lru at ffffffff81121618
       #18 pagecache_get_page at ffffffff8112170b
       #19 grow_dev_page at ffffffff811c8297
       #20 __getblk_slow at ffffffff811c91d6
       #21 __getblk_gfp at ffffffff811c92c1
       #22 ext4_ext_grow_indepth at ffffffff8124565c
       #23 ext4_ext_create_new_leaf at ffffffff81246ca8
       #24 ext4_ext_insert_extent at ffffffff81246f09
       #25 ext4_ext_map_blocks at ffffffff8124a848
       #26 ext4_map_blocks at ffffffff8121a5b7
       #27 mpage_map_one_extent at ffffffff8121b1fa
       #28 mpage_map_and_submit_extent at ffffffff8121f07b
       #29 ext4_writepages at ffffffff8121f6d5
       #30 do_writepages at ffffffff8112c490
       #31 __filemap_fdatawrite_range at ffffffff81120199
       #32 filemap_flush at ffffffff8112041c
       #33 ext4_alloc_da_blocks at ffffffff81219da1
       #34 ext4_rename at ffffffff81229b91
       #35 ext4_rename2 at ffffffff81229e32
       #36 vfs_rename at ffffffff811a08a5
       #37 SYSC_renameat2 at ffffffff811a3ffc
       #38 sys_renameat2 at ffffffff811a408e
       #39 sys_rename at ffffffff8119e51e
       #40 system_call_fastpath at ffffffff815afa89
      
      Dave Chinner has properly pointed out that this is a deadlock in the
      reclaim code because ext4 doesn't submit pages which are marked by
      PG_writeback right away.
      
      The heuristic was introduced by commit e62e384e ("memcg: prevent OOM
      with too many dirty pages") and it was applied only when may_enter_fs
      was specified.  The code has been changed by c3b94f44 ("memcg:
      further prevent OOM with too many dirty pages") which has removed the
      __GFP_FS restriction with a reasoning that we do not get into the fs
      code.  But this is not sufficient apparently because the fs doesn't
      necessarily submit pages marked PG_writeback for IO right away.
      
      ext4_bio_write_page calls io_submit_add_bh but that doesn't necessarily
      submit the bio.  Instead it tries to map more pages into the bio and
      mpage_map_one_extent might trigger memcg charge which might end up
      waiting on a page which is marked PG_writeback but hasn't been submitted
      yet so we would end up waiting for something that never finishes.
      
      Fix this issue by replacing __GFP_IO by may_enter_fs check (for case 2)
      before we go to wait on the writeback.  The page fault path, which is
      the only path that triggers memcg oom killer since 3.12, shouldn't
      require GFP_NOFS and so we shouldn't reintroduce the premature OOM
      killer issue which was originally addressed by the heuristic.
      
As per David Chinner, xfs has been doing a similar thing since 2.6.15 already,
so ext4 is not the only affected filesystem.  Moreover he notes:
      
      : For example: IO completion might require unwritten extent conversion
      : which executes filesystem transactions and GFP_NOFS allocations. The
      : writeback flag on the pages can not be cleared until unwritten
      : extent conversion completes. Hence memory reclaim cannot wait on
      : page writeback to complete in GFP_NOFS context because it is not
      : safe to do so, memcg reclaim or otherwise.
      
      [tytso@mit.edu: corrected the control flow]
      Fixes: c3b94f44 ("memcg: further prevent OOM with too many dirty pages")
Reported-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ luis: backported to 3.16: used Hugh's backport for 4.1 ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
54e33b21
• Peter Zijlstra
      perf: Fix fasync handling on inherited events · 09b16c51
      Peter Zijlstra authored
      commit fed66e2c upstream.
      
      Vince reported that the fasync signal stuff doesn't work proper for
      inherited events. So fix that.
      
      Installing fasync allocates memory and sets filp->f_flags |= FASYNC,
      which upon the demise of the file descriptor ensures the allocation is
      freed and state is updated.
      
      Now for perf, we can have the events stick around for a while after the
      original FD is dead because of references from child events. So we
      cannot copy the fasync pointer around. We can however consistently use
      the parent's fasync, as that will be updated.
Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1434011521.1495.71.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
09b16c51
• Mathias Nyman
      xhci: fix off by one error in TRB DMA address boundary check · c03818d0
      Mathias Nyman authored
      commit 7895086a upstream.
      
      We need to check that a TRB is part of the current segment
      before calculating its DMA address.
      
      Previously a ring segment didn't use a full memory page, and every
      new ring segment got a new memory page, so the off by one
      error in checking the upper bound was never seen.
      
      Now that we use a full memory page, 256 TRBs (4096 bytes), the off by one
      didn't catch the case when a TRB was the first element of the next segment.
      
      This is triggered if the virtual memory pages for a ring segment are
next to each other in increasing order where the ring buffer wraps around and
      causes errors like:
      
      [  106.398223] xhci_hcd 0000:00:14.0: ERROR Transfer event TRB DMA ptr not part of current TD ep_index 0 comp_code 1
      [  106.398230] xhci_hcd 0000:00:14.0: Looking for event-dma fffd3000 trb-start fffd4fd0 trb-end fffd5000 seg-start fffd4000 seg-end fffd4ff0
      
      The trb-end address is one outside the end-seg address.
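
For illustration, a tiny stand-alone model of the corrected boundary check
(numbers taken from the log above; this is not the driver code):

    #include <stdio.h>

    #define TRB_SIZE         16
    #define TRBS_PER_SEGMENT 256    /* a full 4096-byte page */

    /* A TRB belongs to the segment only if it starts at or before the last
     * TRB slot, i.e. seg_start + (TRBS_PER_SEGMENT - 1) * TRB_SIZE. */
    static int trb_in_segment(unsigned long trb, unsigned long seg_start)
    {
            unsigned long last = seg_start + (TRBS_PER_SEGMENT - 1) * TRB_SIZE;

            return trb >= seg_start && trb <= last;
    }

    int main(void)
    {
            unsigned long seg = 0xfffd4000UL;

            printf("%d\n", trb_in_segment(0xfffd4ff0UL, seg)); /* 1: last TRB of this segment */
            printf("%d\n", trb_in_segment(0xfffd5000UL, seg)); /* 0: first TRB of the next segment */
            return 0;
    }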
Tested-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
c03818d0
• Felix Fietkau
      MIPS: Fix sched_getaffinity with MT FPAFF enabled · 0598aa81
      Felix Fietkau authored
      commit 1d62d737 upstream.
      
      p->thread.user_cpus_allowed is zero-initialized and is only filled on
      the first sched_setaffinity call.
      
      To avoid adding overhead in the task initialization codepath, simply OR
      the returned mask in sched_getaffinity with p->cpus_allowed.
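
In other words, the reported mask becomes roughly (a sketch, not the exact
diff):

    /* user_cpus_allowed may still be all zeroes if sched_setaffinity() was
     * never called, so fold in the always-valid cpus_allowed mask */
    cpumask_or(&allowed, &p->thread.user_cpus_allowed, &p->cpus_allowed);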
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10740/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
0598aa81
• Roland Dreier
      target: REPORT LUNS should return LUN 0 even for dynamic ACLs · 346d7bc9
      Roland Dreier authored
      commit 9c395170 upstream.
      
      If an initiator doesn't have any real LUNs assigned, we should report
      LUN 0 and a LUN list length of 1.  Some versions of Solaris at least
go berserk if we report a LUN list length of 0.
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
[ luis: backported to 3.16: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      346d7bc9