- 03 Sep, 2015 10 commits
-
-
Thomas Hellstrom authored
commit 3e04e2fe upstream. This addresses two issues that cause problems with viewperf maya-03 in situations with memory pressure. The first issue causes attempts to unreserve buffers if batched reservation fails due to, for example, a signal pending. While previously the ttm_eu api was resistant against this type of error, it no longer is, and the lockdep code will complain about attempting to unreserve buffers that are not reserved. The issue is resolved by avoiding the call to ttm_eu_backoff_reservation() in the buffer reserve error path. The second issue is that the binding_mutex may be held when user-space fence objects are created and hence during memory reclaims. This may cause recursive attempts to grab the binding mutex. The issue is resolved by not holding the binding mutex across fence creation and submission. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Sinclair Yeh <syeh@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Horia Geantă authored
commit b310c178 upstream. When doing pointer arithmetic to access the HW S/G table, a value representing the number of entries (and not the number of bytes) must be used. Fixes: 045e3678 ("crypto: caam - ahash hmac support") Signed-off-by: Horia Geantă <horia.geanta@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
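For illustration only (generic C, not the caam driver's actual structures): pointer arithmetic on a typed pointer already scales by the element size, so adding a byte count instead of an entry count lands far past the intended entry.

    /* hypothetical HW S/G entry layout, for illustration only */
    struct sg_entry { unsigned long long ptr; unsigned int len; unsigned int flags; };

    static struct sg_entry *nth_entry(struct sg_entry *table, unsigned long nents)
    {
        /* Correct: offset in entries; the compiler scales by sizeof(struct sg_entry). */
        return table + nents;
    }

    static struct sg_entry *nth_entry_buggy(struct sg_entry *table, unsigned long nents)
    {
        /* Wrong: an offset in bytes gets scaled by the entry size a second time. */
        return table + nents * sizeof(struct sg_entry);
    }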
-
Michael Walle authored
commit 5c16179b upstream. The commit de3910eb ("edac: change the mem allocation scheme to make Documentation/kobject.txt happy") changed the memory allocation for the csrows member. But ppc4xx_edac was forgotten in the patch. Fix it. Signed-off-by: Michael Walle <michael@walle.cc> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Link: http://lkml.kernel.org/r/1437469253-8611-1-git-send-email-michael@walle.cc Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Bart Van Assche authored
commit 8f2777f5 upstream. Since fc_fcp_cleanup_cmd() can sleep, this function must not be called while holding a spinlock. This patch avoids fc_fcp_cleanup_each_cmd() triggering the following bug:

    BUG: scheduling while atomic: sg_reset/1512/0x00000202
    1 lock held by sg_reset/1512:
     #0: (&(&fsp->scsi_pkt_lock)->rlock){+.-...}, at: [<ffffffffc0225cd5>] fc_fcp_cleanup_each_cmd.isra.21+0xa5/0x150 [libfc]
    Preemption disabled at: [<ffffffffc0225cd5>] fc_fcp_cleanup_each_cmd.isra.21+0xa5/0x150 [libfc]
    Call Trace:
     [<ffffffff816c612c>] dump_stack+0x4f/0x7b
     [<ffffffff810828bc>] __schedule_bug+0x6c/0xd0
     [<ffffffff816c87aa>] __schedule+0x71a/0xa10
     [<ffffffff816c8ad2>] schedule+0x32/0x80
     [<ffffffffc0217eac>] fc_seq_set_resp+0xac/0x100 [libfc]
     [<ffffffffc0218b11>] fc_exch_done+0x41/0x60 [libfc]
     [<ffffffffc0225cff>] fc_fcp_cleanup_each_cmd.isra.21+0xcf/0x150 [libfc]
     [<ffffffffc0225f43>] fc_eh_device_reset+0x1c3/0x270 [libfc]
     [<ffffffff814a2cc9>] scsi_try_bus_device_reset+0x29/0x60
     [<ffffffff814a3908>] scsi_ioctl_reset+0x258/0x2d0
     [<ffffffff814a2650>] scsi_ioctl+0x150/0x440
     [<ffffffff814b3a9d>] sd_ioctl+0xad/0x120
     [<ffffffff8132f266>] blkdev_ioctl+0x1b6/0x810
     [<ffffffff811da608>] block_ioctl+0x38/0x40
     [<ffffffff811b4e08>] do_vfs_ioctl+0x2f8/0x530
     [<ffffffff811b50c1>] SyS_ioctl+0x81/0xa0
     [<ffffffff816cf8b2>] system_call_fastpath+0x16/0x7a

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Vasu Dev <vasu.dev@intel.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
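As a hedged sketch of the pattern behind the fix (lock and helper names here are illustrative, not the exact libfc hunk): anything that may sleep has to run after the spinlock is released, holding a reference instead.

    spin_lock_irqsave(&pkt_lock, flags);
    cleanup_cmd(fsp);                        /* BUG: may sleep while atomic */
    spin_unlock_irqrestore(&pkt_lock, flags);

    /* Fixed shape: pin the packet, drop the lock, then do the sleeping work. */
    spin_lock_irqsave(&pkt_lock, flags);
    fc_fcp_pkt_hold(fsp);
    spin_unlock_irqrestore(&pkt_lock, flags);
    cleanup_cmd(fsp);                        /* safe: no spinlock held */
    fc_fcp_pkt_release(fsp);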
-
Bart Van Assche authored
commit f6979ade upstream. Due to patch "libfc: Do not invoke the response handler after fc_exch_done()" (commit ID 7030fd62) the lport_recv() call in fc_exch_recv_req() is passed a dangling pointer. Avoid this by moving the fc_frame_free() call from fc_invoke_resp() to its callers. This patch fixes the following crash:

    general protection fault: 0000 [#3] PREEMPT SMP
    RIP: fc_lport_recv_req+0x72/0x280 [libfc]
    Call Trace:
     fc_exch_recv+0x642/0xde0 [libfc]
     fcoe_percpu_receive_thread+0x46a/0x5ed [fcoe]
     kthread+0x10a/0x120
     ret_from_fork+0x42/0x70

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Vasu Dev <vasu.dev@intel.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
John Soni Jose authored
commit 660d0831 upstream. In case of hw iscsi offload, a host can have N active connections. There can be I/Os running on some connections which keep host->host_busy always TRUE. Now if a logout from one connection is attempted, the code gets into an infinite loop as host->host_busy is always TRUE.

    iscsi_conn_teardown(....)
    {
        .........
        /*
         * Block until all in-progress commands for this connection
         * time out or fail.
         */
        for (;;) {
            spin_lock_irqsave(session->host->host_lock, flags);
            if (!atomic_read(&session->host->host_busy)) { /* OK for ERL == 0 */
                spin_unlock_irqrestore(session->host->host_lock, flags);
                break;
            }
            spin_unlock_irqrestore(session->host->host_lock, flags);
            msleep_interruptible(500);
            iscsi_conn_printk(KERN_INFO, conn, "iscsi conn_destroy(): "
                              "host_busy %d host_failed %d\n",
                              atomic_read(&session->host->host_busy),
                              session->host->host_failed);
            ................
            ...............
        }
    }

This is not an issue with software-iscsi/iser as each connection is a separate host. Fix: acquire eh_mutex in iscsi_conn_teardown() before setting session->state = ISCSI_STATE_TERMINATE. Signed-off-by: John Soni Jose <sony.john@avagotech.com> Reviewed-by: Mike Christie <michaelc@cs.wisc.edu> Reviewed-by: Chris Leech <cleech@redhat.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Alex Deucher authored
commit e037239e upstream. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Joe Thornber authored
commit b0dc3c8b upstream. When using nested btrees, the top leaves of the top levels contain block addresses for the root of the next tree down. If we shadow a shared leaf node, the leaf values (sub-tree roots) should be incremented accordingly. This is only an issue if there is metadata sharing in the top levels, which only occurs if metadata snapshots are being used (as is possible with dm-thinp), and it could result in a block from the thinp metadata snap being reused early, thus corrupting the thinp metadata snap. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> [ luis: backported to 3.16: - dropped changes to remove_one() as suggested by Mike Snitzer ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Joe Thornber authored
commit 7f518ad0 upstream. The device details and mapping trees were just being decremented before. Now btree_del() is called to do a deep delete. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Richard Weinberger authored
commit c0ddc8c7 upstream. In kbuild it is allowed to define objects in files named "Makefile" and "Kbuild". Currently localmodconfig reads objects only from "Makefile"s and misses modules like nouveau. Link: http://lkml.kernel.org/r/1437948415-16290-1-git-send-email-richard@nod.at Reported-and-tested-by: Leonidas Spyropoulos <artafinde@gmail.com> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
- 31 Aug, 2015 13 commits
-
-
Juergen Gross authored
commit 4809146b upstream. Commit 37868fe1 ("x86/ldt: Make modify_ldt synchronous") introduced a new struct ldt_struct anchored at mm->context.ldt. Adapt the x86 fpu emulation code to use that new structure. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: billm@melbpc.org.au Link: http://lkml.kernel.org/r/1438883674-1240-1-git-send-email-jgross@suse.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Peter Zijlstra authored
commit 0a04b016 upstream. I want to use lockless_dereference() from seqlock.h, which would mean including rcupdate.h from it, however rcupdate.h already includes seqlock.h. Avoid this by moving lockless_dereference() into compiler.h. This is somewhat tricky since it uses smp_read_barrier_depends() which isn't available there, but it's a CPP macro so we can get away with it. The alternative would be moving it into asm/barrier.h, but that would mean updating each arch (which I can do if people feel that is more appropriate). Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> [ luis: 3.16 prereq for: 37868fe1 "x86/ldt: Make modify_ldt synchronous" ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Paul E. McKenney authored
commit 54ef6df3 upstream. Although rcu_dereference() and friends can be used in situations where object lifetimes are being managed by something other than RCU, the resulting sparse and lockdep-RCU noise can be annoying. This commit therefore supplies a lockless_dereference(), which provides the protection for dereferences without the RCU-related debugging noise. Reported-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> [ luis: 3.16 prereq for: 37868fe1 "x86/ldt: Make modify_ldt synchronous" ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
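For reference, the helper is essentially this (a sketch of its shape; the kernel's temporary-variable naming differs):

    #define lockless_dereference(p) \
    ({ \
        typeof(p) __p = ACCESS_ONCE(p); \
        smp_read_barrier_depends(); /* order the dereference against the load of p */ \
        (__p); \
    })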
-
Juergen Gross authored
commit 136d9d83 upstream. Commit 37868fe1 ("x86/ldt: Make modify_ldt synchronous") introduced a new struct ldt_struct anchored at mm->context.ldt. convert_ip_to_linear() was changed to reflect this, but indexing into the ldt has to be changed as the pointer is no longer void *. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bp@suse.de Link: http://lkml.kernel.org/r/1438848278-12906-1-git-send-email-jgross@suse.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Andy Lutomirski authored
commit 37868fe1 upstream. modify_ldt() has questionable locking and does not synchronize threads. Improve it: redesign the locking and synchronize all threads' LDTs using an IPI on all modifications. This will dramatically slow down modify_ldt in multithreaded programs, but there shouldn't be any multithreaded programs that care about modify_ldt's performance in the first place. This fixes some fallout from the CVE-2015-5157 fixes. Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: security@kernel.org <security@kernel.org> Cc: xen-devel <xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/4c6978476782160600471bd865b318db34c7b628.1438291540.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> [ luis: backported to 3.16: - dropped changes to comments in switch_mm() - included asm/mmu_context.h in perf_event.c - adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
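The core of the new scheme looks roughly like this (sketched from the commit description; names are approximate): the new LDT is published with a release store and every CPU running the mm is IPI'd to reload it.

    static void flush_ldt(void *mm)
    {
        if (current->active_mm == mm)
            load_LDT(&current->active_mm->context);
    }

    static void install_ldt(struct mm_struct *mm, struct ldt_struct *ldt)
    {
        /* Publish; readers pick this up with lockless_dereference(). */
        smp_store_release(&mm->context.ldt, ldt);

        /* Force every CPU currently using this mm to reload the LDT. */
        on_each_cpu_mask(mm_cpumask(mm), flush_ldt, mm, true);
    }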
-
Haozhong Zhang authored
commit d7add054 upstream. When kvm_set_msr_common() handles a guest's write to MSR_IA32_TSC_ADJUST, it will calculate an adjustment based on the data written by the guest and then use it to adjust the TSC offset by calling a call-back adjust_tsc_offset(). The 3rd parameter of adjust_tsc_offset() indicates whether the adjustment is in host TSC cycles or in guest TSC cycles. If SVM TSC scaling is enabled, adjust_tsc_offset() [i.e. svm_adjust_tsc_offset()] will first scale the adjustment; otherwise, it will just use the unscaled one. As the MSR write here comes from the guest, the adjustment is in guest TSC cycles. However, the current kvm_set_msr_common() uses it as a value in host TSC cycles (by using true as the 3rd parameter of adjust_tsc_offset()), which can result in an incorrect adjustment of the TSC offset if SVM TSC scaling is enabled. This patch fixes this problem. Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Peter Zijlstra authored
commit fed66e2c upstream. Vince reported that the fasync signal stuff doesn't work properly for inherited events. So fix that. Installing fasync allocates memory and sets filp->f_flags |= FASYNC, which upon the demise of the file descriptor ensures the allocation is freed and state is updated. Now for perf, we can have the events stick around for a while after the original FD is dead because of references from child events. So we cannot copy the fasync pointer around. We can however consistently use the parent's fasync, as that will be updated. Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1434011521.1495.71.camel@twins Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
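The fix amounts to a small accessor, roughly along these lines (hedged sketch), so that signal delivery always uses the parent event's fasync state:

    static struct fasync_struct **perf_event_fasync(struct perf_event *event)
    {
        /* Only the parent event (the one tied to the original fd) keeps fasync state. */
        if (event->parent)
            event = event->parent;
        return &event->fasync;
    }

    /* callers then use it instead of &event->fasync, e.g.: */
    kill_fasync(perf_event_fasync(event), SIGIO, POLL_IN);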
-
Roland Dreier authored
commit 9c395170 upstream. If an initiator doesn't have any real LUNs assigned, we should report LUN 0 and a LUN list length of 1. Some versions of Solaris at least go berserk if we report a LUN list length of 0. Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
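A hedged sketch of the idea (buffer layout per the SPC REPORT LUNS format; variable names are illustrative, not the target-core code): if no LUNs are mapped, still emit one entry for LUN 0 so the LUN LIST LENGTH is 8 rather than 0.

    unsigned int lun_count = 0;
    /* ... fill one 8-byte entry per mapped LUN, starting at buf[8] ... */

    if (lun_count == 0) {
        int_to_scsilun(0, (struct scsi_lun *)&buf[8]);   /* report LUN 0 */
        lun_count = 1;
    }

    /* LUN LIST LENGTH: number of entries * 8 bytes, big-endian in bytes 0-3. */
    put_unaligned_be32(lun_count * 8, &buf[0]);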
-
Alexei Potashnik authored
commit 9547308b upstream. Make sure all non-READ SCSI commands get targ_xfer_tag initialized to 0xffffffff, not just WRITEs. Double-free of a TUR cmd object occurs under the following scenario: 1. TUR received (targ_xfer_tag is uninitialized and left at 0) 2. TUR status sent 3. First unsolicited NOPIN is sent to initiator (gets targ_xfer_tag of 0) 4. NOPOUT for NOPIN (with TTT=0) arrives - its ExpStatSN acks TUR status, TUR is queued for removal - LIO tries to find NOPIN with TTT=0, but finds the same TUR instead, TUR is queued for removal for the 2nd time (Drop unbalanced conditional bracket usage - nab) Signed-off-by: Alexei Potashnik <alexei@purestorage.com> Signed-off-by: Spencer Baugh <sbaugh@catern.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> [ luis: backported to 3.16: - kept brackets as they are needed in 3.16 kernel ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Guenter Roeck authored
commit 8ef9724b upstream. When inserting a new register into a block, the present bit map size is increased using krealloc. krealloc does not clear the additionally allocated memory, leaving it filled with random values. Result is that some registers are considered cached even though this is not the case. Fix the problem by clearing the additionally allocated memory. Also, if the bitmap size does not increase, do not reallocate the bitmap at all to reduce overhead. Fixes: 3f4ff561 ("regmap: rbtree: Make cache_present bitmap per node") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
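Sketched with assumed identifiers (the regcache-rbtree code differs in detail), the two parts of the fix are: skip the reallocation when the bitmap does not grow, and zero only the newly added bytes because krealloc() leaves them uninitialised.

    if (new_size > old_size) {
        present = krealloc(rbnode->cache_present, new_size, GFP_KERNEL);
        if (!present)
            return -ENOMEM;
        /* Clear the grown tail so registers are not falsely reported as cached. */
        memset((u8 *)present + old_size, 0, new_size - old_size);
    } else {
        /* No growth: keep the existing allocation to avoid a needless realloc. */
        present = rbnode->cache_present;
    }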
-
Bob Liu authored
commit 53bc7dc0 upstream. The BUG_ON() in purge_persistent_gnt() will be triggered when the previous purge work hasn't finished. There is a work_pending() check before this BUG_ON(), but it does not cover the case where the work item is still running. Acked-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Bob Liu authored
commit 7b076750 upstream. We should consider info->feature_persistent when adding indirect page to list info->indirect_pages, else the BUG_ON() in blkif_free() would be triggered. When we are using persistent grants the indirect_pages list should always be empty because blkfront has pre-allocated enough persistent pages to fill all requests on the ring. Acked-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
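The gist of the check, as an approximate sketch rather than the exact blkfront hunk:

    if (!info->feature_persistent && indirect_page) {
        /* Without persistent grants the page is stashed for reuse; with them,
         * pre-allocated persistent pages are used and this list must stay empty. */
        list_add(&indirect_page->lru, &info->indirect_pages);
    }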
-
Charles Keepax authored
commit 72e43164 upstream. The PM runtime core by default assumes a chip is suspended when runtime PM is enabled. Currently the arizona driver enables runtime PM when the chip is fully active and then disables the DCVDD regulator at the end of arizona_dev_init. This however has several problems. Firstly, if we reach the end of arizona_dev_init, we did not properly follow all the procedures for shutting down the chip, and most notably we never marked the chip as cache-only, so any writes occurring between then and the next PM runtime resume will be lost. Secondly, if we are already resumed when we reach the end of dev_init, then at best we get unbalanced regulator enable/disables and at worst we lose DCVDD whilst we still need it. Additionally, since the commit 4f0216409f7c ("mfd: arizona: Add better support for system suspend"), the PM runtime operations may disable/enable the IRQ, so the IRQs must now be enabled before we call any PM operations. This patch adds a call to pm_runtime_set_active to inform the PM core that the device is starting up active and moves the PM enabling to around the IRQ initialisation to avoid any PM callbacks happening until the IRQs are initialised. Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
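In runtime-PM terms the resulting ordering is roughly this (a sketch, not the full arizona_dev_init() diff):

    /* The chip is fully powered here, so tell the PM core before enabling
     * runtime PM (which otherwise assumes the device starts suspended). */
    pm_runtime_set_active(arizona->dev);

    ret = arizona_irq_init(arizona);
    if (ret != 0)
        goto err;

    /* Enable only now, so any runtime PM callback that touches the IRQ
     * finds it already initialised. */
    pm_runtime_enable(arizona->dev);
    pm_runtime_idle(arizona->dev);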
-
- 27 Aug, 2015 17 commits
-
-
Mimi Zohar authored
commit 4351c294 upstream. The current "mask" policy option matches files opened as MAY_READ, MAY_WRITE, MAY_APPEND or MAY_EXEC. This patch extends the "mask" option to match files opened containing one of these modes. For example, "mask=^MAY_READ" would match files opened read-write. Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: Dr. Greg Wettstein <gw@idfusion.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
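An illustrative ima policy rule using the extended syntax (an example, not a rule shipped with the kernel):

    # measure every file opened with read access, even when the open also
    # requests other modes (O_RDWR etc.); a plain "mask=MAY_READ" rule only
    # matches opens whose mode is exactly MAY_READ
    measure func=FILE_CHECK mask=^MAY_READ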
-
Mimi Zohar authored
commit 139069ef upstream. The new "euid" policy condition measures files with the specified effective uid (euid). In addition, for CAP_SETUID files it measures files with the specified uid or suid. Changelog: - fixed checkpatch.pl warnings - fixed avc denied {setuid} messages - based on Roberto's feedback Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: Dr. Greg Wettstein <gw@idfusion.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Herton R. Krzesinski authored
commit 7250dc3f upstream. I received a report from a user of the following mouse which needs this quirk:

    usb 1-1.6: USB disconnect, device number 58
    usb 1-1.6: new low speed USB device number 59 using ehci_hcd
    usb 1-1.6: New USB device found, idVendor=04f2, idProduct=1053
    usb 1-1.6: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    usb 1-1.6: Product: USB Optical Mouse
    usb 1-1.6: Manufacturer: PixArt
    usb 1-1.6: configuration #1 chosen from 1 choice
    input: PixArt USB Optical Mouse as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.6/1-1.6:1.0/input/input5887
    generic-usb 0003:04F2:1053.16FE: input,hidraw2: USB HID v1.11 Mouse [PixArt USB Optical Mouse] on usb-0000:00:1a.0-1.6/input0

The quirk was tested by the reporter and it fixed the frequent disconnections etc. [jkosina@suse.cz: reorder the position in hid-ids.h] Signed-off-by: Herton R. Krzesinski <herton@redhat.com> Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Nikolay Borisov authored
commit c45653c3 upstream. Switch ext4 to using sb_getblk_gfp with GFP_NOFS added to fix possible deadlocks in the page writeback path. Signed-off-by: Nikolay Borisov <kernel@kyup.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Nikolay Borisov authored
commit bd7ade3c upstream. sb_getblk() is used during ext4 (and possibly other FSes) writeback paths. Sometimes such paths require allocating memory and guaranteeing that such allocation won't block. Currently, however, there is no way to provide user flags for sb_getblk, which could lead to deadlocks. This patch implements sb_getblk_gfp, with the only difference being that it can accept user-provided GFP flags. Signed-off-by: Nikolay Borisov <kernel@kyup.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
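The helper is a thin wrapper; roughly (a sketch based on the description, together with the kind of call site the ext4 patch above needs):

    static inline struct buffer_head *
    sb_getblk_gfp(struct super_block *sb, sector_t block, gfp_t gfp)
    {
        return __getblk_gfp(sb->s_bdev, block, sb->s_blocksize, gfp);
    }

    /* writeback-path caller that must not recurse into the FS: */
    bh = sb_getblk_gfp(inode->i_sb, pblk, __GFP_MOVABLE | GFP_NOFS);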
-
Gioh Kim authored
commit 3b5e6454 upstream. Buffer cache is allocated from the movable area because it is normally referenced only for a while and then released. But some filesystems hold buffer cache for a long time, which can disturb page migration. New APIs are introduced to allocate buffer cache with a caller-specified flag. The *_gfp APIs are for users who want to set the page allocation flag for page cache allocation, and the *_unmovable APIs are for users who want to allocate page cache from a non-movable area. Signed-off-by: Gioh Kim <gioh.kim@lge.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> [ luis: 3.16 prereq for: bd7ade3c "bufferhead: Add _gfp version for sb_getblk()" ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Christoph Hellwig authored
commit 16b8528d upstream. We only want to steer the I/O completion towards a queue, but don't actually access any per-CPU data, so the raw_ version is fine to use and avoids the warnings when using smp_processor_id(). Signed-off-by: Christoph Hellwig <hch@lst.de> Reported-by: Andy Lutomirski <luto@kernel.org> Tested-by: Andy Lutomirski <luto@kernel.org> Acked-by: Sumit Saxena <sumit.saxena@avagotech.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> [ luis: backported to 3.16: - dropped changes to megasas_build_dcdb_fusion() ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Chad Dupuis authored
commit ef86cb20 upstream. Signed-off-by: Chad Dupuis <chad.dupuis@qlogic.com> Signed-off-by: Saurav Kashyap <saurav.kashyap@qlogic.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Cc: Himanshu Madhani <himanshu.madhani@qlogic.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Linus Torvalds authored
commit 6f957724 upstream. The firmware class uevent function accessed the "fw_priv->buf" buffer without the proper locking and testing for NULL. This is an old bug (looks like it goes back to 2012 and commit 1244691c: "firmware loader: introduce firmware_buf"), but for some reason it's triggering only now in 4.2-rc1. Shuah Khan is trying to bisect what it is that causes this to trigger more easily, but in the meantime let's just fix the bug since others are hitting it too (at least Ingo reports having seen it as well). Reported-and-tested-by: Shuah Khan <shuahkh@osg.samsung.com> Acked-by: Ming Lei <ming.lei@canonical.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
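The shape of the fix is approximately this (hedged; the loader's existing fw_lock mutex is the lock in question):

    static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env)
    {
        struct firmware_priv *fw_priv = to_firmware_priv(dev);
        int err = 0;

        mutex_lock(&fw_lock);
        if (fw_priv->buf)            /* may be NULL if the load was aborted */
            err = do_firmware_uevent(fw_priv, env);
        mutex_unlock(&fw_lock);

        return err;
    }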
-
Florian Westphal authored
commit 1e16aa3d upstream. skb_gso_segment() has a 'features' argument representing offload features available to the output path. A few handlers, e.g. GRE, instead re-fetch the features of skb->dev and use those instead of the provided ones when handling encapsulation/tunnels. Depending on dev->hw_enc_features of the output device, skb_gso_segment() can then return NULL even when the caller has disabled all GSO feature bits, as segmentation of the inner header assumes the device will take care of it. This e.g. affects the tbf scheduler, which will silently drop GRE-encap GSO skbs that did not fit the remaining token quota, as the segmentation does not work when the device supports the corresponding hw offload capabilities. Cc: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net> [ luis: backported to 3.16: used davem's backport to 3.14 ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
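Illustrated on a hypothetical tunnel segmentation handler (not a specific upstream hunk): the inner-header pass should be driven by the features the caller passed down, not by a fresh read of the device capabilities.

    netdev_features_t enc_features;

    /* Wrong: ignores the caller's (possibly cleared) GSO feature bits. */
    enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);

    /* Right: intersect with the 'features' argument handed to the handler. */
    enc_features = skb->dev->hw_enc_features & features;

    segs = skb_mac_gso_segment(skb, enc_features);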
-
Ivan Vecera authored
commit ade4dc3e upstream. The commit "e29aa339 bna: Enable Multi Buffer RX" moved the packet counter increment from the beginning of the NAPI processing loop to after the check for erroneous packets, so such packets are never accounted. This counter is used to inform firmware about the number of processed completions (packets). As these packets are never acked, the firmware fires IRQs for them again and again. Fixes: e29aa339 ("bna: Enable Multi Buffer RX") Signed-off-by: Ivan Vecera <ivecera@redhat.com> Acked-by: Rasesh Mody <rasesh.mody@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Eric Dumazet authored
commit 10e2eb87 upstream. Multicast dst are not cached. They carry DST_NOCACHE. As mentioned in commit f8864972 ("ipv4: fix dst race in sk_dst_get()"), these dst need special care before caching them into a socket. Caching them is allowed only if their refcnt was not 0, ie we must use atomic_inc_not_zero() Also, we must use READ_ONCE() to fetch sk->sk_rx_dst, as mentioned in commit d0c294c5 ("tcp: prevent fetching dst twice in early demux code") Fixes: 421b3885 ("udp: ipv4: Add udp early demux") Tested-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz> Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz> Reported-by: Alex Gartrell <agartrell@fb.com> Cc: Michal Kubeček <mkubecek@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net> [ luis: backported to 3.16: used davem's backport to 3.14 ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Dan Carpenter authored
commit 468b732b upstream. "len" is a signed integer. We check that len is not negative, so it goes from zero to INT_MAX. PAGE_SIZE is unsigned long so the comparison is type-promoted to unsigned long. ULONG_MAX - 4095 is higher than INT_MAX so the condition can never be true. I don't know if this is harmful but it seems safe to limit "len" to INT_MAX - 4095. Fixes: a8c879a7 ('RDS: Info and stats') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
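A standalone illustration of the promotion trap (generic C, not the RDS code itself):

    #include <limits.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL                /* unsigned long, as in the kernel */

    int main(void)
    {
        int len = INT_MAX;                  /* any value that passed the >= 0 check */

        /* 'len' is promoted to unsigned long; ULONG_MAX - 4095 is far above
         * INT_MAX, so this overflow check can never fire. */
        if (len > ULONG_MAX - PAGE_SIZE + 1)
            puts("never reached");

        /* Keeping the bound inside the signed range restores the check. */
        if (len > (int)(INT_MAX - PAGE_SIZE + 1))
            puts("len too large");
        return 0;
    }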
-
Florian Westphal authored
commit 0470eb99 upstream. Kirill A. Shutemov says: This simple test-case triggers a few locking asserts in the kernel:

    int main(int argc, char **argv)
    {
        unsigned int block_size = 16 * 4096;
        struct nl_mmap_req req = {
            .nm_block_size = block_size,
            .nm_block_nr = 64,
            .nm_frame_size = 16384,
            .nm_frame_nr = 64 * block_size / 16384,
        };
        unsigned int ring_size;
        int fd;

        fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
        if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof(req)) < 0)
            exit(1);
        if (setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof(req)) < 0)
            exit(1);

        ring_size = req.nm_block_nr * req.nm_block_size;
        mmap(NULL, 2 * ring_size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
        return 0;
    }
    +++ exited with 0 +++

    BUG: sleeping function called from invalid context at /home/kas/git/public/linux-mm/kernel/locking/mutex.c:616
    in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: init
    3 locks held by init/1:
     #0: (reboot_mutex){+.+...}, at: [<ffffffff81080959>] SyS_reboot+0xa9/0x220
     #1: ((reboot_notifier_list).rwsem){.+.+..}, at: [<ffffffff8107f379>] __blocking_notifier_call_chain+0x39/0x70
     #2: (rcu_callback){......}, at: [<ffffffff810d32e0>] rcu_do_batch.isra.49+0x160/0x10c0
    Preemption disabled at: [<ffffffff8145365f>] __delay+0xf/0x20
    CPU: 1 PID: 1 Comm: init Not tainted 4.1.0-00009-gbddf4c4818e0 #253
    Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014
     ffff88017b3d8000 ffff88027bc03c38 ffffffff81929ceb 0000000000000102
     0000000000000000 ffff88027bc03c68 ffffffff81085a9d 0000000000000002
     ffffffff81ca2a20 0000000000000268 0000000000000000 ffff88027bc03c98
    Call Trace:
     <IRQ>
     [<ffffffff81929ceb>] dump_stack+0x4f/0x7b
     [<ffffffff81085a9d>] ___might_sleep+0x16d/0x270
     [<ffffffff81085bed>] __might_sleep+0x4d/0x90
     [<ffffffff8192e96f>] mutex_lock_nested+0x2f/0x430
     [<ffffffff81932fed>] ? _raw_spin_unlock_irqrestore+0x5d/0x80
     [<ffffffff81464143>] ? __this_cpu_preempt_check+0x13/0x20
     [<ffffffff8182fc3d>] netlink_set_ring+0x1ed/0x350
     [<ffffffff8182e000>] ? netlink_undo_bind+0x70/0x70
     [<ffffffff8182fe20>] netlink_sock_destruct+0x80/0x150
     [<ffffffff817e484d>] __sk_free+0x1d/0x160
     [<ffffffff817e49a9>] sk_free+0x19/0x20
     [..]

Cong Wang says: We can't hold mutex lock in a rcu callback, [..] Thomas Graf says: The socket should be dead at this point. It might be simpler to add a netlink_release_ring() function which doesn't require locking at all. Reported-by: "Kirill A. Shutemov" <kirill@shutemov.name> Diagnosed-by: Cong Wang <cwang@twopensource.com> Suggested-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
dingtianhong authored
commit a951bc1e upstream. The "follow" fail_over_mac policy is useful for multiport devices that either become confused or incur a performance penalty when multiple ports are programmed with the same MAC address, but with this policy the same MAC address can still end up on two slaves through the following steps:

    1) echo +eth0 > /sys/class/net/bond0/bonding/slaves
       bond0 has the same MAC address as eth0, it is MAC1.
    2) echo +eth1 > /sys/class/net/bond0/bonding/slaves
       eth1 is backup, eth1 has MAC2.
    3) ifconfig eth0 down
       eth1 becomes the active slave, the bond swaps the MACs for eth0 and eth1, so eth1 has MAC1 and eth0 has MAC2.
    4) ifconfig eth1 down
       there is no active slave, and eth1 still has MAC1, eth0 has MAC2.
    5) ifconfig eth0 up
       eth0 becomes the active slave again, and the bond sets eth0 to MAC1.

Something is wrong here: if you now set eth1 up, eth0 and eth1 will have the same MAC address, which breaks this policy for ACTIVE_BACKUP mode. This patch fixes the problem by finding the old active slave and swapping MAC addresses with it before changing the active slave. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Tested-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net> [ luis: backported to 3.16: used davem's backport to 3.14 ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Eric Dumazet authored
commit 03645a11 upstream. ip6_datagram_connect() is doing a lot of socket changes without socket being locked. This looks wrong, at least for udp_lib_rehash() which could corrupt lists because of concurrent udp_sk(sk)->udp_portaddr_hash accesses. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
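The usual fix pattern applies and is sketched here (the upstream change splits the function along these lines): move the body into a helper and take the socket lock in the exported entry point.

    int ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
    {
        int res;

        lock_sock(sk);
        res = __ip6_datagram_connect(sk, uaddr, addr_len);
        release_sock(sk);
        return res;
    }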
-
Herbert Xu authored
commit a0a2a660 upstream. The commit 738ac1eb ("net: Clone skb before setting peeked flag") introduced a use-after-free bug in skb_recv_datagram. This is because skb_set_peeked may create a new skb and free the existing one. As it stands the caller will continue to use the old freed skb. This patch fixes it by making skb_set_peeked return the new skb (or the old one if unchanged). Fixes: 738ac1eb ("net: Clone skb before setting peeked flag") Reported-by: Brenden Blanco <bblanco@plumgrid.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Brenden Blanco <bblanco@plumgrid.com> Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
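After the change the peek path in the caller looks roughly like this (hedged sketch of the __skb_recv_datagram side):

    if (flags & MSG_PEEK) {
        skb = skb_set_peeked(skb);       /* may clone and free the original skb */
        error = PTR_ERR(skb);
        if (IS_ERR(skb))
            goto unlock_err;
        atomic_inc(&skb->users);
    }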
-