- 16 Jul, 2012 40 commits
-
Jan Kara authored
commit adee11b2 upstream. Check the provided length of the partition table so that a (possibly maliciously) corrupted partition table cannot cause access to data beyond the current buffer. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
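A minimal sketch of the kind of bounds check described above; the struct, field names and return values are illustrative assumptions, not the actual udf code:

    #include <stddef.h>

    struct part_entry { unsigned char data[16]; };   /* stand-in for a table entry */

    /* Only walk as many entries as the buffer we actually read can hold. */
    static int check_partition_table(const unsigned char *buf, size_t buf_len,
                                     unsigned int reported_entries)
    {
            if (reported_entries > buf_len / sizeof(struct part_entry))
                    return -1;      /* corrupted or oversized length field */
            return 0;               /* safe to parse 'reported_entries' entries */
    }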
-
Jan Kara authored
commit cb14d340 upstream. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ryusuke Konishi authored
commit fbb24a3a upstream. A gc-inode is a pseudo inode used to buffer the blocks to be moved by garbage collection. Block caches of gc-inodes must be cleared every time a garbage collection function (nilfs_clean_segments) completes. Otherwise, stale blocks buffered in the caches may be wrongly reused in successive calls of the GC function. For user files, this is not a problem because their gc-inodes are distinguished by a checkpoint number as well as an inode number. They never buffer different blocks if either an inode number, a checkpoint number, or a block offset differs. However, gc-inodes of sufile, cpfile and DAT file can store different data for the same block offset. Thus, the nilfs_clean_segments function can move an incorrect block for these meta-data files if an old block is cached. I found this is really causing meta-data corruption in nilfs. This fixes the issue by ensuring that the block caches of gc-inodes are cleared, and resolves reported GC problems including checkpoint file corruption, b-tree corruption, and the following warning during GC. nilfs_palloc_freev: entry number 307234 already freed. ... Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Tested-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lubomir Schmidt authored
commit 3026b0e9 upstream. There are two new devices for this driver. Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Henrik Rydberg authored
commit ac852edb upstream. Key lookups may call read_smc() with a fixed-length key string; if the lookup fails, trailing stack content may appear in the kernel log. This patch fixes that. Signed-off-by: Henrik Rydberg <rydberg@euromail.se> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
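One hedged way to express the fix (the exact applesmc message text and key handling here are assumptions): bound the printf precision so a fixed-length, non-NUL-terminated key cannot drag trailing stack bytes into the log.

    /* Sketch only: 'key' is a 4-byte identifier that is not NUL-terminated.
     * "%.4s" caps the output at 4 bytes; a plain "%s" would keep reading
     * whatever happens to follow on the stack. */
    pr_warn("applesmc: %.4s: read data fail\n", key);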
-
Dan Rosenberg authored
commit 67de956f upstream. Fix multiple remotely-exploitable stack-based buffer overflows due to the NCI code pulling length fields directly from incoming frames and copying too much data into statically-sized arrays. Signed-off-by: Dan Rosenberg <dan.j.rosenberg@gmail.com> Cc: security@kernel.org Cc: Lauro Ramos Venancio <lauro.venancio@openbossa.org> Cc: Aloisio Almeida Jr <aloisio.almeida@openbossa.org> Cc: Samuel Ortiz <sameo@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Acked-by: Ilan Elias <ilane@ti.com> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
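The underlying pattern of such fixes, sketched with hypothetical names (the real patch touches several NCI handlers, not this one function):

    #include <string.h>

    #define MAX_GEN_BYTES 48                   /* illustrative destination size */

    struct gen_bytes_store {                   /* stand-in for the fixed-size target */
            unsigned char buf[MAX_GEN_BYTES];
            unsigned char len;
    };

    /* Never trust a length field taken straight from the incoming frame. */
    static void store_gen_bytes(struct gen_bytes_store *s,
                                const unsigned char *data, unsigned char frame_len)
    {
            unsigned char len = frame_len;

            if (len > MAX_GEN_BYTES)           /* clamp before the copy */
                    len = MAX_GEN_BYTES;
            memcpy(s->buf, data, len);
            s->len = len;
    }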
-
Eric Dumazet authored
commit 03e934f6 upstream. Sasha Levin reported following panic : [ 2136.383310] BUG: unable to handle kernel NULL pointer dereference at 00000000000003b0 [ 2136.384022] IP: [<ffffffff8114e400>] __lock_acquire+0xc0/0x4b0 [ 2136.384022] PGD 131c4067 PUD 11c0c067 PMD 0 [ 2136.388106] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC [ 2136.388106] CPU 1 [ 2136.388106] Pid: 24855, comm: trinity-child1 Tainted: G W 3.5.0-rc2-sasha-00015-g7b268f7 #374 [ 2136.388106] RIP: 0010:[<ffffffff8114e400>] [<ffffffff8114e400>] __lock_acquire+0xc0/0x4b0 [ 2136.388106] RSP: 0018:ffff8800130b3ca8 EFLAGS: 00010046 [ 2136.388106] RAX: 0000000000000086 RBX: ffff88001186b000 RCX: 0000000000000000 [ 2136.388106] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000 [ 2136.388106] RBP: ffff8800130b3d08 R08: 0000000000000001 R09: 0000000000000000 [ 2136.388106] R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000002 [ 2136.388106] R13: 00000000000003b0 R14: 0000000000000000 R15: 0000000000000000 [ 2136.388106] FS: 00007fa5b1bd4700(0000) GS:ffff88001b800000(0000) knlGS:0000000000000000 [ 2136.388106] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 2136.388106] CR2: 00000000000003b0 CR3: 0000000011d1f000 CR4: 00000000000406e0 [ 2136.388106] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 2136.388106] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 [ 2136.388106] Process trinity-child1 (pid: 24855, threadinfo ffff8800130b2000, task ffff88001186b000) [ 2136.388106] Stack: [ 2136.388106] ffff8800130b3cd8 ffffffff81121785 ffffffff81236774 000080d000000001 [ 2136.388106] ffff88001b9d6c00 00000000001d6c00 ffffffff130b3d08 ffff88001186b000 [ 2136.388106] 0000000000000000 0000000000000002 0000000000000000 0000000000000000 [ 2136.388106] Call Trace: [ 2136.388106] [<ffffffff81121785>] ? sched_clock_local+0x25/0x90 [ 2136.388106] [<ffffffff81236774>] ? get_empty_filp+0x74/0x220 [ 2136.388106] [<ffffffff8114e97a>] lock_acquire+0x18a/0x1e0 [ 2136.388106] [<ffffffff836b37df>] ? rawsock_release+0x4f/0xa0 [ 2136.388106] [<ffffffff837c0ef0>] _raw_write_lock_bh+0x40/0x80 [ 2136.388106] [<ffffffff836b37df>] ? rawsock_release+0x4f/0xa0 [ 2136.388106] [<ffffffff836b37df>] rawsock_release+0x4f/0xa0 [ 2136.388106] [<ffffffff8321cfe8>] sock_release+0x18/0x70 [ 2136.388106] [<ffffffff8321d069>] sock_close+0x29/0x30 [ 2136.388106] [<ffffffff81236bca>] __fput+0x11a/0x2c0 [ 2136.388106] [<ffffffff81236d85>] fput+0x15/0x20 [ 2136.388106] [<ffffffff8321de34>] sys_accept4+0x1b4/0x200 [ 2136.388106] [<ffffffff837c165c>] ? _raw_spin_unlock_irq+0x4c/0x80 [ 2136.388106] [<ffffffff837c1669>] ? _raw_spin_unlock_irq+0x59/0x80 [ 2136.388106] [<ffffffff837c2565>] ? 
sysret_check+0x22/0x5d [ 2136.388106] [<ffffffff8321de8b>] sys_accept+0xb/0x10 [ 2136.388106] [<ffffffff837c2539>] system_call_fastpath+0x16/0x1b [ 2136.388106] Code: ec 04 00 0f 85 ea 03 00 00 be d5 0b 00 00 48 c7 c7 8a c1 40 84 e8 b1 a5 f8 ff 31 c0 e9 d4 03 00 00 66 2e 0f 1f 84 00 00 00 00 00 <49> 81 7d 00 60 73 5e 85 b8 01 00 00 00 44 0f 44 e0 83 fe 01 77 [ 2136.388106] RIP [<ffffffff8114e400>] __lock_acquire+0xc0/0x4b0 [ 2136.388106] RSP <ffff8800130b3ca8> [ 2136.388106] CR2: 00000000000003b0 [ 2136.388106] ---[ end trace 6d450e935ee18982 ]--- [ 2136.388106] Kernel panic - not syncing: Fatal exception in interrupt rawsock_release() should test if sock->sk is NULL before calling sock_orphan()/sock_put() Reported-by: Sasha Levin <levinsasha928@gmail.com> Tested-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
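A minimal sketch of the guard described in the last sentence (simplified; any error handling and locking in the real function is omitted):

    static int rawsock_release(struct socket *sock)
    {
            struct sock *sk = sock->sk;

            if (!sk)                /* release() may run before a sock was attached */
                    return 0;

            sock_orphan(sk);
            sock_put(sk);
            return 0;
    }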
-
Benjamin Herrenschmidt authored
commit 21b2de34 upstream. There was a typo, checking for CONFIG_TRACE_IRQFLAG instead of CONFIG_TRACE_IRQFLAGS, causing some useful debug code not to be built. This in turn causes a build error on BookE 64-bit due to incorrect semicolons at the end of a couple of macros, so let's fix that too. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Benjamin Herrenschmidt authored
commit be2cf20a upstream. Looks like we still have issues with pSeries and Cell idle code vs. the lazy irq state. In fact, the reset fixes that went upstream are exposing the problem more by causing BUG_ON() to trigger (which this patch turns into a WARN_ON instead). We need to be careful when using a variant of the low power state that has the side effect of turning interrupts back on, to properly set all the SW & lazy state to look as if everything is enabled before we enter the low power state with MSR:EE off, as we will return with MSR:EE on. If not, we have a discrepancy of state which can cause things to go very wrong later on. This patch moves the logic into a helper and uses it from the pseries and cell idle code. The power4/970 idle code already got things right (in assembly, even!) so I'm not touching it. The power7 "bare metal" idle code is subtly different and correct. That leaves PA6T and some hypervisor-based Cell platforms, which have questionable code in there, but they are mostly dead platforms so I'll fix them when I manage to get final answers from the respective maintainers about how the low power state actually works on them. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pravin B Shelar authored
commit abca7c49 upstream. On arches that do not support this_cpu_cmpxchg_double(), slab_lock is used to do an atomic cmpxchg() on the double word which contains page->_count. The page count can be changed from get_page() or put_page() without taking slab_lock, which corrupts the page counter. Fix it by moving page->_count out of the cmpxchg_double data, so that slub does not change it while updating slub meta-data in struct page. [akpm@linux-foundation.org: use standard comment layout, tweak comment text] Reported-by: Amey Bhide <abhide@nicira.com> Signed-off-by: Pravin B Shelar <pshelar@nicira.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ian Campbell authored
[ Upstream commit 6bc96d04 ] Fixes:
[ 15.470311] WARNING: at /local/scratch/ianc/devel/kernels/linux/fs/sysfs/file.c:498 sysfs_attr_ns+0x95/0xa0()
[ 15.470326] sysfs: kobject eth0 without dirent
[ 15.470333] Modules linked in:
[ 15.470342] Pid: 12, comm: xenwatch Not tainted 3.4.0-x86_32p-xenU #93
and
[ 9.150554] BUG: unable to handle kernel paging request at 2b359000
[ 9.150577] IP: [<c1279561>] linkwatch_do_dev+0x81/0xc0
[ 9.150592] *pdpt = 000000002c3c9027 *pde = 0000000000000000
[ 9.150604] Oops: 0002 [#1] SMP
[ 9.150613] Modules linked in:
This is http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=675190 Reported-by: George Shuklin <george.shuklin@gmail.com> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Tested-by: William Dauchy <wdauchy@gmail.com> Cc: stable@kernel.org Cc: 675190@bugs.debian.org Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
stephen hemminger authored
[ Upstream commit 149ddd83 ] This ensures that bridges created with brctl(8) or ioctl(2) directly also carry IFLA_LINKINFO when dumped over netlink. This also allows creating a bridge with ioctl(2) and deleting it with RTM_DELLINK. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit 62b1a8ab ] Orphaning the skb in dev_hard_start_xmit() makes bonding behavior unfriendly for applications sending big UDP bursts: once packets pass the bonding device and reach the real device, they might hit a full qdisc and be dropped. Without orphaning, the sender is automatically throttled because sk->sk_wmem_alloc reaches sk->sk_sndbuf (assuming sk_sndbuf is not too big). We could try to defer the orphaning by adding another test in dev_hard_start_xmit(), but all this seems of little gain now that BQL tends to make packets more likely to be parked in qdisc queues instead of the NIC TX ring, in cases where performance matters. Reverts commits fc6055a5 ("net: Introduce skb_orphan_try()") and 87fd308c ("net: skb_tx_hash() fix relative to skb_orphan_try()"), and removes the SKBTX_DRV_NEEDS_SK_REF flag. Reported-and-bisected-by: Jean-Michel Hautbois <jhautbois@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Tested-by: Oliver Hartkopp <socketcan@hartkopp.net> Acked-by: Oliver Hartkopp <socketcan@hartkopp.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit bc14786a ] There is an off-by-one error in the minimal number of BDs in bnx2x_start_xmit() and bnx2x_tx_int() before stopping/resuming the tx queue. A full-size GSO packet, with data included in skb->head, really needs (MAX_SKB_FRAGS + 4) BDs because of bnx2x_tx_split(). This error triggers if BQL is disabled and heavy TCP transmit traffic occurs. bnx2x_tx_split() definitely can be called, so also remove a wrong comment. Reported-by: Tomas Hruby <thruby@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Eilon Greenstein <eilong@broadcom.com> Cc: Yaniv Rosner <yanivr@broadcom.com> Cc: Merav Sicron <meravs@broadcom.com> Cc: Tom Herbert <therbert@google.com> Cc: Robert Evans <evansr@google.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit d6cb3e41 ] bnx2x driver incorrectly sets ip_summed to CHECKSUM_UNNECESSARY on encapsulated segments. TCP stack happily accepts frames with bad checksums, if they are inside a GRE or IPIP encapsulation. Our understanding is that if no IP or L4 csum validation was done by the hardware, we should leave ip_summed as is (CHECKSUM_NONE), since hardware doesn't provide CHECKSUM_COMPLETE support in its cqe. Then, if IP/L4 checksumming was done by the hardware, set CHECKSUM_UNNECESSARY if no error was flagged. Patch based on findings and analysis from Robert Evans Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Eilon Greenstein <eilong@broadcom.com> Cc: Yaniv Rosner <yanivr@broadcom.com> Cc: Merav Sicron <meravs@broadcom.com> Cc: Tom Herbert <therbert@google.com> Cc: Robert Evans <evansr@google.com> Cc: Willem de Bruijn <willemb@google.com> Acked-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
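The decision logic described above, in sketch form; the predicate name is hypothetical, not the actual bnx2x helper:

    /* Default: the hardware offers no CHECKSUM_COMPLETE value in its cqe,
     * so if it validated nothing we must report CHECKSUM_NONE. */
    skb->ip_summed = CHECKSUM_NONE;

    /* Promote to CHECKSUM_UNNECESSARY only when the hardware did run the
     * IP/L4 validation and flagged no error. */
    if (rx_csum_validated_ok(cqe))          /* hypothetical predicate */
            skb->ip_summed = CHECKSUM_UNNECESSARY;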
-
Eric Dumazet authored
[ Upstream commit 954fba02 ] Bogdan Hamciuc diagnosed and fixed the following bug in netpoll_send_udp(): "skb->len += len;" was used instead of "skb_put(skb, len);", meaning that _if_ a network driver needs to call skb_realloc_headroom(), only packet headers would be copied, leaving garbage in the payload. However, skb_realloc_headroom() must be avoided as much as possible since it requires memory, and netpoll tries hard to work even if memory is exhausted (using a pool of preallocated skbs). It also appears netpoll_send_udp() reserved 16 bytes for the ethernet header, which happens to work for typical drivers but not all. The right thing is to use LL_RESERVED_SPACE(dev) (and also add dev->needed_tailroom of tailroom). This patch combines both fixes. Many thanks to Bogdan for raising this issue. Reported-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Neil Horman <nhorman@tuxdriver.com> Reviewed-by: Neil Horman <nhorman@tuxdriver.com> Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
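Both fixes in sketch form (an illustrative fragment, not the exact netpoll code):

    /* Reserve real link-layer headroom and tailroom instead of a hard-coded 16,
     * so skb_realloc_headroom() is not needed on drivers with larger headers. */
    total_len = len + LL_RESERVED_SPACE(np->dev) + np->dev->needed_tailroom;

    /* skb_put() advances skb->tail together with skb->len; a bare
     * "skb->len += len" leaves the tail behind, so a later reallocation copy
     * would carry the headers but not the payload. */
    skb_put(skb, len);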
-
Eric Dumazet authored
[ Upstream commit 5ee31c68 ] In the transmit path of the bonding driver, skb->cb is used to stash the skb->queue_mapping so that the bonding device can set its own queue mapping. This value becomes corrupted since skb->cb is also used in __dev_xmit_skb. When transmitting through the bonding driver, bond_select_queue is called from dev_queue_xmit. In bond_select_queue the original skb->queue_mapping is copied into skb->cb (via bond_queue_mapping) and skb->queue_mapping is overwritten with the bond driver queue. Subsequently in dev_queue_xmit, __dev_xmit_skb is called, which writes the packet length into skb->cb, thereby overwriting the stashed queue mapping. In bond_dev_queue_xmit (called from hard_start_xmit), the queue mapping for the skb is set to the stashed value, which is now the skb length and hence an invalid queue for the slave device. If we want to save skb->queue_mapping into skb->cb[], the best place is to add a field in struct qdisc_skb_cb, to make sure it won't conflict with other layers (e.g. qdisc, Infiniband...). This patch also makes sure (struct qdisc_skb_cb)->data is aligned on 8 bytes: the netem qdisc for example assumes it can store a u64 in it without misalignment penalty. Note: we only have 20 bytes left in (struct qdisc_skb_cb)->data[]. The largest user is CHOKe and it fills it. Based on a previous patch from Tom Herbert. Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Tom Herbert <therbert@google.com> Cc: John Fastabend <john.r.fastabend@intel.com> Cc: Roland Dreier <roland@kernel.org> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
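A hedged sketch of the layout change; the field name and padding are assumptions, while the 20-byte data[] and the 8-byte alignment come from the text above:

    /* Giving the bonding stash a named slot in qdisc_skb_cb keeps it from
     * colliding with the pkt_len that __dev_xmit_skb() writes into cb[].
     * Kernel integer types (u16) assumed. */
    struct qdisc_skb_cb {
            unsigned int    pkt_len;
            u16             slave_dev_queue_mapping;        /* assumed field name */
            u16             _pad;
            unsigned char   data[20] __attribute__((aligned(8)));
    };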
-
Eric Dumazet authored
[ Upstream commit 16b0dc29 ] Trying to "modprobe dummy numdummies=30000" triggers : INFO: rcu_sched self-detected stall on CPU { 8} (t=60000 jiffies) After this splat, RTNL is locked and reboot is needed. We must call cond_resched() to avoid this, even holding RTNL. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
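A sketch of the shape of the fix, assuming a per-device dummy_init_one() helper as in the driver's init loop:

    static int __init dummy_init_module(void)
    {
            int i, err = 0;

            rtnl_lock();
            for (i = 0; i < numdummies && !err; i++) {
                    err = dummy_init_one();   /* registers one dummy device */
                    cond_resched();           /* yield inside the long RTNL-held
                                               * loop so RCU never sees a stall */
            }
            rtnl_unlock();
            return err;
    }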
-
Eric Dumazet authored
[ Upstream commit cd8f76c0 ] As soon as hardware is notified of a transmit, we no longer can assume skb can be dereferenced, as TX completion might have freed the packet. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
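The rule, in sketch form (the doorbell helper and stats fields are hypothetical; the point is the ordering):

    u32 tx_bytes = skb->len;                /* copy what the stats need... */
    bool gso     = skb_is_gso(skb);

    ring_tx_doorbell(adapter, txq, wrb_cnt); /* hypothetical: from here on the
                                              * TX completion may free the skb */

    stats->tx_bytes += tx_bytes;            /* ...and only touch the copies */
    stats->tx_gso   += gso;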
-
David S. Miller authored
[ Upstream commit 6a2b28ef ] This reverts commit efa230f2. BQL doesn't work with how this driver currently only takes TX interrupts every 1/4 of the TX ring. That behavior needs to be fixed, but that's a larger non-trivial task and for now we have to revert BQL support as this makes the device currently completely unusable. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
stephen hemminger authored
[ Upstream commit 5ff0feac ] The newer flavors of Yukon II use a different method for receive checksum offload, indicated in the driver by the SKY2_HW_NEW_LE flag. On these newer chips, the BMU_ENA_RX_CHKSUM bit should not be set. If receive checksum offload was toggled via ethtool, the driver would incorrectly toggle this bit, enabling the old checksum logic on these chips and triggering a BUG_ON() assertion. Reported-by: Kirill Smelkov <kirr@mns.spb.ru> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Graf authored
[ Upstream commit d189634e ] /proc/net/ipv6_route reflects the contents of fib_table_hash. The proc handler is installed in ip6_route_net_init() whereas fib_table_hash is allocated in fib6_net_init() _after_ the proc handler has been installed. This opens up a short time frame to access fib_table_hash with its pants down. Move the registration of the proc files to a later point in the init order to avoid the race. Tested :-) Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Graf authored
[ Upstream commit 8bd74516 ] Commit 5339ab8b (ipv6: fib: Convert fib6_age() to dst_neigh_lookup().) seems to have mistakenly inverted the exception for cached NTF_ROUTER routes. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit 4bd6683b ] Denys found out "ip neigh" output was truncated to about 54 neighbours. Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Denys Fedoryshchenko <denys@visp.net.lb> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit 3f16da51 ] __lpc_handle_xmit() has two bugs : 1) It can leak skbs in case TXSTATUS_ERROR is set 2) It can wake up txqueue while no slot was freed. Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Roland Stigge <stigge@antcom.de> Tested-by: Roland Stigge <stigge@antcom.de> Cc: Kevin Wells <kevin.wells@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit e3047859 ] lpc_eth does a copy of transmitted skbs to DMA area, without checking skb lengths, so can trigger buffer overflows : memcpy(pldat->tx_buff_v + txidx * ENET_MAXF_SIZE, skb->data, len); One way to get bigger skbs is to allow MTU changes above the 1500 limit. Calling eth_change_mtu() in ndo_change_mtu() makes sure this cannot happen. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Roland Stigge <stigge@antcom.de> Cc: Kevin Wells <kevin.wells@nxp.com> Acked-by: Roland Stigge <stigge@antcom.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
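In sketch form (the surrounding ops entries are illustrative): wiring the generic helper into the driver's netdev ops keeps the MTU at or below 1500, so the fixed-size copy above cannot overrun ENET_MAXF_SIZE.

    static const struct net_device_ops lpc_eth_netdev_ops = {
            .ndo_start_xmit  = lpc_eth_hard_start_xmit,  /* illustrative name */
            .ndo_change_mtu  = eth_change_mtu,           /* rejects MTU > 1500 */
    };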
-
Eric Dumazet authored
[ Upstream commit 4399a4df ] Commit 081b1b1b (l2tp: fix l2tp_ip_sendmsg() route handling) added a race in case the IP route cache is disabled. In this case, we should not do the dst_release(&rt->dst), since it would free the dst immediately instead of waiting a RCU grace period. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: James Chapman <jchapman@katalix.com> Cc: Denys Fedoryshchenko <denys@visp.net.lb> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit a06998b8 ] We must prevent module unloading if some devices are still attached to l2tp_eth driver. Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Denys Fedoryshchenko <denys@visp.net.lb> Tested-by: Denys Fedoryshchenko <denys@visp.net.lb> Cc: James Chapman <jchapman@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit 55432d2b ] commit 5faa5df1 (inetpeer: Invalidate the inetpeer tree along with the routing cache) added a race: before freeing an inetpeer, we must respect a RCU grace period and make sure no user will attempt to increase its refcnt. inetpeer_invalidate_tree() waits for a RCU grace period before inserting the inetpeer tree into gc_list and waking the worker. At that time, no concurrent lookup can find an inetpeer in this tree. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit bec4596b ] drop_monitor calls several sleeping functions while in atomic context.
BUG: sleeping function called from invalid context at mm/slub.c:943
in_atomic(): 1, irqs_disabled(): 0, pid: 2103, name: kworker/0:2
Pid: 2103, comm: kworker/0:2 Not tainted 3.5.0-rc1+ #55
Call Trace:
[<ffffffff810697ca>] __might_sleep+0xca/0xf0
[<ffffffff811345a3>] kmem_cache_alloc_node+0x1b3/0x1c0
[<ffffffff8105578c>] ? queue_delayed_work_on+0x11c/0x130
[<ffffffff815343fb>] __alloc_skb+0x4b/0x230
[<ffffffffa00b0360>] ? reset_per_cpu_data+0x160/0x160 [drop_monitor]
[<ffffffffa00b022f>] reset_per_cpu_data+0x2f/0x160 [drop_monitor]
[<ffffffffa00b03ab>] send_dm_alert+0x4b/0xb0 [drop_monitor]
[<ffffffff810568e0>] process_one_work+0x130/0x4c0
[<ffffffff81058249>] worker_thread+0x159/0x360
[<ffffffff810580f0>] ? manage_workers.isra.27+0x240/0x240
[<ffffffff8105d403>] kthread+0x93/0xa0
[<ffffffff816be6d4>] kernel_thread_helper+0x4/0x10
[<ffffffff8105d370>] ? kthread_freezable_should_stop+0x80/0x80
[<ffffffff816be6d0>] ? gs_change+0xb/0xb
Rework the logic to call the sleeping functions in the right context. Use the standard timer/workqueue API to let the system choose any CPU to perform the allocation and netlink send. Also avoid a loop if reset_per_cpu_data() cannot allocate memory: use mod_timer() to wait 1/10 second before the next try. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neil Horman <nhorman@tuxdriver.com> Reviewed-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
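The retry logic from the last sentence, sketched with assumed names (al_length and data->send_timer are placeholders):

    /* Runs from a work item now (process context), so GFP_KERNEL is allowed.
     * On allocation failure, arm a timer instead of looping. */
    skb = alloc_skb(al_length, GFP_KERNEL);
    if (!skb)
            mod_timer(&data->send_timer, jiffies + HZ / 10);  /* retry in 1/10 s */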
-
Devendra Naga authored
[ Upstream commit ad1be8d3 ] When register_netdev fails, the NAPIs initialized by netif_napi_add must be deleted with netif_napi_del; likewise, when the driver unloads, it should delete the NAPI before unregistering the netdevice with unregister_netdev. Signed-off-by: Devendra Naga <devendra.aaru@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
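The pattern in sketch form (the private-struct and poll-function names are assumptions):

    netif_napi_add(ndev, &priv->napi, my_poll, 64);  /* illustrative poll fn / weight */

    err = register_netdev(ndev);
    if (err) {
            netif_napi_del(&priv->napi);  /* undo netif_napi_add() on the error path */
            return err;
    }

    /* and on driver unload, as the commit describes: */
    netif_napi_del(&priv->napi);
    unregister_netdev(ndev);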
-
Paul Moore authored
[ Upstream commit 20e2a864 ] When NetLabel is not enabled, e.g. CONFIG_NETLABEL=n, and the system receives a CIPSO tagged packet it is dropped (cipso_v4_validate() returns non-zero). In most cases this is the correct and desired behavior, however, in the case where we are simply forwarding the traffic, e.g. acting as a network bridge, this becomes a problem. This patch fixes the forwarding problem by providing the basic CIPSO validation code directly in ip_options_compile() without the need for the NetLabel or CIPSO code. The new validation code can not perform any of the CIPSO option label/value verification that cipso_v4_validate() does, but it can verify the basic CIPSO option format. The behavior when NetLabel is enabled is unchanged. Signed-off-by: Paul Moore <pmoore@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jason Wang authored
[ Upstream commit cc9b17ad ] We need to validate the number of pages consumed by data_len, otherwise the frags array could be overflowed by userspace. So this patch validates data_len and returns -EMSGSIZE when data_len would occupy more frags than MAX_SKB_FRAGS. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
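A hedged sketch of the check (the variable names follow the commit text; where exactly it sits in the socket send path is not spelled out above):

    /* How many page fragments would 'data_len' bytes of non-linear data need? */
    int npages = (data_len + PAGE_SIZE - 1) >> PAGE_SHIFT;

    if (npages > MAX_SKB_FRAGS)     /* more frags than one skb can hold */
            return -EMSGSIZE;       /* sketch: the real code takes its error path */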
-
Hiroaki SHIMODA authored
[ Upstream commit 914bec10 ] dql->num_queued could change while processing dql_completed(). To provide a consistent calculation, an on-stack copy is used. Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> Cc: Tom Herbert <therbert@google.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Denys Fedoryshchenko <denys@visp.net.lb> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hiroaki SHIMODA authored
[ Upstream commit 25426b79 ] The problem shows up when the pattern below is observed (time flows downward; dql_queued() events on the left, dql_completed() events on the right):
a) initial state
b) X bytes queued
c) Y bytes queued
d) X bytes completed
e) Z bytes queued
f) Y bytes completed
a) dql->limit already has some value and there is no in-flight packet. b) X bytes queued. c) Y bytes queued, exceeding the limit. d) X bytes completed, so dql->prev_ovlimit is set and dql->prev_num_queued is set to Y. e) Z bytes queued. f) Y bytes completed; inprogress and prev_inprogress are true. At f), according to the comment, all_prev_completed becomes true and the limit should be increased. But POSDIFF() ignores the (completed == dql->prev_num_queued) case, so the limit is decreased instead. Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> Cc: Tom Herbert <therbert@google.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Denys Fedoryshchenko <denys@visp.net.lb> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hiroaki SHIMODA authored
[ Upstream commit 0cfd32b7 ] POSDIFF() fails to take the integer overflow case into account. Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> Cc: Tom Herbert <therbert@google.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Denys Fedoryshchenko <denys@visp.net.lb> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
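In the spirit of the fix, a sketch of an overflow-safe positive difference (the exact macro in lib/dynamic_queue_limits.c may differ):

    /* The byte counters are free-running unsigned values, so (A) - (B) wraps
     * correctly; casting the difference to a signed int before comparing is
     * what stops a wrapped counter from looking like a huge positive gap. */
    #define POSDIFF(A, B)   ((int)((A) - (B)) > 0 ? (A) - (B) : 0)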
-
Michael Krufky authored
commit 3e1141e2 upstream. Signed-off-by: Michael Krufky <mkrufky@linuxtv.org> Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Anton Blanchard authored
commit bc1d7702 upstream. We have a bug report where the kernel hits a warning in the cpumask code: WARNING: at include/linux/cpumask.h:107 Which is: WARN_ON_ONCE(cpu >= nr_cpumask_bits); The backtrace is: cpu_cmd cmds xmon_core xmon die xmon is iterating through 0 to NR_CPUS. I'm not sure why we are still open coding this but iterating above nr_cpu_ids is definitely a bug. This patch iterates through all possible cpus, in case we issue a system reset and CPUs in an offline state call in. Perhaps the old code was trying to handle CPUs that were in the partition but were never started (eg kexec into a kernel with an nr_cpus= boot option). They are going to die way before we get into xmon since we haven't set any kernel state up for them. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
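In sketch form, the loop change amounts to:

    unsigned long cpu;

    /* Walk only CPUs that can ever exist in this partition rather than
     * 0..NR_CPUS-1; offline CPUs stay included in case they call in after
     * a system reset. */
    for_each_possible_cpu(cpu) {
            /* per-cpu xmon bookkeeping */
    }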
-
Michael Neuling authored
commit 2f584a14 upstream. Since we are taking a register, this should never have been an sldi. Talking to paulus offline, this is the correct fix. It was introduced by: commit 19ccb76a Author: Paul Mackerras <paulus@samba.org> Date: Sat Jul 23 17:42:46 2011 +1000 Talking to paulus, this shouldn't be a literal. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nicolas Pitre authored
commit 19b52abe upstream. On ARM with the 2-level page table format, a PMD entry is represented by two consecutive section entries covering 2MB of virtual space. However, static mappings always were allowed to use separate 1MB section entries. This means in practice that a static mapping may create half populated PMDs via create_mapping(). Since commit 0536bdf3 (ARM: move iotable mappings within the vmalloc region) those static mappings are located in the vmalloc area. We must ensure no such half populated PMDs are accessible once vmalloc() or ioremap() start looking at the vmalloc area for nearby free virtual address ranges, or various things leading to a kernel crash will happen. Signed-off-by: Nicolas Pitre <nico@linaro.org> Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: "R, Sricharan" <r.sricharan@ti.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-