- 24 Mar, 2018 40 commits
-
Jiri Benc authored
[ Upstream commit d074bf96 ] When IPv6 is compiled but disabled at runtime, __vxlan_sock_add returns -EAFNOSUPPORT. For metadata based tunnels, this causes failure of the whole operation of bringing up the tunnel. Ignore failure of IPv6 socket creation for metadata based tunnels caused by IPv6 not being available. Fixes: b1be00a6 ("vxlan: support both IPv4 and IPv6 sockets in a single vxlan device") Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
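For illustration, a minimal sketch of the idea (not the upstream diff; the flag names and the __vxlan_sock_add() call shape are assumptions based on the description above): an -EAFNOSUPPORT from the IPv6 socket is tolerated for metadata based tunnels, and the IPv4 socket is still created.

```c
/* Hedged sketch: tolerate missing IPv6 support for metadata based tunnels. */
static int vxlan_sock_add_sketch(struct vxlan_dev *vxlan)
{
	bool metadata = vxlan->flags & VXLAN_F_COLLECT_METADATA;	/* flag names assumed */
	bool ipv6 = vxlan->flags & VXLAN_F_IPV6;
	int ret = 0;

	if (ipv6 || metadata) {
		ret = __vxlan_sock_add(vxlan, true);		/* IPv6 socket */
		if (ret < 0 && ret != -EAFNOSUPPORT)
			return ret;
		if (ret == -EAFNOSUPPORT && !metadata)
			return ret;	/* only ignored for metadata based tunnels */
		ret = 0;
	}
	if (!ipv6 || metadata)
		ret = __vxlan_sock_add(vxlan, false);		/* IPv4 socket */
	return ret;
}
```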
-
Dean Jenkins authored
[ Upstream commit 2d6f1da1 ] Before attempting to schedule a work-item onto hu->write_work in hci_uart_tx_wakeup(), check that the Data Link protocol layer is still bound to the HCI UART driver. Failure to perform this protocol check causes a race condition between the work queue hu->write_work running hci_uart_write_work() and the Data Link protocol layer being unbound (closed) in hci_uart_tty_close(). Note hci_uart_tty_close() does have a "cancel_work_sync(&hu->write_work)" but it is ineffective because it cannot prevent work-items being added to hu->write_work after cancel_work_sync() has run. Therefore, add a check for HCI_UART_PROTO_READY into hci_uart_tx_wakeup() which prevents scheduling of the work queue when HCI_UART_PROTO_READY is in the clear state. However, note a small race condition remains because the hci_uart_tx_wakeup() thread can run in parallel with the hci_uart_tty_close() thread so it is possible that a schedule of hu->write_work can occur when HCI_UART_PROTO_READY is cleared. A complete solution needs locking of the threads which is implemented in a future commit. Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
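A sketch of the check described above, assuming the usual hci_uart fields (hu->flags, hu->tx_state, hu->write_work); this is illustrative, not the literal upstream hunk:

```c
int hci_uart_tx_wakeup(struct hci_uart *hu)
{
	/* Do not schedule the work item once the protocol has been unbound. */
	if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
		return 0;

	if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state)) {
		set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state);
		return 0;
	}

	schedule_work(&hu->write_work);
	return 0;
}
```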
-
Dean Jenkins authored
[ Upstream commit 048e1bd3 ] Before attempting to dequeue a Data Link protocol encapsulated message, check that the Data Link protocol is still bound to the HCI UART driver. This makes the code consistent with the usage of the other proto function pointers. Therefore, add a check for HCI_UART_PROTO_READY into hci_uart_dequeue() and return NULL if the Data Link protocol is not bound. This is needed for robustness as there is a scheduling race condition. hci_uart_write_work() is scheduled to run via work queue hu->write_work from hci_uart_tx_wakeup(). Therefore, there is a delay between scheduling hci_uart_write_work() to run and hci_uart_dequeue() running whereby the Data Link protocol layer could become unbound during the scheduling delay. In this case, without the check, the call to the unbound Data Link protocol layer dequeue function can crash. It is noted that hci_uart_tty_close() has a "cancel_work_sync(&hu->write_work)" statement but this only reduces the window of the race condition because it is possible for a new work-item to be added to work queue hu->write_work after the call to cancel_work_sync(). For example, Data Link layer retransmissions can be added to the work queue after the cancel_work_sync() has finished. Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
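Along the same lines, a hedged sketch of the dequeue-side check (field names assumed, simplified context):

```c
static struct sk_buff *hci_uart_dequeue(struct hci_uart *hu)
{
	struct sk_buff *skb = hu->tx_skb;

	if (!skb) {
		/* Only call into the Data Link protocol while it is still bound. */
		if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
			skb = hu->proto->dequeue(hu);
	} else {
		hu->tx_skb = NULL;
	}

	return skb;
}
```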
-
Valentin Longchamp authored
[ Upstream commit 2ccf80b7 ] Because of integer computation rounding in u-boot (which sets the QE brg-frequency DTS prop), the clk value is 99999999 Hz even though it is 100 MHz. When setting brg clks that are exact divisors of 100 MHz, this small difference plays a role and can result in lower clocks being output (for instance 20 MHz - divide by 5 - results in 16.666 MHz - divide by 6). This patch fixes that by "forcing" the brg_clk to the nearest kHz when the difference is below 2 integer rounding errors (i.e. 4). Signed-off-by: Valentin Longchamp <valentin.longchamp@keymile.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
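A small sketch of the rounding described above (helper name hypothetical): 99999999 Hz is 999 Hz away from the next multiple of 1 kHz, which is within the tolerance of 4, so it snaps to 100000000 Hz.

```c
/* Snap brg_clk to the nearest kHz when it is within a few Hz of it. */
static u32 qe_snap_brg_clk(u32 brg_clk)
{
	u32 remainder = brg_clk % 1000;

	if (remainder && (remainder < 4 || remainder > 1000 - 4))
		brg_clk = ((brg_clk + 500) / 1000) * 1000;

	return brg_clk;
}
```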
-
Christophe Leroy authored
[ Upstream commit 8b8642af ] Since commit 5093bb96 ("powerpc/QE: switch to the cpm_muram implementation"), muram area is not part of immrbar mapping anymore so immrbar_virt_to_phys() is not usable anymore. Fixes: 5093bb96 ("powerpc/QE: switch to the cpm_muram implementation") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Li Yang <pku.leo@gmail.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emil Tantilov authored
[ Upstream commit f87fc447 ] IXGBEVF_QUEUE_STATS_LEN is based on ixgbevf_stats, not ixgbe_stats. This change fixes a bug where ethtool -S displayed some empty fields. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
[ Upstream commit c52c47e4 ] I've hit a lockdep splat with generic/270 test complaining that: 3216.fsstress.b/3533 is trying to acquire lock: (jbd2_handle){++++..}, at: [<ffffffff813152e0>] jbd2_log_wait_commit+0x0/0x150 but task is already holding lock: (jbd2_handle){++++..}, at: [<ffffffff8130bd3b>] start_this_handle+0x35b/0x850 The underlying problem is that jbd2_journal_force_commit_nested() (called from ext4_should_retry_alloc()) may get called while a transaction handle is started. In such a case it takes care to not wait for commit of the running transaction (which would deadlock) but only for a commit of a transaction that is already committing (which is safe as that doesn't wait for any filesystem locks). In fact there are also other callers of jbd2_log_wait_commit() that take care to pass the tid of a transaction that is already committing, and for those cases the lockdep instrumentation is too restrictive, leading to false positive reports. Fix the problem by calling jbd2_might_wait_for_commit() from jbd2_log_wait_commit() only if the transaction isn't already committing. Fixes: 1eaa566d Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
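A hedged sketch of the fix inside jbd2_log_wait_commit() (field and helper names as found in the jbd2 headers, but treat them as assumptions): the lockdep annotation is only taken when the target transaction is not already committing.

```c
	read_lock(&journal->j_state_lock);
	/*
	 * Callers that pass the tid of an already committing (or committed)
	 * transaction cannot deadlock on open handles, so skip the lockdep
	 * annotation for them to avoid false positives.
	 */
	if (tid_gt(tid, journal->j_commit_sequence) &&
	    (!journal->j_committing_transaction ||
	     journal->j_committing_transaction->t_tid != tid)) {
		read_unlock(&journal->j_state_lock);
		jbd2_might_wait_for_commit(journal);
		read_lock(&journal->j_state_lock);
	}
```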
-
Mario Kleiner authored
[ Upstream commit 60b95d70 ] So far we only allowed for 1 retry and just failed the query - and thereby high precision vblank timestamping - if we did not get a reasonable result, as such a failure wasn't considered all too horrible. There are a few NVidia gpu models out there which may need a bit more than 1 retry to get a successful query result under some conditions. Since Linux 4.4 the update code for vblank counter and timestamp in drm_update_vblank_count() changed so that the implementation assumes that high precision vblank timestamping of a kms driver either consistently succeeds or consistently fails for a given video mode and encoder/connector combo. Iow. switching from success to fail or vice versa on a modeset or connector change is ok, but spurious temporary failure for a given setup can confuse the core code and potentially cause bad miscounting of vblanks and confusion or hangs in userspace clients which rely on vblank stuff, e.g., desktop compositors. Therefore change the max retry count to a larger number - more than any gpu so far is known to need to succeed, but still low enough so that these queries which do also happen in vblank interrupt are still fast enough to be not disastrously long if something would go badly wrong with them. As such sporadic retries only happen seldom even on affected gpu's, this could mean a vblank irq could take a few dozen microseconds longer every few hours of uptime -- better than a desktop compositor randomly hanging every couple of hours or days of uptime in a hard to reproduce manner. Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chunming Zhou authored
[ Upstream commit 51687759 ] [ 413.687439] BUG: unable to handle kernel NULL pointer dereference at 0000000000000548 [ 413.687479] IP: [<ffffffff8109b175>] to_live_kthread+0x5/0x60 [ 413.687507] PGD 1efd12067 [ 413.687519] PUD 1efd11067 [ 413.687531] PMD 0 [ 413.687543] Oops: 0000 [#1] SMP [ 413.687557] Modules linked in: amdgpu(OE) ttm(OE) drm_kms_helper(E) drm(E) i2c_algo_bit(E) fb_sys_fops(E) syscopyarea(E) sysfillrect(E) sysimgblt(E) rpcsec_gss_krb5(E) nfsv4(E) nfs(E) fscache(E) snd_hda_codec_realtek(E) snd_hda_codec_generic(E) snd_hda_codec_hdmi(E) snd_hda_intel(E) eeepc_wmi(E) snd_hda_codec(E) asus_wmi(E) snd_hda_core(E) sparse_keymap(E) snd_hwdep(E) video(E) snd_pcm(E) snd_seq_midi(E) joydev(E) snd_seq_midi_event(E) snd_rawmidi(E) snd_seq(E) snd_seq_device(E) snd_timer(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) snd(E) crc32_pclmul(E) ghash_clmulni_intel(E) soundcore(E) aesni_intel(E) aes_x86_64(E) lrw(E) gf128mul(E) glue_helper(E) ablk_helper(E) cryptd(E) shpchp(E) serio_raw(E) i2c_piix4(E) 8250_dw(E) i2c_designware_platform(E) i2c_designware_core(E) mac_hid(E) binfmt_misc(E) [ 413.687894] parport_pc(E) ppdev(E) lp(E) parport(E) nfsd(E) auth_rpcgss(E) nfs_acl(E) lockd(E) grace(E) sunrpc(E) autofs4(E) hid_generic(E) usbhid(E) hid(E) psmouse(E) ahci(E) r8169(E) mii(E) libahci(E) wmi(E) [ 413.687989] CPU: 13 PID: 1134 Comm: kworker/13:2 Tainted: G OE 4.9.0-custom #4 [ 413.688019] Hardware name: System manufacturer System Product Name/PRIME B350-PLUS, BIOS 0606 04/06/2017 [ 413.688089] Workqueue: events amd_sched_job_timedout [amdgpu] [ 413.688116] task: ffff88020f9657c0 task.stack: ffffc90001a88000 [ 413.688139] RIP: 0010:[<ffffffff8109b175>] [<ffffffff8109b175>] to_live_kthread+0x5/0x60 [ 413.688171] RSP: 0018:ffffc90001a8bd60 EFLAGS: 00010282 [ 413.688191] RAX: ffff88020f0073f8 RBX: ffff88020f000000 RCX: 0000000000000000 [ 413.688217] RDX: 0000000000000001 RSI: ffff88020f9670c0 RDI: 0000000000000000 [ 413.688243] RBP: ffffc90001a8bd78 R08: 0000000000000000 R09: 0000000000001000 [ 413.688269] R10: 0000006051b11a82 R11: 0000000000000001 R12: 0000000000000000 [ 413.688295] R13: ffff88020f002770 R14: ffff88020f004838 R15: ffff8801b23c2c60 [ 413.688321] FS: 0000000000000000(0000) GS:ffff88021ef40000(0000) knlGS:0000000000000000 [ 413.688352] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 413.688373] CR2: 0000000000000548 CR3: 00000001efd0f000 CR4: 00000000003406e0 [ 413.688399] Stack: [ 413.688407] ffffffff8109b304 ffff88020f000000 0000000000000070 ffffc90001a8bdf0 [ 413.688439] ffffffffa05ce29d ffffffffa052feb7 ffffffffa07b5820 ffffc90001a8bda0 [ 413.688470] ffffffff00000018 ffff8801bb88f060 0000000001a8bdb8 ffff88021ef59280 [ 413.688502] Call Trace: [ 413.688514] [<ffffffff8109b304>] ? kthread_park+0x14/0x60 [ 413.688555] [<ffffffffa05ce29d>] amdgpu_gpu_reset+0x7d/0x670 [amdgpu] [ 413.688589] [<ffffffffa052feb7>] ? drm_printk+0x97/0xa0 [drm] [ 413.688643] [<ffffffffa0698136>] amdgpu_job_timedout+0x46/0x50 [amdgpu] [ 413.688700] [<ffffffffa06969e7>] amd_sched_job_timedout+0x17/0x20 [amdgpu] [ 413.688727] [<ffffffff81095493>] process_one_work+0x153/0x3f0 [ 413.688751] [<ffffffff81095c5b>] worker_thread+0x12b/0x4b0 [ 413.688773] [<ffffffff8100392e>] ? do_syscall_64+0x6e/0x180 [ 413.688795] [<ffffffff81095b30>] ? rescuer_thread+0x350/0x350 [ 413.688818] [<ffffffff8100392e>] ? do_syscall_64+0x6e/0x180 [ 413.688839] [<ffffffff8109b423>] kthread+0xd3/0xf0 [ 413.688858] [<ffffffff8109b350>] ? 
kthread_park+0x60/0x60 [ 413.688881] [<ffffffff817e1ee5>] ret_from_fork+0x25/0x30 [ 413.688901] Code: 25 40 d3 00 00 48 8b 80 48 05 00 00 48 89 e5 5d 48 8b 40 c8 48 c1 e8 02 83 e0 01 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 <48> 8b b7 48 05 00 00 55 48 89 e5 48 85 f6 74 31 8b 97 f8 18 00 [ 413.689045] RIP [<ffffffff8109b175>] to_live_kthread+0x5/0x60 [ 413.689064] RSP <ffffc90001a8bd60> [ 413.689076] CR2: 0000000000000548 [ 413.697985] ---[ end trace 0a314a64821f84e9 ]--- The root cause is some ring doesn't have scheduler, like KIQ ring Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans de Goede authored
[ Upstream commit 2bde7c32 ] The power table addresses should be contiguous, but there was a hole where 0x34 was missing. On most devices this is not a problem as addresses above 0x34 are used for the BUC# converters, which are not used in the DSDTs I've access to, but after the BUC# converters there is a field named GPI1 in the DSDTs, which does get used in some cases and ended up turning BUC6 on and off due to the wrong addresses, resulting in turning the entire device off (or causing it to reboot). Removing the hole in the addresses fixes this, fixing one of my Bay Trail tablets turning off while booting the mainline kernel. While at it, add comments with the field names used in the DSDTs to make it easier to compare the register and bits used at each address with the datasheet. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Robert Lippert authored
[ Upstream commit 2c1175c2 ] Commit c49c0976 ("ipmi: Don't call receive handler in the panic context") means that the panic_recv_free is not called during a panic and the atomic count does not drop to 0. Fix this by only expecting one decrement of the atomic variable which comes from panic_smi_free. Signed-off-by: Robert Lippert <rlippert@google.com> Signed-off-by: Corey Minyard <cminyard@mvista.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Oleksij Rempel authored
[ Upstream commit e9b61518 ] Some laptops, for example the ASUS UX330UAK, have a broken als_get function but a working als_set function. In this case, ALS will stay turned off:

    Method (WMNB, 3, Serialized) {
        ...
        If (Local0 == 0x53545344) {
            ...
            If (IIA0 == 0x00050001) {
                If (!ALSP) {
                    Return (0x02)
                }
                Local0 = (GALS & 0x10)    <<<---- bug, should be: (GALS () & 0x10)
                If (Local0) {
                    Return (0x00050001)
                } Else {
                    Return (0x00050000)
                }
            }
        .....
        If (Local0 == 0x53564544) {
            ...
            If (IIA0 == 0x00050001) {
                Return (ALSC (IIA1))
            }
        ......
    Method (GALS, 0, NotSerialized) {
        Local0 = Zero
        Local0 |= 0x20
        If (ALAE) {
            Local0 |= 0x10
        }
        Local1 = 0x0A
        Local1 <<= 0x08
        Local0 |= Local1
        Return (Local0)
    }

Since it works without problems on Windows, I assume the ASUS WMI driver for Windows never tries to get the ALS state and instead sets it to ON by default. This patch does the same: turn ALS on by default. Signed-off-by: Oleksij Rempel <linux@rempel-privat.de> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tadeusz Struk authored
[ Upstream commit 22546b74 ] Soft lockups can occur because the MAD processing on different CPUs acquires the spinlock dc8051_lock:

[534552.835870] [<ffffffffa026f993>] ? read_dev_port_cntr.isra.37+0x23/0x160 [hfi1]
[534552.835880] [<ffffffffa02775af>] read_dev_cntr+0x4f/0x60 [hfi1]
[534552.835893] [<ffffffffa028d7cd>] pma_get_opa_portstatus+0x64d/0x8c0 [hfi1]
[534552.835904] [<ffffffffa0290e7d>] hfi1_process_mad+0x48d/0x18c0 [hfi1]
[534552.835908] [<ffffffff811dc1f1>] ? __slab_free+0x81/0x2f0
[534552.835936] [<ffffffffa024c34e>] ? ib_mad_recv_done+0x21e/0xa30 [ib_core]
[534552.835939] [<ffffffff811dd153>] ? __kmalloc+0x1f3/0x240
[534552.835947] [<ffffffffa024c3fb>] ib_mad_recv_done+0x2cb/0xa30 [ib_core]
[534552.835955] [<ffffffffa0237c85>] __ib_process_cq+0x55/0xd0 [ib_core]
[534552.835962] [<ffffffffa0237d70>] ib_cq_poll_work+0x20/0x60 [ib_core]
[534552.835964] [<ffffffff810a7f3b>] process_one_work+0x17b/0x470
[534552.835966] [<ffffffff810a8d76>] worker_thread+0x126/0x410
[534552.835969] [<ffffffff810a8c50>] ? rescuer_thread+0x460/0x460
[534552.835971] [<ffffffff810b052f>] kthread+0xcf/0xe0
[534552.835974] [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140
[534552.835977] [<ffffffff81696418>] ret_from_fork+0x58/0x90
[534552.835980] [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140

This issue is made worse when the 8051 is busy and the reads take longer. Fix by using a non-spinning lock procedure. Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Carpenter authored
[ Upstream commit f0bb2d44 ] We need to call spin_unlock_irqrestore() instead of vanilla spin_unlock() on this error path. Fixes: 119a8e70 ("IB/rdmavt: Add AH to rdmavt") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
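The pattern being corrected, as a hedged illustration (lock and counter names are generic, not the exact rdmavt fields):

```c
	unsigned long flags;

	spin_lock_irqsave(&dev->n_ahs_lock, flags);
	if (dev->n_ahs_allocated == dev->ah_limit) {
		/* The error path must restore interrupts too, not just drop the lock. */
		spin_unlock_irqrestore(&dev->n_ahs_lock, flags);
		return ERR_PTR(-ENOMEM);
	}
	dev->n_ahs_allocated++;
	spin_unlock_irqrestore(&dev->n_ahs_lock, flags);
```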
-
Kishon Vijay Abraham I authored
[ Upstream commit 2c949ce3 ] The PCIe programming sequence in TRM suggests CLKSTCTRL of PCIe should be set to SW_WKUP. There are no issues when CLKSTCTRL is set to HW_AUTO in RC mode. However in EP mode, the host system is not able to access the MEMSPACE and setting the CLKSTCTRL to SW_WKUP fixes it. Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Carpenter authored
[ Upstream commit 7dde07e9 ] According to my static checker we should unlock here before the return. That seems reasonable to me as well. Fixes: b9e69e12 ("netfilter: xtables: don't hook tables by default") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
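The same class of missing-unlock bug, shown with generic names (a hedged sketch, not the netfilter hunk itself):

```c
	mutex_lock(&table_mutex);
	ret = setup_table(table);
	if (ret < 0) {
		mutex_unlock(&table_mutex);	/* unlock before the early return */
		return ret;
	}
	mutex_unlock(&table_mutex);
	return 0;
```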
-
yangbo lu authored
[ Upstream commit a627f025 ] The ls1046a datasheet specifies that the max SD clock frequency for eSDHC SDR104/HS200 is 167 MHz, and the ls1012a datasheet specifies 125 MHz for the ls1012a. So this patch adds that limitation. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
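A sketch of the limitation (macro and helper names are hypothetical; the values are the datasheet limits cited above):

```c
#define ESDHC_LS1046A_HS200_MAX_HZ	167000000u
#define ESDHC_LS1012A_HS200_MAX_HZ	125000000u

/* Clamp the requested SDR104/HS200 clock to the SoC-specific maximum. */
static unsigned int esdhc_limit_hs200_clk(unsigned int requested_hz,
					  unsigned int soc_max_hz)
{
	return min(requested_hz, soc_max_hz);
}
```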
-
Mohammed Shafi Shajakhan authored
[ Upstream commit 21a8e9dd ] The existing API 'ieee80211_get_sdata_band' returns the default 2 GHz band even if the channel context configuration is NULL. This crashes for chipsets which support 5 GHz alone when it tries to access members of 'sband'. The channel context configuration can be NULL in the multi-vif case and when a channel switch is in progress (or) when it fails. Fix this by replacing the API 'ieee80211_get_sdata_band' with 'ieee80211_get_sband', which returns a NULL pointer for sband when the channel configuration is NULL. An example scenario is as below: In multi-vif mode (AP + STA) with drivers like ath10k, when we do a channel switch in the AP vif (which has a number of clients connected) and a STA vif which is connected to some other AP, when the channel switch in the AP vif fails while the STA vif tries to connect to the other AP, there is a window where the channel context is NULL/invalid and this results in a crash while the clients connected to the AP vif try to reconnect. This race is very similar to the one investigated by Michal in https://patchwork.kernel.org/patch/3788161/ and it does happen with hardware that supports 5 GHz alone after long hours of testing with continuous channel switches on the AP vif:

ieee80211 phy0: channel context reservation cannot be finalized because some interfaces aren't switching
wlan0: failed to finalize CSA, disconnecting
wlan0-1: deauthenticating from 8c:fd:f0:01:54:9c by local choice (Reason: 3=DEAUTH_LEAVING)
WARNING: CPU: 1 PID: 19032 at net/mac80211/ieee80211_i.h:1013 sta_info_alloc+0x374/0x3fc [mac80211]
[<bf77272c>] (sta_info_alloc [mac80211])
[<bf78776c>] (ieee80211_add_station [mac80211]))
[<bf73cc50>] (nl80211_new_station [cfg80211])
Unable to handle kernel NULL pointer dereference at virtual address 00000014
pgd = d5f4c000
Internal error: Oops: 17 [#1] PREEMPT SMP ARM
PC is at sta_info_alloc+0x380/0x3fc [mac80211]
LR is at sta_info_alloc+0x37c/0x3fc [mac80211]
[<bf772738>] (sta_info_alloc [mac80211])
[<bf78776c>] (ieee80211_add_station [mac80211])
[<bf73cc50>] (nl80211_new_station [cfg80211]))

Cc: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qti.qualcomm.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
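A hedged sketch of the call-site pattern after the API change (ieee80211_get_sband is the helper named above; the surrounding code is illustrative):

```c
	struct ieee80211_supported_band *sband;

	sband = ieee80211_get_sband(sdata);
	if (!sband)
		return NULL;	/* channel context gone: fail sta_info_alloc() instead of crashing */

	/* ... only now is it safe to dereference sband members ... */
```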
-
Paolo Abeni authored
[ Upstream commit 1442f6f7 ] When creating a new ipvs service, ipv6 addresses are always accepted if CONFIG_IP_VS_IPV6 is enabled. On dest creation the address family is not explicitly checked. This allows user-space to configure ipvs services even if the system is booted with ipv6.disable=1. In specific configurations, ipvs can try to call ipv6 routing code at setup time, causing the kernel to oops due to fib6_rules_ops being NULL. This change addresses the issue by adding a check for the ipv6 module being enabled while validating ipv6 service operations and adding the same validation for dest operations. According to git history, this issue is apparently present since the introduction of ipv6 support, and the oops can be triggered since commit 09571c7a ("IPVS: Add function to determine if IPv6 address is local") Fixes: 09571c7a ("IPVS: Add function to determine if IPv6 address is local") Signed-off-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
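A hedged sketch of the validation (ipv6_mod_enabled() is an existing kernel helper; the surrounding structure and field names are assumptions):

```c
#ifdef CONFIG_IP_VS_IPV6
	if (usvc->af == AF_INET6 && !ipv6_mod_enabled())
		return -EAFNOSUPPORT;	/* booted with ipv6.disable=1: refuse instead of oopsing */
#endif
```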
-
Pan Bian authored
[ Upstream commit 9e966527 ] Function dev_alloc_skb() will return a NULL pointer if there is not enough memory. However, in function WILC_WFI_mon_xmit(), its return value is used without validation. This may result in a bad memory access bug. This patch fixes the bug. Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
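The general shape of the added check (a hedged sketch; the length calculation and error handling in the real driver may differ):

```c
	skb2 = dev_alloc_skb(skb->len + rtap_len);	/* rtap_len: radiotap header size, assumed */
	if (!skb2) {
		dev_kfree_skb(skb);	/* drop the frame rather than dereference NULL */
		return -ENOMEM;
	}
```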
-
Sameer Wadgaonkar authored
[ Upstream commit 3c2bf0bd ] The root issue is that we are not allowed to have items on the stack being passed to "DMA"-like operations. In this case we have a vmcall and an inline completion of a SCSI command. This patch fixes the issue by moving the variables on the stack in do_scsi_nolinuxstat() to heap memory. Signed-off-by: Sameer Wadgaonkar <sameer.wadgaonkar@unisys.com> Signed-off-by: David Kershner <david.kershner@unisys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kuppuswamy Sathyanarayanan authored
[ Upstream commit 881ebd22 ] According to Whiskey Cove PMIC spec, bit 7 of GPIOIRQ0_REG belongs to battery IO. So we should skip this bit when checking for GPIO IRQ pending status. Otherwise, wcove_gpio_irq_handler() might go into the infinite loop until IRQ "pending" status becomes 0. This patch fixes this issue. Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
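A hedged sketch of the IRQ loop with the battery-IO bit masked off (the dispatch helper is hypothetical):

```c
	/* Bit 7 of GPIOIRQ0 belongs to battery IO, not to a GPIO line. */
	pending &= GENMASK(6, 0);

	while (pending) {
		int gpio = __ffs(pending);

		handle_wcove_gpio_irq(gpio);	/* hypothetical dispatch helper */
		pending &= ~BIT(gpio);
	}
```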
-
Baoquan He authored
[ Upstream commit da63b6b2 ] Dave found that a kdump kernel with KASLR enabled will reset to the BIOS immediately if physical randomization failed to find a new position for the kernel. A kernel with the 'nokaslr' option works in this case. The reason is that KASLR will install a new page table for the identity mapping, while it missed building it for the original kernel location if KASLR physical randomization fails. This only happens in the kexec/kdump kernel, because the identity mapping has been built for kexec/kdump in the 1st kernel for the whole memory by calling init_pgtable(). Here if physical randomization fails, it won't build the identity mapping for the original area of the kernel but change to a new page table '_pgtable'. The kernel will then triple fault immediately because it has no identity mappings. The normal kernel won't see this bug, because it comes here via startup_32() and CR3 will be set to _pgtable already. In startup_32() the identity mapping is built for the 0~4G area. In KASLR we just append to the existing area instead of entirely overwriting it for on-demand identity mapping building. So the identity mapping for the original area of the kernel is still there. To fix it we just switch to the new identity mapping page table when physical KASLR succeeds. Otherwise we keep the old page table unchanged just like "nokaslr" does. Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Dave Young <dyoung@redhat.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: Borislav Petkov <bp@suse.de> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Garnier <thgarnie@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yinghai Lu <yinghai@kernel.org> Link: http://lkml.kernel.org/r/1493278940-5885-1-git-send-email-bhe@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
[ Upstream commit a4e84aae ] mtip32xx assumes that the 'request_idx' passed to .init_request() is the tag of the request, and uses it as the request's tag to initialize the command header. After the MQ IO scheduler was introduced, the tag assigned to a request is no longer the same as the request index, which causes strange hardware failures on mtip32xx and can even trigger a whole-system panic. This patch fixes the issue by initializing the command header via the request's real tag. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
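A hedged sketch of initializing the command header from rq->tag (the struct and field names follow the mtip32xx driver but should be treated as assumptions, not the exact upstream hunk):

```c
static void mtip_init_cmd_header_sketch(struct request *rq)
{
	struct driver_data *dd = rq->q->queuedata;
	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);

	/* Index the header slot by the tag the block layer actually assigned. */
	cmd->command_header = dd->port->command_list +
			      (sizeof(struct mtip_cmd_hdr) * rq->tag);
	cmd->command_header_dma = dd->port->command_list_dma +
				  (sizeof(struct mtip_cmd_hdr) * rq->tag);
}
```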
-
Keerthy authored
[ Upstream commit 85fdaf8e ] The POWERHOLD signal has higher priority than the DEV_ON bit, so power off will not happen if POWERHOLD is held high. Hence reset the MUX to GPIO_7 mode to release POWERHOLD, so that the DEV_ON bit can take effect and power off the PMIC. PMIC power off happens in dire situations like thermal shutdown, so irrespective of the POWERHOLD setting, go ahead and turn off the powerhold. Currently poweroff is broken on boards that have powerhold enabled; this fixes poweroff on those boards. Signed-off-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rask Ingemann Lambertsen authored
[ Upstream commit 8461cf20 ] commit b101829a029a ("mfd: axp20x: Fix AXP806 access errors on cold boot") was intended to fix the case where a board uses an AXP806 in slave mode, but the boot loader leaves it in master mode for lack of AXP806 support. But now the driver breaks on boards where the PMIC is operating in master mode. To let the device tree describe which mode of operation is needed, this patch introduces a new property "xpowers,master-mode". Fixes: 204ae296 ("mfd: axp20x: Add bindings for AXP806 PMIC") Signed-off-by: Rask Ingemann Lambertsen <rask@formelder.dk> Acked-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Colin Ian King authored
[ Upstream commit c894acc7 ] Ensure that when an invalid value in ret or value is found -EINVAL is returned. A previous commit broke the way the return error is being returned and instead caused the return code in ret to be re-assigned rather than be returned. Fixes: 5d9854ea ("iio: hid-sensor: Store restore poll and hysteresis on S3") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
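The restored control flow, roughly (a sketch only; the sensor-hub call that fills ret and value is elided and its exact signature is not shown here):

```c
	/* ret comes from the sensor-hub access, value is the raw field read back. */
	if (ret < 0 || value < 0)
		return -EINVAL;		/* propagate the error instead of silently overwriting ret */

	return 0;
```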
-
Lv Zheng authored
[ Upstream commit bb1e23e6 ] ACPICA commit 637b88de24a78c20478728d9d66632b06fcaa5bf If the IORT template is compiled and then iort.aml binary disassembled to iort.dsl, SMMUv1 node lists incorrect offset for SMMU_Nsg_cfg_irpt Interrupt: [0ECh 0236 8] SMMU_Nsg_irpt Interrupt : 0000000000000000 [0ECh 0236 8] SMMU_Nsg_cfg_irpt Interrupt : 0000000000000000 This is because iasl hasn't implemented SMMU GSI decoding yet. This patch fixes this issue by preparing structures for decoding IORT SMMU GSI. ACPICA BZ 1340, reported by Alexei Fedorov, fixed by Lv Zheng. Link: https://github.com/acpica/acpica/commit/637b88de Link: https://bugs.acpica.org/show_bug.cgi?id=1340 Reported-by: Alexei Fedorov <Alexei.Fedorov@arm.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
[ Upstream commit cf147085 ] ieee80211_frame_acked is called when a frame is acked by the peer. In case this is a management frame, we check if this is an SMPS frame, in which case we can update our antenna configuration. When we parse the management frame we look at the category in case it is an action frame. That byte sits after the IV in case the frame was encrypted. This means that if the frame was encrypted, we basically look at the IV instead of looking at the category. It is then theoretically possible that we think that an SMPS action frame was acked where really we had another frame that was encrypted. Since the only management frame whose ack needs to be tracked is the SMPS action frame, and that frame is not a robust management frame, it will never be encrypted. The easiest way to fix this problem is then to not look at frames that were encrypted. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Martin Brandenburg authored
[ Upstream commit b5a9d61e ] When the computer is turned off, all the processes are killed and then all the filesystems are umounted. OrangeFS should not wait for the userspace daemon to come back in that case. This only works for plain umount(2). To actually take advantage of this interactively, `umount -f' is needed; otherwise umount will issue a statfs first, which will wait for the userspace daemon to come back. Signed-off-by: Martin Brandenburg <martin@omnibond.com> Signed-off-by: Mike Marshall <hubcap@omnibond.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Filipe Manana authored
[ Upstream commit be2d253c ] If the call to btrfs_qgroup_reserve_data() failed, we were leaking an extent map structure. The failure can happen either due to an -ENOMEM condition or, when quotas are enabled, due to -EDQUOT for example. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
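A hedged sketch of the error path (the function names are the btrfs helpers mentioned above; the surrounding context and call signature are simplified):

```c
	ret = btrfs_qgroup_reserve_data(inode, start, len);
	if (ret < 0) {
		free_extent_map(em);	/* previously leaked on -ENOMEM or -EDQUOT */
		return ret;
	}
```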
-
Filipe Manana authored
[ Upstream commit e1cbfd7b ] Normally we don't have inline extents followed by regular extents, but there's currently at least one harmless case where this happens. For example, when the page size is 4Kb and compression is enabled: $ mkfs.btrfs -f /dev/sdb $ mount -o compress /dev/sdb /mnt $ xfs_io -f -c "pwrite -S 0xaa 0 4K" -c "fsync" /mnt/foobar $ xfs_io -c "pwrite -S 0xbb 8K 4K" -c "fsync" /mnt/foobar In this case we get a compressed inline extent, representing 4Kb of data, followed by a hole extent and then a regular data extent. The inline extent was not expanded/converted to a regular extent exactly because it represents 4Kb of data. This does not cause any apparent problem (such as the issue solved by commit e1699d2d ("btrfs: add missing memset while reading compressed inline extents")) except trigger an unexpected case in the incremental send code path that makes us issue an operation to write a hole when it's not needed, resulting in more writes at the receiver and wasting space at the receiver. So teach the incremental send code to deal with this particular case. The issue can be currently triggered by running fstests btrfs/137 with compression enabled (MOUNT_OPTIONS="-o compress" ./check btrfs/137). Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Filipe Manana authored
[ Upstream commit 1c81ba23 ] When using compression, if we fail to insert an inline extent we incorrectly end up attempting to free the reserved data space twice, once through extent_clear_unlock_delalloc(), because we pass it the flag EXTENT_DO_ACCOUNTING, and once through a direct call to btrfs_free_reserved_data_space_noquota(). This results in a trace like the following: [ 834.576240] ------------[ cut here ]------------ [ 834.576825] WARNING: CPU: 2 PID: 486 at fs/btrfs/extent-tree.c:4316 btrfs_free_reserved_data_space_noquota+0x60/0x9f [btrfs] [ 834.579501] Modules linked in: btrfs crc32c_generic xor raid6_pq ppdev i2c_piix4 acpi_cpufreq psmouse tpm_tis parport_pc pcspkr serio_raw tpm_tis_core sg parport evdev i2c_core tpm button loop autofs4 ext4 crc16 jbd2 mbcache sr_mod cdrom sd_mod ata_generic virtio_scsi ata_piix virtio_pci libata virtio_ring virtio scsi_mod e1000 floppy [last unloaded: btrfs] [ 834.592116] CPU: 2 PID: 486 Comm: kworker/u32:4 Not tainted 4.10.0-rc8-btrfs-next-37+ #2 [ 834.593316] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014 [ 834.595273] Workqueue: btrfs-delalloc btrfs_delalloc_helper [btrfs] [ 834.596103] Call Trace: [ 834.596103] dump_stack+0x67/0x90 [ 834.596103] __warn+0xc2/0xdd [ 834.596103] warn_slowpath_null+0x1d/0x1f [ 834.596103] btrfs_free_reserved_data_space_noquota+0x60/0x9f [btrfs] [ 834.596103] compress_file_range.constprop.42+0x2fa/0x3fc [btrfs] [ 834.596103] ? submit_compressed_extents+0x3a7/0x3a7 [btrfs] [ 834.596103] async_cow_start+0x32/0x4d [btrfs] [ 834.596103] btrfs_scrubparity_helper+0x187/0x3e7 [btrfs] [ 834.596103] btrfs_delalloc_helper+0xe/0x10 [btrfs] [ 834.596103] process_one_work+0x273/0x4e4 [ 834.596103] worker_thread+0x1eb/0x2ca [ 834.596103] ? rescuer_thread+0x2b6/0x2b6 [ 834.596103] kthread+0x100/0x108 [ 834.596103] ? __list_del_entry+0x22/0x22 [ 834.596103] ret_from_fork+0x2e/0x40 [ 834.611656] ---[ end trace 719902fe6bdef08f ]--- So fix this by not calling directly btrfs_free_reserved_data_space_noquota() if an error happened. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pan Bian authored
[ Upstream commit 9dc7efd3 ] Function create_singlethread_workqueue() will return a NULL pointer if there is not enough memory, and its return value should be validated before use. However, in function rndis_wlan_bind(), its return value is not checked. This may cause NULL dereference bugs. This patch fixes it. Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pan Bian authored
[ Upstream commit dc3f89c3 ] Function alloc_workqueue() will return a NULL pointer if there is not enough memory, and its return value should be validated before use. However, in function if_spi_probe(), its return value is not checked. This may result in a NULL dereference bug. This patch fixes the bug. Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
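The missing check, as a hedged sketch (the workqueue name, flags, field names, and unwind label are assumptions for illustration):

```c
	priv->workqueue = alloc_workqueue("libertas_spi", WQ_MEM_RECLAIM, 0);
	if (!priv->workqueue) {
		err = -ENOMEM;
		goto free_card;		/* unwind whatever probe had set up so far */
	}
```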
-
Pan Bian authored
[ Upstream commit 5fb01e91 ] Function alloc_skb() will return a NULL pointer if there is not enough memory. However, in function mt7601u_mcu_msg_alloc(), its return value is not validated before it is used. This patch fixes it. Signed-off-by: Pan Bian <bianpan2016@163.com> Acked-by: Jakub Kicinski <kubakici@wp.pl> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shrirang Bagul authored
[ Upstream commit 7383d44b ] This patch fixes the sensor platform data initialisation for st_pressure and st_accel device drivers. Without this patch, the driver fails to register the sensors when the user removes and re-loads the driver. 1. Unload the kernel modules for st_pressure $ sudo rmmod st_pressure_i2c $ sudo rmmod st_pressure 2. Re-load the driver $ sudo insmod st_pressure $ sudo insmod st_pressure_i2c Signed-off-by: Jonathan Cameron <jic23@kernel.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
NeilBrown authored
[ Upstream commit 99bbf6ec ] Consider the sequence of commands:

    mkdir -p /import/nfs /import/bind /import/etc
    mount --bind / /import/bind
    mount --make-private /import/bind
    mount --bind /import/etc /import/bind/etc
    exportfs -o rw,no_root_squash,crossmnt,async,no_subtree_check localhost:/
    mount -o vers=4 localhost:/ /import/nfs
    ls -l /import/nfs/etc

You would not expect this to report a stale file handle. Yet it does. The manipulations under /import/bind cause the dentry for /etc to get the DCACHE_MOUNTED flag set, even though nothing is mounted on /etc. This causes nfsd to call nfsd_cross_mnt() even though there is no mountpoint. So an upcall to mountd for "/etc" is performed. The 'crossmnt' flag on the export of / causes mountd to report that /etc is exported as it is a descendant of /. It assumes the kernel wouldn't ask about something that wasn't a mountpoint. The filehandle returned identifies the filesystem and the inode number of /etc. When this filehandle is presented to rpc.mountd, via "nfsd.fh", the inode cannot be found associated with any name in /etc/exports, or with any mountpoint listed by getmntent(). So rpc.mountd says the filehandle doesn't exist. Hence ESTALE. This is fixed by teaching nfsd not to trust DCACHE_MOUNTED too much. It is just a hint, not a guarantee. Change nfsd_mountpoint() to return '1' for a certain mountpoint, '2' for a possible mountpoint, and 0 otherwise. Then change nfsd_cross_mnt() to check if follow_down() actually found a mountpoint and, if not, to avoid performing a lookup if the location is not known to certainly require an export-point. Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
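A hedged sketch of the new return-value contract for nfsd_mountpoint() (helper names other than d_mountpoint() and nfsd4_is_junction() are assumptions for illustration):

```c
/* 1 = certainly a mountpoint/export point, 2 = possible (hint only), 0 = no. */
static int nfsd_mountpoint_sketch(struct dentry *dentry, struct svc_export *exp)
{
	if (nfsd4_is_junction(dentry))
		return 1;
	if (is_root_of_pseudo_export(exp, dentry))	/* assumed helper */
		return 1;
	if (d_mountpoint(dentry))
		return 2;	/* DCACHE_MOUNTED is just a hint, not a guarantee */
	return 0;
}
```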
-
Chuck Lever authored
[ Upstream commit 9378b274 ] Trying to create MRs while the transport is being torn down can cause a crash. Fixes: e2ac236c ("xprtrdma: Allocate MRs on demand") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
[ Upstream commit 6aeafd05 ] The assumption should be that if the caller returns PNFS_ATTEMPTED, then hdr has been consumed, and so we should not be testing hdr->task.tk_status. If the caller returns PNFS_TRY_AGAIN, then we need to recoalesce and free hdr. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-