- 23 May, 2018 40 commits
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit b2e2f43a ] Keep the code for the nobp parameter handling with the code for expolines. Both are related to the spectre v2 mitigation. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Christian Borntraeger authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit d3f46896 ] When a system call is interrupted we might call the critical section cleanup handler that re-does some of the operations. When we are between .Lsysc_vtime and .Lsysc_do_svc we might also redo the saving of the problem state registers r0-r7: .Lcleanup_system_call: [...] 0: # update accounting time stamp mvc __LC_LAST_UPDATE_TIMER(8),__LC_SYNC_ENTER_TIMER # set up saved register r11 lg %r15,__LC_KERNEL_STACK la %r9,STACK_FRAME_OVERHEAD(%r15) stg %r9,24(%r11) # r11 pt_regs pointer # fill pt_regs mvc __PT_R8(64,%r9),__LC_SAVE_AREA_SYNC ---> stmg %r0,%r7,__PT_R0(%r9) The problem now is that we might have already zeroed out r0. The fix is to move the zeroing of r0 after sysc_do_svc. Reported-by: Farhan Ali <alifm@linux.vnet.ibm.com> Fixes: 7041d281 ("s390: scrub registers on kernel entry and KVM exit") Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit d5feec04 ] The system call path can be interrupted before the switch back to the standard branch prediction with BPENTER has been done. The critical section cleanup code skips forward to .Lsysc_do_svc and bypasses the BPENTER. In this case the kernel and all subsequent code will run with the limited branch prediction. Fixes: eacf67eb9b32 ("s390: run user space and KVM guests with modified branch prediction") Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Eugeniu Rosca authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit 2cb370d6 ] I've accidentally stumbled upon the IS_ENABLED(EXPOLINE_*) lines, which obviously always evaluate to false. Fix this. Fixes: f19fbd5e ("s390: introduce execute-trampolines for branches") Signed-off-by: Eugeniu Rosca <erosca@de.adit-jv.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit f19fbd5e ] Add CONFIG_EXPOLINE to enable the use of the new -mindirect-branch= and -mfunction_return= compiler options to create a kernel fortified against the spectre v2 attack. With CONFIG_EXPOLINE=y all indirect branches will be issued with an execute type instruction. For z10 or newer the EXRL instruction will be used, for older machines the EX instruction. The typical indirect call basr %r14,%r1 is replaced with a PC relative call to a new thunk brasl %r14,__s390x_indirect_jump_r1 The thunk contains the EXRL/EX instruction to the indirect branch __s390x_indirect_jump_r1: exrl 0,0f j . 0: br %r1 The detour via the execute type instruction has a performance impact. To get rid of the detour the new kernel parameters "nospectre_v2" and "spectre_v2=[on,off,auto]" can be used. If the parameter is specified the kernel and module code will be patched at runtime. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit 6b73044b ] Define TIF_ISOLATE_BP and TIF_ISOLATE_BP_GUEST and add the necessary plumbing in entry.S to be able to run user space and KVM guests with limited branch prediction. To switch a user space process to limited branch prediction the s390_isolate_bp() function has to be called, and to run a vCPU of a KVM guest associated with the current task with limited branch prediction, call s390_isolate_bp_guest(). Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit e2dd8333 ] Add an optimized version of the array_index_mask_nospec function for s390 based on a compare and a subtract with borrow. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
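A minimal usage sketch of the primitive (the wrapper function below is illustrative and not part of the s390 patch; array_index_mask_nospec() comes from linux/nospec.h):

    #include <linux/nospec.h>
    #include <linux/errno.h>

    /* Clamp a possibly attacker-controlled index under speculation. */
    static int read_entry(const int *table, unsigned long idx, unsigned long size)
    {
            unsigned long mask;

            if (idx >= size)
                    return -EINVAL;
            /* mask is ~0UL when idx < size, 0 otherwise */
            mask = array_index_mask_nospec(idx, size);
            return table[idx & mask];
    }

The optimized s390 version only changes how the mask is computed; callers use it exactly as above.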
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit 7041d281 ] Clear all user space registers on entry to the kernel and all KVM guest registers on KVM guest exit if the register does not contain either a parameter or a result value. Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit d768bd89 ] Add the PPA instruction to the system entry and exit path to switch the kernel to a different branch prediction behaviour. The instructions are added via CPU alternatives and can be disabled with the "nospec" or the "nobp=0" kernel parameter. If the default behaviour selected with CONFIG_KERNEL_NOBP is set to "n" then the "nobp=1" parameter can be used to enable the changed kernel branch prediction. Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Martin Schwidefsky authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit cf148998 ] To be able to switch off specific CPU alternatives with kernel parameters make a copy of the facility bit mask provided by STFLE and use the copy for the decision to apply an alternative. Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Heiko Carstens authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit 049a2c2d ] Remove the CPU_ALTERNATIVES config option and enable the code unconditionally. The config option was only added to avoid a conflict with the named saved segment support. Since that code is gone there is no reason to keep the CPU_ALTERNATIVES config option. Just enable it unconditionally to also reduce the number of config options and make it less likely that something breaks. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Vasily Gorbik authored
BugLink: http://bugs.launchpad.net/bugs/1768474 [ Upstream commit 686140a1 ] Implement CPU alternatives, which allow newer instructions to be optionally patched in at runtime based on CPU facility availability. A new kernel boot parameter "noaltinstr" disables patching. The current implementation is derived from x86 alternatives, although ideal instruction padding (when altinstr is longer than oldinstr) is added at compile time, so no oldinstr nop optimization has to be done at runtime. A couple of compile-time sanity checks are also done: 1. oldinstr and altinstr must be <= 254 bytes long, 2. oldinstr and altinstr must not have an odd length. alternative(oldinstr, altinstr, facility); alternative_2(oldinstr, altinstr1, facility1, altinstr2, facility2); Both compile-time and runtime padding consist of either a 6/4/2 byte nop or a jump (brcl) + 2 byte nop filler if the padding is longer than 6 bytes. The .altinstructions and .altinstr_replacement sections are part of the __init_begin : __init_end region and are freed after initialization. Signed-off-by: Vasily Gorbik <gor@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
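The patching pass can be pictured roughly as below. This is a hedged, purely illustrative sketch: the struct layout and function are assumptions made for this changelog, not the real arch/s390 implementation, which additionally has to write the kernel text safely and honour "noaltinstr".

    #include <linux/types.h>
    #include <linux/string.h>
    #include <asm/facility.h>

    struct alt_entry {                 /* hypothetical layout */
            s32 old_offset;            /* relative offset of oldinstr */
            s32 new_offset;            /* relative offset of altinstr */
            u16 facility;              /* STFLE facility bit gating the patch */
            u8  old_len;               /* compile time pads old_len >= new_len */
            u8  new_len;
    };

    static void apply_alternatives_sketch(struct alt_entry *start, struct alt_entry *end)
    {
            struct alt_entry *a;

            for (a = start; a < end; a++) {
                    u8 *old = (u8 *)&a->old_offset + a->old_offset;
                    u8 *new = (u8 *)&a->new_offset + a->new_offset;

                    if (!test_facility(a->facility))
                            continue;             /* facility absent: keep oldinstr */

                    memcpy(old, new, a->new_len); /* trailing padding stays as nops */
            }
    }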
-
Juerg Haefliger authored
BugLink: http://bugs.launchpad.net/bugs/1768474 This reverts commit 008e2cfd to be replaced by the upstream patch set from 4.4.130. Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Juerg Haefliger authored
BugLink: http://bugs.launchpad.net/bugs/1768474 This reverts commit b226be50 to be replaced by the upstream patch set from 4.4.130. Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Juerg Haefliger authored
BugLink: http://bugs.launchpad.net/bugs/1768474 This reverts commit a9a979ca to be replaced by the upstream patch set from 4.4.130. Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Juerg Haefliger authored
BugLink: http://bugs.launchpad.net/bugs/1768474 This reverts commit 412264cd to be replaced by the upstream patch set from 4.4.130. Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Karthikeyan Periyasamy authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit 55cc11da upstream. This reverts commit 55884c04. When Ath10k is in AP mode and an unassociated STA sends a VHT action frame (Operating Mode Notification for the NSS change) periodically to AP this causes ath10k to call ath10k_station_assoc() which sends WMI_PEER_ASSOC_CMDID during NSS update. Over the time (with a certain client it can happen within 15 mins when there are over 500 of these VHT action frames) continuous calls of WMI_PEER_ASSOC_CMDID cause firmware to assert due to resource exhaust. To my knowledge setting WMI_PEER_NSS peer param itself enough to handle NSS updates and no need to call ath10k_station_assoc(). So revert the original commit from 2014 as it's unclear why the change was really needed. Now the firmware assert doesn't happen anymore. Issue observed in QCA9984 platform with firmware version:10.4-3.5.3-00053. This Change tested in QCA9984 with firmware version: 10.4-3.5.3-00053 and QCA988x platform with firmware version: 10.2.4-1.0-00036. Firmware Assert log: ath10k_pci 0002:01:00.0: firmware crashed! (guid e61f1274-9acd-4c5b-bcca-e032ea6e723c) ath10k_pci 0002:01:00.0: qca9984/qca9994 hw1.0 target 0x01000000 chip_id 0x00000000 sub 168c:cafe ath10k_pci 0002:01:00.0: kconfig debug 1 debugfs 1 tracing 0 dfs 1 testmode 1 ath10k_pci 0002:01:00.0: firmware ver 10.4-3.5.3-00053 api 5 features no-p2p,mfp,peer-flow-ctrl,btcoex-param,allows-mesh-bcast crc32 4c56a386 ath10k_pci 0002:01:00.0: board_file api 2 bmi_id 0:4 crc32 c2271344 ath10k_pci 0002:01:00.0: htt-ver 2.2 wmi-op 6 htt-op 4 cal otp max-sta 512 raw 0 hwcrypto 1 ath10k_pci 0002:01:00.0: firmware register dump: ath10k_pci 0002:01:00.0: [00]: 0x0000000A 0x000015B3 0x00981E5F 0x00975B31 ath10k_pci 0002:01:00.0: [04]: 0x00981E5F 0x00060530 0x00000011 0x00446C60 ath10k_pci 0002:01:00.0: [08]: 0x0042F1FC 0x00458080 0x00000017 0x00000000 ath10k_pci 0002:01:00.0: [12]: 0x00000009 0x00000000 0x00973ABC 0x00973AD2 ath10k_pci 0002:01:00.0: [16]: 0x00973AB0 0x00960E62 0x009606CA 0x00000000 ath10k_pci 0002:01:00.0: [20]: 0x40981E5F 0x004066DC 0x00400000 0x00981E34 ath10k_pci 0002:01:00.0: [24]: 0x80983B48 0x0040673C 0x000000C0 0xC0981E5F ath10k_pci 0002:01:00.0: [28]: 0x80993DEB 0x0040676C 0x00431AB8 0x0045D0C4 ath10k_pci 0002:01:00.0: [32]: 0x80993E5C 0x004067AC 0x004303C0 0x0045D0C4 ath10k_pci 0002:01:00.0: [36]: 0x80994AAB 0x004067DC 0x00000000 0x0045D0C4 ath10k_pci 0002:01:00.0: [40]: 0x809971A0 0x0040681C 0x004303C0 0x00441B00 ath10k_pci 0002:01:00.0: [44]: 0x80991904 0x0040688C 0x004303C0 0x0045D0C4 ath10k_pci 0002:01:00.0: [48]: 0x80963AD3 0x00406A7C 0x004303C0 0x009918FC ath10k_pci 0002:01:00.0: [52]: 0x80960E80 0x00406A9C 0x0000001F 0x00400000 ath10k_pci 0002:01:00.0: [56]: 0x80960E51 0x00406ACC 0x00400000 0x00000000 ath10k_pci 0002:01:00.0: Copy Engine register dump: ath10k_pci 0002:01:00.0: index: addr: sr_wr_idx: sr_r_idx: dst_wr_idx: dst_r_idx: ath10k_pci 0002:01:00.0: [00]: 0x0004a000 15 15 3 3 ath10k_pci 0002:01:00.0: [01]: 0x0004a400 17 17 212 213 ath10k_pci 0002:01:00.0: [02]: 0x0004a800 21 21 20 21 ath10k_pci 0002:01:00.0: [03]: 0x0004ac00 25 25 27 25 ath10k_pci 0002:01:00.0: [04]: 0x0004b000 515 515 144 104 ath10k_pci 0002:01:00.0: [05]: 0x0004b400 28 28 155 156 ath10k_pci 0002:01:00.0: [06]: 0x0004b800 12 12 12 12 ath10k_pci 0002:01:00.0: [07]: 0x0004bc00 1 1 1 1 ath10k_pci 0002:01:00.0: [08]: 0x0004c000 0 0 127 0 ath10k_pci 0002:01:00.0: [09]: 0x0004c400 1 1 1 1 ath10k_pci 0002:01:00.0: [10]: 0x0004c800 0 0 0 0 ath10k_pci 0002:01:00.0: [11]: 0x0004cc00 0 
0 0 0 ath10k_pci 0002:01:00.0: CE[1] write_index 212 sw_index 213 hw_index 0 nentries_mask 0x000001ff ath10k_pci 0002:01:00.0: CE[2] write_index 20 sw_index 21 hw_index 0 nentries_mask 0x0000007f ath10k_pci 0002:01:00.0: CE[5] write_index 155 sw_index 156 hw_index 0 nentries_mask 0x000001ff ath10k_pci 0002:01:00.0: DMA addr: nbytes: meta data: byte swap: gather: ath10k_pci 0002:01:00.0: [455]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [456]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [457]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [458]: 0x594a0038 0 0 0 1 ath10k_pci 0002:01:00.0: [459]: 0x580c0a42 0 0 0 0 ath10k_pci 0002:01:00.0: [460]: 0x594a0060 0 0 0 1 ath10k_pci 0002:01:00.0: [461]: 0x580c0c42 0 0 0 0 ath10k_pci 0002:01:00.0: [462]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [463]: 0x580c0c42 0 0 0 0 ath10k_pci 0002:01:00.0: [464]: 0x594a0038 0 0 0 1 ath10k_pci 0002:01:00.0: [465]: 0x580c0a42 0 0 0 0 ath10k_pci 0002:01:00.0: [466]: 0x594a0060 0 0 0 1 ath10k_pci 0002:01:00.0: [467]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [468]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [469]: 0x580c1c42 0 0 0 0 ath10k_pci 0002:01:00.0: [470]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [471]: 0x580c1c42 0 0 0 0 ath10k_pci 0002:01:00.0: [472]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [473]: 0x580c1c42 0 0 0 0 ath10k_pci 0002:01:00.0: [474]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [475]: 0x580c0642 0 0 0 0 ath10k_pci 0002:01:00.0: [476]: 0x594a0038 0 0 0 1 ath10k_pci 0002:01:00.0: [477]: 0x580c0842 0 0 0 0 ath10k_pci 0002:01:00.0: [478]: 0x594a0060 0 0 0 1 ath10k_pci 0002:01:00.0: [479]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [480]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [481]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [482]: 0x594a0038 0 0 0 1 ath10k_pci 0002:01:00.0: [483]: 0x580c0842 0 0 0 0 ath10k_pci 0002:01:00.0: [484]: 0x594a0060 0 0 0 1 ath10k_pci 0002:01:00.0: [485]: 0x580c0642 0 0 0 0 ath10k_pci 0002:01:00.0: [486]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [487]: 0x580c0642 0 0 0 0 ath10k_pci 0002:01:00.0: [488]: 0x594a0038 0 0 0 1 ath10k_pci 0002:01:00.0: [489]: 0x580c0842 0 0 0 0 ath10k_pci 0002:01:00.0: [490]: 0x594a0060 0 0 0 1 ath10k_pci 0002:01:00.0: [491]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [492]: 0x58174040 0 1 0 0 ath10k_pci 0002:01:00.0: [493]: 0x5a946040 0 1 0 0 ath10k_pci 0002:01:00.0: [494]: 0x59909040 0 1 0 0 ath10k_pci 0002:01:00.0: [495]: 0x5ae5a040 0 1 0 0 ath10k_pci 0002:01:00.0: [496]: 0x58096040 0 1 0 0 ath10k_pci 0002:01:00.0: [497]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [498]: 0x580c0642 0 0 0 0 ath10k_pci 0002:01:00.0: [499]: 0x5c1e0040 0 1 0 0 ath10k_pci 0002:01:00.0: [500]: 0x58153040 0 1 0 0 ath10k_pci 0002:01:00.0: [501]: 0x58129040 0 1 0 0 ath10k_pci 0002:01:00.0: [502]: 0x5952f040 0 1 0 0 ath10k_pci 0002:01:00.0: [503]: 0x59535040 0 1 0 0 ath10k_pci 0002:01:00.0: [504]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [505]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [506]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [507]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [508]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [509]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [510]: 0x594a0010 0 0 0 1 ath10k_pci 0002:01:00.0: [511]: 0x580c0042 0 0 0 0 ath10k_pci 0002:01:00.0: [512]: 0x5adcc040 0 1 0 0 ath10k_pci 0002:01:00.0: [513]: 0x5cf3d040 0 1 0 0 ath10k_pci 0002:01:00.0: [514]: 0x5c1e9040 64 1 0 0 ath10k_pci 0002:01:00.0: [515]: 0x00000000 0 0 0 0 Signed-off-by: Karthikeyan Periyasamy <periyasa@codeaurora.org> 
Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Cc: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
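As a hedged sketch of what the revert relies on instead of a full re-association (the helper is illustrative and the driver-internal types come from ath10k's own headers), the NSS change can be pushed to the firmware as a single peer parameter update:

    #include "core.h"   /* ath10k driver-internal headers assumed */
    #include "wmi.h"

    /* ar, arvif, addr and nss are assumed to come from the rc_update context. */
    static int nss_update_sketch(struct ath10k *ar, struct ath10k_vif *arvif,
                                 const u8 *addr, u32 nss)
    {
            /* update only WMI_PEER_NSS instead of re-sending WMI_PEER_ASSOC_CMDID */
            return ath10k_wmi_peer_set_param(ar, arvif->vdev_id, addr,
                                             WMI_PEER_NSS, nss);
    }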
-
Sahitya Tummala authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit dbfcef6b upstream. Below is the synchronization issue between unmount and kjournald2 contexts, which results in a use-after-free issue in kjournald2(). Fix this issue by using journal->j_state_lock to synchronize the wait_event() done in journal_kill_thread() and the wake_up() done in kjournald2(). TASK 1: umount cmd: |--jbd2_journal_destroy() { |--journal_kill_thread() { write_lock(&journal->j_state_lock); journal->j_flags |= JBD2_UNMOUNT; ... write_unlock(&journal->j_state_lock); wake_up(&journal->j_wait_commit); TASK 2 wakes up here: kjournald2() { ... checks JBD2_UNMOUNT flag and calls goto end-loop; ... end_loop: write_unlock(&journal->j_state_lock); journal->j_task = NULL; --> If this thread gets pre-empted here, then TASK 1 wait_event will exit even before this thread is completely done. wait_event(journal->j_wait_done_commit, journal->j_task == NULL); ... write_lock(&journal->j_state_lock); write_unlock(&journal->j_state_lock); } |--kfree(journal); } } wake_up(&journal->j_wait_done_commit); --> this step now results in a use after free. } Signed-off-by: Sahitya Tummala <stummala@codeaurora.org> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
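A hedged sketch of the resulting ordering (pulled into a helper only for illustration; in the fix these lines sit inline at the end of kjournald2()):

    #include <linux/jbd2.h>

    static void kjournald2_exit_sketch(journal_t *journal)
    {
            /* Publish j_task = NULL and wake the umount waiter while still
             * holding j_state_lock, so journal_kill_thread() cannot return
             * and free the journal between the two steps. */
            write_lock(&journal->j_state_lock);
            journal->j_task = NULL;
            wake_up(&journal->j_wait_done_commit);
            write_unlock(&journal->j_state_lock);
    }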
-
Felix Fietkau authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit a34d0a0d upstream. In an RFC patch, Sven Eckelmann and Simon Wunderlich reported: "QCA 802.11n chips (especially AR9330/AR9340) sometimes end up in a state in which a read of AR_CFG always returns 0xdeadbeef. This should not happen when the power_mode of the device is ATH9K_PM_AWAKE." Include the check for the default register state in the existing MAC hang check. Signed-off-by: Felix Fietkau <nbd@nbd.name> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Cc: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Dmitry Torokhov authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit 74c82dae upstream. We were accidentally initializing haptics->rated_voltage twice, and did not initialize overdrive voltage. Acked-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Grant Grundler authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit 90841047 upstream. This linksys dongle by default comes up in cdc_ether mode. This patch allows r8152 to claim the device: Bus 002 Device 002: ID 13b1:0041 Linksys Signed-off-by: Grant Grundler <grundler@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net> [krzk: Rebase on v4.4] Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
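For reference, a hedged sketch of what claiming such a VID/PID pair looks like in a USB id table (shown with the generic USB_DEVICE() form; the real r8152 driver uses its own table macro and vendor define):

    #include <linux/module.h>
    #include <linux/usb.h>

    static const struct usb_device_id linksys_ids_sketch[] = {
            { USB_DEVICE(0x13b1, 0x0041) },   /* Linksys 13b1:0041 from the lsusb line above */
            { }                               /* terminating entry */
    };
    MODULE_DEVICE_TABLE(usb, linksys_ids_sketch);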
-
Chen Feng authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit 2ef23053 upstream. Since ion alloc can be called by userspace (e.g. gralloc), when it is called frequently the efficiency of kswapd is too low and the amount of memory reclaimed is too small. In this way, kswapd can use too much CPU time. With a 3.5GB DMA Zone and a 0.5GB Normal Zone: pgsteal_kswapd_dma 9364140 pgsteal_kswapd_normal 7071043 pgscan_kswapd_dma 10428250 pgscan_kswapd_normal 37840094 With this change the reclaim ratio has greatly improved: 18.9% -> 72.5% Signed-off-by: Chen Feng <puck.chen@hisilicon.com> Signed-off-by: Lu bing <albert.lubing@hisilicon.com> Reviewed-by: Laura Abbott <labbott@redhat.com> Cc: Greg Hackmann <ghackmann@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Jiri Olsa authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit 78b562fb upstream. Return immediately when we find an issue in the user stack checks. The error value could get overwritten by the following check for PERF_SAMPLE_REGS_INTR. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: syzkaller-bugs@googlegroups.com Cc: x86@kernel.org Fixes: 60e2364e ("perf: Add ability to sample machine state on interrupt") Link: http://lkml.kernel.org/r/20180415092352.12403-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Xiaoming Gao authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit d3878e16 upstream. The TSC calibration code uses HPET as reference. The conversion normalizes the delta of two HPET timestamps: hpetref = ((tshpet1 - tshpet2) * HPET_PERIOD) / 1e6 and then divides the normalized delta of the corresponding TSC timestamps by the result to calculate the TSC frequency. tscfreq = ((tstsc1 - tstsc2 ) * 1e6) / hpetref This uses do_div() which takes a u32 as the divisor, which worked so far because the HPET frequency was low enough that 'hpetref' never exceeded 32bit. On Skylake machines the HPET frequency increased so 'hpetref' can exceed 32bit. do_div() truncates the divisor, which causes the calibration to fail. Use div64_u64() to avoid the problem. [ tglx: Fixes whitespace mangled patch and rewrote changelog ] Signed-off-by: Xiaoming Gao <newtongao@tencent.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Cc: peterz@infradead.org Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/38894564-4fc9-b8ec-353f-de702839e44e@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
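A hedged sketch of the arithmetic (the helper and parameter names are illustrative):

    #include <linux/math64.h>

    /* hpet_ref is the normalized HPET delta from the formula above; on
     * Skylake-class HPETs it can exceed 32 bits, so it must not be fed to
     * do_div(), which truncates the divisor to u32. */
    static u64 tsc_khz_sketch(u64 tsc_delta, u64 hpet_ref)
    {
            u64 tmp = tsc_delta * 1000000ULL;

            return div64_u64(tmp, hpet_ref);   /* full 64-by-64 bit division */
    }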
-
Steve French authored
BugLink: http://bugs.launchpad.net/bugs/1768474 commit 1d0cffa6 upstream. RHBZ: 1453123 Since at least the 3.10 kernel and likely a lot earlier we have not been able to create unix domain sockets in a cifs share when mounted using the SFU mount option (except when mounted with the cifs unix extensions to Samba e.g.) Trying to create a socket, for example using the af_unix command from xfstests, will cause: BUG: unable to handle kernel NULL pointer dereference at 0000000000000040 Since no one uses or depends on being able to create unix domain sockets on a cifs share, the easiest fix to stop this vulnerability is to simply not allow creation of any special files other than char or block devices when sfu is used. Added update to Ronnie's patch to handle a tcon link leak, and to address a buf leak noticed by Gustavo and Colin. Acked-by: Gustavo A. R. Silva <gustavo@embeddedor.com> CC: Colin Ian King <colin.king@canonical.com> Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com> Reported-by: Eryu Guan <eguan@redhat.com> Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
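A minimal sketch of the restriction described, assuming a helper name that is illustrative rather than the actual cifs mknod entry point:

    #include <linux/stat.h>
    #include <linux/errno.h>

    /* With the SFU mount option only character and block device nodes may
     * be created; sockets and FIFOs are rejected. */
    static int sfu_mknod_allowed(umode_t mode)
    {
            if (!S_ISCHR(mode) && !S_ISBLK(mode))
                    return -EPERM;
            return 0;
    }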
-
Greg Kroah-Hartman authored
BugLink: http://bugs.launchpad.net/bugs/1768429 Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Greg Thelen authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 2e898e4c upstream. lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if the page's memcg is undergoing move accounting, which occurs when a process leaves its memcg for a new one that has memory.move_charge_at_immigrate set. unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if the given inode is switching writeback domains. Switches occur when enough writes are issued from a new domain. This existing pattern is thus suspicious: lock_page_memcg(page); unlocked_inode_to_wb_begin(inode, &locked); ... unlocked_inode_to_wb_end(inode, locked); unlock_page_memcg(page); If both inode switch and process memcg migration are both in-flight then unlocked_inode_to_wb_end() will unconditionally enable interrupts while still holding the lock_page_memcg() irq spinlock. This suggests the possibility of deadlock if an interrupt occurs before unlock_page_memcg(). truncate __cancel_dirty_page lock_page_memcg unlocked_inode_to_wb_begin unlocked_inode_to_wb_end <interrupts mistakenly enabled> <interrupt> end_page_writeback test_clear_page_writeback lock_page_memcg <deadlock> unlock_page_memcg Due to configuration limitations this deadlock is not currently possible because we don't mix cgroup writeback (a cgroupv2 feature) and memory.move_charge_at_immigrate (a cgroupv1 feature). If the kernel is hacked to always claim inode switching and memcg moving_account, then this script triggers lockup in less than a minute: cd /mnt/cgroup/memory mkdir a b echo 1 > a/memory.move_charge_at_immigrate echo 1 > b/memory.move_charge_at_immigrate ( echo $BASHPID > a/cgroup.procs while true; do dd if=/dev/zero of=/mnt/big bs=1M count=256 done ) & while true; do sync done & sleep 1h & SLEEP=$! while true; do echo $SLEEP > a/cgroup.procs echo $SLEEP > b/cgroup.procs done The deadlock does not seem possible, so it's debatable if there's any reason to modify the kernel. I suggest we should to prevent future surprises. And Wang Long said "this deadlock occurs three times in our environment", so there's more reason to apply this, even to stable. Stable 4.4 has minor conflicts applying this patch. 
For a clean 4.4 patch see "[PATCH for-4.4] writeback: safer lock nesting" https://lkml.org/lkml/2018/4/11/146 Wang Long said "this deadlock occurs three times in our environment" [gthelen@google.com: v4] Link: http://lkml.kernel.org/r/20180411084653.254724-1-gthelen@google.com [akpm@linux-foundation.org: comment tweaks, struct initialization simplification] Change-Id: Ibb773e8045852978f6207074491d262f1b3fb613 Link: http://lkml.kernel.org/r/20180410005908.167976-1-gthelen@google.com Fixes: 682aa8e1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates") Signed-off-by: Greg Thelen <gthelen@google.com> Reported-by: Wang Long <wanglong19@meituan.com> Acked-by: Wang Long <wanglong19@meituan.com> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Tejun Heo <tj@kernel.org> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: <stable@vger.kernel.org> [v4.2+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [natechancellor: Applied to 4.4 based on Greg's backport on lkml.org] Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
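A generic illustration of the nesting hazard described above (plain spinlocks stand in for the memcg and writeback locks; this is not the real code):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(outer_lock);   /* stands in for lock_page_memcg()           */
    static DEFINE_SPINLOCK(inner_lock);   /* stands in for the inode_to_wb transaction */

    static void nesting_hazard_sketch(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&outer_lock, flags);   /* IRQs off, prior state saved */

            spin_lock_irq(&inner_lock);              /* fine: IRQs are already off  */
            spin_unlock_irq(&inner_lock);            /* hazard: unconditionally re-enables
                                                      * IRQs while outer_lock is held */

            spin_unlock_irqrestore(&outer_lock, flags);
    }

The fix makes the inner end path remember and restore the interrupt state it found (irqsave/irqrestore semantics) instead of using a bare spin_unlock_irq().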
-
Amir Goldstein authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 54a307ba upstream. When events on child inodes are sent to the parent inode mark and the parent inode mark was not marked with FAN_EVENT_ON_CHILD, the event will not be delivered to the listener process. However, if the same process also has a mount mark, the event to the parent inode will be delivered regardless of the mount mark mask. This behavior is incorrect in the case where the mount mark mask does not contain the specific event type. For example, the process adds a mark on a directory with mask FAN_MODIFY (without FAN_EVENT_ON_CHILD) and a mount mark with mask FAN_CLOSE_NOWRITE (without FAN_ONDIR). A modify event on a file inside that directory (and inside that mount) should not create a FAN_MODIFY event, because neither of the marks requested to get that event on the file. Fixes: 1968f5ee ("fanotify: use both marks when possible") Cc: stable <stable@vger.kernel.org> Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz> [natechancellor: Fix small conflict due to lack of 3cd5eca8] Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
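A hedged userspace sketch of that scenario (paths are illustrative): with the fix, a modify event on a file inside the watched directory is reported by neither mark, because the inode mark lacks FAN_EVENT_ON_CHILD and the mount mark does not ask for FAN_MODIFY.

    #include <fcntl.h>
    #include <sys/fanotify.h>

    static int setup_marks(void)
    {
            int fd = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);

            /* inode mark: FAN_MODIFY only, without FAN_EVENT_ON_CHILD */
            fanotify_mark(fd, FAN_MARK_ADD, FAN_MODIFY, AT_FDCWD, "/watched");
            /* mount mark: FAN_CLOSE_NOWRITE only */
            fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT, FAN_CLOSE_NOWRITE,
                          AT_FDCWD, "/watched");
            return fd;
    }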
-
wangguang authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 4e800c03 upstream. Pages clear their buffers after an ext4 delayed block allocation failure; however, this does not clear the pte_dirty flag. If the pages are then unmapped, unmap_page_range may, according to pte_dirty, try to call __set_page_dirty, which may lead to the BUG_ON at mpage_prepare_extent_to_map: head = page_buffers(page);. This patch just calls clear_page_dirty_for_io to clear pte_dirty at mpage_release_unused_pages for mmapped pages. Steps to reproduce the bug: (1) mmap a file in ext4 addr = (char *)mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); memset(addr, 'i', 4096); (2) return EIO at ext4_writepages->mpage_map_and_submit_extent->mpage_map_one_extent which causes this log message to be printed: ext4_msg(sb, KERN_CRIT, "Delayed block allocation failed for " "inode %lu at logical offset %llu with" " max blocks %u with error %d", inode->i_ino, (unsigned long long)map->m_lblk, (unsigned)map->m_len, -err); (3) Unmap the addr, causing a warning at __set_page_dirty: WARN_ON_ONCE(warn && !PageUptodate(page)); (4) Wait for a minute, then the BUG_ON happens. Cc: stable@vger.kernel.org Signed-off-by: wangguang <wangguang03@zte.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> [@nathanchance: Resolved conflict from lack of 09cbfeaf] Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Matthew Wilcox authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit abc1be13 upstream. f2fs specifies the __GFP_ZERO flag for allocating some of its pages. Unfortunately, the page cache also uses the mapping's GFP flags for allocating radix tree nodes. It always masked off the __GFP_HIGHMEM flag, and masks off __GFP_ZERO in some paths, but not all. That causes radix tree nodes to be allocated with a NULL list_head, which causes backtraces like: __list_del_entry+0x30/0xd0 list_lru_del+0xac/0x1ac page_cache_tree_insert+0xd8/0x110 The __GFP_DMA and __GFP_DMA32 flags would also be able to sneak through if they are ever used. Fix them all by using GFP_RECLAIM_MASK at the innermost location, and remove it from earlier in the callchain. Link: http://lkml.kernel.org/r/20180411060320.14458-2-willy@infradead.org Fixes: 449dd698 ("mm: keep page cache radix tree nodes in check") Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com> Reported-by: Chris Fries <cfries@google.com> Debugged-by: Minchan Kim <minchan@kernel.org> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: Jan Kara <jack@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
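A hedged sketch of the idea at the innermost location (the wrapper is illustrative; GFP_RECLAIM_MASK is defined in mm/internal.h, so this would live inside mm/):

    #include <linux/gfp.h>
    #include <linux/radix-tree.h>
    #include "internal.h"   /* mm-internal, provides GFP_RECLAIM_MASK */

    /* Strip modifiers such as __GFP_ZERO, __GFP_HIGHMEM or __GFP_DMA32
     * before they can reach the radix tree node allocator. */
    static int preload_for_page_insert(gfp_t mapping_gfp)
    {
            return radix_tree_maybe_preload(mapping_gfp & GFP_RECLAIM_MASK);
    }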
-
Michal Hocko authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit c20cd45e upstream. page_cache_read has been historically using page_cache_alloc_cold to allocate a new page. This means that mapping_gfp_mask is used as the base for the gfp_mask. Many filesystems are setting this mask to GFP_NOFS to prevent from fs recursion issues. page_cache_read is called from the vm_operations_struct::fault() context during the page fault. This context doesn't need the reclaim protection normally. ceph and ocfs2 which call filemap_fault from their fault handlers seem to be OK because they are not taking any fs lock before invoking generic implementation. xfs which takes XFS_MMAPLOCK_SHARED is safe from the reclaim recursion POV because this lock serializes truncate and punch hole with the page faults and it doesn't get involved in the reclaim. There is simply no reason to deliberately use a weaker allocation context when a __GFP_FS | __GFP_IO can be used. The GFP_NOFS protection might be even harmful. There is a push to fail GFP_NOFS allocations rather than loop within allocator indefinitely with a very limited reclaim ability. Once we start failing those requests the OOM killer might be triggered prematurely because the page cache allocation failure is propagated up the page fault path and end up in pagefault_out_of_memory. We cannot play with mapping_gfp_mask directly because that would be racy wrt. parallel page faults and it might interfere with other users who really rely on NOFS semantic from the stored gfp_mask. The mask is also inode proper so it would even be a layering violation. What we can do instead is to push the gfp_mask into struct vm_fault and allow fs layer to overwrite it should the callback need to be called with a different allocation context. Initialize the default to (mapping_gfp_mask | __GFP_FS | __GFP_IO) because this should be safe from the page fault path normally. Why do we care about mapping_gfp_mask at all then? Because this doesn't hold only reclaim protection flags but it also might contain zone and movability restrictions (GFP_DMA32, __GFP_MOVABLE and others) so we have to respect those. Signed-off-by: Michal Hocko <mhocko@suse.com> Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Acked-by: Jan Kara <jack@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Mel Gorman <mgorman@suse.de> Cc: Dave Chinner <david@fromorbit.com> Cc: Mark Fasheh <mfasheh@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
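A hedged sketch of how a filesystem can use the new field (the fault handler is illustrative; it follows the 4.4-era fault() signature and the mapping_gfp_constraint() helper):

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/pagemap.h>

    static int examplefs_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
            /* Drop __GFP_FS for this fault only; the inode's stored
             * mapping_gfp_mask is left untouched. */
            vmf->gfp_mask = mapping_gfp_constraint(vma->vm_file->f_mapping,
                                                   ~__GFP_FS);
            return filemap_fault(vma, vmf);
    }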
-
Ian Kent authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 1e630665 upstream. The autofs file system mkdir inode operation blindly sets the created directory mode to S_IFDIR | 0555, ignoring the passed in mode, which can cause selinux dac_override denials. But the function also checks if the caller is the daemon (as no-one else should be able to do anything here) so there's no point in not honouring the passed in mode, allowing the daemon to set appropriate mode when required. Link: http://lkml.kernel.org/r/152361593601.8051.14014139124905996173.stgit@pluto.themaw.net Signed-off-by: Ian Kent <raven@themaw.net> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Al Viro authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 16a34adb upstream. We want it only for the stuff created by SB_KERNMOUNT mounts, *not* for their copies. As it is, creating a deep stack of bindings of /proc/*/ns/* somewhere in a new namespace and exiting yields a stack overflow. Cc: stable@kernel.org Reported-by: Alexander Aring <aring@mojatatu.com> Bisected-by: Kirill Tkhai <ktkhai@virtuozzo.com> Tested-by: Kirill Tkhai <ktkhai@virtuozzo.com> Tested-by: Alexander Aring <aring@mojatatu.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Al Viro authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 4a3877c4 upstream. If we ever hit rpc_gssd_dummy_depopulate(), the dentry passed to it has a refcount equal to 1. __rpc_rmpipe() drops it and the dput() done after that hits an already freed dentry. Cc: stable@kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Al Viro authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit a24cd490 upstream. hypfs_fill_super() might fail to allocate sbi; hypfs_kill_super() should not oops on that. Cc: stable@vger.kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Al Viro authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit c66b23c2 upstream. jffs2_fill_super() might fail to allocate jffs2_sb_info; jffs2_kill_sb() must survive that. Cc: stable@kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
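The defensive pattern behind this and the two fixes above, as a generic hedged sketch (names are illustrative, not the jffs2 code itself):

    #include <linux/fs.h>
    #include <linux/slab.h>

    struct examplefs_sb_info { int dummy; };   /* illustrative private data */

    static void examplefs_kill_sb(struct super_block *sb)
    {
            struct examplefs_sb_info *sbi = sb->s_fs_info;

            kill_anon_super(sb);
            if (!sbi)        /* fill_super may have failed before allocating it */
                    return;
            /* ... tear down sbi-backed state ... */
            kfree(sbi);
    }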
-
Michael Ellerman authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit b8858581 upstream. When we patch an alternate feature section, we have to adjust any relative branches that branch out of the alternate section. But currently we have a bug if we have a branch that points past the last instruction of the alternate section, e.g.: FTR_SECTION_ELSE 1: b 2f or 6,6,6 2: ALT_FTR_SECTION_END(...) nop This will result in a relative branch at 1 with a target that equals the end of the alternate section. That branch does not need adjusting when it's moved to the non-else location. Currently we do adjust it, resulting in a branch that goes off into the link-time location of the else section, which is junk. The fix is to not patch branches that have a target == end of the alternate section. Fixes: d20fe50a ("KVM: PPC: Book3S HV: Branch inside feature section") Fixes: 9b1a735d ("powerpc: Add logic to patch alternative feature sections") Cc: stable@vger.kernel.org # v2.6.27+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Michael Neuling authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit 13a83eac upstream. On boot we save the configuration space of PCIe bridges. We do this so when we get an EEH event and everything gets reset that we can restore them. Unfortunately we save this state before we've enabled the MMIO space on the bridges. Hence if we have to reset the bridge when we come back MMIO is not enabled and we end up taking a PE freeze when the driver starts accessing again. This patch forces the memory/MMIO and bus mastering on when restoring bridges on EEH. Ideally we'd do this correctly by saving the configuration space writes later, but that will have to come later in a larger EEH rewrite. For now we have this simple fix. The original bug can be triggered on a boston machine by doing: echo 0x8000000000000000 > /sys/kernel/debug/powerpc/PCI0001/err_injct_outbound On boston, this PHB has a PCIe switch on it. Without this patch, you'll see two EEH events, 1 expected and 1 the failure we are fixing here. The second EEH event causes anything under the PHB to disappear (i.e. the i40e eth). With this patch, only 1 EEH event occurs and devices properly recover. Fixes: 652defed ("powerpc/eeh: Check PCIe link after reset") Cc: stable@vger.kernel.org # v3.11+ Reported-by: Pridhiviraj Paidipeddi <ppaidipe@linux.vnet.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Acked-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
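A hedged sketch of the kind of fix-up described (the helper name is illustrative):

    #include <linux/pci.h>

    static void force_bridge_mmio_sketch(struct pci_dev *bridge)
    {
            u16 cmd;

            /* The saved config space predates MMIO enablement, so force
             * memory decoding and bus mastering back on after the reset. */
            pci_read_config_word(bridge, PCI_COMMAND, &cmd);
            cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
            pci_write_config_word(bridge, PCI_COMMAND, cmd);
    }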
-
Matt Redfearn authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit c96eebf0 upstream. The label .Llast_fixup\@ is jumped to on page fault within the final byte set loop of memset (on < MIPSR6 architectures). For some reason, in this fault handler, the v1 register is randomly set to a2 & STORMASK. This clobbers v1 for the calling function. This can be observed with the following test code: static int __init __attribute__((optimize("O0"))) test_clear_user(void) { register int t asm("v1"); char *test; int j, k; pr_info("\n\n\nTesting clear_user\n"); test = vmalloc(PAGE_SIZE); for (j = 256; j < 512; j++) { t = 0xa5a5a5a5; if ((k = clear_user(test + PAGE_SIZE - 256, j)) != j - 256) { pr_err("clear_user (%px %d) returned %d\n", test + PAGE_SIZE - 256, j, k); } if (t != 0xa5a5a5a5) { pr_err("v1 was clobbered to 0x%x!\n", t); } } return 0; } late_initcall(test_clear_user); Which demonstrates that v1 is indeed clobbered (MIPS64): Testing clear_user v1 was clobbered to 0x1! v1 was clobbered to 0x2! v1 was clobbered to 0x3! v1 was clobbered to 0x4! v1 was clobbered to 0x5! v1 was clobbered to 0x6! v1 was clobbered to 0x7! Since the number of bytes that could not be set is already contained in a2, the andi placing a value in v1 is not necessary and actively harmful in clobbering v1. Reported-by: James Hogan <jhogan@kernel.org> Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/19109/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Matt Redfearn authored
BugLink: http://bugs.launchpad.net/bugs/1768429 commit daf70d89 upstream. The __clear_user function is defined to return the number of bytes that could not be cleared. From the underlying memset / bzero implementation this means setting register a2 to that number on return. Currently if a page fault is triggered within the memset_partial block, the value loaded into a2 on return is meaningless. The label .Lpartial_fixup\@ is jumped to on page fault. In order to work out how many bytes failed to copy, the exception handler should find how many bytes are left in the partial block (andi a2, STORMASK), add that to the partial block end address (a2), and subtract the faulting address to get the remainder. Currently it incorrectly subtracts the partial block start address (t1), which has additionally been clobbered to generate a jump target in memset_partial. Fix this by adding the block end address instead. This issue was found with the following test code: int j, k; for (j = 0; j < 512; j++) { if ((k = clear_user(NULL, j)) != j) { pr_err("clear_user (NULL %d) returned %d\n", j, k); } } Which now passes on Creator Ci40 (MIPS32) and Cavium Octeon II (MIPS64). Suggested-by: James Hogan <jhogan@kernel.org> Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/19108/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Juerg Haefliger <juergh@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-