- 23 Jan, 2018 11 commits
-
-
Jan Kara authored
Both ext2 and ext4 are now converted to mbcache2. Remove the old mbcache code. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit ecd1e644) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
-
Jan Kara authored
When someone tried to set an xattr to the same value (i.e., not changing anything), we did all the work of removing the original xattr, possibly breaking references to a shared xattr block, inserting the new xattr, and merging xattr blocks again. Since this is not such a rare operation and it is relatively cheap for us to detect this case, check for it and short-circuit the xattr set in that case. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit 3fd16462) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
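[Editor's note] A minimal sketch of the short-circuit described above. The helpers hypothetical_xattr_get()/hypothetical_xattr_set() are hypothetical stand-ins for the real ext4 xattr routines, not actual kernel functions:

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/string.h>

static int xattr_set_shortcut(struct inode *inode, const char *name,
			      const void *value, size_t size)
{
	ssize_t old_size;
	void *old;
	int same;

	/* Does the attribute already exist with the same length? */
	old_size = hypothetical_xattr_get(inode, name, NULL, 0);
	if (old_size >= 0 && (size_t)old_size == size) {
		old = kmalloc(size, GFP_NOFS);
		if (!old)
			return -ENOMEM;
		hypothetical_xattr_get(inode, name, old, size);
		same = !memcmp(old, value, size);
		kfree(old);
		if (same)
			return 0;	/* identical value: nothing to do */
	}
	/* Fall through to the usual remove/insert/merge path. */
	return hypothetical_xattr_set(inode, name, value, size);
}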
-
Andreas Gruenbacher authored
This variable, introduced in commit 9c191f70, is unnecessary: it is set once the module has been initialized correctly, and ext4_fill_super cannot run unless the module has been initialized correctly. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit 2335d05f) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
-
Jan Kara authored
Currently we maintain a perfect LRU list by moving an entry to the tail of the list when it gets used. However, these operations on the cache-global list are relatively expensive. In this patch we switch to lazy updates of the LRU list. Whenever an entry gets used, we set a referenced bit in it. When reclaiming entries, we give referenced entries another round in the LRU. Since the list is not a real LRU anymore, rename it to just 'list'. In my testing this logic gives about a 30% boost to workloads with mostly unique xattr blocks (e.g. xattr-bench with 10 files and 10000 unique xattr values). Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit f0c8b462) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
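[Editor's note] A sketch of the referenced-bit scheme described above, with illustrative field names. This is the classic second-chance approximation of LRU, not the exact mbcache code:

#include <linux/list.h>
#include <linux/types.h>

struct cache_entry {
	struct list_head list;		/* was the LRU list */
	unsigned int	 referenced:1;	/* set on use, cleared on scan */
};

static void entry_used(struct cache_entry *e)
{
	e->referenced = 1;		/* cheap: no list manipulation */
}

static struct cache_entry *pick_victim(struct list_head *head)
{
	struct cache_entry *e, *tmp;

	list_for_each_entry_safe(e, tmp, head, list) {
		if (e->referenced) {
			e->referenced = 0;		/* second chance */
			list_move_tail(&e->list, head);
			continue;
		}
		return e;			/* reclaim this one */
	}
	return NULL;
}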
-
Jan Kara authored
So far the number of entries in mbcache is limited only by the pressure from the shrinker. Since too many entries degrade the hash table and generally we expect that caching more entries has diminishing returns, limit the number of entries the same way as in the old mbcache to 16 * hash table size. Once we exceed the desired maximum number of entries, we schedule background work to reclaim entries. If the background work cannot keep up and the number of entries exceeds two times the desired maximum, we reclaim some entries directly when allocating a new entry. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit c2f3140f) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
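[Editor's note] A sketch of the sizing policy described above. Struct and function names are illustrative; cache_shrink() stands in for whatever synchronous reclaim the cache implements:

#include <linux/workqueue.h>

struct cache {
	unsigned long		entry_count;	/* current number of entries */
	unsigned long		hash_size;	/* number of hash buckets */
	struct work_struct	reclaim_work;	/* background reclaim */
};

void cache_shrink(struct cache *cache, unsigned long nr);	/* illustrative */

static void cache_entry_added(struct cache *cache)
{
	unsigned long max = 16 * cache->hash_size;

	if (cache->entry_count > max)
		schedule_work(&cache->reclaim_work);	/* async reclaim */
	if (cache->entry_count > 2 * max)
		cache_shrink(cache, 64);	/* worker can't keep up: reclaim directly */
}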
-
Jan Kara authored
The conversion is generally straightforward. The only tricky part is that the xattr block corresponding to a found mbcache entry can get freed before we get the buffer lock for that block. So we have to check whether the entry is still valid after getting the buffer lock. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (backported from commit 82939d79) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
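[Editor's note] A sketch of the revalidation step described above; mb2_entry_is_valid() is a hypothetical stand-in for the actual check that the cache entry still refers to a live xattr block:

#include <linux/buffer_head.h>

static struct buffer_head *
get_cached_xattr_block(struct super_block *sb, struct mb2_cache_entry *entry)
{
	struct buffer_head *bh = sb_bread(sb, entry->e_block);

	if (!bh)
		return NULL;
	lock_buffer(bh);
	if (!mb2_entry_is_valid(entry)) {
		/* The block was freed (and possibly reused) while we
		 * waited for the lock: treat as a cache miss. */
		unlock_buffer(bh);
		brelse(bh);
		return NULL;
	}
	return bh;	/* returned with the buffer lock held */
}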
-
Jan Kara authored
The conversion is generally straightforward. We convert the filesystem from a global cache to a per-fs one. Similarly to ext4, the tricky part is that the xattr block corresponding to a found mbcache entry can get freed before we get the buffer lock for that block. So we have to check whether the entry is still valid after getting the buffer lock. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit be0726d3) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
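[Editor's note] A sketch of the per-fs cache described above. The field name and bucket count are illustrative; mb2_cache_create() follows the shape of the new mbcache2 constructor:

/* Each ext2 superblock now owns its own cache instead of sharing one
 * global cache across every mounted filesystem. */
static int ext2_setup_xattr_cache(struct ext2_sb_info *sbi)
{
	sbi->s_mb_cache = mb2_cache_create(10);	/* 2^10 hash buckets (illustrative) */
	if (!sbi->s_mb_cache)
		return -ENOMEM;
	return 0;
}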
-
Jan Kara authored
Original mbcache was designed to have more features than what ext? filesystems ended up using. It supported entries being in multiple hashes, it had a home-grown rwlocking of each entry, and one cache could cache entries from multiple filesystems. This genericity also resulted in more complex locking, larger cache entries, and generally more code complexity. This is a reimplementation of the mbcache functionality to exactly fit the purpose ext? filesystems use it for. Cache entries are now considerably smaller (7 instead of 13 longs), the code is considerably smaller as well (414 vs 913 lines of code), and IMO also simpler. The new code is also much more lightweight. I have measured the speed using an artificial xattr-bench benchmark, which spawns P processes; each process sets xattrs for F different files, and the value of each xattr is randomly chosen from a pool of V values. Averages of runtimes for 5 runs for various combinations of parameters are below. The first value in each cell is the old mbcache, the second value is the new mbcache.

V=10
F\P     1            2            4            8            16           32            64
10      0.158,0.157  0.208,0.196  0.500,0.277  0.798,0.400  3.258,0.584  13.807,1.047  61.339,2.803
100     0.172,0.167  0.279,0.222  0.520,0.275  0.825,0.341  2.981,0.505  12.022,1.202  44.641,2.943
1000    0.185,0.174  0.297,0.239  0.445,0.283  0.767,0.340  2.329,0.480   6.342,1.198  16.440,3.888

V=100
F\P     1            2            4            8            16           32            64
10      0.162,0.153  0.200,0.186  0.362,0.257  0.671,0.496  1.433,0.943   3.801,1.345   7.938,2.501
100     0.153,0.160  0.221,0.199  0.404,0.264  0.945,0.379  1.556,0.485   3.761,1.156   7.901,2.484
1000    0.215,0.191  0.303,0.246  0.471,0.288  0.960,0.347  1.647,0.479   3.916,1.176   8.058,3.160

V=1000
F\P     1            2            4            8            16           32            64
10      0.151,0.129  0.210,0.163  0.326,0.245  0.685,0.521  1.284,0.859   3.087,2.251   6.451,4.801
100     0.154,0.153  0.211,0.191  0.276,0.282  0.687,0.506  1.202,0.877   3.259,1.954   8.738,2.887
1000    0.145,0.179  0.202,0.222  0.449,0.319  0.899,0.333  1.577,0.524   4.221,1.240   9.782,3.579

V=10000
F\P     1            2            4            8            16           32            64
10      0.161,0.154  0.198,0.190  0.296,0.256  0.662,0.480  1.192,0.818   2.989,2.200   6.362,4.746
100     0.176,0.174  0.236,0.203  0.326,0.255  0.696,0.511  1.183,0.855   4.205,3.444  19.510,17.760
1000    0.199,0.183  0.240,0.227  1.159,1.014  2.286,2.154  6.023,6.039   ---,10.933    ---,36.620

V=100000
F\P     1            2            4            8            16           32            64
10      0.171,0.162  0.204,0.198  0.285,0.230  0.692,0.500  1.225,0.881   2.990,2.243   6.379,4.771
100     0.151,0.171  0.220,0.210  0.295,0.255  0.720,0.518  1.226,0.844   3.423,2.831  19.234,17.544
1000    0.192,0.189  0.249,0.225  1.162,1.043  2.257,2.093  5.853,4.997   ---,10.399    ---,32.198

We see that the new code is faster in pretty much all the cases, and starting from 4 processes there are significant gains with the new code, resulting in up to 20-times shorter runtimes. Also, for large numbers of cached entries, all values for the old code could not be measured as the kernel started hitting softlockups and died before the test completed.

Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> (cherry picked from commit f9a61eb4) CVE-2015-8952 Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
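[Editor's note] An illustrative sketch of why the new entries fit in roughly 7 longs: one list head, one hash-list node, a refcount, the key, and the block number, with no per-entry rwlock and no multi-hash support. The fields follow the description above and closely track, but are not guaranteed to match, the upstream structure:

#include <linux/list.h>
#include <linux/list_bl.h>
#include <linux/atomic.h>
#include <linux/types.h>

struct mb2_cache_entry {
	struct list_head	e_lru_list;	/* reclaim list (2 longs) */
	struct hlist_bl_node	e_hash_list;	/* hash chain, bit-spinlocked (2 longs) */
	atomic_t		e_refcnt;	/* entry reference count */
	u32			e_key;		/* hash of the xattr block */
	sector_t		e_block;	/* xattr block number */
};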
-
Xin Long authored
Now when peeling off an association to a sock in another netns, the transports in this assoc are not rehashed and keep using the old key in the hashtable. Since a transport uses sk->net as part of the hash key when inserting into the hashtable, we would miss removing these transports from the hashtable (because of the new netns) when closing the sock and freeing all transports; a use-after-free could then occur when looking up an asoc and dereferencing those transports. This is a very old issue, present since the very beginning. ChunYu found it with syzkaller fuzz testing using this series:

  socket$inet6_sctp()
  bind$inet6()
  sendto$inet6()
  unshare(0x40000000)
  getsockopt$inet_sctp6_SCTP_GET_ASSOC_ID_LIST()
  getsockopt$inet_sctp6_SCTP_SOCKOPT_PEELOFF()

This patch blocks the call when peeling an assoc off from one netns into another, so that the netns of the transports does not go out of sync with the key in the hashtable. Note that this patch doesn't fix the issue by rehashing transports, as it is difficult to handle the situation where the tuple is already in use in the new netns. Besides, no one is likely to peel off an assoc into another netns, considering that ipaddrs, ifaces, etc. are usually different. Reported-by: ChunYu Wang <chunwang@redhat.com> Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> CVE-2017-15115 (cherry picked from commit df80cd9b) Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Acked-by: Colin Ian King <colin.king@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
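[Editor's note] A sketch of the guard described above, in the spirit of the fix: sctp_do_peeloff() refuses to peel off into a socket whose netns differs from the association's. The surrounding context is simplified into a helper:

#include <linux/errno.h>
#include <linux/nsproxy.h>
#include <linux/sched.h>
#include <net/net_namespace.h>
#include <net/sock.h>

static int sctp_peeloff_netns_check(struct sock *sk)
{
	/* Peeling off across network namespaces would leave the
	 * association's transports hashed under the old netns key,
	 * setting up the use-after-free described above. */
	if (!net_eq(current->nsproxy->net_ns, sock_net(sk)))
		return -EINVAL;
	return 0;
}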
-
Mohamed Ghannam authored
Whenever the sock object is in the DCCP_CLOSED state, dccp_disconnect() must free dccps_hc_tx_ccid and dccps_hc_rx_ccid and set them to NULL. Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> CVE-2017-8824 (cherry picked from commit 69c64866 linux-next) Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Acked-by: Seth Forshee <seth.forshee@canonical.com> Acked-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
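[Editor's note] A sketch of the fix described above, following the shape of the dccp code (ccid_hc_rx_delete()/ccid_hc_tx_delete() are the existing CCID destructors in net/dccp/); treat it as an outline of the change rather than the verbatim patch:

#include "dccp.h"	/* net/dccp/ internal header: dccp_sk() */
#include "ccid.h"	/* net/dccp/ internal header: CCID destructors */

/* Called from dccp_disconnect() per the description above. */
static void dccp_free_ccids(struct sock *sk)
{
	struct dccp_sock *dp = dccp_sk(sk);

	ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk);
	ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk);
	dp->dccps_hc_rx_ccid = NULL;	/* prevent later use-after-free */
	dp->dccps_hc_tx_ccid = NULL;
}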
-
Stefan Bader authored
Ignore: yes Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
- 19 Jan, 2018 4 commits
-
-
Stefan Bader authored
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Marcelo Henrique Cerri authored
CVE-2017-5753 CVE-2017-5715 Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Khaled Elmously <khalid.elmously@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Marcelo Henrique Cerri authored
CVE-2017-5753 CVE-2017-5715 Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Khaled Elmously <khalid.elmously@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Martin Schwidefsky authored
CVE-2017-5753 CVE-2017-5715 Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Khaled Elmously <khalid.elmously@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
- 17 Jan, 2018 3 commits
-
-
Stefan Bader authored
CVE-2017-5753 CVE-2017-5715 The initial change was missing the code to correctly mask the EDX bits of CPUID leaf 7.0. Fixes: 8339cae2 ("KVM: x86: Add speculative control CPUID support for guests") Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Acked-by: Khalid Elmously <khalid.elmously@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
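[Editor's note] A sketch of the kind of masking described above. The mask argument is illustrative, standing in for whatever set of leaf-7.0 EDX feature bits KVM actually supports for guests:

#include <linux/kvm.h>	/* struct kvm_cpuid_entry2 */

/* Hide leaf-7.0 EDX bits the host KVM does not support from guests. */
static void mask_cpuid_7_0_edx(struct kvm_cpuid_entry2 *entry,
			       u32 kvm_supported_7_0_edx)
{
	if (entry->function == 7 && entry->index == 0)
		entry->edx &= kvm_supported_7_0_edx;
}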
-
He Chen authored
CVE-2017-5715 CVE-2017-5753 Sparsely populated CPUID leafs are collected in a software-provided leaf to avoid bloat of the x86_capability array, but there is no way to rebuild the real leafs (e.g. for KVM CPUID enumeration) other than rereading the CPUID leaf from the CPU. While this is possible, it is problematic as it does not take software-disabled features into account. If a feature is disabled on the host it should not be exposed to a guest either. Add get_scattered_cpuid_leaf() which rebuilds the leaf from the scattered cpuid table information and the active CPU features. [ tglx: Rewrote changelog ] Signed-off-by: He Chen <he.chen@linux.intel.com> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Luwei Kang <luwei.kang@intel.com> Cc: kvm@vger.kernel.org Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Piotr Luc <Piotr.Luc@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Paolo Bonzini <pbonzini@redhat.com> Link: http://lkml.kernel.org/r/1478856336-9388-3-git-send-email-he.chen@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> (backported from commit 47bdf337) Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com> Acked-by: Khalid Elmously <khalid.elmously@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
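[Editor's note] A simplified sketch of the helper this commit describes (struct cpuid_bit and the cpuid_bits[] scattered-feature table live in arch/x86/kernel/cpu/scattered.c; this sketch assumes their fields rather than reproducing them):

#include <linux/bitops.h>
#include <asm/processor.h>	/* boot_cpu_data, enum cpuid_regs_idx */

/* Rebuild a CPUID leaf from the scattered-feature table while
 * honouring features the kernel has disabled in software, instead of
 * re-reading raw CPUID from the hardware. */
u32 get_scattered_cpuid_leaf(unsigned int level, unsigned int sub_leaf,
			     enum cpuid_regs_idx reg)
{
	const struct cpuid_bit *cb;
	u32 cpuid_val = 0;

	for (cb = cpuid_bits; cb->feature; cb++) {
		if (level != cb->level)
			continue;
		if (reg == cb->reg && sub_leaf == cb->sub_leaf &&
		    cpu_has(&boot_cpu_data, cb->feature))	/* active features only */
			cpuid_val |= BIT(cb->bit);
	}
	return cpuid_val;
}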
-
Kleber Sacilotto de Souza authored
Ignore: yes Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
- 15 Jan, 2018 7 commits
-
-
Kleber Sacilotto de Souza authored
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Andy Whitcroft authored
BugLink: http://bugs.launchpad.net/bugs/1743383 Signed-off-by: Andy Whitcroft <apw@canonical.com> Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
Jim Mattson authored
CVE-2017-5753 CVE-2017-5715 commit 0cb5b306 upstream. Guest GPR values are live in the hardware GPRs at VM-exit. Do not leave any guest values in hardware GPRs after the guest GPR values are saved to the vcpu_vmx structure. This is a partial mitigation for CVE 2017-5715 and CVE 2017-5753. Specifically, it defeats the Project Zero PoC for CVE 2017-5715. Suggested-by: Eric Northup <digitaleric@google.com> Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Eric Northup <digitaleric@google.com> Reviewed-by: Benjamin Serebrin <serebrin@google.com> Reviewed-by: Andrew Honig <ahonig@google.com> [Paolo: Add AMD bits, Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andy Whitcroft <apw@canonical.com>
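[Editor's note] An illustrative sketch of the mitigation described above. The real change is inline assembly in the KVM vmx/svm exit paths; this x86-64 fragment only shows the idea of zeroing guest-controlled GPRs once they have been saved (clearing the 32-bit half zero-extends the full 64-bit register):

static inline void clear_guest_gprs(void)
{
	/* Runs after guest GPRs were saved to the vcpu structure, so
	 * no guest value stays live for a speculation gadget to use. */
	asm volatile(
		"xor %%r8d,  %%r8d\n\t"
		"xor %%r9d,  %%r9d\n\t"
		"xor %%r10d, %%r10d\n\t"
		"xor %%r11d, %%r11d\n\t"
		"xor %%r12d, %%r12d\n\t"
		"xor %%r13d, %%r13d\n\t"
		"xor %%r14d, %%r14d\n\t"
		"xor %%r15d, %%r15d"
		::: "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15");
}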
-
Andy Whitcroft authored
CVE-2017-5753 CVE-2017-5715 This reverts commit a1c61c3a. Signed-off-by: Andy Whitcroft <apw@canonical.com>
-
Andy Whitcroft authored
UBUNTU: SAUCE: x86/microcode: Extend post microcode reload to support IBPB feature -- repair mismerge CVE-2017-5753 CVE-2017-5715 Fix a mismerge leading to removal of the late_initcall(). Signed-off-by: Andy Whitcroft <apw@canonical.com>
-
Seth Forshee authored
CVE-2017-5754 hwsync was added as a mnemonic for sync in binutils 2.25; prior to that there was no support for hwsync. Replace uses of hwsync with sync to maintain compatibility with older binutils. Fixes: ee71154e ("UBUNTU: SAUCE: rfi-flush: Add barriers to the fallback L1D flushing") Acked-by: Kamal Mostafa <kamal@canonical.com> Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
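[Editor's note] A small sketch of the substitution: on powerpc, 'hwsync' is only an alternate mnemonic for 'sync', so the emitted instruction is identical while older assemblers stay happy:

/* Before (requires binutils >= 2.25): */
/* asm volatile("hwsync" ::: "memory"); */

/* After (same instruction, assembles with older binutils too): */
asm volatile("sync" ::: "memory");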
-
Kleber Sacilotto de Souza authored
Ignore: yes Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
-
- 12 Jan, 2018 15 commits
-
-
Marcelo Henrique Cerri authored
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Andy Whitcroft authored
CVE-2017-5753 CVE-2017-5715 Signed-off-by: Andy Whitcroft <apw@canonical.com>
-
Andy Whitcroft authored
CVE-2017-5753 CVE-2017-5715 Signed-off-by: Andy Whitcroft <apw@canonical.com>
-
Marcelo Henrique Cerri authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Spotted by Paul. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Spotted by Paul. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Nicholas Piggin authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Add a data dependency on loads for the fallback flush. This reduces or eliminates instances of incomplete flushing on P8 and P9. Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 We use the x86 'nopti' option because all the documentation on earth is going to refer to that, and we can guess what users mean when they specify it - they want to avoid any overhead due to Meltdown mitigations. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 To avoid a bug like the previous commit ever happening again, put the nops in a single place. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 When we expanded the number of nops, we forgot to expand it in HRFI_TO_UNKNOWN as well. The result is that we actually overwrite the rfid with a nop, which is not good. Luckily this is only used in denorm_done, which is not hit often. Spotted by Ram. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Since we now have three nops, we need to branch further to get over the nops to the branch to the fallback flush. Instead of putting the branch in slot 1 and branching by 8, put it in 0 and branch all the way to keep it simple. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Rather than assuming that a successful return from the hcall implies a valid flush type, use the fallback whenever the hcall doesn't select one of the known flush types. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
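[Editor's note] A sketch of the hardening described above; the flush-type names and constants are illustrative, not the actual powerpc definitions:

enum flush_type { FLUSH_FALLBACK, FLUSH_TYPE_ORI, FLUSH_TYPE_MTTRIG };	/* illustrative */
#define H_SUCCESS 0	/* pseries hcall success (simplified) */

static enum flush_type select_flush_type(long hcall_rc, unsigned long fw_type)
{
	if (hcall_rc != H_SUCCESS)
		return FLUSH_FALLBACK;

	switch (fw_type) {
	case FLUSH_TYPE_ORI:		/* known hardware-assisted types */
	case FLUSH_TYPE_MTTRIG:
		return (enum flush_type)fw_type;
	default:
		return FLUSH_FALLBACK;	/* unknown type: be conservative */
	}
}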
-
Michael Ellerman authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Assume we need to do the fallback flush, unless firmware tells us explicitly not to, by having the two needs-l1d-flush properties set to disabled. The previous logic assumed that the existence of a "fw-features" node with no further properties was sufficient to indicate the flush wasn't needed. This should make no difference in practice with current firmwares, because the "fw-features" node has only just been introduced, so there are no machines in the wild which have an empty "fw-features" node. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-
Balbir Singh authored
CVE-2017-5754 BugLink: http://bugs.launchpad.net/bugs/1742772 Add a hwsync after DCBT_STOP_ALL_STREAM_IDS to order loads/stores issued prior to stopping prefetch with the loads and stores that are part of the flushing. A lwsync is needed afterwards to ensure that we don't mix the flushing of one congruence class with another. Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
-