- 13 Jan, 2017 16 commits
-
-
Theodore Ts'o authored
[ Upstream commit 9e47a4c9 ] If the block size or cluster size is insane, reject the mount. This is important for security reasons (although we shouldn't be just depending on this check). Ref: http://www.securityfocus.com/archive/1/539661 Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1332506 Reported-by: Borislav Petkov <bp@alien8.de> Reported-by: Nikolay Borisov <kernel@kyup.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
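A minimal sketch of the mount-time bounds check described above, assuming the ext4 superblock field and limit macro names used upstream (es, s_log_block_size, EXT4_MIN/MAX_BLOCK_LOG_SIZE); not the verbatim patch:

    /* s_log_block_size stores log2(blocksize) - 10; reject anything past 64k */
    if (le32_to_cpu(es->s_log_block_size) >
        (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
            ext4_msg(sb, KERN_ERR, "Invalid log block size: %u",
                     le32_to_cpu(es->s_log_block_size));
            goto failed_mount;
    }

An equivalent bound is applied to the cluster size before either value is shifted into an actual block or cluster size.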
-
Mathias Nyman authored
[ Upstream commit 37be6676 ] USB-3 does not have any link state that will avoid negotiating a connection with a plugged-in cable but will signal the host when the cable is unplugged. For USB-3 we used to first set the link to Disabled, then to RxDetect to be able to detect cable connects or disconnects. But in RxDetect the connected device is detected again and eventually enabled. Instead set the link into U3 and disable remote wakeups for the device. This is what Windows does, and what Alan Stern suggested. Cc: stable@vger.kernel.org Cc: Alan Stern <stern@rowland.harvard.edu> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Larry Finger authored
[ Upstream commit 8052d724 ] When there is a CRC error in the SPROM read from the device, the code attempts to handle a fallback SPROM. When this also fails, the driver returns zero rather than an error code. Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Cc: Stable <stable@vger.kernel.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Vittorio Gambaletta (VittGam) authored
[ Upstream commit 79e57dd1 ] The active_high LED of my Wistron DNMA-92 is still being recognized as active_low on 4.7.6 mainline. When I was preparing my former commit 0f9edcdd ("ath9k: Fix LED polarity for some Mini PCI AR9220 MB92 cards.") to fix that I must have somehow messed up with testing, because I tested the final version of that patch before sending it, and it was apparently working; but now it is not working on 4.7.6 mainline. I initially added the PCI_DEVICE_SUB section for 0x0029/0x2096 above the PCI_VDEVICE section for 0x0029; but then I moved the former below the latter after seeing how 0x002A sections were sorted in the file. This turned out to be wrong: if a generic PCI_VDEVICE entry (that has both subvendor and subdevice IDs set to PCI_ANY_ID) is put before a more specific one (PCI_DEVICE_SUB), then the generic PCI_VDEVICE entry will match first and will be used. With this patch, 0x0029/0x2096 has finally got active_high LED on 4.7.6. While I'm at it, let's fix 0x002A too by also moving its generic definition below its specific ones. Fixes: 0f9edcdd ("ath9k: Fix LED polarity for some Mini PCI AR9220 MB92 cards.") Cc: <stable@vger.kernel.org> #4.7+ Signed-off-by: Vittorio Gambaletta <linuxbugs@vittgam.net> [kvalo@qca.qualcomm.com: improve the commit log based on email discussions] Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
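The ordering rule can be illustrated with a sketch of a pci_device_id table: matching walks the table top to bottom, so the catch-all PCI_VDEVICE entry (subvendor/subdevice = PCI_ANY_ID) has to come after the subsystem-specific PCI_DEVICE_SUB entries or it shadows them. The subvendor value below is illustrative, not copied from ath9k:

    static const struct pci_device_id example_id_table[] = {
        /* specific subsystem match first ... */
        { PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
                         0x0029,
                         0x185F,      /* subvendor (illustrative) */
                         0x2096),     /* subdevice */
          .driver_data = ATH9K_PCI_LED_ACT_HI },
        /* ... then the generic catch-all for the same device ID */
        { PCI_VDEVICE(ATHEROS, 0x0029) },
        { 0 }
    };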
-
Ian Abbott authored
[ Upstream commit 655c4d44 ] For NI M Series cards, the Comedi `insn_read` handler for the AI subdevice is broken due to ANDing the value read from the AI FIFO data register with an incorrect mask. The incorrect mask clears all but the most significant bit of the sample data. It should preserve all the sample data bits. Correct it. Fixes: 817144ae ("staging: comedi: ni_mio_common: remove unnecessary use of 'board->adbits'") Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Cc: <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
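The effect of the bad mask can be reproduced with a stand-alone C program; the register access and field names in the real driver differ, this only contrasts a single-bit mask with a full-width mask for a 16-bit sample:

    #include <stdio.h>

    int main(void)
    {
        unsigned int raw = 0xA5F3;      /* pretend value read from the AI FIFO */
        unsigned int bits = 16;         /* ADC sample width */

        unsigned int wrong_mask = 1u << (bits - 1);   /* keeps only the MSB */
        unsigned int right_mask = (1u << bits) - 1;   /* keeps every sample bit */

        printf("wrong: 0x%04x\n", raw & wrong_mask);  /* 0x8000 */
        printf("right: 0x%04x\n", raw & right_mask);  /* 0xa5f3 */
        return 0;
    }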
-
Chandan Rajendra authored
[ Upstream commit 30a9d7af ] The number of 'counters' elements needed in 'struct sg' is super_block->s_blocksize_bits + 2. Presently we have 16 'counters' elements in the array. This is insufficient for block sizes >= 32k. In such cases the memcpy operation performed in ext4_mb_seq_groups_show() would cause stack memory corruption. Fixes: c9de560d Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Chandan Rajendra authored
[ Upstream commit 69e43e8c ] The 'border' variable is set to a value of 2 times the block size of the underlying filesystem. With 64k block size, the resulting value won't fit into a 16-bit variable. Hence this commit changes the data type of 'border' to 'unsigned int'. Fixes: c9de560d Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Andreas Dilger <adilger@dilger.ca> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
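The truncation is easy to demonstrate outside the kernel (the real variable lives in ext4's mballoc code; the names here are generic):

    #include <stdio.h>

    int main(void)
    {
        unsigned int blocksize = 65536;           /* 64k filesystem block size */
        unsigned short border16 = 2 * blocksize;  /* old 16-bit type: wraps to 0 */
        unsigned int border32 = 2 * blocksize;    /* 'unsigned int': holds 131072 */

        printf("16-bit border: %u\n", border16);  /* prints 0 */
        printf("32-bit border: %u\n", border32);  /* prints 131072 */
        return 0;
    }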
-
Omar Sandoval authored
[ Upstream commit b4a567e8 ] ->queue_rq() should return one of the BLK_MQ_RQ_QUEUE_* constants, not an errno. Fixes: f4aa4c7b ("block: loop: convert to per-device workqueue") Signed-off-by: Omar Sandoval <osandov@fb.com> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
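A hedged sketch of what such a ->queue_rq() handler looks like after the change, using the BLK_MQ_RQ_QUEUE_* constants of that kernel generation (the loop driver's real per-request logic is elided):

    static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
                             const struct blk_mq_queue_data *bd)
    {
        /* ... per-request setup elided ... */

        if (failed_to_queue)                    /* illustrative placeholder */
            return BLK_MQ_RQ_QUEUE_ERROR;       /* not -EIO or another errno */

        return BLK_MQ_RQ_QUEUE_OK;
    }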
-
Alex Porosanu authored
[ Upstream commit d128af17 ] The AEAD givenc descriptor relies on moving the IV through the output FIFO and then back to the CTX2 for authentication. The SEQ FIFO STORE could be scheduled before the data can be read from OFIFO, especially since the SEQ FIFO LOAD needs to wait for the SEQ FIFO LOAD SKIP to finish first. The SKIP takes more time when the input is SG than when it's a contiguous buffer. If the SEQ FIFO LOAD is not scheduled before the STORE, the DECO will hang waiting for data to be available in the OFIFO so it can be transferred to C2. In order to overcome this, first force transfer of IV to C2 by starting the "cryptlen" transfer first and then starting to store data from OFIFO to the output buffer. Fixes: 1acebad3 ("crypto: caam - faster aead implementation") Cc: <stable@vger.kernel.org> # 3.2+ Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com> Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Linus Walleij authored
[ Upstream commit 295070e9 ] The regulator has never been properly enabled, it has been dormant all the time. It's strange that MMC was working at all, but it likely worked by the signals going through the levelshifter and reaching the card anyways. Fixes: 3615a34e ("regulator: add STw481x VMMC driver") Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Alan Stern authored
[ Upstream commit ccdb6be9 ] The UHCI controllers in Intel chipsets rely on a platform-specific non-PME mechanism for wakeup signalling. They can generate wakeup signals even though they don't support PME. We need to let the USB core know this so that it will enable runtime suspend for UHCI controllers. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CC: stable@vger.kernel.org Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Alan Stern authored
[ Upstream commit 6496ebd7 ] On some systems, the firmware does not allow certain PCI devices to be put in deep D-states. This can cause problems for wakeup signalling, if the device does not support PME# in the deepest allowed suspend state. For example, Pierre reports that on his system, ACPI does not permit his xHCI host controller to go into D3 during runtime suspend -- but D3 is the only state in which the controller can generate PME# signals. As a result, the controller goes into runtime suspend but never wakes up, so it doesn't work properly. USB devices plugged into the controller are never detected. If the device relies on PME# for wakeup signals but is not capable of generating PME# in the target state, the PCI core should accurately report that it cannot do wakeup from runtime suspend. This patch modifies the pci_dev_run_wake() routine to add this check. Reported-by: Pierre de Villemereuil <flyos@mailoo.org> Tested-by: Pierre de Villemereuil <flyos@mailoo.org> Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> CC: stable@vger.kernel.org CC: Lukas Wunner <lukas@wunner.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
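A sketch of the extra condition, assuming the helper names from drivers/pci/pci.c; treat it as an approximation of the patch rather than a verbatim copy:

    bool pci_dev_run_wake(struct pci_dev *dev)
    {
        /* ... existing platform run-wake checks elided ... */

        if (!dev->pme_support)
            return false;

        /* PME# capable in principle, but not from the target low-power state */
        if (!pci_pme_capable(dev, pci_target_state(dev)))
            return false;

        /* ... remaining bus walk elided ... */
        return true;
    }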
-
Kashyap Desai authored
[ Upstream commit 18e1c7f6 ] For SR-IOV enabled firmware, the driver should set the convert flag to 1 whenever an OCR (online controller reset) is possible, but this was not happening when commands were still outstanding even after 180 seconds. Because the driver left the convert flag unset while still letting the OCR run, the VF (virtual function) driver wrote directly to the register instead of waiting for 30 seconds. Setting the convert flag to 1 makes the VF driver wait 30 seconds before going for reset. CC: stable@vger.kernel.org Signed-off-by: Kiran Kumar Kasturi <kiran-kumar.kasturi@broadcom.com> Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Patrik Jakobsson authored
[ Upstream commit 0a97c81a ] Hook up drm_compat_ioctl to support 32-bit userspace on 64-bit kernels. It turns out that the N2600 and N2800 come with 64-bit enabled. We previously assumed there were no such systems out there. Cc: stable@vger.kernel.org Signed-off-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com> Signed-off-by: Sean Paul <seanpaul@chromium.org> Link: http://patchwork.freedesktop.org/patch/msgid/20161101144315.2955-1-patrik.r.jakobsson@gmail.com Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
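The hookup itself is typically a one-line addition to the driver's file_operations; a sketch (only drm_compat_ioctl is the point here, the surrounding fields and their names are approximate):

    static const struct file_operations psb_gem_fops = {
        .owner = THIS_MODULE,
        .open = drm_open,
        .release = drm_release,
        .unlocked_ioctl = psb_unlocked_ioctl,
    #ifdef CONFIG_COMPAT
        .compat_ioctl = drm_compat_ioctl,  /* lets 32-bit userspace talk to a 64-bit kernel */
    #endif
        .mmap = drm_gem_mmap,
    };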
-
Felipe Balbi authored
[ Upstream commit e8f29bb7 ] usb_endpoint_maxp() returns wMaxPacketSize in its raw form, without taking into consideration that it also contains other bits reserved for isochronous endpoints. This patch fixes one occasion where this is a problem by making sure that we initialize ep->maxpacket only with the lower 10 bits of the value returned by usb_endpoint_maxp(). Note that separate patches will be necessary to audit all call sites of usb_endpoint_maxp() and make sure that usb_endpoint_maxp() only returns the lower 10 bits of wMaxPacketSize. Cc: <stable@vger.kernel.org> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Chris Wilson authored
[ Upstream commit 5f12b80a ] We use obj->phys_handle to choose the pread/pwrite path, but as obj->phys_handle is a union with obj->userptr, we then mistakenly use the phys_handle path for userptr objects within pread/pwrite. Testcase: igt/gem_userptr_blits/forbidden-operations Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97519 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161003124516.12388-2-chris@chris-wilson.co.uk Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
- 12 Jan, 2017 1 commit
-
-
Paulo Zanoni authored
[ Upstream commit 0727e40a ] Bspec says: "The mailbox response data may not account for memory read latency. If the mailbox response data for level 0 is 0us, add 2 microseconds to the result for each valid level." This means we should only do the +2 in case wm[0] == 0, not always. So split the sanitizing implementation from the WA implementation and fix the WA implementation. v2: Add Fixes tag (Maarten). Fixes: 367294be ("drm/i915/gen9: Add 2us read latency to WM level") Cc: stable@vger.kernel.org Cc: Vandana Kannan <vandana.kannan@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1474578035-424-5-git-send-email-paulo.r.zanoni@intel.com Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
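In sketch form the change is to make the +2µs adjustment conditional on the level-0 value (wm[], level and max_level are assumed names; the real code operates on the read-back latency array in intel_pm.c):

    /* WaWmMemoryReadLatency: only compensate when the mailbox reports 0us for level 0 */
    if (wm[0] == 0) {
        for (level = 0; level <= max_level; level++)
            wm[level] += 2;
    }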
-
- 24 Dec, 2016 1 commit
-
-
Sasha Levin authored
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
- 23 Dec, 2016 19 commits
-
-
Sumit Saxena authored
[ Upstream commit 5e5ec175 ] This patch will fix a regression caused by commit 1e793f6f ("scsi: megaraid_sas: Fix data integrity failure for JBOD (passthrough) devices"). The problem was that the MEGASAS_IS_LOGICAL macro did not have braces and as a result the driver ended up exposing a lot of non-existing SCSI devices (all SCSI commands to channels 1,2,3 were returned as SUCCESS-DID_OK by driver). [mkp: clarified patch description] Fixes: 1e793f6f Reported-by: Jens Axboe <axboe@kernel.dk> CC: stable@vger.kernel.org Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Tested-by: Sumit Saxena <sumit.saxena@broadcom.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Tested-by: Jens Axboe <axboe@fb.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
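The class of bug (a macro whose expansion is not parenthesized) can be reproduced with a stand-alone program; the macro below is illustrative, not the driver's actual definition:

    #include <stdio.h>

    /* unparenthesized ternary: misbehaves inside larger expressions */
    #define IS_LOGICAL_BAD(ch)  (ch) > 3 ? 1 : 0
    /* fixed form: the whole expansion is wrapped in parentheses */
    #define IS_LOGICAL_GOOD(ch) (((ch) > 3) ? 1 : 0)

    int main(void)
    {
        int channel = 1;

        /* '!' binds only to "(ch)", so the test collapses to a constant 0 */
        printf("bad:  %d\n", !IS_LOGICAL_BAD(channel));   /* 0 for any channel */
        /* with proper parentheses '!' negates the real result */
        printf("good: %d\n", !IS_LOGICAL_GOOD(channel));  /* 1 for channel 1 */
        return 0;
    }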
-
Michal Kubeček authored
[ Upstream commit 3de81b75 ] Qian Zhang (张谦) reported a potential socket buffer overflow in tipc_msg_build() which is also known as CVE-2016-8632: due to insufficient checks, a buffer overflow can occur if MTU is too short for even tipc headers. As anyone can set device MTU in a user/net namespace, this issue can be abused by a regular user. As agreed in the discussion on Ben Hutchings' original patch, we should check the MTU at the moment a bearer is attached rather than for each processed packet. We also need to repeat the check when bearer MTU is adjusted to new device MTU. UDP case also needs a check to avoid overflow when calculating bearer MTU. References: CVE-2016-8632 Fixes: b97bf3fd ("[TIPC] Initial merge") Signed-off-by: Michal Kubecek <mkubecek@suse.cz> Reported-by: Qian Zhang (张谦) <zhangqian-c@360.cn> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> Conflicts: net/tipc/bearer.c net/tipc/bearer.h due to 1a90632d ("tipc: eliminate remnants of hungarian notation") and b1c29f6b ("tipc: simplify resetting and disabling of bearers") Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Kees Cook authored
[ Upstream commit 0eab121e ] Prior to commit c0371da6 ("put iov_iter into msghdr") in v3.19, there was no check that the iovec contained enough bytes for an ICMP header, and the read loop would walk across neighboring stack contents. Since the iov_iter conversion, bad arguments are noticed, but the returned error is EFAULT. Returning EINVAL is a clearer error and also solves the problem prior to v3.19. This was found using trinity with KASAN on v3.18: BUG: KASAN: stack-out-of-bounds in memcpy_fromiovec+0x60/0x114 at addr ffffffc071077da0 Read of size 8 by task trinity-c2/9623 page:ffffffbe034b9a08 count:0 mapcount:0 mapping: (null) index:0x0 flags: 0x0() page dumped because: kasan: bad access detected CPU: 0 PID: 9623 Comm: trinity-c2 Tainted: G BU 3.18.0-dirty #15 Hardware name: Google Tegra210 Smaug Rev 1,3+ (DT) Call trace: [<ffffffc000209c98>] dump_backtrace+0x0/0x1ac arch/arm64/kernel/traps.c:90 [<ffffffc000209e54>] show_stack+0x10/0x1c arch/arm64/kernel/traps.c:171 [< inline >] __dump_stack lib/dump_stack.c:15 [<ffffffc000f18dc4>] dump_stack+0x7c/0xd0 lib/dump_stack.c:50 [< inline >] print_address_description mm/kasan/report.c:147 [< inline >] kasan_report_error mm/kasan/report.c:236 [<ffffffc000373dcc>] kasan_report+0x380/0x4b8 mm/kasan/report.c:259 [< inline >] check_memory_region mm/kasan/kasan.c:264 [<ffffffc00037352c>] __asan_load8+0x20/0x70 mm/kasan/kasan.c:507 [<ffffffc0005b9624>] memcpy_fromiovec+0x5c/0x114 lib/iovec.c:15 [< inline >] memcpy_from_msg include/linux/skbuff.h:2667 [<ffffffc000ddeba0>] ping_common_sendmsg+0x50/0x108 net/ipv4/ping.c:674 [<ffffffc000dded30>] ping_v4_sendmsg+0xd8/0x698 net/ipv4/ping.c:714 [<ffffffc000dc91dc>] inet_sendmsg+0xe0/0x12c net/ipv4/af_inet.c:749 [< inline >] __sock_sendmsg_nosec net/socket.c:624 [< inline >] __sock_sendmsg net/socket.c:632 [<ffffffc000cab61c>] sock_sendmsg+0x124/0x164 net/socket.c:643 [< inline >] SYSC_sendto net/socket.c:1797 [<ffffffc000cad270>] SyS_sendto+0x178/0x1d8 net/socket.c:1761 CVE-2016-8399 Reported-by: Qidan He <i@flanker017.me> Fixes: c319b4d7 ("net: ipv4: add IPPROTO_ICMP socket kind") Cc: stable@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
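A sketch of the added check, placed before the ICMP header is copied out of the iovec (struct and field usage follows net/ipv4/ping.c conventions but is not guaranteed verbatim):

    /* must cover at least a full ICMP header, otherwise building the header
     * would read past the caller-supplied buffer; -EINVAL names the real
     * problem (bad argument) instead of the misleading -EFAULT */
    if (len < sizeof(struct icmphdr))
        return -EINVAL;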
-
Philip Pettersson authored
[ Upstream commit 84ac7260 ] When packet_set_ring creates a ring buffer it will initialize a struct timer_list if the packet version is TPACKET_V3. This value can then be raced by a different thread calling setsockopt to set the version to TPACKET_V1 before packet_set_ring has finished. This leads to a use-after-free on a function pointer in the struct timer_list when the socket is closed as the previously initialized timer will not be deleted. The bug is fixed by taking lock_sock(sk) in packet_setsockopt when changing the packet version while also taking the lock at the start of packet_set_ring. References: CVE-2016-8655 Fixes: f6fb8f10 ("af-packet: TPACKET_V3 flexible buffer implementation.") Signed-off-by: Philip Pettersson <philip.pettersson@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
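A sketch of the described locking in the PACKET_VERSION branch of packet_setsockopt(); the shape mirrors the fix but is abbreviated:

    case PACKET_VERSION:
    {
        /* ... copying 'val' from userspace elided ... */
        lock_sock(sk);
        if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
            ret = -EBUSY;          /* a ring already exists: refuse to switch */
        } else {
            po->tp_version = val;
            ret = 0;
        }
        release_sock(sk);
        return ret;
    }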
-
Sabrina Dubroca authored
[ Debian: net-add-recursion-limit-to-gro.patch ] Currently, GRO can do unlimited recursion through the gro_receive handlers. This was fixed for tunneling protocols by limiting tunnel GRO to one level with encap_mark, but both VLAN and TEB still have this problem. Thus, the kernel is vulnerable to a stack overflow, if we receive a packet composed entirely of VLAN headers. This patch adds a recursion counter to the GRO layer to prevent stack overflow. When a gro_receive function hits the recursion limit, GRO is aborted for this skb and it is processed normally. Thanks to Vladimír Beneš <vbenes@redhat.com> for the initial bug report. Fixes: CVE-2016-7039 Fixes: 9b174d88 ("net: Add Transparent Ethernet Bridging GRO support.") Fixes: 66e5133f ("vlan: Add GRO support for non hardware accelerated vlan") Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Jiri Benc <jbenc@redhat.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
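A sketch of the recursion guard the patch introduces: a per-skb counter in the GRO control block that every nested gro_receive call bumps, aborting GRO for the skb once a fixed depth is exceeded (names follow the upstream patch but should be read as approximate):

    #define GRO_RECURSION_LIMIT 15

    static inline int gro_recursion_inc_test(struct sk_buff *skb)
    {
        return ++NAPI_GRO_CB(skb)->recursion_counter == GRO_RECURSION_LIMIT;
    }

    typedef struct sk_buff **(*gro_receive_t)(struct sk_buff **, struct sk_buff *);

    static inline struct sk_buff **call_gro_receive(gro_receive_t cb,
                                                    struct sk_buff **head,
                                                    struct sk_buff *skb)
    {
        if (unlikely(gro_recursion_inc_test(skb))) {
            NAPI_GRO_CB(skb)->flush |= 1;   /* too deep: flush and fall back to normal rx */
            return NULL;
        }
        return cb(head, skb);
    }

Protocol layers such as VLAN and TEB then invoke their lower gro_receive handler through call_gro_receive() instead of calling it directly.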
-
Jaganath Kanakkassery authored
[ Upstream commit 951b6a07 ] addr can be NULL and it should not be dereferenced before NULL checking. References: CVE-2015-8956 Signed-off-by: Jaganath Kanakkassery <jaganath.k@samsung.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Jann Horn authored
[ bugfix/all/ptrace-being-capable-wrt-a-process-requires-mapped-uids-gids.patch ] ptrace_has_cap() checks whether the current process should be treated as having a certain capability for ptrace checks against another process. Until now, this was equivalent to has_ns_capability(current, target_ns, CAP_SYS_PTRACE). However, if a root-owned process wants to enter a user namespace for some reason without knowing who owns it and therefore can't change to the namespace owner's uid and gid before entering, as soon as it has entered the namespace, the namespace owner can attach to it via ptrace and thereby gain access to its uid and gid. While it is possible for the entering process to switch to the uid of a claimed namespace owner before entering, causing the attempt to enter to fail if the claimed uid is wrong, this doesn't solve the problem of determining an appropriate gid. With this change, the entering process can first enter the namespace and then safely inspect the namespace's properties, e.g. through /proc/self/{uid_map,gid_map}, assuming that the namespace owner doesn't have access to uid 0. Changed in v2: The caller needs to be capable in the namespace into which tcred's uids/gids can be mapped. References: CVE-2015-8709 References: https://lkml.org/lkml/2015/12/25/71 Signed-off-by: Jann Horn <jann@thejh.net> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Dan Carpenter authored
[ Upstream commit 7bc2b55a ] We need to put an upper bound on "user_len" so the memcpy() doesn't overflow. References: CVE-2016-7425 Cc: <stable@vger.kernel.org> Reported-by: Marco Grassi <marco.gra@gmail.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
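A sketch of the bound, using the buffer-length constant from the arcmsr ioctl path (read the names as approximate):

    if (user_len > ARCMSR_API_DATA_BUFLEN) {
        retvalue = ARCMSR_MESSAGE_FAIL;   /* reject before any memcpy() of user_len bytes */
        goto message_out;
    }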
-
Eric W. Biederman authored
[ Upstream commit d2921684 ] CAI Qian <caiqian@redhat.com> pointed out that the semantics of shared subtrees make it possible to create an exponentially increasing number of mounts in a mount namespace: mkdir /tmp/1 /tmp/2; mount --make-rshared /; for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done will create 2^20 or 1048576 mounts, which is a practical problem as some people have managed to hit this by accident. As such CVE-2016-6213 was assigned. Ian Kent <raven@themaw.net> described the situation for autofs users as follows: > The number of mounts for direct mount maps is usually not very large because of > the way they are implemented, large direct mount maps can have performance > problems. There can be anywhere from a few (likely case a few hundred) to less > than 10000, plus mounts that have been triggered and not yet expired. > > Indirect mounts have one autofs mount at the root plus the number of mounts that > have been triggered and not yet expired. > > The number of autofs indirect map entries can range from a few to the common > case of several thousand and in rare cases up to between 30000 and 50000. I've > not heard of people with maps larger than 50000 entries. > > The larger the number of map entries the greater the possibility for a large > number of active mounts so it's not hard to expect cases of a 1000 or somewhat > more active mounts. So I am setting the default number of mounts allowed per mount namespace at 100,000. This is more than enough for any use case I know of, but small enough to quickly stop an exponential increase in mounts. Which should be perfect to catch misconfigurations and malfunctioning programs. For anyone who needs a higher limit this can be changed by writing to the new /proc/sys/fs/mount-max sysctl. Tested-by: CAI Qian <caiqian@redhat.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Conflicts: fs/namespace.c kernel/sysctl.c Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Jan Kara authored
[ Upstream commit 07393101 ] When file permissions are modified via chmod(2) and the user is not in the owning group or capable of CAP_FSETID, the setgid bit is cleared in inode_change_ok(). Setting a POSIX ACL via setxattr(2) sets the file permissions as well as the new ACL, but doesn't clear the setgid bit in a similar way; this allows to bypass the check in chmod(2). Fix that. References: CVE-2016-7097 Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
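The series adds a helper that computes the new mode from the ACL and strips the setgid bit under the same conditions chmod(2) would; a sketch of that logic, assuming posix_acl_equiv_mode() and the capability helpers behave as in mainline:

    int posix_acl_update_mode(struct inode *inode, umode_t *mode_p,
                              struct posix_acl **acl)
    {
        umode_t mode = inode->i_mode;
        int error;

        error = posix_acl_equiv_mode(*acl, &mode);
        if (error < 0)
            return error;
        if (error == 0)
            *acl = NULL;              /* ACL is equivalent to plain mode bits */
        if (!in_group_p(inode->i_gid) &&
            !capable_wrt_inode_uidgid(inode, CAP_FSETID))
            mode &= ~S_ISGID;         /* same rule chmod(2) applies */
        *mode_p = mode;
        return 0;
    }

Filesystem ACL setxattr handlers then call this helper instead of setting the mode directly.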
-
Jan Kara authored
[ Upstream commit 030b533c ] Currently, notify_change() clears capabilities or IMA attributes by calling security_inode_killpriv() before calling into ->setattr. Thus it happens before any other permission checks in inode_change_ok() and user is thus allowed to trigger clearing of capabilities or IMA attributes for any file he can look up e.g. by calling chown for that file. This is unexpected and can lead to user DoSing a system. Fix the problem by calling security_inode_killpriv() at the end of inode_change_ok() instead of from notify_change(). At that moment we are sure user has permissions to do the requested change. References: CVE-2015-1350 Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Jan Kara authored
[ Upstream commit 31051c85 ] inode_change_ok() will be responsible for clearing capabilities and IMA extended attributes and as such will need a dentry. Give it as an argument to inode_change_ok() instead of an inode. Also rename inode_change_ok() to setattr_prepare() to better reflect that it also does some modifications in addition to checks. References: CVE-2015-1350 Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Andreas Gruenbacher authored
NFSv2 can set the atime and/or mtime of a file to specific timestamps but not to the server's current time. To implement the equivalent of utimes("file", NULL), it uses a heuristic. NFSv3 and later do support setting the atime and/or mtime to the server's current time directly. The NFSv2 heuristic is still enabled, and causes timestamps to be set wrong sometimes. Fix this by moving the heuristic into the NFSv2 specific code. We can leave it out of the create code path: the owner can always set timestamps arbitrarily, and the workaround would never trigger. References: CVE-2015-1350 Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Jan Kara authored
[ Upstream commit 62490330 ] To avoid clearing of capabilities or security related extended attributes too early, inode_change_ok() will need to take dentry instead of inode. Propagate it down to fuse_do_setattr(). References: CVE-2015-1350 Acked-by: Miklos Szeredi <mszeredi@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Conflicts: Missing file_dentry() from d101a125 Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Jan Kara authored
[ Upstream commit 69bca807 ] To avoid clearing of capabilities or security related extended attributes too early, inode_change_ok() will need to take dentry instead of inode. Propagate dentry down to functions calling inode_change_ok(). This is rather straightforward except for xfs_set_mode() function which does not have dentry easily available. Luckily that function does not call inode_change_ok() anyway so we just have to do a little dance with function prototypes. References: CVE-2015-1350 Acked-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz> Conflicts: Missing file_dentry() from d101a125 Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Andreas Dilger authored
mbcache provides absolutely no value for Lustre xattrs (because they are unique and cannot be shared between files) and as we can see it has a noticeable overhead in some cases. In the past there was a CONFIG_MBCACHE option that would allow it to be disabled, but this was removed in newer kernels, so we will need to patch ldiskfs to fix this. References: <https://bugzilla.kernel.org/show_bug.cgi?id=107301> References: <https://git.hpdd.intel.com/fs/lustre-release.git/blob_plain/HEAD:/ldiskfs/kernel_patches/patches/rhel7/ext4-disable-mb-cache.patch> References: CVE-2015-8952 On 13.12.2016 at 15:58 Ben Hutchings wrote: > I decided not to apply this as it's a userland ABI extension that we > would then need to carry indefinitely. Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Eric Dumazet authored
[ Upstream commit bb1fceca ] When tcp_sendmsg() allocates a fresh and empty skb, it puts it at the tail of the write queue using tcp_add_write_queue_tail(). Then it attempts to copy user data into this fresh skb. If the copy fails, we undo the work and remove the fresh skb. Unfortunately, this undo lacks the change done to tp->highest_sack and we can leave a dangling pointer (to a freed skb). Later, tcp_xmit_retransmit_queue() can dereference this pointer and access freed memory. For regular kernels where memory is not unmapped, this might cause SACK bugs because tcp_highest_sack_seq() is buggy, returning garbage instead of tp->snd_nxt, but with various debug features like CONFIG_DEBUG_PAGEALLOC, this can crash the kernel. This bug was found by Marco Grassi thanks to syzkaller. Fixes: 6859d494 ("[TCP]: Abstract tp->highest_sack accessing & point to next skb") Reported-by: Marco Grassi <marco.gra@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> References: CVE-2016-6828 Signed-off-by: Philipp Hahn <hahn@univention.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
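A sketch of the repair: when the freshly queued skb is unlinked after the copy failure, also clear tp->highest_sack if it points at that skb (based on the tcp_check_send_head() helper; details approximate):

    static inline void tcp_check_send_head(struct sock *sk,
                                           struct sk_buff *skb_unlinked)
    {
        if (sk->sk_send_head == skb_unlinked)
            sk->sk_send_head = NULL;
        /* don't leave a dangling pointer to the about-to-be-freed skb */
        if (tcp_sk(sk)->highest_sack == skb_unlinked)
            tcp_sk(sk)->highest_sack = NULL;
    }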
-
Sebastian Andrzej Siewior authored
[ Upstream commit 90944e40 ] If the gcc is configured to do -fPIE by default then the build aborts later with: | Unsupported relocation type: unknown type rel type name (29) Tagging it stable so it is possible to compile recent stable kernels as well. Cc: stable@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Michal Marek <mmarek@suse.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
Sebastian Andrzej Siewior authored
[ Upstream commit 82031ea2 ] Adding -no-PIE to the fstack protector check. -no-PIE was introduced before -fstack-protector so there is no need for a runtime check. Without it the build stops: |Cannot use CONFIG_CC_STACKPROTECTOR_STRONG: -fstack-protector-strong available but compiler is broken due to -mcmodel=kernel + -fPIE if -fPIE is enabled by default. Tagging it stable so it is possible to compile recent stable kernels as well. Cc: stable@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Michal Marek <mmarek@suse.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
-
- 22 Dec, 2016 3 commits
-
-
Andy Lutomirski authored
[ Upstream commit e1bfc11c ] cr4_init_shadow() will panic on 486-like machines without CR4. Fix it using __read_cr4_safe(). Reported-by: david@saggiorato.net Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: 1e02ce4c ("x86: Store a per-cpu shadow copy of CR4") Link: http://lkml.kernel.org/r/43a20f81fb504013bf613913dc25574b45336a61.1475091074.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
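The fix amounts to seeding the shadow copy with the fault-tolerant CR4 read; a sketch with the helper names of that era's arch/x86 code:

    static inline void cr4_init_shadow(void)
    {
        /* __read_cr4() faults on 486-class CPUs without CR4; the _safe
         * variant returns 0 there instead of killing the boot */
        this_cpu_write(cpu_tlbstate.cr4, __read_cr4_safe());
    }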
-
Roger Quadros authored
[ Upstream commit d248220f ] Since commit 6ce0d200 ("ARM: dma: Use dma_pfn_offset for dma address translation"), dma_to_pfn() already returns the PFN with the physical memory start offset so we don't need to add it again. This fixes USB mass storage lock-up problem on systems that can't do DMA over the entire physical memory range (e.g.) Keystone 2 systems with 4GB RAM can only do DMA over the first 2GB. [K2E-EVM]. What happens there is that without this patch SCSI layer sets a wrong bounce buffer limit in scsi_calculate_bounce_limit() for the USB mass storage device. dma_max_pfn() evaluates to 0x8fffff and bounce_limit is set to 0x8fffff000 whereas maximum DMA'ble physical memory on Keystone 2 is 0x87fffffff. This results in non DMA'ble pages being given to the USB controller and hence the lock-up. NOTE: in the above case, USB-SCSI-device's dma_pfn_offset was showing as 0. This should have really been 0x780000 as on K2e, LOWMEM_START is 0x80000000 and HIGHMEM_START is 0x800000000. DMA zone is 2GB so dma_max_pfn should be 0x87ffff. The incorrect dma_pfn_offset for the USB storage device is because USB devices are not correctly inheriting the dma_pfn_offset from the USB host controller. This will be fixed by a separate patch. Fixes: 6ce0d200 ("ARM: dma: Use dma_pfn_offset for dma address translation") Cc: stable@vger.kernel.org Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Walleij <linus.walleij@linaro.org> Reported-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
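A sketch of the corrected ARM helper (the exact form in the tree may differ slightly):

    static inline unsigned long dma_max_pfn(struct device *dev)
    {
        /* previously: PHYS_PFN_OFFSET + dma_to_pfn(dev, *dev->dma_mask);
         * dma_to_pfn() already folds in dev->dma_pfn_offset, so adding the
         * start-of-memory PFN again overstated the reachable range and let
         * non-DMA'ble pages through the bounce-limit check */
        return dma_to_pfn(dev, *dev->dma_mask);
    }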
-
zhong jiang authored
[ Upstream commit 5b398e41 ] I hit the following hung task when running an OOM LTP test case with 4.1 kernel. Call trace: [<ffffffc000086a88>] __switch_to+0x74/0x8c [<ffffffc000a1bae0>] __schedule+0x23c/0x7bc [<ffffffc000a1c09c>] schedule+0x3c/0x94 [<ffffffc000a1eb84>] rwsem_down_write_failed+0x214/0x350 [<ffffffc000a1e32c>] down_write+0x64/0x80 [<ffffffc00021f794>] __ksm_exit+0x90/0x19c [<ffffffc0000be650>] mmput+0x118/0x11c [<ffffffc0000c3ec4>] do_exit+0x2dc/0xa74 [<ffffffc0000c46f8>] do_group_exit+0x4c/0xe4 [<ffffffc0000d0f34>] get_signal+0x444/0x5e0 [<ffffffc000089fcc>] do_signal+0x1d8/0x450 [<ffffffc00008a35c>] do_notify_resume+0x70/0x78 The oom victim cannot terminate because it needs to take mmap_sem for write while the lock is held by ksmd for read which loops in the page allocator ksm_do_scan scan_get_next_rmap_item down_read get_next_rmap_item alloc_rmap_item #ksmd will loop permanently. There is no way forward because the oom victim cannot release any memory in 4.1 based kernel. Since 4.6 we have the oom reaper which would solve this problem because it would release the memory asynchronously. Nevertheless we can relax alloc_rmap_item requirements and use __GFP_NORETRY because the allocation failure is acceptable as ksm_do_scan would just retry later after the lock got dropped. Such a patch would be also easy to backport to older stable kernels which do not have oom_reaper. While we are at it add __GFP_NOWARN so the admin doesn't have to be alarmed by the allocation failure. Link: http://lkml.kernel.org/r/1474165570-44398-1-git-send-email-zhongjiang@huawei.com Signed-off-by: zhong jiang <zhongjiang@huawei.com> Suggested-by: Hugh Dickins <hughd@google.com> Suggested-by: Michal Hocko <mhocko@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
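The change itself is the allocation-flag tweak in ksm's alloc_rmap_item(); a sketch (cache and counter names as in mm/ksm.c, treated as assumptions):

    static inline struct rmap_item *alloc_rmap_item(void)
    {
        struct rmap_item *rmap_item;

        /* __GFP_NORETRY: fail fast under memory pressure instead of looping
         * in the allocator while mmap_sem is held for read;
         * __GFP_NOWARN: an occasional failure here is expected and harmless */
        rmap_item = kmem_cache_zalloc(rmap_item_cache,
                                      GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
        if (rmap_item)
            ksm_rmap_items++;
        return rmap_item;
    }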
-