- 05 Oct, 2013 40 commits
-
-
Alex Deucher authored
commit 99d79aa2 upstream. When dpm was merged, I added a new asic struct for rv6xx, but it never got properly updated when the hdmi callbacks were added due to the two patch sets being developed in parallel. Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=69729 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 4a1132a0 upstream. The tests are only usable if the acceleration engines have been successfully initialized. Based on an initial patch from: Alex Ivanov <gnidorah@p0n4ik.tk> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Ivanov authored
commit 0eb3448a upstream. Prevent a NULL pointer dereference in the case where radeon_ring_fini() has already done its job. Reading the r100_cp_ring_info and radeon_ring_gfx debugfs entries will lead to a kernel panic if the ring buffer was deallocated, e.g. on a failed ring test. Seen on a PA-RISC machine hitting the "radeon: ring test failed (scratch(0x8504)=0xCAFEDEAD)" issue. v2: agd5f: add some parens around ring->ready check Signed-off-by: Alex Ivanov <gnidorah@p0n4ik.tk> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
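As an illustration of the guard described here (a sketch, not the actual patch; the stand-in struct mirrors the relevant 3.x radeon_ring fields):

    #include <linux/types.h>
    #include <linux/seq_file.h>

    /* Minimal stand-in for the driver's ring structure (illustration only). */
    struct ring_sketch {
        bool ready;     /* set once the ring test has passed */
        u32 *ring;      /* NULL after radeon_ring_fini() tore the buffer down */
    };

    static int ring_info_show_sketch(struct ring_sketch *ring, struct seq_file *m)
    {
        /* Bail out before touching the ring contents if it was deallocated. */
        if (!(ring->ready && ring->ring)) {
            seq_puts(m, "ring not ready\n");
            return 0;
        }
        /* ... safe to dump read/write pointers and ring contents here ... */
        return 0;
    }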
-
Alex Deucher authored
commit 4ca5a6cb upstream. If the user has forced the driver to use the internal GPU gart rather than AGP on an AGP card, force the buffers to vram as well. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jani Nikula authored
commit 8d16f258 upstream. There are no clear-cut rules or specs for the retry interval, as there are many factors that affect overall response time. Increase the interval, and even more so on branch devices which may have limited i2c bit rates. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Reference: https://bugs.freedesktop.org/show_bug.cgi?id=60263 Tested-by: Nicolas Suzor <nic@suzor.com> Reviewed-by: Todd Previte <tprevite@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit 67c72a12 upstream. This regression has been introduced in

commit 9f11a9e4
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu Jun 13 00:54:58 2013 +0200

    drm/i915: set up PIPECONF explicitly for i9xx/vlv platforms

Ville brought up the idea that this is just the pipe A quirk gone wrong. Note that after resume the bios might or might not have enabled pipe A already. We have a bit of magic to make sure that on resume we set up a decent mode for pipe A, but I fear if I just smash pipe A to always on we'd enable it in a bogus state and hang the hw. Hence the readback. v2: Clarify the logic a bit as suggested by Chris. Also amend the commit message to clarify why we don't unconditionally enable the pipe. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66462 References: https://lkml.org/lkml/2013/8/26/238 Cc: Meelis Roos <mroos@ut.ee> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> [danvet: Use |= instead of = as suggested by Chris.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
NeilBrown authored
commit 3f6bbd3f upstream. This doesn't really need to be initialised, but it doesn't hurt, silences the compiler, and as it is a counter it makes sense for it to start at zero. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mike Snitzer authored
commit f84cb8a4 upstream. Workaround the SCSI layer's problematic WRITE SAME heuristics by disabling WRITE SAME in the DM multipath device's queue_limits if an underlying device disabled it. The WRITE SAME heuristics, with both the original commit 5db44863 ("[SCSI] sd: Implement support for WRITE SAME") and the updated commit 66c28f97 ("[SCSI] sd: Update WRITE SAME heuristics"), default to enabling WRITE SAME(10) even without successfully determining it is supported. After the first failed WRITE SAME the SCSI layer will disable WRITE SAME for the device (by setting sdkp->device->no_write_same, which results in 'max_write_same_sectors' in the device's queue_limits being set to 0).

When a device is stacked on top of such a SCSI device, any changes to that SCSI device's queue_limits do not automatically propagate up the stack. As such, a DM multipath device will not have its WRITE SAME support disabled. This causes the block layer to continue to issue WRITE SAME requests to the mpath device, which causes paths to fail and (if mpath IO isn't configured to queue when no paths are available) results in actual IO errors to the upper layers.

This fix doesn't help configurations that have additional devices stacked on top of the mpath device (e.g. LVM-created linear DM devices on top). A proper fix that restacks all the queue_limits from the bottom of the device stack up will need to be explored if SCSI continues to use this model of optimistically allowing op codes and then disabling them after they fail for the first time.

Before this patch:

EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
device-mapper: multipath: XXX snitm debugging: failing WRITE SAME IO with error=-121
end_request: critical target error, dev dm-6, sector 528
dm-6: WRITE SAME failed. Manually zeroing.
device-mapper: multipath: Failing path 8:112.
end_request: I/O error, dev dm-6, sector 4616
dm-6: WRITE SAME failed. Manually zeroing.
end_request: I/O error, dev dm-6, sector 4616
end_request: I/O error, dev dm-6, sector 5640
end_request: I/O error, dev dm-6, sector 6664
end_request: I/O error, dev dm-6, sector 7688
end_request: I/O error, dev dm-6, sector 524288
Buffer I/O error on device dm-6, logical block 65536
lost page write due to I/O error on dm-6
JBD2: Error -5 detected when updating journal superblock for dm-6-8.
end_request: I/O error, dev dm-6, sector 524296
Aborting journal on device dm-6-8.
end_request: I/O error, dev dm-6, sector 524288
Buffer I/O error on device dm-6, logical block 65536
lost page write due to I/O error on dm-6
JBD2: Error -5 detected when updating journal superblock for dm-6-8.

# cat /sys/block/sdh/queue/write_same_max_bytes
0
# cat /sys/block/dm-6/queue/write_same_max_bytes
33553920

After this patch:

EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
device-mapper: multipath: XXX snitm debugging: WRITE SAME I/O failed with error=-121
end_request: critical target error, dev dm-6, sector 528
dm-6: WRITE SAME failed. Manually zeroing.

# cat /sys/block/sdh/queue/write_same_max_bytes
0
# cat /sys/block/dm-6/queue/write_same_max_bytes
0

It should be noted that WRITE SAME support wasn't enabled in DM multipath until v3.10. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: Hannes Reinecke <hare@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
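The general shape of such a check -- walking a target's underlying devices and dropping WRITE SAME from the stacked limits when any of them has it disabled -- might look roughly like this (a sketch only; the function names are illustrative, not the dm-table code itself):

    #include <linux/device-mapper.h>
    #include <linux/blkdev.h>

    /* Returns nonzero if this underlying device has WRITE SAME disabled. */
    static int device_no_write_same(struct dm_target *ti, struct dm_dev *dev,
                                    sector_t start, sector_t len, void *data)
    {
        struct request_queue *q = bdev_get_queue(dev->bdev);

        return q && !q->limits.max_write_same_sectors;
    }

    /* If any device underneath lacks WRITE SAME, drop it from the limits
     * that will be stacked onto the DM device's queue. */
    static void disable_write_same_if_unsupported(struct dm_target *ti,
                                                  struct queue_limits *limits)
    {
        if (ti->type->iterate_devices &&
            ti->type->iterate_devices(ti, device_no_write_same, NULL))
            limits->max_write_same_sectors = 0;
    }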
-
Mikulas Patocka authored
commit 60e356f3 upstream. LVM2, since version 2.02.96, creates origin with zero size, then loads the snapshot driver and then loads the origin. Consequently, the snapshot driver sees the origin size zero and sets the hash size to the lower bound 64. Such small hash table causes performance degradation. This patch changes it so that the hash size is determined by the size of snapshot volume, not minimum of origin and snapshot size. It doesn't make sense to set the snapshot size significantly larger than the origin size, so we do not need to take origin size into account when calculating the hash size. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
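To make the sizing arithmetic concrete, a minimal sketch (cap value and helper name are assumptions, not the dm-snapshot code): the hash size is derived from the snapshot (COW) device alone, so a zero-sized origin no longer collapses it to the 64-bucket lower bound.

    #include <linux/types.h>
    #include <linux/kernel.h>
    #include <linux/log2.h>

    static unsigned long exception_hash_size(sector_t cow_dev_size,
                                             unsigned int chunk_shift)
    {
        sector_t hash_size = cow_dev_size >> chunk_shift;

        hash_size = min_t(sector_t, hash_size, 1 << 20); /* illustrative cap */
        hash_size = max_t(sector_t, hash_size, 64);      /* existing lower bound */
        return rounddown_pow_of_two(hash_size);          /* power-of-two buckets */
    }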
-
Mikulas Patocka authored
commit 5ea330a7 upstream. The kernel reports a lockdep warning if a snapshot is invalidated because it runs out of space. The lockdep warning was triggered by commit 0976dfc1 ("workqueue: Catch more locking problems with flush_work()") in v3.5. The warning is false positive. The real cause for the warning is that the lockdep engine treats different instances of md->lock as a single lock. This patch is a workaround - we use flush_workqueue instead of flush_work. This code path is not performance sensitive (it is called only on initialization or invalidation), thus it doesn't matter that we flush the whole workqueue. The real fix for the problem would be to teach the lockdep engine to treat different instances of md->lock as separate locks. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Acked-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
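Schematically, the workaround swaps a per-item flush for a whole-queue flush on this slow path (a sketch; the work item name in the comment is a placeholder):

    #include <linux/workqueue.h>

    /* Illustrative only: flush_work() on this path trips a false-positive
     * lockdep report, while flush_workqueue() waits for everything queued on
     * the workqueue and avoids it. Since this runs only on initialization or
     * invalidation, flushing the whole queue is cheap enough. */
    static void drain_pending_callbacks(struct workqueue_struct *wq)
    {
        /* was: flush_work(&some_work);  -- triggers the lockdep warning */
        flush_workqueue(wq);
    }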
-
Benson Leung authored
commit f123db8e upstream. The put_device(dev) at the bottom of the loop of device_shutdown may result in the dev being cleaned up. In device_create_release, the dev is kfreed. However, device_shutdown attempts to use the dev pointer again after put_device by referring to dev->parent. Copy the parent pointer instead to avoid this condition. This bug was found on Chromium OS's chromeos-3.8, which is based on v3.8.11. See bug report: https://code.google.com/p/chromium/issues/detail?id=297842 This can easily be reproduced when shutting down with hidraw devices that report battery condition. Two examples are the HP Bluetooth Mouse X4000b and the Apple Magic Mouse. For example, with the magic mouse: the dev in question is "hidraw0" and dev->parent is "magicmouse". In the course of the shutdown for this device, the input event cleanup calls a put on hidraw0, decrementing its reference count. When we finally get to put_device(dev) in device_shutdown, kobject_cleanup is called and device_create_release does kfree(dev). dev->parent is no longer valid, and we may crash in put_device(dev->parent). This change should be applied on any kernel containing commit d1c6c030. Signed-off-by: Benson Leung <bleung@chromium.org> Reviewed-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
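A minimal sketch of the pattern being described (not the actual device_shutdown() code): take a reference on the parent before the child's final put, so the later put never chases a dangling dev->parent.

    #include <linux/device.h>

    static void shutdown_one_device(struct device *dev)
    {
        /* get_device(NULL) returns NULL, put_device(NULL) is a no-op. */
        struct device *parent = get_device(dev->parent);

        if (dev->driver && dev->driver->shutdown)
            dev->driver->shutdown(dev);

        put_device(dev);     /* dev may be freed from here on */
        put_device(parent);  /* safe: uses the saved, referenced pointer */
    }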
-
Kurt Garloff authored
commit 831abf76 upstream. Trying to read data from the Pegasus Technologies NoteTaker (0e20:0101) [1] with the Windows App (EasyNote) works natively but fails when Windows is running under KVM (and the USB device is handed to KVM). The reason is a USB control message

usb 4-2.2: control urb: bRequestType=22 bRequest=09 wValue=0200 wIndex=0001 wLength=0008

This goes to endpoint address 0x01 (wIndex); however, endpoint address 0x01 does not exist. There is an endpoint 0x81 though (same number, but other direction); the app may have meant that endpoint instead. The kernel thus rejects the IO and we see the failure. Apparently, Linux is more strict here than Windows ... we can't change the Win app easily, so that's a problem. It seems that the Win app/driver is buggy here and the driver does not behave fully according to the USB HID class spec that it claims to belong to. The device seems to happily deal with that though (and seems to not really care about this value much). So the question is whether the Linux kernel should filter here. Rejecting has the risk that somewhat non-compliant userspace apps/drivers (most likely in a virtual machine) are prevented from working. Not rejecting has the risk of confusing an overly sensitive device with such a transfer. Given the fact that Windows does not filter it, this risk is rather small though. The patch makes the kernel more tolerant: if the endpoint address in wIndex does not exist, but an endpoint with the toggled direction bit does, it will let the transfer through. (It does NOT change the message.) With the attached patch, the app in Windows in KVM works.

usb 4-2.2: check_ctrlrecip: process 13073 (qemu-kvm) requesting ep 01 but needs 81

I suspect this will mostly affect apps in virtual environments, as on Linux the apps would have been adapted to the stricter handling of the kernel. I have done that for mine [2]. [1] http://www.pegatech.com/ [2] https://sourceforge.net/projects/notetakerpen/ Signed-off-by: Kurt Garloff <kurt@garloff.de> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
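Schematically, the tolerance amounts to retrying the endpoint lookup with the direction bit flipped (a sketch only; the real logic lives in usbfs' check_ctrlrecip(), and the lookup helper here is passed in rather than invented):

    #include <linux/usb.h>
    #include <linux/usb/ch9.h>
    #include <linux/errno.h>

    static int endpoint_ok(struct usb_device *udev, unsigned int index,
                           bool (*ep_exists)(struct usb_device *, unsigned int))
    {
        if (ep_exists(udev, index))
            return 0;
        /* Tolerate e.g. 0x01 when only 0x81 exists: flip the direction bit. */
        if (ep_exists(udev, index ^ USB_DIR_IN))
            return 0;   /* let the transfer through; message is unchanged */
        return -ENOENT;
    }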
-
David Cohen authored
commit 85601f8c upstream. Add PCI id for Intel Merrifield Signed-off-by: David Cohen <david.a.cohen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Heikki Krogerus authored
commit b62cd96d upstream. Add PCI id for Intel BayTrail. Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ramneek Mehresh authored
commit ad1260e9 upstream. For controller versions greater than 1.6, setting the ULPI_PHY_CLK_SEL bit when the USB_EN bit is already set causes instability issues with the PHY_CLK_VLD bit. So USB_EN is set before setting the ULPI_PHY_CLK_SEL bit only for IP controller versions below 1.6. Signed-off-by: Ramneek Mehresh <ramneek.mehresh@freescale.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Al Viro authored
commit 2606b28a upstream. There's a bunch of failure exits in ffs_fs_mount() with seriously broken recovery logics. Most of that appears to stem from misunderstanding of the ->kill_sb() semantics; unlike ->put_super() it is called for *all* superblocks of given type, no matter how (in)complete the setup had been. ->put_super() is called only if ->s_root is not NULL; any failure prior to setting ->s_root will have the call of ->put_super() skipped. ->kill_sb(), OTOH, awaits every superblock that has come from sget().

Current behaviour of ffs_fs_mount(): We have struct ffs_sb_fill_data data on stack there. We do

    ffs_dev = functionfs_acquire_dev_callback(dev_name);

and store that in data.private_data. Then we call mount_nodev(), passing it ffs_sb_fill() as a callback. That will either fail outright, or manage to call ffs_sb_fill(). There we allocate an instance of struct ffs_data, slap the value of ffs_dev (picked from data.private_data) into ffs->private_data and overwrite data.private_data by storing ffs into an overlapping member (data.ffs_data). Then we store ffs into sb->s_fs_info and attempt to set the rest of the things up (root inode, root dentry, then create /ep0 there). Any of those might fail. Should that happen, we get ffs_fs_kill_sb() called before mount_nodev() returns. If mount_nodev() fails for any reason whatsoever, we proceed to

    functionfs_release_dev_callback(data.ffs_data);

That's broken in a lot of ways. Suppose the thing has failed in allocation of e.g. root inode or dentry. We have

    functionfs_release_dev_callback(ffs);
    ffs_data_put(ffs);

done by ffs_fs_kill_sb() (ffs accessed via sb->s_fs_info), followed by

    functionfs_release_dev_callback(ffs);

from ffs_fs_mount() (via data.ffs_data). Note that the second functionfs_release_dev_callback() has every chance to be done to freed memory.

Suppose we fail *before* root inode allocation. What happens then? ffs_fs_kill_sb() doesn't do anything to ffs (it's either not called at all, or it doesn't have a pointer to ffs stored in sb->s_fs_info). And functionfs_release_dev_callback(data.ffs_data); is called by ffs_fs_mount(), but here we are in nasal daemon country - we are reading from a member of union we'd never stored into. In practice, we'll get what we used to store into the overlapping field, i.e. ffs_dev. And then we get screwed, since we treat it (struct gfs_ffs_obj * in disguise, returned by functionfs_acquire_dev_callback()) as struct ffs_data *, pick what would've been ffs_data ->private_data from it (*well* past the actual end of the struct gfs_ffs_obj - struct ffs_data is much bigger) and poke in whatever it points to.

FWIW, there's a minor leak on top of all that in case if ffs_sb_fill() fails on kstrdup() - ffs is obviously forgotten.

The thing is, there is no point in playing all those games with union. Just allocate and initialize ffs_data *before* calling mount_nodev() and pass a pointer to it via data.ffs_data. And once it's stored in sb->s_fs_info, clear data.ffs_data, so that ffs_fs_mount() knows that it doesn't need to kill the sucker manually - from that point on we'll have it done by ->kill_sb().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alan Stern authored
commit bef073b0 upstream. Commit 24f53137 (USB: EHCI: accept very late isochronous URBs) changed the isochronous API provided by ehci-hcd. URBs submitted too late, so that the time slots for all their packets have already expired, are no longer rejected outright. Instead the submission is accepted, and the URB completes normally with a -EXDEV error for each packet. This is what client drivers expect. This patch implements the same policy in uhci-hcd. It should be applied to all kernels containing commit c44b2250 (UHCI: implement new semantics for URB_ISO_ASAP). Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
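The completion policy for a too-late isochronous URB boils down to the following (a schematic sketch, not uhci-hcd's actual code; the function name is illustrative):

    #include <linux/usb.h>
    #include <linux/errno.h>

    /* Accept the late submission, then complete every packet with -EXDEV,
     * which is the behaviour client drivers expect from ehci-hcd. */
    static void complete_expired_iso_urb(struct urb *urb)
    {
        int i;

        for (i = 0; i < urb->number_of_packets; i++) {
            urb->iso_frame_desc[i].status = -EXDEV;
            urb->iso_frame_desc[i].actual_length = 0;
        }
        urb->actual_length = 0;
        /* The HCD then gives the URB back via the normal completion path. */
    }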
-
Alan Stern authored
commit a8693424 upstream. Commit 24f53137 (USB: EHCI: accept very late isochronous URBs) changed the isochronous API provided by ehci-hcd. URBs submitted too late, so that the time slots for all their packets have already expired, are no longer rejected outright. Instead the submission is accepted, and the URB completes normally with a -EXDEV error for each packet. This is what client drivers expect. This patch implements the same policy in ohci-hcd. The change is more complicated than it was in ehci-hcd, because ohci-hcd doesn't scan for isochronous completions in the same way as ehci-hcd does. Rather, it depends on the hardware adding completed TDs to a "done queue". Some OHCI controllers don't handle this properly when a TD's time slot has already expired, so we have to avoid adding such TDs to the schedule in the first place. As a result, if the URB was submitted too late then none of its TDs will get put on the schedule, so none of them will end up on the done queue, so the driver will never realize that the URB should be completed. To solve this problem, the patch adds one to urb_priv->td_cnt for such URBs, making it larger than urb_priv->length (td_cnt already gets set to the number of TDs that had to be skipped because their slots have expired). Each time an URB is given back, the finish_urb() routine looks to see if urb_priv->td_cnt for the next URB on the same endpoint is marked in this way. If so, it gives back the next URB right away. This should be applied to all kernels containing commit 815fa7b9 (USB: OHCI: fix logic for scheduling isochronous URBs). Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Florian Wolter authored
commit 526867c3 upstream. The halted state of an endpoint cannot be cleared over CLEAR_HALT from a user process, because the stopped_td variable was overwritten in the handle_stopped_endpoint() function. So the xhci_endpoint_reset() function will refuse the reset, and communication with the device cannot run over this endpoint. https://bugzilla.kernel.org/show_bug.cgi?id=60699 Signed-off-by: Florian Wolter <wolly84@web.de> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Cc: Jonghwan Choi <jhbird.choi@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alan Stern authored
commit f875fdbf upstream. Since uhci-hcd, ehci-hcd, and xhci-hcd support runtime PM, the .pm field in their pci_driver structures should be protected by CONFIG_PM rather than CONFIG_PM_SLEEP. The corresponding change has already been made for ohci-hcd. Without this change, controllers won't do runtime suspend if system suspend or hibernation isn't enabled. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> CC: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
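The fix amounts to changing which config symbol guards the dev_pm_ops hookup; a sketch of the registration (the driver struct name is a placeholder, while usb_hcd_pci_pm_ops is the shared USB PCI PM ops set):

    #include <linux/pci.h>
    #include <linux/usb/hcd.h>

    static struct pci_driver example_hcd_pci_driver = {
        .name   = "example-hcd",
    #ifdef CONFIG_PM            /* was: #ifdef CONFIG_PM_SLEEP */
        .driver = {
            .pm = &usb_hcd_pci_pm_ops,
        },
    #endif
    };

With the old CONFIG_PM_SLEEP guard, a kernel built with runtime PM but without system suspend/hibernation lost the .pm pointer entirely, so the controller never runtime-suspended.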
-
Mathias Nyman authored
commit 284d2055 upstream. When a command times out, the command ring is first aborted, and then stopped. If the command ring is empty when it is stopped, the stop event will point to the next command, which is not yet set. xHCI tries to handle this next event, often causing an oops. Don't handle command completion events on a stopped command ring if the ring is empty. This patch should be backported to kernels as old as 3.7, that contain the commit b92cc66c "xHCI: add aborting command ring function" Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Reported-by: Giovanni <giovanni.nervi@yahoo.com> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mathias Nyman authored
commit ec7e43e2 upstream. If a command on the command ring needs to be cancelled before it is handled, it can be turned into a no-op operation when the ring is stopped. We want to store the command ring enqueue pointer in the command structure when the command is enqueued, for the cancellation case. Some commands used to store the command ring dequeue pointer instead of the enqueue pointer (these often worked because enqueue happens to equal dequeue quite often). Other commands correctly used the enqueue pointer but did not check if it pointed to a valid trb or a link trb; this caused, for example, the stop endpoint command to time out in xhci_stop_device() in about 2% of suspend/resume cases. This should also solve some weird behavior happening in command cancellation cases. This patch is based on a patch submitted by Sarah Sharp to linux-usb, but then forgotten: http://marc.info/?l=linux-usb&m=136269803207465&w=2 This patch should be backported to kernels as old as 3.7, that contain the commit b92cc66c "xHCI: add aborting command ring function" Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit 1062b815 upstream. The native TV encoder has its own flags to adjust sync modes and enable interlaced modes which are totally irrelevant for the adjusted mode. This worked out nicely since the input modes used by both the load detect code and reported in the ->get_modes callbacks all have no flags set, and we also don't fill out any of them in the ->get_config callback. This changed with the additional sanitation done with

commit 2960bc9c
Author: Imre Deak <imre.deak@intel.com>
Date: Tue Jul 30 13:36:32 2013 +0300

    drm/i915: make user mode sync polarity setting explicit

since now the "no flags at all" state wouldn't fit through core code any more. So fix this up again by explicitly clearing the flags in the ->compute_config callback. Aside: We have zero checking in place to make sure that the requested mode is indeed the right input mode we want for the selected TV mode. So we'll happily fall over if userspace tries to trick us. But that's definitely work for a different patch series. So just add a FIXME comment for now. Reported-by: Knut Petersen <Knut_Petersen@t-online.de> Cc: Knut Petersen <Knut_Petersen@t-online.de> Cc: Imre Deak <imre.deak@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Knut Petersen <Knut_Petersen@t-online.de> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit 5e8c3d3e upstream. Don't allow entry to iwctl_siwencodeext if device not open. This fixes a race condition where wpa supplicant/network manager enters the function when the device is already closed. Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Malcolm Priestley authored
commit e3eb270f upstream. The vt6656 is prone to resetting on the USB bus. It seems there is a race condition where wpa supplicant is trying to open the device via the iw_handlers before it is actually closed, at a stage when the buffers are being removed. The device should no longer be considered open while the buffers are being removed, so move the clearing of DEVICE_FLAGS_OPENED (~DEVICE_FLAGS_OPENED) to before the device buffers are freed. Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ard Biesheuvel authored
commit 40190c85 upstream. Patch 638591cd enabled building the AES assembler code in Thumb2 mode. However, this code used arithmetic involving PC rather than adr{l} instructions to generate PC-relative references to the lookup tables, and this needs to take into account the different PC offset when running in Thumb mode. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 19b85cfb upstream. Fix a tty_kref leak when tty_buffer_request_room() fails in the dma-rx path. Note that the tty ref isn't really needed anymore, but as the leak has always been there, fixing it before removing the ref should make it easier to backport the fix. Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
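The leak pattern and its fix look roughly like this (a sketch, not the serial driver's actual code; the flip-buffer API shown matches 3.9+ kernels, where it takes a tty_port):

    #include <linux/tty.h>
    #include <linux/tty_flip.h>

    static void dma_rx_push_sketch(struct tty_port *port,
                                   const unsigned char *buf, int count)
    {
        struct tty_struct *tty = tty_port_tty_get(port);  /* takes a kref */

        if (!tty)
            return;

        if (tty_buffer_request_room(port, count) < count)
            goto out;  /* old code returned here, leaking the kref */

        tty_insert_flip_string(port, buf, count);
        tty_flip_buffer_push(port);
    out:
        tty_kref_put(tty);  /* every exit path must reach this */
    }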
-
Johan Hovold authored
commit fc0919c6 upstream. Fix tty-kref leak introduced by commit 384e301e ("pch_uart: fix a deadlock when pch_uart as console") which never put its tty reference. Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit cfd29aa0 upstream. Fix potential tty-kref leak in stop_rx path. Signed-off-by: Johan Hovold <jhovold@gmail.com> Tested-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Hurley authored
commit 5cec7bf6 upstream. Commit e7f3880c ("tty: Fix recursive deadlock in tty_perform_flush()") introduced a regression where tcflush() does not generate SIGTTOU for background process groups. Make sure ioctl(TCFLSH) calls tty_check_change() when invoked from the line discipline. Reported-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
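Schematically, the line-discipline ioctl path asks tty_check_change() before doing the flush (a sketch only; the handler name is a placeholder and the flush body is elided):

    #include <linux/tty.h>
    #include <linux/errno.h>

    static int ldisc_ioctl_sketch(struct tty_struct *tty, unsigned int cmd,
                                  unsigned long arg)
    {
        int retval;

        switch (cmd) {
        case TCFLSH:
            retval = tty_check_change(tty);  /* raises SIGTTOU for bg groups */
            if (retval)
                return retval;
            /* ... flush the read/write queues as requested by arg ... */
            return 0;
        default:
            return -ENOIOCTLCMD;
        }
    }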
-
Alexander Usyskin authored
commit 4a704575 upstream. Unset init_clients_timer and amthif_stall_timers in mei_reset in order to cancel timer ticking and hence avoid recursive reset calls. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Winkler authored
commit e2b31644 upstream. The bus layer omitted a check for a client state transition while waiting for read completion. The client state transition may occur, for example, as a result of a firmware-initiated reset. Add a mei_cl_is_transitioning wrapper to reduce code repetition. Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tomas Winkler authored
commit 1aee351a upstream.
1. u8 counters are prone to hard-to-detect overflow: make them unsigned long to match the bit_* functions' argument type.
2. Don't check me_clients_num for negativity; it is unsigned.
3. Init all the me client counters from one place.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
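A small sketch of the type rationale (structure and field names are illustrative, not the mei driver's): the bitops helpers work on arrays of unsigned long, and a u8 counter silently wraps at 256.

    #include <linux/types.h>
    #include <linux/bitmap.h>

    #define MAX_ME_CLIENTS 256

    struct me_client_book {
        DECLARE_BITMAP(map, MAX_ME_CLIENTS); /* array of unsigned long */
        unsigned long num;                   /* was u8: wrapped at 256 */
    };

    /* Initialize all the counters from one place. */
    static void me_client_book_init(struct me_client_book *book)
    {
        bitmap_zero(book->map, MAX_ME_CLIENTS);
        book->num = 0;
    }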
-
Josh Boyer authored
commit 70087011 upstream. Add patch to fix 32bit EFI service mapping (rhbz 726701). Multiple people are reporting hitting the following WARNING on i386,

WARNING: at arch/x86/mm/ioremap.c:102 __ioremap_caller+0x3d3/0x440()
Modules linked in:
Pid: 0, comm: swapper Not tainted 3.9.0-rc7+ #95
Call Trace:
 [<c102b6af>] warn_slowpath_common+0x5f/0x80
 [<c1023fb3>] ? __ioremap_caller+0x3d3/0x440
 [<c1023fb3>] ? __ioremap_caller+0x3d3/0x440
 [<c102b6ed>] warn_slowpath_null+0x1d/0x20
 [<c1023fb3>] __ioremap_caller+0x3d3/0x440
 [<c106007b>] ? get_usage_chars+0xfb/0x110
 [<c102d937>] ? vprintk_emit+0x147/0x480
 [<c1418593>] ? efi_enter_virtual_mode+0x1e4/0x3de
 [<c102406a>] ioremap_cache+0x1a/0x20
 [<c1418593>] ? efi_enter_virtual_mode+0x1e4/0x3de
 [<c1418593>] efi_enter_virtual_mode+0x1e4/0x3de
 [<c1407984>] start_kernel+0x286/0x2f4
 [<c1407535>] ? repair_env_string+0x51/0x51
 [<c1407362>] i386_start_kernel+0x12c/0x12f

Due to the workaround described in commit 916f676f ("x86, efi: Retain boot service code until after switching to virtual mode") EFI Boot Service regions are mapped for a period during boot. Unfortunately, with the limited size of the i386 direct kernel map it's possible that some of the Boot Service regions will not be directly accessible, which causes them to be ioremap()'d, triggering the above warning as the regions are marked as E820_RAM in the e820 memmap.

There are currently only two situations where we need to map EFI Boot Service regions,

1. To workaround the firmware bug described in 916f676f
2. To access the ACPI BGRT image

but since we haven't seen an i386 implementation that requires either, this simple fix should suffice for now. [ Added to changelog - Matt ] Reported-by: Bryan O'Donoghue <bryan.odonoghue.lkml@nexus-software.ie> Acked-by: Tom Zanussi <tom.zanussi@intel.com> Acked-by: Darren Hart <dvhart@linux.intel.com> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Josh Boyer <jwboyer@redhat.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vinson Lee authored
commit ce7eebe5 upstream. The compilation only looks for linux/magic.h from the default include paths, which do not include the source tree. This results in a build error if linux/magic.h is not available or not installed. For example, this build error occurs on CentOS 5.

$ make -C tools/lib/lk V=1
[...]
gcc -o debugfs.o -c -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6 -D_FORTIFY_SOURCE=2 -Wbad-function-cast -Wdeclaration-after-statement -Wformat-security -Wformat-y2k -Winit-self -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wno-system-headers -Wold-style-definition -Wpacked -Wredundant-decls -Wshadow -Wstrict-aliasing=3 -Wstrict-prototypes -Wswitch-default -Wswitch-enum -Wundef -Wwrite-strings -Wformat -fPIC -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 debugfs.c
debugfs.c:8:25: error: linux/magic.h: No such file or directory

The only symbol from linux/magic.h needed by debugfs.c is DEBUGFS_MAGIC, and that is already defined in debugfs.h. linux/magic.h isn't providing any extra symbols and can be dropped from the includes. This is similar to the approach taken by perf, which has its own magic.h wrapper at tools/perf/util/include/linux/magic.h. Signed-off-by: Vinson Lee <vlee@twitter.com> Acked-by: Borislav Petkov <bp@suse.de> Cc: Borislav Petkov <bp@suse.de> Cc: Vinson Lee <vlee@freedesktop.org> Link: http://lkml.kernel.org/r/1379546200-17028-1-git-send-email-vlee@freedesktop.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
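For context, the kind of check debugfs.c performs can be reproduced in a small standalone userspace program (a sketch, not the tools/lib/lk source): statfs() a path and compare f_type against a locally defined DEBUGFS_MAGIC, so no linux/magic.h is required.

    #include <stdio.h>
    #include <sys/vfs.h>

    #ifndef DEBUGFS_MAGIC
    #define DEBUGFS_MAGIC 0x64626720   /* same value the kernel uses */
    #endif

    int main(void)
    {
        struct statfs fs;

        if (statfs("/sys/kernel/debug", &fs) < 0) {
            perror("statfs");
            return 1;
        }
        printf("debugfs %smounted\n",
               fs.f_type == DEBUGFS_MAGIC ? "" : "not ");
        return 0;
    }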
-
Masoud Sharbiani authored
commit 4f0acd31 upstream. Dell PowerEdge C6100 machines fail to completely reboot about 20% of the time. Signed-off-by: Masoud Sharbiani <msharbiani@twitter.com> Signed-off-by: Vinson Lee <vlee@twitter.com> Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Link: http://lkml.kernel.org/r/1379717947-18042-1-git-send-email-vlee@freedesktop.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
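The usual shape of such an x86 reboot quirk is a DMI match that switches the machine to the PCI reboot method; a sketch only (the DMI strings, table name and callback here are assumptions, not a copy of the patch):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/dmi.h>

    static int __init force_pci_reboot(const struct dmi_system_id *d)
    {
        pr_info("%s detected: using reboot=pci\n", d->ident);
        /* the real x86 quirk table entry would invoke set_pci_reboot(d) here */
        return 0;
    }

    static const struct dmi_system_id reboot_quirks_sketch[] __initconst = {
        {
            .callback = force_pci_reboot,
            .ident = "Dell PowerEdge C6100",
            .matches = {
                DMI_MATCH(DMI_SYS_VENDOR, "Dell"),
                DMI_MATCH(DMI_PRODUCT_NAME, "C6100"),
            },
        },
        { }  /* terminator */
    };

    static int __init reboot_quirk_sketch_init(void)
    {
        dmi_check_system(reboot_quirks_sketch);
        return 0;
    }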
-
Kent Overstreet authored
commit c0f04d88 upstream. In writeback mode, when we get a cache flush we need to make sure we issue a flush to the backing device. The code for sending down an extra flush was wrong - by cloning the bio we were probably getting flags that didn't make sense for a bare flush, and also the old code was firing for FUA bios, for which we don't need to send a flush to the backing device. This was causing data corruption somehow - the mechanism was never determined, but this patch fixes it for the users that were seeing it. Signed-off-by: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kent Overstreet authored
commit 84786438 upstream. btree_sort_fixup() was overly clever, because it was trying to avoid pulling a key off the btree iterator in more than one place. This led to a really obscure bug where we'd break early from the loop in btree_sort_fixup() if the current key overlapped with keys in more than one older set, and the next key it overlapped with was zero size. Signed-off-by: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kent Overstreet authored
commit a698e08c upstream. GFP_NOIO means we could be getting called recursively - mca_alloc() -> mca_data_alloc() - definitely can't use mutex_lock(bucket_lock) then. Whoops. Signed-off-by: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
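One common way to keep such a recursive allocation path safe is a trylock-and-bail pattern; a sketch of the idea under placeholder names (the actual bcache fix may be structured differently):

    #include <linux/mutex.h>
    #include <linux/slab.h>

    struct cache_sketch {
        struct mutex bucket_lock;
    };

    static void *alloc_under_lock_sketch(struct cache_sketch *c, gfp_t gfp)
    {
        void *obj;

        /* GFP_NOIO reclaim can re-enter this path while the lock is held,
         * so a blocking mutex_lock() here risks self-deadlock. */
        if (!mutex_trylock(&c->bucket_lock))
            return NULL;

        obj = kzalloc(64, gfp);  /* stand-in for the real node allocation */

        mutex_unlock(&c->bucket_lock);
        return obj;
    }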
-
Kent Overstreet authored
commit 79e3dab9 upstream. schedule_timeout() != schedule_timeout_uninterruptible() Signed-off-by: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
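For reference, the difference the one-liner is about (a minimal sketch, not the bcache code): plain schedule_timeout() does not change the task state, so calling it while still TASK_RUNNING returns immediately instead of sleeping.

    #include <linux/sched.h>
    #include <linux/jiffies.h>

    static void sleep_one_second_sketch(void)
    {
        /* Buggy: schedule_timeout(HZ) alone returns right away because the
         * task is still TASK_RUNNING. */

        /* Correct, explicit form: */
        set_current_state(TASK_UNINTERRUPTIBLE);
        schedule_timeout(HZ);

        /* Equivalent helper that sets the state itself: */
        schedule_timeout_uninterruptible(HZ);
    }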
-