- 08 Sep, 2010 36 commits
-
-
Chris Wilson authored
Just makes sure that writes are not being aliased by the CPU cache and do make it out to main memory. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=24977 Cc: stable@kernel.org
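A minimal, hedged illustration of the kind of flush this implies (not the actual patch; the call site and object fields are assumed):

    /* Flush the CPU cachelines covering the object's backing pages so the
     * writes reach main memory before the GPU reads them.
     * drm_clflush_pages() is the existing DRM helper for this. */
    drm_clflush_pages(obj_priv->pages, obj->size / PAGE_SIZE);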
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
... take advantage of the new implicit request issuing of i915_wait_request. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
One caller (for the pageflip support) wants a purely pipelined flush. Distinguish this case by a new parameter. This will also be useful later on for pipelined fencing. v2: Simplify the code by depending upon the implicit request emitting of i915_wait_request. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> [ickle: And drop the non-interruptible support in the process.] Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
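A hedged sketch of the shape of this change (the body is simplified and assumed, not copied from the patch):

    /* The flush helper gains a pipelined flag: emit the GPU flush, but only
     * block for the rendering to complete when the caller is not purely
     * pipelining. */
    static int
    i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj,
                                           bool pipelined)
    {
            if ((obj->write_domain & I915_GEM_GPU_DOMAINS) == 0)
                    return 0;

            i915_gem_flush(obj->dev, 0, obj->write_domain);

            if (pipelined)
                    return 0;       /* the flush is queued in the ring */

            return i915_gem_object_wait_rendering(obj);
    }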
-
Daniel Vetter authored
By moving one i915_add_request we can solely depend on the new auto-seqno-numbering behaviour. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
i915_gem_object_move_to_active can handle zero seqno for us now. And not emitting a request is not fatal here - we'll try to emit a new one if we have to wait for some rendering to complete. In case this assumption ever gets accidentally broken, there's already a BUG_ON to catch it in i915_do_wait_request. So just silently ignore ENOMEM here instead of screwing up the whole drm. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
... instead of threading flush_domains through the execbuf code to i915_add_request. With this change two small cleanups are possible (and they make up the majority of the patch): - The flush_domains parameter of i915_add_request is always 0. Drop it and the corresponding logic. - Ditto for the seqno param of i915_gem_process_flushing_list. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
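A hypothetical before/after of the interface change (parameter lists simplified; any other arguments the function takes are omitted):

    /* before: flush_domains threaded all the way from execbuf */
    uint32_t i915_add_request(struct drm_device *dev,
                              struct drm_file *file_priv,
                              uint32_t flush_domains);

    /* after: the value is always 0 by now, so the parameter and its
     * handling simply go away */
    uint32_t i915_add_request(struct drm_device *dev,
                              struct drm_file *file_priv);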
-
Daniel Vetter authored
Previously I thought that one interrupt per batchbuffer should be enough. Tedious benchmarking has now shown this to be wrong. Therefore track whether any commands have been issued with a future seqno (like pipelined fencing changes or flushes). If this is the case, emit a request before issuing the batchbuffer. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
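A hedged sketch of the bookkeeping this describes (the flag name and call sites are assumed, not taken from the patch):

    /* Somewhere in the pipelined paths: remember that commands were queued
     * against the next, not-yet-emitted seqno. */
    dev_priv->mm.outstanding_lazy_request = true;

    /* In execbuffer, before dispatching the batch: turn that pending state
     * into a real request so those commands get a seqno and irq. */
    if (dev_priv->mm.outstanding_lazy_request) {
            (void)i915_add_request(dev, file_priv);
            dev_priv->mm.outstanding_lazy_request = false;
    }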
-
Daniel Vetter authored
Now that we can move objects to the active list without already having emitted a request, move the flushing list handling into i915_gem_flush. This makes more sense and allows us to drop a few i915_add_request calls that are not strictly necessary. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
Sometimes (like when flushing in preparation for batchbuffer execution) we know that we'll emit a request but haven't yet done so. Allow this case by simply taking the next seqno by default. Ensure that a request is eventually emitted before waiting on it by issuing one in i915_wait_request iff this has not yet been done. Also replace one open-coded version of i915_gem_object_wait_rendering, to prevent future code divergence. Chris Wilson asked me to explain and clarify what this patch does and why. Here it goes:

The old way of moving objects onto the active list and associating them with a request:
1. i915_add_request + store the returned seqno somewhere
2. i915_gem_object_move_to_active (with the stored seqno as parameter)

For the current users, this is all fine. But I'd like to associate objects (and fence regs) with the batchbuffer request deep down in the execbuf call-chain. I thought about three ways of implementing this:
a) Don't care, just emit a request whenever we need a new seqno. When heavily pipelining fence reg changes, this would have caused tons of superfluous requests (and corresponding irqs).
b) Thread all changed fences, objects, whatever through the execbuf maze, so that when we emit a request, we can store the new seqno at all the right places.
c) Kill the seqno-threading-around business by simply storing the next seqno, i.e. allow 2. to be done before 1. in the above sequence.

I've decided to implement c) (in this patch). The following patches are just fall-out from this small conceptual change:
* We can handle the flushing list processing where we actually emit a flush (i915_gem_flush and i915_retire_commands) instead of in i915_add_request. The code IMHO makes more sense this way (and i915_add_request loses the flush_domains parameter, obviously).
* We can avoid emitting unnecessary requests. IMHO there's no point in emitting more than one request per batchbuffer (with or without a corresponding irq).
* By enforcing the 2-before-1 ordering in the above sequence, the seqno argument of i915_gem_object_move_to_active is redundant and can be dropped.

v2: i915_wait_request now issues the request if it has not yet been emitted. Also introduce i915_gem_next_request_seqno(dev), just in case we ever need to do some prep work before using a new seqno. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> [ickle: Keep i915_gem_object_set_to_display_plane() uninterruptible.] Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
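A hedged sketch of the core of option c), using the helper named above (the private field name is assumed):

    /* Hand out the seqno that the *next* request will get, so objects can
     * be tied to it before the request is actually emitted. */
    static uint32_t
    i915_gem_next_request_seqno(struct drm_device *dev)
    {
            drm_i915_private_t *dev_priv = dev->dev_private;

            return dev_priv->mm.next_gem_seqno;     /* field name assumed */
    }

    /* Safety net in i915_wait_request (sketch): if asked to wait on a seqno
     * that was handed out but never emitted, emit the request now. */
    if (seqno == i915_gem_next_request_seqno(dev)) {
            seqno = i915_add_request(dev, NULL);
            if (seqno == 0)
                    return -ENOMEM;
    }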
-
Jesse Barnes authored
Useful for capturing register read/write traces to send to the hw guys. Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
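A hedged illustration of the idea only (the actual patch adds proper trace events; the helper below is made up):

    /* Funnel a register read through a helper that logs it into the ftrace
     * buffer, so a trace of MMIO traffic can be captured for the hw guys. */
    static inline u32 i915_traced_read32(void __iomem *regs, u32 reg)
    {
            u32 val = readl(regs + reg);

            trace_printk("i915 reg: R 0x%05x -> 0x%08x\n", reg, val);
            return val;
    }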
-
Chris Wilson authored
Store the pixel-multiplier on the adjusted mode and avoid modifying the requested mode. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Instead of sleeping for an arbitrary length of time (the documentation fails to specify how long to wait), wait until the load detection has changed state (or for at most the 20ms as before). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Sitsofe Wheeler authored
With the extra intel_wait_for_vblank added in commit 9d0498a2 periodic stalls were being triggered (which were detected by i915_hangcheck_elapsed). Partially revert this change for now. Signed-off-by: Sitsofe Wheeler <sitsofe@yahoo.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Fix a minor confusion between intel_page_flip_finish(pipe) and intel_page_flip_finish_plane(plane) -- should have no effect as currently we map pipe 0 to plane 0 (and pipe 1 to plane 1). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
My Samsung N210 has a VBT with DEVICE_TYPE_INT_LFP with a zero addin-offset. With the check in place, the panel was declared absent. v2: Only trust BIOS writers that have graduated to writing OpRegions. (We are all doomed.) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Zhao Yakui <yakui.zhao@intel.com> Cc: Adam Jackson <ajax@redhat.com> Reviewed-by: Adam Jackson <ajax@redhat.com>
-
Chris Wilson authored
It is recommended that we use the Video BIOS tables that were copied into the OpRegion during POST when initialising the driver. This saves us from having to furtle around inside the ROM ourselves and possibly allows the vBIOS to adjust the tables prior to initialisation. On some systems, such as the Samsung N210, there is no accessible VBIOS and the only means of finding the VBT is through the OpRegion. v2: Rearrange the code so that ASLE is enabled along with ACPI. v3: Enable OpRegion parsing even without ACPI. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Garrett <mjg@redhat.com>
-
Chris Wilson authored
It's part of the generic Intel driver infrastructure, so rename it in preparation for using it for the VBT. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
If we don't flush the write then we can not be sure that the border colour will have taken effect by the time we try to read it back. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
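The pattern applied is the usual posting read; a hedged sketch (the register variable and value used for CRT load detection are assumed):

    /* Write the border colour, then read the register back so the write is
     * flushed out of any write buffers before load detection samples the
     * result. */
    I915_WRITE(bclrpat_reg, 0x500050);
    (void)I915_READ(bclrpat_reg);   /* posting read */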
-
Chris Wilson authored
wait_for() uses msleep() to yield the cpu whilst spinning waiting for a register to change. kdb asserts that mode changes are atomic and so prohibits msleep. The alternative would be to use mdelay or to simply probe the register more often instead of busy waiting. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Jesse's feedback from using the wait_for() macro was that the msleep argument was superfluous and made the macro more difficult to use and to read. As the actual amount of time to sleep is not critical (the crucial part is to sleep and let the processor schedule something else whilst we wait for the event), replace the argument with a hardcoded value. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
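A hedged sketch of the simplified macro (the real definition lives in intel_drv.h; details may differ):

    /* Poll COND until it holds or MS milliseconds elapse, sleeping a fixed
     * 1 ms between probes instead of taking the sleep length as an extra
     * argument.  Evaluates to 0 on success, -ETIMEDOUT on timeout. */
    #define wait_for(COND, MS) ({                                       \
            unsigned long timeout__ = jiffies + msecs_to_jiffies(MS);   \
            int ret__ = 0;                                              \
            while (!(COND)) {                                           \
                    if (time_after(jiffies, timeout__)) {               \
                            ret__ = -ETIMEDOUT;                         \
                            break;                                      \
                    }                                                   \
                    msleep(1);                                          \
            }                                                           \
            ret__;                                                      \
    })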
-
Daniel Vetter authored
The ums-gem code correctly cancels the retire work (at lastclose time), but kms does not do so. Fix this by canceling the work right after idling the gpu. While staring at the code I noticed that the work function is not static. Fix this, too. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
When the module unloads, all users should be gone, hence all bo references held by userspace, too. This should already result in an idle ringbuffer. Still, be paranoid and idle gem before starting the unload dance. Also kill the call to i915_gem_lastclose under an if (kms); it's a noop for kms. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
Kill any outstanding unpin_work when destroying the corresponding crtc. Then flush the workqueue before the gem teardown, in case any unpin work is still outstanding. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
idle_work wasn't cleaned up at all. It takes &dev->struct_mutex, but accesses the mode_config crtc list (without any other locking!). Hence this work needs to be canceled before calling drm_mode_config_cleanup. As evidenced by the kernel's object debugging code, the current code also cleans up the timer too early (it gets rearmed). So move it right before the final cleanup (it seems to work). Also unconditionally set up the idle_timer in intel_increase_pllclock. If we're unlucky the timer might fire right away, rendering the call in the modesetting teardown pointless. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
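A hedged sketch of the resulting teardown order (field names from the driver; exact placement assumed):

    /* The timer queues idle_work and the work can re-arm the timer, so kill
     * both before the mode_config objects they touch are freed. */
    del_timer_sync(&dev_priv->idle_timer);
    cancel_work_sync(&dev_priv->idle_work);

    drm_mode_config_cleanup(dev);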
-
Daniel Vetter authored
With kms, interrupts now get disabled in the modesetting cleanup. So free the error state afterwards, it currently gets allocated in the interrupt handler. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
hotplug_work is queued by the hotplug interrupt and only either emits a hotplug uevent or queues a crt poll slow-work, with no other locking. So it's safe to cancel this work _after_ irqs have been turned off, but before the modesetting objects are destroyed, because the hotplug function accesses them (without locking). The current code (for kms) only switches irqs off after modesetting teardown, hence move the irq teardown into the modeset cleanup, right before the crtc cleanup. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Daniel Vetter authored
This is the first patch to clean up module unload races due to outstanding timers/work. Preparatory step: thou shalt not destroy the workqueue while new work might still get enqueued. Now error_work gets queued by the hangcheck timer and only (atomically) reads the chip wedged status, so cancel it right after the hangcheck timer is killed. But the hangcheck is armed by interrupts, so move everything to after irqs are disabled. Also change a del_timer to a del_timer_sync in the ums gem code, since the hangcheck timer is self-rearming. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
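A hedged sketch of the ordering this establishes (field names from the driver; exact call sites assumed):

    /* Interrupts arm the hangcheck timer, the hangcheck timer queues
     * error_work, and error_work runs on dev_priv->wq -- so tear down in
     * that order. */
    drm_irq_uninstall(dev);                         /* nothing arms hangcheck any more */
    del_timer_sync(&dev_priv->hangcheck_timer);     /* self-rearming, hence _sync */
    cancel_work_sync(&dev_priv->error_work);        /* its last producer is gone */
    destroy_workqueue(dev_priv->wq);                /* now nothing can enqueue */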
-
Daniel Vetter authored
struct intel_dp contains both struct intel_encoder at the beginning (as its base class) and an i2c adapter. When initializing, the i2c adapter gets assigned intel_encoder->ddc_adaptor = &intel_dp->adapter, and the generic intel_encoder_destroy happily calls kfree on this pointer. Ouch. Fix this by using a dp-specific cleanup function. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
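A simplified sketch of the layout behind the bug (other members omitted):

    struct intel_dp {
            struct intel_encoder base;      /* start of the kmalloc'ed block */
            struct i2c_adapter adapter;     /* embedded: a kfree() on
                                             * &intel_dp->adapter frees an
                                             * address inside the block */
    };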
-
Chris Wilson authored
-
Chris Wilson authored
This reverts commit b9421ae8. This warning was so prevalent, even on apparently working machines, that it was just causing fear, anxiety and panic. The root cause still remains, so we will add some better debugging when we focus on fixing it. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=17021 Reported-by: Maciej Rutecki <maciej.rutecki@gmail.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
This reverts commit 0f3ee801. Enabling LVDS on pipe A was causing excessive wakeups on otherwise idle systems due to i915 interrupts. So restrict the LVDS to pipe B once more, whilst the issue is properly diagnosed. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=16307 Reported-and-tested-by: Enrico Bandiello <enban@postal.uv.es> Poked-by: Florian Mickler <florian@mickler.org> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Adam Jackson <ajax@redhat.com> Cc: stable@kernel.org
-
- 07 Sep, 2010 4 commits
-
-
Chris Wilson authored
This reverts commit ce171780. This commit has been independently bisected a few times as being the cause of a s2ram failure. Reported-and-tested-by: Kyle McMartin <kyle@mcmartin.ca> Reported-and-tested-by: Andy Isaacson <adi@hexapodia.org> Cc: Zou Nan hai <nanhai.zou@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Zhenyu Wang authored
New pci ids for GT2 and GT2+ on desktop and mobile sandybridge, and graphics device ids for server sandybridge. Also rename original ids string to reflect GT1 version. Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: stable@kernel.org Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Zhenyu Wang authored
MI_FLUSH is being deprecated, but still available on Sandybridge. Make sure it's enabled as userspace still uses MI_FLUSH. Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: stable@kernel.org Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Zhenyu Wang authored
Sandybridge GTT has new cache control bits in the PTE, which control whether graphics pages are cached in LLC or LLC/MLC, so we need to extend the mask function to respect the new bits, and to set the cache control to LLC only by default on Gen6. Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: stable@kernel.org Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
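A hedged sketch of the idea only (bit values and helper name assumed, not copied from the patch):

    /* A 4 KiB-aligned GTT entry leaves the low bits of the PTE free; on
     * Gen6 some of them select the cache level. */
    #define GEN6_PTE_VALID          (1 << 0)
    #define GEN6_PTE_LLC            (2 << 1)        /* cache in LLC only */

    static u32 gen6_pte_encode(dma_addr_t addr)
    {
            /* high address bits and the other cache variants are ignored
             * in this sketch */
            return (addr & PAGE_MASK) | GEN6_PTE_LLC | GEN6_PTE_VALID;
    }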
-