- 18 Aug, 2013 1 commit
Geert Uytterhoeven authored
If NO_DMA=y:

drivers/built-in.o: In function `__drm_pci_free':
drivers/gpu/drm/drm_pci.c:112: undefined reference to `dma_free_coherent'
drivers/built-in.o: In function `drm_pci_alloc':
drivers/gpu/drm/drm_pci.c:72: undefined reference to `dma_alloc_coherent'
drivers/built-in.o: In function `drm_gem_unmap_dma_buf':
drivers/gpu/drm/drm_prime.c:87: undefined reference to `dma_unmap_sg'
drivers/built-in.o: In function `drm_gem_map_dma_buf':
drivers/gpu/drm/drm_prime.c:78: undefined reference to `dma_map_sg'

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 08 Aug, 2013 1 commit
David Herrmann authored
We currently rely on gcc dead-code elimination so the drm_agp_* helpers are not called if drm_core_has_AGP() is false. That's ugly as hell so provide "static inline" dummies for the case that AGP is disabled.

Fixes a build-regression introduced by:

commit 28ec711c
Author: David Herrmann <dh.herrmann@gmail.com>
Date:   Sat Jul 27 16:37:00 2013 +0200

    drm/agp: move AGP cleanup paths to drm_agpsupport.c

v2: switch #ifdef -> #if (spotted by Stephen)

Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
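A minimal sketch of the pattern this describes, assuming the stubbed helpers are the drm_agp_clear()/drm_agp_destroy() pair introduced further down this log; the exact guard macro and stub list are assumptions:

    /* Hedged sketch: when AGP support is compiled out, the helpers collapse to
     * no-op static inlines so callers need neither #ifdef guards nor
     * dead-code elimination. */
    #if __OS_HAS_AGP

    void drm_agp_clear(struct drm_device *dev);
    void drm_agp_destroy(struct drm_agp_head *agp);

    #else /* !__OS_HAS_AGP */

    static inline void drm_agp_clear(struct drm_device *dev)
    {
    }

    static inline void drm_agp_destroy(struct drm_agp_head *agp)
    {
    }

    #endif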
-
- 07 Aug, 2013 10 commits
Dave Airlie authored
Merge tag 'drm-intel-next-2013-07-26-fixed' of git://people.freedesktop.org/~danvet/drm-intel into drm-next

Neat that QA (and Ben) keeps on humming along while I'm on vacation, so you already get the next feature pull request:
- proper eLLC support for HSW from Ben
- more interrupt refactoring
- add w/a tags where we implement them already (Damien)
- hangcheck fixes (Chris) + hangcheck stats (Mika)
- flesh out the new vm structs for ppgtt and ggtt (Ben)
- PSR for Haswell, still disabled by default (Rodrigo et al.)
- pc8+ refclock sequence code from Paulo
- more interrupt refactoring from Paulo, unifying ilk/snb with the ivb/hsw interrupt code
- full solution for the Haswell concurrent reg access issues (Chris)
- fix racy object accounting, used by some new leak tests
- fix sync polarity settings on ch7xxx dvo encoder
- random bits&pieces, little fixes and better debug output all over

[airlied: fix conflict with drm_mm cleanups]

* tag 'drm-intel-next-2013-07-26-fixed' of git://people.freedesktop.org/~danvet/drm-intel: (289 commits)
  drm/i915: Do not dereference NULL crtc or fb until after checking
  drm/i915: fix pnv display core clock readout out
  drm/i915: Replace open-coded offset_in_page()
  drm/i915: Retry DP aux_ch communications with a different clock after failure
  drm/i915: Add messages useful for HPD storm detection debugging (v2)
  drm/i915: dvo_ch7xxx: fix vsync polarity setting
  drm/i915: fix the racy object accounting
  drm/i915: Convert the register access tracepoint to be conditional
  drm/i915: Squash gen lookup through multiple indirections inside GT access
  drm/i915: Use the common register access functions for NOTRACE variants
  drm/i915: Use a private interface for register access within GT
  drm/i915: Colocate all GT access routines in the same file
  drm/i915: fix reference counting in i915_gem_create
  drm/i915: Use Graphics Base of Stolen Memory on all gen3+
  drm/i915: disable stolen mem for OVERLAY_NEEDS_PHYSICAL
  drm/i915: add functions to disable and restore LCPLL
  drm/i915: disable CLKOUT_DP when it's not needed
  drm/i915: extend lpt_enable_clkout_dp
  drm/i915: fix up error cleanup in i915_gem_object_bind_to_gtt
  drm/i915: Add some debug breadcrumbs to connector detection
  ...
-
David Herrmann authored
This helper is used only once and just wraps a call to drm_vma_offset_add(). Remove this unneeded indirection to save 10 lines of code. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@gmail.com>
-
David Herrmann authored
We used to pre-allocate drm_mm nodes and save them in a linked list for later usage so we always have spare ones in atomic contexts. However, this is really racy if multiple threads are in an atomic context at the same time and we don't have enough spare nodes. Moreover, all remaining users run in user-context and just lock drm_mm with a spinlock. So we can easily preallocate the node, take the spinlock and insert the node. This may have worked well with BKL in place, however, with today's infrastructure it really doesn't make any sense. Besides, most users can easily embed drm_mm_node into their objects so no allocation is needed at all. Thus, remove the old pre-alloc API and all the helpers that it provides. Drivers have already been converted and we should not use the old API for new code, anymore. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
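A rough illustration of the replacement pattern described above; "my_obj", "my_obj_bind" and the lock are hypothetical driver names, and the drm_mm_insert_node() signature assumes the flag-taking variant added elsewhere in this log:

    /* Hedged sketch: the caller owns the node (embedded in its own object or
     * allocated up front with GFP_KERNEL) and inserts it under its own lock;
     * no pre-get node cache is involved. */
    struct my_obj {
            struct drm_mm_node node;        /* embedded: no separate allocation */
            /* driver-private fields ... */
    };

    static int my_obj_bind(struct drm_mm *mm, spinlock_t *lock,
                           struct my_obj *obj, unsigned long size,
                           unsigned alignment)
    {
            int ret;

            spin_lock(lock);
            ret = drm_mm_insert_node(mm, &obj->node, size, alignment,
                                     DRM_MM_SEARCH_DEFAULT);
            spin_unlock(lock);

            return ret;
    }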
-
David Herrmann authored
i915 is the last user of the weird search+get_block drm_mm API. Convert it to an explicit kmalloc()+insert_node(). This drops the last user of the node-cache in drm_mm. We can remove it now in a follow-up patch.

v2:
 - simplify error path in i915_setup_compression()
v3:
 - simplify error path even more

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
David Herrmann authored
Instead of calling drm_mm_pre_get() in a row, we now preallocate the node and then use the atomic insertion functions. This has the exact same semantics and there is no reason to use the racy pre-allocations. Note that ttm_bo_man_get_node() does not run in atomic context. Nouveau already uses GFP_KERNEL alloc in nouveau/nouveau_ttm.c in nouveau_gart_manager_new(). So we can do the same in ttm_bo_man_get_node(). Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
David Herrmann authored
Introduce two new helpers, drm_agp_clear() and drm_agp_destroy(), which clear all AGP mappings and destroy the AGP head. This allows us to reduce the AGP code in core DRM and move it all to drm_agpsupport.c. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Andy Shevchenko authored
There is no need to pass constants via stack. The width may be explicitly specified in the format. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: David Herrmann <dh.herrmann@gmail.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Rob Clark authored
Because there is no reason for it not to be const.

v1: original
v2: fix compile break in vmwgfx, and a couple of related cleanups suggested by Ville Syrjälä

Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
David Herrmann authored
Add a "best_match" flag similar to the drm_mm_search_*() helpers so we can convert TTM to use them in follow up patches. We can also inline the non-generic helpers and move them into the header to allow compile-time optimizations. To make calls to drm_mm_{search,insert}_node() more readable, this converts the boolean argument to a flagset. There are pending patches that add additional flags for top-down allocators and more. v2: - use flag parameter instead of boolean "best_match" - convert *_search_free() helpers to also use flags argument Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Chris Wilson authored
We can apply the same optimisation tricks as kref_put_mutex() in our local equivalent function. However, we have a different locking semantic (we unlock ourselves, in kref_put_mutex() the callee unlocks) so that we can use the same callbacks for both locked and unlocked kref_put()s and so can not simply convert to using kref_put_mutex() directly. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
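A hedged sketch of the borrowed trick, reconstructed from the message above rather than copied from the tree: skip the mutex entirely unless this put can drop the final reference, while keeping DRM's "we unlock ourselves" convention:

    /* Sketch of the kref_put_mutex()-style fast path with DRM's locking
     * semantics: struct_mutex is only taken when the count might hit zero,
     * and this helper (not the free callback) drops the lock again. */
    static void sketch_gem_object_unreference_unlocked(struct drm_gem_object *obj)
    {
            struct drm_device *dev = obj->dev;

            if (!atomic_add_unless(&obj->refcount.refcount, -1, 1)) {
                    mutex_lock(&dev->struct_mutex);
                    if (likely(atomic_dec_and_test(&obj->refcount.refcount)))
                            drm_gem_object_free(&obj->refcount);
                    mutex_unlock(&dev->struct_mutex);   /* we unlock ourselves */
            }
    }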
-
- 06 Aug, 2013 1 commit
Daniel Vetter authored
All the gem based kms drivers really want the same function to destroy a dumb framebuffer backing storage object. So give it to them and roll it out in all drivers. This still leaves the option open for kms drivers which don't use GEM for backing storage, but it does decently simplify matters for gem drivers. Acked-by: Inki Dae <inki.dae@samsung.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org> Cc: Ben Skeggs <skeggsb@gmail.com> Reviewed-by: Rob Clark <robdclark@gmail.com> Cc: Alex Deucher <alexdeucher@gmail.com> Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
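For GEM-backed drivers the shared callback presumably boils down to deleting the handle on the backing object; a minimal sketch (the helper name and the dumb-destroy callback signature are assumptions):

    /* Hedged sketch of a shared dumb-buffer destroy callback for GEM-based
     * drivers: the backing storage is a plain GEM object, so destroying the
     * dumb buffer is just dropping its handle. */
    int drm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev,
                             uint32_t handle)
    {
            return drm_gem_handle_delete(file, handle);
    }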
-
- 04 Aug, 2013 1 commit
Chris Wilson authored
Fixes regression from

commit 4906557e
Author: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Date:   Thu Jul 11 18:45:05 2013 -0300

    drm/i915: Hook PSR functionality

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67526
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 Aug, 2013 1 commit
David Herrmann authored
We need BUG_ON(), spinlock_t and standard kernel data-types so include the right headers.

Subject: [drm-intel:drm-intel-nightly 154/166] include/drm/drm_mm.h:67:2: error: unknown type name 'spinlock_t'
Message-ID: <51f14693.g5HGdcuw2v3m8FOd%fengguang.wu@intel.com>

In case it didn't link to it correctly. Somehow this bug doesn't occur here on my machine, hmm. But I think fixing drm_mm.h is better than changing the include-order in drm_vma_manager.h, so this is what I did.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 26 Jul, 2013 5 commits
Daniel Vetter authored
We need the correct clock to accurately assess whether we need to enable the double wide pipe mode or not. Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Stéphane Marchesin <marcheu@chromium.org> Cc: Stuart Abercrombie <sabercrombie@google.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
The w/a db makes the recommendation to both use a non-default value for the initial clock and then to retry with an alternative clock for Haswell with the Lakeport PCH. "On LPT:H, use a divider value of 63 decimal (03Fh). If there is a failure, retry at least three times with 63, then retry at least three times with 72 decimal (048h)." Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
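A rough sketch of the quoted retry strategy; send_aux_msg() is a hypothetical stand-in for the real aux channel transfer routine, and the loop bounds simply mirror the "at least three times" wording above:

    /* Hedged sketch: try divider 63 a few times, then fall back to 72, as the
     * workaround text recommends for LPT:H. */
    static int sketch_aux_xfer_with_retry(struct intel_dp *intel_dp,
                                          const u8 *msg, int msg_len)
    {
            static const int dividers[] = { 63, 72 };   /* 0x3f, then 0x48 */
            int i, try, ret = -EBUSY;

            for (i = 0; i < ARRAY_SIZE(dividers) && ret < 0; i++)
                    for (try = 0; try < 3 && ret < 0; try++)
                            ret = send_aux_msg(intel_dp, msg, msg_len,
                                               dividers[i]);

            return ret;
    }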
-
Egbert Eich authored
For HPD storm detection we now mask out individual interrupt source bits. We have already seen a case where HPD interrupt enable bits were assigned to the wrong pins. To track these conditions more easily add some debugging messages. v2: Spelling fixes as suggested by Jani Nikula <jani.nikula@linux.intel.com> Signed-off-by: Egbert Eich <eich@suse.de> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
David Herrmann authored
The VMA manager is page-size based so drm_vma_node_size() returns the size in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply PAGE_SHIFT so we no longer get EINVAL during mmaps due to too small buffers. This bug was introduced in commit: 0de23977 "drm/gem: convert to new unified vma manager" Fixes i915 gtt mmap failure reported by Sedat Dilek in: Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ] Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reported-by: Sedat Dilek <sedat.dilek@gmail.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by: Dave Airlie <airlied@gmail.com>
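The fix amounts to converting pages to bytes at the call site; a sketch with the surrounding mmap path simplified away:

    /* Hedged sketch: drm_vma_node_size() returns a size in pages, while
     * drm_gem_mmap_obj() expects bytes, so shift by PAGE_SHIFT. */
    static int sketch_gem_mmap(struct drm_gem_object *obj,
                               struct vm_area_struct *vma)
    {
            return drm_gem_mmap_obj(obj,
                                    drm_vma_node_size(&obj->vma_node) << PAGE_SHIFT,
                                    vma);
    }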
-
- 25 Jul, 2013 12 commits
Imre Deak authored
This fixes a typo which set the wrong vsync and possibly also hsync polarity for any modes with positive vsync polarity. Signed-off-by: Imre Deak <imre.deak@intel.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
Just use a spinlock to protect them.

v2: Rebase onto the new object create refcount fix patch.
v3: Don't kill dev_priv->mm.object_memory as requested by Chris and hence just use a spinlock instead of atomic_t.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67287
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
The TRACE_EVENT_CONDITION is supposed to generate more efficient code than if (cond) trace(), which is what we are currently using inside the register access functions. v2: Rebase onto uncore Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
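A hedged sketch of the TRACE_EVENT_CONDITION pattern; the field layout and format string are approximations, not the exact i915 tracepoint definition. The point is that the condition lives inside the (static-key guarded) tracepoint instead of being re-tested at every call site:

    TRACE_EVENT_CONDITION(i915_reg_rw,
            TP_PROTO(bool write, u32 reg, u64 val, int len, bool trace),
            TP_ARGS(write, reg, val, len, trace),

            /* the event only fires when "trace" is true */
            TP_CONDITION(trace),

            TP_STRUCT__entry(
                    __field(u32, reg)
                    __field(u64, val)
                    __field(bool, write)
                    __field(int, len)
            ),

            TP_fast_assign(
                    __entry->reg = reg;
                    __entry->val = val;
                    __entry->write = write;
                    __entry->len = len;
            ),

            TP_printk("%s reg=0x%x, len=%d, val=0x%llx",
                      __entry->write ? "write" : "read",
                      __entry->reg, __entry->len,
                      (unsigned long long)__entry->val)
    );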
-
Chris Wilson authored
The INTEL_INFO() macro extracts the dev_private pointer from the device, so passing in the dev_private->dev is a long winded circumlocution. v2: rebase onto uncore Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Detangle the confusion that NOTRACE variants of the register read/write routines were directly using the raw register access. We need for those routines to reuse the common code for serializing register access and ensuring the correct register power states. This is only possible now that the only routines that required raw access use their own API. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
The GT functions for enabling register access also need to occasionally write to and read from registers. To avoid the potential recursion as we modify the public interface to be stricter, introduce a private register access API for the GT functions.

v2: Rebase
v3: Rebase onto uncore
v4: Use raw interfaces consistently so that we only use the low-level readN functions from a single location.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Currently, the register access code is split between i915_drv.c and intel_pm.c. It only bears a superficial resemblance to the rest of the power management code, so move it all into its own file. This is to ease further patches to enforce serialised register access.

v2: Scan for random abuse of I915_WRITE_NOTRACE
v3: Take the opportunity to rename the GT functions as uncore. Uncore is the term used by the hardware design (and bspec) for all functions outside of the GPU (and CPU) cores in what is also known as the System Agent.
v4: Rebase onto SNB rc6 fixes

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Wrestle patch into applying and inline intel_uncore_early_sanitize (plus move the old comment to the new function). Also keep the _sanitize postfix for intel_uncore_sanitize.]
[danvet: Squash in fixup spotted by Chris on irc: We need to call intel_pm_init before intel_uncore_sanitize since the latter will call cancel_work on the delayed rps setup work the former initializes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
This backmerges Linus' merge commit of the latest drm-fixes pull:

commit 549f3a12
Merge: 42577ca8 058ca4a2
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Tue Jul 23 15:47:08 2013 -0700

    Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux

We've accrued a few too many conflicts, but the real reason is that I want to merge the 100% solution for Haswell concurrent registers writes into drm-intel-next. But that depends upon the 90% bandaid merged into -fixes:

commit a7cd1b8f
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Jul 19 20:36:51 2013 +0100

    drm/i915: Serialize almost all register access

Also, we can roll up on accrued conflicts. Usually I'd backmerge a tagged -rc, but I want to get this done before heading off to vacations next week ;-)

Conflicts:
    drivers/gpu/drm/i915/i915_dma.c
    drivers/gpu/drm/i915/i915_gem.c

v2: For added hilarity we have an init sequence conflict around the gt_lock, so need to move that one, too. Spotted by Jani Nikula.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
David Herrmann authored
Instead of unmapping the nodes in TTM and GEM users manually, we provide a generic wrapper which does the correct thing for all vma-nodes.

v2: remove bdev->dev_mapping test in ttm_bo_unmap_virtual_unlocked() as ttm_mem_io_free_vm() does nothing in that case (io_reserved_vm is 0).
v4: Fix docbook comments
v5: use drm_vma_node_size()

Cc: Dave Airlie <airlied@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
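The wrapper presumably looks something like the following, assembled from the vma-node helpers named elsewhere in this series (treat it as a sketch, not the exact header code):

    /* Hedged sketch of the generic unmap wrapper: drop any user-space CPU
     * mappings of the object identified by its vma-node. */
    static inline void drm_vma_node_unmap(struct drm_vma_offset_node *node,
                                          struct address_space *file_mapping)
    {
            if (file_mapping && drm_vma_node_has_offset(node))
                    unmap_mapping_range(file_mapping,
                                        drm_vma_node_offset_addr(node),
                                        drm_vma_node_size(node) << PAGE_SHIFT,
                                        1);
    }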
-
David Herrmann authored
Use the new vma-manager infrastructure. This doesn't change any implementation details as the vma-offset-manager is nearly copied 1-to-1 from TTM.

The vm_lock is moved into the offset manager so we can drop it from TTM. During lookup, we use the vma locking helpers to take a reference to the found object. In all other scenarios, locking stays the same as before. We always guarantee that drm_vma_offset_remove() is called only during destruction. Hence, helpers like drm_vma_node_offset_addr() are always safe as long as the node has a valid offset.

This also drops the addr_space_offset member as it is a copy of vm_start in vma_node objects. Use the accessor functions instead.

v4:
 - remove vm_lock
 - use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)

Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Martin Peres <martin.peres@labri.fr>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
-
David Herrmann authored
Use the new vma manager instead of the old hashtable. Also convert all drivers to use the new convenience helpers. This drops all the (map_list.hash.key << PAGE_SHIFT) non-sense.

Locking and access-management is exactly the same as before with an additional lock inside of the vma-manager, which strictly wouldn't be needed for gem.

v2:
 - rebase on drm-next
 - init nodes via drm_vma_node_reset() in drm_gem.c
v3:
 - fix tegra
v4:
 - remove duplicate if (drm_vma_node_has_offset()) checks
 - inline now trivial drm_vma_node_offset_addr() calls
v5:
 - skip node-reset on gem-init due to kzalloc()
 - do not allow mapping gem-objects with offsets (backwards compat)
 - remove unnecessary casts

Cc: Inki Dae <inki.dae@samsung.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
-
David Herrmann authored
If we want to map GPU memory into user-space, we need to linearize the addresses to not confuse mm-core. Currently, GEM and TTM both implement their own offset-managers to assign a pgoff to each object for user-space CPU access. GEM uses a hash-table, TTM uses an rbtree.

This patch provides a unified implementation that can be used to replace both. TTM allows partial mmaps with a given offset, so we cannot use hashtables as the start address may not be known at mmap time. Hence, we use the rbtree-implementation of TTM.

We could easily update drm_mm to use an rbtree instead of a linked list for its object list and thus drop the rbtree from the vma-manager. However, this would slow down drm_mm object allocation for all other use-cases (rbtree insertion) and add another 4-8 bytes to each mm node. Hence, use the separate tree but allow for later migration.

This is a rewrite of the 2012 proposal by David Airlie <airlied@linux.ie>

v2:
 - fix Docbook integration
 - drop drm_mm_node_linked() and use drm_mm_node_allocated()
 - remove unjustified likely/unlikely usage (but keep for rbtree paths)
 - remove BUG_ON() as drm_mm already does that
 - clarify page-based vs. byte-based addresses
 - use drm_vma_node_reset() for initialization, too
v4:
 - allow external locking via drm_vma_offset_un/lock_lookup()
 - add locked lookup helper drm_vma_offset_lookup_locked()
v5:
 - fix drm_vma_offset_lookup() to correctly validate range-mismatches (fix (offset > start + pages))
 - fix drm_vma_offset_exact_lookup() to actually do what it says
 - remove redundant vm_pages member (add drm_vma_node_size() helper)
 - remove unneeded goto
 - fix documentation

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
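A hedged usage sketch of the offset manager built from the helpers named in this series; the exact signatures and the error handling are approximations:

    /* Sketch: allocate a page-based offset for an object, then hand user
     * space the byte offset it should pass to mmap(). */
    static int sketch_setup_mmap_offset(struct drm_vma_offset_manager *mgr,
                                        struct drm_vma_offset_node *node,
                                        unsigned long obj_size, u64 *offset)
    {
            int ret;

            /* the manager tracks sizes and offsets in pages, not bytes */
            ret = drm_vma_offset_add(mgr, node, obj_size >> PAGE_SHIFT);
            if (ret)
                    return ret;

            /* byte offset user space uses as the mmap() offset */
            *offset = drm_vma_node_offset_addr(node);
            return 0;
    }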
-
- 24 Jul, 2013 8 commits
Daniel Vetter authored
This function is called without the dev->struct_mutex held, hence we need to use the _unlocked unreference variants. As soon as the object is registered userspace can sneak in here with a gem_close ioctl call, so the object can (and with my new evil tests actually does) get the final unreference in this place. The lack of locking then results in hilarity and some good leakage.

To fix this we simply need to revert the earlier patch from Chris Wilson <chris@chris-wilson.co.uk>.

v2: We need to make the trace call _before_ we drop our ref - the object might very well be gone by then already.
v3: Just revert the original patch as suggested by Chris Wilson.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Remove the added white line again to tighten the return block, requested by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
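A hedged sketch of what the fixed tail of the create path looks like, reconstructed from the message above (function name and surrounding structure are illustrative, not copied from the driver):

    /* Sketch: trace while we still hold our reference, then drop it with the
     * _unlocked variant since dev->struct_mutex is not held in this path.
     * The handle (if created) keeps its own reference to the object. */
    static int sketch_create_tail(struct drm_i915_gem_object *obj,
                                  struct drm_file *file, u32 *handle_p)
    {
            u32 handle;
            int ret;

            ret = drm_gem_handle_create(file, &obj->base, &handle);
            trace_i915_gem_object_create(obj);
            drm_gem_object_unreference_unlocked(&obj->base);
            if (ret)
                    return ret;

            *handle_p = handle;
            return 0;
    }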
-
Chris Wilson authored
So I made the mistake of missing that the desktop and mobile chipsets have different layouts in their PCI configurations, and we were incorrectly setting the wrong physical address for stolen memory on mobile chipsets. Since all gen3+ are actually consistent in the location of the GBSM register in the PCI configuration space on device 2 (the GPU), use it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> [danvet: Drop cc: stable and fudge conflicts.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
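A hedged sketch of reading the stolen-memory base from the GPU's own PCI config space; the 0x5c config offset and the 1MiB alignment mask are assumptions for illustration, not taken from the commit:

    /* Sketch: read the Graphics Base of Stolen Memory from device 2 (the GPU)
     * instead of the chipset-specific host-bridge layouts. */
    static unsigned long sketch_stolen_base(struct drm_device *dev)
    {
            u32 base = 0;

            pci_read_config_dword(dev->pdev, 0x5c, &base);
            return base & ~((1UL << 20) - 1);   /* stolen base is 1MiB aligned */
    }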
-
Daniel Vetter authored
Our phys_object code can't deal with stolen memory and so blows up. Fixing this is quite a bit of work and not worth it much for a single page object, so just opt-out. This is necessary prep work to enable stolen on gen2/3 platforms where the overlay register file isn't stored in the gtt. Cc: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
For now there are no callers, but these functions are going to be needed for the code that allows Package C8+. Other future features may also require this code.

Also merge the commit which introduced assert_can_disable_lcpll and had the following commit message:

    Most of the hardware needs to be disabled before LCPLL is disabled, so let's add a function to assert some of the items listed in the "Display Sequences for LCPLL disabling" documentation. The idea is that hsw_disable_lcpll should not disable the hardware; the callers need to take care of calling hsw_disable_lcpll only once everything is already disabled.

v2:
 - Rebase.
 - Fix D_COMP wait timeout.
v3:
 - Use wait_for_atomic_use (Ben)
 - Remove/add a useless/needed POSTING_READ (Ben)
 - Early return in case LCPLL is already restored (Ben)
 - Add ndelay(100) (Ben)
v4:
 - Merge the commit that added assert_can_disable_lcpll (Ben)
 - Add interrupt assertions (Ben)

Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Fix compile fail since there's no HAS_LP_PCH yet.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
We currently don't support HDMI clock bending nor use SSC for DP or HDMI on Haswell, so the only case where we need CLKOUT_DP is for VGA.

v2:
 - Replace the IS_ULT check for LPT-LP
 - Simplify GEN0/DBUFF0 check due to change on the previous patch
 - Also check for SBI_SSCCTL_DISABLE (Ben).

Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
Now it implements 3 different sequences from BSpec and also has support for ULT.

v2:
 - Change IS_ULT checks for LPT-LP checks
 - Add check for LPT-LP + with_fdi (Ben)
 - Merge DBUFF0/GEN0 bit definitions since they're the same register (Ben)
 - DBUFF0 (1<<0) is Disable, not Enable

Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
This has been broken in

commit 2f633156
Author: Ben Widawsky <ben@bwidawsk.net>
Date:   Wed Jul 17 12:19:03 2013 -0700

    drm/i915: Create VMAs

which resulted in an OOPS the first time around we've hit -ENOSPC.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67156
Cc: Imre Deak <imre.deak@intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Tested-by: meng <mengmeng.meng@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Trying to decipher detection failures is a little tricky at the moment as the only indicator of progress is when output_poll_execute() tells us the result after the connector->detect() has run. This patch adds a telltale to the start of each detect function so that we can track progress and associate activity more clearly with each connector. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
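The telltale is presumably just a debug print naming the connector at the top of each ->detect() hook; a sketch (the helper used to fetch the connector name is an assumption for this era of the API):

    /* Hedged sketch of the breadcrumb at the entry of a ->detect()
     * implementation, so log output can be attributed to one connector. */
    static enum drm_connector_status
    sketch_connector_detect(struct drm_connector *connector, bool force)
    {
            DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
                          connector->base.id,
                          drm_get_connector_name(connector));

            /* actual probing would follow here */
            return connector_status_unknown;
    }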
-