- 26 Apr, 2019 25 commits
-
-
Chris Wilson authored
Having transitioned GEM over to using intel_context as its primary means of tracking the GEM context and engine combined and using i915_request_create(), we can move the older i915_request_alloc() helper function into selftests/ where the remaining users are confined. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-9-chris@chris-wilson.co.uk
-
Chris Wilson authored
We no longer need to track the active intel_contexts within each engine, allowing us to drop a tricky mutex_lock from inside unpin (which may occur inside fs_reclaim). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-8-chris@chris-wilson.co.uk
-
Chris Wilson authored
We switched to a tree of per-engine HW context to accommodate the introduction of virtual engines. However, we plan to also support multiple instances of the same engine within the GEM context, defeating our use of the engine as a key to looking up the HW context. Just allocate a logical per-engine instance and always use an index into the ctx->engines[]. Later on, this ctx->engines[] may be replaced by a user specified map. v2: Add for_each_gem_engine() helper to iterator within the engines lock v3: intel_context_create_request() helper v4: s/unsigned long/unsigned int/ 4 billion engines is quite enough. v5: Push iterator locking to caller Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-7-chris@chris-wilson.co.uk
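A minimal sketch of the idea with hypothetical names (the real i915 layout and iterator differ in detail): the per-context map becomes a flat, index-addressed array of logical engine instances, and for_each_gem_engine() walks it by index under the caller's lock.

    struct i915_gem_engines {
            unsigned int num_engines;
            struct intel_context *engines[];        /* one logical instance per slot */
    };

    /*
     * Walk the map by index; the caller holds whatever lock guards
     * ctx->engines (locking was pushed to the caller in v5).  A NULL slot
     * means no instance exists for that index and the loop body skips it.
     */
    #define for_each_gem_engine(ce, e, idx) \
            for ((idx) = 0; \
                 (idx) < (e)->num_engines && ((ce) = (e)->engines[(idx)], 1); \
                 (idx)++)

Looking up a HW context then becomes ctx->engines->engines[idx] rather than a per-engine tree walk, and the idx can later come from a user-specified map.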
-
Chris Wilson authored
In the next patch, we require the engine vfuncs setup prior to initialising the pinned kernel contexts, so split the vfunc setup from the engine initialisation and call it earlier. v2: s/setup_xcs/setup_common/ for intel_ring_submission_setup() Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-6-chris@chris-wilson.co.uk
-
Chris Wilson authored
Move the intel_context_instance() to the caller so that we can decouple ourselves from one context instance per engine. v2: Rename pin_lock() to lock_pinned(), hopefully that is clearer. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
Combine the (i915_gem_context, intel_engine) into a single parameter, the intel_context, for convenience and later simplification. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
Simplify the setup slightly for the sseu selftests to use the actual kernel_context. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
We want to pass in an intel_context into intel_context_pin() and that requires us to first be able to look up the intel_context! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Our eventual goal is to rid request construction of struct_mutex, with the short term step of lifting the struct_mutex requirements into the higher levels (i.e. the caller must ensure that the context is already pinned into the GTT). In this patch, we pin GVT's shadow contexts upon allocation and so keep them pinned into the GGTT for as long as the virtual machine is alive, and so we can use the simpler request construction path safe in the knowledge that the hard work is already done. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426163336.15906-1-chris@chris-wilson.co.uk
-
Jani Nikula authored
Get gvt-fixes back to dinq. Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Ville Syrjälä authored
I like my functions simple, so split up the low level bits from cherryview_load_luts() into separate functions. Also rename the whole thing to chv_load_luts() to match the new world order. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190408121815.30142-1-ville.syrjala@linux.intel.com Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
-
Ville Syrjälä authored
When I refactored the code into its own function I accidentally misplaced the <<16 shifts for some of the registers causing us to lose the blue channel entirely. We should really find a way to test this... Cc: Uma Shankar <uma.shankar@intel.com> Fixes: d2c19b06 ("drm/i915: Clean up ilk/icl pipe/output CSC programming") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190425192419.24931-1-ville.syrjala@linux.intel.com Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
-
Chris Wilson authored
Broadwater and the rest of gen4 do support saving and reloading context-specific registers between contexts, providing isolation of the basic GPU state (as programmable by userspace). This allows userspace to assume that the GPU retains their state from one batch to the next, minimising the amount of state it needs to reload and manually save across batches. v2: CONSTANT_BUFFER woes Running through piglit turned up an interesting issue, a GPU hang inside the context load. The context image includes the CONSTANT_BUFFER command that loads an address into an on-gpu buffer, and the context load was executing that immediately. However, since it was reading from the GTT there is no guarantee that the GTT retains the same configuration as when the context was saved, resulting in stray reads and a GPU hang. Having tried issuing a CONSTANT_BUFFER (to disable the command) from the ring before saving the context to no avail, we resort to patching out the instruction inside the context image before loading. This does mean that gen4 must always reissue CONSTANT_BUFFER commands on each batch, but due to the use of a shared GTT that was and will remain a requirement. v3: ECOSKPD to the rescue Ville found the magic bit in the ECOSKPD to disable saving and restoring the CONSTANT_BUFFER from the context image, thereby completely avoiding the GPU hangs from chasing invalid pointers. This appears to be the default behaviour for gen5, and so we just need to tweak gen4 to match. v4: Fix spelling of ECOSKPD and discover it already exists Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> Link: https://patchwork.freedesktop.org/patch/msgid/20190419172720.5462-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Ironlake does support saving and reloading context-specific registers between contexts, providing isolation of the basic GPU state (as programmable by userspace). This allows userspace to assume that the GPU retains their state from one batch to the next, minimising the amount of state it needs to reload, or manually save and restore. v2: Fix off-by-one in reading CXT_SIZE, and add a comment that the CXT_SIZE and context-layout do not match in bspec, but the difference is irrelevant as we overallocate the full page anyway (Ville). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419111749.3910-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Despite what I think the prm recommends, commit f2253bd9 ("drm/i915/ringbuffer: EMIT_INVALIDATE after switch context") turned out to be a huge mistake when enabling Ironlake contexts as the GPU would hang on either a MI_FLUSH or PIPE_CONTROL immediately following the MI_SET_CONTEXT of an active mesa context (more vanilla contexts, e.g. simple rendercopies with igt, do not suffer). Ville found the following clue, "[DevCTG+]: For the invalidate operation of the pipe control, the following pointers are affected. The invalidate operation affects the restore of these packets. If the pipe control invalidate operation is completed before the context save, the indirect pointers will not be restored from memory. 1. Pipeline State Pointer 2. Media State Pointer 3. Constant Buffer Packet" which suggests that by emitting the INVALIDATE prior to the MI_SET_CONTEXT, we prevent the context-restore from chasing the dangling pointers within the image, and explains why this likely prevents the GPU hang. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190419111749.3910-1-chris@chris-wilson.co.uk
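A sketch of the fixed ordering only; the helper names below are hypothetical stand-ins for the ringbuffer submission code, not the actual i915 functions.

    static int switch_context(struct i915_request *rq)
    {
            int err;

            /*
             * Invalidate before MI_SET_CONTEXT: per the quote above, if the
             * invalidate completes before the context save, the indirect
             * pointers (pipeline state, media state, constant buffer) are
             * not restored from memory, so the restore cannot chase
             * dangling pointers within the context image.
             */
            err = emit_invalidate_flush(rq);        /* hypothetical helper */
            if (err)
                    return err;

            return emit_mi_set_context(rq);         /* hypothetical helper */
    }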
-
Chris Wilson authored
sandybridge_pcode is another sideband, so move it to its new home. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-8-chris@chris-wilson.co.uk
-
Chris Wilson authored
These routines are identical except in the nature of the value parameter. For writes it is a pure in-param, but for a read, we need an out-param. Since they differ in a single line, merge the two routines into one. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-7-chris@chris-wilson.co.uk
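The merge might take a shape like this sketch: one accessor where the value is passed by pointer and a flag selects the direction, so read and write share every line but one. The wait helper and the surrounding mailbox protocol are simplified assumptions, not the exact i915 code.

    static int sandybridge_pcode_rw(struct drm_i915_private *i915,
                                    u32 mbox, u32 *val, bool is_read)
    {
            if (pcode_wait_for_idle(i915))          /* hypothetical helper */
                    return -EAGAIN;

            intel_uncore_write(&i915->uncore, GEN6_PCODE_DATA, *val);
            intel_uncore_write(&i915->uncore, GEN6_PCODE_MAILBOX,
                               GEN6_PCODE_READY | mbox);

            if (pcode_wait_for_idle(i915))
                    return -ETIMEDOUT;

            if (is_read)    /* the single line where the two routines differed */
                    *val = intel_uncore_read(&i915->uncore, GEN6_PCODE_DATA);

            return 0;
    }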
-
Chris Wilson authored
Since intel_sideband_read and intel_sideband_write differ by only a couple of lines (depending on whether we feed the value in or out), merge the two into a single common accessor. v2: Restore vlv_flisdsi_read() lost during rebasing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-6-chris@chris-wilson.co.uk
-
Chris Wilson authored
Split the sideband declarations out of the ginormous i915_drv.h Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
We now have two locks for sideband access. The general one covering sideband access across all generations, sb_lock, and a specific one covering sideband access via the punit on vlv/chv. After lifting the sb_lock around the punit into the callers, the pcu_lock is now redundant and can be separated from its other use to regulate RPS (essentially giving RPS a lock all of its own). v2: Extract a couple of minor bug fixes. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
Lift the sideband acquisition for vlv_punit_read and vlv_punit_write into their callers, so that we can lock the sideband once for a sequence of operations, rather than perform the heavyweight acquisition on each request. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-3-chris@chris-wilson.co.uk
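A sketch of the resulting caller-side pattern (the register name is only an example): the sideband is acquired once around a whole read-modify-write sequence instead of once per access.

    vlv_punit_get(i915);

    val = vlv_punit_read(i915, PUNIT_REG_GPU_FREQ_REQ);
    val &= ~mask;
    val |= request;
    vlv_punit_write(i915, PUNIT_REG_GPU_FREQ_REQ, val);

    vlv_punit_put(i915);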
-
Chris Wilson authored
As we now employ a very heavy pm_qos around the punit access, we want to minimise the number of synchronous requests by performing one for the whole punit sequence rather than around individual accesses. The sideband lock is used for this, so push the pm_qos into the sideband lock acquisition and release, moving it from the low-level punit rw routine to the callers. In the first step, we move the punit magic into the common sideband lock so that we can acquire a bunch of ports simultaneously, and if need be extend the workaround protection later. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
While we talk to the punit over its sideband, we need to prevent the cpu from sleeping in order to prevent a potential machine hang. Note that by itself, it appears that pm_qos_update_request (via intel_idle) doesn't provide a sufficient barrier to ensure that all cores are indeed awake (out of Cstate) and that the package is awake. To do so, we need to supplement the pm_qos with a manual ping on_each_cpu. v2: Restrict the heavy-weight wakeup to just the ISOF_PORT_PUNIT, there is insufficient evidence to implicate a wider problem atm. Similarly, restrict the w/a to Valleyview, as Cherryview doesn't have an angry cadre of users. The working theory, courtesy of Ville and Hans, is the issue lies within the power delivery and so is likely to be unit and board specific and occurs when both the unit/fw require extra power at the same time as the cpu package is changing its own power state. References: https://bugzilla.kernel.org/show_bug.cgi?id=109051 References: https://bugs.freedesktop.org/show_bug.cgi?id=102657 References: https://bugzilla.kernel.org/show_bug.cgi?id=195255 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-1-chris@chris-wilson.co.uk
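A sketch of the workaround's shape using the generic kernel pm_qos and smp APIs; the function names, request setup and exact latency value are assumptions, not the driver's actual code.

    #include <linux/pm_qos.h>
    #include <linux/smp.h>

    static void ping(void *info)
    {
            /* The IPI itself is the point: it pulls the core out of its C-state. */
    }

    static void punit_sideband_begin(struct pm_qos_request *qos)
    {
            /* Forbid deep C-states while the punit transaction is in flight... */
            pm_qos_update_request(qos, 0);
            /* ...and ping every cpu so the whole package is demonstrably awake. */
            on_each_cpu(ping, NULL, 1);
    }

    static void punit_sideband_end(struct pm_qos_request *qos)
    {
            pm_qos_update_request(qos, PM_QOS_DEFAULT_VALUE);
    }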
-
Dave Airlie authored
Merge tag 'drm-intel-next-fixes-2019-04-25' of git://anongit.freedesktop.org/drm/drm-intel into drm-next - Use after free fix during GEM_CREATE when reporting back object size - Icelake DP register programming order fix Signed-off-by: Dave Airlie <airlied@redhat.com> From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190425061312.GA2919@jlahtine-desk.ger.corp.intel.com
-
Dave Airlie authored
Merge tag 'drm-misc-next-fixes-2019-04-24' of git://anongit.freedesktop.org/drm/drm-misc into drm-next - fb_helper: Fix NULL deref in legacy drivers (Noralf) - leases: Ensure lessees can't connect to objects outside their purview (Daniel) - leases: Enforce that lessees hold the lease for implicitly set planes (Daniel) - leases: A few non-functional cleanups (Daniel) Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Noralf Trønnes <noralf@tronnes.org> Signed-off-by: Dave Airlie <airlied@redhat.com> From: Sean Paul <sean@poorly.run> Link: https://patchwork.freedesktop.org/patch/msgid/20190424210604.GA32581@art_vandelay
-
- 25 Apr, 2019 2 commits
-
-
Chris Wilson authored
It was noted that we made the same mistake for VM_ID as for object handles, whereby we ensured that we only allocated a single handle for one ppgtt. This has the unfortunate consequence for userspace that they need to reference count the handles to avoid destroying an active ID. If we allow multiple handles to the same ppgtt, userspace can freely unreference any handle they own without fear of destroying the same handle in use elsewhere. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190425054333.27299-1-chris@chris-wilson.co.uk
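A sketch of the allocation side under that model (helper and field names are hypothetical): every GEM_VM_CREATE gets a fresh idr handle, and each handle holds its own reference on the ppgtt, so destroying one handle can never tear down a vm still reachable through another.

    /* Caller holds the (hypothetical) lock guarding fpriv->vm_idr. */
    static int vm_handle_alloc(struct drm_i915_file_private *fpriv,
                               struct i915_address_space *vm, u32 *handle)
    {
            int id;

            id = idr_alloc(&fpriv->vm_idr, vm, 0, 0, GFP_KERNEL);
            if (id < 0)
                    return id;

            vm_get(vm);             /* hypothetical: one reference per handle */
            *handle = id;
            return 0;
    }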
-
Chris Wilson authored
In order to separate the reservation phase of building a request from its emission phase, we need to pull some of the request alloc activities from deep inside i915_request to the surface, GEM_EXECBUFFER. v2: Be frivolous, use a local drm_i915_private. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190425050143.811-1-chris@chris-wilson.co.uk
-
- 24 Apr, 2019 13 commits
-
-
Chris Wilson authored
In the current scheme, on submitting a request we take a single global GEM wakeref, which trickles down to wake up all GT power domains. This is undesirable as we would like to be able to localise our power management to the available power domains and to remove the global GEM operations from the heart of the driver. (The intent there is to push global GEM decisions to the boundary as used by the GEM user interface.) Now during request construction, each request is responsible via its logical context to acquire a wakeref on each power domain it intends to utilize. Currently, each request takes a wakeref on the engine(s) and the engines themselves take a chipset wakeref. This gives us a transition on each engine which we can extend if we want to insert more power management control (such as soft rc6). The global GEM operations that currently require a struct_mutex are reduced to listening to pm events from the chipset GT wakeref. As we reduce the struct_mutex requirement, these listeners should evaporate. Perhaps the biggest immediate change is that this removes the struct_mutex requirement around GT power management, allowing us greater flexibility in request construction. Another important knock-on effect is that by tracking engine usage, we can insert a switch back to the kernel context on that engine immediately, avoiding any extra delay or inserting global synchronisation barriers. This makes tracking when an engine and its associated contexts are idle much easier -- important for when we forgo our assumed execution ordering and need idle barriers to unpin used contexts. In the process, it means we remove a large chunk of code whose only purpose was to switch back to the kernel context. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Imre Deak <imre.deak@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-5-chris@chris-wilson.co.uk
-
Chris Wilson authored
Start acquiring the logical intel_context and using that as our primary means for request allocation. This is the initial step to allow us to avoid requiring struct_mutex for request allocation along the perma-pinned kernel context, but it also provides a foundation for breaking up the complex request allocation to handle different scenarios inside execbuf. For the purpose of emitting a request from inside retirement (see the next patch for engine power management), we also need to lift control over the timeline mutex to the caller. v2: Note that the request carries the active reference upon construction. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
We wish to start segregating the power management into different control domains, both with respect to the hardware and the user interface. The first step is that at the lowest level flow of requests, we want to process a context event (and not a global GEM operation). In this patch, we introduce the context callbacks that in future patches will be redirected to per-engine interfaces leading to global operations as required. The intent is that this will be guarded by the timeline->mutex, except that retiring has not quite finished transitioning over from being guarded by struct_mutex. So at the moment it is protected by struct_mutex with a reminder to switch. v2: Rename default handlers to intel_context_enter_engine. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-3-chris@chris-wilson.co.uk
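A sketch of the shape those callbacks might take; the field names are illustrative, and the default implementations would be the intel_context_enter_engine/exit pair mentioned in v2. The hooks fire only on the first-request/last-request transitions, under the (eventual) timeline mutex.

    struct intel_context_ops {
            void (*enter)(struct intel_context *ce);        /* first request in flight */
            void (*exit)(struct intel_context *ce);         /* last request retired */
    };

    static inline void intel_context_enter(struct intel_context *ce)
    {
            /* Today guarded by struct_mutex; the reminder is to switch to timeline->mutex. */
            if (!ce->active_count++)
                    ce->ops->enter(ce);
    }

    static inline void intel_context_exit(struct intel_context *ce)
    {
            if (!--ce->active_count)
                    ce->ops->exit(ce);
    }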
-
Chris Wilson authored
Split out the power management portion (GT wakeref, suspend/resume) of GEM from i915_gem.c into its own file. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
For controlling runtime pm of the GT and engines, we would like to have a callback to do extra work the first time we wake up and the last time we drop the wakeref. This first/last access needs serialisation and so we encompass a mutex with the regular intel_wakeref_t tracker. v2: Drop the _once naming and report the errors. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-1-chris@chris-wilson.co.uk
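A sketch of such a tracker under the stated assumptions (names and error handling simplified, not the exact i915 type): the count is atomic for the fast path, and the 0 <-> 1 transitions are serialised by the mutex so the first get and the last put can do their extra work exactly once.

    struct intel_wakeref {
            struct mutex mutex;
            atomic_t count;
    };

    static int wakeref_get(struct intel_wakeref *wf,
                           int (*first)(struct intel_wakeref *))
    {
            int err = 0;

            if (atomic_inc_not_zero(&wf->count))
                    return 0;               /* already awake, fast path */

            mutex_lock(&wf->mutex);
            if (!atomic_read(&wf->count)) {
                    err = first(wf);        /* extra work on first wakeup */
                    if (!err)
                            atomic_inc(&wf->count);
            } else {
                    atomic_inc(&wf->count);
            }
            mutex_unlock(&wf->mutex);

            return err;                     /* v2: report, don't swallow */
    }

    static void wakeref_put(struct intel_wakeref *wf,
                            void (*last)(struct intel_wakeref *))
    {
            /* Take the mutex only if this put drops the count to zero. */
            if (atomic_dec_and_mutex_lock(&wf->count, &wf->mutex)) {
                    last(wf);               /* extra work on last release */
                    mutex_unlock(&wf->mutex);
            }
    }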
-
Chris Wilson authored
Start partitioning off the code that talks to the hardware (GT) from the uapi layers and move the device facing code under gt/ One casualty is s/intel_ringbuffer.h/intel_engine.h/ with the plan to subdivide that header and body further (and split out the submission code from the ringbuffer and logical context handling). This patch aims to be simple motion so git can fixup inflight patches with little mess. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424174839.7141-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
The RING_NONPRIV mechanism allows us to add registers to a whitelist that userspace is then permitted to modify. Ideally such registers should be safe and saved within the context such that they do not impact system behaviour for other users. This selftest verifies that those registers we do add are (a) then writable by userspace and (b) only affect a single client. Opens: - Is GEN9_SLICE_COMMON_ECO_CHICKEN1 really write-only? v2: Remove the blatant copy-paste. v3: Emulate userspace register writes via the batch again. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424110941.9869-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
As we push for better compartmentalisation, it is more convenient to copy the default sseu configuration from the engine into the derived logical context, than it is to dig it out from i915->runtime_info. v2: Use intel_sseu_from_device_info() to describe the converter Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190424095134.30249-1-chris@chris-wilson.co.uk
-
Noralf Trønnes authored
Non-atomic drivers like ast don't have connector->state set, resulting in a NULL pointer deref: [ 29.609593] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010 [ 29.609619] Call Trace: [ 29.609630] ? drm_helper_probe_single_connector_modes+0x27f/0x680 [ 29.609640] drm_setup_crtcs+0x431/0xd80 [drm_kms_helper] [ 29.753065] __drm_fb_helper_initial_config_and_unlock+0x6f/0x6a0 [ 29.753160] ? drm_modeset_unlock_all+0x31/0x50 [drm] [ 29.765758] ast_fbdev_init+0xa8/0xc0 [ast] [ 29.765762] ast_driver_load.cold.7+0x2b3/0xe11 [ast] [ 29.765775] drm_dev_register+0x111/0x150 [drm] Fix by bailing out if the driver does not support atomic modesetting. Fixes: 09ded8af ("drm/i915/fbdev: Move intel_fb_initial_config() to fbdev helper") Reported-by: Thomas Zimmermann <tzimmermann@suse.de> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jani Nikula <jani.nikula@linux.intel.com> Signed-off-by: Noralf Trønnes <noralf@tronnes.org> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Tested-by: Thomas Zimmermann <tzimmermann@suse.de> Link: https://patchwork.freedesktop.org/patch/msgid/20190423145353.30158-1-noralf@tronnes.org
-
Daniel Vetter authored
With the previous patch drm_crtc_find will return NULL when the crtc isn't in our lease, which will then disable the plane/connector. No longer an issue since the lessor can't escape their lease terms anymore, but not quite great semantics yet either. Catch this and return -EACCES, so that at least evil test cases have a better chance of making sure the kernel works correctly. Reviewed-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190228144910.26488-8-daniel.vetter@ffwll.ch
-
Daniel Vetter authored
We need this to make sure lessees can only connect their plane/connectors to crtc objects they own. And note that this is irrespective of whether the lessor is atomic or not; the lessor cannot prevent lessees from enabling atomic. Cc: stable@vger.kernel.org Cc: Keith Packard <keithp@keithp.com> Reviewed-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190228144910.26488-7-daniel.vetter@ffwll.ch
-
Daniel Vetter authored
If userspace doesn't enable universal planes, then we automatically add the primary and cursor planes. But for universal userspace there's no such check (and maybe we only want to give the lessee one plane, maybe not even the primary one), hence we need to check for the implied plane. v2: don't forget setcrtc ioctl. v3: Still allow disabling of the crtc in SETCRTC. Cc: stable@vger.kernel.org Cc: Keith Packard <keithp@keithp.com> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190228144910.26488-6-daniel.vetter@ffwll.ch
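A sketch of the added check in the legacy SETCRTC path (condensed and approximate; drm_lease_held() is a real helper, the surrounding control flow is assumed): if the caller never opted into universal planes, enabling the crtc implicitly drives the primary plane, so the lessee must hold that object too, while a disable request skips the check (v3).

    /* Inside the SETCRTC ioctl, after looking up @crtc for @file_priv. */
    if (crtc_req->mode_valid && !file_priv->universal_planes) {
            /* Enabling the crtc implicitly uses the primary plane. */
            if (!drm_lease_held(file_priv, crtc->primary->base.id))
                    return -EACCES;
    }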
-
Daniel Vetter authored
The lessor is invariant over the lifetime of a lease, so we don't have to grab any locks for that. Speeds up the common case of not being a lease. Cc: Keith Packard <keithp@keithp.com> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> Reviewed-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20190228144910.26488-5-daniel.vetter@ffwll.ch
-