Commit 4d07b0bc authored by Lyude Paul

drm/display/dp_mst: Move all payload info into the atomic state

Now that we've finally gotten rid of the non-atomic MST users left over in
the kernel, we can get rid of all of the legacy payload code and move as
much as possible into the MST atomic state structs. The main purpose of
this is to make the MST code a lot less confusing to work on, as there's a
lot of duplicated logic that doesn't really need to be here. This should
also make it far easier to introduce features like fallback link
retraining and DSC support.

Since the old payload code was pretty gnarly and there are a lot of
changes here, I expect this might be a bit difficult to review. So to make
things as easy as possible for reviewers, I'll sum up how both the old and
the new code work (it took me a while to figure this out too!).

The old MST code basically worked by maintaining two different payload
tables - proposed_vcpis, and payloads. proposed_vcpis would hold the
modified payload we wanted to push to the topology, while payloads held the
payload table that was currently programmed in hardware. Modifications to
proposed_vcpis would be handled through drm_dp_mst_allocate_vcpi(),
drm_dp_mst_deallocate_vcpi(), and drm_dp_mst_reset_vcpi_slots(). They would
then be pushed to the topology via drm_dp_update_payload_part1() and
drm_dp_update_payload_part2().
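
For illustration, here's a minimal sketch of that old call sequence as a
driver would go through it (a hedged reconstruction from the removed
driver code below, not the exact kernel code; error handling trimmed):

  /* Enable: stage a proposed VCPI, then push both payload tables */
  static void old_enable_payload(struct drm_dp_mst_topology_mgr *mgr,
                                 struct drm_dp_mst_port *port,
                                 int pbn, int slots)
  {
          /* Fills a slot in mgr->proposed_vcpis */
          drm_dp_mst_allocate_vcpi(mgr, port, pbn, slots);
          /* Diffs proposed_vcpis against payloads and programs the DPCD
           * payload table; start slot is 1 for 8b/10b, 0 for 128b/132b */
          drm_dp_update_payload_part1(mgr, 1);
          /* After the ACT: sends ALLOCATE_PAYLOAD sideband messages */
          drm_dp_update_payload_part2(mgr);
  }

  /* Disable: zero the slots, push again, then free the VCPI */
  static void old_disable_payload(struct drm_dp_mst_topology_mgr *mgr,
                                  struct drm_dp_mst_port *port)
  {
          drm_dp_mst_reset_vcpi_slots(mgr, port);
          drm_dp_update_payload_part1(mgr, 1);
          drm_dp_update_payload_part2(mgr);
          drm_dp_mst_deallocate_vcpi(mgr, port);
  }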

Furthermore, it's important to note how adding and removing VC payloads
actually worked with drm_dp_update_payload_part1(). When a VC payload is
removed from the VC table, all VC payloads which come after the removed VC
payload's slots must have their time slots shifted towards the start of
the table. The old code handled this by looping through the entire payload
table and recomputing the start slot for every payload in the topology
from scratch. While very much overkill, this ended up doing the right
thing because we always order the VCPIs for payloads from first to last
starting time slot.
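
As a minimal illustration of that invariant (toy types, not the kernel's
actual structures), removing a payload shifts every later payload's start
slot down by the removed payload's size - the old full table walk and the
new code both end up computing exactly this:

  struct toy_payload {
          int vc_start_slot;
          int time_slots;
  };

  /* Shift payloads allocated after @removed towards the table start */
  static void shift_after_removal(struct toy_payload *payloads, int count,
                                  const struct toy_payload *removed)
  {
          int i;

          for (i = 0; i < count; i++) {
                  if (payloads[i].vc_start_slot > removed->vc_start_slot)
                          payloads[i].vc_start_slot -= removed->time_slots;
          }
  }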

It's also important to note that drm_dp_update_payload_part2() wasn't
actually limited to updating a single payload - the driver could use it to
queue up multiple payload changes so that as many of them as possible
could be sent before waiting for the ACT. This is -technically- not
against spec, but as Wayne Lin has pointed out, hubs on the market don't
implement it consistently correctly - so it might as well be.

drm_dp_update_payload_part2() is pretty self-explanatory and basically the
same between the old and new code, save for the fact that we don't have a
second step for deleting payloads anymore - and thus it's renamed to
drm_dp_add_payload_part2().

The new payload code stores all of the current payload info within the MST
atomic state and computes as much of the state as possible ahead of time.
The one exception is the starting time slots for payloads, which can't be
determined at atomic check time since they vary depending on the order in
which CRTCs are enabled in the atomic state - which varies from driver to
driver. The start slots are still stored in the atomic MST state, but are
only copied over from the old MST state during atomic commit time;
likewise, this is when new start slots are determined.
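
For illustration, a rough sketch of the check-time side of this under the
new API (mirroring the i915/amdgpu hunks below; link_rate and lane_count
stand in for wherever a given driver gets its link parameters):

  static int sketch_mst_atomic_check(struct drm_atomic_state *state,
                                     struct drm_dp_mst_topology_mgr *mgr,
                                     struct drm_dp_mst_port *port,
                                     int pbn, int link_rate, int lane_count)
  {
          struct drm_dp_mst_topology_state *mst_state;
          int slots;

          mst_state = drm_atomic_get_mst_topology_state(state, mgr);
          if (IS_ERR(mst_state))
                  return PTR_ERR(mst_state);

          /* The driver owns pbn_div, derived from its link configuration */
          if (!mst_state->pbn_div)
                  mst_state->pbn_div = drm_dp_get_vc_payload_bw(mgr, link_rate,
                                                                lane_count);

          /* Reserve time slots in the atomic state; the payload's start
           * slot is only resolved later, at commit time */
          slots = drm_dp_atomic_find_time_slots(state, mgr, port, pbn);
          if (slots < 0)
                  return slots;

          return 0;
  }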

Adding/removing payloads now follows the spec much more closely. When we
delete a payload, we loop through the current list of payloads and update
the start slots for any payloads whose time slots came after the payload
we just deleted. Determining the starting time slots for new payloads
being added is done by simply keeping track of where the end of the VC
table is in drm_dp_mst_topology_mgr->next_start_slot. Additionally, it's
worth noting that we no longer have a single update_payload() function.
Instead, we now have drm_dp_add_payload_part1|2() and
drm_dp_remove_payload(). As such, it's now left up to the driver to figure
out when to add or remove payloads. The driver already knows when it's
disabling/enabling CRTCs, so it also already knows when payloads should be
added or removed.
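
A rough sketch of the resulting commit-time flow (based on the calls
visible in the diff below; a real driver splits these across its
enable/disable hooks and waits for the ACT between the two add steps):

  static void sketch_enable_stream(struct drm_atomic_state *state,
                                   struct drm_dp_mst_topology_mgr *mgr,
                                   struct drm_dp_mst_port *port)
  {
          struct drm_dp_mst_topology_state *mst_state =
                  drm_atomic_get_new_mst_topology_state(state, mgr);
          struct drm_dp_mst_atomic_payload *payload =
                  drm_atomic_get_mst_payload_state(mst_state, port);

          /* Step 1: allocate slots in the payload table (DPCD write),
           * start slot taken from mgr->next_start_slot */
          drm_dp_add_payload_part1(mgr, mst_state, payload);

          /* ...enable the transcoder/stream, wait for the ACT... */

          /* Step 2: ALLOCATE_PAYLOAD sideband message to the branch */
          drm_dp_add_payload_part2(mgr, state, payload);
  }

  static void sketch_disable_stream(struct drm_atomic_state *state,
                                    struct drm_dp_mst_topology_mgr *mgr,
                                    struct drm_dp_mst_port *port)
  {
          struct drm_dp_mst_topology_state *mst_state =
                  drm_atomic_get_new_mst_topology_state(state, mgr);

          /* Removal is a single step; payloads after this one are
           * shifted towards the start of the table */
          drm_dp_remove_payload(mgr, mst_state,
                                drm_atomic_get_mst_payload_state(mst_state, port));
  }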

Changes since v1:
* Refactor around all of the completely dead code changes that are
  happening in amdgpu for some reason when they really shouldn't even be
  there in the first place… :\
* Remove mention of sending one ACT per series of payload updates. As Wayne
  Lin pointed out, there are apparently hubs on the market that don't work
  correctly with this scheme and require a separate ACT per payload update.
* Fix accidental drop of mst_mgr.lock - Wayne Lin
* Remove mentions of allowing multiple ACT updates per payload change,
  mention that this is a result of vendors not consistently supporting this
  part of the spec and requiring a unique ACT for each payload change.
* Get rid of reference to drm_dp_mst_port in DC - turns out I just got
  myself confused by DC and we don't actually need this.
Changes since v2:
* Get rid of the fix for not sending payload deallocations if ddps=0 and
  just go back to Wayne's fix
Signed-off-by: Lyude Paul <lyude@redhat.com>
Cc: Wayne Lin <Wayne.Lin@amd.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Fangzhi Zuo <Jerry.Zuo@amd.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sean Paul <sean@poorly.run>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220817193847.557945-18-lyude@redhat.com
parent 01ad1d9c
@@ -6385,6 +6385,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
     const struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
     struct drm_dp_mst_topology_mgr *mst_mgr;
     struct drm_dp_mst_port *mst_port;
+    struct drm_dp_mst_topology_state *mst_state;
     enum dc_color_depth color_depth;
     int clock, bpp = 0;
     bool is_y420 = false;
@@ -6398,6 +6399,13 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
     if (!crtc_state->connectors_changed && !crtc_state->mode_changed)
         return 0;

+    mst_state = drm_atomic_get_mst_topology_state(state, mst_mgr);
+    if (IS_ERR(mst_state))
+        return PTR_ERR(mst_state);
+
+    if (!mst_state->pbn_div)
+        mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link);
+
     if (!state->duplicated) {
         int max_bpc = conn_state->max_requested_bpc;
         is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
@@ -6409,11 +6417,10 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
         clock = adjusted_mode->clock;
         dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
     }
-    dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_time_slots(state,
-                                                                       mst_mgr,
-                                                                       mst_port,
-                                                                       dm_new_connector_state->pbn,
-                                                                       dm_mst_get_pbn_divider(aconnector->dc_link));
+
+    dm_new_connector_state->vcpi_slots =
+        drm_dp_atomic_find_time_slots(state, mst_mgr, mst_port,
+                                      dm_new_connector_state->pbn);
     if (dm_new_connector_state->vcpi_slots < 0) {
         DRM_DEBUG_ATOMIC("failed finding vcpi slots: %d\n", (int)dm_new_connector_state->vcpi_slots);
         return dm_new_connector_state->vcpi_slots;
@@ -6483,18 +6490,12 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
             dm_conn_state->pbn = pbn;
             dm_conn_state->vcpi_slots = slot_num;

-            drm_dp_mst_atomic_enable_dsc(state,
-                                         aconnector->port,
-                                         dm_conn_state->pbn,
-                                         0,
+            drm_dp_mst_atomic_enable_dsc(state, aconnector->port, dm_conn_state->pbn,
                                          false);
             continue;
         }

-        vcpi = drm_dp_mst_atomic_enable_dsc(state,
-                                            aconnector->port,
-                                            pbn, pbn_div,
-                                            true);
+        vcpi = drm_dp_mst_atomic_enable_dsc(state, aconnector->port, pbn, true);
         if (vcpi < 0)
             return vcpi;
@@ -9336,8 +9337,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
     struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
 #if defined(CONFIG_DRM_AMD_DC_DCN)
     struct dsc_mst_fairness_vars vars[MAX_PIPES];
-    struct drm_dp_mst_topology_state *mst_state;
-    struct drm_dp_mst_topology_mgr *mgr;
 #endif

     trace_amdgpu_dm_atomic_check_begin(state);
@@ -9576,33 +9575,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
         lock_and_validation_needed = true;
     }

-#if defined(CONFIG_DRM_AMD_DC_DCN)
-    /* set the slot info for each mst_state based on the link encoding format */
-    for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
-        struct amdgpu_dm_connector *aconnector;
-        struct drm_connector *connector;
-        struct drm_connector_list_iter iter;
-        u8 link_coding_cap;
-
-        if (!mgr->mst_state )
-            continue;
-
-        drm_connector_list_iter_begin(dev, &iter);
-        drm_for_each_connector_iter(connector, &iter) {
-            int id = connector->index;
-
-            if (id == mst_state->mgr->conn_base_id) {
-                aconnector = to_amdgpu_dm_connector(connector);
-                link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
-                drm_dp_mst_update_slots(mst_state, link_coding_cap);
-
-                break;
-            }
-        }
-        drm_connector_list_iter_end(&iter);
-    }
-#endif
     /**
      * Streams and planes are reset when there are changes that affect
      * bandwidth. Anything that affects bandwidth needs to go through
......
@@ -27,6 +27,7 @@
 #include <linux/acpi.h>
 #include <linux/i2c.h>

+#include <drm/drm_atomic.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_edid.h>
@@ -154,40 +155,27 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
 }

 static void
-fill_dc_mst_payload_table_from_drm(struct amdgpu_dm_connector *aconnector,
-                                   struct dc_dp_mst_stream_allocation_table *proposed_table)
+fill_dc_mst_payload_table_from_drm(struct drm_dp_mst_topology_state *mst_state,
+                                   struct amdgpu_dm_connector *aconnector,
+                                   struct dc_dp_mst_stream_allocation_table *table)
 {
-    int i;
-    struct drm_dp_mst_topology_mgr *mst_mgr =
-            &aconnector->mst_port->mst_mgr;
+    struct dc_dp_mst_stream_allocation_table new_table = { 0 };
+    struct dc_dp_mst_stream_allocation *sa;
+    struct drm_dp_mst_atomic_payload *payload;

-    mutex_lock(&mst_mgr->payload_lock);
-
-    proposed_table->stream_count = 0;
-
-    /* number of active streams */
-    for (i = 0; i < mst_mgr->max_payloads; i++) {
-        if (mst_mgr->payloads[i].num_slots == 0)
-            break; /* end of vcp_id table */
-
-        ASSERT(mst_mgr->payloads[i].payload_state !=
-                DP_PAYLOAD_DELETE_LOCAL);
-
-        if (mst_mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL ||
-            mst_mgr->payloads[i].payload_state == DP_PAYLOAD_REMOTE) {
-
-            struct dc_dp_mst_stream_allocation *sa =
-                    &proposed_table->stream_allocations[
-                        proposed_table->stream_count];
-
-            sa->slot_count = mst_mgr->payloads[i].num_slots;
-            sa->vcp_id = mst_mgr->proposed_vcpis[i]->vcpi;
-            proposed_table->stream_count++;
-        }
-    }
-
-    mutex_unlock(&mst_mgr->payload_lock);
+    /* Fill payload info*/
+    list_for_each_entry(payload, &mst_state->payloads, next) {
+        if (payload->delete)
+            continue;
+
+        sa = &new_table.stream_allocations[new_table.stream_count];
+        sa->slot_count = payload->time_slots;
+        sa->vcp_id = payload->vcpi;
+        new_table.stream_count++;
+    }
+
+    /* Overwrite the old table */
+    *table = new_table;
 }

 void dm_helpers_dp_update_branch_info(
@@ -205,11 +193,9 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
         bool enable)
 {
     struct amdgpu_dm_connector *aconnector;
-    struct dm_connector_state *dm_conn_state;
+    struct drm_dp_mst_topology_state *mst_state;
+    struct drm_dp_mst_atomic_payload *payload;
     struct drm_dp_mst_topology_mgr *mst_mgr;
-    struct drm_dp_mst_port *mst_port;
-    bool ret;
-    u8 link_coding_cap = DP_8b_10b_ENCODING;

     aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
     /* Accessing the connector state is required for vcpi_slots allocation
@@ -220,40 +206,21 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
     if (!aconnector || !aconnector->mst_port)
         return false;

-    dm_conn_state = to_dm_connector_state(aconnector->base.state);
-
     mst_mgr = &aconnector->mst_port->mst_mgr;
-
-    if (!mst_mgr->mst_state)
-        return false;
-
-    mst_port = aconnector->port;
-
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-    link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
-#endif
-
-    if (enable) {
-        ret = drm_dp_mst_allocate_vcpi(mst_mgr, mst_port,
-                                       dm_conn_state->pbn,
-                                       dm_conn_state->vcpi_slots);
-        if (!ret)
-            return false;
-    } else {
-        drm_dp_mst_reset_vcpi_slots(mst_mgr, mst_port);
-    }
+    mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);

     /* It's OK for this to fail */
-    drm_dp_update_payload_part1(mst_mgr, (link_coding_cap == DP_CAP_ANSI_128B132B) ? 0:1);
+    payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
+    if (enable)
+        drm_dp_add_payload_part1(mst_mgr, mst_state, payload);
+    else
+        drm_dp_remove_payload(mst_mgr, mst_state, payload);

     /* mst_mgr->payloads are VC payload notify MST branch using DPCD or
      * AUX message. The sequence is slot 1-63 allocated sequence for each
      * stream. AMD ASIC stream slot allocation should follow the same
      * sequence. copy DRM MST allocation to dc */
-    fill_dc_mst_payload_table_from_drm(aconnector, proposed_table);
+    fill_dc_mst_payload_table_from_drm(mst_state, aconnector, proposed_table);

     return true;
 }
@@ -310,8 +277,9 @@ bool dm_helpers_dp_mst_send_payload_allocation(
         bool enable)
 {
     struct amdgpu_dm_connector *aconnector;
+    struct drm_dp_mst_topology_state *mst_state;
     struct drm_dp_mst_topology_mgr *mst_mgr;
-    struct drm_dp_mst_port *mst_port;
+    struct drm_dp_mst_atomic_payload *payload;
     enum mst_progress_status set_flag = MST_ALLOCATE_NEW_PAYLOAD;
     enum mst_progress_status clr_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
@@ -320,19 +288,16 @@ bool dm_helpers_dp_mst_send_payload_allocation(
     if (!aconnector || !aconnector->mst_port)
         return false;

-    mst_port = aconnector->port;
-
     mst_mgr = &aconnector->mst_port->mst_mgr;
+    mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);

-    if (!mst_mgr->mst_state)
-        return false;
+    payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);

     if (!enable) {
         set_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
         clr_flag = MST_ALLOCATE_NEW_PAYLOAD;
     }

-    if (drm_dp_update_payload_part2(mst_mgr)) {
+    if (enable && drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload)) {
         amdgpu_dm_set_mst_status(&aconnector->mst_status,
                                  set_flag, false);
     } else {
@@ -342,9 +307,6 @@ bool dm_helpers_dp_mst_send_payload_allocation(
                                  clr_flag, false);
     }

-    if (!enable)
-        drm_dp_mst_deallocate_vcpi(mst_mgr, mst_port);
-
     return true;
 }
......
@@ -597,15 +597,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
     dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap);
     aconnector->mst_mgr.cbs = &dm_mst_cbs;
-    drm_dp_mst_topology_mgr_init(
-        &aconnector->mst_mgr,
-        adev_to_drm(dm->adev),
-        &aconnector->dm_dp_aux.aux,
-        16,
-        4,
-        max_link_enc_cap.lane_count,
-        drm_dp_bw_code_to_link_rate(max_link_enc_cap.link_rate),
-        aconnector->connector_id);
+    drm_dp_mst_topology_mgr_init(&aconnector->mst_mgr, adev_to_drm(dm->adev),
+                                 &aconnector->dm_dp_aux.aux, 16, 4, aconnector->connector_id);

     drm_connector_attach_dp_subconnector_property(&aconnector->base);
 }
@@ -710,6 +703,7 @@ static int bpp_x16_from_pbn(struct dsc_mst_fairness_params param, int pbn)
 }

 static bool increase_dsc_bpp(struct drm_atomic_state *state,
+                             struct drm_dp_mst_topology_state *mst_state,
                              struct dc_link *dc_link,
                              struct dsc_mst_fairness_params *params,
                              struct dsc_mst_fairness_vars *vars,
@@ -722,12 +716,9 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
     int min_initial_slack;
     int next_index;
     int remaining_to_increase = 0;
-    int pbn_per_timeslot;
     int link_timeslots_used;
     int fair_pbn_alloc;

-    pbn_per_timeslot = dm_mst_get_pbn_divider(dc_link);
-
     for (i = 0; i < count; i++) {
         if (vars[i + k].dsc_enabled) {
             initial_slack[i] =
@@ -758,17 +749,17 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
         link_timeslots_used = 0;

         for (i = 0; i < count; i++)
-            link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, pbn_per_timeslot);
+            link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, mst_state->pbn_div);

-        fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot;
+        fair_pbn_alloc =
+            (63 - link_timeslots_used) / remaining_to_increase * mst_state->pbn_div;

         if (initial_slack[next_index] > fair_pbn_alloc) {
             vars[next_index].pbn += fair_pbn_alloc;
             if (drm_dp_atomic_find_time_slots(state,
                                               params[next_index].port->mgr,
                                               params[next_index].port,
-                                              vars[next_index].pbn,
-                                              pbn_per_timeslot) < 0)
+                                              vars[next_index].pbn) < 0)
                 return false;
             if (!drm_dp_mst_atomic_check(state)) {
                 vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn);
@@ -777,8 +768,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
                 if (drm_dp_atomic_find_time_slots(state,
                                                   params[next_index].port->mgr,
                                                   params[next_index].port,
-                                                  vars[next_index].pbn,
-                                                  pbn_per_timeslot) < 0)
+                                                  vars[next_index].pbn) < 0)
                     return false;
             }
         } else {
@@ -786,8 +776,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
             if (drm_dp_atomic_find_time_slots(state,
                                               params[next_index].port->mgr,
                                               params[next_index].port,
-                                              vars[next_index].pbn,
-                                              pbn_per_timeslot) < 0)
+                                              vars[next_index].pbn) < 0)
                 return false;
             if (!drm_dp_mst_atomic_check(state)) {
                 vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16;
@@ -796,8 +785,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
             if (drm_dp_atomic_find_time_slots(state,
                                               params[next_index].port->mgr,
                                               params[next_index].port,
-                                              vars[next_index].pbn,
-                                              pbn_per_timeslot) < 0)
+                                              vars[next_index].pbn) < 0)
                 return false;
         }
     }
@@ -854,8 +842,7 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
         if (drm_dp_atomic_find_time_slots(state,
                                           params[next_index].port->mgr,
                                           params[next_index].port,
-                                          vars[next_index].pbn,
-                                          dm_mst_get_pbn_divider(dc_link)) < 0)
+                                          vars[next_index].pbn) < 0)
             return false;

         if (!drm_dp_mst_atomic_check(state)) {
@@ -866,8 +853,7 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
             if (drm_dp_atomic_find_time_slots(state,
                                               params[next_index].port->mgr,
                                               params[next_index].port,
-                                              vars[next_index].pbn,
-                                              dm_mst_get_pbn_divider(dc_link)) < 0)
+                                              vars[next_index].pbn) < 0)
                 return false;
         }
@@ -881,17 +867,27 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
                                              struct dc_state *dc_state,
                                              struct dc_link *dc_link,
                                              struct dsc_mst_fairness_vars *vars,
+                                             struct drm_dp_mst_topology_mgr *mgr,
                                              int *link_vars_start_index)
 {
-    int i, k;
     struct dc_stream_state *stream;
     struct dsc_mst_fairness_params params[MAX_PIPES];
     struct amdgpu_dm_connector *aconnector;
+    struct drm_dp_mst_topology_state *mst_state = drm_atomic_get_mst_topology_state(state, mgr);
     int count = 0;
+    int i, k;
     bool debugfs_overwrite = false;

     memset(params, 0, sizeof(params));

+    if (IS_ERR(mst_state))
+        return false;
+
+    mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link);
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+    drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link));
+#endif
+
     /* Set up params */
     for (i = 0; i < dc_state->stream_count; i++) {
         struct dc_dsc_policy dsc_policy = {0};
@@ -950,11 +946,8 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
         vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
         vars[i + k].dsc_enabled = false;
         vars[i + k].bpp_x16 = 0;
-        if (drm_dp_atomic_find_time_slots(state,
-                                          params[i].port->mgr,
-                                          params[i].port,
-                                          vars[i + k].pbn,
-                                          dm_mst_get_pbn_divider(dc_link)) < 0)
+        if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port,
+                                          vars[i + k].pbn) < 0)
             return false;
     }
     if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) {
@@ -968,21 +961,15 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
             vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
             vars[i + k].dsc_enabled = true;
             vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
-            if (drm_dp_atomic_find_time_slots(state,
-                                              params[i].port->mgr,
-                                              params[i].port,
-                                              vars[i + k].pbn,
-                                              dm_mst_get_pbn_divider(dc_link)) < 0)
+            if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+                                              params[i].port, vars[i + k].pbn) < 0)
                 return false;
         } else {
             vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
             vars[i + k].dsc_enabled = false;
             vars[i + k].bpp_x16 = 0;
-            if (drm_dp_atomic_find_time_slots(state,
-                                              params[i].port->mgr,
-                                              params[i].port,
-                                              vars[i + k].pbn,
-                                              dm_mst_get_pbn_divider(dc_link)) < 0)
+            if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+                                              params[i].port, vars[i + k].pbn) < 0)
                 return false;
         }
     }
@@ -990,7 +977,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
         return false;

     /* Optimize degree of compression */
-    if (!increase_dsc_bpp(state, dc_link, params, vars, count, k))
+    if (!increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k))
         return false;

     if (!try_disable_dsc(state, dc_link, params, vars, count, k))
@@ -1136,8 +1123,9 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
             continue;

         mutex_lock(&aconnector->mst_mgr.lock);
-        if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link,
-                                              vars, &link_vars_start_index)) {
+        if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
+                                              &aconnector->mst_mgr,
+                                              &link_vars_start_index)) {
             mutex_unlock(&aconnector->mst_mgr.lock);
             return false;
         }
@@ -1195,10 +1183,8 @@ static bool
             continue;

         mutex_lock(&aconnector->mst_mgr.lock);
-        if (!compute_mst_dsc_configs_for_link(state,
-                                              dc_state,
-                                              stream->link,
-                                              vars,
+        if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
+                                              &aconnector->mst_mgr,
                                               &link_vars_start_index)) {
             mutex_unlock(&aconnector->mst_mgr.lock);
             return false;
......
@@ -251,6 +251,9 @@ union dpcd_training_lane_set {
  * _ONLY_ be filled out from DM and then passed to DC, do NOT use these for _any_ kind of atomic
  * state calculations in DM, or you will break something.
  */
+
+struct drm_dp_mst_port;
+
 /* DP MST stream allocation (payload bandwidth number) */
 struct dc_dp_mst_stream_allocation {
     uint8_t vcp_id;
......
@@ -52,6 +52,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
     struct drm_atomic_state *state = crtc_state->uapi.state;
     struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
     struct intel_dp *intel_dp = &intel_mst->primary->dp;
+    struct drm_dp_mst_topology_state *mst_state;
     struct intel_connector *connector =
         to_intel_connector(conn_state->connector);
     struct drm_i915_private *i915 = to_i915(connector->base.dev);
@@ -60,22 +61,28 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
     bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N);
     int bpp, slots = -EINVAL;

+    mst_state = drm_atomic_get_mst_topology_state(state, &intel_dp->mst_mgr);
+    if (IS_ERR(mst_state))
+        return PTR_ERR(mst_state);
+
     crtc_state->lane_count = limits->max_lane_count;
     crtc_state->port_clock = limits->max_rate;

+    // TODO: Handle pbn_div changes by adding a new MST helper
+    if (!mst_state->pbn_div) {
+        mst_state->pbn_div = drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr,
+                                                      limits->max_rate,
+                                                      limits->max_lane_count);
+    }
+
     for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
         crtc_state->pipe_bpp = bpp;

         crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock,
                                                crtc_state->pipe_bpp,
                                                false);

         slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr,
-                                              connector->port,
-                                              crtc_state->pbn,
-                                              drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr,
-                                                                       crtc_state->port_clock,
-                                                                       crtc_state->lane_count));
+                                              connector->port, crtc_state->pbn);
         if (slots == -EDEADLK)
             return slots;
         if (slots >= 0)
@@ -360,21 +367,17 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
     struct intel_dp *intel_dp = &dig_port->dp;
     struct intel_connector *connector =
         to_intel_connector(old_conn_state->connector);
+    struct drm_dp_mst_topology_state *mst_state =
+        drm_atomic_get_mst_topology_state(&state->base, &intel_dp->mst_mgr);
     struct drm_i915_private *i915 = to_i915(connector->base.dev);
-    int start_slot = intel_dp_is_uhbr(old_crtc_state) ? 0 : 1;
-    int ret;

     drm_dbg_kms(&i915->drm, "active links %d\n",
                 intel_dp->active_mst_links);

     intel_hdcp_disable(intel_mst->connector);

-    drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port);
-
-    ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
-    if (ret) {
-        drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret);
-    }
+    drm_dp_remove_payload(&intel_dp->mst_mgr, mst_state,
+                          drm_atomic_get_mst_payload_state(mst_state, connector->port));

     intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state);
 }
@@ -402,8 +405,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
     intel_disable_transcoder(old_crtc_state);

-    drm_dp_update_payload_part2(&intel_dp->mst_mgr);
-
     clear_act_sent(encoder, old_crtc_state);

     intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
@@ -411,8 +412,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
     wait_for_act_sent(encoder, old_crtc_state);

-    drm_dp_mst_deallocate_vcpi(&intel_dp->mst_mgr, connector->port);
-
     intel_ddi_disable_transcoder_func(old_crtc_state);

     if (DISPLAY_VER(dev_priv) >= 9)
@@ -479,7 +478,8 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
     struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
     struct intel_connector *connector =
         to_intel_connector(conn_state->connector);
-    int start_slot = intel_dp_is_uhbr(pipe_config) ? 0 : 1;
+    struct drm_dp_mst_topology_state *mst_state =
+        drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
     int ret;
     bool first_mst_stream;
@@ -505,16 +505,13 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
         dig_port->base.pre_enable(state, &dig_port->base,
                                   pipe_config, NULL);

-    ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
-                                   connector->port,
-                                   pipe_config->pbn,
-                                   pipe_config->dp_m_n.tu);
-    if (!ret)
-        drm_err(&dev_priv->drm, "failed to allocate vcpi\n");
-
     intel_dp->active_mst_links++;

-    ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
+    ret = drm_dp_add_payload_part1(&intel_dp->mst_mgr, mst_state,
+                                   drm_atomic_get_mst_payload_state(mst_state, connector->port));
+    if (ret < 0)
+        drm_err(&dev_priv->drm, "Failed to create MST payload for %s: %d\n",
+                connector->base.name, ret);

     /*
      * Before Gen 12 this is not done as part of
@@ -537,7 +534,10 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
     struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
     struct intel_digital_port *dig_port = intel_mst->primary;
     struct intel_dp *intel_dp = &dig_port->dp;
+    struct intel_connector *connector = to_intel_connector(conn_state->connector);
     struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+    struct drm_dp_mst_topology_state *mst_state =
+        drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
     enum transcoder trans = pipe_config->cpu_transcoder;

     drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder);
@@ -565,7 +565,8 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
     wait_for_act_sent(encoder, pipe_config);

-    drm_dp_update_payload_part2(&intel_dp->mst_mgr);
+    drm_dp_add_payload_part2(&intel_dp->mst_mgr, &state->base,
+                             drm_atomic_get_mst_payload_state(mst_state, connector->port));

     if (DISPLAY_VER(dev_priv) >= 12 && pipe_config->fec_enable)
         intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0,
@@ -949,8 +950,6 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
     struct intel_dp *intel_dp = &dig_port->dp;
     enum port port = dig_port->base.port;
     int ret;
-    int max_source_rate =
-        intel_dp->source_rates[intel_dp->num_source_rates - 1];

     if (!HAS_DP_MST(i915) || intel_dp_is_edp(intel_dp))
         return 0;
@@ -966,10 +965,7 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
     /* create encoders */
     intel_dp_create_fake_mst_encoders(dig_port);
     ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, &i915->drm,
-                                       &intel_dp->aux, 16, 3,
-                                       dig_port->max_lanes,
-                                       max_source_rate,
-                                       conn_base_id);
+                                       &intel_dp->aux, 16, 3, conn_base_id);
     if (ret) {
         intel_dp->mst_mgr.cbs = NULL;
         return ret;
......
@@ -30,8 +30,30 @@

 static int intel_conn_to_vcpi(struct intel_connector *connector)
 {
+    struct drm_dp_mst_topology_mgr *mgr;
+    struct drm_dp_mst_atomic_payload *payload;
+    struct drm_dp_mst_topology_state *mst_state;
+    int vcpi = 0;
+
     /* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */
-    return connector->port ? connector->port->vcpi.vcpi : 0;
+    if (!connector->port)
+        return 0;
+    mgr = connector->port->mgr;
+
+    drm_modeset_lock(&mgr->base.lock, NULL);
+    mst_state = to_drm_dp_mst_topology_state(mgr->base.state);
+    payload = drm_atomic_get_mst_payload_state(mst_state, connector->port);
+    if (drm_WARN_ON(mgr->dev, !payload))
+        goto out;
+
+    vcpi = payload->vcpi;
+    if (drm_WARN_ON(mgr->dev, vcpi < 0)) {
+        vcpi = 0;
+        goto out;
+    }
+out:
+    drm_modeset_unlock(&mgr->base.lock);
+    return vcpi;
 }

 /*
......