    vdpa/mlx5: Use consistent RQT size · acde3929
    Eli Cohen authored
    The current code evaluates RQT size based on the configured number of
    virtqueues. This can raise an issue in the following scenario:
    
    Assume MQ was negotiated.
    1. mlx5_vdpa_set_map() gets called.
    2. handle_ctrl_mq() is called, setting cur_num_vqs to some value lower
       than the configured max VQs.
    3. A second set_map gets called, but now a smaller number of VQs is used
       to evaluate the size of the RQT.
    4. handle_ctrl_mq() is called with a value larger than what the RQT can
       hold. This will emit errors and the driver state is compromised (see
       the sketch after this list).
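
    The mismatch can be modeled with a few lines of standalone C; the
    function and variable names below are illustrative, not the driver's:

    #include <stdio.h>

    static int rqt_entries;   /* RQT capacity chosen when it is created */
    static int cur_num_vqs;   /* data VQs currently in use */

    static void create_rqt(int num_vqs)
    {
            rqt_entries = num_vqs / 2;        /* one entry per queue pair */
    }

    static int change_num_qps(int newqps)
    {
            if (newqps > rqt_entries) {       /* step 4: no longer fits */
                    fprintf(stderr, "RQT too small: %d > %d\n",
                            newqps, rqt_entries);
                    return -1;
            }
            cur_num_vqs = 2 * newqps;
            return 0;
    }

    int main(void)
    {
            int max_vqs = 16;

            create_rqt(max_vqs);              /* 1. first set_map */
            change_num_qps(2);                /* 2. guest asks for 2 queue pairs */
            create_rqt(cur_num_vqs);          /* 3. second set_map sizes the RQT
                                                 from cur_num_vqs, now only 4 VQs */
            return change_num_qps(8);         /* 4. fails: RQT holds only 2 entries */
    }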
    
    To fix this, we use a new field in struct mlx5_vdpa_net to hold the
    required number of entries in the RQT. This value is evaluated in
    mlx5_vdpa_set_driver_features() where we have the negotiated features
    all set up.
    
    In addition, we take the maximum supported number of RQT entries into
    account early, when the device is added, so we don't need to consider it
    when creating the RQT.
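
    Conceptually, this amounts to clamping once at device add time (the
    helper below is an illustrative model, not the driver's code):

    #include <stdint.h>

    /* Cap the advertised number of VQs by the hardware RQT capability when
     * the device is added; RQT creation then never has to re-check it. */
    static uint32_t clamp_max_vqs(uint32_t requested_vqs, uint32_t max_rqt_entries)
    {
            uint32_t cap_vqs = 2 * max_rqt_entries;  /* one RQT entry per queue pair */

            return requested_vqs < cap_vqs ? requested_vqs : cap_vqs;
    }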
    
    Last, we remove the use of mlx5_vdpa_max_qps(), which just returns
    max_vqs / 2, to make the code clearer.
    
    Fixes: 52893733 ("vdpa/mlx5: Add multiqueue support")
    Acked-by: Jason Wang <jasowang@redhat.com>
    Signed-off-by: Eli Cohen <elic@nvidia.com>
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>