Commit 52b113e9 authored by Daniel Vetter

Merge tag 'drm-misc-next-2023-04-06' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.4-rc1:

UAPI Changes:

Cross-subsystem Changes:
- Document port and rotation dt bindings better.
- For panel timing DT bindings, document that vsync and hsync come
  first, rather than last, in the image.
- Fix video/aperture typos.

Core Changes:
- Reject prime DMA-Buf attachment if get_sg_table is missing.
  (For self-importing dma-buf only.)
- Add prime import/export to vram-helper.
- Fix oops in drm/vblank when init is not called.
- Fixup xres/yres_virtual and other fixes in fb helper.
- Improve SCDC debugs.
- Skip setting deadline on modesets.
- Assorted TTM fixes.

Driver Changes:
- Add lima usage stats.
- Assorted fixes to bridge/lt8912b, tc358767, ivpu,
  bridge/ti-sn65dsi83, ps8640.
- Use PCI aperture helpers in drm/ast, lynxfb, radeonfb.
- Revert some lima patches, as they required a commit that has been
  reverted upstream.
- Add AUO NE135FBM-N41 v8.1 eDP panel.
- Add QAIC accel driver.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/64bb9696-a76a-89d9-1866-bcdf7c69c284@linux.intel.com
parents f8628656 e44f18c6
......@@ -8,6 +8,7 @@ Compute Accelerators
:maxdepth: 1
introduction
qaic/index
.. only:: subproject and html
......
This diff is collapsed.
.. SPDX-License-Identifier: GPL-2.0-only
=====================================
accel/qaic Qualcomm Cloud AI driver
=====================================
The accel/qaic driver supports the Qualcomm Cloud AI machine learning
accelerator cards.
.. toctree::
qaic
aic100
.. SPDX-License-Identifier: GPL-2.0-only
=============
QAIC driver
=============
The QAIC driver is the Kernel Mode Driver (KMD) for the AIC100 family of AI
accelerator products.
Interrupts
==========
While the AIC100 DMA Bridge hardware implements an IRQ storm mitigation
mechanism, it is still possible for an IRQ storm to occur. A storm can happen
if the workload is particularly quick, and the host is responsive. If the host
can drain the response FIFO as quickly as the device can insert elements into
it, then the device will frequently transition the response FIFO from empty to
non-empty and generate MSIs at a rate equivalent to the speed of the
workload's ability to process inputs. The lprnet (license plate reader network)
workload is known to trigger this condition, and can generate in excess of 100k
MSIs per second. It has been observed that most systems cannot tolerate this
for long, and will crash when some form of watchdog fires under the overhead of
the interrupt controller continually interrupting the host CPU.
To mitigate this issue, the QAIC driver implements specific IRQ handling. When
QAIC receives an IRQ, it disables that line. This prevents the interrupt
controller from interrupting the CPU. Then QAIC drains the FIFO. Once the FIFO
is drained, QAIC runs a "last chance" polling algorithm where it sleeps for a
time to see if the workload will generate more activity. The IRQ line remains
disabled during this time. If no activity is detected, QAIC exits polling mode
and re-enables the IRQ line.
This mitigation in QAIC is very effective. The same lprnet use case that
generates 100k IRQs per second (per /proc/interrupts) is reduced to roughly 64
IRQs over 5 minutes, while keeping the host system stable and delivering the
same workload throughput (within run-to-run noise variation).
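
A minimal sketch of this disable/drain/poll/re-enable pattern is shown below.
It assumes a threaded handler; every name other than the standard kernel
helpers (disable_irq_nosync(), enable_irq(), usleep_range()) is a placeholder,
not a real QAIC symbol::

  #include <linux/delay.h>
  #include <linux/interrupt.h>

  /* Placeholder: would walk the DBC response FIFO and return true if any
   * responses were processed.
   */
  static bool example_drain_response_fifo(void *dbc)
  {
          return false;
  }

  static irqreturn_t example_dbc_irq(int irq, void *data)
  {
          /* Keep the interrupt controller from storming the host CPU. */
          disable_irq_nosync(irq);
          return IRQ_WAKE_THREAD;
  }

  static irqreturn_t example_dbc_irq_thread(int irq, void *data)
  {
          /* Drain, then give the workload a "last chance" to produce more
           * responses before re-arming the line.
           */
          while (example_drain_response_fifo(data))
                  usleep_range(100, 200);

          enable_irq(irq);
          return IRQ_HANDLED;
  }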
Neural Network Control (NNC) Protocol
=====================================
The implementation of NNC is split between the KMD (QAIC) and the UMD. In
general, QAIC understands how to encode/decode the NNC wire protocol and the
elements of the protocol which require kernel-space knowledge to process (for
example, mapping host memory to device IOVAs). QAIC understands the structure
of a message and all of the transactions. QAIC does not understand commands
(the payload of a passthrough transaction).
QAIC handles and enforces the required little endianness and 64-bit alignment,
to the degree that it can. Since QAIC does not know the contents of a
passthrough transaction, it relies on the UMD to satisfy the requirements.
The terminate transaction is of particular use to QAIC. QAIC is not aware of
the resources that are loaded onto a device since the majority of that activity
occurs within NNC commands. As a result, QAIC does not have the means to
roll back userspace activity. To ensure that a userspace client's resources
are fully released in the case of a process crash or a bug, QAIC uses the
terminate command to let QSM know when a user has gone away and that its
resources can be released.
QSM can report a version number of the NNC protocol it supports. This is in the
form of a Major number and a Minor number.
Major number updates indicate changes to the NNC protocol which impact the
message format, or transactions (impacts QAIC).
Minor number updates indicate changes to the NNC protocol which impact the
commands (does not impact QAIC).
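
For illustration only, a KMD-side check of this scheme might look like the
sketch below. get_cntl_version() is the helper declared in qaic.h later in this
series; the policy of failing on a major mismatch and the
EXAMPLE_SUPPORTED_NNC_MAJOR constant are assumptions::

  #include "qaic.h"

  #define EXAMPLE_SUPPORTED_NNC_MAJOR 5   /* made-up value */

  static int example_check_nnc_version(struct qaic_device *qdev,
                                       struct qaic_user *usr)
  {
          u16 major, minor;
          int ret;

          ret = get_cntl_version(qdev, usr, &major, &minor);
          if (ret)
                  return ret;

          /* Assumed policy: a major mismatch changes the message format or
           * transactions, which QAIC must parse, so fail. Minor bumps only
           * affect commands, which QAIC does not interpret.
           */
          if (major != EXAMPLE_SUPPORTED_NNC_MAJOR)
                  return -EINVAL;

          return 0;
  }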
uAPI
====
QAIC defines a number of driver-specific IOCTLs as part of the userspace API.
This section describes those APIs.
DRM_IOCTL_QAIC_MANAGE
This IOCTL allows userspace to send an NNC request to the QSM. The call will
block until a response is received, or the request has timed out.
DRM_IOCTL_QAIC_CREATE_BO
This IOCTL allows userspace to allocate a buffer object (BO) which can send
or receive data from a workload. The call will return a GEM handle that
represents the allocated buffer. The BO is not usable until it has been
sliced (see DRM_IOCTL_QAIC_ATTACH_SLICE_BO).
DRM_IOCTL_QAIC_MMAP_BO
This IOCTL allows userspace to prepare an allocated BO to be mmap'd into the
userspace process.
DRM_IOCTL_QAIC_ATTACH_SLICE_BO
This IOCTL allows userspace to slice a BO in preparation for sending the BO
to the device. Slicing is the operation of describing what portions of a BO
get sent where to a workload. This requires a set of DMA transfers for the
DMA Bridge, and as such, locks the BO to a specific DBC.
DRM_IOCTL_QAIC_EXECUTE_BO
This IOCTL allows userspace to submit a set of sliced BOs to the device. The
call is non-blocking. Success only indicates that the BOs have been queued
to the device, but does not guarantee they have been executed.
DRM_IOCTL_QAIC_PARTIAL_EXECUTE_BO
This IOCTL operates like DRM_IOCTL_QAIC_EXECUTE_BO, but it allows userspace
to shrink the BOs sent to the device for this specific call. If a BO
typically has N inputs, but only a subset of those is available, this IOCTL
allows userspace to indicate that only the first M bytes of the BO should be
sent to the device to minimize data transfer overhead. This IOCTL dynamically
recomputes the slicing, and therefore has some processing overhead before the
BOs can be queued to the device.
DRM_IOCTL_QAIC_WAIT_BO
This IOCTL allows userspace to determine when a particular BO has been
processed by the device. The call will block until either the BO has been
processed and can be re-queued to the device, or a timeout occurs.
DRM_IOCTL_QAIC_PERF_STATS_BO
This IOCTL allows userspace to collect performance statistics on the most
recent execution of a BO. This allows userspace to construct an end to end
timeline of the BO processing for a performance analysis.
DRM_IOCTL_QAIC_PART_DEV
This IOCTL allows userspace to request a duplicate "shadow device". This extra
accelN device is associated with a specific partition of resources on the
AIC100 device and can be used for limiting a process to some subset of
resources.
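
A hedged sketch of the typical userspace sequence built from these IOCTLs
follows. The ioctl names are the ones above; the struct names and fields
reflect one reading of include/uapi/drm/qaic_accel.h and should be checked
against that header, the accel node path varies per system, and the
slice/execute/wait arguments are summarized in comments rather than guessed::

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <drm/qaic_accel.h>      /* uAPI header added by this series */

  static int example_run_bo_once(int fd)
  {
          struct qaic_create_bo create = { .size = 4096 };
          struct qaic_mmap_bo map = { 0 };

          /* 1. Allocate a BO; the returned GEM handle identifies it. */
          if (ioctl(fd, DRM_IOCTL_QAIC_CREATE_BO, &create))
                  return -1;

          /* 2. Prepare the BO for mmap() into this process. */
          map.handle = create.handle;
          if (ioctl(fd, DRM_IOCTL_QAIC_MMAP_BO, &map))
                  return -1;

          /*
           * 3. DRM_IOCTL_QAIC_ATTACH_SLICE_BO - describe the slicing and
           *    bind the BO to one DBC (workload specific, elided here).
           * 4. DRM_IOCTL_QAIC_EXECUTE_BO     - queue the sliced BO
           *    (non-blocking).
           * 5. DRM_IOCTL_QAIC_WAIT_BO        - block until the BO can be
           *    re-queued.
           * 6. DRM_IOCTL_QAIC_PERF_STATS_BO  - optionally fetch timing for
           *    the last run.
           */
          return 0;
  }

  /* Usage: int fd = open("/dev/accel/accel0", O_RDWR); example_run_bo_once(fd); */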
Userspace Client Isolation
==========================
AIC100 supports multiple clients. Multiple DBCs can be consumed by a single
client, and multiple clients can each consume one or more DBCs. Workloads
may contain sensitive information therefore only the client that owns the
workload should be allowed to interface with the DBC.
Clients are identified by the instance associated with their open(). A client
may only use memory they allocate, and DBCs that are assigned to their
workloads. Attempts to access resources assigned to other clients will be
rejected.
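
As a hedged illustration of that rule, an ioctl path could refuse a DBC whose
recorded owner is not the calling client. The qdev->num_dbc, qdev->dbc and
dbc->usr fields below are the ones from qaic.h in this series; the helper
itself is invented::

  #include "qaic.h"

  /* Illustrative only, not a function from the driver. */
  static bool example_usr_owns_dbc(struct qaic_device *qdev, u32 dbc_id,
                                   struct qaic_user *usr)
  {
          if (dbc_id >= qdev->num_dbc)
                  return false;

          /* Each DBC records the qaic_user that opened it; any other
           * client is rejected.
           */
          return qdev->dbc[dbc_id].usr == usr;
  }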
Module parameters
=================
QAIC supports the following module parameters:
**datapath_polling (bool)**
Configures QAIC to use a polling thread for datapath events instead of relying
on the device interrupts. Useful for platforms with broken multi-MSI. Must be
set at QAIC driver initialization. Default is 0 (off).
**mhi_timeout_ms (unsigned int)**
Sets the timeout value for MHI operations in milliseconds (ms). Must be set
at the time the driver detects a device. Default is 2000 (2 seconds).
**control_resp_timeout_s (unsigned int)**
Sets the timeout value for QSM responses to NNC messages in seconds (s). Must
be set at the time the driver is sending a request to QSM. Default is 60 (one
minute).
**wait_exec_default_timeout_ms (unsigned int)**
Sets the default timeout for the wait_exec ioctl in milliseconds (ms). Must be
set prior to the wait_exec ioctl call. A value specified in the ioctl call
overrides this for that call. Default is 5000 (5 seconds).
**datapath_poll_interval_us (unsigned int)**
Sets the polling interval in microseconds (us) when datapath polling is active.
Takes effect at the next polling interval. Default is 100 (100 us).
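
When datapath_polling is set, interrupts are replaced by a self-rescheduling
work item per DBC. A minimal sketch of such a loop is below; the poll_work and
irq fields and dbc_irq_threaded_fn() come from qaic.h in this series, while
passing the DBC as the handler's data pointer and reading
datapath_poll_interval_us directly are assumptions::

  #include <linux/delay.h>
  #include <linux/workqueue.h>
  #include "qaic.h"

  static unsigned int datapath_poll_interval_us = 100;   /* module param */

  static void example_poll_work(struct work_struct *work)
  {
          struct dma_bridge_chan *dbc =
                  container_of(work, struct dma_bridge_chan, poll_work);

          /* Act as if the DBC interrupt had fired (assumes the threaded
           * handler takes the DBC as its data pointer)...
           */
          dbc_irq_threaded_fn(dbc->irq, dbc);

          /* ...then run again after the configured polling interval. */
          usleep_range(datapath_poll_interval_us,
                       2 * datapath_poll_interval_us);
          schedule_work(&dbc->poll_work);
  }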
......@@ -17,7 +17,9 @@ properties:
const: elida,kd35t133
reg: true
backlight: true
port: true
reset-gpios: true
rotation: true
iovcc-supply:
description: regulator that supplies the iovcc voltage
vdd-supply:
......@@ -27,6 +29,7 @@ required:
- compatible
- reg
- backlight
- port
- iovcc-supply
- vdd-supply
......@@ -43,6 +46,12 @@ examples:
backlight = <&backlight>;
iovcc-supply = <&vcc_1v8>;
vdd-supply = <&vcc3v3_lcd>;
port {
mipi_in_panel: endpoint {
remote-endpoint = <&mipi_out_panel>;
};
};
};
};
......
......@@ -26,6 +26,7 @@ properties:
dvdd-supply:
description: 3v3 digital regulator
port: true
reset-gpios: true
backlight: true
......@@ -35,6 +36,7 @@ required:
- reg
- avdd-supply
- dvdd-supply
- port
additionalProperties: false
......@@ -53,5 +55,11 @@ examples:
dvdd-supply = <&reg_dldo2>;
reset-gpios = <&pio 3 24 GPIO_ACTIVE_HIGH>; /* LCD-RST: PD24 */
backlight = <&backlight>;
port {
mipi_in_panel: endpoint {
remote-endpoint = <&mipi_out_panel>;
};
};
};
};
......@@ -17,29 +17,29 @@ description: |
The parameters are defined as seen in the following illustration.
+----------+-------------------------------------+----------+-------+
| | ^ | | |
| | |vback_porch | | |
| | v | | |
+----------#######################################----------+-------+
| # ^ # | |
| # | # | |
| hback # | # hfront | hsync |
| porch # | hactive # porch | len |
|<-------->#<-------+--------------------------->#<-------->|<----->|
| # | # | |
| # |vactive # | |
| # | # | |
| # v # | |
+----------#######################################----------+-------+
| | ^ | | |
| | |vfront_porch | | |
| | v | | |
+----------+-------------------------------------+----------+-------+
| | ^ | | |
| | |vsync_len | | |
| | v | | |
+----------+-------------------------------------+----------+-------+
+-------+----------+-------------------------------------+----------+
| | | ^ | |
| | | |vsync_len | |
| | | v | |
+-------+----------+-------------------------------------+----------+
| | | ^ | |
| | | |vback_porch | |
| | | v | |
+-------+----------#######################################----------+
| | # ^ # |
| | # | # |
| hsync | hback # | # hfront |
| len | porch # | hactive # porch |
|<----->|<-------->#<-------+--------------------------->#<-------->|
| | # | # |
| | # |vactive # |
| | # | # |
| | # v # |
+-------+----------#######################################----------+
| | | ^ | |
| | | |vfront_porch | |
| | | v | |
+-------+----------+-------------------------------------+----------+
The following is the panel timings shown with time on the x-axis.
......
......@@ -42,7 +42,9 @@ properties:
IOVCC-supply:
description: I/O system regulator
port: true
reset-gpios: true
rotation: true
backlight: true
......@@ -51,6 +53,7 @@ required:
- reg
- VCC-supply
- IOVCC-supply
- port
- reset-gpios
additionalProperties: false
......@@ -70,5 +73,11 @@ examples:
IOVCC-supply = <&reg_dldo2>;
reset-gpios = <&pio 3 24 GPIO_ACTIVE_HIGH>; /* LCD-RST: PD24 */
backlight = <&backlight>;
port {
mipi_in_panel: endpoint {
remote-endpoint = <&mipi_out_panel>;
};
};
};
};
......@@ -26,6 +26,10 @@ properties:
spi-cpha: true
spi-cpol: true
dc-gpios:
maxItems: 1
description: DCX pin, Display data/command selection pin in parallel interface
required:
- compatible
- reg
......
......@@ -17,6 +17,7 @@ properties:
const: xinpeng,xpp055c272
reg: true
backlight: true
port: true
reset-gpios: true
iovcc-supply:
description: regulator that supplies the iovcc voltage
......@@ -27,6 +28,7 @@ required:
- compatible
- reg
- backlight
- port
- iovcc-supply
- vci-supply
......@@ -44,6 +46,12 @@ examples:
backlight = <&backlight>;
iovcc-supply = <&vcc_1v8>;
vci-supply = <&vcc3v3_lcd>;
port {
mipi_in_panel: endpoint {
remote-endpoint = <&mipi_out_panel>;
};
};
};
};
......
......@@ -17265,6 +17265,16 @@ F: Documentation/devicetree/bindings/clock/qcom,*
F: drivers/clk/qcom/
F: include/dt-bindings/clock/qcom,*
QUALCOMM CLOUD AI (QAIC) DRIVER
M: Jeffrey Hugo <quic_jhugo@quicinc.com>
L: linux-arm-msm@vger.kernel.org
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/accel/qaic/
F: drivers/accel/qaic/
F: include/uapi/drm/qaic_accel.h
QUALCOMM CORE POWER REDUCTION (CPR) AVS DRIVER
M: Bjorn Andersson <andersson@kernel.org>
M: Konrad Dybcio <konrad.dybcio@linaro.org>
......
......@@ -26,5 +26,6 @@ menuconfig DRM_ACCEL
source "drivers/accel/habanalabs/Kconfig"
source "drivers/accel/ivpu/Kconfig"
source "drivers/accel/qaic/Kconfig"
endif
......@@ -2,3 +2,4 @@
obj-$(CONFIG_DRM_ACCEL_HABANALABS) += habanalabs/
obj-$(CONFIG_DRM_ACCEL_IVPU) += ivpu/
obj-$(CONFIG_DRM_ACCEL_QAIC) += qaic/
......@@ -433,6 +433,10 @@ static int ivpu_pci_init(struct ivpu_device *vdev)
/* Clear any pending errors */
pcie_capability_clear_word(pdev, PCI_EXP_DEVSTA, 0x3f);
/* VPU MTL does not require the PCI spec 10 ms D3hot delay */
if (ivpu_is_mtl(vdev))
pdev->d3hot_delay = 0;
ret = pcim_enable_device(pdev);
if (ret) {
ivpu_err(vdev, "Failed to enable PCI device: %d\n", ret);
......
# SPDX-License-Identifier: GPL-2.0-only
#
# Qualcomm Cloud AI accelerators driver
#
config DRM_ACCEL_QAIC
tristate "Qualcomm Cloud AI accelerators"
depends on DRM_ACCEL
depends on PCI && HAS_IOMEM
depends on MHI_BUS
depends on MMU
select CRC32
help
Enables driver for Qualcomm's Cloud AI accelerator PCIe cards that are
designed to accelerate Deep Learning inference workloads.
The driver manages the PCIe devices and provides an IOCTL interface
for users to submit workloads to the devices.
If unsure, say N.
To compile this driver as a module, choose M here: the
module will be called qaic.
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for Qualcomm Cloud AI accelerators driver
#
obj-$(CONFIG_DRM_ACCEL_QAIC) := qaic.o
qaic-y := \
mhi_controller.o \
mhi_qaic_ctrl.o \
qaic_control.o \
qaic_data.o \
qaic_drv.o
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
* Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef MHICONTROLLERQAIC_H_
#define MHICONTROLLERQAIC_H_
struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, void __iomem *mhi_bar,
int mhi_irq);
void qaic_mhi_free_controller(struct mhi_controller *mhi_cntrl, bool link_up);
void qaic_mhi_start_reset(struct mhi_controller *mhi_cntrl);
void qaic_mhi_reset_done(struct mhi_controller *mhi_cntrl);
#endif /* MHICONTROLLERQAIC_H_ */
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef __MHI_QAIC_CTRL_H__
#define __MHI_QAIC_CTRL_H__
int mhi_qaic_ctrl_init(void);
void mhi_qaic_ctrl_deinit(void);
#endif /* __MHI_QAIC_CTRL_H__ */
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _QAIC_H_
#define _QAIC_H_
#include <linux/interrupt.h>
#include <linux/kref.h>
#include <linux/mhi.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/srcu.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>
#define QAIC_DBC_BASE SZ_128K
#define QAIC_DBC_SIZE SZ_4K
#define QAIC_NO_PARTITION -1
#define QAIC_DBC_OFF(i) ((i) * QAIC_DBC_SIZE + QAIC_DBC_BASE)
#define to_qaic_bo(obj) container_of(obj, struct qaic_bo, base)
extern bool datapath_polling;
struct qaic_user {
/* Uniquely identifies this user for the device */
int handle;
struct kref ref_count;
/* Char device opened by this user */
struct qaic_drm_device *qddev;
/* Node in list of users that opened this drm device */
struct list_head node;
/* SRCU used to synchronize this user during cleanup */
struct srcu_struct qddev_lock;
atomic_t chunk_id;
};
struct dma_bridge_chan {
/* Pointer to device struct maintained by driver */
struct qaic_device *qdev;
/* ID of this DMA bridge channel (DBC) */
unsigned int id;
/* Synchronizes access to xfer_list */
spinlock_t xfer_lock;
/* Base address of request queue */
void *req_q_base;
/* Base address of response queue */
void *rsp_q_base;
/*
* Base bus address of request queue. Response queue bus address can be
* calculated by adding request queue size to this variable
*/
dma_addr_t dma_addr;
/* Total size of request and response queue in bytes */
u32 total_size;
/* Capacity of request/response queue */
u32 nelem;
/* The user that opened this DBC */
struct qaic_user *usr;
/*
* Request ID of the next memory handle that goes in the request queue. One
* memory handle can enqueue more than one request element; all requests
* that belong to the same memory handle share the same request ID
*/
u16 next_req_id;
/* true: DBC is in use; false: DBC not in use */
bool in_use;
/*
* Base address of device registers. Used to read/write request and
* response queue's head and tail pointer of this DBC.
*/
void __iomem *dbc_base;
/* Head of list where each node is a memory handle queued in request queue */
struct list_head xfer_list;
/* Synchronizes DBC readers during cleanup */
struct srcu_struct ch_lock;
/*
* When this DBC is released, any thread waiting on this wait queue is
* woken up
*/
wait_queue_head_t dbc_release;
/* Head of list where each node is a bo associated with this DBC */
struct list_head bo_lists;
/* The irq line for this DBC. Used for polling */
unsigned int irq;
/* Polling work item to simulate interrupts */
struct work_struct poll_work;
};
struct qaic_device {
/* Pointer to base PCI device struct of our physical device */
struct pci_dev *pdev;
/* Req. ID of request that will be queued next in MHI control device */
u32 next_seq_num;
/* Base address of bar 0 */
void __iomem *bar_0;
/* Base address of bar 2 */
void __iomem *bar_2;
/* Controller structure for MHI devices */
struct mhi_controller *mhi_cntrl;
/* MHI control channel device */
struct mhi_device *cntl_ch;
/* List of requests queued in MHI control device */
struct list_head cntl_xfer_list;
/* Synchronizes MHI control device transactions and its xfer list */
struct mutex cntl_mutex;
/* Array of DBC struct of this device */
struct dma_bridge_chan *dbc;
/* Work queue for tasks related to MHI control device */
struct workqueue_struct *cntl_wq;
/* Synchronizes all the users of device during cleanup */
struct srcu_struct dev_lock;
/* true: Device under reset; false: Device not under reset */
bool in_reset;
/*
* true: A tx MHI transaction has failed and an rx buffer is still queued
* in the control device. Such a buffer is considered a lost rx buffer.
* false: No rx buffer is lost in the control device
*/
bool cntl_lost_buf;
/* Maximum number of DBCs supported by this device */
u32 num_dbc;
/* Reference to the drm_device for this device when it is created */
struct qaic_drm_device *qddev;
/* Generate the CRC of a control message */
u32 (*gen_crc)(void *msg);
/* Validate the CRC of a control message */
bool (*valid_crc)(void *msg);
};
struct qaic_drm_device {
/* Pointer to the root device struct driven by this driver */
struct qaic_device *qdev;
/*
* The physical device can be partitioned into a number of logical devices,
* and each logical device is given a partition id. This member stores
* that id. QAIC_NO_PARTITION is a sentinel used to mark that this drm
* device represents the whole physical device
*/
s32 partition_id;
/* Pointer to the drm device struct of this drm device */
struct drm_device *ddev;
/* Head in list of users who have opened this drm device */
struct list_head users;
/* Synchronizes access to users list */
struct mutex users_mutex;
};
struct qaic_bo {
struct drm_gem_object base;
/* Scatter/gather table for allocated/imported BO */
struct sg_table *sgt;
/* BO size requested by user. GEM object might be bigger in size. */
u64 size;
/* Head in list of slices of this BO */
struct list_head slices;
/* Total nents, for all slices of this BO */
int total_slice_nents;
/*
* Direction of transfer. It can take only two values: DMA_TO_DEVICE and
* DMA_FROM_DEVICE.
*/
int dir;
/* Pointer to the DBC which operates on this BO */
struct dma_bridge_chan *dbc;
/* Number of slices that belong to this buffer */
u32 nr_slice;
/* Number of slices that have been transferred by the DMA engine */
u32 nr_slice_xfer_done;
/* true = BO is queued for execution, false = BO is not queued */
bool queued;
/*
* If true then user has attached slicing information to this BO by
* calling DRM_IOCTL_QAIC_ATTACH_SLICE_BO ioctl.
*/
bool sliced;
/* Request ID of this BO if it is queued for execution */
u16 req_id;
/* Handle assigned to this BO */
u32 handle;
/* Wait on this for completion of DMA transfer of this BO */
struct completion xfer_done;
/*
* Node in linked list where head is dbc->xfer_list.
* This linked list contains BOs that are queued for DMA transfer.
*/
struct list_head xfer_list;
/*
* Node in linked list where head is dbc->bo_lists.
* This linked list contains BOs that are associated with the DBC it is
* linked to.
*/
struct list_head bo_list;
struct {
/*
* Latest timestamp(ns) at which kernel received a request to
* execute this BO
*/
u64 req_received_ts;
/*
* Latest timestamp(ns) at which kernel enqueued requests of
* this BO for execution in DMA queue
*/
u64 req_submit_ts;
/*
* Latest timestamp(ns) at which kernel received a completion
* interrupt for requests of this BO
*/
u64 req_processed_ts;
/*
* Number of elements already enqueued in DMA queue before
* enqueuing requests of this BO
*/
u32 queue_level_before;
} perf_stats;
};
struct bo_slice {
/* Mapped pages */
struct sg_table *sgt;
/* Number of requests required to queue in DMA queue */
int nents;
/* See enum dma_data_direction */
int dir;
/* Actual requests that will be copied in DMA queue */
struct dbc_req *reqs;
struct kref ref_count;
/* true: No DMA transfer required */
bool no_xfer;
/* Pointer to the parent BO handle */
struct qaic_bo *bo;
/* Node in list of slices maintained by parent BO */
struct list_head slice;
/* Size of this slice in bytes */
u64 size;
/* Offset of this slice in buffer */
u64 offset;
};
int get_dbc_req_elem_size(void);
int get_dbc_rsp_elem_size(void);
int get_cntl_version(struct qaic_device *qdev, struct qaic_user *usr, u16 *major, u16 *minor);
int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
void qaic_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result);
void qaic_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result);
int qaic_control_open(struct qaic_device *qdev);
void qaic_control_close(struct qaic_device *qdev);
void qaic_release_usr(struct qaic_device *qdev, struct qaic_user *usr);
irqreturn_t dbc_irq_threaded_fn(int irq, void *data);
irqreturn_t dbc_irq_handler(int irq, void *data);
int disable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr);
void enable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr);
void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id);
void release_dbc(struct qaic_device *qdev, u32 dbc_id);
void wake_all_cntl(struct qaic_device *qdev);
void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset);
struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf);
int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
int qaic_mmap_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
int qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
int qaic_partial_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
void irq_polling_work(struct work_struct *work);
#endif /* _QAIC_H_ */
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
......@@ -89,27 +89,13 @@ static const struct pci_device_id ast_pciidlist[] = {
MODULE_DEVICE_TABLE(pci, ast_pciidlist);
static int ast_remove_conflicting_framebuffers(struct pci_dev *pdev)
{
bool primary = false;
resource_size_t base, size;
base = pci_resource_start(pdev, 0);
size = pci_resource_len(pdev, 0);
#ifdef CONFIG_X86
primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
#endif
return drm_aperture_remove_conflicting_framebuffers(base, size, primary, &ast_driver);
}
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct ast_device *ast;
struct drm_device *dev;
int ret;
ret = ast_remove_conflicting_framebuffers(pdev);
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &ast_driver);
if (ret)
return ret;
......
......@@ -84,10 +84,16 @@ struct fsl_ldb {
struct drm_bridge *panel_bridge;
struct clk *clk;
struct regmap *regmap;
bool lvds_dual_link;
const struct fsl_ldb_devdata *devdata;
bool ch0_enabled;
bool ch1_enabled;
};
static bool fsl_ldb_is_dual(const struct fsl_ldb *fsl_ldb)
{
return (fsl_ldb->ch0_enabled && fsl_ldb->ch1_enabled);
}
static inline struct fsl_ldb *to_fsl_ldb(struct drm_bridge *bridge)
{
return container_of(bridge, struct fsl_ldb, bridge);
......@@ -95,7 +101,7 @@ static inline struct fsl_ldb *to_fsl_ldb(struct drm_bridge *bridge)
static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
{
if (fsl_ldb->lvds_dual_link)
if (fsl_ldb_is_dual(fsl_ldb))
return clock * 3500;
else
return clock * 7000;
......@@ -170,35 +176,28 @@ static void fsl_ldb_atomic_enable(struct drm_bridge *bridge,
configured_link_freq = clk_get_rate(fsl_ldb->clk);
if (configured_link_freq != requested_link_freq)
dev_warn(fsl_ldb->dev, "Configured LDB clock (%lu Hz) does not match requested LVDS clock: %lu Hz",
dev_warn(fsl_ldb->dev, "Configured LDB clock (%lu Hz) does not match requested LVDS clock: %lu Hz\n",
configured_link_freq,
requested_link_freq);
clk_prepare_enable(fsl_ldb->clk);
/* Program LDB_CTRL */
reg = LDB_CTRL_CH0_ENABLE;
reg = (fsl_ldb->ch0_enabled ? LDB_CTRL_CH0_ENABLE : 0) |
(fsl_ldb->ch1_enabled ? LDB_CTRL_CH1_ENABLE : 0) |
(fsl_ldb_is_dual(fsl_ldb) ? LDB_CTRL_SPLIT_MODE : 0);
if (fsl_ldb->lvds_dual_link)
reg |= LDB_CTRL_CH1_ENABLE | LDB_CTRL_SPLIT_MODE;
if (lvds_format_24bpp)
reg |= (fsl_ldb->ch0_enabled ? LDB_CTRL_CH0_DATA_WIDTH : 0) |
(fsl_ldb->ch1_enabled ? LDB_CTRL_CH1_DATA_WIDTH : 0);
if (lvds_format_24bpp) {
reg |= LDB_CTRL_CH0_DATA_WIDTH;
if (fsl_ldb->lvds_dual_link)
reg |= LDB_CTRL_CH1_DATA_WIDTH;
}
if (lvds_format_jeida)
reg |= (fsl_ldb->ch0_enabled ? LDB_CTRL_CH0_BIT_MAPPING : 0) |
(fsl_ldb->ch1_enabled ? LDB_CTRL_CH1_BIT_MAPPING : 0);
if (lvds_format_jeida) {
reg |= LDB_CTRL_CH0_BIT_MAPPING;
if (fsl_ldb->lvds_dual_link)
reg |= LDB_CTRL_CH1_BIT_MAPPING;
}
if (mode->flags & DRM_MODE_FLAG_PVSYNC) {
reg |= LDB_CTRL_DI0_VSYNC_POLARITY;
if (fsl_ldb->lvds_dual_link)
reg |= LDB_CTRL_DI1_VSYNC_POLARITY;
}
if (mode->flags & DRM_MODE_FLAG_PVSYNC)
reg |= (fsl_ldb->ch0_enabled ? LDB_CTRL_DI0_VSYNC_POLARITY : 0) |
(fsl_ldb->ch1_enabled ? LDB_CTRL_DI1_VSYNC_POLARITY : 0);
regmap_write(fsl_ldb->regmap, fsl_ldb->devdata->ldb_ctrl, reg);
......@@ -210,9 +209,8 @@ static void fsl_ldb_atomic_enable(struct drm_bridge *bridge,
/* Wait for VBG to stabilize. */
usleep_range(15, 20);
reg |= LVDS_CTRL_CH0_EN;
if (fsl_ldb->lvds_dual_link)
reg |= LVDS_CTRL_CH1_EN;
reg |= (fsl_ldb->ch0_enabled ? LVDS_CTRL_CH0_EN : 0) |
(fsl_ldb->ch1_enabled ? LVDS_CTRL_CH1_EN : 0);
regmap_write(fsl_ldb->regmap, fsl_ldb->devdata->lvds_ctrl, reg);
}
......@@ -265,7 +263,7 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
{
struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
if (mode->clock > (fsl_ldb->lvds_dual_link ? 160000 : 80000))
if (mode->clock > (fsl_ldb_is_dual(fsl_ldb) ? 160000 : 80000))
return MODE_CLOCK_HIGH;
return MODE_OK;
......@@ -286,7 +284,7 @@ static int fsl_ldb_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *panel_node;
struct device_node *port1, *port2;
struct device_node *remote1, *remote2;
struct drm_panel *panel;
struct fsl_ldb *fsl_ldb;
int dual_link;
......@@ -311,10 +309,23 @@ static int fsl_ldb_probe(struct platform_device *pdev)
if (IS_ERR(fsl_ldb->regmap))
return PTR_ERR(fsl_ldb->regmap);
/* Locate the panel DT node. */
panel_node = of_graph_get_remote_node(dev->of_node, 1, 0);
if (!panel_node)
return -ENXIO;
/* Locate the remote ports and the panel node */
remote1 = of_graph_get_remote_node(dev->of_node, 1, 0);
remote2 = of_graph_get_remote_node(dev->of_node, 2, 0);
fsl_ldb->ch0_enabled = (remote1 != NULL);
fsl_ldb->ch1_enabled = (remote2 != NULL);
panel_node = of_node_get(remote1 ? remote1 : remote2);
of_node_put(remote1);
of_node_put(remote2);
if (!fsl_ldb->ch0_enabled && !fsl_ldb->ch1_enabled) {
of_node_put(panel_node);
return dev_err_probe(dev, -ENXIO, "No panel node found");
}
dev_dbg(dev, "Using %s\n",
fsl_ldb_is_dual(fsl_ldb) ? "dual-link mode" :
fsl_ldb->ch0_enabled ? "channel 0" : "channel 1");
panel = of_drm_find_panel(panel_node);
of_node_put(panel_node);
......@@ -325,20 +336,26 @@ static int fsl_ldb_probe(struct platform_device *pdev)
if (IS_ERR(fsl_ldb->panel_bridge))
return PTR_ERR(fsl_ldb->panel_bridge);
/* Determine whether this is dual-link configuration */
port1 = of_graph_get_port_by_id(dev->of_node, 1);
port2 = of_graph_get_port_by_id(dev->of_node, 2);
dual_link = drm_of_lvds_get_dual_link_pixel_order(port1, port2);
of_node_put(port1);
of_node_put(port2);
if (dual_link == DRM_LVDS_DUAL_LINK_EVEN_ODD_PIXELS) {
dev_err(dev, "LVDS channel pixel swap not supported.\n");
return -EINVAL;
}
if (fsl_ldb_is_dual(fsl_ldb)) {
struct device_node *port1, *port2;
if (dual_link == DRM_LVDS_DUAL_LINK_ODD_EVEN_PIXELS)
fsl_ldb->lvds_dual_link = true;
port1 = of_graph_get_port_by_id(dev->of_node, 1);
port2 = of_graph_get_port_by_id(dev->of_node, 2);
dual_link = drm_of_lvds_get_dual_link_pixel_order(port1, port2);
of_node_put(port1);
of_node_put(port2);
if (dual_link < 0)
return dev_err_probe(dev, dual_link,
"Error getting dual link configuration\n");
/* Only DRM_LVDS_DUAL_LINK_ODD_EVEN_PIXELS is supported */
if (dual_link == DRM_LVDS_DUAL_LINK_EVEN_ODD_PIXELS) {
dev_err(dev, "LVDS channel pixel swap not supported.\n");
return -EINVAL;
}
}
platform_set_drvdata(pdev, fsl_ldb);
......
......@@ -504,7 +504,6 @@ static int lt8912_attach_dsi(struct lt8912 *lt)
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM |
MIPI_DSI_MODE_NO_EOT_PACKET;
......
......@@ -184,7 +184,7 @@ static int _ps8640_wait_hpd_asserted(struct ps8640 *ps_bridge, unsigned long wai
* actually connected to GPIO9).
*/
ret = regmap_read_poll_timeout(map, PAGE2_GPIO_H, status,
status & PS_GPIO9, wait_us / 10, wait_us);
status & PS_GPIO9, 20000, wait_us);
/*
* The first time we see HPD go high after a reset we delay an extra
......
......@@ -1426,9 +1426,9 @@ void dw_hdmi_set_high_tmds_clock_ratio(struct dw_hdmi *hdmi,
/* Control for TMDS Bit Period/TMDS Clock-Period Ratio */
if (dw_hdmi_support_scdc(hdmi, display)) {
if (mtmdsclock > HDMI14_MAX_TMDSCLK)
drm_scdc_set_high_tmds_clock_ratio(hdmi->ddc, 1);
drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 1);
else
drm_scdc_set_high_tmds_clock_ratio(hdmi->ddc, 0);
drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 0);
}
}
EXPORT_SYMBOL_GPL(dw_hdmi_set_high_tmds_clock_ratio);
......@@ -2116,7 +2116,7 @@ static void hdmi_av_composer(struct dw_hdmi *hdmi,
min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION));
/* Enabled Scrambling in the Sink */
drm_scdc_set_scrambling(hdmi->ddc, 1);
drm_scdc_set_scrambling(&hdmi->connector, 1);
/*
* To activate the scrambler feature, you must ensure
......@@ -2132,7 +2132,7 @@ static void hdmi_av_composer(struct dw_hdmi *hdmi,
hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL);
hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ,
HDMI_MC_SWRSTZ);
drm_scdc_set_scrambling(hdmi->ddc, 0);
drm_scdc_set_scrambling(&hdmi->connector, 0);
}
}
......
......@@ -1896,10 +1896,10 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
"failed to create dsi device\n");
tc->dsi = dsi;
dsi->lanes = dsi_lanes;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_CLOCK_NON_CONTINUOUS;
ret = mipi_dsi_attach(dsi);
if (ret < 0) {
......
......@@ -642,7 +642,9 @@ static int sn65dsi83_host_attach(struct sn65dsi83 *ctx)
dsi->lanes = dsi_lanes;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP |
MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET;
ret = devm_mipi_dsi_attach(dev, dsi);
if (ret < 0) {
......@@ -698,8 +700,10 @@ static int sn65dsi83_probe(struct i2c_client *client)
drm_bridge_add(&ctx->bridge);
ret = sn65dsi83_host_attach(ctx);
if (ret)
if (ret) {
dev_err_probe(dev, ret, "failed to attach DSI host\n");
goto err_remove_bridge;
}
return 0;
......
......@@ -363,7 +363,7 @@ static int __maybe_unused ti_sn65dsi86_resume(struct device *dev)
/* td2: min 100 us after regulators before enabling the GPIO */
usleep_range(100, 110);
gpiod_set_value(pdata->enable_gpio, 1);
gpiod_set_value_cansleep(pdata->enable_gpio, 1);
/*
* If we have a reference clock we can enable communication w/ the
......@@ -386,7 +386,7 @@ static int __maybe_unused ti_sn65dsi86_suspend(struct device *dev)
if (pdata->refclk)
ti_sn65dsi86_disable_comms(pdata);
gpiod_set_value(pdata->enable_gpio, 0);
gpiod_set_value_cansleep(pdata->enable_gpio, 0);
ret = regulator_bulk_disable(SN_REGULATOR_SUPPLY_NUM, pdata->supplies);
if (ret)
......
......@@ -26,6 +26,8 @@
#include <linux/delay.h>
#include <drm/display/drm_scdc_helper.h>
#include <drm/drm_connector.h>
#include <drm/drm_device.h>
#include <drm/drm_print.h>
/**
......@@ -140,7 +142,7 @@ EXPORT_SYMBOL(drm_scdc_write);
/**
* drm_scdc_get_scrambling_status - what is status of scrambling?
* @adapter: I2C adapter for DDC channel
* @connector: connector
*
* Reads the scrambler status over SCDC, and checks the
* scrambling status.
......@@ -148,14 +150,16 @@ EXPORT_SYMBOL(drm_scdc_write);
* Returns:
* True if the scrambling is enabled, false otherwise.
*/
bool drm_scdc_get_scrambling_status(struct i2c_adapter *adapter)
bool drm_scdc_get_scrambling_status(struct drm_connector *connector)
{
u8 status;
int ret;
ret = drm_scdc_readb(adapter, SCDC_SCRAMBLER_STATUS, &status);
ret = drm_scdc_readb(connector->ddc, SCDC_SCRAMBLER_STATUS, &status);
if (ret < 0) {
DRM_DEBUG_KMS("Failed to read scrambling status: %d\n", ret);
drm_dbg_kms(connector->dev,
"[CONNECTOR:%d:%s] Failed to read scrambling status: %d\n",
connector->base.id, connector->name, ret);
return false;
}
......@@ -165,7 +169,7 @@ EXPORT_SYMBOL(drm_scdc_get_scrambling_status);
/**
* drm_scdc_set_scrambling - enable scrambling
* @adapter: I2C adapter for DDC channel
* @connector: connector
* @enable: bool to indicate if scrambling is to be enabled/disabled
*
* Writes the TMDS config register over SCDC channel, and:
......@@ -175,14 +179,17 @@ EXPORT_SYMBOL(drm_scdc_get_scrambling_status);
* Returns:
* True if scrambling is set/reset successfully, false otherwise.
*/
bool drm_scdc_set_scrambling(struct i2c_adapter *adapter, bool enable)
bool drm_scdc_set_scrambling(struct drm_connector *connector,
bool enable)
{
u8 config;
int ret;
ret = drm_scdc_readb(adapter, SCDC_TMDS_CONFIG, &config);
ret = drm_scdc_readb(connector->ddc, SCDC_TMDS_CONFIG, &config);
if (ret < 0) {
DRM_DEBUG_KMS("Failed to read TMDS config: %d\n", ret);
drm_dbg_kms(connector->dev,
"[CONNECTOR:%d:%s] Failed to read TMDS config: %d\n",
connector->base.id, connector->name, ret);
return false;
}
......@@ -191,9 +198,11 @@ bool drm_scdc_set_scrambling(struct i2c_adapter *adapter, bool enable)
else
config &= ~SCDC_SCRAMBLING_ENABLE;
ret = drm_scdc_writeb(adapter, SCDC_TMDS_CONFIG, config);
ret = drm_scdc_writeb(connector->ddc, SCDC_TMDS_CONFIG, config);
if (ret < 0) {
DRM_DEBUG_KMS("Failed to enable scrambling: %d\n", ret);
drm_dbg_kms(connector->dev,
"[CONNECTOR:%d:%s] Failed to enable scrambling: %d\n",
connector->base.id, connector->name, ret);
return false;
}
......@@ -203,7 +212,7 @@ EXPORT_SYMBOL(drm_scdc_set_scrambling);
/**
* drm_scdc_set_high_tmds_clock_ratio - set TMDS clock ratio
* @adapter: I2C adapter for DDC channel
* @connector: connector
* @set: set or reset the high clock ratio
*
*
......@@ -230,14 +239,17 @@ EXPORT_SYMBOL(drm_scdc_set_scrambling);
* Returns:
* True if write is successful, false otherwise.
*/
bool drm_scdc_set_high_tmds_clock_ratio(struct i2c_adapter *adapter, bool set)
bool drm_scdc_set_high_tmds_clock_ratio(struct drm_connector *connector,
bool set)
{
u8 config;
int ret;
ret = drm_scdc_readb(adapter, SCDC_TMDS_CONFIG, &config);
ret = drm_scdc_readb(connector->ddc, SCDC_TMDS_CONFIG, &config);
if (ret < 0) {
DRM_DEBUG_KMS("Failed to read TMDS config: %d\n", ret);
drm_dbg_kms(connector->dev,
"[CONNECTOR:%d:%s] Failed to read TMDS config: %d\n",
connector->base.id, connector->name, ret);
return false;
}
......@@ -246,9 +258,11 @@ bool drm_scdc_set_high_tmds_clock_ratio(struct i2c_adapter *adapter, bool set)
else
config &= ~SCDC_TMDS_BIT_CLOCK_RATIO_BY_40;
ret = drm_scdc_writeb(adapter, SCDC_TMDS_CONFIG, config);
ret = drm_scdc_writeb(connector->ddc, SCDC_TMDS_CONFIG, config);
if (ret < 0) {
DRM_DEBUG_KMS("Failed to set TMDS clock ratio: %d\n", ret);
drm_dbg_kms(connector->dev,
"[CONNECTOR:%d:%s] Failed to set TMDS clock ratio: %d\n",
connector->base.id, connector->name, ret);
return false;
}
......
......@@ -1528,6 +1528,12 @@ static void set_fence_deadline(struct drm_device *dev,
for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
ktime_t v;
if (drm_atomic_crtc_needs_modeset(new_crtc_state))
continue;
if (!new_crtc_state->active)
continue;
if (drm_crtc_next_vblank_start(crtc, &v))
continue;
......
This diff is collapsed.
......@@ -544,7 +544,8 @@ int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
* Optional pinning of buffers is handled at dma-buf attach and detach time in
* drm_gem_map_attach() and drm_gem_map_detach(). Backing storage itself is
* handled by drm_gem_map_dma_buf() and drm_gem_unmap_dma_buf(), which relies on
* &drm_gem_object_funcs.get_sg_table.
* &drm_gem_object_funcs.get_sg_table. If &drm_gem_object_funcs.get_sg_table is
* unimplemented, exports into another device are rejected.
*
* For kernel-internal access there's drm_gem_dmabuf_vmap() and
* drm_gem_dmabuf_vunmap(). Userspace mmap support is provided by
......@@ -583,6 +584,9 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
{
struct drm_gem_object *obj = dma_buf->priv;
if (!obj->funcs->get_sg_table)
return -ENOSYS;
return drm_gem_pin(obj);
}
EXPORT_SYMBOL(drm_gem_map_attach);
......
This diff is collapsed.
......@@ -3988,8 +3988,8 @@ static int intel_hdmi_reset_link(struct intel_encoder *encoder,
ret = drm_scdc_readb(adapter, SCDC_TMDS_CONFIG, &config);
if (ret < 0) {
drm_err(&dev_priv->drm, "Failed to read TMDS config: %d\n",
ret);
drm_err(&dev_priv->drm, "[CONNECTOR:%d:%s] Failed to read TMDS config: %d\n",
connector->base.base.id, connector->base.name, ret);
return 0;
}
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.