Commit 34dcc466 authored by Linus Torvalds

Merge tag 'mailbox-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox

Pull mailbox updates from Jassi Brar:

 - redo the omap driver from legacy to mailbox api

 - enable bufferless IPI for zynqmp

 - add mhu-v3 driver

 - convert from tasklet to BH workqueue

 - add qcom MSM8974 APCS compatible IDs

* tag 'mailbox-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox: (24 commits)
  dt-bindings: mailbox: qcom-ipcc: Document the SDX75 IPCC
  dt-bindings: mailbox: qcom: Add MSM8974 APCS compatible
  mailbox: Convert from tasklet to BH workqueue
  mailbox: mtk-cmdq: Fix pm_runtime_get_sync() warning in mbox shutdown
  mailbox: mtk-cmdq-mailbox: fix module autoloading
  mailbox: zynqmp: handle SGI for shared IPI
  mailbox: arm_mhuv3: Add driver
  dt-bindings: mailbox: arm,mhuv3: Add bindings
  mailbox: omap: Remove kernel FIFO message queuing
  mailbox: omap: Reverse FIFO busy check logic
  mailbox: omap: Remove mbox_chan_to_omap_mbox()
  mailbox: omap: Use mbox_controller channel list directly
  mailbox: omap: Use function local struct mbox_controller
  mailbox: omap: Merge mailbox child node setup loops
  mailbox: omap: Use devm_pm_runtime_enable() helper
  mailbox: omap: Remove device class
  mailbox: omap: Remove unneeded header omap-mailbox.h
  mailbox: omap: Move fifo size check to point of use
  mailbox: omap: Move omap_mbox_irq_t into driver
  mailbox: omap: Remove unused omap_mbox_request_channel() function
  ...
parents ab7b884a 10b98582
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mailbox/arm,mhuv3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: ARM MHUv3 Mailbox Controller

maintainers:
  - Sudeep Holla <sudeep.holla@arm.com>
  - Cristian Marussi <cristian.marussi@arm.com>
description: |
  The Arm Message Handling Unit (MHU) Version 3 is a mailbox controller that
  enables unidirectional communications with remote processors through various
  possible transport protocols.
  The controller can optionally support a varying number of extensions that, in
  turn, enable different kinds of transport to be used for communication.
  The number, type and characteristics of each supported extension can be
  discovered dynamically at runtime.

  Given the unidirectional nature of the controller, an MHUv3 mailbox controller
  is composed of an MHU Sender (MHUS) containing a PostBox (PBX) block and an
  MHU Receiver (MHUR) containing a MailBox (MBX) block, where

  PBX is used to
    - Configure the MHU
    - Send Transfers to the Receiver
    - Optionally receive acknowledgment of a Transfer from the Receiver

  MBX is used to
    - Configure the MHU
    - Receive Transfers from the Sender
    - Optionally acknowledge Transfers sent by the Sender

  Both PBX and MBX need to be present and defined in the DT description if you
  need to establish a bidirectional communication, since you will have to
  acquire two distinct unidirectional channels, one for each block.

  As a consequence, both blocks need to be represented separately and specified
  as distinct DT nodes in order to properly describe their resources.

  Note, though, that thanks to the runtime discoverability there is no need to
  identify the type of the blocks with distinct compatibles.

  Following are the possible MHUv3 extensions.

  - Doorbell Extension (DBE): DBE defines a type of channel called a Doorbell
    Channel (DBCH). A DBCH enables a single-bit Transfer to be sent from the
    Sender to the Receiver. The Transfer indicates that an event has occurred.
    When DBE is implemented, the number of DBCHs that an implementation of the
    MHU can support is between 1 and 128, numbered starting from 0 in ascending
    order and discoverable at run-time.
    Each DBCH contains 32 individual fields, referred to as flags, each of which
    can be used independently. It is possible for the Sender to send multiple
    Transfers at once using a single DBCH, so long as each Transfer uses
    a different flag in the DBCH.
    Optionally, data may be transmitted through an out-of-band shared memory
    region, wherein the MHU Doorbell is used strictly as an interrupt generation
    mechanism, but this is out of the scope of these bindings.

  - FastChannel Extension (FCE): FCE defines a type of channel called a Fast
    Channel (FCH). An FCH is intended for lower-overhead communication between
    Sender and Receiver at the expense of determinism. An FCH allows the Sender
    to update the channel value at any time, regardless of whether the previous
    value has been seen by the Receiver. When the Receiver reads the channel's
    content it gets the last value written to the channel.
    An FCH is lossy in nature, meaning that the Sender has no way of knowing
    if, or when, the Receiver will act on the Transfer.
    FCHs are expected to behave as RAM which generates interrupts when writes
    occur to locations within the RAM.
    When FCE is implemented, the number of FCHs that an implementation of the
    MHU can support is between 1 and 1024 if the FastChannel word-size is
    32 bits, or between 1 and 512 if the FastChannel word-size is 64 bits.
    FCHs are numbered from 0 in ascending order.
    Note that the number of FCHs and the word-size are implementation defined,
    not configurable but discoverable at run-time.
    Optionally, data may be transmitted through an out-of-band shared memory
    region, wherein the MHU FastChannel is used as an interrupt generation
    mechanism which also carries a pointer to such out-of-band data, but this
    is out of the scope of these bindings.

  - FIFO Extension (FE): FE defines a channel type called a FIFO Channel (FFCH).
    An FFCH allows a Sender to send
      - Multiple Transfers to the Receiver without having to wait for the
        previous Transfer to be acknowledged by the Receiver, as long as the
        FIFO has room for the Transfer.
      - Transfers which require the Receiver to provide acknowledgment.
      - Transfers which have in-band payload.
    In all cases, the data is guaranteed to be observed by the Receiver in the
    same order in which the Sender sent it.
    When FE is implemented, the number of FFCHs that an implementation of the
    MHU can support is between 1 and 64, numbered starting from 0 in ascending
    order. The number of FFCHs, their depth (same for all implemented FFCHs) and
    the access-granularity are implementation defined, not configurable but
    discoverable at run-time.
    Optionally, additional data may be transmitted through an out-of-band shared
    memory region, wherein the MHU FIFO is used to transmit, in order, a small
    part of the payload (like a header) and a reference to the shared memory
    area holding the remaining, bigger, chunk of the payload, but this is out of
    the scope of these bindings.
properties:
  compatible:
    const: arm,mhuv3

  reg:
    maxItems: 1

  interrupts:
    minItems: 1
    maxItems: 74

  interrupt-names:
    description: |
      The MHUv3 controller generates a number of events, some of which are used
      to generate interrupts; as a consequence it can expose a varying number of
      optional PBX/MBX interrupts, representing the events generated during the
      operation of the various transport protocols associated with different
      extensions. All interrupts of the MHU are level-sensitive.
      Some of these optional interrupts are defined per-channel, where the
      number of channels effectively available is implementation defined and
      run-time discoverable.
      In the following, names are enumerated using patterns, with per-channel
      interrupts implicitly capped at the maximum number of channels allowed by
      the specification for each extension type.
      For the sake of simplicity, maxItems is capped to a plausible number,
      assuming far fewer channels would be implemented than are actually
      possible.
      The only mandatory interrupts on the MHU are:
        - combined
        - mbx-fch-xfer-<N>, but only if mbx-fchgrp-xfer-<N> is not implemented.
    minItems: 1
    maxItems: 74
    items:
      oneOf:
        - const: combined
          description: PBX/MBX Combined interrupt
        - const: combined-ffch
          description: PBX/MBX FIFO Combined interrupt
        - pattern: '^ffch-low-tide-[0-9]+$'
          description: PBX/MBX FIFO Channel <N> Low Tide interrupt
        - pattern: '^ffch-high-tide-[0-9]+$'
          description: PBX/MBX FIFO Channel <N> High Tide interrupt
        - pattern: '^ffch-flush-[0-9]+$'
          description: PBX/MBX FIFO Channel <N> Flush interrupt
        - pattern: '^mbx-dbch-xfer-[0-9]+$'
          description: MBX Doorbell Channel <N> Transfer interrupt
        - pattern: '^mbx-fch-xfer-[0-9]+$'
          description: MBX FastChannel <N> Transfer interrupt
        - pattern: '^mbx-fchgrp-xfer-[0-9]+$'
          description: MBX FastChannel <N> Group Transfer interrupt
        - pattern: '^mbx-ffch-xfer-[0-9]+$'
          description: MBX FIFO Channel <N> Transfer interrupt
        - pattern: '^pbx-dbch-xfer-ack-[0-9]+$'
          description: PBX Doorbell Channel <N> Transfer Ack interrupt
        - pattern: '^pbx-ffch-xfer-ack-[0-9]+$'
          description: PBX FIFO Channel <N> Transfer Ack interrupt

  '#mbox-cells':
    description: |
      The first argument in the consumer's 'mboxes' property represents the
      extension type, the second is the channel number, while the third
      depends on the extension type.
      Extension type constants are defined in <dt-bindings/arm/mhuv3-dt.h>.
      The extension type for DBE is DBE_EXT and the third parameter represents
      the doorbell flag number to use.
      The extension type for FCE is FCE_EXT; the third parameter is unused.
      The extension type for FE is FE_EXT; the third parameter is unused.

      mboxes = <&mhu DBE_EXT 0 5>; // DBE, Doorbell Channel Window 0, doorbell 5.
      mboxes = <&mhu DBE_EXT 1 7>; // DBE, Doorbell Channel Window 1, doorbell 7.
      mboxes = <&mhu FCE_EXT 0 0>; // FCE, FastChannel Window 0.
      mboxes = <&mhu FCE_EXT 3 0>; // FCE, FastChannel Window 3.
      mboxes = <&mhu FE_EXT 1 0>; // FE, FIFO Channel Window 1.
      mboxes = <&mhu FE_EXT 7 0>; // FE, FIFO Channel Window 7.
    const: 3

  clocks:
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - interrupt-names
  - '#mbox-cells'

additionalProperties: false
examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>

    soc {
      #address-cells = <2>;
      #size-cells = <2>;

      mailbox@2aaa0000 {
        compatible = "arm,mhuv3";
        #mbox-cells = <3>;
        reg = <0 0x2aaa0000 0 0x10000>;
        clocks = <&clock 0>;
        interrupt-names = "combined", "pbx-dbch-xfer-ack-1",
                          "ffch-high-tide-0";
        interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
      };

      mailbox@2ab00000 {
        compatible = "arm,mhuv3";
        #mbox-cells = <3>;
        reg = <0 0x2ab00000 0 0x10000>;
        clocks = <&clock 0>;
        interrupt-names = "combined", "mbx-dbch-xfer-1", "ffch-low-tide-0";
        interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
      };
    };
@@ -30,6 +30,7 @@ properties:
- const: syscon
- items:
- enum:
- qcom,msm8974-apcs-kpss-global
- qcom,msm8976-apcs-kpss-global
- const: qcom,msm8994-apcs-kpss-global
- const: syscon
......
@@ -28,6 +28,7 @@ properties:
- qcom,sa8775p-ipcc
- qcom,sc7280-ipcc
- qcom,sc8280xp-ipcc
- qcom,sdx75-ipcc
- qcom,sm6350-ipcc
- qcom,sm6375-ipcc
- qcom,sm8250-ipcc
......
@@ -13195,6 +13195,15 @@ F: Documentation/devicetree/bindings/mailbox/arm,mhuv2.yaml
F: drivers/mailbox/arm_mhuv2.c
F: include/linux/mailbox/arm_mhuv2_message.h
MAILBOX ARM MHUv3
M: Sudeep Holla <sudeep.holla@arm.com>
M: Cristian Marussi <cristian.marussi@arm.com>
L: linux-kernel@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/mailbox/arm,mhuv3.yaml
F: drivers/mailbox/arm_mhuv3.c
MAN-PAGES: MANUAL PAGES FOR LINUX -- Sections 2, 3, 4, 5, and 7
M: Alejandro Colomar <alx@kernel.org>
L: linux-man@vger.kernel.org
......
@@ -23,6 +23,18 @@ config ARM_MHU_V2
	  Say Y here if you want to build the ARM MHUv2 controller driver,
	  which provides unidirectional mailboxes between processing elements.

config ARM_MHU_V3
	tristate "ARM MHUv3 Mailbox"
	depends on HAS_IOMEM || COMPILE_TEST
	depends on OF
	help
	  Say Y here if you want to build the ARM MHUv3 controller driver,
	  which provides unidirectional mailboxes between processing elements.

	  ARM MHUv3 controllers can implement a varying number of extensions
	  that provide different means of transport: supported extensions
	  will be discovered and possibly managed at probe-time.
config IMX_MBOX
tristate "i.MX Mailbox"
depends on ARCH_MXC || COMPILE_TEST
@@ -68,15 +80,6 @@ config OMAP2PLUS_MBOX
	  OMAP2/3; or IPU, IVA HD and DSP in OMAP4/5. Say Y here if you
	  want to use OMAP2+ Mailbox framework support.

config OMAP_MBOX_KFIFO_SIZE
	int "Mailbox kfifo default buffer size (bytes)"
	depends on OMAP2PLUS_MBOX
	default 256
	help
	  Specify the default size of mailbox's kfifo buffers (bytes).
	  This can also be changed at runtime (via the mbox_kfifo_size
	  module parameter).
config ROCKCHIP_MBOX
bool "Rockchip Soc Integrated Mailbox Support"
depends on ARCH_ROCKCHIP || COMPILE_TEST
......
@@ -9,6 +9,8 @@ obj-$(CONFIG_ARM_MHU) += arm_mhu.o arm_mhu_db.o
obj-$(CONFIG_ARM_MHU_V2) += arm_mhuv2.o
obj-$(CONFIG_ARM_MHU_V3) += arm_mhuv3.o
obj-$(CONFIG_IMX_MBOX) += imx-mailbox.o
obj-$(CONFIG_ARMADA_37XX_RWTM_MBOX) += armada-37xx-rwtm-mailbox.o
......
// SPDX-License-Identifier: GPL-2.0
/*
* ARM Message Handling Unit Version 3 (MHUv3) driver.
*
* Copyright (C) 2024 ARM Ltd.
*
* Based on ARM MHUv2 driver.
*/
#include <linux/bitfield.h>
#include <linux/bitops.h>
#include <linux/bits.h>
#include <linux/cleanup.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/mailbox_controller.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/spinlock.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/types.h>
/* ====== MHUv3 Registers ====== */
/* Maximum number of Doorbell channel windows */
#define MHUV3_DBCW_MAX 128
/* Number of DBCH combined interrupt status registers */
#define MHUV3_DBCH_CMB_INT_ST_REG_CNT 4
/* Number of FFCH combined interrupt status registers */
#define MHUV3_FFCH_CMB_INT_ST_REG_CNT 2
#define MHUV3_FLAG_BITS 32
/* Not a typo ... */
#define MHUV3_MAJOR_VERSION 2
enum {
	MHUV3_MBOX_CELL_TYPE,
	MHUV3_MBOX_CELL_CHWN,
	MHUV3_MBOX_CELL_PARAM,
	MHUV3_MBOX_CELLS
};
/* Padding bitfields/fields represent holes in the regs MMIO */
/* CTRL_Page */
struct blk_id {
#define id GENMASK(3, 0)
u32 val;
} __packed;
struct feat_spt0 {
#define dbe_spt GENMASK(3, 0)
#define fe_spt GENMASK(7, 4)
#define fce_spt GENMASK(11, 8)
u32 val;
} __packed;
struct feat_spt1 {
#define auto_op_spt GENMASK(3, 0)
u32 val;
} __packed;
struct dbch_cfg0 {
#define num_dbch GENMASK(7, 0)
u32 val;
} __packed;
struct ffch_cfg0 {
#define num_ffch GENMASK(7, 0)
#define x8ba_spt BIT(8)
#define x16ba_spt BIT(9)
#define x32ba_spt BIT(10)
#define x64ba_spt BIT(11)
#define ffch_depth GENMASK(25, 16)
u32 val;
} __packed;
struct fch_cfg0 {
#define num_fch GENMASK(9, 0)
#define fcgi_spt BIT(10) // MBX-only
#define num_fcg GENMASK(15, 11)
#define num_fch_per_grp GENMASK(20, 16)
#define fch_ws GENMASK(28, 21)
u32 val;
} __packed;
struct ctrl {
#define op_req BIT(0)
#define ch_op_mask BIT(1)
u32 val;
} __packed;
struct fch_ctrl {
#define _int_en BIT(2)
u32 val;
} __packed;
struct iidr {
#define implementer GENMASK(11, 0)
#define revision GENMASK(15, 12)
#define variant GENMASK(19, 16)
#define product_id GENMASK(31, 20)
u32 val;
} __packed;
struct aidr {
#define arch_minor_rev GENMASK(3, 0)
#define arch_major_rev GENMASK(7, 4)
u32 val;
} __packed;
struct ctrl_page {
struct blk_id blk_id;
u8 pad[12];
struct feat_spt0 feat_spt0;
struct feat_spt1 feat_spt1;
u8 pad1[8];
struct dbch_cfg0 dbch_cfg0;
u8 pad2[12];
struct ffch_cfg0 ffch_cfg0;
u8 pad3[12];
struct fch_cfg0 fch_cfg0;
u8 pad4[188];
struct ctrl x_ctrl;
/*-- MBX-only registers --*/
u8 pad5[60];
struct fch_ctrl fch_ctrl;
u32 fcg_int_en;
u8 pad6[696];
/*-- End of MBX-only ---- */
u32 dbch_int_st[MHUV3_DBCH_CMB_INT_ST_REG_CNT];
u32 ffch_int_st[MHUV3_FFCH_CMB_INT_ST_REG_CNT];
/*-- MBX-only registers --*/
u8 pad7[88];
u32 fcg_int_st;
u8 pad8[12];
u32 fcg_grp_int_st[32];
u8 pad9[2760];
/*-- End of MBX-only ---- */
struct iidr iidr;
struct aidr aidr;
u32 imp_def_id[12];
} __packed;
/* DBCW_Page */
struct xbcw_ctrl {
#define comb_en BIT(0)
u32 val;
} __packed;
struct pdbcw_int {
#define tfr_ack BIT(0)
u32 val;
} __packed;
struct pdbcw_page {
u32 st;
u8 pad[8];
u32 set;
struct pdbcw_int int_st;
struct pdbcw_int int_clr;
struct pdbcw_int int_en;
struct xbcw_ctrl ctrl;
} __packed;
struct mdbcw_page {
u32 st;
u32 st_msk;
u32 clr;
u8 pad[4];
u32 msk_st;
u32 msk_set;
u32 msk_clr;
struct xbcw_ctrl ctrl;
} __packed;
struct dummy_page {
u8 pad[SZ_4K];
} __packed;
struct mhu3_pbx_frame_reg {
struct ctrl_page ctrl;
struct pdbcw_page dbcw[MHUV3_DBCW_MAX];
struct dummy_page ffcw;
struct dummy_page fcw;
u8 pad[SZ_4K * 11];
struct dummy_page impdef;
} __packed;
struct mhu3_mbx_frame_reg {
struct ctrl_page ctrl;
struct mdbcw_page dbcw[MHUV3_DBCW_MAX];
struct dummy_page ffcw;
struct dummy_page fcw;
u8 pad[SZ_4K * 11];
struct dummy_page impdef;
} __packed;
/* Macro for reading a bitmask within a physically mapped packed struct */
#define readl_relaxed_bitmask(_regptr, _bitmask)		\
	({							\
		unsigned long _rval;				\
		_rval = readl_relaxed(_regptr);			\
		FIELD_GET(_bitmask, _rval);			\
	})

/* Macro for writing a bitmask within a physically mapped packed struct */
#define writel_relaxed_bitmask(_value, _regptr, _bitmask)	\
	({							\
		unsigned long _rval;				\
		typeof(_regptr) _rptr = _regptr;		\
		typeof(_bitmask) _bmask = _bitmask;		\
		_rval = readl_relaxed(_rptr);			\
		_rval &= ~(_bmask);				\
		_rval |= FIELD_PREP((unsigned long long)_bmask, _value);\
		writel_relaxed(_rval, _rptr);			\
	})
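The two macros above implement a classic read-modify-write of one field inside a 32-bit register image, with FIELD_GET/FIELD_PREP deriving the shift from the mask. As a hypothetical user-space sketch (not part of the driver, plain integers instead of MMIO, `field_get`/`field_set` are made-up names), the pattern is:

```c
#include <stdint.h>

/* Extract a field described by a contiguous mask (as GENMASK() produces). */
static uint32_t field_get(uint32_t mask, uint32_t reg)
{
	return (reg & mask) >> __builtin_ctz(mask);
}

/* Read-modify-write: clear the field, then OR in the shifted new value. */
static uint32_t field_set(uint32_t mask, uint32_t reg, uint32_t val)
{
	reg &= ~mask;
	reg |= (val << __builtin_ctz(mask)) & mask;
	return reg;
}
```

The driver's macros do the same, but the "register image" is a `readl_relaxed()`/`writel_relaxed()` round-trip on device memory.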
/* ====== MHUv3 data structures ====== */
enum mhuv3_frame {
PBX_FRAME,
MBX_FRAME,
};
static char *mhuv3_str[] = {
	"PBX",
	"MBX"
};

enum mhuv3_extension_type {
	DBE_EXT,
	FCE_EXT,
	FE_EXT,
	NUM_EXT
};

static char *mhuv3_ext_str[] = {
	"DBE",
	"FCE",
	"FE"
};
struct mhuv3;
/**
* struct mhuv3_protocol_ops - MHUv3 operations
*
* @rx_startup: Receiver startup callback.
* @rx_shutdown: Receiver shutdown callback.
* @read_data: Read available Sender in-band LE data (if any).
* @rx_complete: Acknowledge data reception to the Sender. Any out-of-band data
* has to have been already retrieved before calling this.
* @tx_startup: Sender startup callback.
* @tx_shutdown: Sender shutdown callback.
* @last_tx_done: Report back to the Sender if the last transfer has completed.
* @send_data: Send data to the receiver.
*
* Each supported transport protocol provides its own implementation of
* these operations.
*/
struct mhuv3_protocol_ops {
	int (*rx_startup)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*rx_shutdown)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void *(*read_data)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*rx_complete)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*tx_startup)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*tx_shutdown)(struct mhuv3 *mhu, struct mbox_chan *chan);
	int (*last_tx_done)(struct mhuv3 *mhu, struct mbox_chan *chan);
	int (*send_data)(struct mhuv3 *mhu, struct mbox_chan *chan, void *arg);
};
/**
* struct mhuv3_mbox_chan_priv - MHUv3 channel private information
*
* @ch_idx: Channel window index associated to this mailbox channel.
* @doorbell: Doorbell bit number within the @ch_idx window.
* Only relevant to Doorbell transport.
* @ops: Transport protocol specific operations for this channel.
*
 * Transport specific data attached to mailbox channel priv data.
*/
struct mhuv3_mbox_chan_priv {
	u32 ch_idx;
	u32 doorbell;
	const struct mhuv3_protocol_ops *ops;
};
/**
* struct mhuv3_extension - MHUv3 extension descriptor
*
* @type: Type of extension
* @num_chans: Max number of channels found for this extension.
* @base_ch_idx: First channel number assigned to this extension, picked from
* the set of all mailbox channels descriptors created.
* @mbox_of_xlate: Extension specific helper to parse DT and lookup associated
* channel from the related 'mboxes' property.
* @combined_irq_setup: Extension specific helper to setup the combined irq.
* @channels_init: Extension specific helper to initialize channels.
* @chan_from_comb_irq_get: Extension specific helper to lookup which channel
* triggered the combined irq.
* @pending_db: Array of per-channel pending doorbells.
* @pending_lock: Protect access to pending_db.
*/
struct mhuv3_extension {
enum mhuv3_extension_type type;
unsigned int num_chans;
unsigned int base_ch_idx;
struct mbox_chan *(*mbox_of_xlate)(struct mhuv3 *mhu,
unsigned int channel,
unsigned int param);
void (*combined_irq_setup)(struct mhuv3 *mhu);
int (*channels_init)(struct mhuv3 *mhu);
struct mbox_chan *(*chan_from_comb_irq_get)(struct mhuv3 *mhu);
u32 pending_db[MHUV3_DBCW_MAX];
/* Protect access to pending_db */
spinlock_t pending_lock;
};
/**
* struct mhuv3 - MHUv3 mailbox controller data
*
* @frame: Frame type: MBX_FRAME or PBX_FRAME.
* @auto_op_full: Flag to indicate if the MHU supports AutoOp full mode.
* @major: MHUv3 controller architectural major version.
* @minor: MHUv3 controller architectural minor version.
* @implem: MHUv3 controller IIDR implementer.
* @rev: MHUv3 controller IIDR revision.
* @var: MHUv3 controller IIDR variant.
* @prod_id: MHUv3 controller IIDR product_id.
 * @num_chans: The total number of channels discovered across all extensions.
* @cmb_irq: Combined IRQ number if any found defined.
* @ctrl: A reference to the MHUv3 control page for this block.
* @pbx: Base address of the PBX register mapping region.
* @mbx: Base address of the MBX register mapping region.
* @ext: Array holding descriptors for any found implemented extension.
* @mbox: Mailbox controller belonging to the MHU frame.
*/
struct mhuv3 {
	enum mhuv3_frame frame;
	bool auto_op_full;
	unsigned int major;
	unsigned int minor;
	unsigned int implem;
	unsigned int rev;
	unsigned int var;
	unsigned int prod_id;
	unsigned int num_chans;
	int cmb_irq;
	struct ctrl_page __iomem *ctrl;
	union {
		struct mhu3_pbx_frame_reg __iomem *pbx;
		struct mhu3_mbx_frame_reg __iomem *mbx;
	};
	struct mhuv3_extension *ext[NUM_EXT];
	struct mbox_controller mbox;
};
#define mhu_from_mbox(_mbox) container_of(_mbox, struct mhuv3, mbox)
typedef int (*mhuv3_extension_initializer)(struct mhuv3 *mhu);
/* =================== Doorbell transport protocol operations =============== */
static void mhuv3_doorbell_tx_startup(struct mhuv3 *mhu, struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
/* Enable Transfer Acknowledgment events */
writel_relaxed_bitmask(0x1, &mhu->pbx->dbcw[priv->ch_idx].int_en, tfr_ack);
}
static void mhuv3_doorbell_tx_shutdown(struct mhuv3 *mhu, struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3_extension *e = mhu->ext[DBE_EXT];
unsigned long flags;
/* Disable Channel Transfer Ack events */
writel_relaxed_bitmask(0x0, &mhu->pbx->dbcw[priv->ch_idx].int_en, tfr_ack);
/* Clear Channel Transfer Ack and pending doorbells */
writel_relaxed_bitmask(0x1, &mhu->pbx->dbcw[priv->ch_idx].int_clr, tfr_ack);
spin_lock_irqsave(&e->pending_lock, flags);
e->pending_db[priv->ch_idx] = 0;
spin_unlock_irqrestore(&e->pending_lock, flags);
}
static int mhuv3_doorbell_rx_startup(struct mhuv3 *mhu, struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
/* Unmask Channel Transfer events */
writel_relaxed(BIT(priv->doorbell), &mhu->mbx->dbcw[priv->ch_idx].msk_clr);
return 0;
}
static void mhuv3_doorbell_rx_shutdown(struct mhuv3 *mhu,
struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
/* Mask Channel Transfer events */
writel_relaxed(BIT(priv->doorbell), &mhu->mbx->dbcw[priv->ch_idx].msk_set);
}
static void mhuv3_doorbell_rx_complete(struct mhuv3 *mhu, struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
/* Clearing the pending transfer generates the Channel Transfer Ack */
writel_relaxed(BIT(priv->doorbell), &mhu->mbx->dbcw[priv->ch_idx].clr);
}
static int mhuv3_doorbell_last_tx_done(struct mhuv3 *mhu,
struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
int done;
done = !(readl_relaxed(&mhu->pbx->dbcw[priv->ch_idx].st) &
BIT(priv->doorbell));
if (done) {
struct mhuv3_extension *e = mhu->ext[DBE_EXT];
unsigned long flags;
/* Take care to clear the pending doorbell also when polling */
spin_lock_irqsave(&e->pending_lock, flags);
e->pending_db[priv->ch_idx] &= ~BIT(priv->doorbell);
spin_unlock_irqrestore(&e->pending_lock, flags);
}
return done;
}
static int mhuv3_doorbell_send_data(struct mhuv3 *mhu, struct mbox_chan *chan,
				    void *arg)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];

	scoped_guard(spinlock_irqsave, &e->pending_lock) {
		/* Only one in-flight Transfer is allowed per-doorbell */
		if (e->pending_db[priv->ch_idx] & BIT(priv->doorbell))
			return -EBUSY;

		e->pending_db[priv->ch_idx] |= BIT(priv->doorbell);
	}

	writel_relaxed(BIT(priv->doorbell), &mhu->pbx->dbcw[priv->ch_idx].set);

	return 0;
}
static const struct mhuv3_protocol_ops mhuv3_doorbell_ops = {
.tx_startup = mhuv3_doorbell_tx_startup,
.tx_shutdown = mhuv3_doorbell_tx_shutdown,
.rx_startup = mhuv3_doorbell_rx_startup,
.rx_shutdown = mhuv3_doorbell_rx_shutdown,
.rx_complete = mhuv3_doorbell_rx_complete,
.last_tx_done = mhuv3_doorbell_last_tx_done,
.send_data = mhuv3_doorbell_send_data,
};
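The doorbell ops above enforce one in-flight Transfer per doorbell flag: `send_data` refuses with -EBUSY while the flag's bit is set in `pending_db`, and the bit is cleared on acknowledgment (or shutdown). A hypothetical user-space model of just that bookkeeping, without the MMIO and locking (`db_channel`, `db_send`, `db_ack` are made-up names for illustration):

```c
#include <stdint.h>

/* One Doorbell Channel window: bit N set means doorbell N is in flight. */
struct db_channel {
	uint32_t pending;
};

/* Mirrors the -EBUSY check in mhuv3_doorbell_send_data(): returns 0 on
 * success, -1 if a Transfer is already in flight on this doorbell. */
static int db_send(struct db_channel *ch, unsigned int doorbell)
{
	if (ch->pending & (1u << doorbell))
		return -1;
	ch->pending |= 1u << doorbell;
	return 0;
}

/* Mirrors clearing the pending bit on Transfer Ack / last_tx_done. */
static void db_ack(struct db_channel *ch, unsigned int doorbell)
{
	ch->pending &= ~(1u << doorbell);
}
```

In the real driver the bitmap lives in `mhuv3_extension.pending_db[]` and is guarded by `pending_lock`, since the IRQ handler and senders race on it.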
/* Sender and receiver mailbox ops */
static bool mhuv3_sender_last_tx_done(struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);
return priv->ops->last_tx_done(mhu, chan);
}
static int mhuv3_sender_send_data(struct mbox_chan *chan, void *data)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);
if (!priv->ops->last_tx_done(mhu, chan))
return -EBUSY;
return priv->ops->send_data(mhu, chan, data);
}
static int mhuv3_sender_startup(struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);
if (priv->ops->tx_startup)
priv->ops->tx_startup(mhu, chan);
return 0;
}
static void mhuv3_sender_shutdown(struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);
if (priv->ops->tx_shutdown)
priv->ops->tx_shutdown(mhu, chan);
}
static const struct mbox_chan_ops mhuv3_sender_ops = {
.send_data = mhuv3_sender_send_data,
.startup = mhuv3_sender_startup,
.shutdown = mhuv3_sender_shutdown,
.last_tx_done = mhuv3_sender_last_tx_done,
};
static int mhuv3_receiver_startup(struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);
return priv->ops->rx_startup(mhu, chan);
}
static void mhuv3_receiver_shutdown(struct mbox_chan *chan)
{
struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);
priv->ops->rx_shutdown(mhu, chan);
}
static int mhuv3_receiver_send_data(struct mbox_chan *chan, void *data)
{
dev_err(chan->mbox->dev,
"Trying to transmit on a MBX MHUv3 frame\n");
return -EIO;
}
static bool mhuv3_receiver_last_tx_done(struct mbox_chan *chan)
{
dev_err(chan->mbox->dev, "Trying to Tx poll on a MBX MHUv3 frame\n");
return true;
}
static const struct mbox_chan_ops mhuv3_receiver_ops = {
.send_data = mhuv3_receiver_send_data,
.startup = mhuv3_receiver_startup,
.shutdown = mhuv3_receiver_shutdown,
.last_tx_done = mhuv3_receiver_last_tx_done,
};
static struct mbox_chan *mhuv3_dbe_mbox_of_xlate(struct mhuv3 *mhu,
unsigned int channel,
unsigned int doorbell)
{
struct mhuv3_extension *e = mhu->ext[DBE_EXT];
struct mbox_controller *mbox = &mhu->mbox;
struct mbox_chan *chans = mbox->chans;
if (channel >= e->num_chans || doorbell >= MHUV3_FLAG_BITS) {
dev_err(mbox->dev, "Couldn't xlate to a valid channel (%d: %d)\n",
channel, doorbell);
return ERR_PTR(-ENODEV);
}
return &chans[e->base_ch_idx + channel * MHUV3_FLAG_BITS + doorbell];
}
static void mhuv3_dbe_combined_irq_setup(struct mhuv3 *mhu)
{
struct mhuv3_extension *e = mhu->ext[DBE_EXT];
int i;
if (mhu->frame == PBX_FRAME) {
struct pdbcw_page __iomem *dbcw = mhu->pbx->dbcw;
for (i = 0; i < e->num_chans; i++) {
writel_relaxed_bitmask(0x1, &dbcw[i].int_clr, tfr_ack);
writel_relaxed_bitmask(0x0, &dbcw[i].int_en, tfr_ack);
writel_relaxed_bitmask(0x1, &dbcw[i].ctrl, comb_en);
}
} else {
struct mdbcw_page __iomem *dbcw = mhu->mbx->dbcw;
for (i = 0; i < e->num_chans; i++) {
writel_relaxed(0xFFFFFFFF, &dbcw[i].clr);
writel_relaxed(0xFFFFFFFF, &dbcw[i].msk_set);
writel_relaxed_bitmask(0x1, &dbcw[i].ctrl, comb_en);
}
}
}
static int mhuv3_dbe_channels_init(struct mhuv3 *mhu)
{
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];
	struct mbox_controller *mbox = &mhu->mbox;
	struct mbox_chan *chans;
	int i;

	chans = mbox->chans + mbox->num_chans;
	e->base_ch_idx = mbox->num_chans;

	for (i = 0; i < e->num_chans; i++) {
		struct mhuv3_mbox_chan_priv *priv;
		int k;

		for (k = 0; k < MHUV3_FLAG_BITS; k++) {
			priv = devm_kmalloc(mbox->dev, sizeof(*priv), GFP_KERNEL);
			if (!priv)
				return -ENOMEM;

			priv->ch_idx = i;
			priv->ops = &mhuv3_doorbell_ops;
			priv->doorbell = k;
			chans++->con_priv = priv;
			mbox->num_chans++;
		}
	}

	spin_lock_init(&e->pending_lock);

	return 0;
}
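The loop above lays out one mailbox channel per (window, doorbell flag) pair: each of the `num_chans` DBCH windows contributes MHUV3_FLAG_BITS (32) consecutive channels starting at `base_ch_idx`, which is the same arithmetic `mhuv3_dbe_mbox_of_xlate()` later inverts. A small sketch of that index computation (the helper name is hypothetical, not from the driver):

```c
/* Global mailbox-channel index for a DBE (window, doorbell) pair, as laid
 * out by mhuv3_dbe_channels_init(): windows are consecutive blocks of 32
 * channels (MHUV3_FLAG_BITS) starting at the extension's base index. */
static unsigned int dbe_chan_index(unsigned int base_ch_idx,
				   unsigned int channel,
				   unsigned int doorbell)
{
	return base_ch_idx + channel * 32 + doorbell;
}
```

So a consumer's `mboxes = <&mhu DBE_EXT 2 7>` resolves to channel `base + 2*32 + 7` in the controller's flat `chans` array.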
static bool mhuv3_dbe_doorbell_lookup(struct mhuv3 *mhu, unsigned int channel,
unsigned int *db)
{
struct mhuv3_extension *e = mhu->ext[DBE_EXT];
struct device *dev = mhu->mbox.dev;
u32 st;
if (mhu->frame == PBX_FRAME) {
u32 active_dbs, fired_dbs;
st = readl_relaxed_bitmask(&mhu->pbx->dbcw[channel].int_st,
tfr_ack);
if (!st)
goto err_spurious;
active_dbs = readl_relaxed(&mhu->pbx->dbcw[channel].st);
scoped_guard(spinlock_irqsave, &e->pending_lock) {
fired_dbs = e->pending_db[channel] & ~active_dbs;
if (!fired_dbs)
goto err_spurious;
*db = __ffs(fired_dbs);
e->pending_db[channel] &= ~BIT(*db);
}
fired_dbs &= ~BIT(*db);
/* Clear TFR Ack if no more doorbells pending */
if (!fired_dbs)
writel_relaxed_bitmask(0x1,
&mhu->pbx->dbcw[channel].int_clr,
tfr_ack);
} else {
st = readl_relaxed(&mhu->mbx->dbcw[channel].st_msk);
if (!st)
goto err_spurious;
*db = __ffs(st);
}
return true;
err_spurious:
dev_warn(dev, "Spurious IRQ on %s channel:%d\n",
mhuv3_str[mhu->frame], channel);
return false;
}
static struct mbox_chan *mhuv3_dbe_chan_from_comb_irq_get(struct mhuv3 *mhu)
{
struct mhuv3_extension *e = mhu->ext[DBE_EXT];
struct device *dev = mhu->mbox.dev;
int i;
for (i = 0; i < MHUV3_DBCH_CMB_INT_ST_REG_CNT; i++) {
unsigned int channel, db;
u32 cmb_st;
cmb_st = readl_relaxed(&mhu->ctrl->dbch_int_st[i]);
if (!cmb_st)
continue;
channel = i * MHUV3_FLAG_BITS + __ffs(cmb_st);
if (channel >= e->num_chans) {
dev_err(dev, "Invalid %s channel:%d\n",
mhuv3_str[mhu->frame], channel);
return ERR_PTR(-EIO);
}
if (!mhuv3_dbe_doorbell_lookup(mhu, channel, &db))
continue;
dev_dbg(dev, "Found %s ch[%d]/db[%d]\n",
mhuv3_str[mhu->frame], channel, db);
return &mhu->mbox.chans[channel * MHUV3_FLAG_BITS + db];
}
return ERR_PTR(-EIO);
}
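`mhuv3_dbe_chan_from_comb_irq_get()` above walks the combined-interrupt status words and maps the first set bit to a global channel number as `i * 32 + __ffs(st)`. A user-space sketch of that scan (the function name is made up; `__builtin_ctz` stands in for the kernel's `__ffs`):

```c
#include <stdint.h>

/* Scan nwords 32-bit status words; return the global channel number of the
 * first set bit (word index * 32 + bit position), or -1 if none is set. */
static int first_active_channel(const uint32_t *st, int nwords)
{
	int i;

	for (i = 0; i < nwords; i++)
		if (st[i])
			return i * 32 + __builtin_ctz(st[i]);
	return -1;
}
```

In the driver the equivalent loop also rejects channel numbers beyond `num_chans` and then drills into the per-channel doorbell lookup.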
static int mhuv3_dbe_init(struct mhuv3 *mhu)
{
struct device *dev = mhu->mbox.dev;
struct mhuv3_extension *e;
if (!readl_relaxed_bitmask(&mhu->ctrl->feat_spt0, dbe_spt))
return 0;
dev_dbg(dev, "%s: Initializing DBE Extension.\n", mhuv3_str[mhu->frame]);
e = devm_kzalloc(dev, sizeof(*e), GFP_KERNEL);
if (!e)
return -ENOMEM;
e->type = DBE_EXT;
/* Note that, by the spec, the number of channels is (num_dbch + 1) */
e->num_chans =
readl_relaxed_bitmask(&mhu->ctrl->dbch_cfg0, num_dbch) + 1;
e->mbox_of_xlate = mhuv3_dbe_mbox_of_xlate;
e->combined_irq_setup = mhuv3_dbe_combined_irq_setup;
e->channels_init = mhuv3_dbe_channels_init;
e->chan_from_comb_irq_get = mhuv3_dbe_chan_from_comb_irq_get;
mhu->num_chans += e->num_chans * MHUV3_FLAG_BITS;
mhu->ext[DBE_EXT] = e;
dev_dbg(dev, "%s: found %d DBE channels.\n",
mhuv3_str[mhu->frame], e->num_chans);
return 0;
}
static int mhuv3_fce_init(struct mhuv3 *mhu)
{
struct device *dev = mhu->mbox.dev;
if (!readl_relaxed_bitmask(&mhu->ctrl->feat_spt0, fce_spt))
return 0;
dev_dbg(dev, "%s: FCE Extension not supported by driver.\n",
mhuv3_str[mhu->frame]);
return 0;
}
static int mhuv3_fe_init(struct mhuv3 *mhu)
{
struct device *dev = mhu->mbox.dev;
if (!readl_relaxed_bitmask(&mhu->ctrl->feat_spt0, fe_spt))
return 0;
dev_dbg(dev, "%s: FE Extension not supported by driver.\n",
mhuv3_str[mhu->frame]);
return 0;
}
static mhuv3_extension_initializer mhuv3_extension_init[NUM_EXT] = {
mhuv3_dbe_init,
mhuv3_fce_init,
mhuv3_fe_init,
};
static int mhuv3_initialize_channels(struct device *dev, struct mhuv3 *mhu)
{
struct mbox_controller *mbox = &mhu->mbox;
int i, ret = 0;
mbox->chans = devm_kcalloc(dev, mhu->num_chans,
sizeof(*mbox->chans), GFP_KERNEL);
if (!mbox->chans)
return dev_err_probe(dev, -ENOMEM,
"Failed to initialize channels\n");
for (i = 0; i < NUM_EXT && !ret; i++)
if (mhu->ext[i])
ret = mhu->ext[i]->channels_init(mhu);
return ret;
}
static struct mbox_chan *mhuv3_mbox_of_xlate(struct mbox_controller *mbox,
const struct of_phandle_args *pa)
{
struct mhuv3 *mhu = mhu_from_mbox(mbox);
unsigned int type, channel, param;
if (pa->args_count != MHUV3_MBOX_CELLS)
return ERR_PTR(-EINVAL);
type = pa->args[MHUV3_MBOX_CELL_TYPE];
if (type >= NUM_EXT)
return ERR_PTR(-EINVAL);
channel = pa->args[MHUV3_MBOX_CELL_CHWN];
param = pa->args[MHUV3_MBOX_CELL_PARAM];
return mhu->ext[type]->mbox_of_xlate(mhu, channel, param);
}
static void mhu_frame_cleanup_actions(void *data)
{
struct mhuv3 *mhu = data;
writel_relaxed_bitmask(0x0, &mhu->ctrl->x_ctrl, op_req);
}
static int mhuv3_frame_init(struct mhuv3 *mhu, void __iomem *regs)
{
struct device *dev = mhu->mbox.dev;
int i;
mhu->ctrl = regs;
mhu->frame = readl_relaxed_bitmask(&mhu->ctrl->blk_id, id);
if (mhu->frame > MBX_FRAME)
return dev_err_probe(dev, -EINVAL,
"Invalid frame type: %d\n", mhu->frame);
mhu->major = readl_relaxed_bitmask(&mhu->ctrl->aidr, arch_major_rev);
mhu->minor = readl_relaxed_bitmask(&mhu->ctrl->aidr, arch_minor_rev);
mhu->implem = readl_relaxed_bitmask(&mhu->ctrl->iidr, implementer);
mhu->rev = readl_relaxed_bitmask(&mhu->ctrl->iidr, revision);
mhu->var = readl_relaxed_bitmask(&mhu->ctrl->iidr, variant);
mhu->prod_id = readl_relaxed_bitmask(&mhu->ctrl->iidr, product_id);
if (mhu->major != MHUV3_MAJOR_VERSION)
return dev_err_probe(dev, -EINVAL,
"Unsupported MHU %s block - major:%d minor:%d\n",
mhuv3_str[mhu->frame], mhu->major,
mhu->minor);
mhu->auto_op_full =
!!readl_relaxed_bitmask(&mhu->ctrl->feat_spt1, auto_op_spt);
/* Request the PBX/MBX to remain operational */
if (mhu->auto_op_full) {
writel_relaxed_bitmask(0x1, &mhu->ctrl->x_ctrl, op_req);
devm_add_action_or_reset(dev, mhu_frame_cleanup_actions, mhu);
}
dev_dbg(dev,
"Found MHU %s block - major:%d minor:%d\n implem:0x%X rev:0x%X var:0x%X prod_id:0x%X",
mhuv3_str[mhu->frame], mhu->major, mhu->minor,
mhu->implem, mhu->rev, mhu->var, mhu->prod_id);
if (mhu->frame == PBX_FRAME)
mhu->pbx = regs;
else
mhu->mbx = regs;
for (i = 0; i < NUM_EXT; i++) {
int ret;
/*
* Note that extension initialization fails only when the
* extension's init routine fails for an extension that is
* supported both in hardware and by this driver.
*/
ret = mhuv3_extension_init[i](mhu);
if (ret)
return dev_err_probe(dev, ret,
"Failed to initialize %s %s\n",
mhuv3_str[mhu->frame],
mhuv3_ext_str[i]);
}
return 0;
}
static irqreturn_t mhuv3_pbx_comb_interrupt(int irq, void *arg)
{
unsigned int i, found = 0;
struct mhuv3 *mhu = arg;
struct mbox_chan *chan;
struct device *dev;
int ret = IRQ_NONE;
dev = mhu->mbox.dev;
for (i = 0; i < NUM_EXT; i++) {
struct mhuv3_mbox_chan_priv *priv;
/* FCE does not participate in the PBX combined interrupt */
if (i == FCE_EXT || !mhu->ext[i])
continue;
chan = mhu->ext[i]->chan_from_comb_irq_get(mhu);
if (IS_ERR(chan))
continue;
found++;
priv = chan->con_priv;
if (!chan->cl) {
dev_warn(dev, "TX Ack on UNBOUND channel (%u)\n",
priv->ch_idx);
continue;
}
mbox_chan_txdone(chan, 0);
ret = IRQ_HANDLED;
}
if (found == 0)
dev_warn_once(dev, "Failed to find channel for the TX interrupt\n");
return ret;
}
static irqreturn_t mhuv3_mbx_comb_interrupt(int irq, void *arg)
{
unsigned int i, found = 0;
struct mhuv3 *mhu = arg;
struct mbox_chan *chan;
struct device *dev;
int ret = IRQ_NONE;
dev = mhu->mbox.dev;
for (i = 0; i < NUM_EXT; i++) {
struct mhuv3_mbox_chan_priv *priv;
void *data __free(kfree) = NULL;
if (!mhu->ext[i])
continue;
/* Process any extension which could be the source of the IRQ */
chan = mhu->ext[i]->chan_from_comb_irq_get(mhu);
if (IS_ERR(chan))
continue;
found++;
/* From here on we need to call rx_complete even on error */
priv = chan->con_priv;
if (!chan->cl) {
dev_warn(dev, "RX Data on UNBOUND channel (%u)\n",
priv->ch_idx);
goto rx_ack;
}
/* Read optional in-band LE data first. */
if (priv->ops->read_data) {
data = priv->ops->read_data(mhu, chan);
if (IS_ERR(data)) {
dev_err(dev,
"Failed to read in-band data. err:%ld\n",
PTR_ERR(no_free_ptr(data)));
goto rx_ack;
}
}
mbox_chan_received_data(chan, data);
ret = IRQ_HANDLED;
/*
* Acknowledge transfer after any possible optional
* out-of-band data has also been retrieved via
* mbox_chan_received_data().
*/
rx_ack:
if (priv->ops->rx_complete)
priv->ops->rx_complete(mhu, chan);
}
if (found == 0)
dev_warn_once(dev, "Failed to find channel for the RX interrupt\n");
return ret;
}
static int mhuv3_setup_pbx(struct mhuv3 *mhu)
{
struct device *dev = mhu->mbox.dev;
mhu->mbox.ops = &mhuv3_sender_ops;
if (mhu->cmb_irq > 0) {
int ret, i;
ret = devm_request_threaded_irq(dev, mhu->cmb_irq, NULL,
mhuv3_pbx_comb_interrupt,
IRQF_ONESHOT, "mhuv3-pbx", mhu);
if (ret)
return dev_err_probe(dev, ret,
"Failed to request PBX IRQ\n");
mhu->mbox.txdone_irq = true;
mhu->mbox.txdone_poll = false;
for (i = 0; i < NUM_EXT; i++)
if (mhu->ext[i])
mhu->ext[i]->combined_irq_setup(mhu);
dev_dbg(dev, "MHUv3 PBX IRQs initialized.\n");
return 0;
}
dev_info(dev, "Using PBX in Tx polling mode.\n");
mhu->mbox.txdone_irq = false;
mhu->mbox.txdone_poll = true;
mhu->mbox.txpoll_period = 1;
return 0;
}
static int mhuv3_setup_mbx(struct mhuv3 *mhu)
{
struct device *dev = mhu->mbox.dev;
int ret, i;
mhu->mbox.ops = &mhuv3_receiver_ops;
if (mhu->cmb_irq <= 0)
return dev_err_probe(dev, -EINVAL,
"MBX combined IRQ is missing!\n");
ret = devm_request_threaded_irq(dev, mhu->cmb_irq, NULL,
mhuv3_mbx_comb_interrupt, IRQF_ONESHOT,
"mhuv3-mbx", mhu);
if (ret)
return dev_err_probe(dev, ret, "Failed to request MBX IRQ\n");
for (i = 0; i < NUM_EXT; i++)
if (mhu->ext[i])
mhu->ext[i]->combined_irq_setup(mhu);
dev_dbg(dev, "MHUv3 MBX IRQs initialized.\n");
return ret;
}
static int mhuv3_irqs_init(struct mhuv3 *mhu, struct platform_device *pdev)
{
dev_dbg(mhu->mbox.dev, "Initializing %s block.\n",
mhuv3_str[mhu->frame]);
if (mhu->frame == PBX_FRAME) {
mhu->cmb_irq =
platform_get_irq_byname_optional(pdev, "combined");
return mhuv3_setup_pbx(mhu);
}
mhu->cmb_irq = platform_get_irq_byname(pdev, "combined");
return mhuv3_setup_mbx(mhu);
}
static int mhuv3_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
void __iomem *regs;
struct mhuv3 *mhu;
int ret;
mhu = devm_kzalloc(dev, sizeof(*mhu), GFP_KERNEL);
if (!mhu)
return -ENOMEM;
regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(regs))
return PTR_ERR(regs);
mhu->mbox.dev = dev;
ret = mhuv3_frame_init(mhu, regs);
if (ret)
return ret;
ret = mhuv3_irqs_init(mhu, pdev);
if (ret)
return ret;
mhu->mbox.of_xlate = mhuv3_mbox_of_xlate;
ret = mhuv3_initialize_channels(dev, mhu);
if (ret)
return ret;
ret = devm_mbox_controller_register(dev, &mhu->mbox);
if (ret)
return dev_err_probe(dev, ret,
"Failed to register ARM MHUv3 driver\n");
return ret;
}
static const struct of_device_id mhuv3_of_match[] = {
{ .compatible = "arm,mhuv3", .data = NULL },
{}
};
MODULE_DEVICE_TABLE(of, mhuv3_of_match);
static struct platform_driver mhuv3_driver = {
.driver = {
.name = "arm-mhuv3-mailbox",
.of_match_table = mhuv3_of_match,
},
.probe = mhuv3_probe,
};
module_platform_driver(mhuv3_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("ARM MHUv3 Driver");
MODULE_AUTHOR("Cristian Marussi <cristian.marussi@arm.com>");
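The combined-interrupt decode in mhuv3_dbe_chan_from_comb_irq_get() is simple index arithmetic: status word i covers doorbell channels [i*32, i*32+31], __ffs() picks the first pending one, and the flat mailbox channel is channel*32 + doorbell. A standalone userspace sketch of that arithmetic (FLAG_BITS and the helper names are illustrative, not part of the driver; callers must not pass a zero status word, which the driver's `if (!cmb_st) continue;` guard already ensures):

```c
#include <assert.h>
#include <stdint.h>

#define FLAG_BITS 32U	/* doorbells per channel, mirrors MHUV3_FLAG_BITS */

/* Hypothetical helper: given the index i of a combined-interrupt status
 * word and its (non-zero) raw value, return the first pending channel,
 * as the driver does with __ffs(). */
static unsigned int first_pending_channel(unsigned int i, uint32_t cmb_st)
{
	return i * FLAG_BITS + (unsigned int)__builtin_ctz(cmb_st);
}

/* Flat mailbox-channel index for doorbell db of channel ch, matching
 * the &mhu->mbox.chans[channel * MHUV3_FLAG_BITS + db] addressing. */
static unsigned int mbox_chan_index(unsigned int ch, unsigned int db)
{
	return ch * FLAG_BITS + db;
}
```

The only subtlety is that `__builtin_ctz()` (like `__ffs()`) is undefined for 0, hence the driver skips empty status words before decoding.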
@@ -43,6 +43,7 @@
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/workqueue.h>
#define PDC_SUCCESS 0
@@ -293,8 +294,8 @@ struct pdc_state {
unsigned int pdc_irq;
/* tasklet for deferred processing after DMA rx interrupt */
struct tasklet_struct rx_tasklet;
/* work for deferred processing after DMA rx interrupt */
struct work_struct rx_work;
/* Number of bytes of receive status prior to each rx frame */
u32 rx_status_len;
@@ -952,18 +953,18 @@ static irqreturn_t pdc_irq_handler(int irq, void *data)
iowrite32(intstatus, pdcs->pdc_reg_vbase + PDC_INTSTATUS_OFFSET);
/* Wakeup IRQ thread */
tasklet_schedule(&pdcs->rx_tasklet);
queue_work(system_bh_wq, &pdcs->rx_work);
return IRQ_HANDLED;
}
/**
* pdc_tasklet_cb() - Tasklet callback that runs the deferred processing after
* pdc_work_cb() - Work callback that runs the deferred processing after
* a DMA receive interrupt. Reenables the receive interrupt.
* @t: Pointer to the rx_work work_struct embedded in the pdc_state
*/
static void pdc_tasklet_cb(struct tasklet_struct *t)
static void pdc_work_cb(struct work_struct *t)
{
struct pdc_state *pdcs = from_tasklet(pdcs, t, rx_tasklet);
struct pdc_state *pdcs = from_work(pdcs, t, rx_work);
pdc_receive(pdcs);
@@ -1577,8 +1578,8 @@ static int pdc_probe(struct platform_device *pdev)
pdc_hw_init(pdcs);
/* Init tasklet for deferred DMA rx processing */
tasklet_setup(&pdcs->rx_tasklet, pdc_tasklet_cb);
/* Init work for deferred DMA rx processing */
INIT_WORK(&pdcs->rx_work, pdc_work_cb);
err = pdc_interrupts_init(pdcs);
if (err)
@@ -1595,7 +1596,7 @@ static int pdc_probe(struct platform_device *pdev)
return PDC_SUCCESS;
cleanup_buf_pool:
tasklet_kill(&pdcs->rx_tasklet);
cancel_work_sync(&pdcs->rx_work);
dma_pool_destroy(pdcs->rx_buf_pool);
cleanup_ring_pool:
@@ -1611,7 +1612,7 @@ static void pdc_remove(struct platform_device *pdev)
pdc_free_debugfs();
tasklet_kill(&pdcs->rx_tasklet);
cancel_work_sync(&pdcs->rx_work);
pdc_hw_disable(pdcs);
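The from_work() helper this conversion relies on is container_of() in disguise: the BH workqueue hands the callback a pointer to the embedded work_struct, and the callback subtracts the member offset to recover the enclosing pdc_state. A userspace sketch of the pattern (the sketch types and macro name are made up, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct work_struct; only its address matters here. */
struct work_struct_sketch { int pending; };

struct pdc_state_sketch {
	int id;
	struct work_struct_sketch rx_work;
};

/* from_work(var, ptr, member) boils down to container_of(): subtract
 * the member offset to recover the enclosing structure. */
#define sketch_from_work(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static int demo_from_work(void)
{
	struct pdc_state_sketch s = { .id = 7, .rx_work = { 0 } };
	/* This is what the workqueue passes to the callback. */
	struct work_struct_sketch *w = &s.rx_work;

	return sketch_from_work(w, struct pdc_state_sketch, rx_work)->id;
}
```

This is why the callback needs no extra context argument: the work item's address alone identifies its owner.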
@@ -21,6 +21,7 @@
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include "mailbox.h"
@@ -80,7 +81,7 @@ struct imx_mu_con_priv {
char irq_desc[IMX_MU_CHAN_NAME_SIZE];
enum imx_mu_chan_type type;
struct mbox_chan *chan;
struct tasklet_struct txdb_tasklet;
struct work_struct txdb_work;
};
struct imx_mu_priv {
@@ -232,7 +233,7 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
break;
case IMX_MU_TYPE_TXDB:
imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
tasklet_schedule(&cp->txdb_tasklet);
queue_work(system_bh_wq, &cp->txdb_work);
break;
case IMX_MU_TYPE_TXDB_V2:
imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
@@ -420,7 +421,7 @@ static int imx_mu_seco_tx(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp,
}
/* Simulate hack for mbox framework */
tasklet_schedule(&cp->txdb_tasklet);
queue_work(system_bh_wq, &cp->txdb_work);
break;
default:
@@ -484,9 +485,9 @@ static int imx_mu_seco_rxdb(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp
return err;
}
static void imx_mu_txdb_tasklet(unsigned long data)
static void imx_mu_txdb_work(struct work_struct *t)
{
struct imx_mu_con_priv *cp = (struct imx_mu_con_priv *)data;
struct imx_mu_con_priv *cp = from_work(cp, t, txdb_work);
mbox_chan_txdone(cp->chan, 0);
}
@@ -570,8 +571,7 @@ static int imx_mu_startup(struct mbox_chan *chan)
if (cp->type == IMX_MU_TYPE_TXDB) {
/* Tx doorbell don't have ACK support */
tasklet_init(&cp->txdb_tasklet, imx_mu_txdb_tasklet,
(unsigned long)cp);
INIT_WORK(&cp->txdb_work, imx_mu_txdb_work);
return 0;
}
@@ -615,7 +615,7 @@ static void imx_mu_shutdown(struct mbox_chan *chan)
}
if (cp->type == IMX_MU_TYPE_TXDB) {
tasklet_kill(&cp->txdb_tasklet);
cancel_work_sync(&cp->txdb_work);
pm_runtime_put_sync(priv->dev);
return;
}
@@ -465,7 +465,7 @@ static void cmdq_mbox_shutdown(struct mbox_chan *chan)
struct cmdq_task *task, *tmp;
unsigned long flags;
WARN_ON(pm_runtime_get_sync(cmdq->mbox.dev));
WARN_ON(pm_runtime_get_sync(cmdq->mbox.dev) < 0);
spin_lock_irqsave(&thread->chan->lock, flags);
if (list_empty(&thread->task_busy_list))
@@ -765,6 +765,7 @@ static const struct of_device_id cmdq_of_ids[] = {
{.compatible = "mediatek,mt8195-gce", .data = (void *)&gce_plat_mt8195},
{}
};
MODULE_DEVICE_TABLE(of, cmdq_of_ids);
static struct platform_driver cmdq_drv = {
.probe = cmdq_probe,
@@ -19,7 +19,6 @@
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/omap-mailbox.h>
#include <linux/mailbox_controller.h>
#include <linux/mailbox_client.h>
@@ -51,6 +50,11 @@
#define MBOX_INTR_CFG_TYPE1 0
#define MBOX_INTR_CFG_TYPE2 1
typedef enum {
IRQ_TX = 1,
IRQ_RX = 2,
} omap_mbox_irq_t;
struct omap_mbox_fifo {
unsigned long msg;
unsigned long fifo_stat;
@@ -61,14 +65,6 @@ struct omap_mbox_fifo {
u32 intr_bit;
};
struct omap_mbox_queue {
spinlock_t lock;
struct kfifo fifo;
struct work_struct work;
struct omap_mbox *mbox;
bool full;
};
struct omap_mbox_match_data {
u32 intr_type;
};
@@ -81,29 +77,11 @@ struct omap_mbox_device {
u32 num_users;
u32 num_fifos;
u32 intr_type;
struct omap_mbox **mboxes;
struct mbox_controller controller;
struct list_head elem;
};
struct omap_mbox_fifo_info {
int tx_id;
int tx_usr;
int tx_irq;
int rx_id;
int rx_usr;
int rx_irq;
const char *name;
bool send_no_irq;
};
struct omap_mbox {
const char *name;
int irq;
struct omap_mbox_queue *rxq;
struct device *dev;
struct omap_mbox_device *parent;
struct omap_mbox_fifo tx_fifo;
struct omap_mbox_fifo rx_fifo;
@@ -112,22 +90,6 @@ struct omap_mbox {
bool send_no_irq;
};
/* global variables for the mailbox devices */
static DEFINE_MUTEX(omap_mbox_devices_lock);
static LIST_HEAD(omap_mbox_devices);
static unsigned int mbox_kfifo_size = CONFIG_OMAP_MBOX_KFIFO_SIZE;
module_param(mbox_kfifo_size, uint, S_IRUGO);
MODULE_PARM_DESC(mbox_kfifo_size, "Size of omap's mailbox kfifo (bytes)");
static struct omap_mbox *mbox_chan_to_omap_mbox(struct mbox_chan *chan)
{
if (!chan || !chan->con_priv)
return NULL;
return (struct omap_mbox *)chan->con_priv;
}
static inline
unsigned int mbox_read_reg(struct omap_mbox_device *mdev, size_t ofs)
{
@@ -197,7 +159,7 @@ static int is_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
return (int)(enable & status & bit);
}
static void _omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
static void omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
{
u32 l;
struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ?
@@ -210,7 +172,7 @@ static void _omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
mbox_write_reg(mbox->parent, l, irqenable);
}
static void _omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
static void omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
{
struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ?
&mbox->tx_fifo : &mbox->rx_fifo;
@@ -227,87 +189,27 @@ static void _omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
mbox_write_reg(mbox->parent, bit, irqdisable);
}
void omap_mbox_enable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
if (WARN_ON(!mbox))
return;
_omap_mbox_enable_irq(mbox, irq);
}
EXPORT_SYMBOL(omap_mbox_enable_irq);
void omap_mbox_disable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
if (WARN_ON(!mbox))
return;
_omap_mbox_disable_irq(mbox, irq);
}
EXPORT_SYMBOL(omap_mbox_disable_irq);
/*
* Message receiver (workqueue)
*/
static void mbox_rx_work(struct work_struct *work)
{
struct omap_mbox_queue *mq =
container_of(work, struct omap_mbox_queue, work);
mbox_msg_t data;
u32 msg;
int len;
while (kfifo_len(&mq->fifo) >= sizeof(msg)) {
len = kfifo_out(&mq->fifo, (unsigned char *)&msg, sizeof(msg));
WARN_ON(len != sizeof(msg));
data = msg;
mbox_chan_received_data(mq->mbox->chan, (void *)data);
spin_lock_irq(&mq->lock);
if (mq->full) {
mq->full = false;
_omap_mbox_enable_irq(mq->mbox, IRQ_RX);
}
spin_unlock_irq(&mq->lock);
}
}
/*
* Mailbox interrupt handler
*/
static void __mbox_tx_interrupt(struct omap_mbox *mbox)
{
_omap_mbox_disable_irq(mbox, IRQ_TX);
omap_mbox_disable_irq(mbox, IRQ_TX);
ack_mbox_irq(mbox, IRQ_TX);
mbox_chan_txdone(mbox->chan, 0);
}
static void __mbox_rx_interrupt(struct omap_mbox *mbox)
{
struct omap_mbox_queue *mq = mbox->rxq;
u32 msg;
int len;
while (!mbox_fifo_empty(mbox)) {
if (unlikely(kfifo_avail(&mq->fifo) < sizeof(msg))) {
_omap_mbox_disable_irq(mbox, IRQ_RX);
mq->full = true;
goto nomem;
}
msg = mbox_fifo_read(mbox);
len = kfifo_in(&mq->fifo, (unsigned char *)&msg, sizeof(msg));
WARN_ON(len != sizeof(msg));
mbox_chan_received_data(mbox->chan, (void *)(uintptr_t)msg);
}
/* no more messages in the fifo. clear IRQ source. */
/* clear IRQ source. */
ack_mbox_irq(mbox, IRQ_RX);
nomem:
schedule_work(&mbox->rxq->work);
}
static irqreturn_t mbox_interrupt(int irq, void *p)
@@ -323,188 +225,34 @@ static irqreturn_t mbox_interrupt(int irq, void *p)
return IRQ_HANDLED;
}
static struct omap_mbox_queue *mbox_queue_alloc(struct omap_mbox *mbox,
void (*work)(struct work_struct *))
{
struct omap_mbox_queue *mq;
if (!work)
return NULL;
mq = kzalloc(sizeof(*mq), GFP_KERNEL);
if (!mq)
return NULL;
spin_lock_init(&mq->lock);
if (kfifo_alloc(&mq->fifo, mbox_kfifo_size, GFP_KERNEL))
goto error;
INIT_WORK(&mq->work, work);
return mq;
error:
kfree(mq);
return NULL;
}
static void mbox_queue_free(struct omap_mbox_queue *q)
{
kfifo_free(&q->fifo);
kfree(q);
}
static int omap_mbox_startup(struct omap_mbox *mbox)
{
int ret = 0;
struct omap_mbox_queue *mq;
mq = mbox_queue_alloc(mbox, mbox_rx_work);
if (!mq)
return -ENOMEM;
mbox->rxq = mq;
mq->mbox = mbox;
ret = request_irq(mbox->irq, mbox_interrupt, IRQF_SHARED,
mbox->name, mbox);
ret = request_threaded_irq(mbox->irq, NULL, mbox_interrupt,
IRQF_ONESHOT, mbox->name, mbox);
if (unlikely(ret)) {
pr_err("failed to register mailbox interrupt:%d\n", ret);
goto fail_request_irq;
return ret;
}
if (mbox->send_no_irq)
mbox->chan->txdone_method = TXDONE_BY_ACK;
_omap_mbox_enable_irq(mbox, IRQ_RX);
omap_mbox_enable_irq(mbox, IRQ_RX);
return 0;
fail_request_irq:
mbox_queue_free(mbox->rxq);
return ret;
}
static void omap_mbox_fini(struct omap_mbox *mbox)
{
_omap_mbox_disable_irq(mbox, IRQ_RX);
omap_mbox_disable_irq(mbox, IRQ_RX);
free_irq(mbox->irq, mbox);
flush_work(&mbox->rxq->work);
mbox_queue_free(mbox->rxq);
}
static struct omap_mbox *omap_mbox_device_find(struct omap_mbox_device *mdev,
const char *mbox_name)
{
struct omap_mbox *_mbox, *mbox = NULL;
struct omap_mbox **mboxes = mdev->mboxes;
int i;
if (!mboxes)
return NULL;
for (i = 0; (_mbox = mboxes[i]); i++) {
if (!strcmp(_mbox->name, mbox_name)) {
mbox = _mbox;
break;
}
}
return mbox;
}
struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl,
const char *chan_name)
{
struct device *dev = cl->dev;
struct omap_mbox *mbox = NULL;
struct omap_mbox_device *mdev;
int ret;
if (!dev)
return ERR_PTR(-ENODEV);
if (dev->of_node) {
pr_err("%s: please use mbox_request_channel(), this API is supported only for OMAP non-DT usage\n",
__func__);
return ERR_PTR(-ENODEV);
}
mutex_lock(&omap_mbox_devices_lock);
list_for_each_entry(mdev, &omap_mbox_devices, elem) {
mbox = omap_mbox_device_find(mdev, chan_name);
if (mbox)
break;
}
mutex_unlock(&omap_mbox_devices_lock);
if (!mbox || !mbox->chan)
return ERR_PTR(-ENOENT);
ret = mbox_bind_client(mbox->chan, cl);
if (ret)
return ERR_PTR(ret);
return mbox->chan;
}
EXPORT_SYMBOL(omap_mbox_request_channel);
static struct class omap_mbox_class = { .name = "mbox", };
static int omap_mbox_register(struct omap_mbox_device *mdev)
{
int ret;
int i;
struct omap_mbox **mboxes;
if (!mdev || !mdev->mboxes)
return -EINVAL;
mboxes = mdev->mboxes;
for (i = 0; mboxes[i]; i++) {
struct omap_mbox *mbox = mboxes[i];
mbox->dev = device_create(&omap_mbox_class, mdev->dev,
0, mbox, "%s", mbox->name);
if (IS_ERR(mbox->dev)) {
ret = PTR_ERR(mbox->dev);
goto err_out;
}
}
mutex_lock(&omap_mbox_devices_lock);
list_add(&mdev->elem, &omap_mbox_devices);
mutex_unlock(&omap_mbox_devices_lock);
ret = devm_mbox_controller_register(mdev->dev, &mdev->controller);
err_out:
if (ret) {
while (i--)
device_unregister(mboxes[i]->dev);
}
return ret;
}
static int omap_mbox_unregister(struct omap_mbox_device *mdev)
{
int i;
struct omap_mbox **mboxes;
if (!mdev || !mdev->mboxes)
return -EINVAL;
mutex_lock(&omap_mbox_devices_lock);
list_del(&mdev->elem);
mutex_unlock(&omap_mbox_devices_lock);
mboxes = mdev->mboxes;
for (i = 0; mboxes[i]; i++)
device_unregister(mboxes[i]->dev);
return 0;
}
static int omap_mbox_chan_startup(struct mbox_chan *chan)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
struct omap_mbox *mbox = chan->con_priv;
struct omap_mbox_device *mdev = mbox->parent;
int ret = 0;
@@ -519,7 +267,7 @@ static int omap_mbox_chan_startup(struct mbox_chan *chan)
static void omap_mbox_chan_shutdown(struct mbox_chan *chan)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
struct omap_mbox *mbox = chan->con_priv;
struct omap_mbox_device *mdev = mbox->parent;
mutex_lock(&mdev->cfg_lock);
@@ -530,41 +278,40 @@ static void omap_mbox_chan_shutdown(struct mbox_chan *chan)
static int omap_mbox_chan_send_noirq(struct omap_mbox *mbox, u32 msg)
{
int ret = -EBUSY;
if (mbox_fifo_full(mbox))
return -EBUSY;
if (!mbox_fifo_full(mbox)) {
_omap_mbox_enable_irq(mbox, IRQ_RX);
omap_mbox_enable_irq(mbox, IRQ_RX);
mbox_fifo_write(mbox, msg);
ret = 0;
_omap_mbox_disable_irq(mbox, IRQ_RX);
omap_mbox_disable_irq(mbox, IRQ_RX);
/* we must read and ack the interrupt directly from here */
mbox_fifo_read(mbox);
ack_mbox_irq(mbox, IRQ_RX);
}
return ret;
return 0;
}
static int omap_mbox_chan_send(struct omap_mbox *mbox, u32 msg)
{
int ret = -EBUSY;
if (mbox_fifo_full(mbox)) {
/* always enable the interrupt */
omap_mbox_enable_irq(mbox, IRQ_TX);
return -EBUSY;
}
if (!mbox_fifo_full(mbox)) {
mbox_fifo_write(mbox, msg);
ret = 0;
}
/* always enable the interrupt */
_omap_mbox_enable_irq(mbox, IRQ_TX);
return ret;
omap_mbox_enable_irq(mbox, IRQ_TX);
return 0;
}
static int omap_mbox_chan_send_data(struct mbox_chan *chan, void *data)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
struct omap_mbox *mbox = chan->con_priv;
int ret;
u32 msg = omap_mbox_message(data);
u32 msg = (u32)(uintptr_t)(data);
if (!mbox)
return -EINVAL;
@@ -666,8 +413,9 @@ static struct mbox_chan *omap_mbox_of_xlate(struct mbox_controller *controller,
struct device_node *node;
struct omap_mbox_device *mdev;
struct omap_mbox *mbox;
int i;
mdev = container_of(controller, struct omap_mbox_device, controller);
mdev = dev_get_drvdata(controller->dev);
if (WARN_ON(!mdev))
return ERR_PTR(-EINVAL);
@@ -678,22 +426,29 @@
return ERR_PTR(-ENODEV);
}
mbox = omap_mbox_device_find(mdev, node->name);
for (i = 0; i < controller->num_chans; i++) {
mbox = controller->chans[i].con_priv;
if (!strcmp(mbox->name, node->name)) {
of_node_put(node);
return &controller->chans[i];
}
}
of_node_put(node);
return mbox ? mbox->chan : ERR_PTR(-ENOENT);
return ERR_PTR(-ENOENT);
}
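The reworked omap_mbox_of_xlate() above drops the old omap_mbox_device_find() list walk and instead scans the controller's own channel array, comparing each channel's con_priv name against the DT child node name. A minimal userspace sketch of that lookup (the struct and names are invented for illustration):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for a channel whose con_priv carries a name, as in the
 * reworked driver. */
struct chan_sketch { const char *name; };

/* Linear name lookup over the channel array; -1 stands in for the
 * driver's ERR_PTR(-ENOENT). */
static int find_chan(const struct chan_sketch *chans, int n, const char *name)
{
	for (int i = 0; i < n; i++)
		if (!strcmp(chans[i].name, name))
			return i;
	return -1;
}

static int demo_lookup(void)
{
	const struct chan_sketch chans[] = { { "mbox-ipu" }, { "mbox-dsp" } };

	return find_chan(chans, 2, "mbox-dsp");
}
```

With the channel count bounded by the DT child count, a linear scan at xlate time is cheap and removes the need for the driver's private mboxes list.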
static int omap_mbox_probe(struct platform_device *pdev)
{
int ret;
struct mbox_chan *chnls;
struct omap_mbox **list, *mbox, *mboxblk;
struct omap_mbox_fifo_info *finfo, *finfoblk;
struct omap_mbox *mbox;
struct omap_mbox_device *mdev;
struct omap_mbox_fifo *fifo;
struct device_node *node = pdev->dev.of_node;
struct device_node *child;
const struct omap_mbox_match_data *match_data;
struct mbox_controller *controller;
u32 intr_type, info_count;
u32 num_users, num_fifos;
u32 tmp[3];
@@ -722,40 +477,6 @@ static int omap_mbox_probe(struct platform_device *pdev)
return -ENODEV;
}
finfoblk = devm_kcalloc(&pdev->dev, info_count, sizeof(*finfoblk),
GFP_KERNEL);
if (!finfoblk)
return -ENOMEM;
finfo = finfoblk;
child = NULL;
for (i = 0; i < info_count; i++, finfo++) {
child = of_get_next_available_child(node, child);
ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
finfo->tx_id = tmp[0];
finfo->tx_irq = tmp[1];
finfo->tx_usr = tmp[2];
ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
finfo->rx_id = tmp[0];
finfo->rx_irq = tmp[1];
finfo->rx_usr = tmp[2];
finfo->name = child->name;
finfo->send_no_irq = of_property_read_bool(child, "ti,mbox-send-noirq");
if (finfo->tx_id >= num_fifos || finfo->rx_id >= num_fifos ||
finfo->tx_usr >= num_users || finfo->rx_usr >= num_users)
return -EINVAL;
}
mdev = devm_kzalloc(&pdev->dev, sizeof(*mdev), GFP_KERNEL);
if (!mdev)
return -ENOMEM;
@@ -769,52 +490,67 @@ static int omap_mbox_probe(struct platform_device *pdev)
if (!mdev->irq_ctx)
return -ENOMEM;
/* allocate one extra for marking end of list */
list = devm_kcalloc(&pdev->dev, info_count + 1, sizeof(*list),
GFP_KERNEL);
if (!list)
return -ENOMEM;
chnls = devm_kcalloc(&pdev->dev, info_count + 1, sizeof(*chnls),
GFP_KERNEL);
if (!chnls)
return -ENOMEM;
mboxblk = devm_kcalloc(&pdev->dev, info_count, sizeof(*mbox),
GFP_KERNEL);
if (!mboxblk)
child = NULL;
for (i = 0; i < info_count; i++) {
int tx_id, tx_irq, tx_usr;
int rx_id, rx_usr;
mbox = devm_kzalloc(&pdev->dev, sizeof(*mbox), GFP_KERNEL);
if (!mbox)
return -ENOMEM;
mbox = mboxblk;
finfo = finfoblk;
for (i = 0; i < info_count; i++, finfo++) {
child = of_get_next_available_child(node, child);
ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
tx_id = tmp[0];
tx_irq = tmp[1];
tx_usr = tmp[2];
ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
rx_id = tmp[0];
/* rx_irq = tmp[1]; */
rx_usr = tmp[2];
if (tx_id >= num_fifos || rx_id >= num_fifos ||
tx_usr >= num_users || rx_usr >= num_users)
return -EINVAL;
fifo = &mbox->tx_fifo;
fifo->msg = MAILBOX_MESSAGE(finfo->tx_id);
fifo->fifo_stat = MAILBOX_FIFOSTATUS(finfo->tx_id);
fifo->intr_bit = MAILBOX_IRQ_NOTFULL(finfo->tx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, finfo->tx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, finfo->tx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, finfo->tx_usr);
fifo->msg = MAILBOX_MESSAGE(tx_id);
fifo->fifo_stat = MAILBOX_FIFOSTATUS(tx_id);
fifo->intr_bit = MAILBOX_IRQ_NOTFULL(tx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, tx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, tx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, tx_usr);
fifo = &mbox->rx_fifo;
fifo->msg = MAILBOX_MESSAGE(finfo->rx_id);
fifo->msg_stat = MAILBOX_MSGSTATUS(finfo->rx_id);
fifo->intr_bit = MAILBOX_IRQ_NEWMSG(finfo->rx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, finfo->rx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, finfo->rx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, finfo->rx_usr);
mbox->send_no_irq = finfo->send_no_irq;
fifo->msg = MAILBOX_MESSAGE(rx_id);
fifo->msg_stat = MAILBOX_MSGSTATUS(rx_id);
fifo->intr_bit = MAILBOX_IRQ_NEWMSG(rx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, rx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, rx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, rx_usr);
mbox->send_no_irq = of_property_read_bool(child, "ti,mbox-send-noirq");
mbox->intr_type = intr_type;
mbox->parent = mdev;
mbox->name = finfo->name;
mbox->irq = platform_get_irq(pdev, finfo->tx_irq);
mbox->name = child->name;
mbox->irq = platform_get_irq(pdev, tx_irq);
if (mbox->irq < 0)
return mbox->irq;
mbox->chan = &chnls[i];
chnls[i].con_priv = mbox;
list[i] = mbox++;
}
mutex_init(&mdev->cfg_lock);
@@ -822,28 +558,30 @@ static int omap_mbox_probe(struct platform_device *pdev)
mdev->num_users = num_users;
mdev->num_fifos = num_fifos;
mdev->intr_type = intr_type;
mdev->mboxes = list;
controller = devm_kzalloc(&pdev->dev, sizeof(*controller), GFP_KERNEL);
if (!controller)
return -ENOMEM;
/*
* OMAP/K3 Mailbox IP does not have a Tx-Done IRQ, but rather a Tx-Ready
* IRQ, which is needed to run the Tx state machine
*/
mdev->controller.txdone_irq = true;
mdev->controller.dev = mdev->dev;
mdev->controller.ops = &omap_mbox_chan_ops;
mdev->controller.chans = chnls;
mdev->controller.num_chans = info_count;
mdev->controller.of_xlate = omap_mbox_of_xlate;
ret = omap_mbox_register(mdev);
controller->txdone_irq = true;
controller->dev = mdev->dev;
controller->ops = &omap_mbox_chan_ops;
controller->chans = chnls;
controller->num_chans = info_count;
controller->of_xlate = omap_mbox_of_xlate;
ret = devm_mbox_controller_register(mdev->dev, controller);
if (ret)
return ret;
platform_set_drvdata(pdev, mdev);
pm_runtime_enable(mdev->dev);
devm_pm_runtime_enable(mdev->dev);
ret = pm_runtime_resume_and_get(mdev->dev);
if (ret < 0)
goto unregister;
return ret;
/*
* just print the raw revision register, the format is not
@@ -854,61 +592,20 @@ static int omap_mbox_probe(struct platform_device *pdev)
ret = pm_runtime_put_sync(mdev->dev);
if (ret < 0 && ret != -ENOSYS)
goto unregister;
devm_kfree(&pdev->dev, finfoblk);
return 0;
unregister:
pm_runtime_disable(mdev->dev);
omap_mbox_unregister(mdev);
return ret;
}
static void omap_mbox_remove(struct platform_device *pdev)
{
struct omap_mbox_device *mdev = platform_get_drvdata(pdev);
pm_runtime_disable(mdev->dev);
omap_mbox_unregister(mdev);
return 0;
}
static struct platform_driver omap_mbox_driver = {
.probe = omap_mbox_probe,
.remove_new = omap_mbox_remove,
.driver = {
.name = "omap-mailbox",
.pm = &omap_mbox_pm_ops,
.of_match_table = of_match_ptr(omap_mailbox_of_match),
},
};
static int __init omap_mbox_init(void)
{
int err;
err = class_register(&omap_mbox_class);
if (err)
return err;
/* kfifo size sanity check: alignment and minimal size */
mbox_kfifo_size = ALIGN(mbox_kfifo_size, sizeof(u32));
mbox_kfifo_size = max_t(unsigned int, mbox_kfifo_size, sizeof(u32));
err = platform_driver_register(&omap_mbox_driver);
if (err)
class_unregister(&omap_mbox_class);
return err;
}
subsys_initcall(omap_mbox_init);
static void __exit omap_mbox_exit(void)
{
platform_driver_unregister(&omap_mbox_driver);
class_unregister(&omap_mbox_class);
}
module_exit(omap_mbox_exit);
module_platform_driver(omap_mbox_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("omap mailbox: interrupt driven messaging");
@@ -6,9 +6,11 @@
*/
#include <linux/arm-smccc.h>
#include <linux/cpuhotplug.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/mailbox_controller.h>
@@ -16,6 +18,7 @@
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
/* IPI agent ID any */
@@ -52,6 +55,15 @@
#define IPI_MB_CHNL_TX 0 /* IPI mailbox TX channel */
#define IPI_MB_CHNL_RX 1 /* IPI mailbox RX channel */
/* IPI Message Buffer Information */
#define RESP_OFFSET 0x20U
#define DEST_OFFSET 0x40U
#define IPI_BUF_SIZE 0x20U
#define DST_BIT_POS 9U
#define SRC_BITMASK GENMASK(11, 8)
#define MAX_SGI 16
/**
* struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel
* @is_opened: indicate if the IPI channel is opened
@@ -72,6 +84,10 @@ struct zynqmp_ipi_mchan {
unsigned int chan_type;
};
struct zynqmp_ipi_mbox;
typedef int (*setup_ipi_fn)(struct zynqmp_ipi_mbox *ipi_mbox, struct device_node *node);
/**
* struct zynqmp_ipi_mbox - Description of a ZynqMP IPI mailbox
* platform data.
@@ -81,6 +97,7 @@ struct zynqmp_ipi_mchan {
* @remote_id: remote IPI agent ID
* @mbox: mailbox Controller
* @mchans: array for channels, tx channel and rx channel.
* @setup_ipi_fn: Function Pointer to set up IPI Channels
*/
struct zynqmp_ipi_mbox {
struct zynqmp_ipi_pdata *pdata;
@@ -88,6 +105,7 @@ struct zynqmp_ipi_mbox {
u32 remote_id;
struct mbox_controller mbox;
struct zynqmp_ipi_mchan mchans[2];
setup_ipi_fn setup_ipi_fn;
};
/**
@@ -98,6 +116,7 @@ struct zynqmp_ipi_mbox {
* @irq: IPI agent interrupt ID
* @method: IPI SMC or HVC is going to be used
* @local_id: local IPI agent ID
* @virq_sgi: IRQ number mapped to SGI
* @num_mboxes: number of mailboxes of this IPI agent
* @ipi_mboxes: IPI mailboxes of this IPI agent
*/
@@ -106,10 +125,13 @@ struct zynqmp_ipi_pdata {
int irq;
unsigned int method;
u32 local_id;
int virq_sgi;
int num_mboxes;
struct zynqmp_ipi_mbox ipi_mboxes[] __counted_by(num_mboxes);
};
static DEFINE_PER_CPU(struct zynqmp_ipi_pdata *, per_cpu_pdata);
static struct device_driver zynqmp_ipi_mbox_driver = {
.owner = THIS_MODULE,
.name = "zynqmp-ipi-mbox",
@@ -163,9 +185,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) {
if (mchan->is_opened) {
msg = mchan->rx_buf;
if (msg) {
msg->len = mchan->req_buf_size;
memcpy_fromio(msg->data, mchan->req_buf,
msg->len);
}
mbox_chan_received_data(chan, (void *)msg);
status = IRQ_HANDLED;
}
@@ -174,6 +198,14 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
return status;
}
static irqreturn_t zynqmp_sgi_interrupt(int irq, void *data)
{
struct zynqmp_ipi_pdata **pdata_ptr = data;
struct zynqmp_ipi_pdata *pdata = *pdata_ptr;
return zynqmp_ipi_interrupt(irq, pdata);
}
/**
* zynqmp_ipi_peek_data - Peek to see if there are any rx messages.
*
@@ -275,26 +307,26 @@ static int zynqmp_ipi_send_data(struct mbox_chan *chan, void *data)
if (mchan->chan_type == IPI_MB_CHNL_TX) {
/* Send request message */
if (msg && msg->len > mchan->req_buf_size) {
if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) {
dev_err(dev, "channel %d message length %u > max %lu\n",
mchan->chan_type, (unsigned int)msg->len,
mchan->req_buf_size);
return -EINVAL;
}
if (msg && msg->len)
if (msg && msg->len && mchan->req_buf)
memcpy_toio(mchan->req_buf, msg->data, msg->len);
/* Kick IPI mailbox to send message */
arg0 = SMC_IPI_MAILBOX_NOTIFY;
zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res);
} else {
/* Send response message */
if (msg && msg->len > mchan->resp_buf_size) {
if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) {
dev_err(dev, "channel %d message length %u > max %lu\n",
mchan->chan_type, (unsigned int)msg->len,
mchan->resp_buf_size);
return -EINVAL;
}
if (msg && msg->len)
if (msg && msg->len && mchan->resp_buf)
memcpy_toio(mchan->resp_buf, msg->data, msg->len);
arg0 = SMC_IPI_MAILBOX_ACK;
zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK,
@@ -415,12 +447,6 @@ static struct mbox_chan *zynqmp_ipi_of_xlate(struct mbox_controller *mbox,
return chan;
}
static const struct of_device_id zynqmp_ipi_of_match[] = {
{ .compatible = "xlnx,zynqmp-ipi-mailbox" },
{},
};
MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
/**
* zynqmp_ipi_mbox_get_buf_res - Get buffer resource from the IPI dev node
*
@@ -470,12 +496,9 @@ static void zynqmp_ipi_mbox_dev_release(struct device *dev)
static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
struct device_node *node)
{
struct zynqmp_ipi_mchan *mchan;
struct mbox_chan *chans;
struct mbox_controller *mbox;
struct resource res;
struct device *dev, *mdev;
const char *name;
int ret;
dev = ipi_mbox->pdata->dev;
@@ -495,6 +518,73 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
}
mdev = &ipi_mbox->dev;
/* Get the IPI remote agent ID */
ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
if (ret < 0) {
dev_err(dev, "No IPI remote ID is specified.\n");
return ret;
}
ret = ipi_mbox->setup_ipi_fn(ipi_mbox, node);
if (ret) {
dev_err(dev, "Failed to set up IPI Buffers.\n");
return ret;
}
mbox = &ipi_mbox->mbox;
mbox->dev = mdev;
mbox->ops = &zynqmp_ipi_chan_ops;
mbox->num_chans = 2;
mbox->txdone_irq = false;
mbox->txdone_poll = true;
mbox->txpoll_period = 5;
mbox->of_xlate = zynqmp_ipi_of_xlate;
chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
if (!chans)
return -ENOMEM;
mbox->chans = chans;
chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
ret = devm_mbox_controller_register(mdev, mbox);
if (ret)
dev_err(mdev,
"Failed to register mbox_controller(%d)\n", ret);
else
dev_info(mdev,
"Registered ZynqMP IPI mbox with TX/RX channels.\n");
return ret;
}
/**
* zynqmp_ipi_setup - set up IPI Buffers for classic flow
*
* @ipi_mbox: pointer to IPI mailbox private data structure
* @node: IPI mailbox device node
*
* This will be used to set up IPI Buffers for ZynqMP SOC if user
* wishes to use classic driver usage model on new SOC's with only
* buffered IPIs.
*
* Note that bufferless IPIs and mixed usage of buffered and bufferless
* IPIs are not supported with this flow.
*
* This will be invoked with compatible string "xlnx,zynqmp-ipi-mailbox".
*
* Return: 0 for success, negative value for failure
*/
static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
struct device_node *node)
{
struct zynqmp_ipi_mchan *mchan;
struct device *mdev;
struct resource res;
const char *name;
int ret;
mdev = &ipi_mbox->dev;
mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
name = "local_request_region";
ret = zynqmp_ipi_mbox_get_buf_res(node, name, &res);
@@ -569,37 +659,216 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
if (!mchan->rx_buf)
return -ENOMEM;
/* Get the IPI remote agent ID */
ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
if (ret < 0) {
dev_err(dev, "No IPI remote ID is specified.\n");
return ret;
return 0;
}
/**
* versal_ipi_setup - Set up IPIs to support mixed usage of
* Buffered and Bufferless IPIs.
*
* @ipi_mbox: pointer to IPI mailbox private data structure
* @node: IPI mailbox device node
*
* Return: 0 for success, negative value for failure
*/
static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
struct device_node *node)
{
struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan;
struct resource host_res, remote_res;
struct device_node *parent_node;
int host_idx, remote_idx;
struct device *mdev;
tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
parent_node = of_get_parent(node);
mdev = &ipi_mbox->dev;
host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res);
remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res);
/*
* Only set up buffers if both sides claim to have msg buffers.
* This is because each buffered IPI's corresponding msg buffers
* are reserved for use by other buffered IPI's.
*/
if (!host_idx && !remote_idx) {
u32 host_src, host_dst, remote_src, remote_dst;
u32 buff_sz;
buff_sz = resource_size(&host_res);
host_src = host_res.start & SRC_BITMASK;
remote_src = remote_res.start & SRC_BITMASK;
host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET;
remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET;
/* Validate that IPI IDs is within IPI Message buffer space. */
if (host_dst >= buff_sz || remote_dst >= buff_sz) {
dev_err(mdev,
"Invalid IPI Message buffer values: %x %x\n",
host_dst, remote_dst);
return -EINVAL;
}
mbox = &ipi_mbox->mbox;
mbox->dev = mdev;
mbox->ops = &zynqmp_ipi_chan_ops;
mbox->num_chans = 2;
mbox->txdone_irq = false;
mbox->txdone_poll = true;
mbox->txpoll_period = 5;
mbox->of_xlate = zynqmp_ipi_of_xlate;
chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
if (!chans)
tx_mchan->req_buf = devm_ioremap(mdev,
host_res.start | remote_dst,
IPI_BUF_SIZE);
if (!tx_mchan->req_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
mbox->chans = chans;
chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
ret = devm_mbox_controller_register(mdev, mbox);
if (ret)
dev_err(mdev,
"Failed to register mbox_controller(%d)\n", ret);
else
dev_info(mdev,
"Registered ZynqMP IPI mbox with TX/RX channels.\n");
}
tx_mchan->resp_buf = devm_ioremap(mdev,
(remote_res.start | host_dst) +
RESP_OFFSET, IPI_BUF_SIZE);
if (!tx_mchan->resp_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
rx_mchan->req_buf = devm_ioremap(mdev,
remote_res.start | host_dst,
IPI_BUF_SIZE);
if (!rx_mchan->req_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
rx_mchan->resp_buf = devm_ioremap(mdev,
(host_res.start | remote_dst) +
RESP_OFFSET, IPI_BUF_SIZE);
if (!rx_mchan->resp_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
tx_mchan->resp_buf_size = IPI_BUF_SIZE;
tx_mchan->req_buf_size = IPI_BUF_SIZE;
tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
sizeof(struct zynqmp_ipi_message),
GFP_KERNEL);
if (!tx_mchan->rx_buf)
return -ENOMEM;
rx_mchan->resp_buf_size = IPI_BUF_SIZE;
rx_mchan->req_buf_size = IPI_BUF_SIZE;
rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
sizeof(struct zynqmp_ipi_message),
GFP_KERNEL);
if (!rx_mchan->rx_buf)
return -ENOMEM;
}
return 0;
}
static int xlnx_mbox_cpuhp_start(unsigned int cpu)
{
struct zynqmp_ipi_pdata *pdata;
pdata = get_cpu_var(per_cpu_pdata);
put_cpu_var(per_cpu_pdata);
enable_percpu_irq(pdata->virq_sgi, IRQ_TYPE_NONE);
return 0;
}
static int xlnx_mbox_cpuhp_down(unsigned int cpu)
{
struct zynqmp_ipi_pdata *pdata;
pdata = get_cpu_var(per_cpu_pdata);
put_cpu_var(per_cpu_pdata);
disable_percpu_irq(pdata->virq_sgi);
return 0;
}
static void xlnx_disable_percpu_irq(void *data)
{
struct zynqmp_ipi_pdata *pdata;
pdata = *this_cpu_ptr(&per_cpu_pdata);
disable_percpu_irq(pdata->virq_sgi);
}
static int xlnx_mbox_init_sgi(struct platform_device *pdev,
int sgi_num,
struct zynqmp_ipi_pdata *pdata)
{
int ret = 0;
int cpu;
/*
* IRQ related structures are used for the following:
* for each SGI interrupt ensure its mapped by GIC IRQ domain
* and that each corresponding linux IRQ for the HW IRQ has
* a handler for when receiving an interrupt from the remote
* processor.
*/
struct irq_domain *domain;
struct irq_fwspec sgi_fwspec;
struct device_node *interrupt_parent = NULL;
struct device *dev = &pdev->dev;
/* Find GIC controller to map SGIs. */
interrupt_parent = of_irq_find_parent(dev->of_node);
if (!interrupt_parent) {
dev_err(&pdev->dev, "Failed to find property for Interrupt parent\n");
return -EINVAL;
}
/* Each SGI needs to be associated with GIC's IRQ domain. */
domain = irq_find_host(interrupt_parent);
of_node_put(interrupt_parent);
/* Each mapping needs GIC domain when finding IRQ mapping. */
sgi_fwspec.fwnode = domain->fwnode;
/*
* When irq domain looks at mapping each arg is as follows:
* 3 args for: interrupt type (SGI), interrupt # (set later), type
*/
sgi_fwspec.param_count = 1;
/* Set SGI's hwirq */
sgi_fwspec.param[0] = sgi_num;
pdata->virq_sgi = irq_create_fwspec_mapping(&sgi_fwspec);
for_each_possible_cpu(cpu)
per_cpu(per_cpu_pdata, cpu) = pdata;
ret = request_percpu_irq(pdata->virq_sgi, zynqmp_sgi_interrupt, pdev->name,
&per_cpu_pdata);
WARN_ON(ret);
if (ret) {
irq_dispose_mapping(pdata->virq_sgi);
return ret;
}
irq_to_desc(pdata->virq_sgi);
irq_set_status_flags(pdata->virq_sgi, IRQ_PER_CPU);
/* Setup function for the CPU hot-plug cases */
cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mailbox/sgi:starting",
xlnx_mbox_cpuhp_start, xlnx_mbox_cpuhp_down);
return ret;
}
static void xlnx_mbox_cleanup_sgi(struct zynqmp_ipi_pdata *pdata)
{
cpuhp_remove_state(CPUHP_AP_ONLINE_DYN);
on_each_cpu(xlnx_disable_percpu_irq, NULL, 1);
irq_clear_status_flags(pdata->virq_sgi, IRQ_PER_CPU);
free_percpu_irq(pdata->virq_sgi, &per_cpu_pdata);
irq_dispose_mapping(pdata->virq_sgi);
}
/**
@@ -612,6 +881,9 @@ static void zynqmp_ipi_free_mboxes(struct zynqmp_ipi_pdata *pdata)
struct zynqmp_ipi_mbox *ipi_mbox;
int i;
if (pdata->irq < MAX_SGI)
xlnx_mbox_cleanup_sgi(pdata);
i = pdata->num_mboxes;
for (; i >= 0; i--) {
ipi_mbox = &pdata->ipi_mboxes[i];
@@ -627,9 +899,11 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *nc, *np = pdev->dev.of_node;
struct zynqmp_ipi_pdata *pdata;
struct zynqmp_ipi_pdata __percpu *pdata;
struct of_phandle_args out_irq;
struct zynqmp_ipi_mbox *mbox;
int num_mboxes, ret = -EINVAL;
setup_ipi_fn ipi_fn;
num_mboxes = of_get_available_child_count(np);
if (num_mboxes == 0) {
@@ -650,9 +924,18 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
return ret;
}
ipi_fn = (setup_ipi_fn)device_get_match_data(&pdev->dev);
if (!ipi_fn) {
dev_err(dev,
"Mbox Compatible String is missing IPI Setup fn.\n");
return -ENODEV;
}
pdata->num_mboxes = num_mboxes;
mbox = pdata->ipi_mboxes;
mbox->setup_ipi_fn = ipi_fn;
for_each_available_child_of_node(np, nc) {
mbox->pdata = pdata;
ret = zynqmp_ipi_mbox_probe(mbox, nc);
@@ -665,7 +948,23 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
mbox++;
}
/* IPI IRQ */
ret = of_irq_parse_one(dev_of_node(dev), 0, &out_irq);
if (ret < 0) {
dev_err(dev, "failed to parse interrupts\n");
goto free_mbox_dev;
}
ret = out_irq.args[1];
/*
* If Interrupt number is in SGI range, then request SGI else request
* IPI system IRQ.
*/
if (ret < MAX_SGI) {
pdata->irq = ret;
ret = xlnx_mbox_init_sgi(pdev, pdata->irq, pdata);
if (ret)
goto free_mbox_dev;
} else {
ret = platform_get_irq(pdev, 0);
if (ret < 0)
goto free_mbox_dev;
@@ -673,6 +972,8 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
pdata->irq = ret;
ret = devm_request_irq(dev, pdata->irq, zynqmp_ipi_interrupt,
IRQF_SHARED, dev_name(dev), pdata);
}
if (ret) {
dev_err(dev, "IRQ %d is not requested successfully.\n",
pdata->irq);
@@ -695,6 +996,17 @@ static void zynqmp_ipi_remove(struct platform_device *pdev)
zynqmp_ipi_free_mboxes(pdata);
}
static const struct of_device_id zynqmp_ipi_of_match[] = {
{ .compatible = "xlnx,zynqmp-ipi-mailbox",
.data = &zynqmp_ipi_setup,
},
{ .compatible = "xlnx,versal-ipi-mailbox",
.data = &versal_ipi_setup,
},
{},
};
MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
static struct platform_driver zynqmp_ipi_driver = {
.probe = zynqmp_ipi_probe,
.remove_new = zynqmp_ipi_remove,
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
/*
* This header provides constants for the defined MHUv3 types.
*/
#ifndef _DT_BINDINGS_ARM_MHUV3_DT_H
#define _DT_BINDINGS_ARM_MHUV3_DT_H
#define DBE_EXT 0
#define FCE_EXT 1
#define FE_EXT 2
#endif /* _DT_BINDINGS_ARM_MHUV3_DT_H */
@@ -10,17 +10,4 @@ typedef uintptr_t mbox_msg_t;
#define omap_mbox_message(data) (u32)(mbox_msg_t)(data)
typedef int __bitwise omap_mbox_irq_t;
#define IRQ_TX ((__force omap_mbox_irq_t) 1)
#define IRQ_RX ((__force omap_mbox_irq_t) 2)
struct mbox_chan;
struct mbox_client;
struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl,
const char *chan_name);
void omap_mbox_enable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq);
void omap_mbox_disable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq);
#endif /* OMAP_MAILBOX_H */