Commit e2b4a5bf authored by Linus Torvalds

Merge tag 'spi-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
 "The diffstat for this release is dominated by the new Airoha driver,
  mainly as a result of this being a generally quite quiet release.
  There were a couple of cleanups in the core but nothing substantial; the
  updates here are almost all driver-specific ones.

   - Support for multi-word mode in the OMAP2 McSPI driver

   - Overhaul of the PXA2xx driver, mostly API updates

   - A number of DT binding conversions

   - Support for Airoha NAND controllers, Cirrus Logic CS35L56, Mobileye
     EYEQ5 and Renesas R8A779H0"

* tag 'spi-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (87 commits)
  spi: dw: Bail out early on unsupported target mode
  spi: Remove unneded check for orig_nents
  MAINTAINERS: repair file entry in AIROHA SPI SNFI DRIVER
  spi: pxa2xx: Drop the stale entry in documentation TOC
  spi: pxa2xx: Don't provide struct chip_data for others
  spi: pxa2xx: Remove timeout field from struct chip_data
  spi: pxa2xx: Remove DMA parameters from struct chip_data
  spi: pxa2xx: Drop struct pxa2xx_spi_chip
  spi: pxa2xx: Don't use "proxy" headers
  spi: pxa2xx: Remove outdated documentation
  spi: pxa2xx: Move contents of linux/spi/pxa2xx_spi.h to a local one
  spi: pxa2xx: Provide num-cs for Sharp PDAs via device properties
  spi: pxa2xx: Allow number of chip select pins to be read from property
  spi: dt-bindings: ti,qspi: convert to dtschema
  spi: bitbang: Add missing MODULE_DESCRIPTION()
  spi: bitbang: Use NSEC_PER_*SEC rather than hard coding
  spi: dw: Drop default number of CS setting
  spi: dw: Convert dw_spi::num_cs to u32
  spi: dw: Add a number of native CS auto-detection
  spi: dw: Convert to using BITS_TO_BYTES() macro
  ...
parents 07bbfc6a d6e7ffd4
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/airoha,en7581-snand.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SPI-NAND flash controller for Airoha ARM SoCs
maintainers:
- Lorenzo Bianconi <lorenzo@kernel.org>
allOf:
- $ref: spi-controller.yaml#
properties:
compatible:
const: airoha,en7581-snand
reg:
items:
- description: spi base address
- description: nfi2spi base address
clocks:
maxItems: 1
clock-names:
items:
- const: spi
required:
- compatible
- reg
- clocks
- clock-names
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/en7523-clk.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
spi@1fa10000 {
compatible = "airoha,en7581-snand";
reg = <0x0 0x1fa10000 0x0 0x140>,
<0x0 0x1fa11000 0x0 0x160>;
clocks = <&scuclk EN7523_CLK_SPI>;
clock-names = "spi";
#address-cells = <1>;
#size-cells = <0>;
flash@0 {
compatible = "spi-nand";
reg = <0>;
spi-tx-bus-width = <1>;
spi-rx-bus-width = <2>;
};
};
};
......@@ -68,12 +68,13 @@ properties:
- items:
- enum:
- amd,pensando-elba-qspi
- ti,k2g-qspi
- ti,am654-ospi
- intel,lgm-qspi
- xlnx,versal-ospi-1.0
- intel,socfpga-qspi
- mobileye,eyeq5-ospi
- starfive,jh7110-qspi
- ti,am654-ospi
- ti,k2g-qspi
- xlnx,versal-ospi-1.0
- const: cdns,qspi-nor
- const: cdns,qspi-nor
......@@ -145,7 +146,6 @@ required:
- reg
- interrupts
- clocks
- cdns,fifo-depth
- cdns,fifo-width
- cdns,trigger-address
- '#address-cells'
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/marvell,armada-3700-spi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Marvell Armada 3700 SPI Controller
description:
The SPI controller on Marvell Armada 3700 SoC.
maintainers:
- Kousik Sanagavarapu <five231003@gmail.com>
allOf:
- $ref: spi-controller.yaml#
properties:
compatible:
const: marvell,armada-3700-spi
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
maxItems: 1
num-cs:
maxItems: 1
required:
- compatible
- reg
- interrupts
- clocks
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
spi0: spi@10600 {
compatible = "marvell,armada-3700-spi";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x10600 0x5d>;
clocks = <&nb_perih_clk 7>;
interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
num-cs = <4>;
};
...
......@@ -54,6 +54,7 @@ properties:
- renesas,msiof-r8a779a0 # R-Car V3U
- renesas,msiof-r8a779f0 # R-Car S4-8
- renesas,msiof-r8a779g0 # R-Car V4H
- renesas,msiof-r8a779h0 # R-Car V4M
- const: renesas,rcar-gen4-msiof # generic R-Car Gen4
# compatible device
- items:
......
* Marvell Armada 3700 SPI Controller
Required Properties:
- compatible: should be "marvell,armada-3700-spi"
- reg: physical base address of the controller and length of memory mapped
region.
- interrupts: The interrupt number. The interrupt specifier format depends on
the interrupt controller and of its driver.
- clocks: Must contain the clock source, usually from the North Bridge clocks.
- num-cs: The number of chip selects that is supported by this SPI Controller
- #address-cells: should be 1.
- #size-cells: should be 0.
Example:
spi0: spi@10600 {
compatible = "marvell,armada-3700-spi";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x10600 0x5d>;
clocks = <&nb_perih_clk 7>;
interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
num-cs = <4>;
};
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/ti,qspi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI QSPI controller
maintainers:
- Kousik Sanagavarapu <five231003@gmail.com>
allOf:
- $ref: spi-controller.yaml#
properties:
compatible:
enum:
- ti,am4372-qspi
- ti,dra7xxx-qspi
reg:
items:
- description: base registers
- description: mapped memory
reg-names:
items:
- const: qspi_base
- const: qspi_mmap
clocks:
maxItems: 1
clock-names:
items:
- const: fck
interrupts:
maxItems: 1
num-cs:
minimum: 1
maximum: 4
default: 1
ti,hwmods:
description:
Name of the hwmod associated to the QSPI. This is for legacy
platforms only.
$ref: /schemas/types.yaml#/definitions/string
deprecated: true
syscon-chipselects:
description:
Handle to system control region containing QSPI chipselect register
and offset of that register.
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to system control register
- description: register offset
spi-max-frequency:
description: Maximum SPI clocking speed of the controller in Hz.
$ref: /schemas/types.yaml#/definitions/uint32
required:
- compatible
- reg
- reg-names
- clocks
- clock-names
- interrupts
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/dra7.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
spi@4b300000 {
compatible = "ti,dra7xxx-qspi";
reg = <0x4b300000 0x100>,
<0x5c000000 0x4000000>;
reg-names = "qspi_base", "qspi_mmap";
syscon-chipselects = <&scm_conf 0x558>;
#address-cells = <1>;
#size-cells = <0>;
clocks = <&l4per2_clkctrl DRA7_L4PER2_QSPI_CLKCTRL 25>;
clock-names = "fck";
num-cs = <4>;
spi-max-frequency = <48000000>;
interrupts = <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>;
};
...
TI QSPI controller.
Required properties:
- compatible : should be "ti,dra7xxx-qspi" or "ti,am4372-qspi".
- reg: Should contain QSPI registers location and length.
- reg-names: Should contain the resource reg names.
- qspi_base: Qspi configuration register Address space
- qspi_mmap: Memory mapped Address space
- (optional) qspi_ctrlmod: Control module Address space
- interrupts: should contain the qspi interrupt number.
- #address-cells, #size-cells : Must be present if the device has sub-nodes
- ti,hwmods: Name of the hwmod associated to the QSPI
Recommended properties:
- spi-max-frequency: Definition as per
Documentation/devicetree/bindings/spi/spi-bus.txt
Optional properties:
- syscon-chipselects: Handle to system control region contains QSPI
chipselect register and offset of that register.
NOTE: TI QSPI controller requires different pinmux and IODelay
parameters for Mode-0 and Mode-3 operations, which needs to be set up by
the bootloader (U-Boot). Default configuration only supports Mode-0
operation. Hence, "spi-cpol" and "spi-cpha" DT properties cannot be
specified in the slave nodes of TI QSPI controller without appropriate
modification to bootloader.
Example:
For am4372:
qspi: qspi@47900000 {
compatible = "ti,am4372-qspi";
reg = <0x47900000 0x100>, <0x30000000 0x4000000>;
reg-names = "qspi_base", "qspi_mmap";
#address-cells = <1>;
#size-cells = <0>;
spi-max-frequency = <25000000>;
ti,hwmods = "qspi";
};
For dra7xx:
qspi: qspi@4b300000 {
compatible = "ti,dra7xxx-qspi";
reg = <0x4b300000 0x100>,
<0x5c000000 0x4000000>,
reg-names = "qspi_base", "qspi_mmap";
syscon-chipselects = <&scm_conf 0x558>;
#address-cells = <1>;
#size-cells = <0>;
spi-max-frequency = <48000000>;
ti,hwmods = "qspi";
};
......@@ -10,7 +10,6 @@ Serial Peripheral Interface (SPI)
spi-summary
spidev
butterfly
pxa2xx
spi-lm70llp
spi-sc18is602
......
==============================
PXA2xx SPI on SSP driver HOWTO
==============================
This is a mini HOWTO on the pxa2xx_spi driver. The driver turns a PXA2xx
synchronous serial port into an SPI host controller
(see Documentation/spi/spi-summary.rst). The driver has the following features:
- Support for any PXA2xx and compatible SSP.
- SSP PIO and SSP DMA data transfers.
- External and Internal (SSPFRM) chip selects.
- Per peripheral device (chip) configuration.
- Full suspend, freeze, resume support.
The driver is built around a &struct spi_message FIFO serviced by a kernel
thread. The kernel thread, spi_pump_messages(), drives the message FIFO and
is responsible for queuing SPI transactions and for setting up and launching
the DMA or interrupt driven transfers, as sketched below.
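For orientation only (this sketch is not part of the patch set), a peripheral
driver hands work to that queue through the generic SPI core API; the register
layout and function name below are hypothetical::

    /* Illustrative: one write-then-read message, completed by the message pump. */
    static int example_read_reg(struct spi_device *spi, u8 reg, u8 *val)
    {
            struct spi_transfer xfers[] = {
                    { .tx_buf = &reg, .len = 1 },
                    { .rx_buf = val,  .len = 1 },
            };
            struct spi_message msg;

            spi_message_init_with_transfers(&msg, xfers, ARRAY_SIZE(xfers));
            return spi_sync(spi, &msg);     /* blocks until the pump has run it */
    }

(Production code should use DMA-safe, i.e. heap-allocated, buffers rather than
the stack variables used here for brevity.)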
Declaring PXA2xx host controllers
---------------------------------
Typically, for a legacy platform, an SPI host controller is defined in the
arch/.../mach-*/board-*.c as a "platform device". The host controller configuration
is passed to the driver via a table found in include/linux/spi/pxa2xx_spi.h::
struct pxa2xx_spi_controller {
u16 num_chipselect;
u8 enable_dma;
...
};
The "pxa2xx_spi_controller.num_chipselect" field is used to determine the number of
peripheral devices (chips) attached to this SPI host controller.
The "pxa2xx_spi_controller.enable_dma" field informs the driver that SSP DMA should
be used. This causes the driver to acquire two DMA channels: an Rx channel and
a Tx channel. The Rx channel has a higher DMA service priority than the Tx channel.
See the "PXA2xx Developer Manual" section "DMA Controller".
For the new platforms the description of the controller and peripheral devices
comes from Device Tree or ACPI.
NSSP HOST SAMPLE
----------------
Below is a sample configuration using the PXA255 NSSP for a legacy platform::
static struct resource pxa_spi_nssp_resources[] = {
[0] = {
.start = __PREG(SSCR0_P(2)), /* Start address of NSSP */
.end = __PREG(SSCR0_P(2)) + 0x2c, /* Range of registers */
.flags = IORESOURCE_MEM,
},
[1] = {
.start = IRQ_NSSP, /* NSSP IRQ */
.end = IRQ_NSSP,
.flags = IORESOURCE_IRQ,
},
};
static struct pxa2xx_spi_controller pxa_nssp_controller_info = {
.num_chipselect = 1, /* Matches the number of chips attached to NSSP */
.enable_dma = 1, /* Enables NSSP DMA */
};
static struct platform_device pxa_spi_nssp = {
.name = "pxa2xx-spi", /* MUST BE THIS VALUE, so device match driver */
.id = 2, /* Bus number, MUST MATCH SSP number 1..n */
.resource = pxa_spi_nssp_resources,
.num_resources = ARRAY_SIZE(pxa_spi_nssp_resources),
.dev = {
.platform_data = &pxa_nssp_controller_info, /* Passed to driver */
},
};
static struct platform_device *devices[] __initdata = {
&pxa_spi_nssp,
};
static void __init board_init(void)
{
(void)platform_add_devices(devices, ARRAY_SIZE(devices));
}
Declaring peripheral devices
----------------------------
Typically, for a legacy platform, each SPI peripheral device (chip) is defined in the
arch/.../mach-*/board-*.c using the "spi_board_info" structure found in
"linux/spi/spi.h". See "Documentation/spi/spi-summary.rst" for additional
information.
Each peripheral device (chip) attached to the PXA2xx must provide specific chip configuration
information via the structure "pxa2xx_spi_chip" found in
"include/linux/spi/pxa2xx_spi.h". The PXA2xx host controller driver will use
the configuration whenever the driver communicates with the peripheral
device. All fields are optional.
::
struct pxa2xx_spi_chip {
u8 tx_threshold;
u8 rx_threshold;
u8 dma_burst_size;
u32 timeout;
};
The "pxa2xx_spi_chip.tx_threshold" and "pxa2xx_spi_chip.rx_threshold" fields are
used to configure the SSP hardware FIFO. These fields are critical to the
performance of pxa2xx_spi driver and misconfiguration will result in rx
FIFO overruns (especially in PIO mode transfers). Good default values are::
.tx_threshold = 8,
.rx_threshold = 8,
The range is 1 to 16 where zero indicates "use default".
The "pxa2xx_spi_chip.dma_burst_size" field is used to configure the PXA2xx DMA
engine and is related to the "spi_device.bits_per_word" field. Read and understand
the PXA2xx "Developer Manual" sections on the DMA controller and SSP Controllers
to determine the correct value. An SSP configured for byte-wide transfers would
use a value of 8. The driver will determine a reasonable default if
dma_burst_size == 0.
The "pxa2xx_spi_chip.timeout" field is used to efficiently handle
trailing bytes in the SSP receiver FIFO. The correct value for this field is
dependent on the SPI bus speed ("spi_board_info.max_speed_hz") and the specific
peripheral device. Please note that the PXA2xx SSP 1 does not support trailing byte
timeouts and must busy-wait any trailing bytes.
NOTE: the SPI driver cannot control the chip select if SSPFRM is used, so the
chipselect is dropped after each spi_transfer. Most devices need chip select
asserted around the complete message. Use SSPFRM as a GPIO (through a descriptor)
to accommodate these chips.
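The spitz board update later in this merge does exactly that, handing the
chip-select line to the SPI core through a GPIO descriptor lookup table. A
trimmed sketch (the pin number 24 is hypothetical; see
include/linux/gpio/machine.h)::

    static struct gpiod_lookup_table example_spi_cs_gpio_table = {
            .dev_id = "spi2",       /* pxa2xx-spi platform device, bus 2 */
            .table = {
                    /* "cs" index 0 becomes the controller's chip select 0 */
                    GPIO_LOOKUP_IDX("gpio-pxa", 24, "cs", 0, GPIO_ACTIVE_LOW),
                    { },
            },
    };

Board init code then registers it with gpiod_add_lookup_table() before the
SPI controller probes, as spitz_spi_init() below does.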
NSSP PERIPHERAL SAMPLE
----------------------
For a legacy platform or in some other cases, the pxa2xx_spi_chip structure
is passed to the pxa2xx_spi driver in the "spi_board_info.controller_data"
field. Below is a sample configuration using the PXA255 NSSP.
::
static struct pxa2xx_spi_chip cs8415a_chip_info = {
.tx_threshold = 8, /* SSP hardware FIFO threshold */
.rx_threshold = 8, /* SSP hardware FIFO threshold */
.dma_burst_size = 8, /* Byte wide transfers used so 8 byte bursts */
.timeout = 235, /* See Intel documentation */
};
static struct pxa2xx_spi_chip cs8405a_chip_info = {
.tx_threshold = 8, /* SSP hardware FIFO threshold */
.rx_threshold = 8, /* SSP hardware FIFO threshold */
.dma_burst_size = 8, /* Byte wide transfers used so 8 byte bursts */
.timeout = 235, /* See Intel documentation */
};
static struct spi_board_info streetracer_spi_board_info[] __initdata = {
{
.modalias = "cs8415a", /* Name of spi_driver for this device */
.max_speed_hz = 3686400, /* Run SSP as fast as possible */
.bus_num = 2, /* Framework bus number */
.chip_select = 0, /* Framework chip select */
.platform_data = NULL, /* No spi_driver specific config */
.controller_data = &cs8415a_chip_info, /* Host controller config */
.irq = STREETRACER_APCI_IRQ, /* Peripheral device interrupt */
},
{
.modalias = "cs8405a", /* Name of spi_driver for this device */
.max_speed_hz = 3686400, /* Run SSP as fast as possible */
.bus_num = 2, /* Framework bus number */
.chip_select = 1, /* Framework chip select */
.controller_data = &cs8405a_chip_info, /* Host controller config */
.irq = STREETRACER_APCI_IRQ, /* Peripheral device interrupt */
},
};
static void __init streetracer_init(void)
{
spi_register_board_info(streetracer_spi_board_info,
ARRAY_SIZE(streetracer_spi_board_info));
}
DMA and PIO I/O Support
-----------------------
The pxa2xx_spi driver supports both DMA and interrupt driven PIO message
transfers. The driver defaults to PIO mode and DMA transfers must be enabled
by setting the "enable_dma" flag in the "pxa2xx_spi_controller" structure.
For the newer platforms that are known to support DMA, the driver will enable
it automatically and try it first with a possible fallback to PIO. The DMA
mode supports both coherent and stream based DMA mappings.
The following logic is used to determine the type of I/O to be used on
a per "spi_transfer" basis::
if spi_message.len > 65536 then
    if spi_message.is_dma_mapped or rx_dma_buf != 0 or tx_dma_buf != 0 then
        reject premapped transfers

    print "rate limited" warning
    use PIO transfers

if enable_dma and the size is in the range [DMA burst size..65536] then
    use streaming DMA mode

otherwise
    use PIO transfer
THANKS TO
---------
David Brownell and others for mentoring the development of this driver.
......@@ -348,7 +348,6 @@ SPI protocol drivers somewhat resemble platform device drivers::
static struct spi_driver CHIP_driver = {
.driver = {
.name = "CHIP",
.owner = THIS_MODULE,
.pm = &CHIP_pm_ops,
},
......@@ -419,10 +418,6 @@ any more such messages.
to make extra copies unless the hardware requires it (e.g. working
around hardware errata that force the use of bounce buffering).
If standard dma_map_single() handling of these buffers is inappropriate,
you can use spi_message.is_dma_mapped to tell the controller driver
that you've already provided the relevant DMA addresses.
- The basic I/O primitive is spi_async(). Async requests may be
issued in any context (irq handler, task, etc) and completion
is reported using a callback provided with the message.
......
......@@ -653,6 +653,15 @@ S: Supported
F: fs/aio.c
F: include/linux/*aio*.h
AIROHA SPI SNFI DRIVER
M: Lorenzo Bianconi <lorenzo@kernel.org>
M: Ray Liu <ray.liu@airoha.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-spi@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/spi/airoha,en7581-snand.yaml
F: drivers/spi/spi-airoha-snfi.c
AIRSPY MEDIA DRIVER
L: linux-media@vger.kernel.org
S: Orphan
......@@ -21915,7 +21924,7 @@ F: Documentation/devicetree/bindings/sound/tas2552.txt
F: Documentation/devicetree/bindings/sound/tas2562.yaml
F: Documentation/devicetree/bindings/sound/tas2770.yaml
F: Documentation/devicetree/bindings/sound/tas27xx.yaml
F: Documentation/devicetree/bindings/sound/ti,pcm1681.txt
F: Documentation/devicetree/bindings/sound/ti,pcm1681.yaml
F: Documentation/devicetree/bindings/sound/ti,pcm3168a.yaml
F: Documentation/devicetree/bindings/sound/ti,tlv320*.yaml
F: Documentation/devicetree/bindings/sound/tlv320adcx140.yaml
......
......@@ -7,7 +7,6 @@
#include <linux/clk-provider.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/spi/pxa2xx_spi.h>
#include <linux/platform_data/i2c-pxa.h>
#include <linux/soc/pxa/cpu.h>
......@@ -665,23 +664,6 @@ struct platform_device pxa27x_device_gpio = {
.resource = pxa_resource_gpio,
};
/* pxa2xx-spi platform-device ID equals respective SSP platform-device ID + 1.
* See comment in arch/arm/mach-pxa/ssp.c::ssp_probe() */
void __init pxa2xx_set_spi_info(unsigned id, struct pxa2xx_spi_controller *info)
{
struct platform_device *pd;
pd = platform_device_alloc("pxa2xx-spi", id);
if (pd == NULL) {
printk(KERN_ERR "pxa2xx-spi: failed to allocate device id %d\n",
id);
return;
}
pd->dev.platform_data = info;
platform_device_add(pd);
}
static struct resource pxa_dma_resource[] = {
[0] = {
.start = 0x40000000,
......
......@@ -18,10 +18,10 @@
#include <linux/i2c.h>
#include <linux/platform_data/i2c-pxa.h>
#include <linux/platform_data/pca953x.h>
#include <linux/property.h>
#include <linux/spi/spi.h>
#include <linux/spi/ads7846.h>
#include <linux/spi/corgi_lcd.h>
#include <linux/spi/pxa2xx_spi.h>
#include <linux/mtd/sharpsl.h>
#include <linux/mtd/physmap.h>
#include <linux/input-event-codes.h>
......@@ -569,10 +569,6 @@ static struct spi_board_info spitz_spi_devices[] = {
},
};
static struct pxa2xx_spi_controller spitz_spi_info = {
.num_chipselect = 3,
};
static struct gpiod_lookup_table spitz_spi_gpio_table = {
.dev_id = "spi2",
.table = {
......@@ -583,8 +579,21 @@ static struct gpiod_lookup_table spitz_spi_gpio_table = {
},
};
static const struct property_entry spitz_spi_properties[] = {
PROPERTY_ENTRY_U32("num-cs", 3),
{ }
};
static const struct software_node spitz_spi_node = {
.properties = spitz_spi_properties,
};
static void __init spitz_spi_init(void)
{
struct platform_device *pd;
int id = 2;
int err;
if (machine_is_akita())
gpiod_add_lookup_table(&akita_lcdcon_gpio_table);
else
......@@ -592,7 +601,21 @@ static void __init spitz_spi_init(void)
gpiod_add_lookup_table(&spitz_ads7846_gpio_table);
gpiod_add_lookup_table(&spitz_spi_gpio_table);
pxa2xx_set_spi_info(2, &spitz_spi_info);
/* pxa2xx-spi platform-device ID equals respective SSP platform-device ID + 1 */
pd = platform_device_alloc("pxa2xx-spi", id);
if (pd == NULL) {
pr_err("pxa2xx-spi: failed to allocate device id %d\n", id);
} else {
err = device_add_software_node(&pd->dev, &spitz_spi_node);
if (err) {
platform_device_put(pd);
pr_err("pxa2xx-spi: failed to add software node\n");
} else {
platform_device_add(pd);
}
}
spi_register_board_info(ARRAY_AND_SIZE(spitz_spi_devices));
}
#else
......
......@@ -103,6 +103,15 @@ config GPIO_REGMAP
select REGMAP
tristate
config GPIO_SWNODE_UNDEFINED
bool
help
This adds a special placeholder for software nodes to contain an
undefined GPIO reference. This is primarily used by SPI to allow a
list of GPIO chip selects to mark a certain chip select as being
controlled by the SPI device's internal chip select mechanism and not
a GPIO.
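As a concrete illustration of that mechanism (a sketch based on the cs42l43
changes later in this merge; the gpiochip node name here is hypothetical), a
host-controller driver can describe a mixed set of chip selects like this:

    /* CS0 is a real GPIO from a registered gpiochip software node,
     * CS1 is left to the controller's internal chip-select logic. */
    static const struct software_node_ref_args example_cs_refs[] = {
            SOFTWARE_NODE_REFERENCE(&example_gpiochip_swnode, 0, GPIO_ACTIVE_LOW),
            SOFTWARE_NODE_REFERENCE(&swnode_gpio_undefined),
    };

    static const struct property_entry example_cs_props[] = {
            PROPERTY_ENTRY_REF_ARRAY("cs-gpios", example_cs_refs),
            { }
    };

The property set is then attached to the controller with
device_create_managed_software_node(), as spi-cs42l43.c does below.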
# put drivers in the right section, in alphabetical order
# This symbol is selected by both I2C and SPI expanders
......
......@@ -4,8 +4,13 @@
*
* Copyright 2022 Google LLC
*/
#define pr_fmt(fmt) "gpiolib: swnode: " fmt
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/property.h>
......@@ -17,6 +22,8 @@
#include "gpiolib.h"
#include "gpiolib-swnode.h"
#define GPIOLIB_SWNODE_UNDEFINED_NAME "swnode-gpio-undefined"
static void swnode_format_propname(const char *con_id, char *propname,
size_t max_size)
{
......@@ -40,6 +47,14 @@ static struct gpio_device *swnode_get_gpio_device(struct fwnode_handle *fwnode)
if (!gdev_node || !gdev_node->name)
return ERR_PTR(-EINVAL);
/*
* Check for a special node that identifies undefined GPIOs, this is
* primarily used as a key for internal chip selects in SPI bindings.
*/
if (IS_ENABLED(CONFIG_GPIO_SWNODE_UNDEFINED) &&
!strcmp(gdev_node->name, GPIOLIB_SWNODE_UNDEFINED_NAME))
return ERR_PTR(-ENOENT);
gdev = gpio_device_find_by_label(gdev_node->name);
return gdev ?: ERR_PTR(-EPROBE_DEFER);
}
......@@ -121,3 +136,32 @@ int swnode_gpio_count(const struct fwnode_handle *fwnode, const char *con_id)
return count ?: -ENOENT;
}
#if IS_ENABLED(CONFIG_GPIO_SWNODE_UNDEFINED)
/*
* A special node that identifies undefined GPIOs, this is primarily used as
* a key for internal chip selects in SPI bindings.
*/
const struct software_node swnode_gpio_undefined = {
.name = GPIOLIB_SWNODE_UNDEFINED_NAME,
};
EXPORT_SYMBOL_NS_GPL(swnode_gpio_undefined, GPIO_SWNODE);
static int __init swnode_gpio_init(void)
{
int ret;
ret = software_node_register(&swnode_gpio_undefined);
if (ret < 0)
pr_err("failed to register swnode: %d\n", ret);
return ret;
}
subsys_initcall(swnode_gpio_init);
static void __exit swnode_gpio_cleanup(void)
{
software_node_unregister(&swnode_gpio_undefined);
}
__exitcall(swnode_gpio_cleanup);
#endif
......@@ -25,7 +25,7 @@
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/spi/pxa2xx_spi.h>
#include <linux/pxa2xx_ssp.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_device.h>
......
......@@ -57,6 +57,16 @@ config SPI_MEM
comment "SPI Master Controller Drivers"
config SPI_AIROHA_SNFI
tristate "Airoha SPI NAND Flash Interface"
depends on ARCH_AIROHA || COMPILE_TEST
depends on SPI_MASTER
select REGMAP_MMIO
help
This enables support for SPI-NAND mode on the Airoha NAND
Flash Interface found on Airoha ARM SoCs. This controller
is implemented as a SPI-MEM controller.
config SPI_ALTERA
tristate "Altera SPI Controller platform driver"
select SPI_ALTERA_CORE
......@@ -216,11 +226,11 @@ config SPI_BCMBCA_HSSPI
explicitly.
config SPI_BITBANG
tristate "Utilities for Bitbanging SPI masters"
tristate "Utilities for Bitbanging SPI host controllers"
help
With a few GPIO pins, your system can bitbang the SPI protocol.
Select this to get SPI support through I/O pins (GPIO, parallel
port, etc). Or, some systems' SPI master controller drivers use
port, etc). Or, some systems' SPI host controller drivers use
this code to manage the per-word or per-transfer accesses to the
hardware shift registers.
......@@ -246,7 +256,7 @@ config SPI_CADENCE
config SPI_CADENCE_QUADSPI
tristate "Cadence Quad SPI controller"
depends on OF && (ARM || ARM64 || X86 || RISCV || COMPILE_TEST)
depends on OF && (ARM || ARM64 || X86 || RISCV || MIPS || COMPILE_TEST)
help
Enable support for the Cadence Quad SPI Flash controller.
......@@ -284,6 +294,7 @@ config SPI_COLDFIRE_QSPI
config SPI_CS42L43
tristate "Cirrus Logic CS42L43 SPI controller"
depends on MFD_CS42L43 && PINCTRL_CS42L43
select GPIO_SWNODE_UNDEFINED
help
This enables support for the SPI controller inside the Cirrus Logic
CS42L43 audio codec.
......@@ -817,12 +828,11 @@ config SPI_PPC4xx
config SPI_PXA2XX
tristate "PXA2xx SSP SPI master"
depends on ARCH_PXA || ARCH_MMP || PCI || ACPI || COMPILE_TEST
depends on ARCH_PXA || ARCH_MMP || (X86 && (PCI || ACPI)) || COMPILE_TEST
select PXA_SSP if ARCH_PXA || ARCH_MMP
help
This enables using a PXA2xx or Sodaville SSP port as a SPI master
controller. The driver can be configured to use any SSP port and
additional documentation can be found a Documentation/spi/pxa2xx.rst.
controller. The driver can be configured to use any SSP port.
config SPI_PXA2XX_PCI
def_tristate SPI_PXA2XX && PCI && COMMON_CLK
......
......@@ -14,6 +14,7 @@ obj-$(CONFIG_SPI_SPIDEV) += spidev.o
obj-$(CONFIG_SPI_LOOPBACK_TEST) += spi-loopback-test.o
# SPI master controller drivers (bus)
obj-$(CONFIG_SPI_AIROHA_SNFI) += spi-airoha-snfi.o
obj-$(CONFIG_SPI_ALTERA) += spi-altera-platform.o
obj-$(CONFIG_SPI_ALTERA_CORE) += spi-altera-core.o
obj-$(CONFIG_SPI_ALTERA_DFL) += spi-altera-dfl.o
......
......@@ -169,4 +169,3 @@ module_platform_driver(altera_spi_driver);
MODULE_DESCRIPTION("Altera SPI driver");
MODULE_AUTHOR("Thomas Chou <thomas@wytron.com.tw>");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:" DRV_NAME);
......@@ -13,6 +13,7 @@
#include <linux/delay.h>
#include <linux/spi/spi.h>
#include <linux/iopoll.h>
#include <linux/spi/spi-mem.h>
#define AMD_SPI_CTRL0_REG 0x00
#define AMD_SPI_EXEC_CMD BIT(16)
......@@ -35,6 +36,7 @@
#define AMD_SPI_FIFO_SIZE 70
#define AMD_SPI_MEM_SIZE 200
#define AMD_SPI_MAX_DATA 64
#define AMD_SPI_ENA_REG 0x20
#define AMD_SPI_ALT_SPD_SHIFT 20
......@@ -358,6 +360,115 @@ static inline int amd_spi_fifo_xfer(struct amd_spi *amd_spi,
return message->status;
}
static bool amd_spi_supports_op(struct spi_mem *mem,
const struct spi_mem_op *op)
{
/* bus width is number of IO lines used to transmit */
if (op->cmd.buswidth > 1 || op->addr.buswidth > 1 ||
op->data.buswidth > 1 || op->data.nbytes > AMD_SPI_MAX_DATA)
return false;
return spi_mem_default_supports_op(mem, op);
}
static int amd_spi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
{
op->data.nbytes = clamp_val(op->data.nbytes, 0, AMD_SPI_MAX_DATA);
return 0;
}
static void amd_spi_set_addr(struct amd_spi *amd_spi,
const struct spi_mem_op *op)
{
u8 nbytes = op->addr.nbytes;
u64 addr_val = op->addr.val;
int base_addr, i;
base_addr = AMD_SPI_FIFO_BASE + nbytes;
for (i = 0; i < nbytes; i++) {
amd_spi_writereg8(amd_spi, base_addr - i - 1, addr_val &
GENMASK(7, 0));
addr_val >>= 8;
}
}
static void amd_spi_mem_data_out(struct amd_spi *amd_spi,
const struct spi_mem_op *op)
{
int base_addr = AMD_SPI_FIFO_BASE + op->addr.nbytes;
u8 *buf = (u8 *)op->data.buf.out;
u32 nbytes = op->data.nbytes;
int i;
amd_spi_set_opcode(amd_spi, op->cmd.opcode);
amd_spi_set_addr(amd_spi, op);
for (i = 0; i < nbytes; i++)
amd_spi_writereg8(amd_spi, (base_addr + i), buf[i]);
amd_spi_set_tx_count(amd_spi, op->addr.nbytes + op->data.nbytes);
amd_spi_set_rx_count(amd_spi, 0);
amd_spi_clear_fifo_ptr(amd_spi);
amd_spi_execute_opcode(amd_spi);
}
static void amd_spi_mem_data_in(struct amd_spi *amd_spi,
const struct spi_mem_op *op)
{
int offset = (op->addr.nbytes == 0) ? 0 : 1;
u8 *buf = (u8 *)op->data.buf.in;
u32 nbytes = op->data.nbytes;
int base_addr, i;
base_addr = AMD_SPI_FIFO_BASE + op->addr.nbytes + offset;
amd_spi_set_opcode(amd_spi, op->cmd.opcode);
amd_spi_set_addr(amd_spi, op);
amd_spi_set_tx_count(amd_spi, op->addr.nbytes);
amd_spi_set_rx_count(amd_spi, op->data.nbytes + 1);
amd_spi_clear_fifo_ptr(amd_spi);
amd_spi_execute_opcode(amd_spi);
amd_spi_busy_wait(amd_spi);
for (i = 0; i < nbytes; i++)
buf[i] = amd_spi_readreg8(amd_spi, base_addr + i);
}
static int amd_spi_exec_mem_op(struct spi_mem *mem,
const struct spi_mem_op *op)
{
struct amd_spi *amd_spi;
int ret;
amd_spi = spi_controller_get_devdata(mem->spi->controller);
ret = amd_set_spi_freq(amd_spi, mem->spi->max_speed_hz);
if (ret)
return ret;
switch (op->data.dir) {
case SPI_MEM_DATA_IN:
amd_spi_mem_data_in(amd_spi, op);
break;
case SPI_MEM_DATA_OUT:
fallthrough;
case SPI_MEM_NO_DATA:
amd_spi_mem_data_out(amd_spi, op);
break;
default:
ret = -EOPNOTSUPP;
}
return ret;
}
static const struct spi_controller_mem_ops amd_spi_mem_ops = {
.exec_op = amd_spi_exec_mem_op,
.adjust_op_size = amd_spi_adjust_op_size,
.supports_op = amd_spi_supports_op,
};
static int amd_spi_host_transfer(struct spi_controller *host,
struct spi_message *msg)
{
......@@ -409,6 +520,7 @@ static int amd_spi_probe(struct platform_device *pdev)
host->min_speed_hz = AMD_SPI_MIN_HZ;
host->setup = amd_spi_host_setup;
host->transfer_one_message = amd_spi_host_transfer;
host->mem_ops = &amd_spi_mem_ops;
host->max_transfer_size = amd_spi_max_transfer_size;
host->max_message_size = amd_spi_max_transfer_size;
......
......@@ -339,7 +339,7 @@ static irqreturn_t a3700_spi_interrupt(int irq, void *dev_id)
static bool a3700_spi_wait_completion(struct spi_device *spi)
{
struct a3700_spi *a3700_spi;
unsigned int timeout;
unsigned long time_left;
unsigned int ctrl_reg;
unsigned long timeout_jiffies;
......@@ -361,12 +361,12 @@ static bool a3700_spi_wait_completion(struct spi_device *spi)
a3700_spi->wait_mask);
timeout_jiffies = msecs_to_jiffies(A3700_SPI_TIMEOUT);
timeout = wait_for_completion_timeout(&a3700_spi->done,
timeout_jiffies);
time_left = wait_for_completion_timeout(&a3700_spi->done,
timeout_jiffies);
a3700_spi->wait_mask = 0;
if (timeout)
if (time_left)
return true;
/* there might be the case that right after we checked the
......
......@@ -987,8 +987,6 @@ static void atmel_spi_pdc_next_xfer(struct spi_controller *host,
* For DMA, tx_buf/tx_dma have the same relationship as rx_buf/rx_dma:
* - The buffer is either valid for CPU access, else NULL
* - If the buffer is valid, so is its DMA address
*
* This driver manages the dma address unless message->is_dma_mapped.
*/
static int
atmel_spi_dma_map_xfer(struct atmel_spi *as, struct spi_transfer *xfer)
......@@ -1374,8 +1372,7 @@ static int atmel_spi_one_transfer(struct spi_controller *host,
* DMA map early, for performance (empties dcache ASAP) and
* better fault reporting.
*/
if ((!host->cur_msg->is_dma_mapped)
&& as->use_pdc) {
if (as->use_pdc) {
if (atmel_spi_dma_map_xfer(as, xfer) < 0)
return -ENOMEM;
}
......@@ -1454,8 +1451,7 @@ static int atmel_spi_one_transfer(struct spi_controller *host,
}
}
if (!host->cur_msg->is_dma_mapped
&& as->use_pdc)
if (as->use_pdc)
atmel_spi_dma_unmap_xfer(host, xfer);
if (as->use_pdc)
......
......@@ -314,11 +314,8 @@ static int au1550_spi_dma_txrxb(struct spi_device *spi, struct spi_transfer *t)
hw->tx = t->tx_buf;
hw->rx = t->rx_buf;
dma_tx_addr = t->tx_dma;
dma_rx_addr = t->rx_dma;
/*
* check if buffers are already dma mapped, map them otherwise:
* - first map the TX buffer, so cache data gets written to memory
* - then map the RX buffer, so that cache entries (with
* soon-to-be-stale data) get removed
......@@ -326,23 +323,17 @@ static int au1550_spi_dma_txrxb(struct spi_device *spi, struct spi_transfer *t)
* use temp rx buffer (preallocated or realloc to fit) for rx dma
*/
if (t->tx_buf) {
if (t->tx_dma == 0) { /* if DMA_ADDR_INVALID, map it */
dma_tx_addr = dma_map_single(hw->dev,
(void *)t->tx_buf,
t->len, DMA_TO_DEVICE);
if (dma_mapping_error(hw->dev, dma_tx_addr))
dev_err(hw->dev, "tx dma map error\n");
}
dma_tx_addr = dma_map_single(hw->dev, (void *)t->tx_buf,
t->len, DMA_TO_DEVICE);
if (dma_mapping_error(hw->dev, dma_tx_addr))
dev_err(hw->dev, "tx dma map error\n");
}
if (t->rx_buf) {
if (t->rx_dma == 0) { /* if DMA_ADDR_INVALID, map it */
dma_rx_addr = dma_map_single(hw->dev,
(void *)t->rx_buf,
t->len, DMA_FROM_DEVICE);
if (dma_mapping_error(hw->dev, dma_rx_addr))
dev_err(hw->dev, "rx dma map error\n");
}
dma_rx_addr = dma_map_single(hw->dev, (void *)t->rx_buf,
t->len, DMA_FROM_DEVICE);
if (dma_mapping_error(hw->dev, dma_rx_addr))
dev_err(hw->dev, "rx dma map error\n");
} else {
if (t->len > hw->dma_rx_tmpbuf_size) {
int ret;
......@@ -398,10 +389,10 @@ static int au1550_spi_dma_txrxb(struct spi_device *spi, struct spi_transfer *t)
DMA_FROM_DEVICE);
}
/* unmap buffers if mapped above */
if (t->rx_buf && t->rx_dma == 0)
if (t->rx_buf)
dma_unmap_single(hw->dev, dma_rx_addr, t->len,
DMA_FROM_DEVICE);
if (t->tx_buf && t->tx_dma == 0)
if (t->tx_buf)
dma_unmap_single(hw->dev, dma_tx_addr, t->len,
DMA_TO_DEVICE);
......
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* polling/bitbanging SPI master controller driver utilities
* Polling/bitbanging SPI host controller driver utilities
*/
#include <linux/spinlock.h>
......@@ -11,6 +11,7 @@
#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/time64.h>
#include <linux/spi/spi.h>
#include <linux/spi/spi_bitbang.h>
......@@ -168,8 +169,8 @@ int spi_bitbang_setup_transfer(struct spi_device *spi, struct spi_transfer *t)
if (!hz)
hz = spi->max_speed_hz;
if (hz) {
cs->nsecs = (1000000000/2) / hz;
if (cs->nsecs > (MAX_UDELAY_MS * 1000 * 1000))
cs->nsecs = (NSEC_PER_SEC / 2) / hz;
if (cs->nsecs > (MAX_UDELAY_MS * NSEC_PER_MSEC))
return -EINVAL;
}
......@@ -393,12 +394,12 @@ int spi_bitbang_init(struct spi_bitbang *bitbang)
EXPORT_SYMBOL_GPL(spi_bitbang_init);
/**
* spi_bitbang_start - start up a polled/bitbanging SPI master driver
* spi_bitbang_start - start up a polled/bitbanging SPI host controller driver
* @bitbang: driver handle
*
* Caller should have zero-initialized all parts of the structure, and then
* provided callbacks for chip selection and I/O loops. If the master has
* a transfer method, its final step should call spi_bitbang_transfer; or,
* provided callbacks for chip selection and I/O loops. If the host controller has
* a transfer method, its final step should call spi_bitbang_transfer(); or,
* that's the default if the transfer routine is not initialized. It should
* also set up the bus number and number of chipselects.
*
......@@ -406,9 +407,9 @@ EXPORT_SYMBOL_GPL(spi_bitbang_init);
* hardware that basically exposes a shift register) or per-spi_transfer
* (which takes better advantage of hardware like fifos or DMA engines).
*
* Drivers using per-word I/O loops should use (or call) spi_bitbang_setup,
* spi_bitbang_cleanup and spi_bitbang_setup_transfer to handle those spi
* master methods. Those methods are the defaults if the bitbang->txrx_bufs
* Drivers using per-word I/O loops should use (or call) spi_bitbang_setup(),
* spi_bitbang_cleanup() and spi_bitbang_setup_transfer() to handle those SPI
* host controller methods. Those methods are the defaults if the bitbang->txrx_bufs
* routine isn't initialized.
*
* This routine registers the spi_controller, which will process requests in a
......@@ -417,7 +418,7 @@ EXPORT_SYMBOL_GPL(spi_bitbang_init);
*
* On success, this routine will take a reference to the controller. The caller
* is responsible for calling spi_bitbang_stop() to decrement the reference and
* spi_controller_put() as counterpart of spi_alloc_master() to prevent a memory
* spi_controller_put() as counterpart of spi_alloc_host() to prevent a memory
* leak.
*/
int spi_bitbang_start(struct spi_bitbang *bitbang)
......@@ -450,4 +451,4 @@ void spi_bitbang_stop(struct spi_bitbang *bitbang)
EXPORT_SYMBOL_GPL(spi_bitbang_stop);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Utilities for Bitbanging SPI host controllers");
......@@ -42,6 +42,7 @@ static_assert(CQSPI_MAX_CHIPSELECT <= SPI_CS_CNT_MAX);
#define CQSPI_NO_SUPPORT_WR_COMPLETION BIT(3)
#define CQSPI_SLOW_SRAM BIT(4)
#define CQSPI_NEEDS_APB_AHB_HAZARD_WAR BIT(5)
#define CQSPI_RD_NO_IRQ BIT(6)
/* Capabilities */
#define CQSPI_SUPPORTS_OCTAL BIT(0)
......@@ -102,6 +103,8 @@ struct cqspi_st {
bool apb_ahb_hazard;
bool is_jh7110; /* Flag for StarFive JH7110 SoC */
const struct cqspi_driver_platdata *ddata;
};
struct cqspi_driver_platdata {
......@@ -117,6 +120,7 @@ struct cqspi_driver_platdata {
/* Operation timeout value */
#define CQSPI_TIMEOUT_MS 500
#define CQSPI_READ_TIMEOUT_MS 10
#define CQSPI_BUSYWAIT_TIMEOUT_US 500
/* Runtime_pm autosuspend delay */
#define CQSPI_AUTOSUSPEND_TIMEOUT 2000
......@@ -295,13 +299,27 @@ struct cqspi_driver_platdata {
#define CQSPI_REG_VERSAL_DMA_VAL 0x602
static int cqspi_wait_for_bit(void __iomem *reg, const u32 mask, bool clr)
static int cqspi_wait_for_bit(const struct cqspi_driver_platdata *ddata,
void __iomem *reg, const u32 mask, bool clr,
bool busywait)
{
u64 timeout_us = CQSPI_TIMEOUT_MS * USEC_PER_MSEC;
u32 val;
if (busywait) {
int ret = readl_relaxed_poll_timeout(reg, val,
(((clr ? ~val : val) & mask) == mask),
0, CQSPI_BUSYWAIT_TIMEOUT_US);
if (ret != -ETIMEDOUT)
return ret;
timeout_us -= CQSPI_BUSYWAIT_TIMEOUT_US;
}
return readl_relaxed_poll_timeout(reg, val,
(((clr ? ~val : val) & mask) == mask),
10, CQSPI_TIMEOUT_MS * 1000);
10, timeout_us);
}
static bool cqspi_is_idle(struct cqspi_st *cqspi)
......@@ -334,11 +352,8 @@ static u32 cqspi_get_versal_dma_status(struct cqspi_st *cqspi)
static irqreturn_t cqspi_irq_handler(int this_irq, void *dev)
{
struct cqspi_st *cqspi = dev;
const struct cqspi_driver_platdata *ddata = cqspi->ddata;
unsigned int irq_status;
struct device *device = &cqspi->pdev->dev;
const struct cqspi_driver_platdata *ddata;
ddata = of_device_get_match_data(device);
/* Read interrupt status */
irq_status = readl(cqspi->iobase + CQSPI_REG_IRQSTATUS);
......@@ -434,8 +449,8 @@ static int cqspi_exec_flash_cmd(struct cqspi_st *cqspi, unsigned int reg)
writel(reg, reg_base + CQSPI_REG_CMDCTRL);
/* Polling for completion. */
ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_CMDCTRL,
CQSPI_REG_CMDCTRL_INPROGRESS_MASK, 1);
ret = cqspi_wait_for_bit(cqspi->ddata, reg_base + CQSPI_REG_CMDCTRL,
CQSPI_REG_CMDCTRL_INPROGRESS_MASK, 1, true);
if (ret) {
dev_err(&cqspi->pdev->dev,
"Flash command execution timed out.\n");
......@@ -492,8 +507,11 @@ static int cqspi_enable_dtr(struct cqspi_flash_pdata *f_pdata,
if (ret)
return ret;
} else {
reg &= ~CQSPI_REG_CONFIG_DTR_PROTO;
reg &= ~CQSPI_REG_CONFIG_DUAL_OPCODE;
unsigned int mask = CQSPI_REG_CONFIG_DTR_PROTO | CQSPI_REG_CONFIG_DUAL_OPCODE;
/* Shortcut if DTR is already disabled. */
if ((reg & mask) == 0)
return 0;
reg &= ~mask;
}
writel(reg, reg_base + CQSPI_REG_CONFIG);
......@@ -700,6 +718,7 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
const size_t n_rx)
{
struct cqspi_st *cqspi = f_pdata->cqspi;
bool use_irq = !(cqspi->ddata && cqspi->ddata->quirks & CQSPI_RD_NO_IRQ);
struct device *dev = &cqspi->pdev->dev;
void __iomem *reg_base = cqspi->iobase;
void __iomem *ahb_base = cqspi->ahb_base;
......@@ -723,17 +742,20 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
* all the read interrupts disabled for max performance.
*/
if (!cqspi->slow_sram)
if (use_irq && cqspi->slow_sram)
writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK);
else if (use_irq)
writel(CQSPI_IRQ_MASK_RD, reg_base + CQSPI_REG_IRQMASK);
else
writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK);
writel(0, reg_base + CQSPI_REG_IRQMASK);
reinit_completion(&cqspi->transfer_complete);
writel(CQSPI_REG_INDIRECTRD_START_MASK,
reg_base + CQSPI_REG_INDIRECTRD);
while (remaining > 0) {
if (!wait_for_completion_timeout(&cqspi->transfer_complete,
if (use_irq &&
!wait_for_completion_timeout(&cqspi->transfer_complete,
msecs_to_jiffies(CQSPI_READ_TIMEOUT_MS)))
ret = -ETIMEDOUT;
......@@ -775,7 +797,7 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
bytes_to_read = cqspi_get_rd_sram_level(cqspi);
}
if (remaining > 0) {
if (use_irq && remaining > 0) {
reinit_completion(&cqspi->transfer_complete);
if (cqspi->slow_sram)
writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK);
......@@ -783,8 +805,8 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
}
/* Check indirect done status */
ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_INDIRECTRD,
CQSPI_REG_INDIRECTRD_DONE_MASK, 0);
ret = cqspi_wait_for_bit(cqspi->ddata, reg_base + CQSPI_REG_INDIRECTRD,
CQSPI_REG_INDIRECTRD_DONE_MASK, 0, true);
if (ret) {
dev_err(dev, "Indirect read completion error (%i)\n", ret);
goto failrd;
......@@ -1084,8 +1106,8 @@ static int cqspi_indirect_write_execute(struct cqspi_flash_pdata *f_pdata,
}
/* Check indirect done status */
ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_INDIRECTWR,
CQSPI_REG_INDIRECTWR_DONE_MASK, 0);
ret = cqspi_wait_for_bit(cqspi->ddata, reg_base + CQSPI_REG_INDIRECTWR,
CQSPI_REG_INDIRECTWR_DONE_MASK, 0, false);
if (ret) {
dev_err(dev, "Indirect write completion error (%i)\n", ret);
goto failwr;
......@@ -1358,16 +1380,13 @@ static ssize_t cqspi_read(struct cqspi_flash_pdata *f_pdata,
const struct spi_mem_op *op)
{
struct cqspi_st *cqspi = f_pdata->cqspi;
struct device *dev = &cqspi->pdev->dev;
const struct cqspi_driver_platdata *ddata;
const struct cqspi_driver_platdata *ddata = cqspi->ddata;
loff_t from = op->addr.val;
size_t len = op->data.nbytes;
u_char *buf = op->data.buf.in;
u64 dma_align = (u64)(uintptr_t)buf;
int ret;
ddata = of_device_get_match_data(dev);
ret = cqspi_read_setup(f_pdata, op);
if (ret)
return ret;
......@@ -1511,8 +1530,8 @@ static int cqspi_of_get_pdata(struct cqspi_st *cqspi)
cqspi->is_decoded_cs = of_property_read_bool(np, "cdns,is-decoded-cs");
if (of_property_read_u32(np, "cdns,fifo-depth", &cqspi->fifo_depth)) {
dev_err(dev, "couldn't determine fifo-depth\n");
return -ENXIO;
/* Zero signals FIFO depth should be runtime detected. */
cqspi->fifo_depth = 0;
}
if (of_property_read_u32(np, "cdns,fifo-width", &cqspi->fifo_width)) {
......@@ -1542,8 +1561,6 @@ static void cqspi_controller_init(struct cqspi_st *cqspi)
{
u32 reg;
cqspi_controller_enable(cqspi, 0);
/* Configure the remap address register, no remap */
writel(0, cqspi->iobase + CQSPI_REG_REMAP);
......@@ -1577,8 +1594,29 @@ static void cqspi_controller_init(struct cqspi_st *cqspi)
reg |= CQSPI_REG_CONFIG_DMA_MASK;
writel(reg, cqspi->iobase + CQSPI_REG_CONFIG);
}
}
cqspi_controller_enable(cqspi, 1);
static void cqspi_controller_detect_fifo_depth(struct cqspi_st *cqspi)
{
struct device *dev = &cqspi->pdev->dev;
u32 reg, fifo_depth;
/*
* Bits N-1:0 are writable while bits 31:N are read as zero, with 2^N
* the FIFO depth.
*/
writel(U32_MAX, cqspi->iobase + CQSPI_REG_SRAMPARTITION);
reg = readl(cqspi->iobase + CQSPI_REG_SRAMPARTITION);
fifo_depth = reg + 1;
/* FIFO depth of zero means no value from devicetree was provided. */
if (cqspi->fifo_depth == 0) {
cqspi->fifo_depth = fifo_depth;
dev_dbg(dev, "using FIFO depth of %u\n", fifo_depth);
} else if (fifo_depth != cqspi->fifo_depth) {
dev_warn(dev, "detected FIFO depth (%u) different from config (%u)\n",
fifo_depth, cqspi->fifo_depth);
}
}
static int cqspi_request_mmap_dma(struct cqspi_st *cqspi)
......@@ -1731,6 +1769,7 @@ static int cqspi_probe(struct platform_device *pdev)
cqspi->pdev = pdev;
cqspi->host = host;
cqspi->is_jh7110 = false;
cqspi->ddata = ddata = of_device_get_match_data(dev);
platform_set_drvdata(pdev, cqspi);
/* Obtain configuration from OF. */
......@@ -1822,7 +1861,6 @@ static int cqspi_probe(struct platform_device *pdev)
/* write completion is supported by default */
cqspi->wr_completion = true;
ddata = of_device_get_match_data(dev);
if (ddata) {
if (ddata->quirks & CQSPI_NEEDS_WR_DELAY)
cqspi->wr_delay = 50 * DIV_ROUND_UP(NSEC_PER_SEC,
......@@ -1864,7 +1902,10 @@ static int cqspi_probe(struct platform_device *pdev)
}
cqspi_wait_idle(cqspi);
cqspi_controller_enable(cqspi, 0);
cqspi_controller_detect_fifo_depth(cqspi);
cqspi_controller_init(cqspi);
cqspi_controller_enable(cqspi, 1);
cqspi->current_cs = -1;
cqspi->sclk = 0;
......@@ -1947,7 +1988,9 @@ static int cqspi_runtime_resume(struct device *dev)
clk_prepare_enable(cqspi->clk);
cqspi_wait_idle(cqspi);
cqspi_controller_enable(cqspi, 0);
cqspi_controller_init(cqspi);
cqspi_controller_enable(cqspi, 1);
cqspi->current_cs = -1;
cqspi->sclk = 0;
......@@ -2012,6 +2055,12 @@ static const struct cqspi_driver_platdata pensando_cdns_qspi = {
.quirks = CQSPI_NEEDS_APB_AHB_HAZARD_WAR | CQSPI_DISABLE_DAC_MODE,
};
static const struct cqspi_driver_platdata mobileye_eyeq5_ospi = {
.hwcaps_mask = CQSPI_SUPPORTS_OCTAL,
.quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_NO_SUPPORT_WR_COMPLETION |
CQSPI_RD_NO_IRQ,
};
static const struct of_device_id cqspi_dt_ids[] = {
{
.compatible = "cdns,qspi-nor",
......@@ -2045,6 +2094,10 @@ static const struct of_device_id cqspi_dt_ids[] = {
.compatible = "amd,pensando-elba-qspi",
.data = &pensando_cdns_qspi,
},
{
.compatible = "mobileye,eyeq5-ospi",
.data = &mobileye_eyeq5_ospi,
},
{ /* end of table */ }
};
......
......@@ -486,20 +486,14 @@ static irqreturn_t cdns_xspi_irq_handler(int this_irq, void *dev)
static int cdns_xspi_of_get_plat_data(struct platform_device *pdev)
{
struct device_node *node_prop = pdev->dev.of_node;
struct device_node *node_child;
unsigned int cs;
for_each_child_of_node(node_prop, node_child) {
if (!of_device_is_available(node_child))
continue;
for_each_available_child_of_node_scoped(node_prop, node_child) {
if (of_property_read_u32(node_child, "reg", &cs)) {
dev_err(&pdev->dev, "Couldn't get memory chip select\n");
of_node_put(node_child);
return -ENXIO;
} else if (cs >= CDNS_XSPI_MAX_BANKS) {
dev_err(&pdev->dev, "reg (cs) parameter value too large\n");
of_node_put(node_child);
return -ENXIO;
}
}
......
......@@ -500,7 +500,6 @@ static const struct dev_pm_ops mcfqspi_pm = {
static struct platform_driver mcfqspi_driver = {
.driver.name = DRIVER_NAME,
.driver.owner = THIS_MODULE,
.driver.pm = &mcfqspi_pm,
.probe = mcfqspi_probe,
.remove_new = mcfqspi_remove,
......
......@@ -5,10 +5,14 @@
// Copyright (C) 2022-2023 Cirrus Logic, Inc. and
// Cirrus Logic International Semiconductor Ltd.
#include <linux/acpi.h>
#include <linux/array_size.h>
#include <linux/bits.h>
#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/gpio/machine.h>
#include <linux/gpio/property.h>
#include <linux/mfd/cs42l43.h>
#include <linux/mfd/cs42l43-regs.h>
#include <linux/mod_devicetable.h>
......@@ -16,6 +20,7 @@
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/property.h>
#include <linux/regmap.h>
#include <linux/spi/spi.h>
#include <linux/units.h>
......@@ -39,6 +44,44 @@ static const unsigned int cs42l43_clock_divs[] = {
2, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30
};
static const struct software_node ampl = {
.name = "cs35l56-left",
};
static const struct software_node ampr = {
.name = "cs35l56-right",
};
static struct spi_board_info ampl_info = {
.modalias = "cs35l56",
.max_speed_hz = 20 * HZ_PER_MHZ,
.chip_select = 0,
.mode = SPI_MODE_0,
.swnode = &ampl,
};
static struct spi_board_info ampr_info = {
.modalias = "cs35l56",
.max_speed_hz = 20 * HZ_PER_MHZ,
.chip_select = 1,
.mode = SPI_MODE_0,
.swnode = &ampr,
};
static const struct software_node cs42l43_gpiochip_swnode = {
.name = "cs42l43-pinctrl",
};
static const struct software_node_ref_args cs42l43_cs_refs[] = {
SOFTWARE_NODE_REFERENCE(&cs42l43_gpiochip_swnode, 0, GPIO_ACTIVE_LOW),
SOFTWARE_NODE_REFERENCE(&swnode_gpio_undefined),
};
static const struct property_entry cs42l43_cs_props[] = {
PROPERTY_ENTRY_REF_ARRAY("cs-gpios", cs42l43_cs_refs),
{}
};
static int cs42l43_spi_tx(struct regmap *regmap, const u8 *buf, unsigned int len)
{
const u8 *end = buf + len;
......@@ -203,16 +246,59 @@ static size_t cs42l43_spi_max_length(struct spi_device *spi)
return CS42L43_SPI_MAX_LENGTH;
}
static bool cs42l43_has_sidecar(struct fwnode_handle *fwnode)
{
static const u32 func_smart_amp = 0x1;
struct fwnode_handle *child_fwnode, *ext_fwnode;
unsigned int val;
u32 function;
int ret;
fwnode_for_each_child_node(fwnode, child_fwnode) {
acpi_handle handle = ACPI_HANDLE_FWNODE(child_fwnode);
ret = acpi_get_local_address(handle, &function);
if (ret || function != func_smart_amp)
continue;
ext_fwnode = fwnode_get_named_child_node(child_fwnode,
"mipi-sdca-function-expansion-subproperties");
if (!ext_fwnode)
continue;
ret = fwnode_property_read_u32(ext_fwnode,
"01fa-sidecar-instances",
&val);
fwnode_handle_put(ext_fwnode);
if (ret)
continue;
fwnode_handle_put(child_fwnode);
return !!val;
}
return false;
}
static void cs42l43_release_of_node(void *data)
{
fwnode_handle_put(data);
}
static void cs42l43_release_sw_node(void *data)
{
software_node_unregister(&cs42l43_gpiochip_swnode);
}
static int cs42l43_spi_probe(struct platform_device *pdev)
{
struct cs42l43 *cs42l43 = dev_get_drvdata(pdev->dev.parent);
struct cs42l43_spi *priv;
struct fwnode_handle *fwnode = dev_fwnode(cs42l43->dev);
bool has_sidecar = cs42l43_has_sidecar(fwnode);
int ret;
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
......@@ -259,21 +345,45 @@ static int cs42l43_spi_probe(struct platform_device *pdev)
if (is_of_node(fwnode)) {
fwnode = fwnode_get_named_child_node(fwnode, "spi");
ret = devm_add_action(priv->dev, cs42l43_release_of_node, fwnode);
if (ret) {
fwnode_handle_put(fwnode);
ret = devm_add_action_or_reset(priv->dev, cs42l43_release_of_node, fwnode);
if (ret)
return ret;
}
}
device_set_node(&priv->ctlr->dev, fwnode);
if (has_sidecar) {
ret = software_node_register(&cs42l43_gpiochip_swnode);
if (ret)
return dev_err_probe(priv->dev, ret,
"Failed to register gpio swnode\n");
ret = devm_add_action_or_reset(priv->dev, cs42l43_release_sw_node, NULL);
if (ret)
return ret;
ret = device_create_managed_software_node(&priv->ctlr->dev,
cs42l43_cs_props, NULL);
if (ret)
return dev_err_probe(priv->dev, ret, "Failed to add swnode\n");
} else {
device_set_node(&priv->ctlr->dev, fwnode);
}
ret = devm_spi_register_controller(priv->dev, priv->ctlr);
if (ret) {
dev_err(priv->dev, "Failed to register SPI controller: %d\n", ret);
if (ret)
return dev_err_probe(priv->dev, ret,
"Failed to register SPI controller\n");
if (has_sidecar) {
if (!spi_new_device(priv->ctlr, &ampl_info))
return dev_err_probe(priv->dev, -ENODEV,
"Failed to create left amp slave\n");
if (!spi_new_device(priv->ctlr, &ampr_info))
return dev_err_probe(priv->dev, -ENODEV,
"Failed to create right amp slave\n");
}
return ret;
return 0;
}
static const struct platform_device_id cs42l43_spi_id_table[] = {
......@@ -291,6 +401,7 @@ static struct platform_driver cs42l43_spi_driver = {
};
module_platform_driver(cs42l43_spi_driver);
MODULE_IMPORT_NS(GPIO_SWNODE);
MODULE_DESCRIPTION("CS42L43 SPI Driver");
MODULE_AUTHOR("Lucas Tanure <tanureal@opensource.cirrus.com>");
MODULE_AUTHOR("Maciej Strozek <mstrozek@opensource.cirrus.com>");
......
......@@ -6,6 +6,7 @@
*/
#include <linux/bitfield.h>
#include <linux/bitops.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/module.h>
......@@ -421,10 +422,7 @@ static int dw_spi_transfer_one(struct spi_controller *host,
int ret;
dws->dma_mapped = 0;
dws->n_bytes =
roundup_pow_of_two(DIV_ROUND_UP(transfer->bits_per_word,
BITS_PER_BYTE));
dws->n_bytes = roundup_pow_of_two(BITS_TO_BYTES(transfer->bits_per_word));
dws->tx = (void *)transfer->tx_buf;
dws->tx_len = transfer->len / dws->n_bytes;
dws->rx = transfer->rx_buf;
......@@ -836,6 +834,20 @@ static void dw_spi_hw_init(struct device *dev, struct dw_spi *dws)
DW_SPI_GET_BYTE(dws->ver, 1));
}
/*
* Try to detect the number of native chip-selects if the platform
* driver didn't set it up. There can be up to 16 lines configured.
*/
if (!dws->num_cs) {
u32 ser;
dw_writel(dws, DW_SPI_SER, 0xffff);
ser = dw_readl(dws, DW_SPI_SER);
dw_writel(dws, DW_SPI_SER, 0);
dws->num_cs = hweight16(ser);
}
/*
* Try to detect the FIFO depth if not set by interface driver,
* the depth could be from 2 to 256 from HW spec
......
......@@ -320,7 +320,11 @@ static int dw_spi_mmio_probe(struct platform_device *pdev)
struct resource *mem;
struct dw_spi *dws;
int ret;
int num_cs;
if (device_property_read_bool(&pdev->dev, "spi-slave")) {
dev_warn(&pdev->dev, "spi-slave is not yet supported\n");
return -ENODEV;
}
dwsmmio = devm_kzalloc(&pdev->dev, sizeof(struct dw_spi_mmio),
GFP_KERNEL);
......@@ -364,11 +368,8 @@ static int dw_spi_mmio_probe(struct platform_device *pdev)
&dws->reg_io_width))
dws->reg_io_width = 4;
num_cs = 4;
device_property_read_u32(&pdev->dev, "num-cs", &num_cs);
dws->num_cs = num_cs;
/* Rely on the auto-detection if no property specified */
device_property_read_u32(&pdev->dev, "num-cs", &dws->num_cs);
init_func = device_get_match_data(&pdev->dev);
if (init_func) {
......
......@@ -164,8 +164,8 @@ struct dw_spi {
u32 max_freq; /* max bus freq supported */
u32 reg_io_width; /* DR I/O width in bytes */
u32 num_cs; /* chip select lines */
u16 bus_num;
u16 num_cs; /* supported slave numbers */
void (*set_cs)(struct spi_device *spi, bool enable);
/* Current message transfer state info */
......
......@@ -98,19 +98,13 @@ static void fsl_spi_cpm_bufs_start(struct mpc8xxx_spi *mspi)
mpc8xxx_spi_write_reg(&reg_base->command, SPCOM_STR);
}
int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi,
struct spi_transfer *t, bool is_dma_mapped)
int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi, struct spi_transfer *t)
{
struct device *dev = mspi->dev;
struct fsl_spi_reg __iomem *reg_base = mspi->reg_base;
if (is_dma_mapped) {
mspi->map_tx_dma = 0;
mspi->map_rx_dma = 0;
} else {
mspi->map_tx_dma = 1;
mspi->map_rx_dma = 1;
}
mspi->map_tx_dma = 1;
mspi->map_rx_dma = 1;
if (!t->tx_buf) {
mspi->tx_dma = mspi->dma_dummy_tx;
......@@ -147,7 +141,7 @@ int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi,
return -ENOMEM;
}
} else if (t->tx_buf) {
mspi->tx_dma = t->tx_dma;
mspi->tx_dma = 0;
}
if (mspi->map_rx_dma) {
......
......@@ -20,7 +20,7 @@
#ifdef CONFIG_FSL_SOC
extern void fsl_spi_cpm_reinit_txrx(struct mpc8xxx_spi *mspi);
extern int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi,
struct spi_transfer *t, bool is_dma_mapped);
struct spi_transfer *t);
extern void fsl_spi_cpm_bufs_complete(struct mpc8xxx_spi *mspi);
extern void fsl_spi_cpm_irq(struct mpc8xxx_spi *mspi, u32 events);
extern int fsl_spi_cpm_init(struct mpc8xxx_spi *mspi);
......@@ -28,8 +28,7 @@ extern void fsl_spi_cpm_free(struct mpc8xxx_spi *mspi);
#else
static inline void fsl_spi_cpm_reinit_txrx(struct mpc8xxx_spi *mspi) { }
static inline int fsl_spi_cpm_bufs(struct mpc8xxx_spi *mspi,
struct spi_transfer *t,
bool is_dma_mapped) { return 0; }
struct spi_transfer *t) { return 0; }
static inline void fsl_spi_cpm_bufs_complete(struct mpc8xxx_spi *mspi) { }
static inline void fsl_spi_cpm_irq(struct mpc8xxx_spi *mspi, u32 events) { }
static inline int fsl_spi_cpm_init(struct mpc8xxx_spi *mspi) { return 0; }
......
......@@ -1458,7 +1458,6 @@ static void dspi_shutdown(struct platform_device *pdev)
static struct platform_driver fsl_dspi_driver = {
.driver.name = DRIVER_NAME,
.driver.of_match_table = fsl_dspi_dt_ids,
.driver.owner = THIS_MODULE,
.driver.pm = &dspi_pm,
.probe = dspi_probe,
.remove_new = dspi_remove,
......
......@@ -553,7 +553,7 @@ static int fsl_lpspi_dma_transfer(struct spi_controller *controller,
{
struct dma_async_tx_descriptor *desc_tx, *desc_rx;
unsigned long transfer_timeout;
unsigned long timeout;
unsigned long time_left;
struct sg_table *tx = &transfer->tx_sg, *rx = &transfer->rx_sg;
int ret;
......@@ -594,9 +594,9 @@ static int fsl_lpspi_dma_transfer(struct spi_controller *controller,
transfer->len);
/* Wait eDMA to finish the data transfer.*/
timeout = wait_for_completion_timeout(&fsl_lpspi->dma_tx_completion,
transfer_timeout);
if (!timeout) {
time_left = wait_for_completion_timeout(&fsl_lpspi->dma_tx_completion,
transfer_timeout);
if (!time_left) {
dev_err(fsl_lpspi->dev, "I/O Error in DMA TX\n");
dmaengine_terminate_all(controller->dma_tx);
dmaengine_terminate_all(controller->dma_rx);
......@@ -604,9 +604,9 @@ static int fsl_lpspi_dma_transfer(struct spi_controller *controller,
return -ETIMEDOUT;
}
timeout = wait_for_completion_timeout(&fsl_lpspi->dma_rx_completion,
transfer_timeout);
if (!timeout) {
time_left = wait_for_completion_timeout(&fsl_lpspi->dma_rx_completion,
transfer_timeout);
if (!time_left) {
dev_err(fsl_lpspi->dev, "I/O Error in DMA RX\n");
dmaengine_terminate_all(controller->dma_tx);
dmaengine_terminate_all(controller->dma_rx);
......
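Background for the rename: wait_for_completion_timeout() returns 0 on timeout and the number of remaining jiffies otherwise, so time_left is the accurate name for its result. A hedged sketch of the idiom; the completion pointer and timeout budget are placeholders:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

static int demo_wait_dma_done(struct completion *dma_done, unsigned int budget_ms)
{
	unsigned long time_left;

	/* 0 means the completion never fired within the budget */
	time_left = wait_for_completion_timeout(dma_done,
						msecs_to_jiffies(budget_ms));
	if (!time_left)
		return -ETIMEDOUT;

	return 0;
}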
......@@ -249,8 +249,7 @@ static int fsl_spi_cpu_bufs(struct mpc8xxx_spi *mspi,
return 0;
}
static int fsl_spi_bufs(struct spi_device *spi, struct spi_transfer *t,
bool is_dma_mapped)
static int fsl_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
{
struct mpc8xxx_spi *mpc8xxx_spi = spi_controller_get_devdata(spi->controller);
struct fsl_spi_reg __iomem *reg_base;
......@@ -274,7 +273,7 @@ static int fsl_spi_bufs(struct spi_device *spi, struct spi_transfer *t,
reinit_completion(&mpc8xxx_spi->done);
if (mpc8xxx_spi->flags & SPI_CPM_MODE)
ret = fsl_spi_cpm_bufs(mpc8xxx_spi, t, is_dma_mapped);
ret = fsl_spi_cpm_bufs(mpc8xxx_spi, t);
else
ret = fsl_spi_cpu_bufs(mpc8xxx_spi, t, len);
if (ret)
......@@ -353,7 +352,7 @@ static int fsl_spi_transfer_one(struct spi_controller *controller,
if (status < 0)
return status;
if (t->len)
status = fsl_spi_bufs(spi, t, !!t->tx_dma || !!t->rx_dma);
status = fsl_spi_bufs(spi, t);
if (status > 0)
return -EMSGSIZE;
......
......@@ -1405,7 +1405,7 @@ static int spi_imx_dma_transfer(struct spi_imx_data *spi_imx,
{
struct dma_async_tx_descriptor *desc_tx, *desc_rx;
unsigned long transfer_timeout;
unsigned long timeout;
unsigned long time_left;
struct spi_controller *controller = spi_imx->controller;
struct sg_table *tx = &transfer->tx_sg, *rx = &transfer->rx_sg;
struct scatterlist *last_sg = sg_last(rx->sgl, rx->nents);
......@@ -1471,18 +1471,18 @@ static int spi_imx_dma_transfer(struct spi_imx_data *spi_imx,
transfer_timeout = spi_imx_calculate_timeout(spi_imx, transfer->len);
/* Wait for SDMA to finish the data transfer. */
timeout = wait_for_completion_timeout(&spi_imx->dma_tx_completion,
time_left = wait_for_completion_timeout(&spi_imx->dma_tx_completion,
transfer_timeout);
if (!timeout) {
if (!time_left) {
dev_err(spi_imx->dev, "I/O Error in DMA TX\n");
dmaengine_terminate_all(controller->dma_tx);
dmaengine_terminate_all(controller->dma_rx);
return -ETIMEDOUT;
}
timeout = wait_for_completion_timeout(&spi_imx->dma_rx_completion,
transfer_timeout);
if (!timeout) {
time_left = wait_for_completion_timeout(&spi_imx->dma_rx_completion,
transfer_timeout);
if (!time_left) {
dev_err(&controller->dev, "I/O Error in DMA RX\n");
spi_imx->devtype_data->reset(spi_imx);
dmaengine_terminate_all(controller->dma_rx);
......@@ -1501,7 +1501,7 @@ static int spi_imx_pio_transfer(struct spi_device *spi,
{
struct spi_imx_data *spi_imx = spi_controller_get_devdata(spi->controller);
unsigned long transfer_timeout;
unsigned long timeout;
unsigned long time_left;
spi_imx->tx_buf = transfer->tx_buf;
spi_imx->rx_buf = transfer->rx_buf;
......@@ -1517,9 +1517,9 @@ static int spi_imx_pio_transfer(struct spi_device *spi,
transfer_timeout = spi_imx_calculate_timeout(spi_imx, transfer->len);
timeout = wait_for_completion_timeout(&spi_imx->xfer_done,
transfer_timeout);
if (!timeout) {
time_left = wait_for_completion_timeout(&spi_imx->xfer_done,
transfer_timeout);
if (!time_left) {
dev_err(&spi->dev, "I/O Error in PIO\n");
spi_imx->devtype_data->reset(spi_imx);
return -ETIMEDOUT;
......
......@@ -396,7 +396,6 @@ MODULE_DEVICE_TABLE(of, spi_loopback_test_of_match);
static struct spi_driver spi_loopback_test_driver = {
.driver = {
.name = "spi-loopback-test",
.owner = THIS_MODULE,
.of_match_table = spi_loopback_test_of_match,
},
.probe = spi_loopback_test_probe,
......
......@@ -748,7 +748,7 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
u32 cmd, reg_val, cnt, remainder, len;
struct spi_controller *host = dev_id;
struct mtk_spi *mdata = spi_controller_get_devdata(host);
struct spi_transfer *trans = mdata->cur_transfer;
struct spi_transfer *xfer = mdata->cur_transfer;
reg_val = readl(mdata->base + SPI_STATUS0_REG);
if (reg_val & MTK_SPI_PAUSE_INT_STATUS)
......@@ -762,42 +762,40 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}
if (!host->can_dma(host, NULL, trans)) {
if (trans->rx_buf) {
if (!host->can_dma(host, NULL, xfer)) {
if (xfer->rx_buf) {
cnt = mdata->xfer_len / 4;
ioread32_rep(mdata->base + SPI_RX_DATA_REG,
trans->rx_buf + mdata->num_xfered, cnt);
xfer->rx_buf + mdata->num_xfered, cnt);
remainder = mdata->xfer_len % 4;
if (remainder > 0) {
reg_val = readl(mdata->base + SPI_RX_DATA_REG);
memcpy(trans->rx_buf +
mdata->num_xfered +
(cnt * 4),
memcpy(xfer->rx_buf + (cnt * 4) + mdata->num_xfered,
&reg_val,
remainder);
}
}
mdata->num_xfered += mdata->xfer_len;
if (mdata->num_xfered == trans->len) {
if (mdata->num_xfered == xfer->len) {
spi_finalize_current_transfer(host);
return IRQ_HANDLED;
}
len = trans->len - mdata->num_xfered;
len = xfer->len - mdata->num_xfered;
mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len);
mtk_spi_setup_packet(host);
if (trans->tx_buf) {
if (xfer->tx_buf) {
cnt = mdata->xfer_len / 4;
iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
trans->tx_buf + mdata->num_xfered, cnt);
xfer->tx_buf + mdata->num_xfered, cnt);
remainder = mdata->xfer_len % 4;
if (remainder > 0) {
reg_val = 0;
memcpy(&reg_val,
trans->tx_buf + (cnt * 4) + mdata->num_xfered,
xfer->tx_buf + (cnt * 4) + mdata->num_xfered,
remainder);
writel(reg_val, mdata->base + SPI_TX_DATA_REG);
}
......@@ -809,21 +807,21 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
}
if (mdata->tx_sgl)
trans->tx_dma += mdata->xfer_len;
xfer->tx_dma += mdata->xfer_len;
if (mdata->rx_sgl)
trans->rx_dma += mdata->xfer_len;
xfer->rx_dma += mdata->xfer_len;
if (mdata->tx_sgl && (mdata->tx_sgl_len == 0)) {
mdata->tx_sgl = sg_next(mdata->tx_sgl);
if (mdata->tx_sgl) {
trans->tx_dma = sg_dma_address(mdata->tx_sgl);
xfer->tx_dma = sg_dma_address(mdata->tx_sgl);
mdata->tx_sgl_len = sg_dma_len(mdata->tx_sgl);
}
}
if (mdata->rx_sgl && (mdata->rx_sgl_len == 0)) {
mdata->rx_sgl = sg_next(mdata->rx_sgl);
if (mdata->rx_sgl) {
trans->rx_dma = sg_dma_address(mdata->rx_sgl);
xfer->rx_dma = sg_dma_address(mdata->rx_sgl);
mdata->rx_sgl_len = sg_dma_len(mdata->rx_sgl);
}
}
......@@ -841,7 +839,7 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
mtk_spi_update_mdata_len(host);
mtk_spi_setup_packet(host);
mtk_spi_setup_dma_addr(host, trans);
mtk_spi_setup_dma_addr(host, xfer);
mtk_spi_enable_transfer(host);
return IRQ_HANDLED;
......
......@@ -52,6 +52,8 @@
#define MT7621_CPOL BIT(4)
#define MT7621_LSB_FIRST BIT(3)
#define MT7621_NATIVE_CS_COUNT 2
struct mt7621_spi {
struct spi_controller *host;
void __iomem *base;
......@@ -75,10 +77,11 @@ static inline void mt7621_spi_write(struct mt7621_spi *rs, u32 reg, u32 val)
iowrite32(val, rs->base + reg);
}
static void mt7621_spi_set_cs(struct spi_device *spi, int enable)
static void mt7621_spi_set_native_cs(struct spi_device *spi, bool enable)
{
struct mt7621_spi *rs = spidev_to_mt7621_spi(spi);
int cs = spi_get_chipselect(spi, 0);
bool active = spi->mode & SPI_CS_HIGH ? enable : !enable;
u32 polar = 0;
u32 host;
......@@ -94,7 +97,7 @@ static void mt7621_spi_set_cs(struct spi_device *spi, int enable)
rs->pending_write = 0;
if (enable)
if (active)
polar = BIT(cs);
mt7621_spi_write(rs, MT7621_SPI_POLAR, polar);
}
......@@ -154,6 +157,23 @@ static inline int mt7621_spi_wait_till_ready(struct mt7621_spi *rs)
return -ETIMEDOUT;
}
static int mt7621_spi_prepare_message(struct spi_controller *host,
struct spi_message *m)
{
struct mt7621_spi *rs = spi_controller_get_devdata(host);
struct spi_device *spi = m->spi;
unsigned int speed = spi->max_speed_hz;
struct spi_transfer *t = NULL;
mt7621_spi_wait_till_ready(rs);
list_for_each_entry(t, &m->transfers, transfer_list)
if (t->speed_hz < speed)
speed = t->speed_hz;
return mt7621_spi_prepare(spi, speed);
}
static void mt7621_spi_read_half_duplex(struct mt7621_spi *rs,
int rx_len, u8 *buf)
{
......@@ -243,59 +263,30 @@ static void mt7621_spi_write_half_duplex(struct mt7621_spi *rs,
}
rs->pending_write = len;
mt7621_spi_flush(rs);
}
static int mt7621_spi_transfer_one_message(struct spi_controller *host,
struct spi_message *m)
static int mt7621_spi_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct mt7621_spi *rs = spi_controller_get_devdata(host);
struct spi_device *spi = m->spi;
unsigned int speed = spi->max_speed_hz;
struct spi_transfer *t = NULL;
int status = 0;
mt7621_spi_wait_till_ready(rs);
list_for_each_entry(t, &m->transfers, transfer_list)
if (t->speed_hz < speed)
speed = t->speed_hz;
if (mt7621_spi_prepare(spi, speed)) {
status = -EIO;
goto msg_done;
}
/* Assert CS */
mt7621_spi_set_cs(spi, 1);
m->actual_length = 0;
list_for_each_entry(t, &m->transfers, transfer_list) {
if ((t->rx_buf) && (t->tx_buf)) {
/*
* This controller will shift some extra data out
* of spi_opcode if (mosi_bit_cnt > 0) &&
* (cmd_bit_cnt == 0). So the claimed full-duplex
* support is broken since we have no way to read
* the MISO value during that bit.
*/
status = -EIO;
goto msg_done;
} else if (t->rx_buf) {
mt7621_spi_read_half_duplex(rs, t->len, t->rx_buf);
} else if (t->tx_buf) {
mt7621_spi_write_half_duplex(rs, t->len, t->tx_buf);
}
m->actual_length += t->len;
if ((t->rx_buf) && (t->tx_buf)) {
/*
* This controller will shift some extra data out
* of spi_opcode if (mosi_bit_cnt > 0) &&
* (cmd_bit_cnt == 0). So the claimed full-duplex
* support is broken since we have no way to read
* the MISO value during that bit.
*/
return -EIO;
} else if (t->rx_buf) {
mt7621_spi_read_half_duplex(rs, t->len, t->rx_buf);
} else if (t->tx_buf) {
mt7621_spi_write_half_duplex(rs, t->len, t->tx_buf);
}
/* Flush data and deassert CS */
mt7621_spi_flush(rs);
mt7621_spi_set_cs(spi, 0);
msg_done:
m->status = status;
spi_finalize_current_message(host);
return 0;
}
......@@ -353,10 +344,14 @@ static int mt7621_spi_probe(struct platform_device *pdev)
host->mode_bits = SPI_LSB_FIRST;
host->flags = SPI_CONTROLLER_HALF_DUPLEX;
host->setup = mt7621_spi_setup;
host->transfer_one_message = mt7621_spi_transfer_one_message;
host->prepare_message = mt7621_spi_prepare_message;
host->set_cs = mt7621_spi_set_native_cs;
host->transfer_one = mt7621_spi_transfer_one;
host->bits_per_word_mask = SPI_BPW_MASK(8);
host->dev.of_node = pdev->dev.of_node;
host->num_chipselect = 2;
host->max_native_cs = MT7621_NATIVE_CS_COUNT;
host->num_chipselect = MT7621_NATIVE_CS_COUNT;
host->use_gpio_descriptors = true;
dev_set_drvdata(&pdev->dev, host);
......
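For orientation, this conversion moves mt7621 onto the core's per-transfer flow: the core now walks the message, drives set_cs() around each transfer and finalizes the message itself, so the driver only supplies prepare_message, set_cs and transfer_one. A bare, hypothetical skeleton of that wiring (all demo_* names are made up):

#include <linux/spi/spi.h>

static int demo_prepare_message(struct spi_controller *host,
				struct spi_message *m)
{
	/* one-time per-message setup: clock divider, mode bits, ... */
	return 0;
}

static void demo_set_cs(struct spi_device *spi, bool enable)
{
	/* assert/deassert the native chip select for this device */
}

static int demo_transfer_one(struct spi_controller *host,
			     struct spi_device *spi,
			     struct spi_transfer *t)
{
	/* shift t->len bytes; return 0 when the transfer finished synchronously */
	return 0;
}

static void demo_register_hooks(struct spi_controller *host)
{
	host->prepare_message = demo_prepare_message;
	host->set_cs = demo_set_cs;
	host->transfer_one = demo_transfer_one;
}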
......@@ -68,6 +68,8 @@ static int spi_mux_select(struct spi_device *spi)
priv->current_cs = spi_get_chipselect(spi, 0);
spi_setup(priv->spi);
return 0;
}
......
......@@ -184,8 +184,6 @@ static irqreturn_t tiny_spi_irq(int irq, void *dev)
}
#ifdef CONFIG_OF
#include <linux/of_gpio.h>
static int tiny_spi_of_probe(struct platform_device *pdev)
{
struct tiny_spi *hw = platform_get_drvdata(pdev);
......
......@@ -131,6 +131,7 @@ struct omap2_mcspi {
unsigned int pin_dir:1;
size_t max_xfer_len;
u32 ref_clk_hz;
bool use_multi_mode;
};
struct omap2_mcspi_cs {
......@@ -256,10 +257,15 @@ static void omap2_mcspi_set_cs(struct spi_device *spi, bool enable)
l = mcspi_cached_chconf0(spi);
if (enable)
/* Only enable chip select manually if single mode is used */
if (mcspi->use_multi_mode) {
l &= ~OMAP2_MCSPI_CHCONF_FORCE;
else
l |= OMAP2_MCSPI_CHCONF_FORCE;
} else {
if (enable)
l &= ~OMAP2_MCSPI_CHCONF_FORCE;
else
l |= OMAP2_MCSPI_CHCONF_FORCE;
}
mcspi_write_chconf0(spi, l);
......@@ -283,7 +289,12 @@ static void omap2_mcspi_set_mode(struct spi_controller *ctlr)
l |= (OMAP2_MCSPI_MODULCTRL_MS);
} else {
l &= ~(OMAP2_MCSPI_MODULCTRL_MS);
l |= OMAP2_MCSPI_MODULCTRL_SINGLE;
/* Enable single mode if needed */
if (mcspi->use_multi_mode)
l &= ~OMAP2_MCSPI_MODULCTRL_SINGLE;
else
l |= OMAP2_MCSPI_MODULCTRL_SINGLE;
}
mcspi_write_reg(ctlr, OMAP2_MCSPI_MODULCTRL, l);
......@@ -1175,13 +1186,6 @@ static int omap2_mcspi_transfer_one(struct spi_controller *ctlr,
t->bits_per_word == spi->bits_per_word)
par_override = 0;
}
if (cd && cd->cs_per_word) {
chconf = mcspi->ctx.modulctrl;
chconf &= ~OMAP2_MCSPI_MODULCTRL_SINGLE;
mcspi_write_reg(ctlr, OMAP2_MCSPI_MODULCTRL, chconf);
mcspi->ctx.modulctrl =
mcspi_read_cs_reg(spi, OMAP2_MCSPI_MODULCTRL);
}
chconf = mcspi_cached_chconf0(spi);
chconf &= ~OMAP2_MCSPI_CHCONF_TRM_MASK;
......@@ -1240,14 +1244,6 @@ static int omap2_mcspi_transfer_one(struct spi_controller *ctlr,
status = omap2_mcspi_setup_transfer(spi, NULL);
}
if (cd && cd->cs_per_word) {
chconf = mcspi->ctx.modulctrl;
chconf |= OMAP2_MCSPI_MODULCTRL_SINGLE;
mcspi_write_reg(ctlr, OMAP2_MCSPI_MODULCTRL, chconf);
mcspi->ctx.modulctrl =
mcspi_read_cs_reg(spi, OMAP2_MCSPI_MODULCTRL);
}
omap2_mcspi_set_enable(spi, 0);
if (spi_get_csgpiod(spi, 0))
......@@ -1265,15 +1261,72 @@ static int omap2_mcspi_prepare_message(struct spi_controller *ctlr,
struct omap2_mcspi *mcspi = spi_controller_get_devdata(ctlr);
struct omap2_mcspi_regs *ctx = &mcspi->ctx;
struct omap2_mcspi_cs *cs;
struct spi_transfer *tr;
u8 bits_per_word;
/*
* The conditions are strict; it is mandatory to check each transfer in the list to see
* whether multi-mode is applicable.
*/
mcspi->use_multi_mode = true;
list_for_each_entry(tr, &msg->transfers, transfer_list) {
if (!tr->bits_per_word)
bits_per_word = msg->spi->bits_per_word;
else
bits_per_word = tr->bits_per_word;
/* Only a single channel can have the FORCE bit enabled
/*
* Check if this transfer contains only one word;
* OR contains 1 to 4 words, with bits_per_word == 8 and no delay between each word
* OR contains 1 to 2 words, with bits_per_word == 16 and no delay between each word
*
* If one of the last two cases is true, this also changes the bits_per_word of this
* transfer to make it a bit faster.
* It's not an issue to change the bits_per_word here even if multi-mode is not
* applicable for this message; the signal on the wire will be the same.
*/
if (bits_per_word < 8 && tr->len == 1) {
/* multi-mode is applicable, only one word (1..7 bits) */
} else if (tr->word_delay.value == 0 && bits_per_word == 8 && tr->len <= 4) {
/* multi-mode is applicable, only one "bigger" word (8,16,24,32 bits) */
tr->bits_per_word = tr->len * bits_per_word;
} else if (tr->word_delay.value == 0 && bits_per_word == 16 && tr->len <= 2) {
/* multi-mode is applicable, only one "bigger" word (16,32 bits) */
tr->bits_per_word = tr->len * bits_per_word / 2;
} else if (bits_per_word >= 8 && tr->len == bits_per_word / 8) {
/* multi-mode is applicable, only one word (9..15,17..32 bits) */
} else {
/* multi-mode is not applicable: more than one word in the transfer */
mcspi->use_multi_mode = false;
}
/* Check if transfer asks to change the CS status after the transfer */
if (!tr->cs_change)
mcspi->use_multi_mode = false;
/*
* If at least one transfer is not compatible, switch back to single mode.
*
* The bits_per_word of individual transfers can differ, but this has no
* impact on the signal itself.
*/
if (!mcspi->use_multi_mode)
break;
}
omap2_mcspi_set_mode(ctlr);
/* In single mode only a single channel can have the FORCE bit enabled
* in its chconf0 register.
* Scan all channels and disable them except the current one.
* A FORCE bit can remain set from a previous transfer that had cs_change enabled.
*
* In multi mode all FORCE bits must be disabled.
*/
list_for_each_entry(cs, &ctx->cs, node) {
if (msg->spi->controller_state == cs)
if (msg->spi->controller_state == cs && !mcspi->use_multi_mode) {
continue;
}
if ((cs->chconf0 & OMAP2_MCSPI_CHCONF_FORCE)) {
cs->chconf0 &= ~OMAP2_MCSPI_CHCONF_FORCE;
......
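To make those rules concrete, a hypothetical message: a short command transfer with cs_change set satisfies every check, while a multi-word bulk transfer clears use_multi_mode for the whole message. Buffer names, contents and lengths below are purely illustrative:

#include <linux/spi/spi.h>

static void demo_multi_mode_message(struct spi_device *spi)
{
	static u8 cmd_buf[1], bulk_buf[16];
	struct spi_transfer cmd = {
		.tx_buf = cmd_buf,
		.len = 1,		/* 1..4 bytes at 8 bpw, no word delay */
		.cs_change = 1,		/* CS toggle after the transfer */
	};				/* -> multi-mode remains possible */
	struct spi_transfer bulk = {
		.tx_buf = bulk_buf,
		.len = sizeof(bulk_buf),	/* more than one word */
		.cs_change = 1,
	};				/* -> use_multi_mode is cleared */
	struct spi_message m;

	spi_message_init(&m);
	spi_message_add_tail(&cmd, &m);
	spi_message_add_tail(&bulk, &m);
	spi_sync(spi, &m);
}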
......@@ -344,7 +344,7 @@ static int pic32_sqi_one_message(struct spi_controller *host,
struct spi_transfer *xfer;
struct pic32_sqi *sqi;
int ret = 0, mode;
unsigned long timeout;
unsigned long time_left;
u32 val;
sqi = spi_controller_get_devdata(host);
......@@ -410,8 +410,8 @@ static int pic32_sqi_one_message(struct spi_controller *host,
writel(val, sqi->regs + PESQI_BD_CTRL_REG);
/* wait for xfer completion */
timeout = wait_for_completion_timeout(&sqi->xfer_done, 5 * HZ);
if (timeout == 0) {
time_left = wait_for_completion_timeout(&sqi->xfer_done, 5 * HZ);
if (time_left == 0) {
dev_err(&sqi->host->dev, "wait timedout/interrupted\n");
ret = -ETIMEDOUT;
msg->status = ret;
......
......@@ -498,7 +498,7 @@ static int pic32_spi_one_transfer(struct spi_controller *host,
{
struct pic32_spi *pic32s;
bool dma_issued = false;
unsigned long timeout;
unsigned long time_left;
int ret;
pic32s = spi_controller_get_devdata(host);
......@@ -545,8 +545,8 @@ static int pic32_spi_one_transfer(struct spi_controller *host,
}
/* wait for completion */
timeout = wait_for_completion_timeout(&pic32s->xfer_done, 2 * HZ);
if (timeout == 0) {
time_left = wait_for_completion_timeout(&pic32s->xfer_done, 2 * HZ);
if (time_left == 0) {
dev_err(&spi->dev, "wait error/timedout\n");
if (dma_issued) {
dmaengine_terminate_all(host->dma_rx);
......
......@@ -6,17 +6,22 @@
* Author: Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <linux/device.h>
#include <linux/atomic.h>
#include <linux/dev_printk.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>
#include <linux/irqreturn.h>
#include <linux/scatterlist.h>
#include <linux/sizes.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/spi/pxa2xx_spi.h>
#include <linux/spi/spi.h>
#include "spi-pxa2xx.h"
struct device;
static void pxa2xx_spi_dma_transfer_complete(struct driver_data *drv_data,
bool error)
{
......@@ -63,8 +68,6 @@ pxa2xx_spi_dma_prepare_one(struct driver_data *drv_data,
enum dma_transfer_direction dir,
struct spi_transfer *xfer)
{
struct chip_data *chip =
spi_get_ctldata(drv_data->controller->cur_msg->spi);
enum dma_slave_buswidth width;
struct dma_slave_config cfg;
struct dma_chan *chan;
......@@ -89,14 +92,14 @@ pxa2xx_spi_dma_prepare_one(struct driver_data *drv_data,
if (dir == DMA_MEM_TO_DEV) {
cfg.dst_addr = drv_data->ssp->phys_base + SSDR;
cfg.dst_addr_width = width;
cfg.dst_maxburst = chip->dma_burst_size;
cfg.dst_maxburst = drv_data->controller_info->dma_burst_size;
sgt = &xfer->tx_sg;
chan = drv_data->controller->dma_tx;
} else {
cfg.src_addr = drv_data->ssp->phys_base + SSDR;
cfg.src_addr_width = width;
cfg.src_maxburst = chip->dma_burst_size;
cfg.src_maxburst = drv_data->controller_info->dma_burst_size;
sgt = &xfer->rx_sg;
chan = drv_data->controller->dma_rx;
......@@ -220,24 +223,3 @@ void pxa2xx_spi_dma_release(struct driver_data *drv_data)
controller->dma_tx = NULL;
}
}
int pxa2xx_spi_set_dma_burst_and_threshold(struct chip_data *chip,
struct spi_device *spi,
u8 bits_per_word, u32 *burst_code,
u32 *threshold)
{
struct pxa2xx_spi_chip *chip_info = spi->controller_data;
struct driver_data *drv_data = spi_controller_get_devdata(spi->controller);
u32 dma_burst_size = drv_data->controller_info->dma_burst_size;
/*
* If the DMA burst size is given in chip_info we use that,
* otherwise we use the default. Also we use the default FIFO
* thresholds for now.
*/
*burst_code = chip_info ? chip_info->dma_burst_size : dma_burst_size;
*threshold = SSCR1_RxTresh(RX_THRESH_DFLT)
| SSCR1_TxTresh(TX_THRESH_DFLT);
return 0;
}
......@@ -6,15 +6,21 @@
* Copyright (C) 2016, 2021 Intel Corporation
*/
#include <linux/clk-provider.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/spi/pxa2xx_spi.h>
#include <linux/property.h>
#include <linux/sprintf.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-dw.h>
#include "spi-pxa2xx.h"
#define PCI_DEVICE_ID_INTEL_QUARK_X1000 0x0935
#define PCI_DEVICE_ID_INTEL_BYT 0x0f0e
#define PCI_DEVICE_ID_INTEL_MRFLD 0x1194
......
This diff is collapsed.
......@@ -7,15 +7,34 @@
#ifndef SPI_PXA2XX_H
#define SPI_PXA2XX_H
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/dmaengine.h>
#include <linux/irqreturn.h>
#include <linux/types.h>
#include <linux/sizes.h>
#include <linux/pxa2xx_ssp.h>
struct gpio_desc;
struct pxa2xx_spi_controller;
/*
* The platform data for SSP controller devices
* (resides in device.platform_data).
*/
struct pxa2xx_spi_controller {
u8 num_chipselect;
u8 enable_dma;
u8 dma_burst_size;
bool is_target;
/* DMA engine specific config */
dma_filter_fn dma_filter;
void *tx_param;
void *rx_param;
/* For non-PXA arches */
struct ssp_device ssp;
};
struct spi_controller;
struct spi_device;
struct spi_transfer;
......@@ -56,18 +75,6 @@ struct driver_data {
struct gpio_desc *gpiod_ready;
};
struct chip_data {
u32 cr1;
u32 dds_rate;
u32 timeout;
u8 enable_dma;
u32 dma_burst_size;
u32 dma_threshold;
u32 threshold;
u16 lpss_rx_threshold;
u16 lpss_tx_threshold;
};
static inline u32 pxa2xx_spi_read(const struct driver_data *drv_data, u32 reg)
{
return pxa_ssp_read_reg(drv_data->ssp, reg);
......@@ -123,10 +130,5 @@ extern void pxa2xx_spi_dma_start(struct driver_data *drv_data);
extern void pxa2xx_spi_dma_stop(struct driver_data *drv_data);
extern int pxa2xx_spi_dma_setup(struct driver_data *drv_data);
extern void pxa2xx_spi_dma_release(struct driver_data *drv_data);
extern int pxa2xx_spi_set_dma_burst_and_threshold(struct chip_data *chip,
struct spi_device *spi,
u8 bits_per_word,
u32 *burst_code,
u32 *threshold);
#endif /* SPI_PXA2XX_H */
......@@ -24,7 +24,6 @@
#include <linux/reset.h>
#include <linux/sh_dma.h>
#include <linux/spi/spi.h>
#include <linux/spi/rspi.h>
#include <linux/spinlock.h>
#define RSPI_SPCR 0x00 /* Control Register */
......@@ -1131,16 +1130,12 @@ static struct dma_chan *rspi_request_dma_chan(struct device *dev,
static int rspi_request_dma(struct device *dev, struct spi_controller *ctlr,
const struct resource *res)
{
const struct rspi_plat_data *rspi_pd = dev_get_platdata(dev);
unsigned int dma_tx_id, dma_rx_id;
if (dev->of_node) {
/* In the OF case we will get the slave IDs from the DT */
dma_tx_id = 0;
dma_rx_id = 0;
} else if (rspi_pd && rspi_pd->dma_tx_id && rspi_pd->dma_rx_id) {
dma_tx_id = rspi_pd->dma_tx_id;
dma_rx_id = rspi_pd->dma_rx_id;
} else {
/* The driver assumes no error. */
return 0;
......@@ -1290,7 +1285,6 @@ static int rspi_probe(struct platform_device *pdev)
struct spi_controller *ctlr;
struct rspi_data *rspi;
int ret;
const struct rspi_plat_data *rspi_pd;
const struct spi_ops *ops;
unsigned long clksrc;
......@@ -1305,11 +1299,7 @@ static int rspi_probe(struct platform_device *pdev)
goto error1;
} else {
ops = (struct spi_ops *)pdev->id_entry->driver_data;
rspi_pd = dev_get_platdata(&pdev->dev);
if (rspi_pd && rspi_pd->num_chipselect)
ctlr->num_chipselect = rspi_pd->num_chipselect;
else
ctlr->num_chipselect = 2; /* default */
ctlr->num_chipselect = 2; /* default */
}
rspi = spi_controller_get_devdata(ctlr);
......
......@@ -950,7 +950,7 @@ static struct s3c64xx_spi_csinfo *s3c64xx_get_target_ctrldata(
struct spi_device *spi)
{
struct s3c64xx_spi_csinfo *cs;
struct device_node *target_np, *data_np = NULL;
struct device_node *target_np;
u32 fb_delay = 0;
target_np = spi->dev.of_node;
......@@ -963,7 +963,8 @@ static struct s3c64xx_spi_csinfo *s3c64xx_get_target_ctrldata(
if (!cs)
return ERR_PTR(-ENOMEM);
data_np = of_get_child_by_name(target_np, "controller-data");
struct device_node *data_np __free(device_node) =
of_get_child_by_name(target_np, "controller-data");
if (!data_np) {
dev_info(&spi->dev, "feedback delay set to default (0)\n");
return cs;
......@@ -971,7 +972,6 @@ static struct s3c64xx_spi_csinfo *s3c64xx_get_target_ctrldata(
of_property_read_u32(data_np, "samsung,spi-feedback-delay", &fb_delay);
cs->fb_delay = fb_delay;
of_node_put(data_np);
return cs;
}
......
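The __free(device_node) marker is the scope-based cleanup helper declared in <linux/of.h>: of_node_put() runs automatically when data_np leaves scope, which is why the explicit of_node_put() call below it disappears. A small, hypothetical example of the same idiom (the property name is made up):

#include <linux/errno.h>
#include <linux/of.h>

static int demo_read_child_prop(struct device_node *parent, u32 *val)
{
	/* of_node_put() is invoked automatically on every return path */
	struct device_node *child __free(device_node) =
		of_get_child_by_name(parent, "controller-data");

	if (!child)
		return -ENOENT;

	return of_property_read_u32(child, "some,example-property", val);
}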
......@@ -206,7 +206,8 @@ static int sun4i_spi_transfer_one(struct spi_controller *host,
struct spi_transfer *tfr)
{
struct sun4i_spi *sspi = spi_controller_get_devdata(host);
unsigned int mclk_rate, div, timeout;
unsigned int mclk_rate, div;
unsigned long time_left;
unsigned int start, end, tx_time;
unsigned int tx_len = 0;
int ret = 0;
......@@ -327,10 +328,10 @@ static int sun4i_spi_transfer_one(struct spi_controller *host,
tx_time = max(tfr->len * 8 * 2 / (tfr->speed_hz / 1000), 100U);
start = jiffies;
timeout = wait_for_completion_timeout(&sspi->done,
msecs_to_jiffies(tx_time));
time_left = wait_for_completion_timeout(&sspi->done,
msecs_to_jiffies(tx_time));
end = jiffies;
if (!timeout) {
if (!time_left) {
dev_warn(&host->dev,
"%s: timeout transferring %u bytes@%iHz for %i(%i)ms",
dev_name(&spi->dev), tfr->len, tfr->speed_hz,
......
......@@ -277,7 +277,8 @@ static int sun6i_spi_transfer_one(struct spi_controller *host,
struct spi_transfer *tfr)
{
struct sun6i_spi *sspi = spi_controller_get_devdata(host);
unsigned int div, div_cdr1, div_cdr2, timeout;
unsigned int div, div_cdr1, div_cdr2;
unsigned long time_left;
unsigned int start, end, tx_time;
unsigned int trig_level;
unsigned int tx_len = 0, rx_len = 0, nbits = 0;
......@@ -488,26 +489,26 @@ static int sun6i_spi_transfer_one(struct spi_controller *host,
tx_time = spi_controller_xfer_timeout(host, tfr);
start = jiffies;
timeout = wait_for_completion_timeout(&sspi->done,
msecs_to_jiffies(tx_time));
time_left = wait_for_completion_timeout(&sspi->done,
msecs_to_jiffies(tx_time));
if (!use_dma) {
sun6i_spi_drain_fifo(sspi);
} else {
if (timeout && rx_len) {
if (time_left && rx_len) {
/*
* Even though RX on the peripheral side has finished
* RX DMA might still be in flight
*/
timeout = wait_for_completion_timeout(&sspi->dma_rx_done,
timeout);
if (!timeout)
time_left = wait_for_completion_timeout(&sspi->dma_rx_done,
time_left);
if (!time_left)
dev_warn(&host->dev, "RX DMA timeout\n");
}
}
end = jiffies;
if (!timeout) {
if (!time_left) {
dev_warn(&host->dev,
"%s: timeout transferring %u bytes@%iHz for %i(%i)ms",
dev_name(&spi->dev), tfr->len, tfr->speed_hz,
......
......@@ -270,7 +270,7 @@ static int xlp_spi_xfer_block(struct xlp_spi_priv *xs,
const unsigned char *tx_buf,
unsigned char *rx_buf, int xfer_len, int cmd_cont)
{
int timeout;
unsigned long time_left;
u32 intr_mask = 0;
xs->tx_buf = tx_buf;
......@@ -299,11 +299,11 @@ static int xlp_spi_xfer_block(struct xlp_spi_priv *xs,
intr_mask |= XLP_SPI_INTR_DONE;
xlp_spi_reg_write(xs, xs->cs, XLP_SPI_INTR_EN, intr_mask);
timeout = wait_for_completion_timeout(&xs->done,
msecs_to_jiffies(1000));
time_left = wait_for_completion_timeout(&xs->done,
msecs_to_jiffies(1000));
/* Disable interrupts */
xlp_spi_reg_write(xs, xs->cs, XLP_SPI_INTR_EN, 0x0);
if (!timeout) {
if (!time_left) {
dev_err(&xs->dev, "xfer timedout!\n");
goto out;
}
......
......@@ -312,7 +312,7 @@ static const struct attribute_group *spi_master_groups[] = {
static void spi_statistics_add_transfer_stats(struct spi_statistics __percpu *pcpu_stats,
struct spi_transfer *xfer,
struct spi_controller *ctlr)
struct spi_message *msg)
{
int l2len = min(fls(xfer->len), SPI_STATISTICS_HISTO_SIZE) - 1;
struct spi_statistics *stats;
......@@ -328,11 +328,9 @@ static void spi_statistics_add_transfer_stats(struct spi_statistics __percpu *pc
u64_stats_inc(&stats->transfer_bytes_histo[l2len]);
u64_stats_add(&stats->bytes, xfer->len);
if ((xfer->tx_buf) &&
(xfer->tx_buf != ctlr->dummy_tx))
if (spi_valid_txbuf(msg, xfer))
u64_stats_add(&stats->bytes_tx, xfer->len);
if ((xfer->rx_buf) &&
(xfer->rx_buf != ctlr->dummy_rx))
if (spi_valid_rxbuf(msg, xfer))
u64_stats_add(&stats->bytes_rx, xfer->len);
u64_stats_update_end(&stats->syncp);
......@@ -597,10 +595,16 @@ EXPORT_SYMBOL_GPL(spi_alloc_device);
static void spi_dev_set_name(struct spi_device *spi)
{
struct acpi_device *adev = ACPI_COMPANION(&spi->dev);
struct device *dev = &spi->dev;
struct fwnode_handle *fwnode = dev_fwnode(dev);
if (adev) {
dev_set_name(&spi->dev, "spi-%s", acpi_dev_name(adev));
if (is_acpi_device_node(fwnode)) {
dev_set_name(dev, "spi-%s", acpi_dev_name(to_acpi_device_node(fwnode)));
return;
}
if (is_software_node(fwnode)) {
dev_set_name(dev, "spi-%pfwP", fwnode);
return;
}
......@@ -822,14 +826,10 @@ struct spi_device *spi_new_device(struct spi_controller *ctlr,
proxy->controller_data = chip->controller_data;
proxy->controller_state = NULL;
/*
* spi->chip_select[i] gives the corresponding physical CS for logical CS i
* logical CS number is represented by setting the ith bit in spi->cs_index_mask
* So, for example, if spi->cs_index_mask = 0x01 then logical CS number is 0 and
* spi->chip_select[0] will give the physical CS.
* By default spi->chip_select[0] will hold the physical CS number so, set
* spi->cs_index_mask as 0x01.
* By default spi->chip_select[0] will hold the physical CS number,
* so set bit 0 in spi->cs_index_mask.
*/
proxy->cs_index_mask = 0x01;
proxy->cs_index_mask = BIT(0);
if (chip->swnode) {
status = device_add_software_node(&proxy->dev, chip->swnode);
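For reference, the mapping that the reworked comment describes can be sketched as follows: a hypothetical device whose only logical CS maps to physical CS 3, with spi_set_chipselect() as the accessor for spi->chip_select[]. This is an illustrative sketch of the relationship, not code a client driver would normally write:

#include <linux/bits.h>
#include <linux/spi/spi.h>

static void demo_map_single_cs(struct spi_device *spi)
{
	/* logical CS 0 uses physical chip select line 3 */
	spi_set_chipselect(spi, 0, 3);

	/* only logical CS 0 is in use, hence bit 0 in the mask */
	spi->cs_index_mask = BIT(0);
}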
......@@ -1022,20 +1022,45 @@ static void spi_res_release(struct spi_controller *ctlr, struct spi_message *mes
}
/*-------------------------------------------------------------------------*/
#define spi_for_each_valid_cs(spi, idx) \
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) \
if (!(spi->cs_index_mask & BIT(idx))) {} else
static inline bool spi_is_last_cs(struct spi_device *spi)
{
u8 idx;
bool last = false;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
if (spi->cs_index_mask & BIT(idx)) {
if (spi->controller->last_cs[idx] == spi_get_chipselect(spi, idx))
last = true;
}
spi_for_each_valid_cs(spi, idx) {
if (spi->controller->last_cs[idx] == spi_get_chipselect(spi, idx))
last = true;
}
return last;
}
static void spi_toggle_csgpiod(struct spi_device *spi, u8 idx, bool enable, bool activate)
{
/*
* Historically ACPI has no means of the GPIO polarity and
* thus the SPISerialBus() resource defines it on the per-chip
* basis. In order to avoid a chain of negations, the GPIO
* polarity is considered being Active High. Even for the cases
* when _DSD() is involved (in the updated versions of ACPI)
* the GPIO CS polarity must be defined Active High to avoid
* ambiguity. That's why we use enable, that takes SPI_CS_HIGH
* into account.
*/
if (has_acpi_companion(&spi->dev))
gpiod_set_value_cansleep(spi_get_csgpiod(spi, idx), !enable);
else
/* Polarity handled by GPIO library */
gpiod_set_value_cansleep(spi_get_csgpiod(spi, idx), activate);
if (activate)
spi_delay_exec(&spi->cs_setup, NULL);
else
spi_delay_exec(&spi->cs_inactive, NULL);
}
static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
{
......@@ -1072,31 +1097,9 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
if (spi_is_csgpiod(spi)) {
if (!(spi->mode & SPI_NO_CS)) {
/*
* Historically ACPI has no means of the GPIO polarity and
* thus the SPISerialBus() resource defines it on the per-chip
* basis. In order to avoid a chain of negations, the GPIO
* polarity is considered being Active High. Even for the cases
* when _DSD() is involved (in the updated versions of ACPI)
* the GPIO CS polarity must be defined Active High to avoid
* ambiguity. That's why we use enable, that takes SPI_CS_HIGH
* into account.
*/
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
if ((spi->cs_index_mask & BIT(idx)) && spi_get_csgpiod(spi, idx)) {
if (has_acpi_companion(&spi->dev))
gpiod_set_value_cansleep(spi_get_csgpiod(spi, idx),
!enable);
else
/* Polarity handled by GPIO library */
gpiod_set_value_cansleep(spi_get_csgpiod(spi, idx),
activate);
if (activate)
spi_delay_exec(&spi->cs_setup, NULL);
else
spi_delay_exec(&spi->cs_inactive, NULL);
}
spi_for_each_valid_cs(spi, idx) {
if (spi_get_csgpiod(spi, idx))
spi_toggle_csgpiod(spi, idx, enable, activate);
}
}
/* Some SPI masters need both GPIO CS & slave_select */
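A side note on the macro introduced above: the trailing `if (!cond) {} else` lets spi_for_each_valid_cs() attach its filter to whatever single statement or block follows the macro without breaking a surrounding if/else. A reduced, hypothetical version of the same construction:

#include <linux/bits.h>
#include <linux/types.h>

#define demo_for_each_set_index(mask, idx)			\
	for ((idx) = 0; (idx) < 8; (idx)++)			\
		if (!((mask) & BIT(idx))) {} else

static unsigned int demo_count_set_bits(u8 mask)
{
	unsigned int idx, count = 0;

	demo_for_each_set_index(mask, idx)
		count++;	/* executes only when bit idx is set */

	return count;
}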
......@@ -1205,12 +1208,10 @@ static void spi_unmap_buf_attrs(struct spi_controller *ctlr,
enum dma_data_direction dir,
unsigned long attrs)
{
if (sgt->orig_nents) {
dma_unmap_sgtable(dev, sgt, dir, attrs);
sg_free_table(sgt);
sgt->orig_nents = 0;
sgt->nents = 0;
}
dma_unmap_sgtable(dev, sgt, dir, attrs);
sg_free_table(sgt);
sgt->orig_nents = 0;
sgt->nents = 0;
}
void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
......@@ -1315,10 +1316,8 @@ static void spi_dma_sync_for_device(struct spi_controller *ctlr,
if (!ctlr->cur_msg_mapped)
return;
if (xfer->tx_sg.orig_nents)
dma_sync_sgtable_for_device(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
if (xfer->rx_sg.orig_nents)
dma_sync_sgtable_for_device(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
dma_sync_sgtable_for_device(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
dma_sync_sgtable_for_device(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
}
static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
......@@ -1330,10 +1329,8 @@ static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
if (!ctlr->cur_msg_mapped)
return;
if (xfer->rx_sg.orig_nents)
dma_sync_sgtable_for_cpu(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
if (xfer->tx_sg.orig_nents)
dma_sync_sgtable_for_cpu(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
dma_sync_sgtable_for_cpu(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
dma_sync_sgtable_for_cpu(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
}
#else /* !CONFIG_HAS_DMA */
static inline int __spi_map_msg(struct spi_controller *ctlr,
......@@ -1613,8 +1610,8 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
trace_spi_transfer_start(msg, xfer);
spi_statistics_add_transfer_stats(statm, xfer, ctlr);
spi_statistics_add_transfer_stats(stats, xfer, ctlr);
spi_statistics_add_transfer_stats(statm, xfer, msg);
spi_statistics_add_transfer_stats(stats, xfer, msg);
if (!ctlr->ptp_sts_supported) {
xfer->ptp_sts_word_pre = 0;
......@@ -3709,9 +3706,6 @@ static int __spi_split_transfer_maxsize(struct spi_controller *ctlr,
* to the same values as *xferp, so tx_buf, rx_buf and len
* are all identical (as well as most others)
* so we just have to fix up len and the pointers.
*
* This also includes support for the depreciated
* spi_message.is_dma_mapped interface.
*/
/*
......@@ -3725,12 +3719,8 @@ static int __spi_split_transfer_maxsize(struct spi_controller *ctlr,
/* Update rx_buf, tx_buf and DMA */
if (xfers[i].rx_buf)
xfers[i].rx_buf += offset;
if (xfers[i].rx_dma)
xfers[i].rx_dma += offset;
if (xfers[i].tx_buf)
xfers[i].tx_buf += offset;
if (xfers[i].tx_dma)
xfers[i].tx_dma += offset;
/* Update length */
xfers[i].len = min(maxsize, xfers[i].len - offset);
......
......@@ -4,7 +4,11 @@
#include <linux/property.h>
struct software_node;
#define PROPERTY_ENTRY_GPIO(_name_, _chip_node_, _idx_, _flags_) \
PROPERTY_ENTRY_REF(_name_, _chip_node_, _idx_, _flags_)
extern const struct software_node swnode_gpio_undefined;
#endif /* __LINUX_GPIO_PROPERTY_H */
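PROPERTY_ENTRY_GPIO() simply wraps PROPERTY_ENTRY_REF(), so a GPIO can be described in a software node as a reference to the controller node plus an index and flags. A hypothetical board-code fragment using it; node and property names are made up, and the flag comes from <dt-bindings/gpio/gpio.h>:

#include <dt-bindings/gpio/gpio.h>
#include <linux/gpio/property.h>

/* hypothetical software node standing in for the GPIO controller */
static const struct software_node demo_gpiochip_swnode = {
	.name = "demo-gpio-controller",
};

/* "cs-gpios": line 4 of the controller above, active low */
static const struct property_entry demo_spi_host_props[] = {
	PROPERTY_ENTRY_GPIO("cs-gpios", &demo_gpiochip_swnode, 4, GPIO_ACTIVE_LOW),
	{ }
};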
......@@ -16,9 +16,6 @@ struct omap2_mcspi_platform_config {
struct omap2_mcspi_device_config {
unsigned turbo_mode:1;
/* toggle chip select after every word */
unsigned cs_per_word:1;
};
#endif
......@@ -217,9 +217,9 @@ enum pxa_ssp_type {
PXA27x_SSP,
PXA3xx_SSP,
PXA168_SSP,
MMP2_SSP,
PXA910_SSP,
CE4100_SSP,
MMP2_SSP,
MRFLD_SSP,
QUARK_X1000_SSP,
/* Keep LPSS types sorted with lpss_platforms[] */
......
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2005 Stephen Street / StreetFire Sound Labs
*/
#ifndef __LINUX_SPI_PXA2XX_SPI_H
#define __LINUX_SPI_PXA2XX_SPI_H
#include <linux/dmaengine.h>
#include <linux/types.h>
#include <linux/pxa2xx_ssp.h>
struct dma_chan;
/*
* The platform data for SSP controller devices
* (resides in device.platform_data).
*/
struct pxa2xx_spi_controller {
u16 num_chipselect;
u8 enable_dma;
u8 dma_burst_size;
bool is_target;
/* DMA engine specific config */
dma_filter_fn dma_filter;
void *tx_param;
void *rx_param;
/* For non-PXA arches */
struct ssp_device ssp;
};
/*
* The controller specific data for SPI target devices
* (resides in spi_board_info.controller_data),
* copied to spi_device.platform_data ... mostly for
* DMA tuning.
*/
struct pxa2xx_spi_chip {
u8 tx_threshold;
u8 tx_hi_threshold;
u8 rx_threshold;
u8 dma_burst_size;
u32 timeout;
};
#if defined(CONFIG_ARCH_PXA) || defined(CONFIG_ARCH_MMP)
#include <linux/clk.h>
extern void pxa2xx_set_spi_info(unsigned id, struct pxa2xx_spi_controller *info);
#endif
#endif /* __LINUX_SPI_PXA2XX_SPI_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Renesas SPI driver
*
* Copyright (C) 2012 Renesas Solutions Corp.
*/
#ifndef __LINUX_SPI_RENESAS_SPI_H__
#define __LINUX_SPI_RENESAS_SPI_H__
struct rspi_plat_data {
unsigned int dma_tx_id;
unsigned int dma_rx_id;
u16 num_chipselect;
};
#endif
......@@ -453,6 +453,7 @@ extern struct spi_device *spi_new_ancillary_device(struct spi_device *spi, u8 ch
* @last_cs_mode_high: was (mode & SPI_CS_HIGH) true on the last call to set_cs.
* @last_cs: the last chip_select that is recorded by set_cs, -1 if no chip is
*	selected
* @last_cs_index_mask: bit mask of the last chip selects that were used
* @xfer_completion: used by core transfer_one_message()
* @busy: message pump is busy
* @running: message pump is running
......@@ -955,8 +956,8 @@ struct spi_res {
* struct spi_transfer - a read/write buffer pair
* @tx_buf: data to be written (DMA-safe memory), or NULL
* @rx_buf: data to be read (DMA-safe memory), or NULL
* @tx_dma: DMA address of tx_buf, if @spi_message.is_dma_mapped
* @rx_dma: DMA address of rx_buf, if @spi_message.is_dma_mapped
* @tx_dma: DMA address of tx_buf, currently not for client use
* @rx_dma: DMA address of rx_buf, currently not for client use
* @tx_nbits: number of bits used for writing. If 0 the default
* (SPI_NBITS_SINGLE) is used.
* @rx_nbits: number of bits used for reading. If 0 the default
......@@ -1066,8 +1067,7 @@ struct spi_transfer {
/*
* It's okay if tx_buf == rx_buf (right?).
* For MicroWire, one buffer must be NULL.
* Buffers must work with dma_*map_single() calls, unless
* spi_message.is_dma_mapped reports a pre-existing mapping.
* Buffers must work with dma_*map_single() calls.
*/
const void *tx_buf;
void *rx_buf;
......@@ -1111,8 +1111,6 @@ struct spi_transfer {
* struct spi_message - one multi-segment SPI transaction
* @transfers: list of transfer segments in this transaction
* @spi: SPI device to which the transaction is queued
* @is_dma_mapped: if true, the caller provided both DMA and CPU virtual
* addresses for each transfer buffer
* @pre_optimized: peripheral driver pre-optimized the message
* @optimized: the message is in the optimized state
* @prepared: spi_prepare_message was called for this message
......@@ -1146,8 +1144,6 @@ struct spi_message {
struct spi_device *spi;
unsigned is_dma_mapped:1;
/* spi_optimize_message() was called for this message */
bool pre_optimized;
/* __spi_optimize_message() was called for this message */
......
......@@ -2,19 +2,23 @@
#ifndef __LINUX_SPI_XILINX_SPI_H
#define __LINUX_SPI_XILINX_SPI_H
#include <linux/types.h>
struct spi_board_info;
/**
* struct xspi_platform_data - Platform data of the Xilinx SPI driver
* @num_chipselect: Number of chip select by the IP.
* @little_endian: If registers should be accessed little endian or not.
* @bits_per_word: Number of bits per word.
* @devices: Devices to add when the driver is probed.
* @num_devices: Number of devices in the devices array.
* @num_chipselect: Number of chip selects supported by the IP.
* @bits_per_word: Number of bits per word.
* @force_irq: If set, forces QSPI transaction requirements.
*/
struct xspi_platform_data {
u16 num_chipselect;
u8 bits_per_word;
struct spi_board_info *devices;
u8 num_devices;
u8 num_chipselect;
u8 bits_per_word;
bool force_irq;
};
......