Commit 9d540b0d authored Jul 03, 2017 by Mark Brown
Merge remote-tracking branch 'spi/topic/master' into spi-next
parents 096bf6b7 8caab75f
Showing 9 changed files with 1201 additions and 674 deletions.
Documentation/devicetree/bindings/spi/spi-bus.txt   +45   -31
Documentation/spi/spi-summary                       +20   -7
drivers/spi/Kconfig                                 +25   -1
drivers/spi/Makefile                                +4    -0
drivers/spi/spi-slave-system-control.c              +154  -0
drivers/spi/spi-slave-time.c                        +129  -0
drivers/spi/spi.c                                   +675  -541
include/linux/spi/spi.h                             +136  -81
include/trace/events/spi.h                          +13   -13
Documentation/devicetree/bindings/spi/spi-bus.txt
 SPI (Serial Peripheral Interface) busses

-SPI busses can be described with a node for the SPI master device
-and a set of child nodes for each SPI slave on the bus.  For this
-discussion, it is assumed that the system's SPI controller is in
-SPI master mode.  This binding does not describe SPI controllers
-in slave mode.
+SPI busses can be described with a node for the SPI controller device
+and a set of child nodes for each SPI slave on the bus.  The system's SPI
+controller may be described for use in SPI master mode or in SPI slave mode,
+but not for both at the same time.

-The SPI master node requires the following properties:
+The SPI controller node requires the following properties:
+- compatible      - Name of SPI bus controller following generic names
+                    recommended practice.
+
+In master mode, the SPI controller node requires the following additional
+properties:
 - #address-cells  - number of cells required to define a chip select
                     address on the SPI bus.
 - #size-cells     - should be zero.
-- compatible      - name of SPI bus controller following generic names
-                    recommended practice.
+
+In slave mode, the SPI controller node requires one additional property:
+- spi-slave       - Empty property.
+
 No other properties are required in the SPI bus node.  It is assumed
 that a driver for an SPI bus device will understand that it is an SPI bus.
 However, the binding does not attempt to define the specific method for
...
...
@@ -21,7 +27,7 @@ assumption that board specific platform code will be used to manage
 chip selects.  Individual drivers can define additional properties to
 support describing the chip select layout.

-Optional properties:
+Optional properties (master mode only):
 - cs-gpios        - gpios chip select.
 - num-cs          - total number of chipselects.
...
...
@@ -41,28 +47,36 @@ cs1 : native
 cs2 : &gpio1 1 0
 cs3 : &gpio1 2 0

-SPI slave nodes must be children of the SPI master node and can
-contain the following properties.
-- reg             - (required) chip select address of device.
-- compatible      - (required) name of SPI device following generic names
-                    recommended practice.
-- spi-max-frequency - (required) Maximum SPI clocking speed of device in Hz.
-- spi-cpol        - (optional) Empty property indicating device requires
-                    inverse clock polarity (CPOL) mode.
-- spi-cpha        - (optional) Empty property indicating device requires
-                    shifted clock phase (CPHA) mode.
-- spi-cs-high     - (optional) Empty property indicating device requires
-                    chip select active high.
-- spi-3wire       - (optional) Empty property indicating device requires
-                    3-wire mode.
-- spi-lsb-first   - (optional) Empty property indicating device requires
-                    LSB first mode.
-- spi-tx-bus-width - (optional) The bus width (number of data wires) that is
-                    used for MOSI. Defaults to 1 if not present.
-- spi-rx-bus-width - (optional) The bus width (number of data wires) that is
-                    used for MISO. Defaults to 1 if not present.
-- spi-rx-delay-us - (optional) Microsecond delay after a read transfer.
-- spi-tx-delay-us - (optional) Microsecond delay after a write transfer.
+SPI slave nodes must be children of the SPI controller node.
+
+In master mode, one or more slave nodes (up to the number of chip selects) can
+be present.  Required properties are:
+- compatible      - Name of SPI device following generic names recommended
+                    practice.
+- reg             - Chip select address of device.
+- spi-max-frequency - Maximum SPI clocking speed of device in Hz.
+
+In slave mode, the (single) slave node is optional.
+If present, it must be called "slave".  Required properties are:
+- compatible      - Name of SPI device following generic names recommended
+                    practice.
+
+All slave nodes can contain the following optional properties:
+- spi-cpol        - Empty property indicating device requires inverse clock
+                    polarity (CPOL) mode.
+- spi-cpha        - Empty property indicating device requires shifted clock
+                    phase (CPHA) mode.
+- spi-cs-high     - Empty property indicating device requires chip select
+                    active high.
+- spi-3wire       - Empty property indicating device requires 3-wire mode.
+- spi-lsb-first   - Empty property indicating device requires LSB first mode.
+- spi-tx-bus-width - The bus width (number of data wires) that is used for MOSI.
+                    Defaults to 1 if not present.
+- spi-rx-bus-width - The bus width (number of data wires) that is used for MISO.
+                    Defaults to 1 if not present.
+- spi-rx-delay-us - Microsecond delay after a read transfer.
+- spi-tx-delay-us - Microsecond delay after a write transfer.

 Some SPI controllers and devices support Dual and Quad SPI transfer mode.
 It allows data in the SPI system to be transferred using 2 wires (DUAL) or 4
...
...
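As a sketch of the new slave-mode binding described above, a slave-capable controller might be described as follows. This is an illustrative fragment, not from the commit itself; the `&spi0` label and the device compatible string are hypothetical stand-ins:

```dts
/* Hypothetical example: an SPI controller used in slave mode.
 * The single, optional child node must be called "slave" and only
 * needs a compatible property (no reg or spi-max-frequency). */
&spi0 {
	spi-slave;	/* empty property selecting slave mode */

	slave {
		compatible = "acme,spi-widget";	/* assumed device name */
	};
};
```

Note that `#address-cells` and `#size-cells` are omitted: per the binding above they are only required in master mode, where children are addressed by chip select.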
Documentation/spi/spi-summary
...
...
@@ -62,8 +62,8 @@ chips described as using "three wire" signaling: SCK, data, nCSx.
 (That data line is sometimes called MOMI or SISO.)

 Microcontrollers often support both master and slave sides of the SPI
-protocol.  This document (and Linux) currently only supports the master
-side of SPI interactions.
+protocol.  This document (and Linux) supports both the master and slave
+sides of SPI interactions.
Who uses it? On what kinds of systems?
...
...
@@ -154,9 +154,8 @@ control audio interfaces, present touchscreen sensors as input interfaces,
 or monitor temperature and voltage levels during industrial processing.
 And those might all be sharing the same controller driver.

-A "struct spi_device" encapsulates the master-side interface between
-those two types of driver.  At this writing, Linux has no slave side
-programming interface.
+A "struct spi_device" encapsulates the controller-side interface between
+those two types of drivers.

 There is a minimal core of SPI programming interfaces, focussing on
 using the driver model to connect controller and protocol drivers using
...
...
@@ -177,10 +176,24 @@ shows up in sysfs in several locations:
    /sys/bus/spi/drivers/D ... driver for one or more spi*.* devices

    /sys/class/spi_master/spiB ... symlink (or actual device node) to
-	a logical node which could hold class related state for the
-	controller managing bus "B".  All spiB.* devices share one
+	a logical node which could hold class related state for the SPI
+	master controller managing bus "B".  All spiB.* devices share one
 	physical SPI bus segment, with SCLK, MOSI, and MISO.

+   /sys/devices/.../CTLR/slave ... virtual file for (un)registering the
+	slave device for an SPI slave controller.
+	Writing the driver name of an SPI slave handler to this file
+	registers the slave device; writing "(null)" unregisters the slave
+	device.
+	Reading from this file shows the name of the slave device ("(null)"
+	if not registered).
+
+   /sys/class/spi_slave/spiB ... symlink (or actual device node) to
+	a logical node which could hold class related state for the SPI
+	slave controller on bus "B".  When registered, a single spiB.*
+	device is present here, possibly sharing the physical SPI bus
+	segment with other SPI slave devices.
+
 Note that the actual location of the controller's class state depends
 on whether you enabled CONFIG_SYSFS_DEPRECATED or not.  At this time,
 the only class-specific state is the bus number ("B" in "spiB"), so
...
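The (un)registration sequence described above can be sketched from a shell. This is illustrative only and needs real hardware: the controller path (`e8be5000.spi`) is a made-up example, and the handler name assumes the spi-slave-time module from this commit is loaded on a slave-capable controller:

```shell
# Register the "spi-slave-time" protocol handler on a slave-capable controller
echo spi-slave-time > /sys/devices/platform/soc/e8be5000.spi/slave

# Read back which handler is bound ("(null)" if none is registered)
cat /sys/devices/platform/soc/e8be5000.spi/slave

# Unregister the handler again
echo '(null)' > /sys/devices/platform/soc/e8be5000.spi/slave
```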
drivers/spi/Kconfig
...
...
@@ -785,6 +785,30 @@ config SPI_TLE62X0

 endif # SPI_MASTER

-# (slave support would go here)
+#
+# SLAVE side ... listening to other SPI masters
+#
+
+config SPI_SLAVE
+	bool "SPI slave protocol handlers"
+	help
+	  If your system has a slave-capable SPI controller, you can enable
+	  slave protocol handlers.
+
+if SPI_SLAVE
+
+config SPI_SLAVE_TIME
+	tristate "SPI slave handler reporting boot up time"
+	help
+	  SPI slave handler responding with the time of reception of the last
+	  SPI message.
+
+config SPI_SLAVE_SYSTEM_CONTROL
+	tristate "SPI slave handler controlling system state"
+	help
+	  SPI slave handler to allow remote control of system reboot, power
+	  off, halt, and suspend.
+
+endif # SPI_SLAVE

 endif # SPI
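To build the two handlers introduced here, the corresponding symbols from the Kconfig hunk above would be enabled in the kernel configuration, for example as modules (this fragment assumes a controller driver that advertises slave capability is also enabled):

```
CONFIG_SPI=y
CONFIG_SPI_SLAVE=y
CONFIG_SPI_SLAVE_TIME=m
CONFIG_SPI_SLAVE_SYSTEM_CONTROL=m
```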
drivers/spi/Makefile
...
...
@@ -105,3 +105,7 @@ obj-$(CONFIG_SPI_XILINX)		+= spi-xilinx.o
 obj-$(CONFIG_SPI_XLP)			+= spi-xlp.o
 obj-$(CONFIG_SPI_XTENSA_XTFPGA)		+= spi-xtensa-xtfpga.o
 obj-$(CONFIG_SPI_ZYNQMP_GQSPI)		+= spi-zynqmp-gqspi.o
+
+# SPI slave protocol handlers
+obj-$(CONFIG_SPI_SLAVE_TIME)		+= spi-slave-time.o
+obj-$(CONFIG_SPI_SLAVE_SYSTEM_CONTROL)	+= spi-slave-system-control.o
drivers/spi/spi-slave-system-control.c
new file mode 100644
/*
 * SPI slave handler controlling system state
 *
 * This SPI slave handler allows remote control of system reboot, power off,
 * halt, and suspend.
 *
 * Copyright (C) 2016-2017 Glider bvba
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Usage (assuming /dev/spidev2.0 corresponds to the SPI master on the remote
 * system):
 *
 *   # reboot='\x7c\x50'
 *   # poweroff='\x71\x3f'
 *   # halt='\x38\x76'
 *   # suspend='\x1b\x1b'
 *   # spidev_test -D /dev/spidev2.0 -p $suspend   # or $reboot, $poweroff, $halt
 */

#include <linux/completion.h>
#include <linux/module.h>
#include <linux/reboot.h>
#include <linux/suspend.h>
#include <linux/spi/spi.h>

/*
 * The numbers are chosen to display something human-readable on two 7-segment
 * displays connected to two 74HC595 shift registers
 */
#define CMD_REBOOT	0x7c50	/* rb */
#define CMD_POWEROFF	0x713f	/* OF */
#define CMD_HALT	0x3876	/* HL */
#define CMD_SUSPEND	0x1b1b	/* ZZ */

struct spi_slave_system_control_priv {
	struct spi_device *spi;
	struct completion finished;
	struct spi_transfer xfer;
	struct spi_message msg;
	__be16 cmd;
};

static int spi_slave_system_control_submit(struct spi_slave_system_control_priv *priv);

static void spi_slave_system_control_complete(void *arg)
{
	struct spi_slave_system_control_priv *priv = arg;
	u16 cmd;
	int ret;

	if (priv->msg.status)
		goto terminate;

	cmd = be16_to_cpu(priv->cmd);
	switch (cmd) {
	case CMD_REBOOT:
		dev_info(&priv->spi->dev, "Rebooting system...\n");
		kernel_restart(NULL);

	case CMD_POWEROFF:
		dev_info(&priv->spi->dev, "Powering off system...\n");
		kernel_power_off();
		break;

	case CMD_HALT:
		dev_info(&priv->spi->dev, "Halting system...\n");
		kernel_halt();
		break;

	case CMD_SUSPEND:
		dev_info(&priv->spi->dev, "Suspending system...\n");
		pm_suspend(PM_SUSPEND_MEM);
		break;

	default:
		dev_warn(&priv->spi->dev, "Unknown command 0x%x\n", cmd);
		break;
	}

	ret = spi_slave_system_control_submit(priv);
	if (ret)
		goto terminate;

	return;

terminate:
	dev_info(&priv->spi->dev, "Terminating\n");
	complete(&priv->finished);
}

static int spi_slave_system_control_submit(struct spi_slave_system_control_priv *priv)
{
	int ret;

	spi_message_init_with_transfers(&priv->msg, &priv->xfer, 1);

	priv->msg.complete = spi_slave_system_control_complete;
	priv->msg.context = priv;

	ret = spi_async(priv->spi, &priv->msg);
	if (ret)
		dev_err(&priv->spi->dev, "spi_async() failed %d\n", ret);

	return ret;
}

static int spi_slave_system_control_probe(struct spi_device *spi)
{
	struct spi_slave_system_control_priv *priv;
	int ret;

	priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->spi = spi;
	init_completion(&priv->finished);
	priv->xfer.rx_buf = &priv->cmd;
	priv->xfer.len = sizeof(priv->cmd);

	ret = spi_slave_system_control_submit(priv);
	if (ret)
		return ret;

	spi_set_drvdata(spi, priv);
	return 0;
}

static int spi_slave_system_control_remove(struct spi_device *spi)
{
	struct spi_slave_system_control_priv *priv = spi_get_drvdata(spi);

	spi_slave_abort(spi);
	wait_for_completion(&priv->finished);
	return 0;
}

static struct spi_driver spi_slave_system_control_driver = {
	.driver = {
		.name	= "spi-slave-system-control",
	},
	.probe		= spi_slave_system_control_probe,
	.remove		= spi_slave_system_control_remove,
};
module_spi_driver(spi_slave_system_control_driver);

MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
MODULE_DESCRIPTION("SPI slave handler controlling system state");
MODULE_LICENSE("GPL v2");
drivers/spi/spi-slave-time.c
new file mode 100644
/*
 * SPI slave handler reporting uptime at reception of previous SPI message
 *
 * This SPI slave handler sends the time of reception of the last SPI message
 * as two 32-bit unsigned integers in binary format and in network byte order,
 * representing the number of seconds and fractional seconds (in microseconds)
 * since boot up.
 *
 * Copyright (C) 2016-2017 Glider bvba
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Usage (assuming /dev/spidev2.0 corresponds to the SPI master on the remote
 * system):
 *
 *   # spidev_test -D /dev/spidev2.0 -p dummy-8B
 *   spi mode: 0x0
 *   bits per word: 8
 *   max speed: 500000 Hz (500 KHz)
 *   RX | 00 00 04 6D 00 09 5B BB ...
 *	  ^^^^^^^^^^^ ^^^^^^^^^^^
 *	  seconds     microseconds
 */

#include <linux/completion.h>
#include <linux/module.h>
#include <linux/sched/clock.h>
#include <linux/spi/spi.h>

struct spi_slave_time_priv {
	struct spi_device *spi;
	struct completion finished;
	struct spi_transfer xfer;
	struct spi_message msg;
	__be32 buf[2];
};

static int spi_slave_time_submit(struct spi_slave_time_priv *priv);

static void spi_slave_time_complete(void *arg)
{
	struct spi_slave_time_priv *priv = arg;
	int ret;

	ret = priv->msg.status;
	if (ret)
		goto terminate;

	ret = spi_slave_time_submit(priv);
	if (ret)
		goto terminate;

	return;

terminate:
	dev_info(&priv->spi->dev, "Terminating\n");
	complete(&priv->finished);
}

static int spi_slave_time_submit(struct spi_slave_time_priv *priv)
{
	u32 rem_us;
	int ret;
	u64 ts;

	ts = local_clock();
	rem_us = do_div(ts, 1000000000) / 1000;

	priv->buf[0] = cpu_to_be32(ts);
	priv->buf[1] = cpu_to_be32(rem_us);

	spi_message_init_with_transfers(&priv->msg, &priv->xfer, 1);

	priv->msg.complete = spi_slave_time_complete;
	priv->msg.context = priv;

	ret = spi_async(priv->spi, &priv->msg);
	if (ret)
		dev_err(&priv->spi->dev, "spi_async() failed %d\n", ret);

	return ret;
}

static int spi_slave_time_probe(struct spi_device *spi)
{
	struct spi_slave_time_priv *priv;
	int ret;

	priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->spi = spi;
	init_completion(&priv->finished);
	priv->xfer.tx_buf = priv->buf;
	priv->xfer.len = sizeof(priv->buf);

	ret = spi_slave_time_submit(priv);
	if (ret)
		return ret;

	spi_set_drvdata(spi, priv);
	return 0;
}

static int spi_slave_time_remove(struct spi_device *spi)
{
	struct spi_slave_time_priv *priv = spi_get_drvdata(spi);

	spi_slave_abort(spi);
	wait_for_completion(&priv->finished);
	return 0;
}

static struct spi_driver spi_slave_time_driver = {
	.driver = {
		.name	= "spi-slave-time",
	},
	.probe		= spi_slave_time_probe,
	.remove		= spi_slave_time_remove,
};
module_spi_driver(spi_slave_time_driver);

MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
MODULE_DESCRIPTION("SPI slave reporting uptime at previous SPI message");
MODULE_LICENSE("GPL v2");
drivers/spi/spi.c
...
...
@@ -48,11 +48,11 @@ static void spidev_release(struct device *dev)
 {
 	struct spi_device	*spi = to_spi_device(dev);

-	/* spi masters may cleanup for released devices */
-	if (spi->master->cleanup)
-		spi->master->cleanup(spi);
+	/* spi controllers may cleanup for released devices */
+	if (spi->controller->cleanup)
+		spi->controller->cleanup(spi);

-	spi_master_put(spi->master);
+	spi_controller_put(spi->controller);
 	kfree(spi);
 }
...
...
@@ -71,17 +71,17 @@ modalias_show(struct device *dev, struct device_attribute *a, char *buf)
 static DEVICE_ATTR_RO(modalias);

 #define SPI_STATISTICS_ATTRS(field, file)				\
-static ssize_t spi_master_##field##_show(struct device *dev,		\
-					 struct device_attribute *attr,	\
-					 char *buf)			\
+static ssize_t spi_controller_##field##_show(struct device *dev,	\
+					     struct device_attribute *attr, \
+					     char *buf)			\
 {									\
-	struct spi_master *master = container_of(dev,			\
-						 struct spi_master, dev); \
-	return spi_statistics_##field##_show(&master->statistics, buf);	\
+	struct spi_controller *ctlr = container_of(dev,			\
+					 struct spi_controller, dev);	\
+	return spi_statistics_##field##_show(&ctlr->statistics, buf);	\
 }									\
-static struct device_attribute dev_attr_spi_master_##field = {		\
+static struct device_attribute dev_attr_spi_controller_##field = {	\
 	.attr = { .name = file, .mode = 0444 },				\
-	.show = spi_master_##field##_show,				\
+	.show = spi_controller_##field##_show,				\
 }; \
 static ssize_t spi_device_##field##_show(struct device *dev,		\
 					 struct device_attribute *attr,	\
...
...
@@ -201,51 +201,51 @@ static const struct attribute_group *spi_dev_groups[] = {
 	NULL,
 };

-static struct attribute *spi_master_statistics_attrs[] = {
-	&dev_attr_spi_master_messages.attr,
-	&dev_attr_spi_master_transfers.attr,
-	&dev_attr_spi_master_errors.attr,
-	&dev_attr_spi_master_timedout.attr,
-	&dev_attr_spi_master_spi_sync.attr,
-	&dev_attr_spi_master_spi_sync_immediate.attr,
-	&dev_attr_spi_master_spi_async.attr,
-	&dev_attr_spi_master_bytes.attr,
-	&dev_attr_spi_master_bytes_rx.attr,
-	&dev_attr_spi_master_bytes_tx.attr,
-	&dev_attr_spi_master_transfer_bytes_histo0.attr,
-	&dev_attr_spi_master_transfer_bytes_histo1.attr,
-	&dev_attr_spi_master_transfer_bytes_histo2.attr,
-	&dev_attr_spi_master_transfer_bytes_histo3.attr,
-	&dev_attr_spi_master_transfer_bytes_histo4.attr,
-	&dev_attr_spi_master_transfer_bytes_histo5.attr,
-	&dev_attr_spi_master_transfer_bytes_histo6.attr,
-	&dev_attr_spi_master_transfer_bytes_histo7.attr,
-	&dev_attr_spi_master_transfer_bytes_histo8.attr,
-	&dev_attr_spi_master_transfer_bytes_histo9.attr,
-	&dev_attr_spi_master_transfer_bytes_histo10.attr,
-	&dev_attr_spi_master_transfer_bytes_histo11.attr,
-	&dev_attr_spi_master_transfer_bytes_histo12.attr,
-	&dev_attr_spi_master_transfer_bytes_histo13.attr,
-	&dev_attr_spi_master_transfer_bytes_histo14.attr,
-	&dev_attr_spi_master_transfer_bytes_histo15.attr,
-	&dev_attr_spi_master_transfer_bytes_histo16.attr,
-	&dev_attr_spi_master_transfers_split_maxsize.attr,
+static struct attribute *spi_controller_statistics_attrs[] = {
+	&dev_attr_spi_controller_messages.attr,
+	&dev_attr_spi_controller_transfers.attr,
+	&dev_attr_spi_controller_errors.attr,
+	&dev_attr_spi_controller_timedout.attr,
+	&dev_attr_spi_controller_spi_sync.attr,
+	&dev_attr_spi_controller_spi_sync_immediate.attr,
+	&dev_attr_spi_controller_spi_async.attr,
+	&dev_attr_spi_controller_bytes.attr,
+	&dev_attr_spi_controller_bytes_rx.attr,
+	&dev_attr_spi_controller_bytes_tx.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo0.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo1.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo2.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo3.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo4.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo5.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo6.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo7.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo8.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo9.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo10.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo11.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo12.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo13.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo14.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo15.attr,
+	&dev_attr_spi_controller_transfer_bytes_histo16.attr,
+	&dev_attr_spi_controller_transfers_split_maxsize.attr,
 	NULL,
 };

-static const struct attribute_group spi_master_statistics_group = {
+static const struct attribute_group spi_controller_statistics_group = {
 	.name  = "statistics",
-	.attrs  = spi_master_statistics_attrs,
+	.attrs  = spi_controller_statistics_attrs,
 };

 static const struct attribute_group *spi_master_groups[] = {
-	&spi_master_statistics_group,
+	&spi_controller_statistics_group,
 	NULL,
 };

 void spi_statistics_add_transfer_stats(struct spi_statistics *stats,
 				       struct spi_transfer *xfer,
-				       struct spi_master *master)
+				       struct spi_controller *ctlr)
 {
 	unsigned long flags;
 	int l2len = min(fls(xfer->len), SPI_STATISTICS_HISTO_SIZE) - 1;
...
...
@@ -260,10 +260,10 @@ void spi_statistics_add_transfer_stats(struct spi_statistics *stats,
 	stats->bytes += xfer->len;
 	if ((xfer->tx_buf) &&
-	    (xfer->tx_buf != master->dummy_tx))
+	    (xfer->tx_buf != ctlr->dummy_tx))
 		stats->bytes_tx += xfer->len;
 	if ((xfer->rx_buf) &&
-	    (xfer->rx_buf != master->dummy_rx))
+	    (xfer->rx_buf != ctlr->dummy_rx))
 		stats->bytes_rx += xfer->len;

 	spin_unlock_irqrestore(&stats->lock, flags);
...
...
@@ -405,7 +405,7 @@ EXPORT_SYMBOL_GPL(__spi_register_driver);
 /*-------------------------------------------------------------------------*/

 /* SPI devices should normally not be created by SPI device drivers; that
- * would make them board-specific.  Similarly with SPI master drivers.
+ * would make them board-specific.  Similarly with SPI controller drivers.
  * Device registration normally goes into like arch/.../mach.../board-YYY.c
  * with other readonly (flashable) information about mainboard devices.
  */
...
...
@@ -416,17 +416,17 @@ struct boardinfo {
 };

 static LIST_HEAD(board_list);
-static LIST_HEAD(spi_master_list);
+static LIST_HEAD(spi_controller_list);

 /*
  * Used to protect add/del opertion for board_info list and
- * spi_master list, and their matching process
+ * spi_controller list, and their matching process
  */
 static DEFINE_MUTEX(board_lock);

 /**
  * spi_alloc_device - Allocate a new SPI device
- * @master: Controller to which device is connected
+ * @ctlr: Controller to which device is connected
  * Context: can sleep
  *
  * Allows a driver to allocate and initialize a spi_device without
...
@@ -435,27 +435,27 @@ static DEFINE_MUTEX(board_lock);
  * spi_add_device() on it.
  *
  * Caller is responsible to call spi_add_device() on the returned
- * spi_device structure to add it to the SPI master.  If the caller
+ * spi_device structure to add it to the SPI controller.  If the caller
  * needs to discard the spi_device without adding it, then it should
  * call spi_dev_put() on it.
  *
  * Return: a pointer to the new device, or NULL.
  */
-struct spi_device *spi_alloc_device(struct spi_master *master)
+struct spi_device *spi_alloc_device(struct spi_controller *ctlr)
 {
 	struct spi_device	*spi;

-	if (!spi_master_get(master))
+	if (!spi_controller_get(ctlr))
 		return NULL;

 	spi = kzalloc(sizeof(*spi), GFP_KERNEL);
 	if (!spi) {
-		spi_master_put(master);
+		spi_controller_put(ctlr);
 		return NULL;
 	}

-	spi->master = master;
-	spi->dev.parent = &master->dev;
+	spi->master = spi->controller = ctlr;
+	spi->dev.parent = &ctlr->dev;
 	spi->dev.bus = &spi_bus_type;
 	spi->dev.release = spidev_release;
 	spi->cs_gpio = -ENOENT;
...
...
@@ -476,7 +476,7 @@ static void spi_dev_set_name(struct spi_device *spi)
 		return;
 	}

-	dev_set_name(&spi->dev, "%s.%u", dev_name(&spi->master->dev),
+	dev_set_name(&spi->dev, "%s.%u", dev_name(&spi->controller->dev),
 		     spi->chip_select);
 }
...
...
@@ -485,7 +485,7 @@ static int spi_dev_check(struct device *dev, void *data)
 	struct spi_device *spi = to_spi_device(dev);
 	struct spi_device *new_spi = data;

-	if (spi->master == new_spi->master &&
+	if (spi->controller == new_spi->controller &&
 	    spi->chip_select == new_spi->chip_select)
 		return -EBUSY;
 	return 0;
...
...
@@ -503,15 +503,14 @@ static int spi_dev_check(struct device *dev, void *data)
 int spi_add_device(struct spi_device *spi)
 {
 	static DEFINE_MUTEX(spi_add_lock);
-	struct spi_master *master = spi->master;
-	struct device *dev = master->dev.parent;
+	struct spi_controller *ctlr = spi->controller;
+	struct device *dev = ctlr->dev.parent;
 	int status;

 	/* Chipselects are numbered 0..max; validate. */
-	if (spi->chip_select >= master->num_chipselect) {
-		dev_err(dev, "cs%d >= max %d\n",
-			spi->chip_select,
-			master->num_chipselect);
+	if (spi->chip_select >= ctlr->num_chipselect) {
+		dev_err(dev, "cs%d >= max %d\n", spi->chip_select,
+			ctlr->num_chipselect);
 		return -EINVAL;
 	}
...
...
@@ -531,8 +530,8 @@ int spi_add_device(struct spi_device *spi)
 		goto done;
 	}

-	if (master->cs_gpios)
-		spi->cs_gpio = master->cs_gpios[spi->chip_select];
+	if (ctlr->cs_gpios)
+		spi->cs_gpio = ctlr->cs_gpios[spi->chip_select];

 	/* Drivers may modify this initial i/o setup, but will
 	 * normally rely on the device being setup.  Devices
...
...
@@ -561,7 +560,7 @@ EXPORT_SYMBOL_GPL(spi_add_device);

 /**
  * spi_new_device - instantiate one new SPI device
- * @master: Controller to which device is connected
+ * @ctlr: Controller to which device is connected
  * @chip: Describes the SPI device
  * Context: can sleep
  *
...
...
@@ -573,7 +572,7 @@ EXPORT_SYMBOL_GPL(spi_add_device);
  *
  * Return: the new device, or NULL.
  */
-struct spi_device *spi_new_device(struct spi_master *master,
+struct spi_device *spi_new_device(struct spi_controller *ctlr,
 				  struct spi_board_info *chip)
 {
 	struct spi_device	*proxy;
...
...
@@ -586,7 +585,7 @@ struct spi_device *spi_new_device(struct spi_master *master,
 	 * suggests syslogged diagnostics are best here (ugh).
 	 */

-	proxy = spi_alloc_device(master);
+	proxy = spi_alloc_device(ctlr);
 	if (!proxy)
 		return NULL;
...
...
@@ -604,7 +603,7 @@ struct spi_device *spi_new_device(struct spi_master *master,
 	if (chip->properties) {
 		status = device_add_properties(&proxy->dev, chip->properties);
 		if (status) {
-			dev_err(&master->dev,
+			dev_err(&ctlr->dev,
 				"failed to add properties to '%s': %d\n",
 				chip->modalias, status);
 			goto err_dev_put;
...
...
@@ -631,7 +630,7 @@ EXPORT_SYMBOL_GPL(spi_new_device);
  * @spi: spi_device to unregister
  *
  * Start making the passed SPI device vanish. Normally this would be handled
- * by spi_unregister_master().
+ * by spi_unregister_controller().
  */
 void spi_unregister_device(struct spi_device *spi)
 {
...
...
@@ -648,17 +647,17 @@ void spi_unregister_device(struct spi_device *spi)
 }
 EXPORT_SYMBOL_GPL(spi_unregister_device);

-static void spi_match_master_to_boardinfo(struct spi_master *master,
-					  struct spi_board_info *bi)
+static void spi_match_controller_to_boardinfo(struct spi_controller *ctlr,
+					      struct spi_board_info *bi)
 {
 	struct spi_device *dev;

-	if (master->bus_num != bi->bus_num)
+	if (ctlr->bus_num != bi->bus_num)
 		return;

-	dev = spi_new_device(master, bi);
+	dev = spi_new_device(ctlr, bi);
 	if (!dev)
-		dev_err(master->dev.parent, "can't create new device for %s\n",
+		dev_err(ctlr->dev.parent, "can't create new device for %s\n",
 			bi->modalias);
 }
...
...
@@ -697,7 +696,7 @@ int spi_register_board_info(struct spi_board_info const *info, unsigned n)
 		return -ENOMEM;

 	for (i = 0; i < n; i++, bi++, info++) {
-		struct spi_master *master;
+		struct spi_controller *ctlr;

 		memcpy(&bi->board_info, info, sizeof(*info));
 		if (info->properties) {
...
...
@@ -709,8 +708,9 @@ int spi_register_board_info(struct spi_board_info const *info, unsigned n)

 		mutex_lock(&board_lock);
 		list_add_tail(&bi->list, &board_list);
-		list_for_each_entry(master, &spi_master_list, list)
-			spi_match_master_to_boardinfo(master, &bi->board_info);
+		list_for_each_entry(ctlr, &spi_controller_list, list)
+			spi_match_controller_to_boardinfo(ctlr,
+							  &bi->board_info);
 		mutex_unlock(&board_lock);
 	}
...
...
@@ -727,16 +727,16 @@ static void spi_set_cs(struct spi_device *spi, bool enable)
 	if (gpio_is_valid(spi->cs_gpio)) {
 		gpio_set_value(spi->cs_gpio, !enable);
 		/* Some SPI masters need both GPIO CS & slave_select */
-		if ((spi->master->flags & SPI_MASTER_GPIO_SS) &&
-		    spi->master->set_cs)
-			spi->master->set_cs(spi, !enable);
-	} else if (spi->master->set_cs) {
-		spi->master->set_cs(spi, !enable);
+		if ((spi->controller->flags & SPI_MASTER_GPIO_SS) &&
+		    spi->controller->set_cs)
+			spi->controller->set_cs(spi, !enable);
+	} else if (spi->controller->set_cs) {
+		spi->controller->set_cs(spi, !enable);
 	}
 }

 #ifdef CONFIG_HAS_DMA
-static int spi_map_buf(struct spi_master *master, struct device *dev,
+static int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
 		       struct sg_table *sgt, void *buf, size_t len,
 		       enum dma_data_direction dir)
 {
...
...
@@ -761,7 +761,7 @@ static int spi_map_buf(struct spi_master *master, struct device *dev,
 		desc_len = min_t(int, max_seg_size, PAGE_SIZE);
 		sgs = DIV_ROUND_UP(len + offset_in_page(buf), desc_len);
 	} else if (virt_addr_valid(buf)) {
-		desc_len = min_t(int, max_seg_size, master->max_dma_len);
+		desc_len = min_t(int, max_seg_size, ctlr->max_dma_len);
 		sgs = DIV_ROUND_UP(len, desc_len);
 	} else {
 		return -EINVAL;
;
...
...
@@ -811,7 +811,7 @@ static int spi_map_buf(struct spi_master *master, struct device *dev,
 	return 0;
 }

-static void spi_unmap_buf(struct spi_master *master, struct device *dev,
+static void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
 			  struct sg_table *sgt, enum dma_data_direction dir)
 {
 	if (sgt->orig_nents) {
...
...
@@ -820,31 +820,31 @@ static void spi_unmap_buf(struct spi_master *master, struct device *dev,
 	}
 }

-static int __spi_map_msg(struct spi_master *master, struct spi_message *msg)
+static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
 	struct device *tx_dev, *rx_dev;
 	struct spi_transfer *xfer;
 	int ret;

-	if (!master->can_dma)
+	if (!ctlr->can_dma)
 		return 0;

-	if (master->dma_tx)
-		tx_dev = master->dma_tx->device->dev;
+	if (ctlr->dma_tx)
+		tx_dev = ctlr->dma_tx->device->dev;
 	else
-		tx_dev = master->dev.parent;
+		tx_dev = ctlr->dev.parent;

-	if (master->dma_rx)
-		rx_dev = master->dma_rx->device->dev;
+	if (ctlr->dma_rx)
+		rx_dev = ctlr->dma_rx->device->dev;
 	else
-		rx_dev = master->dev.parent;
+		rx_dev = ctlr->dev.parent;

 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
-		if (!master->can_dma(master, msg->spi, xfer))
+		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;

 		if (xfer->tx_buf != NULL) {
-			ret = spi_map_buf(master, tx_dev, &xfer->tx_sg,
+			ret = spi_map_buf(ctlr, tx_dev, &xfer->tx_sg,
 					  (void *)xfer->tx_buf, xfer->len,
 					  DMA_TO_DEVICE);
 			if (ret != 0)
...
...
@@ -852,79 +852,78 @@ static int __spi_map_msg(struct spi_master *master, struct spi_message *msg)
 		}

 		if (xfer->rx_buf != NULL) {
-			ret = spi_map_buf(master, rx_dev, &xfer->rx_sg,
+			ret = spi_map_buf(ctlr, rx_dev, &xfer->rx_sg,
 					  xfer->rx_buf, xfer->len,
 					  DMA_FROM_DEVICE);
 			if (ret != 0) {
-				spi_unmap_buf(master, tx_dev, &xfer->tx_sg,
+				spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg,
 					      DMA_TO_DEVICE);
 				return ret;
 			}
 		}
 	}

-	master->cur_msg_mapped = true;
+	ctlr->cur_msg_mapped = true;

 	return 0;
 }

-static int __spi_unmap_msg(struct spi_master *master, struct spi_message *msg)
+static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
 	struct spi_transfer *xfer;
 	struct device *tx_dev, *rx_dev;

-	if (!master->cur_msg_mapped || !master->can_dma)
+	if (!ctlr->cur_msg_mapped || !ctlr->can_dma)
 		return 0;

-	if (master->dma_tx)
-		tx_dev = master->dma_tx->device->dev;
+	if (ctlr->dma_tx)
+		tx_dev = ctlr->dma_tx->device->dev;
 	else
-		tx_dev = master->dev.parent;
+		tx_dev = ctlr->dev.parent;

-	if (master->dma_rx)
-		rx_dev = master->dma_rx->device->dev;
+	if (ctlr->dma_rx)
+		rx_dev = ctlr->dma_rx->device->dev;
 	else
-		rx_dev = master->dev.parent;
+		rx_dev = ctlr->dev.parent;

 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
-		if (!master->can_dma(master, msg->spi, xfer))
+		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;

-		spi_unmap_buf(master, rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
-		spi_unmap_buf(master, tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+		spi_unmap_buf(ctlr, rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+		spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
 	}

 	return 0;
 }
 #else /* !CONFIG_HAS_DMA */
-static inline int spi_map_buf(struct spi_master *master, struct device *dev,
-			      struct sg_table *sgt, void *buf, size_t len,
+static inline int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+			      struct sg_table *sgt, void *buf, size_t len,
 			      enum dma_data_direction dir)
 {
 	return -EINVAL;
 }

-static inline void spi_unmap_buf(struct spi_master *master,
+static inline void spi_unmap_buf(struct spi_controller *ctlr,
 				 struct device *dev, struct sg_table *sgt,
 				 enum dma_data_direction dir)
 {
 }

-static inline int __spi_map_msg(struct spi_master *master,
+static inline int __spi_map_msg(struct spi_controller *ctlr,
 				struct spi_message *msg)
 {
 	return 0;
 }

-static inline int __spi_unmap_msg(struct spi_master *master,
+static inline int __spi_unmap_msg(struct spi_controller *ctlr,
 				  struct spi_message *msg)
 {
 	return 0;
 }
 #endif /* !CONFIG_HAS_DMA */

-static inline int spi_unmap_msg(struct spi_master *master,
+static inline int spi_unmap_msg(struct spi_controller *ctlr,
 				struct spi_message *msg)
 {
 	struct spi_transfer *xfer;
...
@@ -934,63 +933,63 @@ static inline int spi_unmap_msg(struct spi_master *master,
 		 * Restore the original value of tx_buf or rx_buf if they are
 		 * NULL.
 		 */
-		if (xfer->tx_buf == master->dummy_tx)
+		if (xfer->tx_buf == ctlr->dummy_tx)
 			xfer->tx_buf = NULL;
-		if (xfer->rx_buf == master->dummy_rx)
+		if (xfer->rx_buf == ctlr->dummy_rx)
 			xfer->rx_buf = NULL;
 	}

-	return __spi_unmap_msg(master, msg);
+	return __spi_unmap_msg(ctlr, msg);
 }

-static int spi_map_msg(struct spi_master *master, struct spi_message *msg)
+static int spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
 	struct spi_transfer *xfer;
 	void *tmp;
 	unsigned int max_tx, max_rx;

-	if (master->flags & (SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX)) {
+	if (ctlr->flags & (SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX)) {
 		max_tx = 0;
 		max_rx = 0;

 		list_for_each_entry(xfer, &msg->transfers, transfer_list) {
-			if ((master->flags & SPI_MASTER_MUST_TX) &&
+			if ((ctlr->flags & SPI_CONTROLLER_MUST_TX) &&
 			    !xfer->tx_buf)
 				max_tx = max(xfer->len, max_tx);
-			if ((master->flags & SPI_MASTER_MUST_RX) &&
+			if ((ctlr->flags & SPI_CONTROLLER_MUST_RX) &&
 			    !xfer->rx_buf)
 				max_rx = max(xfer->len, max_rx);
 		}

 		if (max_tx) {
-			tmp = krealloc(master->dummy_tx, max_tx,
+			tmp = krealloc(ctlr->dummy_tx, max_tx,
 				       GFP_KERNEL | GFP_DMA);
 			if (!tmp)
 				return -ENOMEM;
-			master->dummy_tx = tmp;
+			ctlr->dummy_tx = tmp;
 			memset(tmp, 0, max_tx);
 		}

 		if (max_rx) {
-			tmp = krealloc(master->dummy_rx, max_rx,
+			tmp = krealloc(ctlr->dummy_rx, max_rx,
 				       GFP_KERNEL | GFP_DMA);
 			if (!tmp)
 				return -ENOMEM;
-			master->dummy_rx = tmp;
+			ctlr->dummy_rx = tmp;
 		}

 		if (max_tx || max_rx) {
 			list_for_each_entry(xfer, &msg->transfers,
 					    transfer_list) {
 				if (!xfer->tx_buf)
-					xfer->tx_buf = master->dummy_tx;
+					xfer->tx_buf = ctlr->dummy_tx;
 				if (!xfer->rx_buf)
-					xfer->rx_buf = master->dummy_rx;
+					xfer->rx_buf = ctlr->dummy_rx;
 			}
 		}
 	}

-	return __spi_map_msg(master, msg);
+	return __spi_map_msg(ctlr, msg);
 }

 /*
...
@@ -1000,14 +999,14 @@ static int spi_map_msg(struct spi_master *master, struct spi_message *msg)
  * drivers which implement a transfer_one() operation.  It provides
  * standard handling of delays and chip select management.
  */
-static int spi_transfer_one_message(struct spi_master *master,
+static int spi_transfer_one_message(struct spi_controller *ctlr,
 				    struct spi_message *msg)
 {
 	struct spi_transfer *xfer;
 	bool keep_cs = false;
 	int ret = 0;
 	unsigned long long ms = 1;
-	struct spi_statistics *statm = &master->statistics;
+	struct spi_statistics *statm = &ctlr->statistics;
 	struct spi_statistics *stats = &msg->spi->statistics;

 	spi_set_cs(msg->spi, true);
...
@@ -1018,13 +1017,13 @@ static int spi_transfer_one_message(struct spi_master *master,
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
 		trace_spi_transfer_start(msg, xfer);

-		spi_statistics_add_transfer_stats(statm, xfer, master);
-		spi_statistics_add_transfer_stats(stats, xfer, master);
+		spi_statistics_add_transfer_stats(statm, xfer, ctlr);
+		spi_statistics_add_transfer_stats(stats, xfer, ctlr);

 		if (xfer->tx_buf || xfer->rx_buf) {
-			reinit_completion(&master->xfer_completion);
+			reinit_completion(&ctlr->xfer_completion);

-			ret = master->transfer_one(master, msg->spi, xfer);
+			ret = ctlr->transfer_one(ctlr, msg->spi, xfer);
 			if (ret < 0) {
 				SPI_STATISTICS_INCREMENT_FIELD(statm, errors);
...
@@ -1044,7 +1043,7 @@ static int spi_transfer_one_message(struct spi_master *master,
 				if (ms > UINT_MAX)
 					ms = UINT_MAX;

-				ms = wait_for_completion_timeout(&master->xfer_completion,
+				ms = wait_for_completion_timeout(&ctlr->xfer_completion,
 								 msecs_to_jiffies(ms));
 			}
...
@@ -1099,33 +1098,33 @@ static int spi_transfer_one_message(struct spi_master *master,
 	if (msg->status == -EINPROGRESS)
 		msg->status = ret;

-	if (msg->status && master->handle_err)
-		master->handle_err(master, msg);
+	if (msg->status && ctlr->handle_err)
+		ctlr->handle_err(ctlr, msg);

-	spi_res_release(master, msg);
+	spi_res_release(ctlr, msg);

-	spi_finalize_current_message(master);
+	spi_finalize_current_message(ctlr);

 	return ret;
 }

 /**
  * spi_finalize_current_transfer - report completion of a transfer
- * @master: the master reporting completion
+ * @ctlr: the controller reporting completion
  *
  * Called by SPI drivers using the core transfer_one_message()
  * implementation to notify it that the current interrupt driven
  * transfer has finished and the next one may be scheduled.
  */
-void spi_finalize_current_transfer(struct spi_master *master)
+void spi_finalize_current_transfer(struct spi_controller *ctlr)
 {
-	complete(&master->xfer_completion);
+	complete(&ctlr->xfer_completion);
 }
 EXPORT_SYMBOL_GPL(spi_finalize_current_transfer);

 /**
  * __spi_pump_messages - function which processes spi message queue
- * @master: master to process queue for
+ * @ctlr: controller to process queue for
  * @in_kthread: true if we are in the context of the message pump thread
  *
  * This function checks if there is any spi message in the queue that
...
@@ -1136,136 +1135,136 @@ EXPORT_SYMBOL_GPL(spi_finalize_current_transfer);
  * inside spi_sync(); the queue extraction handling at the top of the
  * function should deal with this safely.
  */
-static void __spi_pump_messages(struct spi_master *master, bool in_kthread)
+static void __spi_pump_messages(struct spi_controller *ctlr, bool in_kthread)
 {
 	unsigned long flags;
 	bool was_busy = false;
 	int ret;

 	/* Lock queue */
-	spin_lock_irqsave(&master->queue_lock, flags);
+	spin_lock_irqsave(&ctlr->queue_lock, flags);

 	/* Make sure we are not already running a message */
-	if (master->cur_msg) {
-		spin_unlock_irqrestore(&master->queue_lock, flags);
+	if (ctlr->cur_msg) {
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 		return;
 	}

 	/* If another context is idling the device then defer */
-	if (master->idling) {
-		kthread_queue_work(&master->kworker, &master->pump_messages);
-		spin_unlock_irqrestore(&master->queue_lock, flags);
+	if (ctlr->idling) {
+		kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 		return;
 	}

 	/* Check if the queue is idle */
-	if (list_empty(&master->queue) || !master->running) {
-		if (!master->busy) {
-			spin_unlock_irqrestore(&master->queue_lock, flags);
+	if (list_empty(&ctlr->queue) || !ctlr->running) {
+		if (!ctlr->busy) {
+			spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 			return;
 		}

 		/* Only do teardown in the thread */
 		if (!in_kthread) {
-			kthread_queue_work(&master->kworker,
-					   &master->pump_messages);
-			spin_unlock_irqrestore(&master->queue_lock, flags);
+			kthread_queue_work(&ctlr->kworker,
+					   &ctlr->pump_messages);
+			spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 			return;
 		}

-		master->busy = false;
-		master->idling = true;
-		spin_unlock_irqrestore(&master->queue_lock, flags);
-
-		kfree(master->dummy_rx);
-		master->dummy_rx = NULL;
-		kfree(master->dummy_tx);
-		master->dummy_tx = NULL;
-		if (master->unprepare_transfer_hardware &&
-		    master->unprepare_transfer_hardware(master))
-			dev_err(&master->dev,
+		ctlr->busy = false;
+		ctlr->idling = true;
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
+
+		kfree(ctlr->dummy_rx);
+		ctlr->dummy_rx = NULL;
+		kfree(ctlr->dummy_tx);
+		ctlr->dummy_tx = NULL;
+		if (ctlr->unprepare_transfer_hardware &&
+		    ctlr->unprepare_transfer_hardware(ctlr))
+			dev_err(&ctlr->dev,
 				"failed to unprepare transfer hardware\n");
-		if (master->auto_runtime_pm) {
-			pm_runtime_mark_last_busy(master->dev.parent);
-			pm_runtime_put_autosuspend(master->dev.parent);
+		if (ctlr->auto_runtime_pm) {
+			pm_runtime_mark_last_busy(ctlr->dev.parent);
+			pm_runtime_put_autosuspend(ctlr->dev.parent);
 		}
-		trace_spi_master_idle(master);
+		trace_spi_controller_idle(ctlr);

-		spin_lock_irqsave(&master->queue_lock, flags);
-		master->idling = false;
-		spin_unlock_irqrestore(&master->queue_lock, flags);
+		spin_lock_irqsave(&ctlr->queue_lock, flags);
+		ctlr->idling = false;
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 		return;
 	}

 	/* Extract head of queue */
-	master->cur_msg =
-		list_first_entry(&master->queue, struct spi_message, queue);
+	ctlr->cur_msg =
+		list_first_entry(&ctlr->queue, struct spi_message, queue);

-	list_del_init(&master->cur_msg->queue);
-	if (master->busy)
+	list_del_init(&ctlr->cur_msg->queue);
+	if (ctlr->busy)
 		was_busy = true;
 	else
-		master->busy = true;
-	spin_unlock_irqrestore(&master->queue_lock, flags);
+		ctlr->busy = true;
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

-	mutex_lock(&master->io_mutex);
+	mutex_lock(&ctlr->io_mutex);

-	if (!was_busy && master->auto_runtime_pm) {
-		ret = pm_runtime_get_sync(master->dev.parent);
+	if (!was_busy && ctlr->auto_runtime_pm) {
+		ret = pm_runtime_get_sync(ctlr->dev.parent);
 		if (ret < 0) {
-			dev_err(&master->dev, "Failed to power device: %d\n",
+			dev_err(&ctlr->dev, "Failed to power device: %d\n",
 				ret);
-			mutex_unlock(&master->io_mutex);
+			mutex_unlock(&ctlr->io_mutex);
 			return;
 		}
 	}

 	if (!was_busy)
-		trace_spi_master_busy(master);
+		trace_spi_controller_busy(ctlr);

-	if (!was_busy && master->prepare_transfer_hardware) {
-		ret = master->prepare_transfer_hardware(master);
+	if (!was_busy && ctlr->prepare_transfer_hardware) {
+		ret = ctlr->prepare_transfer_hardware(ctlr);
 		if (ret) {
-			dev_err(&master->dev,
+			dev_err(&ctlr->dev,
 				"failed to prepare transfer hardware\n");

-			if (master->auto_runtime_pm)
-				pm_runtime_put(master->dev.parent);
-			mutex_unlock(&master->io_mutex);
+			if (ctlr->auto_runtime_pm)
+				pm_runtime_put(ctlr->dev.parent);
+			mutex_unlock(&ctlr->io_mutex);
 			return;
 		}
 	}

-	trace_spi_message_start(master->cur_msg);
+	trace_spi_message_start(ctlr->cur_msg);

-	if (master->prepare_message) {
-		ret = master->prepare_message(master, master->cur_msg);
+	if (ctlr->prepare_message) {
+		ret = ctlr->prepare_message(ctlr, ctlr->cur_msg);
 		if (ret) {
-			dev_err(&master->dev, "failed to prepare message: %d\n",
-				ret);
-			master->cur_msg->status = ret;
-			spi_finalize_current_message(master);
+			dev_err(&ctlr->dev, "failed to prepare message: %d\n",
+				ret);
+			ctlr->cur_msg->status = ret;
+			spi_finalize_current_message(ctlr);
 			goto out;
 		}
-		master->cur_msg_prepared = true;
+		ctlr->cur_msg_prepared = true;
 	}

-	ret = spi_map_msg(master, master->cur_msg);
+	ret = spi_map_msg(ctlr, ctlr->cur_msg);
 	if (ret) {
-		master->cur_msg->status = ret;
-		spi_finalize_current_message(master);
+		ctlr->cur_msg->status = ret;
+		spi_finalize_current_message(ctlr);
 		goto out;
 	}

-	ret = master->transfer_one_message(master, master->cur_msg);
+	ret = ctlr->transfer_one_message(ctlr, ctlr->cur_msg);
 	if (ret) {
-		dev_err(&master->dev,
+		dev_err(&ctlr->dev,
 			"failed to transfer one message from queue\n");
 		goto out;
 	}

 out:
-	mutex_unlock(&master->io_mutex);
+	mutex_unlock(&ctlr->io_mutex);

 	/* Prod the scheduler in case transfer_one() was busy waiting */
 	if (!ret)
...
@@ -1274,44 +1273,43 @@ static void __spi_pump_messages(struct spi_master *master, bool in_kthread)
 /**
  * spi_pump_messages - kthread work function which processes spi message queue
- * @work: pointer to kthread work struct contained in the master struct
+ * @work: pointer to kthread work struct contained in the controller struct
  */
 static void spi_pump_messages(struct kthread_work *work)
 {
-	struct spi_master *master =
-		container_of(work, struct spi_master, pump_messages);
+	struct spi_controller *ctlr =
+		container_of(work, struct spi_controller, pump_messages);

-	__spi_pump_messages(master, true);
+	__spi_pump_messages(ctlr, true);
 }

-static int spi_init_queue(struct spi_master *master)
+static int spi_init_queue(struct spi_controller *ctlr)
 {
 	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

-	master->running = false;
-	master->busy = false;
+	ctlr->running = false;
+	ctlr->busy = false;

-	kthread_init_worker(&master->kworker);
-	master->kworker_task = kthread_run(kthread_worker_fn, &master->kworker,
-					   "%s", dev_name(&master->dev));
-	if (IS_ERR(master->kworker_task)) {
-		dev_err(&master->dev, "failed to create message pump task\n");
-		return PTR_ERR(master->kworker_task);
+	kthread_init_worker(&ctlr->kworker);
+	ctlr->kworker_task = kthread_run(kthread_worker_fn, &ctlr->kworker,
+					 "%s", dev_name(&ctlr->dev));
+	if (IS_ERR(ctlr->kworker_task)) {
+		dev_err(&ctlr->dev, "failed to create message pump task\n");
+		return PTR_ERR(ctlr->kworker_task);
 	}
-	kthread_init_work(&master->pump_messages, spi_pump_messages);
+	kthread_init_work(&ctlr->pump_messages, spi_pump_messages);

 	/*
-	 * Master config will indicate if this controller should run the
+	 * Controller config will indicate if this controller should run the
 	 * message pump with high (realtime) priority to reduce the transfer
 	 * latency on the bus by minimising the delay between a transfer
 	 * request and the scheduling of the message pump thread. Without this
 	 * setting the message pump thread will remain at default priority.
 	 */
-	if (master->rt) {
-		dev_info(&master->dev,
+	if (ctlr->rt) {
+		dev_info(&ctlr->dev,
 			"will run message pump with realtime priority\n");
-		sched_setscheduler(master->kworker_task, SCHED_FIFO, &param);
+		sched_setscheduler(ctlr->kworker_task, SCHED_FIFO, &param);
 	}

 	return 0;
...
@@ -1320,23 +1318,23 @@ static int spi_init_queue(struct spi_master *master)
 /**
  * spi_get_next_queued_message() - called by driver to check for queued
  * messages
- * @master: the master to check for queued messages
+ * @ctlr: the controller to check for queued messages
  *
  * If there are more messages in the queue, the next message is returned from
  * this call.
  *
  * Return: the next message in the queue, else NULL if the queue is empty.
  */
-struct spi_message *spi_get_next_queued_message(struct spi_master *master)
+struct spi_message *spi_get_next_queued_message(struct spi_controller *ctlr)
 {
 	struct spi_message *next;
 	unsigned long flags;

 	/* get a pointer to the next message, if any */
-	spin_lock_irqsave(&master->queue_lock, flags);
-	next = list_first_entry_or_null(&master->queue, struct spi_message,
+	spin_lock_irqsave(&ctlr->queue_lock, flags);
+	next = list_first_entry_or_null(&ctlr->queue, struct spi_message,
 					queue);
-	spin_unlock_irqrestore(&master->queue_lock, flags);
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

 	return next;
 }
...
@@ -1344,36 +1342,36 @@ EXPORT_SYMBOL_GPL(spi_get_next_queued_message);
 /**
  * spi_finalize_current_message() - the current message is complete
- * @master: the master to return the message to
+ * @ctlr: the controller to return the message to
  *
  * Called by the driver to notify the core that the message in the front of the
  * queue is complete and can be removed from the queue.
  */
-void spi_finalize_current_message(struct spi_master *master)
+void spi_finalize_current_message(struct spi_controller *ctlr)
 {
 	struct spi_message *mesg;
 	unsigned long flags;
 	int ret;

-	spin_lock_irqsave(&master->queue_lock, flags);
-	mesg = master->cur_msg;
-	spin_unlock_irqrestore(&master->queue_lock, flags);
+	spin_lock_irqsave(&ctlr->queue_lock, flags);
+	mesg = ctlr->cur_msg;
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

-	spi_unmap_msg(master, mesg);
+	spi_unmap_msg(ctlr, mesg);

-	if (master->cur_msg_prepared && master->unprepare_message) {
-		ret = master->unprepare_message(master, mesg);
+	if (ctlr->cur_msg_prepared && ctlr->unprepare_message) {
+		ret = ctlr->unprepare_message(ctlr, mesg);
 		if (ret) {
-			dev_err(&master->dev,
-				"failed to unprepare message: %d\n", ret);
+			dev_err(&ctlr->dev, "failed to unprepare message: %d\n",
+				ret);
 		}
 	}

-	spin_lock_irqsave(&master->queue_lock, flags);
-	master->cur_msg = NULL;
-	master->cur_msg_prepared = false;
-	kthread_queue_work(&master->kworker, &master->pump_messages);
-	spin_unlock_irqrestore(&master->queue_lock, flags);
+	spin_lock_irqsave(&ctlr->queue_lock, flags);
+	ctlr->cur_msg = NULL;
+	ctlr->cur_msg_prepared = false;
+	kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

 	trace_spi_message_done(mesg);
...
@@ -1383,66 +1381,65 @@ void spi_finalize_current_message(struct spi_master *master)
 }
 EXPORT_SYMBOL_GPL(spi_finalize_current_message);

-static int spi_start_queue(struct spi_master *master)
+static int spi_start_queue(struct spi_controller *ctlr)
 {
 	unsigned long flags;

-	spin_lock_irqsave(&master->queue_lock, flags);
+	spin_lock_irqsave(&ctlr->queue_lock, flags);

-	if (master->running || master->busy) {
-		spin_unlock_irqrestore(&master->queue_lock, flags);
+	if (ctlr->running || ctlr->busy) {
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 		return -EBUSY;
 	}

-	master->running = true;
-	master->cur_msg = NULL;
-	spin_unlock_irqrestore(&master->queue_lock, flags);
+	ctlr->running = true;
+	ctlr->cur_msg = NULL;
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

-	kthread_queue_work(&master->kworker, &master->pump_messages);
+	kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);

 	return 0;
 }

-static int spi_stop_queue(struct spi_master *master)
+static int spi_stop_queue(struct spi_controller *ctlr)
 {
 	unsigned long flags;
 	unsigned limit = 500;
 	int ret = 0;

-	spin_lock_irqsave(&master->queue_lock, flags);
+	spin_lock_irqsave(&ctlr->queue_lock, flags);

 	/*
 	 * This is a bit lame, but is optimized for the common execution path.
-	 * A wait_queue on the master->busy could be used, but then the common
+	 * A wait_queue on the ctlr->busy could be used, but then the common
 	 * execution path (pump_messages) would be required to call wake_up or
 	 * friends on every SPI message. Do this instead.
 	 */
-	while ((!list_empty(&master->queue) || master->busy) && limit--) {
-		spin_unlock_irqrestore(&master->queue_lock, flags);
+	while ((!list_empty(&ctlr->queue) || ctlr->busy) && limit--) {
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 		usleep_range(10000, 11000);
-		spin_lock_irqsave(&master->queue_lock, flags);
+		spin_lock_irqsave(&ctlr->queue_lock, flags);
 	}

-	if (!list_empty(&master->queue) || master->busy)
+	if (!list_empty(&ctlr->queue) || ctlr->busy)
 		ret = -EBUSY;
 	else
-		master->running = false;
+		ctlr->running = false;

-	spin_unlock_irqrestore(&master->queue_lock, flags);
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

 	if (ret) {
-		dev_warn(&master->dev, "could not stop message queue\n");
+		dev_warn(&ctlr->dev, "could not stop message queue\n");
 		return ret;
 	}
 	return ret;
 }

-static int spi_destroy_queue(struct spi_master *master)
+static int spi_destroy_queue(struct spi_controller *ctlr)
 {
 	int ret;

-	ret = spi_stop_queue(master);
+	ret = spi_stop_queue(ctlr);

 	/*
 	 * kthread_flush_worker will block until all work is done.
...
@@ -1451,12 +1448,12 @@ static int spi_destroy_queue(struct spi_master *master)
 	 * return anyway.
 	 */
 	if (ret) {
-		dev_err(&master->dev, "problem destroying queue\n");
+		dev_err(&ctlr->dev, "problem destroying queue\n");
 		return ret;
 	}

-	kthread_flush_worker(&master->kworker);
-	kthread_stop(master->kworker_task);
+	kthread_flush_worker(&ctlr->kworker);
+	kthread_stop(ctlr->kworker_task);

 	return 0;
 }
...
@@ -1465,23 +1462,23 @@ static int __spi_queued_transfer(struct spi_device *spi,
 				 struct spi_message *msg,
 				 bool need_pump)
 {
-	struct spi_master *master = spi->master;
+	struct spi_controller *ctlr = spi->controller;
 	unsigned long flags;

-	spin_lock_irqsave(&master->queue_lock, flags);
+	spin_lock_irqsave(&ctlr->queue_lock, flags);

-	if (!master->running) {
-		spin_unlock_irqrestore(&master->queue_lock, flags);
+	if (!ctlr->running) {
+		spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 		return -ESHUTDOWN;
 	}
 	msg->actual_length = 0;
 	msg->status = -EINPROGRESS;

-	list_add_tail(&msg->queue, &master->queue);
-	if (!master->busy && need_pump)
-		kthread_queue_work(&master->kworker, &master->pump_messages);
+	list_add_tail(&msg->queue, &ctlr->queue);
+	if (!ctlr->busy && need_pump)
+		kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);

-	spin_unlock_irqrestore(&master->queue_lock, flags);
+	spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 	return 0;
 }
...
@@ -1497,31 +1494,31 @@ static int spi_queued_transfer(struct spi_device *spi, struct spi_message *msg)
 	return __spi_queued_transfer(spi, msg, true);
 }

-static int spi_master_initialize_queue(struct spi_master *master)
+static int spi_controller_initialize_queue(struct spi_controller *ctlr)
 {
 	int ret;

-	master->transfer = spi_queued_transfer;
-	if (!master->transfer_one_message)
-		master->transfer_one_message = spi_transfer_one_message;
+	ctlr->transfer = spi_queued_transfer;
+	if (!ctlr->transfer_one_message)
+		ctlr->transfer_one_message = spi_transfer_one_message;

 	/* Initialize and start queue */
-	ret = spi_init_queue(master);
+	ret = spi_init_queue(ctlr);
 	if (ret) {
-		dev_err(&master->dev, "problem initializing queue\n");
+		dev_err(&ctlr->dev, "problem initializing queue\n");
 		goto err_init_queue;
 	}
-	master->queued = true;
-	ret = spi_start_queue(master);
+	ctlr->queued = true;
+	ret = spi_start_queue(ctlr);
 	if (ret) {
-		dev_err(&master->dev, "problem starting queue\n");
+		dev_err(&ctlr->dev, "problem starting queue\n");
 		goto err_start_queue;
 	}

 	return 0;

 err_start_queue:
-	spi_destroy_queue(master);
+	spi_destroy_queue(ctlr);
 err_init_queue:
 	return ret;
 }
...
@@ -1529,21 +1526,12 @@ static int spi_master_initialize_queue(struct spi_master *master)
 /*-------------------------------------------------------------------------*/

 #if defined(CONFIG_OF)
-static int of_spi_parse_dt(struct spi_master *master, struct spi_device *spi,
+static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi,
 			   struct device_node *nc)
 {
 	u32 value;
 	int rc;

-	/* Device address */
-	rc = of_property_read_u32(nc, "reg", &value);
-	if (rc) {
-		dev_err(&master->dev, "%s has no valid 'reg' property (%d)\n",
-			nc->full_name, rc);
-		return rc;
-	}
-	spi->chip_select = value;
-
 	/* Mode (clock phase/polarity/etc.) */
 	if (of_find_property(nc, "spi-cpha", NULL))
 		spi->mode |= SPI_CPHA;
...
@@ -1568,7 +1556,7 @@ static int of_spi_parse_dt(struct spi_master *master, struct spi_device *spi,
 			spi->mode |= SPI_TX_QUAD;
 			break;
 		default:
-			dev_warn(&master->dev,
+			dev_warn(&ctlr->dev,
 				"spi-tx-bus-width %d not supported\n",
 				value);
 			break;
...
@@ -1586,17 +1574,36 @@ static int of_spi_parse_dt(struct spi_master *master, struct spi_device *spi,
 			spi->mode |= SPI_RX_QUAD;
 			break;
 		default:
-			dev_warn(&master->dev,
+			dev_warn(&ctlr->dev,
 				"spi-rx-bus-width %d not supported\n",
 				value);
 			break;
 		}
 	}

+	if (spi_controller_is_slave(ctlr)) {
+		if (strcmp(nc->name, "slave")) {
+			dev_err(&ctlr->dev, "%s is not called 'slave'\n",
+				nc->full_name);
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	/* Device address */
+	rc = of_property_read_u32(nc, "reg", &value);
+	if (rc) {
+		dev_err(&ctlr->dev, "%s has no valid 'reg' property (%d)\n",
+			nc->full_name, rc);
+		return rc;
+	}
+	spi->chip_select = value;
+
 	/* Device speed */
 	rc = of_property_read_u32(nc, "spi-max-frequency", &value);
 	if (rc) {
-		dev_err(&master->dev,
-			"%s has no valid 'spi-max-frequency' property (%d)\n",
+		dev_err(&ctlr->dev,
+			"%s has no valid 'spi-max-frequency' property (%d)\n",
 			nc->full_name, rc);
 		return rc;
 	}
...
@@ -1606,15 +1613,15 @@ static int of_spi_parse_dt(struct spi_master *master, struct spi_device *spi,
 }

 static struct spi_device *
-of_register_spi_device(struct spi_master *master, struct device_node *nc)
+of_register_spi_device(struct spi_controller *ctlr, struct device_node *nc)
 {
 	struct spi_device *spi;
 	int rc;

 	/* Alloc an spi_device */
-	spi = spi_alloc_device(master);
+	spi = spi_alloc_device(ctlr);
 	if (!spi) {
-		dev_err(&master->dev, "spi_device alloc error for %s\n",
+		dev_err(&ctlr->dev, "spi_device alloc error for %s\n",
 			nc->full_name);
 		rc = -ENOMEM;
 		goto err_out;
...
@@ -1624,12 +1631,12 @@ of_register_spi_device(struct spi_master *master, struct device_node *nc)
 	rc = of_modalias_node(nc, spi->modalias, sizeof(spi->modalias));
 	if (rc < 0) {
-		dev_err(&master->dev, "cannot find modalias for %s\n",
+		dev_err(&ctlr->dev, "cannot find modalias for %s\n",
 			nc->full_name);
 		goto err_out;
 	}

-	rc = of_spi_parse_dt(master, spi, nc);
+	rc = of_spi_parse_dt(ctlr, spi, nc);
 	if (rc)
 		goto err_out;
...
@@ -1640,7 +1647,7 @@ of_register_spi_device(struct spi_master *master, struct device_node *nc)
 	/* Register the new device */
 	rc = spi_add_device(spi);
 	if (rc) {
-		dev_err(&master->dev, "spi_device register error %s\n",
+		dev_err(&ctlr->dev, "spi_device register error %s\n",
 			nc->full_name);
 		goto err_of_node_put;
 	}
...
@@ -1656,39 +1663,40 @@ of_register_spi_device(struct spi_master *master, struct device_node *nc)
 /**
  * of_register_spi_devices() - Register child devices onto the SPI bus
- * @master:	Pointer to spi_master device
+ * @ctlr:	Pointer to spi_controller device
  *
- * Registers an spi_device for each child node of master node which has a 'reg'
- * property.
+ * Registers an spi_device for each child node of controller node which
+ * represents a valid SPI slave.
  */
-static void of_register_spi_devices(struct spi_master *master)
+static void of_register_spi_devices(struct spi_controller *ctlr)
 {
 	struct spi_device *spi;
 	struct device_node *nc;

-	if (!master->dev.of_node)
+	if (!ctlr->dev.of_node)
 		return;

-	for_each_available_child_of_node(master->dev.of_node, nc) {
+	for_each_available_child_of_node(ctlr->dev.of_node, nc) {
 		if (of_node_test_and_set_flag(nc, OF_POPULATED))
 			continue;
-		spi = of_register_spi_device(master, nc);
+		spi = of_register_spi_device(ctlr, nc);
 		if (IS_ERR(spi)) {
-			dev_warn(&master->dev,
+			dev_warn(&ctlr->dev,
 				 "Failed to create SPI device for %s\n",
 				 nc->full_name);
 			of_node_clear_flag(nc, OF_POPULATED);
 		}
 	}
 }
 #else
-static void of_register_spi_devices(struct spi_master *master) { }
+static void of_register_spi_devices(struct spi_controller *ctlr) { }
 #endif

 #ifdef CONFIG_ACPI
 static int acpi_spi_add_resource(struct acpi_resource *ares, void *data)
 {
 	struct spi_device *spi = data;
-	struct spi_master *master = spi->master;
+	struct spi_controller *ctlr = spi->controller;

 	if (ares->type == ACPI_RESOURCE_TYPE_SERIAL_BUS) {
 		struct acpi_resource_spi_serialbus *sb;
...
@@ -1702,8 +1710,8 @@ static int acpi_spi_add_resource(struct acpi_resource *ares, void *data)
 		 * 0 .. max - 1 so we need to ask the driver to
 		 * translate between the two schemes.
 		 */
-		if (master->fw_translate_cs) {
-			int cs = master->fw_translate_cs(master,
+		if (ctlr->fw_translate_cs) {
+			int cs = ctlr->fw_translate_cs(ctlr,
 					sb->device_selection);
 			if (cs < 0)
 				return cs;
...
@@ -1732,7 +1740,7 @@ static int acpi_spi_add_resource(struct acpi_resource *ares, void *data)
     return 1;
 }
 
-static acpi_status acpi_register_spi_device(struct spi_master *master,
+static acpi_status acpi_register_spi_device(struct spi_controller *ctlr,
                         struct acpi_device *adev)
 {
     struct list_head resource_list;
@@ -1743,9 +1751,9 @@ static acpi_status acpi_register_spi_device(struct spi_master *master,
         acpi_device_enumerated(adev))
         return AE_OK;
 
-    spi = spi_alloc_device(master);
+    spi = spi_alloc_device(ctlr);
     if (!spi) {
-        dev_err(&master->dev, "failed to allocate SPI device for %s\n",
+        dev_err(&ctlr->dev, "failed to allocate SPI device for %s\n",
             dev_name(&adev->dev));
         return AE_NO_MEMORY;
     }
@@ -1774,7 +1782,7 @@ static acpi_status acpi_register_spi_device(struct spi_master *master,
     adev->power.flags.ignore_parent = true;
     if (spi_add_device(spi)) {
         adev->power.flags.ignore_parent = false;
-        dev_err(&master->dev, "failed to add SPI device %s from ACPI\n",
+        dev_err(&ctlr->dev, "failed to add SPI device %s from ACPI\n",
             dev_name(&adev->dev));
         spi_dev_put(spi);
     }
@@ -1785,104 +1793,211 @@ static acpi_status acpi_register_spi_device(struct spi_master *master,
 static acpi_status acpi_spi_add_device(acpi_handle handle, u32 level,
                        void *data, void **return_value)
 {
-    struct spi_master *master = data;
+    struct spi_controller *ctlr = data;
     struct acpi_device *adev;
 
     if (acpi_bus_get_device(handle, &adev))
         return AE_OK;
 
-    return acpi_register_spi_device(master, adev);
+    return acpi_register_spi_device(ctlr, adev);
 }
 
-static void acpi_register_spi_devices(struct spi_master *master)
+static void acpi_register_spi_devices(struct spi_controller *ctlr)
 {
     acpi_status status;
     acpi_handle handle;
 
-    handle = ACPI_HANDLE(master->dev.parent);
+    handle = ACPI_HANDLE(ctlr->dev.parent);
     if (!handle)
         return;
 
     status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1,
-                     acpi_spi_add_device, NULL, master, NULL);
+                     acpi_spi_add_device, NULL, ctlr, NULL);
     if (ACPI_FAILURE(status))
-        dev_warn(&master->dev, "failed to enumerate SPI slaves\n");
+        dev_warn(&ctlr->dev, "failed to enumerate SPI slaves\n");
 }
 #else
-static inline void acpi_register_spi_devices(struct spi_master *master) {}
+static inline void acpi_register_spi_devices(struct spi_controller *ctlr) {}
 #endif /* CONFIG_ACPI */
-static void spi_master_release(struct device *dev)
+static void spi_controller_release(struct device *dev)
 {
-    struct spi_master *master;
+    struct spi_controller *ctlr;
 
-    master = container_of(dev, struct spi_master, dev);
-    kfree(master);
+    ctlr = container_of(dev, struct spi_controller, dev);
+    kfree(ctlr);
 }
 
 static struct class spi_master_class = {
     .name       = "spi_master",
     .owner      = THIS_MODULE,
-    .dev_release    = spi_master_release,
+    .dev_release    = spi_controller_release,
     .dev_groups = spi_master_groups,
 };
+#ifdef CONFIG_SPI_SLAVE
+/**
+ * spi_slave_abort - abort the ongoing transfer request on an SPI slave
+ *                   controller
+ * @spi: device used for the current transfer
+ */
+int spi_slave_abort(struct spi_device *spi)
+{
+    struct spi_controller *ctlr = spi->controller;
+
+    if (spi_controller_is_slave(ctlr) && ctlr->slave_abort)
+        return ctlr->slave_abort(ctlr);
+
+    return -ENOTSUPP;
+}
+EXPORT_SYMBOL_GPL(spi_slave_abort);
+
+static int match_true(struct device *dev, void *data)
+{
+    return 1;
+}
+
+static ssize_t spi_slave_show(struct device *dev,
+                  struct device_attribute *attr, char *buf)
+{
+    struct spi_controller *ctlr = container_of(dev, struct spi_controller,
+                           dev);
+    struct device *child;
+
+    child = device_find_child(&ctlr->dev, NULL, match_true);
+    return sprintf(buf, "%s\n",
+               child ? to_spi_device(child)->modalias : NULL);
+}
+
+static ssize_t spi_slave_store(struct device *dev,
+                   struct device_attribute *attr, const char *buf,
+                   size_t count)
+{
+    struct spi_controller *ctlr = container_of(dev, struct spi_controller,
+                           dev);
+    struct spi_device *spi;
+    struct device *child;
+    char name[32];
+    int rc;
+
+    rc = sscanf(buf, "%31s", name);
+    if (rc != 1 || !name[0])
+        return -EINVAL;
+
+    child = device_find_child(&ctlr->dev, NULL, match_true);
+    if (child) {
+        /* Remove registered slave */
+        device_unregister(child);
+        put_device(child);
+    }
+
+    if (strcmp(name, "(null)")) {
+        /* Register new slave */
+        spi = spi_alloc_device(ctlr);
+        if (!spi)
+            return -ENOMEM;
+
+        strlcpy(spi->modalias, name, sizeof(spi->modalias));
+
+        rc = spi_add_device(spi);
+        if (rc) {
+            spi_dev_put(spi);
+            return rc;
+        }
+    }
+
+    return count;
+}
+
+static DEVICE_ATTR(slave, 0644, spi_slave_show, spi_slave_store);
+
+static struct attribute *spi_slave_attrs[] = {
+    &dev_attr_slave.attr,
+    NULL,
+};
+
+static const struct attribute_group spi_slave_group = {
+    .attrs = spi_slave_attrs,
+};
+
+static const struct attribute_group *spi_slave_groups[] = {
+    &spi_controller_statistics_group,
+    &spi_slave_group,
+    NULL,
+};
+
+static struct class spi_slave_class = {
+    .name       = "spi_slave",
+    .owner      = THIS_MODULE,
+    .dev_release    = spi_controller_release,
+    .dev_groups = spi_slave_groups,
+};
+#else
+extern struct class spi_slave_class;    /* dummy */
+#endif
 /**
- * spi_alloc_master - allocate SPI master controller
+ * __spi_alloc_controller - allocate an SPI master or slave controller
  * @dev: the controller, possibly using the platform_bus
  * @size: how much zeroed driver-private data to allocate; the pointer to this
  *  memory is in the driver_data field of the returned device,
- *  accessible with spi_master_get_devdata().
+ *  accessible with spi_controller_get_devdata().
+ * @slave: flag indicating whether to allocate an SPI master (false) or SPI
+ *  slave (true) controller
  * Context: can sleep
  *
- * This call is used only by SPI master controller drivers, which are the
+ * This call is used only by SPI controller drivers, which are the
  * only ones directly touching chip registers.  It's how they allocate
- * an spi_master structure, prior to calling spi_register_master().
+ * an spi_controller structure, prior to calling spi_register_controller().
  *
  * This must be called from context that can sleep.
  *
- * The caller is responsible for assigning the bus number and initializing
- * the master's methods before calling spi_register_master(); and (after errors
- * adding the device) calling spi_master_put() to prevent a memory leak.
+ * The caller is responsible for assigning the bus number and initializing the
+ * controller's methods before calling spi_register_controller(); and (after
+ * errors adding the device) calling spi_controller_put() to prevent a memory
+ * leak.
  *
- * Return: the SPI master structure on success, else NULL.
+ * Return: the SPI controller structure on success, else NULL.
  */
-struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
+struct spi_controller *__spi_alloc_controller(struct device *dev,
+                          unsigned int size, bool slave)
 {
-    struct spi_master   *master;
+    struct spi_controller   *ctlr;
 
     if (!dev)
         return NULL;
 
-    master = kzalloc(size + sizeof(*master), GFP_KERNEL);
-    if (!master)
+    ctlr = kzalloc(size + sizeof(*ctlr), GFP_KERNEL);
+    if (!ctlr)
         return NULL;
 
-    device_initialize(&master->dev);
-    master->bus_num = -1;
-    master->num_chipselect = 1;
-    master->dev.class = &spi_master_class;
-    master->dev.parent = dev;
-    pm_suspend_ignore_children(&master->dev, true);
-    spi_master_set_devdata(master, &master[1]);
+    device_initialize(&ctlr->dev);
+    ctlr->bus_num = -1;
+    ctlr->num_chipselect = 1;
+    ctlr->slave = slave;
+    if (IS_ENABLED(CONFIG_SPI_SLAVE) && slave)
+        ctlr->dev.class = &spi_slave_class;
+    else
+        ctlr->dev.class = &spi_master_class;
+    ctlr->dev.parent = dev;
+    pm_suspend_ignore_children(&ctlr->dev, true);
+    spi_controller_set_devdata(ctlr, &ctlr[1]);
 
-    return master;
+    return ctlr;
 }
-EXPORT_SYMBOL_GPL(spi_alloc_master);
+EXPORT_SYMBOL_GPL(__spi_alloc_controller);
 #ifdef CONFIG_OF
-static int of_spi_register_master(struct spi_master *master)
+static int of_spi_register_master(struct spi_controller *ctlr)
 {
     int nb, i, *cs;
-    struct device_node *np = master->dev.of_node;
+    struct device_node *np = ctlr->dev.of_node;
 
     if (!np)
         return 0;
 
     nb = of_gpio_named_count(np, "cs-gpios");
-    master->num_chipselect = max_t(int, nb, master->num_chipselect);
+    ctlr->num_chipselect = max_t(int, nb, ctlr->num_chipselect);
 
     /* Return error only for an incorrectly formed cs-gpios property */
     if (nb == 0 || nb == -ENOENT)
@@ -1890,15 +2005,14 @@ static int of_spi_register_master(struct spi_master *master)
     else if (nb < 0)
         return nb;
 
-    cs = devm_kzalloc(&master->dev, sizeof(int) * master->num_chipselect,
+    cs = devm_kzalloc(&ctlr->dev, sizeof(int) * ctlr->num_chipselect,
               GFP_KERNEL);
-    master->cs_gpios = cs;
+    ctlr->cs_gpios = cs;
 
-    if (!master->cs_gpios)
+    if (!ctlr->cs_gpios)
         return -ENOMEM;
 
-    for (i = 0; i < master->num_chipselect; i++)
+    for (i = 0; i < ctlr->num_chipselect; i++)
         cs[i] = -ENOENT;
 
     for (i = 0; i < nb; i++)
@@ -1907,20 +2021,21 @@ static int of_spi_register_master(struct spi_master *master)
     return 0;
 }
 #else
-static int of_spi_register_master(struct spi_master *master)
+static int of_spi_register_master(struct spi_controller *ctlr)
 {
     return 0;
 }
 #endif
 
 /**
- * spi_register_master - register SPI master controller
- * @master: initialized master, originally from spi_alloc_master()
+ * spi_register_controller - register SPI master or slave controller
+ * @ctlr: initialized master, originally from spi_alloc_master() or
+ *  spi_alloc_slave()
  * Context: can sleep
  *
- * SPI master controllers connect to their drivers using some non-SPI bus,
+ * SPI controllers connect to their drivers using some non-SPI bus,
  * such as the platform bus.  The final stage of probe() in that code
- * includes calling spi_register_master() to hook up to this SPI bus glue.
+ * includes calling spi_register_controller() to hook up to this SPI bus glue.
  *
  * SPI controllers use board specific (often SOC specific) bus numbers,
  * and board-specific addressing for SPI devices combines those numbers
@@ -1929,16 +2044,16 @@ static int of_spi_register_master(struct spi_master *master)
  * chip is at which address.
  *
  * This must be called from context that can sleep.  It returns zero on
- * success, else a negative error code (dropping the master's refcount).
+ * success, else a negative error code (dropping the controller's refcount).
  * After a successful return, the caller is responsible for calling
- * spi_unregister_master().
+ * spi_unregister_controller().
  *
  * Return: zero on success, else a negative error code.
  */
-int spi_register_master(struct spi_master *master)
+int spi_register_controller(struct spi_controller *ctlr)
 {
     static atomic_t     dyn_bus_id = ATOMIC_INIT((1<<15) - 1);
-    struct device       *dev = master->dev.parent;
+    struct device       *dev = ctlr->dev.parent;
     struct boardinfo    *bi;
     int         status = -ENODEV;
     int         dynamic = 0;
@@ -1946,103 +2061,109 @@ int spi_register_master(struct spi_master *master)
     if (!dev)
         return -ENODEV;
 
-    status = of_spi_register_master(master);
-    if (status)
-        return status;
+    if (!spi_controller_is_slave(ctlr)) {
+        status = of_spi_register_master(ctlr);
+        if (status)
+            return status;
+    }
 
     /* even if it's just one always-selected device, there must
      * be at least one chipselect
      */
-    if (master->num_chipselect == 0)
+    if (ctlr->num_chipselect == 0)
         return -EINVAL;
 
-    if ((master->bus_num < 0) && master->dev.of_node)
-        master->bus_num = of_alias_get_id(master->dev.of_node, "spi");
+    if ((ctlr->bus_num < 0) && ctlr->dev.of_node)
+        ctlr->bus_num = of_alias_get_id(ctlr->dev.of_node, "spi");
 
     /* convention: dynamically assigned bus IDs count down from the max */
-    if (master->bus_num < 0) {
+    if (ctlr->bus_num < 0) {
         /* FIXME switch to an IDR based scheme, something like
          * I2C now uses, so we can't run out of "dynamic" IDs
          */
-        master->bus_num = atomic_dec_return(&dyn_bus_id);
+        ctlr->bus_num = atomic_dec_return(&dyn_bus_id);
         dynamic = 1;
     }
 
-    INIT_LIST_HEAD(&master->queue);
-    spin_lock_init(&master->queue_lock);
-    spin_lock_init(&master->bus_lock_spinlock);
-    mutex_init(&master->bus_lock_mutex);
-    mutex_init(&master->io_mutex);
-    master->bus_lock_flag = 0;
-    init_completion(&master->xfer_completion);
-    if (!master->max_dma_len)
-        master->max_dma_len = INT_MAX;
+    INIT_LIST_HEAD(&ctlr->queue);
+    spin_lock_init(&ctlr->queue_lock);
+    spin_lock_init(&ctlr->bus_lock_spinlock);
+    mutex_init(&ctlr->bus_lock_mutex);
+    mutex_init(&ctlr->io_mutex);
+    ctlr->bus_lock_flag = 0;
+    init_completion(&ctlr->xfer_completion);
+    if (!ctlr->max_dma_len)
+        ctlr->max_dma_len = INT_MAX;
 
     /* register the device, then userspace will see it.
      * registration fails if the bus ID is in use.
      */
-    dev_set_name(&master->dev, "spi%u", master->bus_num);
-    status = device_add(&master->dev);
+    dev_set_name(&ctlr->dev, "spi%u", ctlr->bus_num);
+    status = device_add(&ctlr->dev);
     if (status < 0)
         goto done;
-    dev_dbg(dev, "registered master %s%s\n", dev_name(&master->dev),
-            dynamic ? " (dynamic)" : "");
+    dev_dbg(dev, "registered %s %s%s\n",
+            spi_controller_is_slave(ctlr) ? "slave" : "master",
+            dev_name(&ctlr->dev), dynamic ? " (dynamic)" : "");
 
     /* If we're using a queued driver, start the queue */
-    if (master->transfer)
-        dev_info(dev, "master is unqueued, this is deprecated\n");
+    if (ctlr->transfer)
+        dev_info(dev, "controller is unqueued, this is deprecated\n");
     else {
-        status = spi_master_initialize_queue(master);
+        status = spi_controller_initialize_queue(ctlr);
         if (status) {
-            device_del(&master->dev);
+            device_del(&ctlr->dev);
             goto done;
         }
     }
     /* add statistics */
-    spin_lock_init(&master->statistics.lock);
+    spin_lock_init(&ctlr->statistics.lock);
 
     mutex_lock(&board_lock);
-    list_add_tail(&master->list, &spi_master_list);
+    list_add_tail(&ctlr->list, &spi_controller_list);
     list_for_each_entry(bi, &board_list, list)
-        spi_match_master_to_boardinfo(master, &bi->board_info);
+        spi_match_controller_to_boardinfo(ctlr, &bi->board_info);
     mutex_unlock(&board_lock);
 
     /* Register devices from the device tree and ACPI */
-    of_register_spi_devices(master);
-    acpi_register_spi_devices(master);
+    of_register_spi_devices(ctlr);
+    acpi_register_spi_devices(ctlr);
 done:
     return status;
 }
-EXPORT_SYMBOL_GPL(spi_register_master);
+EXPORT_SYMBOL_GPL(spi_register_controller);
 static void devm_spi_unregister(struct device *dev, void *res)
 {
-    spi_unregister_master(*(struct spi_master **)res);
+    spi_unregister_controller(*(struct spi_controller **)res);
 }
 /**
- * devm_spi_register_master - register managed SPI master controller
- * @dev: device managing SPI master
- * @master: initialized master, originally from spi_alloc_master()
+ * devm_spi_register_controller - register managed SPI master or slave
+ *  controller
+ * @dev: device managing SPI controller
+ * @ctlr: initialized controller, originally from spi_alloc_master() or
+ *  spi_alloc_slave()
  * Context: can sleep
  *
- * Register a SPI device as with spi_register_master() which will
+ * Register a SPI device as with spi_register_controller() which will
  * automatically be unregistered
  *
  * Return: zero on success, else a negative error code.
  */
-int devm_spi_register_master(struct device *dev, struct spi_master *master)
+int devm_spi_register_controller(struct device *dev,
+                 struct spi_controller *ctlr)
 {
-    struct spi_master **ptr;
+    struct spi_controller **ptr;
     int ret;
 
     ptr = devres_alloc(devm_spi_unregister, sizeof(*ptr), GFP_KERNEL);
     if (!ptr)
         return -ENOMEM;
 
-    ret = spi_register_master(master);
+    ret = spi_register_controller(ctlr);
     if (!ret) {
-        *ptr = master;
+        *ptr = ctlr;
         devres_add(dev, ptr);
     } else {
         devres_free(ptr);
@@ -2050,7 +2171,7 @@ int devm_spi_register_master(struct device *dev, struct spi_master *master)
     return ret;
 }
-EXPORT_SYMBOL_GPL(devm_spi_register_master);
+EXPORT_SYMBOL_GPL(devm_spi_register_controller);
 
 static int __unregister(struct device *dev, void *null)
 {
@@ -2059,71 +2180,71 @@ static int __unregister(struct device *dev, void *null)
 }
 
 /**
- * spi_unregister_master - unregister SPI master controller
- * @master: the master being unregistered
+ * spi_unregister_controller - unregister SPI master or slave controller
+ * @ctlr: the controller being unregistered
  * Context: can sleep
  *
- * This call is used only by SPI master controller drivers, which are the
+ * This call is used only by SPI controller drivers, which are the
  * only ones directly touching chip registers.
  *
  * This must be called from context that can sleep.
  */
-void spi_unregister_master(struct spi_master *master)
+void spi_unregister_controller(struct spi_controller *ctlr)
 {
     int dummy;
 
-    if (master->queued) {
-        if (spi_destroy_queue(master))
-            dev_err(&master->dev, "queue remove failed\n");
+    if (ctlr->queued) {
+        if (spi_destroy_queue(ctlr))
+            dev_err(&ctlr->dev, "queue remove failed\n");
     }
 
     mutex_lock(&board_lock);
-    list_del(&master->list);
+    list_del(&ctlr->list);
     mutex_unlock(&board_lock);
 
-    dummy = device_for_each_child(&master->dev, NULL, __unregister);
-    device_unregister(&master->dev);
+    dummy = device_for_each_child(&ctlr->dev, NULL, __unregister);
+    device_unregister(&ctlr->dev);
 }
-EXPORT_SYMBOL_GPL(spi_unregister_master);
+EXPORT_SYMBOL_GPL(spi_unregister_controller);
 
-int spi_master_suspend(struct spi_master *master)
+int spi_controller_suspend(struct spi_controller *ctlr)
 {
     int ret;
 
-    /* Basically no-ops for non-queued masters */
-    if (!master->queued)
+    /* Basically no-ops for non-queued controllers */
+    if (!ctlr->queued)
         return 0;
 
-    ret = spi_stop_queue(master);
+    ret = spi_stop_queue(ctlr);
     if (ret)
-        dev_err(&master->dev, "queue stop failed\n");
+        dev_err(&ctlr->dev, "queue stop failed\n");
 
     return ret;
 }
-EXPORT_SYMBOL_GPL(spi_master_suspend);
+EXPORT_SYMBOL_GPL(spi_controller_suspend);
 
-int spi_master_resume(struct spi_master *master)
+int spi_controller_resume(struct spi_controller *ctlr)
 {
     int ret;
 
-    if (!master->queued)
+    if (!ctlr->queued)
         return 0;
 
-    ret = spi_start_queue(master);
+    ret = spi_start_queue(ctlr);
     if (ret)
-        dev_err(&master->dev, "queue restart failed\n");
+        dev_err(&ctlr->dev, "queue restart failed\n");
 
     return ret;
 }
-EXPORT_SYMBOL_GPL(spi_master_resume);
+EXPORT_SYMBOL_GPL(spi_controller_resume);
 
-static int __spi_master_match(struct device *dev, const void *data)
+static int __spi_controller_match(struct device *dev, const void *data)
 {
-    struct spi_master *m;
+    struct spi_controller *ctlr;
     const u16 *bus_num = data;
 
-    m = container_of(dev, struct spi_master, dev);
-    return m->bus_num == *bus_num;
+    ctlr = container_of(dev, struct spi_controller, dev);
+    return ctlr->bus_num == *bus_num;
 }
 
 /**
@@ -2133,22 +2254,22 @@ static int __spi_master_match(struct device *dev, const void *data)
  *
  * This call may be used with devices that are registered after
  * arch init time.  It returns a refcounted pointer to the relevant
- * spi_master (which the caller must release), or NULL if there is
+ * spi_controller (which the caller must release), or NULL if there is
  * no such master registered.
  *
  * Return: the SPI master structure on success, else NULL.
  */
-struct spi_master *spi_busnum_to_master(u16 bus_num)
+struct spi_controller *spi_busnum_to_master(u16 bus_num)
 {
     struct device       *dev;
-    struct spi_master   *master = NULL;
+    struct spi_controller   *ctlr = NULL;
 
     dev = class_find_device(&spi_master_class, NULL, &bus_num,
-                __spi_master_match);
+                __spi_controller_match);
     if (dev)
-        master = container_of(dev, struct spi_master, dev);
+        ctlr = container_of(dev, struct spi_controller, dev);
     /* reference got in class_find_device */
-    return master;
+    return ctlr;
 }
 EXPORT_SYMBOL_GPL(spi_busnum_to_master);
@@ -2168,7 +2289,7 @@ EXPORT_SYMBOL_GPL(spi_busnum_to_master);
  * Return: the pointer to the allocated data
  *
  * This may get enhanced in the future to allocate from a memory pool
- * of the @spi_device or @spi_master to avoid repeated allocations.
+ * of the @spi_device or @spi_controller to avoid repeated allocations.
  */
 void *spi_res_alloc(struct spi_device *spi,
             spi_res_release_t release,
@@ -2220,11 +2341,10 @@ EXPORT_SYMBOL_GPL(spi_res_add);
 /**
  * spi_res_release - release all spi resources for this message
- * @master: the @spi_master
+ * @ctlr:  the @spi_controller
  * @message: the @spi_message
  */
-void spi_res_release(struct spi_master *master,
-             struct spi_message *message)
+void spi_res_release(struct spi_controller *ctlr, struct spi_message *message)
 {
     struct spi_res *res;
@@ -2233,7 +2353,7 @@ void spi_res_release(struct spi_master *master,
                        struct spi_res, entry);
 
         if (res->release)
-            res->release(master, message, res->data);
+            res->release(ctlr, message, res->data);
 
         list_del(&res->entry);
@@ -2246,7 +2366,7 @@ EXPORT_SYMBOL_GPL(spi_res_release);
 /* Core methods for spi_message alterations */
 
-static void __spi_replace_transfers_release(struct spi_master *master,
+static void __spi_replace_transfers_release(struct spi_controller *ctlr,
                         struct spi_message *msg,
                         void *res)
 {
@@ -2255,7 +2375,7 @@ static void __spi_replace_transfers_release(struct spi_master *master,
     /* call extra callback if requested */
     if (rxfer->release)
-        rxfer->release(master, msg, res);
+        rxfer->release(ctlr, msg, res);
 
     /* insert replaced transfers back into the message */
     list_splice(&rxfer->replaced_transfers, rxfer->replaced_after);
@@ -2375,7 +2495,7 @@ struct spi_replaced_transfers *spi_replace_transfers(
 }
 EXPORT_SYMBOL_GPL(spi_replace_transfers);
 
-static int __spi_split_transfer_maxsize(struct spi_master *master,
+static int __spi_split_transfer_maxsize(struct spi_controller *ctlr,
                     struct spi_message *msg,
                     struct spi_transfer **xferp,
                     size_t maxsize,
@@ -2437,7 +2557,7 @@ static int __spi_split_transfer_maxsize(struct spi_master *master,
     *xferp = &xfers[count - 1];
 
     /* increment statistics counters */
-    SPI_STATISTICS_INCREMENT_FIELD(&master->statistics,
+    SPI_STATISTICS_INCREMENT_FIELD(&ctlr->statistics,
                        transfers_split_maxsize);
     SPI_STATISTICS_INCREMENT_FIELD(&msg->spi->statistics,
                        transfers_split_maxsize);
@@ -2449,14 +2569,14 @@ static int __spi_split_transfer_maxsize(struct spi_master *master,
  * spi_split_tranfers_maxsize - split spi transfers into multiple transfers
  *                              when an individual transfer exceeds a
  *                              certain size
- * @master: the @spi_master for this transfer
+ * @ctlr:   the @spi_controller for this transfer
  * @msg:   the @spi_message to transform
  * @maxsize:  the maximum when to apply this
  * @gfp: GFP allocation flags
  *
  * Return: status of transformation
  */
-int spi_split_transfers_maxsize(struct spi_master *master,
+int spi_split_transfers_maxsize(struct spi_controller *ctlr,
                 struct spi_message *msg,
                 size_t maxsize,
                 gfp_t gfp)
@@ -2472,8 +2592,8 @@ int spi_split_transfers_maxsize(struct spi_master *master,
      */
     list_for_each_entry(xfer, &msg->transfers, transfer_list) {
         if (xfer->len > maxsize) {
-            ret = __spi_split_transfer_maxsize(master, msg, &xfer,
-                               maxsize, gfp);
+            ret = __spi_split_transfer_maxsize(ctlr, msg, &xfer,
+                               maxsize, gfp);
             if (ret)
                 return ret;
         }
@@ -2485,18 +2605,18 @@ EXPORT_SYMBOL_GPL(spi_split_transfers_maxsize);
 
 /*-------------------------------------------------------------------------*/
 
-/* Core methods for SPI master protocol drivers.  Some of the
+/* Core methods for SPI controller protocol drivers.  Some of the
  * other core methods are currently defined as inline functions.
  */
 
-static int __spi_validate_bits_per_word(struct spi_master *master,
+static int __spi_validate_bits_per_word(struct spi_controller *ctlr,
                     u8 bits_per_word)
 {
-    if (master->bits_per_word_mask) {
+    if (ctlr->bits_per_word_mask) {
         /* Only 32 bits fit in the mask */
         if (bits_per_word > 32)
             return -EINVAL;
-        if (!(master->bits_per_word_mask & SPI_BPW_MASK(bits_per_word)))
+        if (!(ctlr->bits_per_word_mask & SPI_BPW_MASK(bits_per_word)))
             return -EINVAL;
     }
@@ -2542,9 +2662,9 @@ int spi_setup(struct spi_device *spi)
          (SPI_TX_DUAL | SPI_TX_QUAD | SPI_RX_DUAL | SPI_RX_QUAD)))
         return -EINVAL;
     /* help drivers fail *cleanly* when they need options
-     * that aren't supported with their current master
+     * that aren't supported with their current controller
      */
-    bad_bits = spi->mode & ~spi->master->mode_bits;
+    bad_bits = spi->mode & ~spi->controller->mode_bits;
     ugly_bits = bad_bits &
             (SPI_TX_DUAL | SPI_TX_QUAD | SPI_RX_DUAL | SPI_RX_QUAD);
     if (ugly_bits) {
@@ -2563,15 +2683,16 @@ int spi_setup(struct spi_device *spi)
     if (!spi->bits_per_word)
         spi->bits_per_word = 8;
 
-    status = __spi_validate_bits_per_word(spi->master, spi->bits_per_word);
+    status = __spi_validate_bits_per_word(spi->controller,
+                          spi->bits_per_word);
     if (status)
         return status;
 
     if (!spi->max_speed_hz)
-        spi->max_speed_hz = spi->master->max_speed_hz;
+        spi->max_speed_hz = spi->controller->max_speed_hz;
 
-    if (spi->master->setup)
-        status = spi->master->setup(spi);
+    if (spi->controller->setup)
+        status = spi->controller->setup(spi);
 
     spi_set_cs(spi, false);
@@ -2590,7 +2711,7 @@ EXPORT_SYMBOL_GPL(spi_setup);
 static int __spi_validate(struct spi_device *spi, struct spi_message *message)
 {
-    struct spi_master *master = spi->master;
+    struct spi_controller *ctlr = spi->controller;
     struct spi_transfer *xfer;
     int w_size;
@@ -2602,16 +2723,16 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
      * either MOSI or MISO is missing.  They can also be caused by
      * software limitations.
      */
-    if ((master->flags & SPI_MASTER_HALF_DUPLEX) ||
-        (spi->mode & SPI_3WIRE)) {
-        unsigned flags = master->flags;
+    if ((ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) ||
+        (spi->mode & SPI_3WIRE)) {
+        unsigned flags = ctlr->flags;
 
         list_for_each_entry(xfer, &message->transfers, transfer_list) {
             if (xfer->rx_buf && xfer->tx_buf)
                 return -EINVAL;
-            if ((flags & SPI_MASTER_NO_TX) && xfer->tx_buf)
+            if ((flags & SPI_CONTROLLER_NO_TX) && xfer->tx_buf)
                 return -EINVAL;
-            if ((flags & SPI_MASTER_NO_RX) && xfer->rx_buf)
+            if ((flags & SPI_CONTROLLER_NO_RX) && xfer->rx_buf)
                 return -EINVAL;
         }
     }
@@ -2631,13 +2752,12 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
         if (!xfer->speed_hz)
             xfer->speed_hz = spi->max_speed_hz;
         if (!xfer->speed_hz)
-            xfer->speed_hz = master->max_speed_hz;
+            xfer->speed_hz = ctlr->max_speed_hz;
 
-        if (master->max_speed_hz &&
-            xfer->speed_hz > master->max_speed_hz)
-            xfer->speed_hz = master->max_speed_hz;
+        if (ctlr->max_speed_hz && xfer->speed_hz > ctlr->max_speed_hz)
+            xfer->speed_hz = ctlr->max_speed_hz;
 
-        if (__spi_validate_bits_per_word(master, xfer->bits_per_word))
+        if (__spi_validate_bits_per_word(ctlr, xfer->bits_per_word))
             return -EINVAL;
 
         /*
@@ -2655,8 +2775,8 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
         if (xfer->len % w_size)
             return -EINVAL;
 
-        if (xfer->speed_hz && master->min_speed_hz &&
-            xfer->speed_hz < master->min_speed_hz)
+        if (xfer->speed_hz && ctlr->min_speed_hz &&
+            xfer->speed_hz < ctlr->min_speed_hz)
             return -EINVAL;
 
         if (xfer->tx_buf && !xfer->tx_nbits)
@@ -2701,16 +2821,16 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
 
 static int __spi_async(struct spi_device *spi, struct spi_message *message)
 {
-    struct spi_master *master = spi->master;
+    struct spi_controller *ctlr = spi->controller;
 
     message->spi = spi;
 
-    SPI_STATISTICS_INCREMENT_FIELD(&master->statistics, spi_async);
+    SPI_STATISTICS_INCREMENT_FIELD(&ctlr->statistics, spi_async);
     SPI_STATISTICS_INCREMENT_FIELD(&spi->statistics, spi_async);
 
     trace_spi_message_submit(message);
 
-    return master->transfer(spi, message);
+    return ctlr->transfer(spi, message);
 }
 
 /**
@@ -2746,7 +2866,7 @@ static int __spi_async(struct spi_device *spi, struct spi_message *message)
  */
 int spi_async(struct spi_device *spi, struct spi_message *message)
 {
-    struct spi_master *master = spi->master;
+    struct spi_controller *ctlr = spi->controller;
     int ret;
     unsigned long flags;
@@ -2754,14 +2874,14 @@ int spi_async(struct spi_device *spi, struct spi_message *message)
     if (ret != 0)
         return ret;
 
-    spin_lock_irqsave(&master->bus_lock_spinlock, flags);
+    spin_lock_irqsave(&ctlr->bus_lock_spinlock, flags);
 
-    if (master->bus_lock_flag)
+    if (ctlr->bus_lock_flag)
         ret = -EBUSY;
     else
         ret = __spi_async(spi, message);
 
-    spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);
+    spin_unlock_irqrestore(&ctlr->bus_lock_spinlock, flags);
 
     return ret;
 }
@@ -2800,7 +2920,7 @@ EXPORT_SYMBOL_GPL(spi_async);
  */
 int spi_async_locked(struct spi_device *spi, struct spi_message *message)
 {
-    struct spi_master *master = spi->master;
+    struct spi_controller *ctlr = spi->controller;
     int ret;
     unsigned long flags;
@@ -2808,11 +2928,11 @@ int spi_async_locked(struct spi_device *spi, struct spi_message *message)
     if (ret != 0)
         return ret;
 
-    spin_lock_irqsave(&master->bus_lock_spinlock, flags);
+    spin_lock_irqsave(&ctlr->bus_lock_spinlock, flags);
 
     ret = __spi_async(spi, message);
 
-    spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);
+    spin_unlock_irqrestore(&ctlr->bus_lock_spinlock, flags);
 
     return ret;
@@ -2824,7 +2944,7 @@ int spi_flash_read(struct spi_device *spi,
            struct spi_flash_read_message *msg)
 
 {
-    struct spi_master *master = spi->master;
+    struct spi_controller *master = spi->controller;
     struct device *rx_dev = NULL;
     int ret;
@@ -2878,7 +2998,7 @@ EXPORT_SYMBOL_GPL(spi_flash_read);
 
 /*-------------------------------------------------------------------------*/
 
-/* Utility methods for SPI master protocol drivers, layered on
+/* Utility methods for SPI protocol drivers, layered on
  * top of the core.  Some other utility methods are defined as
  * inline functions.
  */
@@ -2892,7 +3012,7 @@ static int __spi_sync(struct spi_device *spi, struct spi_message *message)
 {
     DECLARE_COMPLETION_ONSTACK(done);
     int status;
-    struct spi_master *master = spi->master;
+    struct spi_controller *ctlr = spi->controller;
     unsigned long flags;
 
     status = __spi_validate(spi, message);
@@ -2903,7 +3023,7 @@ static int __spi_sync(struct spi_device *spi, struct spi_message *message)
     message->context = &done;
     message->spi = spi;
 
-    SPI_STATISTICS_INCREMENT_FIELD(&master->statistics, spi_sync);
+    SPI_STATISTICS_INCREMENT_FIELD(&ctlr->statistics, spi_sync);
     SPI_STATISTICS_INCREMENT_FIELD(&spi->statistics, spi_sync);
 
     /* If we're not using the legacy transfer method then we will
@@ -2911,14 +3031,14 @@ static int __spi_sync(struct spi_device *spi, struct spi_message *message)
      * This code would be less tricky if we could remove the
      * support for driver implemented message queues.
      */
-    if (master->transfer == spi_queued_transfer) {
-        spin_lock_irqsave(&master->bus_lock_spinlock, flags);
+    if (ctlr->transfer == spi_queued_transfer) {
+        spin_lock_irqsave(&ctlr->bus_lock_spinlock, flags);
 
         trace_spi_message_submit(message);
 
         status = __spi_queued_transfer(spi, message, false);
 
-        spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);
+        spin_unlock_irqrestore(&ctlr->bus_lock_spinlock, flags);
     } else {
         status = spi_async_locked(spi, message);
     }
@@ -2927,12 +3047,12 @@ static int __spi_sync(struct spi_device *spi, struct spi_message *message)
		/* Push out the messages in the calling context if we
		 * can.
		 */
-		if (master->transfer == spi_queued_transfer) {
-			SPI_STATISTICS_INCREMENT_FIELD(&master->statistics,
+		if (ctlr->transfer == spi_queued_transfer) {
+			SPI_STATISTICS_INCREMENT_FIELD(&ctlr->statistics,
						       spi_sync_immediate);
			SPI_STATISTICS_INCREMENT_FIELD(&spi->statistics,
						       spi_sync_immediate);
-			__spi_pump_messages(master, false);
+			__spi_pump_messages(ctlr, false);
		}

		wait_for_completion(&done);
...
...
@@ -2967,9 +3087,9 @@ int spi_sync(struct spi_device *spi, struct spi_message *message)
{
	int ret;

-	mutex_lock(&spi->master->bus_lock_mutex);
+	mutex_lock(&spi->controller->bus_lock_mutex);
	ret = __spi_sync(spi, message);
-	mutex_unlock(&spi->master->bus_lock_mutex);
+	mutex_unlock(&spi->controller->bus_lock_mutex);

	return ret;
}
...
...
@@ -2999,7 +3119,7 @@ EXPORT_SYMBOL_GPL(spi_sync_locked);

/**
 * spi_bus_lock - obtain a lock for exclusive SPI bus usage
- * @master: SPI bus master that should be locked for exclusive bus access
+ * @ctlr: SPI bus master that should be locked for exclusive bus access
 * Context: can sleep
 *
 * This call may only be used from a context that may sleep. The sleep
...
...
@@ -3012,15 +3132,15 @@ EXPORT_SYMBOL_GPL(spi_sync_locked);
 *
 * Return: always zero.
 */
-int spi_bus_lock(struct spi_master *master)
+int spi_bus_lock(struct spi_controller *ctlr)
{
	unsigned long flags;

-	mutex_lock(&master->bus_lock_mutex);
+	mutex_lock(&ctlr->bus_lock_mutex);

-	spin_lock_irqsave(&master->bus_lock_spinlock, flags);
-	master->bus_lock_flag = 1;
-	spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);
+	spin_lock_irqsave(&ctlr->bus_lock_spinlock, flags);
+	ctlr->bus_lock_flag = 1;
+	spin_unlock_irqrestore(&ctlr->bus_lock_spinlock, flags);

	/* mutex remains locked until spi_bus_unlock is called */
...
...
@@ -3030,7 +3150,7 @@ EXPORT_SYMBOL_GPL(spi_bus_lock);

/**
 * spi_bus_unlock - release the lock for exclusive SPI bus usage
- * @master: SPI bus master that was locked for exclusive bus access
+ * @ctlr: SPI bus master that was locked for exclusive bus access
 * Context: can sleep
 *
 * This call may only be used from a context that may sleep. The sleep
...
...
@@ -3041,11 +3161,11 @@ EXPORT_SYMBOL_GPL(spi_bus_lock);
 *
 * Return: always zero.
 */
-int spi_bus_unlock(struct spi_master *master)
+int spi_bus_unlock(struct spi_controller *ctlr)
{
-	master->bus_lock_flag = 0;
+	ctlr->bus_lock_flag = 0;

-	mutex_unlock(&master->bus_lock_mutex);
+	mutex_unlock(&ctlr->bus_lock_mutex);

	return 0;
}
...
...
@@ -3147,45 +3267,48 @@ static struct spi_device *of_find_spi_device_by_node(struct device_node *node)
	return dev ? to_spi_device(dev) : NULL;
}

-static int __spi_of_master_match(struct device *dev, const void *data)
+static int __spi_of_controller_match(struct device *dev, const void *data)
{
	return dev->of_node == data;
}

-/* the spi masters are not using spi_bus, so we find it with another way */
-static struct spi_master *of_find_spi_master_by_node(struct device_node *node)
+/* the spi controllers are not using spi_bus, so we find it with another way */
+static struct spi_controller *of_find_spi_controller_by_node(struct device_node *node)
{
	struct device *dev;

	dev = class_find_device(&spi_master_class, NULL, node,
-				__spi_of_master_match);
+				__spi_of_controller_match);
+	if (!dev && IS_ENABLED(CONFIG_SPI_SLAVE))
+		dev = class_find_device(&spi_slave_class, NULL, node,
+					__spi_of_controller_match);
	if (!dev)
		return NULL;

	/* reference got in class_find_device */
-	return container_of(dev, struct spi_master, dev);
+	return container_of(dev, struct spi_controller, dev);
}

static int of_spi_notify(struct notifier_block *nb, unsigned long action,
			 void *arg)
{
	struct of_reconfig_data *rd = arg;
-	struct spi_master *master;
+	struct spi_controller *ctlr;
	struct spi_device *spi;

	switch (of_reconfig_get_state_change(action, arg)) {
	case OF_RECONFIG_CHANGE_ADD:
-		master = of_find_spi_master_by_node(rd->dn->parent);
-		if (master == NULL)
+		ctlr = of_find_spi_controller_by_node(rd->dn->parent);
+		if (ctlr == NULL)
			return NOTIFY_OK;	/* not for us */

		if (of_node_test_and_set_flag(rd->dn, OF_POPULATED)) {
-			put_device(&master->dev);
+			put_device(&ctlr->dev);
			return NOTIFY_OK;
		}

-		spi = of_register_spi_device(master, rd->dn);
-		put_device(&master->dev);
+		spi = of_register_spi_device(ctlr, rd->dn);
+		put_device(&ctlr->dev);

		if (IS_ERR(spi)) {
			pr_err("%s: failed to create for '%s'\n",
...
...
@@ -3224,7 +3347,7 @@ extern struct notifier_block spi_of_notifier;
#endif /* IS_ENABLED(CONFIG_OF_DYNAMIC) */

#if IS_ENABLED(CONFIG_ACPI)
-static int spi_acpi_master_match(struct device *dev, const void *data)
+static int spi_acpi_controller_match(struct device *dev, const void *data)
{
	return ACPI_COMPANION(dev->parent) == data;
}
...
...
@@ -3234,16 +3357,19 @@ static int spi_acpi_device_match(struct device *dev, void *data)
	return ACPI_COMPANION(dev) == data;
}

-static struct spi_master *acpi_spi_find_master_by_adev(struct acpi_device *adev)
+static struct spi_controller *acpi_spi_find_controller_by_adev(struct acpi_device *adev)
{
	struct device *dev;

	dev = class_find_device(&spi_master_class, NULL, adev,
-				spi_acpi_master_match);
+				spi_acpi_controller_match);
+	if (!dev && IS_ENABLED(CONFIG_SPI_SLAVE))
+		dev = class_find_device(&spi_slave_class, NULL, adev,
+					spi_acpi_controller_match);
	if (!dev)
		return NULL;

-	return container_of(dev, struct spi_master, dev);
+	return container_of(dev, struct spi_controller, dev);
}

static struct spi_device *acpi_spi_find_device_by_adev(struct acpi_device *adev)
...
...
@@ -3259,17 +3385,17 @@ static int acpi_spi_notify(struct notifier_block *nb, unsigned long value,
			   void *arg)
{
	struct acpi_device *adev = arg;
-	struct spi_master *master;
+	struct spi_controller *ctlr;
	struct spi_device *spi;

	switch (value) {
	case ACPI_RECONFIG_DEVICE_ADD:
-		master = acpi_spi_find_master_by_adev(adev->parent);
-		if (!master)
+		ctlr = acpi_spi_find_controller_by_adev(adev->parent);
+		if (!ctlr)
			break;

-		acpi_register_spi_device(master, adev);
-		put_device(&master->dev);
+		acpi_register_spi_device(ctlr, adev);
+		put_device(&ctlr->dev);
		break;
	case ACPI_RECONFIG_DEVICE_REMOVE:
		if (!acpi_device_enumerated(adev))
...
...
@@ -3312,6 +3438,12 @@ static int __init spi_init(void)
	if (status < 0)
		goto err2;

+	if (IS_ENABLED(CONFIG_SPI_SLAVE)) {
+		status = class_register(&spi_slave_class);
+		if (status < 0)
+			goto err3;
+	}
+
	if (IS_ENABLED(CONFIG_OF_DYNAMIC))
		WARN_ON(of_reconfig_notifier_register(&spi_of_notifier));
	if (IS_ENABLED(CONFIG_ACPI))
...
...
@@ -3319,6 +3451,8 @@ static int __init spi_init(void)

	return 0;

+err3:
+	class_unregister(&spi_master_class);
err2:
	bus_unregister(&spi_bus_type);
err1:
...
...
include/linux/spi/spi.h
View file @ 9d540b0d
...
...
@@ -24,13 +24,13 @@
struct dma_chan;
struct property_entry;
-struct spi_master;
+struct spi_controller;
struct spi_transfer;
struct spi_flash_read_message;

/*
- * INTERFACES between SPI master-side drivers and SPI infrastructure.
- * (There's no SPI slave support for Linux yet...)
+ * INTERFACES between SPI master-side drivers and SPI slave protocol handlers,
+ * and SPI infrastructure.
 */
extern struct bus_type spi_bus_type;
...
...
@@ -84,7 +84,7 @@ struct spi_statistics {
void spi_statistics_add_transfer_stats(struct spi_statistics *stats,
				       struct spi_transfer *xfer,
-				       struct spi_master *master);
+				       struct spi_controller *ctlr);

#define SPI_STATISTICS_ADD_TO_FIELD(stats, field, count)	\
	do {							\
...
...
@@ -98,13 +98,14 @@ void spi_statistics_add_transfer_stats(struct spi_statistics *stats,
	SPI_STATISTICS_ADD_TO_FIELD(stats, field, 1)

/**
- * struct spi_device - Master side proxy for an SPI slave device
+ * struct spi_device - Controller side proxy for an SPI slave device
 * @dev: Driver model representation of the device.
- * @master: SPI controller used with the device.
+ * @controller: SPI controller used with the device.
+ * @master: Copy of controller, for backwards compatibility.
 * @max_speed_hz: Maximum clock rate to be used with this chip
 *	(on this board); may be changed by the device's driver.
 *	The spi_transfer.speed_hz can override this for each transfer.
- * @chip_select: Chipselect, distinguishing chips handled by @master.
+ * @chip_select: Chipselect, distinguishing chips handled by @controller.
 * @mode: The spi mode defines how data is clocked out and in.
 *	This may be changed by the device's driver.
 *	The "active low" default for chipselect mode can be overridden
...
...
@@ -140,7 +141,8 @@ void spi_statistics_add_transfer_stats(struct spi_statistics *stats,
 */
struct spi_device {
	struct device		dev;
-	struct spi_master	*master;
+	struct spi_controller	*controller;
+	struct spi_controller	*master;	/* compatibility layer */
	u32			max_speed_hz;
	u8			chip_select;
	u8			bits_per_word;
...
...
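The hunk above is the heart of the compatibility story: `spi_device` keeps two pointers, and the core sets both to the same `spi_controller`, so legacy drivers dereferencing `spi->master` keep working unchanged. A minimal userspace sketch of that aliasing idea, using hypothetical stand-in structs rather than the real kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins (hypothetical, not the kernel structs): spi_device holds two
 * pointers that both reference the same controller object. */
struct spi_controller {
	int bus_num;
};

struct spi_device {
	struct spi_controller *controller;
	struct spi_controller *master;	/* compatibility layer */
};

/* What a core helper would do when binding a device to its controller:
 * both fields alias the same object, so old-style spi->master accesses
 * still resolve correctly. (spi_device_bind is an illustrative name.) */
static void spi_device_bind(struct spi_device *spi, struct spi_controller *ctlr)
{
	spi->controller = ctlr;
	spi->master = ctlr;
}
```

The cost is one extra pointer per device; the benefit is that hundreds of existing client drivers need no immediate change.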
@@ -198,7 +200,7 @@ static inline void spi_dev_put(struct spi_device *spi)
	put_device(&spi->dev);
}

-/* ctldata is for the bus_master driver's runtime state */
+/* ctldata is for the bus_controller driver's runtime state */
static inline void *spi_get_ctldata(struct spi_device *spi)
{
	return spi->controller_state;
...
...
@@ -292,9 +294,9 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
			  spi_unregister_driver)

/**
- * struct spi_master - interface to SPI master controller
+ * struct spi_controller - interface to SPI master or slave controller
 * @dev: device interface to this driver
- * @list: link with the global spi_master list
+ * @list: link with the global spi_controller list
 * @bus_num: board-specific (and often SOC-specific) identifier for a
 *	given SPI controller.
 * @num_chipselect: chipselects are used to distinguish individual
...
...
@@ -311,6 +313,7 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
 * @min_speed_hz: Lowest supported transfer speed
 * @max_speed_hz: Highest supported transfer speed
 * @flags: other constraints relevant to this driver
+ * @slave: indicates that this is an SPI slave controller
 * @max_transfer_size: function that returns the max transfer size for
 *	a &spi_device; may be %NULL, so the default %SIZE_MAX will be used.
 * @max_message_size: function that returns the max message size for
...
...
@@ -326,8 +329,8 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
 *	the device whose settings are being modified.
 * @transfer: adds a message to the controller's transfer queue.
 * @cleanup: frees controller-specific state
- * @can_dma: determine whether this master supports DMA
- * @queued: whether this master is providing an internal message queue
+ * @can_dma: determine whether this controller supports DMA
+ * @queued: whether this controller is providing an internal message queue
 * @kworker: thread struct for message pump
 * @kworker_task: pointer to task for message pump kworker thread
 * @pump_messages: work struct for scheduling work to the message pump
...
...
@@ -374,6 +377,7 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
 * @handle_err: the subsystem calls the driver to handle an error that occurs
 *	in the generic implementation of transfer_one_message().
 * @unprepare_message: undo any work done by prepare_message().
+ * @slave_abort: abort the ongoing transfer request on an SPI slave controller
 * @spi_flash_read: to support spi-controller hardwares that provide
 *	accelerated interface to read from flash devices.
 * @spi_flash_can_dma: analogous to can_dma() interface, but for
...
...
@@ -382,7 +386,7 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
 * @cs_gpios: Array of GPIOs to use as chip select lines; one per CS
 *	number. Any individual value may be -ENOENT for CS lines that
 *	are not GPIOs (driven by the SPI controller itself).
- * @statistics: statistics for the spi_master
+ * @statistics: statistics for the spi_controller
 * @dma_tx: DMA transmit channel
 * @dma_rx: DMA receive channel
 * @dummy_rx: dummy receive buffer for full-duplex devices
...
...
@@ -391,7 +395,7 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
 *	what Linux expects, this optional hook can be used to translate
 *	between the two.
 *
- * Each SPI master controller can communicate with one or more @spi_device
+ * Each SPI controller can communicate with one or more @spi_device
 * children. These make a small bus, sharing MOSI, MISO and SCK signals
 * but not chip select signals. Each device may be configured to use a
 * different clock rate, since those shared signals are ignored unless
...
...
@@ -402,7 +406,7 @@ static inline void spi_unregister_driver(struct spi_driver *sdrv)
 * an SPI slave device. For each such message it queues, it calls the
 * message's completion function when the transaction completes.
 */
-struct spi_master {
+struct spi_controller {
	struct device	dev;

	struct list_head list;
...
...
@@ -440,12 +444,16 @@ struct spi_master {
	/* other constraints relevant to this driver */
	u16			flags;
-#define SPI_MASTER_HALF_DUPLEX	BIT(0)		/* can't do full duplex */
-#define SPI_MASTER_NO_RX	BIT(1)		/* can't do buffer read */
-#define SPI_MASTER_NO_TX	BIT(2)		/* can't do buffer write */
-#define SPI_MASTER_MUST_RX	BIT(3)		/* requires rx */
-#define SPI_MASTER_MUST_TX	BIT(4)		/* requires tx */
-#define SPI_MASTER_GPIO_SS	BIT(5)		/* GPIO CS must select slave */
+#define SPI_CONTROLLER_HALF_DUPLEX	BIT(0)	/* can't do full duplex */
+#define SPI_CONTROLLER_NO_RX		BIT(1)	/* can't do buffer read */
+#define SPI_CONTROLLER_NO_TX		BIT(2)	/* can't do buffer write */
+#define SPI_CONTROLLER_MUST_RX		BIT(3)	/* requires rx */
+#define SPI_CONTROLLER_MUST_TX		BIT(4)	/* requires tx */
+#define SPI_MASTER_GPIO_SS		BIT(5)	/* GPIO CS must select slave */

+	/* flag indicating this is an SPI slave controller */
+	bool			slave;
+
	/*
	 * on some hardware transfer / message size may be constrained
...
...
@@ -480,8 +488,8 @@ struct spi_master {
	 *   any other request management
	 * + To a given spi_device, message queueing is pure fifo
	 *
-	 * + The master's main job is to process its message queue,
-	 *   selecting a chip then transferring data
+	 * + The controller's main job is to process its message queue,
+	 *   selecting a chip (for masters), then transferring data
	 * + If there are multiple spi_device children, the i/o queue
	 *   arbitration algorithm is unspecified (round robin, fifo,
	 *   priority, reservations, preemption, etc)
...
...
@@ -494,7 +502,7 @@ struct spi_master {
	int			(*transfer)(struct spi_device *spi,
						struct spi_message *mesg);

-	/* called on release() to free memory provided by spi_master */
+	/* called on release() to free memory provided by spi_controller */
	void			(*cleanup)(struct spi_device *spi);

	/*
...
...
@@ -504,13 +512,13 @@ struct spi_master {
	 * not modify or store xfer and dma_tx and dma_rx must be set
	 * while the device is prepared.
	 */
-	bool			(*can_dma)(struct spi_master *master,
+	bool			(*can_dma)(struct spi_controller *ctlr,
					   struct spi_device *spi,
					   struct spi_transfer *xfer);

	/*
	 * These hooks are for drivers that want to use the generic
-	 * master transfer queueing mechanism. If these are used, the
+	 * controller transfer queueing mechanism. If these are used, the
	 * transfer() function above must NOT be specified by the driver.
	 * Over time we expect SPI drivers to be phased over to this API.
	 */
...
...
@@ -531,14 +539,15 @@ struct spi_master {
	struct completion               xfer_completion;
	size_t				max_dma_len;

-	int (*prepare_transfer_hardware)(struct spi_master *master);
-	int (*transfer_one_message)(struct spi_master *master,
+	int (*prepare_transfer_hardware)(struct spi_controller *ctlr);
+	int (*transfer_one_message)(struct spi_controller *ctlr,
				    struct spi_message *mesg);
-	int (*unprepare_transfer_hardware)(struct spi_master *master);
-	int (*prepare_message)(struct spi_master *master,
+	int (*unprepare_transfer_hardware)(struct spi_controller *ctlr);
+	int (*prepare_message)(struct spi_controller *ctlr,
			       struct spi_message *message);
-	int (*unprepare_message)(struct spi_master *master,
+	int (*unprepare_message)(struct spi_controller *ctlr,
				 struct spi_message *message);
+	int (*slave_abort)(struct spi_controller *ctlr);
	int (*spi_flash_read)(struct  spi_device *spi,
			      struct spi_flash_read_message *msg);
	bool (*spi_flash_can_dma)(struct spi_device *spi,
...
...
@@ -550,9 +559,9 @@ struct spi_master {
	 * of transfer_one_message() provied by the core.
	 */
	void (*set_cs)(struct spi_device *spi, bool enable);
-	int (*transfer_one)(struct spi_master *master, struct spi_device *spi,
+	int (*transfer_one)(struct spi_controller *ctlr, struct spi_device *spi,
			    struct spi_transfer *transfer);
-	void (*handle_err)(struct spi_master *master,
+	void (*handle_err)(struct spi_controller *ctlr,
			   struct spi_message *message);

	/* gpio chip select */
...
...
@@ -569,57 +578,78 @@ struct spi_master {
	void			*dummy_rx;
	void			*dummy_tx;

-	int (*fw_translate_cs)(struct spi_master *master, unsigned cs);
+	int (*fw_translate_cs)(struct spi_controller *ctlr, unsigned cs);
};

-static inline void *spi_master_get_devdata(struct spi_master *master)
+static inline void *spi_controller_get_devdata(struct spi_controller *ctlr)
{
-	return dev_get_drvdata(&master->dev);
+	return dev_get_drvdata(&ctlr->dev);
}

-static inline void spi_master_set_devdata(struct spi_master *master, void *data)
+static inline void spi_controller_set_devdata(struct spi_controller *ctlr,
+					      void *data)
{
-	dev_set_drvdata(&master->dev, data);
+	dev_set_drvdata(&ctlr->dev, data);
}

-static inline struct spi_master *spi_master_get(struct spi_master *master)
+static inline struct spi_controller *spi_controller_get(struct spi_controller *ctlr)
{
-	if (!master || !get_device(&master->dev))
+	if (!ctlr || !get_device(&ctlr->dev))
		return NULL;
-	return master;
+	return ctlr;
}

+static inline void spi_controller_put(struct spi_controller *ctlr)
+{
+	if (ctlr)
+		put_device(&ctlr->dev);
+}
+
-static inline void spi_master_put(struct spi_master *master)
+static inline bool spi_controller_is_slave(struct spi_controller *ctlr)
{
-	if (master)
-		put_device(&master->dev);
+	return IS_ENABLED(CONFIG_SPI_SLAVE) && ctlr->slave;
}

/* PM calls that need to be issued by the driver */
-extern int spi_master_suspend(struct spi_master *master);
-extern int spi_master_resume(struct spi_master *master);
+extern int spi_controller_suspend(struct spi_controller *ctlr);
+extern int spi_controller_resume(struct spi_controller *ctlr);

/* Calls the driver make to interact with the message queue */
-extern struct spi_message *spi_get_next_queued_message(struct spi_master *master);
-extern void spi_finalize_current_message(struct spi_master *master);
-extern void spi_finalize_current_transfer(struct spi_master *master);
+extern struct spi_message *spi_get_next_queued_message(struct spi_controller *ctlr);
+extern void spi_finalize_current_message(struct spi_controller *ctlr);
+extern void spi_finalize_current_transfer(struct spi_controller *ctlr);

-/* the spi driver core manages memory for the spi_master classdev */
-extern struct spi_master *
-spi_alloc_master(struct device *host, unsigned size);
+/* the spi driver core manages memory for the spi_controller classdev */
+extern struct spi_controller *__spi_alloc_controller(struct device *host,
+						     unsigned int size, bool slave);

-extern int spi_register_master(struct spi_master *master);
-extern int devm_spi_register_master(struct device *dev,
-				    struct spi_master *master);
-extern void spi_unregister_master(struct spi_master *master);
+static inline struct spi_controller *spi_alloc_master(struct device *host,
+						      unsigned int size)
+{
+	return __spi_alloc_controller(host, size, false);
+}

-extern struct spi_master *spi_busnum_to_master(u16 busnum);
+static inline struct spi_controller *spi_alloc_slave(struct device *host,
+						     unsigned int size)
+{
+	if (!IS_ENABLED(CONFIG_SPI_SLAVE))
+		return NULL;
+
+	return __spi_alloc_controller(host, size, true);
+}
+
+extern int spi_register_controller(struct spi_controller *ctlr);
+extern int devm_spi_register_controller(struct device *dev,
+					struct spi_controller *ctlr);
+extern void spi_unregister_controller(struct spi_controller *ctlr);
+
+extern struct spi_controller *spi_busnum_to_master(u16 busnum);
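The allocation hunk above follows a common refactoring pattern: one internal constructor takes a `slave` flag, and the public `spi_alloc_master()`/`spi_alloc_slave()` entry points become thin inline wrappers over it. A self-contained userspace sketch of that pattern, with stand-in types and illustrative names (`alloc_controller` etc., not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Stand-in for the controller object; only the role flag matters here. */
struct spi_controller {
	bool slave;
};

/* Single internal constructor, parameterized by role. */
static struct spi_controller *alloc_controller(bool slave)
{
	struct spi_controller *ctlr = calloc(1, sizeof(*ctlr));

	if (ctlr)
		ctlr->slave = slave;
	return ctlr;
}

/* Thin wrappers preserve the two familiar entry points. */
static struct spi_controller *alloc_master(void)
{
	return alloc_controller(false);
}

static struct spi_controller *alloc_slave(void)
{
	return alloc_controller(true);
}
```

Centralizing construction this way means later fields (statistics, queues, locks) get initialized in exactly one place regardless of role.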
/*
 * SPI resource management while processing a SPI message
 */

-typedef void (*spi_res_release_t)(struct spi_master *master,
+typedef void (*spi_res_release_t)(struct spi_controller *ctlr,
				  struct spi_message *msg,
				  void *res);
...
...
@@ -644,7 +674,7 @@ extern void *spi_res_alloc(struct spi_device *spi,
extern void spi_res_add(struct spi_message *message, void *res);
extern void spi_res_free(void *res);

-extern void spi_res_release(struct spi_master *master,
+extern void spi_res_release(struct spi_controller *ctlr,
			    struct spi_message *message);

/*---------------------------------------------------------------------------*/
...
...
@@ -828,7 +858,7 @@ struct spi_message {
	/* for optional use by whatever driver currently owns the
	 * spi_message ... between calls to spi_async and then later
-	 * complete(), that's the spi_master controller driver.
+	 * complete(), that's the spi_controller controller driver.
	 */
	struct list_head	queue;
	void			*state;
...
...
@@ -912,25 +942,27 @@ extern int spi_setup(struct spi_device *spi);
extern int spi_async(struct spi_device *spi, struct spi_message *message);
extern int spi_async_locked(struct spi_device *spi,
			    struct spi_message *message);
+extern int spi_slave_abort(struct spi_device *spi);

static inline size_t
spi_max_message_size(struct spi_device *spi)
{
-	struct spi_master *master = spi->master;
-	if (!master->max_message_size)
+	struct spi_controller *ctlr = spi->controller;
+
+	if (!ctlr->max_message_size)
		return SIZE_MAX;
-	return master->max_message_size(spi);
+	return ctlr->max_message_size(spi);
}

static inline size_t
spi_max_transfer_size(struct spi_device *spi)
{
-	struct spi_master *master = spi->master;
+	struct spi_controller *ctlr = spi->controller;
	size_t tr_max = SIZE_MAX;
	size_t msg_max = spi_max_message_size(spi);

-	if (master->max_transfer_size)
-		tr_max = master->max_transfer_size(spi);
+	if (ctlr->max_transfer_size)
+		tr_max = ctlr->max_transfer_size(spi);

	/* transfer size limit must not be greater than messsage size limit */
	return min(tr_max, msg_max);
...
...
@@ -941,7 +973,7 @@ spi_max_transfer_size(struct spi_device *spi)

/* SPI transfer replacement methods which make use of spi_res */
struct spi_replaced_transfers;
-typedef void (*spi_replaced_release_t)(struct spi_master *master,
+typedef void (*spi_replaced_release_t)(struct spi_controller *ctlr,
				       struct spi_message *msg,
				       struct spi_replaced_transfers *res);

/**
...
...
@@ -985,7 +1017,7 @@ extern struct spi_replaced_transfers *spi_replace_transfers(

/* SPI transfer transformation methods */
-extern int spi_split_transfers_maxsize(struct spi_master *master,
+extern int spi_split_transfers_maxsize(struct spi_controller *ctlr,
				       struct spi_message *msg,
				       size_t maxsize,
				       gfp_t gfp);
...
...
@@ -999,8 +1031,8 @@ extern int spi_split_transfers_maxsize(struct spi_master *master,
extern int spi_sync(struct spi_device *spi, struct spi_message *message);
extern int spi_sync_locked(struct spi_device *spi, struct spi_message *message);
-extern int spi_bus_lock(struct spi_master *master);
-extern int spi_bus_unlock(struct spi_master *master);
+extern int spi_bus_lock(struct spi_controller *ctlr);
+extern int spi_bus_unlock(struct spi_controller *ctlr);

/**
 * spi_sync_transfer - synchronous SPI data transfer
...
...
@@ -1185,9 +1217,9 @@ struct spi_flash_read_message {
/* SPI core interface for flash read support */
static inline bool spi_flash_read_supported(struct spi_device *spi)
{
-	return spi->master->spi_flash_read &&
-	       (!spi->master->flash_read_supported ||
-		spi->master->flash_read_supported(spi));
+	return spi->controller->spi_flash_read &&
+	       (!spi->controller->flash_read_supported ||
+		spi->controller->flash_read_supported(spi));
}

int spi_flash_read(struct spi_device *spi,
...
...
@@ -1220,7 +1252,7 @@ int spi_flash_read(struct spi_device *spi,
 * @irq: Initializes spi_device.irq; depends on how the board is wired.
 * @max_speed_hz: Initializes spi_device.max_speed_hz; based on limits
 *	from the chip datasheet and board-specific signal quality issues.
- * @bus_num: Identifies which spi_master parents the spi_device; unused
+ * @bus_num: Identifies which spi_controller parents the spi_device; unused
 *	by spi_new_device(), and otherwise depends on board wiring.
 * @chip_select: Initializes spi_device.chip_select; depends on how
 *	the board is wired.
...
...
@@ -1261,7 +1293,7 @@ struct spi_board_info {
	/* bus_num is board specific and matches the bus_num of some
-	 * spi_master that will probably be registered later.
+	 * spi_controller that will probably be registered later.
	 *
	 * chip_select reflects how this chip is wired to that master;
	 * it's less than num_chipselect.
...
...
@@ -1295,7 +1327,7 @@ spi_register_board_info(struct spi_board_info const *info, unsigned n)
/* If you're hotplugging an adapter with devices (parport, usb, etc)
 * use spi_new_device() to describe each device. You can also call
 * spi_unregister_device() to start making that device vanish, but
- * normally that would be handled by spi_unregister_master().
+ * normally that would be handled by spi_unregister_controller().
 *
 * You can also use spi_alloc_device() and spi_add_device() to use a two
 * stage registration sequence for each spi_device. This gives the caller
...
...
@@ -1304,13 +1336,13 @@ spi_register_board_info(struct spi_board_info const *info, unsigned n)
 * be defined using the board info.
 */
extern struct spi_device *
-spi_alloc_device(struct spi_master *master);
+spi_alloc_device(struct spi_controller *ctlr);

extern int
spi_add_device(struct spi_device *spi);

extern struct spi_device *
-spi_new_device(struct spi_master *, struct spi_board_info *);
+spi_new_device(struct spi_controller *, struct spi_board_info *);

extern void spi_unregister_device(struct spi_device *spi);
...
...
@@ -1318,9 +1350,32 @@ extern const struct spi_device_id *
spi_get_device_id(const struct spi_device *sdev);

static inline bool
-spi_transfer_is_last(struct spi_master *master, struct spi_transfer *xfer)
+spi_transfer_is_last(struct spi_controller *ctlr, struct spi_transfer *xfer)
{
-	return list_is_last(&xfer->transfer_list, &master->cur_msg->transfers);
+	return list_is_last(&xfer->transfer_list, &ctlr->cur_msg->transfers);
}

+/* Compatibility layer */
+#define spi_master			spi_controller
+
+#define SPI_MASTER_HALF_DUPLEX		SPI_CONTROLLER_HALF_DUPLEX
+#define SPI_MASTER_NO_RX		SPI_CONTROLLER_NO_RX
+#define SPI_MASTER_NO_TX		SPI_CONTROLLER_NO_TX
+#define SPI_MASTER_MUST_RX		SPI_CONTROLLER_MUST_RX
+#define SPI_MASTER_MUST_TX		SPI_CONTROLLER_MUST_TX
+
+#define spi_master_get_devdata(_ctlr)	spi_controller_get_devdata(_ctlr)
+#define spi_master_set_devdata(_ctlr, _data)	\
+	spi_controller_set_devdata(_ctlr, _data)
+#define spi_master_get(_ctlr)		spi_controller_get(_ctlr)
+#define spi_master_put(_ctlr)		spi_controller_put(_ctlr)
+
+#define spi_master_suspend(_ctlr)	spi_controller_suspend(_ctlr)
+#define spi_master_resume(_ctlr)	spi_controller_resume(_ctlr)
+
+#define spi_register_master(_ctlr)	spi_register_controller(_ctlr)
+#define devm_spi_register_master(_dev, _ctlr) \
+	devm_spi_register_controller(_dev, _ctlr)
+#define spi_unregister_master(_ctlr)	spi_unregister_controller(_ctlr)
+
#endif /* __LINUX_SPI_H */
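The compatibility block that closes spi.h renames the whole API with plain macros: the old type name and the old helper names expand to the new ones, so legacy callers compile without modification. A minimal userspace sketch of the same technique, using hypothetical stand-in definitions rather than the real kernel ones:

```c
#include <assert.h>

/* New-style type and accessor (stand-ins for illustration only). */
struct spi_controller {
	void *devdata;
};

static void *spi_controller_get_devdata(struct spi_controller *ctlr)
{
	return ctlr->devdata;
}

/* Compatibility layer, mirroring the header's approach: the old names
 * are macros over the new ones. */
#define spi_master			spi_controller
#define spi_master_get_devdata(_ctlr)	spi_controller_get_devdata(_ctlr)

/* A "legacy" helper written entirely against the old names; it compiles
 * and runs unchanged because the preprocessor rewrites it. */
static void *legacy_read(struct spi_master *master)
{
	return spi_master_get_devdata(master);
}
```

Because the mapping is textual, it costs nothing at runtime and can be deleted wholesale once in-tree users are converted.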
include/trace/events/spi.h
View file @ 9d540b0d
...
...
@@ -7,37 +7,37 @@
#include <linux/ktime.h>
#include <linux/tracepoint.h>

-DECLARE_EVENT_CLASS(spi_master,
+DECLARE_EVENT_CLASS(spi_controller,

-	TP_PROTO(struct spi_master *master),
+	TP_PROTO(struct spi_controller *controller),

-	TP_ARGS(master),
+	TP_ARGS(controller),

	TP_STRUCT__entry(
		__field(int, bus_num)
	),

	TP_fast_assign(
-		__entry->bus_num = master->bus_num;
+		__entry->bus_num = controller->bus_num;
	),

	TP_printk("spi%d", (int)__entry->bus_num)

);

-DEFINE_EVENT(spi_master, spi_master_idle,
+DEFINE_EVENT(spi_controller, spi_controller_idle,

-	TP_PROTO(struct spi_master *master),
+	TP_PROTO(struct spi_controller *controller),

-	TP_ARGS(master)
+	TP_ARGS(controller)

);

-DEFINE_EVENT(spi_master, spi_master_busy,
+DEFINE_EVENT(spi_controller, spi_controller_busy,

-	TP_PROTO(struct spi_master *master),
+	TP_PROTO(struct spi_controller *controller),

-	TP_ARGS(master)
+	TP_ARGS(controller)

);
...
...
@@ -54,7 +54,7 @@ DECLARE_EVENT_CLASS(spi_message,
	),

	TP_fast_assign(
-		__entry->bus_num = msg->spi->master->bus_num;
+		__entry->bus_num = msg->spi->controller->bus_num;
		__entry->chip_select = msg->spi->chip_select;
		__entry->msg = msg;
	),
...
...
@@ -95,7 +95,7 @@ TRACE_EVENT(spi_message_done,
	),

	TP_fast_assign(
-		__entry->bus_num = msg->spi->master->bus_num;
+		__entry->bus_num = msg->spi->controller->bus_num;
		__entry->chip_select = msg->spi->chip_select;
		__entry->msg = msg;
		__entry->frame = msg->frame_length;
...
...
@@ -122,7 +122,7 @@ DECLARE_EVENT_CLASS(spi_transfer,
	),

	TP_fast_assign(
-		__entry->bus_num = msg->spi->master->bus_num;
+		__entry->bus_num = msg->spi->controller->bus_num;
		__entry->chip_select = msg->spi->chip_select;
		__entry->xfer = xfer;
		__entry->len = xfer->len;
...
...