Commit 71bad7f0 authored by popcornmix, committed by Greg Kroah-Hartman

staging: add bcm2708 vchiq driver

Signed-off-by: popcornmix <popcornmix@gmail.com>

vchiq: create_pagelist copes with vmalloc memory
Signed-off-by: Daniel Stone <daniels@collabora.com>

vchiq: fix the shim message release
Signed-off-by: Daniel Stone <daniels@collabora.com>

vchiq: export additional symbols
Signed-off-by: Daniel Stone <daniels@collabora.com>

VCHIQ: Make service closure fully synchronous (drv)

This is one half of a two-part patch; the other half applies to the
vchiq_lib user library. With these patches, calls to
vchiq_close_service and vchiq_remove_service won't return until any
associated callbacks have been delivered to the callback thread.
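As an illustration of the new guarantee (a sketch only; example_release() and
its state structure are hypothetical, and the VCHIQ_* types and return values
are assumed from the vchiq_if.h kernel API):

/* Hypothetical kernel-side client teardown. With this patch,
 * vchiq_close_service() does not return until any SERVICE_CLOSED
 * callback has been delivered, so per-service state can be freed
 * immediately afterwards without racing the callback thread.
 */
static void example_release(VCHIQ_SERVICE_HANDLE_T handle,
			    struct example_state *state)
{
	if (vchiq_close_service(handle) != VCHIQ_SUCCESS)
		pr_err("example: vchiq_close_service failed\n");

	kfree(state);	/* safe: no further callbacks for this service */
}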

VCHIQ: Add per-service tracing

The new service option VCHIQ_SERVICE_OPTION_TRACE is a boolean that
toggles tracing for the specified service.

This commit also introduces vchi_service_set_option and the associated
option VCHI_SERVICE_OPTION_TRACE.
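A minimal usage sketch, based on the vchi_service_set_option() prototype added
to vchi.h below (the helper name and the lack of error handling are
illustrative):

/* Turn on per-service tracing for one already-open service. The value is
 * treated as a boolean: non-zero enables tracing, zero disables it.
 */
static int32_t example_enable_trace(VCHI_SERVICE_HANDLE_T handle)
{
	return vchi_service_set_option(handle, VCHI_SERVICE_OPTION_TRACE, 1);
}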

vchiq: Make the synchronous-CLOSE logic more tolerant

vchiq: Move logging control into debugfs

vchiq: Take care of a corner case tickled by VCSM

Closing a connection that isn't fully open requires care, since one
side does not know the other side's port number. Code was present to
handle the case where a CLOSE is sent immediately after an OPEN, i.e.
before the OPENACK has been received, but this was incorrectly being
used when an OPEN from a client using port 0 was rejected.

(In the observed failure, the host was attempting to use the VCSM
service, which isn't present in the 'cutdown' firmware. The failure
was intermittent because sometimes the keepalive service would
grab port 0.)

This case can be distinguished because the client's remoteport will
still be VCHIQ_PORT_FREE, and the srvstate will be OPENING. Either
condition is sufficient to differentiate it from the special case
described above.
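Expressed as a sketch (the field and constant names follow the description
above and the vchiq_core naming conventions; the surrounding close path is
omitted):

/* A rejected OPEN from a client that happened to get port 0 looks like an
 * early CLOSE unless the service state is also checked: the client's
 * remoteport is still VCHIQ_PORT_FREE and its srvstate is still OPENING.
 */
if ((service->srvstate == VCHIQ_SRVSTATE_OPENING) ||
    (service->remoteport == VCHIQ_PORT_FREE)) {
	/* Not a CLOSE racing ahead of OPENACK - skip the special case. */
}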

vchiq: Avoid high load when blocked and unkillable

vchiq: Include SIGSTOP and SIGCONT in the list of signals not masked by vchiq, to allow gdb to work

vchiq_arm: Complete support for SYNCHRONOUS mode

vchiq: Remove inline from suspend/resume

vchiq: Allocation does not need to be atomic

vchiq: Fix wrong condition check

The log level is already checked inside the logging call, so remove the redundant check at the call site.
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
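For illustration only (the exact call site and log category in the patch may
differ; vchiq_log_trace() already compares against the level before printing):

/* before: the level check is redundant */
if (vchiq_susp_log_level >= VCHIQ_LOG_TRACE)
	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);

/* after */
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);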

BCM270x: Add vchiq device to platform file and Device Tree

Prepare to turn the vchiq module into a driver.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>

bcm2708: vchiq: Add Device Tree support

Turn vchiq into a driver and stop hardcoding resources.
Use devm_* functions in probe path to simplify cleanup.
A global variable is used to hold the register address. This is done
to keep this patch as small as possible.
Also make available on ARCH_BCM2835.
Based on work by Lubomir Rintel.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
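A sketch of the probe pattern described above (a fragment, not the driver's
actual code; the global name and error handling are placeholders):

static void __iomem *g_regs;	/* kept global to keep the patch small */

static int vchiq_probe(struct platform_device *pdev)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	g_regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(g_regs))
		return PTR_ERR(g_regs);

	/* devm_* resources are released automatically on driver detach. */
	return 0;
}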

vchiq: Change logging level for inbound data

vchiq_arm: Two caching fixes

1) Make fragment size vary with cache line size
Without this patch, non-cache-line-aligned transfers may corrupt
(or be corrupted by) adjacent data structures.

Both ARM and VC need to be updated to enable this feature. This is
ensured by having the loader apply a new DT parameter -
cache-line-size. The existence of this parameter guarantees that the
kernel is capable, and the parameter will only be modified from the
safe default if the loader is capable.

2) Flush/invalidate vmalloc'd memory, and invalidate after reads
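For item 1, a sketch of how a driver can pick up the loader-provided property
(a fragment under stated assumptions: the helper name is hypothetical, the
fallback of 32 merely illustrates a safe default, and of_property_read_u32()
leaves the value untouched if the property is absent):

static u32 read_cache_line_size(struct device_node *np)
{
	u32 size = 32;	/* safe default when the loader/DT is too old */

	of_property_read_u32(np, "cache-line-size", &size);
	return size;
}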

vchiq: fix NULL pointer dereference when closing driver

The following code, run as root, will cause a NULL pointer dereference oops:

        int fd = open("/dev/vc-cma", O_RDONLY);
        if (fd < 0)
                err(1, "open failed");
        (void)close(fd);

[ 1704.877721] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[ 1704.877725] pgd = b899c000
[ 1704.877736] [00000000] *pgd=37fab831, *pte=00000000, *ppte=00000000
[ 1704.877748] Internal error: Oops: 817 [#1] PREEMPT SMP ARM
[ 1704.877765] Modules linked in: evdev i2c_bcm2708 uio_pdrv_genirq uio
[ 1704.877774] CPU: 2 PID: 3656 Comm: stress-ng-fstat Not tainted 3.19.1-12-generic-bcm2709 #12-Ubuntu
[ 1704.877777] Hardware name: BCM2709
[ 1704.877783] task: b8ab9b00 ti: b7e68000 task.ti: b7e68000
[ 1704.877798] PC is at __down_interruptible+0x50/0xec
[ 1704.877806] LR is at down_interruptible+0x5c/0x68
[ 1704.877813] pc : [<80630ee8>]    lr : [<800704b0>]    psr: 60080093
sp : b7e69e50  ip : b7e69e88  fp : b7e69e84
[ 1704.877817] r10: b88123c8  r9 : 00000010  r8 : 00000001
[ 1704.877822] r7 : b8ab9b00  r6 : 7fffffff  r5 : 80a1cc34  r4 : 80a1cc34
[ 1704.877826] r3 : b7e69e50  r2 : 00000000  r1 : 00000000  r0 : 80a1cc34
[ 1704.877833] Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment user
[ 1704.877838] Control: 10c5387d  Table: 3899c06a  DAC: 00000015
[ 1704.877843] Process do-oops (pid: 3656, stack limit = 0xb7e68238)
[ 1704.877848] Stack: (0xb7e69e50 to 0xb7e6a000)
[ 1704.877856] 9e40:                                     80a1cc3c 00000000 00000010 b88123c8
[ 1704.877865] 9e60: b7e69e84 80a1cc34 fff9fee9 ffffffff b7e68000 00000009 b7e69ea4 b7e69e88
[ 1704.877874] 9e80: 800704b0 80630ea4 fff9fee9 60080013 80a1cc28 fff9fee9 b7e69edc b7e69ea8
[ 1704.877884] 9ea0: 8040f558 80070460 fff9fee9 ffffffff 00000000 00000000 00000009 80a1cb7c
[ 1704.877893] 9ec0: 00000000 80a1cb7c 00000000 00000010 b7e69ef4 b7e69ee0 803e1ba4 8040f514
[ 1704.877902] 9ee0: 00000e48 80a1cb7c b7e69f14 b7e69ef8 803e1c9c 803e1b74 b88123c0 b92acb18
[ 1704.877911] 9f00: b8812790 b8d815d8 b7e69f24 b7e69f18 803e2250 803e1bc8 b7e69f5c b7e69f28
[ 1704.877921] 9f20: 80167bac 803e222c 00000000 00000000 b7e69f54 b8ab9ffc 00000000 8098c794
[ 1704.877930] 9f40: b8ab9b00 8000efc4 b7e68000 00000000 b7e69f6c b7e69f60 80167d6c 80167b28
[ 1704.877939] 9f60: b7e69f8c b7e69f70 80047d38 80167d60 b7e68000 b7e68010 8000efc4 b7e69fb0
[ 1704.877949] 9f80: b7e69fac b7e69f90 80012820 80047c84 01155490 011549a8 00000001 00000006
[ 1704.877957] 9fa0: 00000000 b7e69fb0 8000ee5c 80012790 00000000 353d8c0f 7efc4308 00000000
[ 1704.877966] 9fc0: 01155490 011549a8 00000001 00000006 00000000 00000000 76cf3ba0 00000003
[ 1704.877975] 9fe0: 00000000 7efc42e4 0002272f 76e2ed66 60080030 00000003 00000000 00000000
[ 1704.877998] [<80630ee8>] (__down_interruptible) from [<800704b0>] (down_interruptible+0x5c/0x68)
[ 1704.878015] [<800704b0>] (down_interruptible) from [<8040f558>] (vchiu_queue_push+0x50/0xd8)
[ 1704.878032] [<8040f558>] (vchiu_queue_push) from [<803e1ba4>] (send_worker_msg+0x3c/0x54)
[ 1704.878045] [<803e1ba4>] (send_worker_msg) from [<803e1c9c>] (vc_cma_set_reserve+0xe0/0x1c4)
[ 1704.878057] [<803e1c9c>] (vc_cma_set_reserve) from [<803e2250>] (vc_cma_release+0x30/0x38)
[ 1704.878069] [<803e2250>] (vc_cma_release) from [<80167bac>] (__fput+0x90/0x1e0)
[ 1704.878082] [<80167bac>] (__fput) from [<80167d6c>] (____fput+0x18/0x1c)
[ 1704.878094] [<80167d6c>] (____fput) from [<80047d38>] (task_work_run+0xc0/0xf8)
[ 1704.878109] [<80047d38>] (task_work_run) from [<80012820>] (do_work_pending+0x9c/0xc4)
[ 1704.878123] [<80012820>] (do_work_pending) from [<8000ee5c>] (work_pending+0xc/0x20)
[ 1704.878133] Code: e50b1034 e3a01000 e50b2030 e580300c (e5823000)

The fix is to ensure that the queue has actually been initialized before we attempt
to push any items onto it. The uninitialized case occurs when an open() is followed
by a close() with no activity in between.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
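Sketched with placeholder names (the actual fix lives in the vc-cma driver;
only vchiu_queue_push() comes from the backtrace above, the flag and queue
names here are hypothetical):

/* Guard the worker-queue push so that an open()/close() pair with no
 * intervening activity cannot touch a queue that was never set up.
 */
if (!vc_cma_initialised)
	return;
vchiu_queue_push(&worker_queue, msg);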

vchiq_arm: Sort out the vmalloc case

See: https://github.com/raspberrypi/linux/issues/1055

vchiq: hack: Add include of the deprecated dma header file

[gregkh] added dependency on CONFIG_BROKEN to make things sane for now.

Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 88638cf1
@@ -104,4 +104,6 @@ source "drivers/staging/ks7010/Kconfig"
source "drivers/staging/greybus/Kconfig"
source "drivers/staging/vc04_services/Kconfig"
endif # STAGING
@@ -41,3 +41,4 @@ obj-$(CONFIG_MOST) += most/
obj-$(CONFIG_ISDN_I4L) += i4l/
obj-$(CONFIG_KS7010) += ks7010/
obj-$(CONFIG_GREYBUS) += greybus/
obj-$(CONFIG_BCM2708_VCHIQ) += vc04_services/
config BCM2708_VCHIQ
	tristate "Videocore VCHIQ"
	depends on RASPBERRYPI_FIRMWARE && BROKEN
	default y
	help
	  Kernel to VideoCore communication interface for the
	  BCM2708 family of products.
	  Defaults to Y when the Broadcom Videocore services
	  are included in the build, N otherwise.
obj-$(CONFIG_BCM2708_VCHIQ) += vchiq.o
vchiq-objs := \
interface/vchiq_arm/vchiq_core.o \
interface/vchiq_arm/vchiq_arm.o \
interface/vchiq_arm/vchiq_kern_lib.o \
interface/vchiq_arm/vchiq_2835_arm.o \
interface/vchiq_arm/vchiq_debugfs.o \
interface/vchiq_arm/vchiq_shim.o \
interface/vchiq_arm/vchiq_util.o \
interface/vchiq_arm/vchiq_connected.o \
ccflags-y += -DVCOS_VERIFY_BKPTS=1 -Idrivers/staging/vc04_services -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef CONNECTION_H_
#define CONNECTION_H_
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/semaphore.h>
#include "interface/vchi/vchi_cfg_internal.h"
#include "interface/vchi/vchi_common.h"
#include "interface/vchi/message_drivers/message.h"
/******************************************************************************
Global defs
*****************************************************************************/
// Opaque handle for a connection / service pair
typedef struct opaque_vchi_connection_connected_service_handle_t *VCHI_CONNECTION_SERVICE_HANDLE_T;
// opaque handle to the connection state information
typedef struct opaque_vchi_connection_info_t VCHI_CONNECTION_STATE_T;
typedef struct vchi_connection_t VCHI_CONNECTION_T;
/******************************************************************************
API
*****************************************************************************/
// Routine to init a connection with a particular low level driver
typedef VCHI_CONNECTION_STATE_T * (*VCHI_CONNECTION_INIT_T)( struct vchi_connection_t * connection,
const VCHI_MESSAGE_DRIVER_T * driver );
// Routine to control CRC enabling at a connection level
typedef int32_t (*VCHI_CONNECTION_CRC_CONTROL_T)( VCHI_CONNECTION_STATE_T *state_handle,
VCHI_CRC_CONTROL_T control );
// Routine to create a service
typedef int32_t (*VCHI_CONNECTION_SERVICE_CONNECT_T)( VCHI_CONNECTION_STATE_T *state_handle,
int32_t service_id,
uint32_t rx_fifo_size,
uint32_t tx_fifo_size,
int server,
VCHI_CALLBACK_T callback,
void *callback_param,
int32_t want_crc,
int32_t want_unaligned_bulk_rx,
int32_t want_unaligned_bulk_tx,
VCHI_CONNECTION_SERVICE_HANDLE_T *service_handle );
// Routine to close a service
typedef int32_t (*VCHI_CONNECTION_SERVICE_DISCONNECT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle );
// Routine to queue a message
typedef int32_t (*VCHI_CONNECTION_SERVICE_QUEUE_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
const void *data,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *msg_handle );
// scatter-gather (vector) message queueing
typedef int32_t (*VCHI_CONNECTION_SERVICE_QUEUE_MESSAGEV_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
VCHI_MSG_VECTOR_T *vector,
uint32_t count,
VCHI_FLAGS_T flags,
void *msg_handle );
// Routine to dequeue a message
typedef int32_t (*VCHI_CONNECTION_SERVICE_DEQUEUE_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
void *data,
uint32_t max_data_size_to_read,
uint32_t *actual_msg_size,
VCHI_FLAGS_T flags );
// Routine to peek at a message
typedef int32_t (*VCHI_CONNECTION_SERVICE_PEEK_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
void **data,
uint32_t *msg_size,
VCHI_FLAGS_T flags );
// Routine to hold a message
typedef int32_t (*VCHI_CONNECTION_SERVICE_HOLD_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
void **data,
uint32_t *msg_size,
VCHI_FLAGS_T flags,
void **message_handle );
// Routine to initialise a received message iterator
typedef int32_t (*VCHI_CONNECTION_SERVICE_LOOKAHEAD_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
VCHI_MSG_ITER_T *iter,
VCHI_FLAGS_T flags );
// Routine to release a held message
typedef int32_t (*VCHI_CONNECTION_HELD_MSG_RELEASE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
void *message_handle );
// Routine to get info on a held message
typedef int32_t (*VCHI_CONNECTION_HELD_MSG_INFO_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
void *message_handle,
void **data,
int32_t *msg_size,
uint32_t *tx_timestamp,
uint32_t *rx_timestamp );
// Routine to check whether the iterator has a next message
typedef int32_t (*VCHI_CONNECTION_MSG_ITER_HAS_NEXT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
const VCHI_MSG_ITER_T *iter );
// Routine to advance the iterator
typedef int32_t (*VCHI_CONNECTION_MSG_ITER_NEXT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
VCHI_MSG_ITER_T *iter,
void **data,
uint32_t *msg_size );
// Routine to remove the last message returned by the iterator
typedef int32_t (*VCHI_CONNECTION_MSG_ITER_REMOVE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
VCHI_MSG_ITER_T *iter );
// Routine to hold the last message returned by the iterator
typedef int32_t (*VCHI_CONNECTION_MSG_ITER_HOLD_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
VCHI_MSG_ITER_T *iter,
void **msg_handle );
// Routine to transmit bulk data
typedef int32_t (*VCHI_CONNECTION_BULK_QUEUE_TRANSMIT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
const void *data_src,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *bulk_handle );
// Routine to receive data
typedef int32_t (*VCHI_CONNECTION_BULK_QUEUE_RECEIVE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
void *data_dst,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *bulk_handle );
// Routine to report if a server is available
typedef int32_t (*VCHI_CONNECTION_SERVER_PRESENT)( VCHI_CONNECTION_STATE_T *state, int32_t service_id, int32_t peer_flags );
// Routine to report the number of RX slots available
typedef int (*VCHI_CONNECTION_RX_SLOTS_AVAILABLE)( const VCHI_CONNECTION_STATE_T *state );
// Routine to report the RX slot size
typedef uint32_t (*VCHI_CONNECTION_RX_SLOT_SIZE)( const VCHI_CONNECTION_STATE_T *state );
// Callback to indicate that the other side has added a buffer to the rx bulk DMA FIFO
typedef void (*VCHI_CONNECTION_RX_BULK_BUFFER_ADDED)(VCHI_CONNECTION_STATE_T *state,
int32_t service,
uint32_t length,
MESSAGE_TX_CHANNEL_T channel,
uint32_t channel_params,
uint32_t data_length,
uint32_t data_offset);
// Callback to inform a service that a Xon or Xoff message has been received
typedef void (*VCHI_CONNECTION_FLOW_CONTROL)(VCHI_CONNECTION_STATE_T *state, int32_t service_id, int32_t xoff);
// Callback to inform a service that a server available reply message has been received
typedef void (*VCHI_CONNECTION_SERVER_AVAILABLE_REPLY)(VCHI_CONNECTION_STATE_T *state, int32_t service_id, uint32_t flags);
// Callback to indicate that bulk auxiliary messages have arrived
typedef void (*VCHI_CONNECTION_BULK_AUX_RECEIVED)(VCHI_CONNECTION_STATE_T *state);
// Callback to indicate that a bulk auxiliary message has been transmitted
typedef void (*VCHI_CONNECTION_BULK_AUX_TRANSMITTED)(VCHI_CONNECTION_STATE_T *state, void *handle);
// Callback with all the connection info you require
typedef void (*VCHI_CONNECTION_INFO)(VCHI_CONNECTION_STATE_T *state, uint32_t protocol_version, uint32_t slot_size, uint32_t num_slots, uint32_t min_bulk_size);
// Callback to inform of a disconnect
typedef void (*VCHI_CONNECTION_DISCONNECT)(VCHI_CONNECTION_STATE_T *state, uint32_t flags);
// Callback to inform of a power control request
typedef void (*VCHI_CONNECTION_POWER_CONTROL)(VCHI_CONNECTION_STATE_T *state, MESSAGE_TX_CHANNEL_T channel, int32_t enable);
// allocate memory suitably aligned for this connection
typedef void * (*VCHI_BUFFER_ALLOCATE)(VCHI_CONNECTION_SERVICE_HANDLE_T service_handle, uint32_t * length);
// free memory allocated by buffer_allocate
typedef void (*VCHI_BUFFER_FREE)(VCHI_CONNECTION_SERVICE_HANDLE_T service_handle, void * address);
/******************************************************************************
System driver struct
*****************************************************************************/
struct opaque_vchi_connection_api_t
{
// Routine to init the connection
VCHI_CONNECTION_INIT_T init;
// Connection-level CRC control
VCHI_CONNECTION_CRC_CONTROL_T crc_control;
// Routine to connect to or create service
VCHI_CONNECTION_SERVICE_CONNECT_T service_connect;
// Routine to disconnect from a service
VCHI_CONNECTION_SERVICE_DISCONNECT_T service_disconnect;
// Routine to queue a message
VCHI_CONNECTION_SERVICE_QUEUE_MESSAGE_T service_queue_msg;
// scatter-gather (vector) message queue
VCHI_CONNECTION_SERVICE_QUEUE_MESSAGEV_T service_queue_msgv;
// Routine to dequeue a message
VCHI_CONNECTION_SERVICE_DEQUEUE_MESSAGE_T service_dequeue_msg;
// Routine to peek at a message
VCHI_CONNECTION_SERVICE_PEEK_MESSAGE_T service_peek_msg;
// Routine to hold a message
VCHI_CONNECTION_SERVICE_HOLD_MESSAGE_T service_hold_msg;
// Routine to initialise a received message iterator
VCHI_CONNECTION_SERVICE_LOOKAHEAD_MESSAGE_T service_look_ahead_msg;
// Routine to release a message
VCHI_CONNECTION_HELD_MSG_RELEASE_T held_msg_release;
// Routine to get information on a held message
VCHI_CONNECTION_HELD_MSG_INFO_T held_msg_info;
// Routine to check for next message on iterator
VCHI_CONNECTION_MSG_ITER_HAS_NEXT_T msg_iter_has_next;
// Routine to get next message on iterator
VCHI_CONNECTION_MSG_ITER_NEXT_T msg_iter_next;
// Routine to remove the last message returned by iterator
VCHI_CONNECTION_MSG_ITER_REMOVE_T msg_iter_remove;
// Routine to hold the last message returned by iterator
VCHI_CONNECTION_MSG_ITER_HOLD_T msg_iter_hold;
// Routine to transmit bulk data
VCHI_CONNECTION_BULK_QUEUE_TRANSMIT_T bulk_queue_transmit;
// Routine to receive data
VCHI_CONNECTION_BULK_QUEUE_RECEIVE_T bulk_queue_receive;
// Routine to report the available servers
VCHI_CONNECTION_SERVER_PRESENT server_present;
// Routine to report the number of RX slots available
VCHI_CONNECTION_RX_SLOTS_AVAILABLE connection_rx_slots_available;
// Routine to report the RX slot size
VCHI_CONNECTION_RX_SLOT_SIZE connection_rx_slot_size;
// Callback to indicate that the other side has added a buffer to the rx bulk DMA FIFO
VCHI_CONNECTION_RX_BULK_BUFFER_ADDED rx_bulk_buffer_added;
// Callback to inform a service that a Xon or Xoff message has been received
VCHI_CONNECTION_FLOW_CONTROL flow_control;
// Callback to inform a service that a server available reply message has been received
VCHI_CONNECTION_SERVER_AVAILABLE_REPLY server_available_reply;
// Callback to indicate that bulk auxiliary messages have arrived
VCHI_CONNECTION_BULK_AUX_RECEIVED bulk_aux_received;
// Callback to indicate that a bulk auxiliary message has been transmitted
VCHI_CONNECTION_BULK_AUX_TRANSMITTED bulk_aux_transmitted;
// Callback to provide information about the connection
VCHI_CONNECTION_INFO connection_info;
// Callback to notify that peer has requested disconnect
VCHI_CONNECTION_DISCONNECT disconnect;
// Callback to notify that peer has requested power change
VCHI_CONNECTION_POWER_CONTROL power_control;
// allocate memory suitably aligned for this connection
VCHI_BUFFER_ALLOCATE buffer_allocate;
// free memory allocated by buffer_allocate
VCHI_BUFFER_FREE buffer_free;
};
struct vchi_connection_t {
const VCHI_CONNECTION_API_T *api;
VCHI_CONNECTION_STATE_T *state;
#ifdef VCHI_COARSE_LOCKING
struct semaphore sem;
#endif
};
#endif /* CONNECTION_H_ */
/****************************** End of file **********************************/
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _VCHI_MESSAGE_H_
#define _VCHI_MESSAGE_H_
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/semaphore.h>
#include "interface/vchi/vchi_cfg_internal.h"
#include "interface/vchi/vchi_common.h"
typedef enum message_event_type {
MESSAGE_EVENT_NONE,
MESSAGE_EVENT_NOP,
MESSAGE_EVENT_MESSAGE,
MESSAGE_EVENT_SLOT_COMPLETE,
MESSAGE_EVENT_RX_BULK_PAUSED,
MESSAGE_EVENT_RX_BULK_COMPLETE,
MESSAGE_EVENT_TX_COMPLETE,
MESSAGE_EVENT_MSG_DISCARDED
} MESSAGE_EVENT_TYPE_T;
typedef enum vchi_msg_flags
{
VCHI_MSG_FLAGS_NONE = 0x0,
VCHI_MSG_FLAGS_TERMINATE_DMA = 0x1
} VCHI_MSG_FLAGS_T;
typedef enum message_tx_channel
{
MESSAGE_TX_CHANNEL_MESSAGE = 0,
MESSAGE_TX_CHANNEL_BULK = 1 // drivers may provide multiple bulk channels, from 1 upwards
} MESSAGE_TX_CHANNEL_T;
// Macros used for cycling through bulk channels
#define MESSAGE_TX_CHANNEL_BULK_PREV(c) (MESSAGE_TX_CHANNEL_BULK+((c)-MESSAGE_TX_CHANNEL_BULK+VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION-1)%VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION)
#define MESSAGE_TX_CHANNEL_BULK_NEXT(c) (MESSAGE_TX_CHANNEL_BULK+((c)-MESSAGE_TX_CHANNEL_BULK+1)%VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION)
typedef enum message_rx_channel
{
MESSAGE_RX_CHANNEL_MESSAGE = 0,
MESSAGE_RX_CHANNEL_BULK = 1 // drivers may provide multiple bulk channels, from 1 upwards
} MESSAGE_RX_CHANNEL_T;
// Message receive slot information
typedef struct rx_msg_slot_info {
struct rx_msg_slot_info *next;
//struct slot_info *prev;
#if !defined VCHI_COARSE_LOCKING
struct semaphore sem;
#endif
uint8_t *addr; // base address of slot
uint32_t len; // length of slot in bytes
uint32_t write_ptr; // hardware causes this to advance
uint32_t read_ptr; // this module does the reading
int active; // is this slot in the hardware dma fifo?
uint32_t msgs_parsed; // count how many messages are in this slot
uint32_t msgs_released; // how many messages have been released
void *state; // connection state information
uint8_t ref_count[VCHI_MAX_SERVICES_PER_CONNECTION]; // reference count for slots held by services
} RX_MSG_SLOTINFO_T;
// The message driver no longer needs to know about the fields of RX_BULK_SLOTINFO_T - sort this out.
// In particular, it mustn't use addr and len - they're the client buffer, but the message
// driver will be tasked with sending the aligned core section.
typedef struct rx_bulk_slotinfo_t {
struct rx_bulk_slotinfo_t *next;
struct semaphore *blocking;
// needed by DMA
void *addr;
uint32_t len;
// needed for the callback
void *service;
void *handle;
VCHI_FLAGS_T flags;
} RX_BULK_SLOTINFO_T;
/* ----------------------------------------------------------------------
* each connection driver will have a pool of the following struct.
*
* the pool will be managed by vchi_qman_*
* this means there will be multiple queues (single linked lists)
* a given struct message_info will be on exactly one of these queues
* at any one time
* -------------------------------------------------------------------- */
typedef struct rx_message_info {
struct message_info *next;
//struct message_info *prev;
uint8_t *addr;
uint32_t len;
RX_MSG_SLOTINFO_T *slot; // points to whichever slot contains this message
uint32_t tx_timestamp;
uint32_t rx_timestamp;
} RX_MESSAGE_INFO_T;
typedef struct {
MESSAGE_EVENT_TYPE_T type;
struct {
// for messages
void *addr; // address of message
uint16_t slot_delta; // whether this message indicated slot delta
uint32_t len; // length of message
RX_MSG_SLOTINFO_T *slot; // slot this message is in
int32_t service; // service id this message is destined for
uint32_t tx_timestamp; // timestamp from the header
uint32_t rx_timestamp; // timestamp when we parsed it
} message;
// FIXME: cleanup slot reporting...
RX_MSG_SLOTINFO_T *rx_msg;
RX_BULK_SLOTINFO_T *rx_bulk;
void *tx_handle;
MESSAGE_TX_CHANNEL_T tx_channel;
} MESSAGE_EVENT_T;
// callbacks
typedef void VCHI_MESSAGE_DRIVER_EVENT_CALLBACK_T( void *state );
typedef struct {
VCHI_MESSAGE_DRIVER_EVENT_CALLBACK_T *event_callback;
} VCHI_MESSAGE_DRIVER_OPEN_T;
// handle to this instance of message driver (as returned by ->open)
typedef struct opaque_mhandle_t *VCHI_MDRIVER_HANDLE_T;
struct opaque_vchi_message_driver_t {
VCHI_MDRIVER_HANDLE_T *(*open)( VCHI_MESSAGE_DRIVER_OPEN_T *params, void *state );
int32_t (*suspending)( VCHI_MDRIVER_HANDLE_T *handle );
int32_t (*resumed)( VCHI_MDRIVER_HANDLE_T *handle );
int32_t (*power_control)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T, int32_t enable );
int32_t (*add_msg_rx_slot)( VCHI_MDRIVER_HANDLE_T *handle, RX_MSG_SLOTINFO_T *slot ); // rx message
int32_t (*add_bulk_rx)( VCHI_MDRIVER_HANDLE_T *handle, void *data, uint32_t len, RX_BULK_SLOTINFO_T *slot ); // rx data (bulk)
int32_t (*send)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel, const void *data, uint32_t len, VCHI_MSG_FLAGS_T flags, void *send_handle ); // tx (message & bulk)
void (*next_event)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_EVENT_T *event ); // get the next event from message_driver
int32_t (*enable)( VCHI_MDRIVER_HANDLE_T *handle );
int32_t (*form_message)( VCHI_MDRIVER_HANDLE_T *handle, int32_t service_id, VCHI_MSG_VECTOR_T *vector, uint32_t count, void
*address, uint32_t length_avail, uint32_t max_total_length, int32_t pad_to_fill, int32_t allow_partial );
int32_t (*update_message)( VCHI_MDRIVER_HANDLE_T *handle, void *dest, int16_t *slot_count );
int32_t (*buffer_aligned)( VCHI_MDRIVER_HANDLE_T *handle, int tx, int uncached, const void *address, const uint32_t length );
void * (*allocate_buffer)( VCHI_MDRIVER_HANDLE_T *handle, uint32_t *length );
void (*free_buffer)( VCHI_MDRIVER_HANDLE_T *handle, void *address );
int (*rx_slot_size)( VCHI_MDRIVER_HANDLE_T *handle, int msg_size );
int (*tx_slot_size)( VCHI_MDRIVER_HANDLE_T *handle, int msg_size );
int32_t (*tx_supports_terminate)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
uint32_t (*tx_bulk_chunk_size)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
int (*tx_alignment)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
int (*rx_alignment)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_RX_CHANNEL_T channel );
void (*form_bulk_aux)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel, const void *data, uint32_t len, uint32_t chunk_size, const void **aux_data, int32_t *aux_len );
void (*debug)( VCHI_MDRIVER_HANDLE_T *handle );
};
#endif // _VCHI_MESSAGE_H_
/****************************** End of file ***********************************/
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHI_H_
#define VCHI_H_
#include "interface/vchi/vchi_cfg.h"
#include "interface/vchi/vchi_common.h"
#include "interface/vchi/connections/connection.h"
#include "vchi_mh.h"
/******************************************************************************
Global defs
*****************************************************************************/
#define VCHI_BULK_ROUND_UP(x) ((((unsigned long)(x))+VCHI_BULK_ALIGN-1) & ~(VCHI_BULK_ALIGN-1))
#define VCHI_BULK_ROUND_DOWN(x) (((unsigned long)(x)) & ~(VCHI_BULK_ALIGN-1))
#define VCHI_BULK_ALIGN_NBYTES(x) (VCHI_BULK_ALIGNED(x) ? 0 : (VCHI_BULK_ALIGN - ((unsigned long)(x) & (VCHI_BULK_ALIGN-1))))
#ifdef USE_VCHIQ_ARM
#define VCHI_BULK_ALIGNED(x) 1
#else
#define VCHI_BULK_ALIGNED(x) (((unsigned long)(x) & (VCHI_BULK_ALIGN-1)) == 0)
#endif
struct vchi_version {
uint32_t version;
uint32_t version_min;
};
#define VCHI_VERSION(v_) { v_, v_ }
#define VCHI_VERSION_EX(v_, m_) { v_, m_ }
typedef enum
{
VCHI_VEC_POINTER,
VCHI_VEC_HANDLE,
VCHI_VEC_LIST
} VCHI_MSG_VECTOR_TYPE_T;
typedef struct vchi_msg_vector_ex {
VCHI_MSG_VECTOR_TYPE_T type;
union
{
// a memory handle
struct
{
VCHI_MEM_HANDLE_T handle;
uint32_t offset;
int32_t vec_len;
} handle;
// an ordinary data pointer
struct
{
const void *vec_base;
int32_t vec_len;
} ptr;
// a nested vector list
struct
{
struct vchi_msg_vector_ex *vec;
uint32_t vec_len;
} list;
} u;
} VCHI_MSG_VECTOR_EX_T;
// Construct an entry in a msg vector for a pointer (p) of length (l)
#define VCHI_VEC_POINTER(p,l) VCHI_VEC_POINTER, { { (VCHI_MEM_HANDLE_T)(p), (l) } }
// Construct an entry in a msg vector for a message handle (h), starting at offset (o) of length (l)
#define VCHI_VEC_HANDLE(h,o,l) VCHI_VEC_HANDLE, { { (h), (o), (l) } }
// Macros to manipulate 'FOURCC' values
#define MAKE_FOURCC(x) ((int32_t)( (x[0] << 24) | (x[1] << 16) | (x[2] << 8) | x[3] ))
#define FOURCC_TO_CHAR(x) (x >> 24) & 0xFF,(x >> 16) & 0xFF,(x >> 8) & 0xFF, x & 0xFF
// Opaque service information
struct opaque_vchi_service_t;
// Descriptor for a held message. Allocated by client, initialised by vchi_msg_hold,
// vchi_msg_iter_hold or vchi_msg_iter_hold_next. Fields are for internal VCHI use only.
typedef struct
{
struct opaque_vchi_service_t *service;
void *message;
} VCHI_HELD_MSG_T;
// structure used to provide the information needed to open a server or a client
typedef struct {
struct vchi_version version;
int32_t service_id;
VCHI_CONNECTION_T *connection;
uint32_t rx_fifo_size;
uint32_t tx_fifo_size;
VCHI_CALLBACK_T callback;
void *callback_param;
/* client intends to receive bulk transfers of
odd lengths or into unaligned buffers */
int32_t want_unaligned_bulk_rx;
/* client intends to transmit bulk transfers of
odd lengths or out of unaligned buffers */
int32_t want_unaligned_bulk_tx;
/* client wants to check CRCs on (bulk) xfers.
Only needs to be set at 1 end - will do both directions. */
int32_t want_crc;
} SERVICE_CREATION_T;
// Opaque handle for a VCHI instance
typedef struct opaque_vchi_instance_handle_t *VCHI_INSTANCE_T;
// Opaque handle for a server or client
typedef struct opaque_vchi_service_handle_t *VCHI_SERVICE_HANDLE_T;
// Service registration & startup
typedef void (*VCHI_SERVICE_INIT)(VCHI_INSTANCE_T initialise_instance, VCHI_CONNECTION_T **connections, uint32_t num_connections);
typedef struct service_info_tag {
const char * const vll_filename; /* VLL to load to start this service. This is an empty string if VLL is "static" */
VCHI_SERVICE_INIT init; /* Service initialisation function */
void *vll_handle; /* VLL handle; NULL when unloaded or a "static VLL" in build */
} SERVICE_INFO_T;
/******************************************************************************
Global funcs - implementation is specific to which side you are on (local / remote)
*****************************************************************************/
#ifdef __cplusplus
extern "C" {
#endif
extern /*@observer@*/ VCHI_CONNECTION_T * vchi_create_connection( const VCHI_CONNECTION_API_T * function_table,
const VCHI_MESSAGE_DRIVER_T * low_level);
// Routine used to initialise the vchi on both local + remote connections
extern int32_t vchi_initialise( VCHI_INSTANCE_T *instance_handle );
extern int32_t vchi_exit( void );
extern int32_t vchi_connect( VCHI_CONNECTION_T **connections,
const uint32_t num_connections,
VCHI_INSTANCE_T instance_handle );
//When this is called, ensure that all services have no data pending.
//Bulk transfers can remain 'queued'
extern int32_t vchi_disconnect( VCHI_INSTANCE_T instance_handle );
// Global control over bulk CRC checking
extern int32_t vchi_crc_control( VCHI_CONNECTION_T *connection,
VCHI_CRC_CONTROL_T control );
// helper functions
extern void * vchi_allocate_buffer(VCHI_SERVICE_HANDLE_T handle, uint32_t *length);
extern void vchi_free_buffer(VCHI_SERVICE_HANDLE_T handle, void *address);
extern uint32_t vchi_current_time(VCHI_INSTANCE_T instance_handle);
/******************************************************************************
Global service API
*****************************************************************************/
// Routine to create a named service
extern int32_t vchi_service_create( VCHI_INSTANCE_T instance_handle,
SERVICE_CREATION_T *setup,
VCHI_SERVICE_HANDLE_T *handle );
// Routine to destroy a service
extern int32_t vchi_service_destroy( const VCHI_SERVICE_HANDLE_T handle );
// Routine to open a named service
extern int32_t vchi_service_open( VCHI_INSTANCE_T instance_handle,
SERVICE_CREATION_T *setup,
VCHI_SERVICE_HANDLE_T *handle);
extern int32_t vchi_get_peer_version( const VCHI_SERVICE_HANDLE_T handle,
short *peer_version );
// Routine to close a named service
extern int32_t vchi_service_close( const VCHI_SERVICE_HANDLE_T handle );
// Routine to increment ref count on a named service
extern int32_t vchi_service_use( const VCHI_SERVICE_HANDLE_T handle );
// Routine to decrement ref count on a named service
extern int32_t vchi_service_release( const VCHI_SERVICE_HANDLE_T handle );
// Routine to set a control option for a named service
extern int32_t vchi_service_set_option( const VCHI_SERVICE_HANDLE_T handle,
VCHI_SERVICE_OPTION_T option,
int value);
// Routine to send a message across a service
extern int32_t vchi_msg_queue( VCHI_SERVICE_HANDLE_T handle,
const void *data,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *msg_handle );
// scatter-gather (vector) and send message
int32_t vchi_msg_queuev_ex( VCHI_SERVICE_HANDLE_T handle,
VCHI_MSG_VECTOR_EX_T *vector,
uint32_t count,
VCHI_FLAGS_T flags,
void *msg_handle );
// legacy scatter-gather (vector) and send message, only handles pointers
int32_t vchi_msg_queuev( VCHI_SERVICE_HANDLE_T handle,
VCHI_MSG_VECTOR_T *vector,
uint32_t count,
VCHI_FLAGS_T flags,
void *msg_handle );
// Routine to receive a msg from a service
// Dequeue is equivalent to hold, copy into client buffer, release
extern int32_t vchi_msg_dequeue( VCHI_SERVICE_HANDLE_T handle,
void *data,
uint32_t max_data_size_to_read,
uint32_t *actual_msg_size,
VCHI_FLAGS_T flags );
// Routine to look at a message in place.
// The message is not dequeued, so a subsequent call to peek or dequeue
// will return the same message.
extern int32_t vchi_msg_peek( VCHI_SERVICE_HANDLE_T handle,
void **data,
uint32_t *msg_size,
VCHI_FLAGS_T flags );
// Routine to remove a message after it has been read in place with peek
// The first message on the queue is dequeued.
extern int32_t vchi_msg_remove( VCHI_SERVICE_HANDLE_T handle );
// Routine to look at a message in place.
// The message is dequeued, so the caller is left holding it; the descriptor is
// filled in and must be released when the user has finished with the message.
extern int32_t vchi_msg_hold( VCHI_SERVICE_HANDLE_T handle,
void **data, // } may be NULL, as info can be
uint32_t *msg_size, // } obtained from HELD_MSG_T
VCHI_FLAGS_T flags,
VCHI_HELD_MSG_T *message_descriptor );
// Initialise an iterator to look through messages in place
extern int32_t vchi_msg_look_ahead( VCHI_SERVICE_HANDLE_T handle,
VCHI_MSG_ITER_T *iter,
VCHI_FLAGS_T flags );
/******************************************************************************
Global service support API - operations on held messages and message iterators
*****************************************************************************/
// Routine to get the address of a held message
extern void *vchi_held_msg_ptr( const VCHI_HELD_MSG_T *message );
// Routine to get the size of a held message
extern int32_t vchi_held_msg_size( const VCHI_HELD_MSG_T *message );
// Routine to get the transmit timestamp as written into the header by the peer
extern uint32_t vchi_held_msg_tx_timestamp( const VCHI_HELD_MSG_T *message );
// Routine to get the reception timestamp, written as we parsed the header
extern uint32_t vchi_held_msg_rx_timestamp( const VCHI_HELD_MSG_T *message );
// Routine to release a held message after it has been processed
extern int32_t vchi_held_msg_release( VCHI_HELD_MSG_T *message );
// Indicates whether the iterator has a next message.
extern int32_t vchi_msg_iter_has_next( const VCHI_MSG_ITER_T *iter );
// Return the pointer and length for the next message and advance the iterator.
extern int32_t vchi_msg_iter_next( VCHI_MSG_ITER_T *iter,
void **data,
uint32_t *msg_size );
// Remove the last message returned by vchi_msg_iter_next.
// Can only be called once after each call to vchi_msg_iter_next.
extern int32_t vchi_msg_iter_remove( VCHI_MSG_ITER_T *iter );
// Hold the last message returned by vchi_msg_iter_next.
// Can only be called once after each call to vchi_msg_iter_next.
extern int32_t vchi_msg_iter_hold( VCHI_MSG_ITER_T *iter,
VCHI_HELD_MSG_T *message );
// Return information for the next message, and hold it, advancing the iterator.
extern int32_t vchi_msg_iter_hold_next( VCHI_MSG_ITER_T *iter,
void **data, // } may be NULL
uint32_t *msg_size, // }
VCHI_HELD_MSG_T *message );
/******************************************************************************
Global bulk API
*****************************************************************************/
// Routine to prepare interface for a transfer from the other side
extern int32_t vchi_bulk_queue_receive( VCHI_SERVICE_HANDLE_T handle,
void *data_dst,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *transfer_handle );
// Prepare interface for a transfer from the other side into relocatable memory.
int32_t vchi_bulk_queue_receive_reloc( const VCHI_SERVICE_HANDLE_T handle,
VCHI_MEM_HANDLE_T h_dst,
uint32_t offset,
uint32_t data_size,
const VCHI_FLAGS_T flags,
void * const bulk_handle );
// Routine to queue up data ready for transfer to the other side (once they have signalled they are ready)
extern int32_t vchi_bulk_queue_transmit( VCHI_SERVICE_HANDLE_T handle,
const void *data_src,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *transfer_handle );
/******************************************************************************
Configuration plumbing
*****************************************************************************/
// function prototypes for the different mid layers (the state info gives the different physical connections)
extern const VCHI_CONNECTION_API_T *single_get_func_table( void );
//extern const VCHI_CONNECTION_API_T *local_server_get_func_table( void );
//extern const VCHI_CONNECTION_API_T *local_client_get_func_table( void );
// declare all message drivers here
const VCHI_MESSAGE_DRIVER_T *vchi_mphi_message_driver_func_table( void );
#ifdef __cplusplus
}
#endif
extern int32_t vchi_bulk_queue_transmit_reloc( VCHI_SERVICE_HANDLE_T handle,
VCHI_MEM_HANDLE_T h_src,
uint32_t offset,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *transfer_handle );
#endif /* VCHI_H_ */
/****************************** End of file **********************************/
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHI_CFG_H_
#define VCHI_CFG_H_
/****************************************************************************************
* Defines in this first section are part of the VCHI API and may be examined by VCHI
* services.
***************************************************************************************/
/* Required alignment of base addresses for bulk transfer, if unaligned transfers are not enabled */
/* Really determined by the message driver, and should be available from a run-time call. */
#ifndef VCHI_BULK_ALIGN
# if __VCCOREVER__ >= 0x04000000
# define VCHI_BULK_ALIGN 32 // Allows for the need to do cache cleans
# else
# define VCHI_BULK_ALIGN 16
# endif
#endif
/* Required length multiple for bulk transfers, if unaligned transfers are not enabled */
/* May be less than or greater than VCHI_BULK_ALIGN */
/* Really determined by the message driver, and should be available from a run-time call. */
#ifndef VCHI_BULK_GRANULARITY
# if __VCCOREVER__ >= 0x04000000
# define VCHI_BULK_GRANULARITY 32 // Allows for the need to do cache cleans
# else
# define VCHI_BULK_GRANULARITY 16
# endif
#endif
/* The largest possible message to be queued with vchi_msg_queue. */
#ifndef VCHI_MAX_MSG_SIZE
# if defined VCHI_LOCAL_HOST_PORT
# define VCHI_MAX_MSG_SIZE 16384 // makes file transfers fast, but should they be using bulk?
# else
# define VCHI_MAX_MSG_SIZE 4096 // NOTE: THIS MUST BE LARGER THAN OR EQUAL TO THE SIZE OF THE KHRONOS MERGE BUFFER!!
# endif
#endif
/******************************************************************************************
* Defines below are system configuration options, and should not be used by VCHI services.
*****************************************************************************************/
/* How many connections can we support? A localhost implementation uses 2 connections,
* 1 for host-app, 1 for VMCS, and these are hooked together by a loopback MPHI VCFW
* driver. */
#ifndef VCHI_MAX_NUM_CONNECTIONS
# define VCHI_MAX_NUM_CONNECTIONS 3
#endif
/* How many services can we open per connection? Extending this doesn't cost processing time, just a small
* amount of static memory. */
#ifndef VCHI_MAX_SERVICES_PER_CONNECTION
# define VCHI_MAX_SERVICES_PER_CONNECTION 36
#endif
/* Adjust if using a message driver that supports more logical TX channels */
#ifndef VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION
# define VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION 9 // 1 MPHI + 8 CCP2 logical channels
#endif
/* Adjust if using a message driver that supports more logical RX channels */
#ifndef VCHI_MAX_BULK_RX_CHANNELS_PER_CONNECTION
# define VCHI_MAX_BULK_RX_CHANNELS_PER_CONNECTION 1 // 1 MPHI
#endif
/* How many receive slots do we use. This times VCHI_MAX_MSG_SIZE gives the effective
* receive queue space, less message headers. */
#ifndef VCHI_NUM_READ_SLOTS
# if defined(VCHI_LOCAL_HOST_PORT)
# define VCHI_NUM_READ_SLOTS 4
# else
# define VCHI_NUM_READ_SLOTS 48
# endif
#endif
/* Do we utilise overrun facility for receive message slots? Can aid peer transmit
* performance. Only define on VideoCore end, talking to host.
*/
//#define VCHI_MSG_RX_OVERRUN
/* How many transmit slots do we use. Generally don't need many, as the hardware driver
* underneath VCHI will usually have its own buffering. */
#ifndef VCHI_NUM_WRITE_SLOTS
# define VCHI_NUM_WRITE_SLOTS 4
#endif
/* If a service has held or queued received messages in VCHI_XOFF_THRESHOLD or more slots,
* then it's taking up too much buffer space, and the peer service will be told to stop
* transmitting with an XOFF message. For this to be effective, the VCHI_NUM_READ_SLOTS
* needs to be considerably bigger than VCHI_NUM_WRITE_SLOTS, or the transmit latency
* is too high. */
#ifndef VCHI_XOFF_THRESHOLD
# define VCHI_XOFF_THRESHOLD (VCHI_NUM_READ_SLOTS / 2)
#endif
/* After we've sent an XOFF, the peer will be told to resume transmission once the local
* service has dequeued/released enough messages that it's now occupying
* VCHI_XON_THRESHOLD slots or fewer. */
#ifndef VCHI_XON_THRESHOLD
# define VCHI_XON_THRESHOLD (VCHI_NUM_READ_SLOTS / 4)
#endif
/* A size below which a bulk transfer omits the handshake completely and always goes
* via the message channel, if bulk auxiliary is being sent on that service. (The user
* can guarantee this by enabling unaligned transmits).
* Not API. */
#ifndef VCHI_MIN_BULK_SIZE
# define VCHI_MIN_BULK_SIZE ( VCHI_MAX_MSG_SIZE / 2 < 4096 ? VCHI_MAX_MSG_SIZE / 2 : 4096 )
#endif
/* Maximum size of bulk transmission chunks, for each interface type. A trade-off between
* speed and latency; the smaller the chunk size the better chance of messages and other
* bulk transmissions getting in when big bulk transfers are happening. Set to 0 to not
* break transmissions into chunks.
*/
#ifndef VCHI_MAX_BULK_CHUNK_SIZE_MPHI
# define VCHI_MAX_BULK_CHUNK_SIZE_MPHI (16 * 1024)
#endif
/* NB Chunked CCP2 transmissions violate the letter of the CCP2 spec by using "JPEG8" mode
* with multiple-line frames. Only use if the receiver can cope. */
#ifndef VCHI_MAX_BULK_CHUNK_SIZE_CCP2
# define VCHI_MAX_BULK_CHUNK_SIZE_CCP2 0
#endif
/* How many TX messages can we have pending in our transmit slots. Once exhausted,
* vchi_msg_queue will be blocked. */
#ifndef VCHI_TX_MSG_QUEUE_SIZE
# define VCHI_TX_MSG_QUEUE_SIZE 256
#endif
/* How many RX messages can we have parsed in the receive slots. Once exhausted, parsing
* will be suspended until older messages are dequeued/released. */
#ifndef VCHI_RX_MSG_QUEUE_SIZE
# define VCHI_RX_MSG_QUEUE_SIZE 256
#endif
/* Really should be able to cope if we run out of received message descriptors, by
* suspending parsing as the comment above says, but we don't. This sweeps the issue
* under the carpet. */
#if VCHI_RX_MSG_QUEUE_SIZE < (VCHI_MAX_MSG_SIZE/16 + 1) * VCHI_NUM_READ_SLOTS
# undef VCHI_RX_MSG_QUEUE_SIZE
# define VCHI_RX_MSG_QUEUE_SIZE (VCHI_MAX_MSG_SIZE/16 + 1) * VCHI_NUM_READ_SLOTS
#endif
/* How many bulk transmits can we have pending. Once exhausted, vchi_bulk_queue_transmit
* will be blocked. */
#ifndef VCHI_TX_BULK_QUEUE_SIZE
# define VCHI_TX_BULK_QUEUE_SIZE 64
#endif
/* How many bulk receives can we have pending. Once exhausted, vchi_bulk_queue_receive
* will be blocked. */
#ifndef VCHI_RX_BULK_QUEUE_SIZE
# define VCHI_RX_BULK_QUEUE_SIZE 64
#endif
/* A limit on how many outstanding bulk requests we expect the peer to give us. If
* the peer asks for more than this, VCHI will fail and assert. The number is determined
* by the peer's hardware - it's the number of outstanding requests that can be queued
* on all bulk channels. VC3's MPHI peripheral allows 16. */
#ifndef VCHI_MAX_PEER_BULK_REQUESTS
# define VCHI_MAX_PEER_BULK_REQUESTS 32
#endif
/* Define VCHI_CCP2TX_MANUAL_POWER if the host tells us when to turn the CCP2
* transmitter on and off.
*/
/*#define VCHI_CCP2TX_MANUAL_POWER*/
#ifndef VCHI_CCP2TX_MANUAL_POWER
/* Timeout (in milliseconds) for putting the CCP2TX interface into IDLE state. Set
* negative for no IDLE.
*/
# ifndef VCHI_CCP2TX_IDLE_TIMEOUT
# define VCHI_CCP2TX_IDLE_TIMEOUT 5
# endif
/* Timeout (in milliseconds) for putting the CCP2TX interface into OFF state. Set
* negative for no OFF.
*/
# ifndef VCHI_CCP2TX_OFF_TIMEOUT
# define VCHI_CCP2TX_OFF_TIMEOUT 1000
# endif
#endif /* VCHI_CCP2TX_MANUAL_POWER */
#endif /* VCHI_CFG_H_ */
/****************************** End of file **********************************/
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHI_CFG_INTERNAL_H_
#define VCHI_CFG_INTERNAL_H_
/****************************************************************************************
* Control optimisation attempts.
***************************************************************************************/
// Don't use lots of short-term locks - use great long ones, reducing the overall locks-per-second
#define VCHI_COARSE_LOCKING
// Avoid lock then unlock on exit from blocking queue operations (msg tx, bulk rx/tx)
// (only relevant if VCHI_COARSE_LOCKING)
#define VCHI_ELIDE_BLOCK_EXIT_LOCK
// Avoid lock on non-blocking peek
// (only relevant if VCHI_COARSE_LOCKING)
#define VCHI_AVOID_PEEK_LOCK
// Use one slot-handler thread per connection, rather than 1 thread dealing with all connections in rotation.
#define VCHI_MULTIPLE_HANDLER_THREADS
// Put free descriptors onto the head of the free queue, rather than the tail, so that we don't thrash
// our way through the pool of descriptors.
#define VCHI_PUSH_FREE_DESCRIPTORS_ONTO_HEAD
// Don't issue a MSG_AVAILABLE callback for every single message. Possibly only safe if VCHI_COARSE_LOCKING.
#define VCHI_FEWER_MSG_AVAILABLE_CALLBACKS
// Don't use message descriptors for TX messages that don't need them
#define VCHI_MINIMISE_TX_MSG_DESCRIPTORS
// Nano-locks for multiqueue
//#define VCHI_MQUEUE_NANOLOCKS
// Lock-free(er) dequeuing
//#define VCHI_RX_NANOLOCKS
#endif /*VCHI_CFG_INTERNAL_H_*/
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHI_COMMON_H_
#define VCHI_COMMON_H_
//flags used when sending messages (must be bitmapped)
typedef enum
{
VCHI_FLAGS_NONE = 0x0,
VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE = 0x1, // waits for message to be received, or sent (NB. not the same as being seen on other side)
VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE = 0x2, // run a callback when message sent
VCHI_FLAGS_BLOCK_UNTIL_QUEUED = 0x4, // return once the transfer is in a queue ready to go
VCHI_FLAGS_ALLOW_PARTIAL = 0x8,
VCHI_FLAGS_BLOCK_UNTIL_DATA_READ = 0x10,
VCHI_FLAGS_CALLBACK_WHEN_DATA_READ = 0x20,
VCHI_FLAGS_ALIGN_SLOT = 0x000080, // internal use only
VCHI_FLAGS_BULK_AUX_QUEUED = 0x010000, // internal use only
VCHI_FLAGS_BULK_AUX_COMPLETE = 0x020000, // internal use only
VCHI_FLAGS_BULK_DATA_QUEUED = 0x040000, // internal use only
VCHI_FLAGS_BULK_DATA_COMPLETE = 0x080000, // internal use only
VCHI_FLAGS_INTERNAL = 0xFF0000
} VCHI_FLAGS_T;
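// The flag values are distinct bits so they may be OR'd together, e.g.
// VCHI_FLAGS_BLOCK_UNTIL_QUEUED | VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
// returns once the transfer is queued and raises a callback when it has been sent.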
// constants for vchi_crc_control()
typedef enum {
VCHI_CRC_NOTHING = -1,
VCHI_CRC_PER_SERVICE = 0,
VCHI_CRC_EVERYTHING = 1,
} VCHI_CRC_CONTROL_T;
//callback reasons when an event occurs on a service
typedef enum
{
VCHI_CALLBACK_REASON_MIN,
// This indicates that there is data available
// handle is the msg id that was transmitted with the data
// When a message is received and there was no FULL message available previously, send callback
// Tasks get kicked by the callback, reset their event and try to read from the fifo until it fails
VCHI_CALLBACK_MSG_AVAILABLE,
VCHI_CALLBACK_MSG_SENT,
VCHI_CALLBACK_MSG_SPACE_AVAILABLE, // XXX not yet implemented
// This indicates that a transfer from the other side has completed
VCHI_CALLBACK_BULK_RECEIVED,
//This indicates that data queued up to be sent has now gone
//handle is the msg id that was used when sending the data
VCHI_CALLBACK_BULK_SENT,
VCHI_CALLBACK_BULK_RX_SPACE_AVAILABLE, // XXX not yet implemented
VCHI_CALLBACK_BULK_TX_SPACE_AVAILABLE, // XXX not yet implemented
VCHI_CALLBACK_SERVICE_CLOSED,
// this side has sent XOFF to peer due to lack of data consumption by service
// (suggests the service may need to take some recovery action if it has
// been deliberately holding off consuming data)
VCHI_CALLBACK_SENT_XOFF,
VCHI_CALLBACK_SENT_XON,
// indicates that a bulk transfer has finished reading the source buffer
VCHI_CALLBACK_BULK_DATA_READ,
// power notification events (currently host side only)
VCHI_CALLBACK_PEER_OFF,
VCHI_CALLBACK_PEER_SUSPENDED,
VCHI_CALLBACK_PEER_ON,
VCHI_CALLBACK_PEER_RESUMED,
VCHI_CALLBACK_FORCED_POWER_OFF,
#ifdef USE_VCHIQ_ARM
// some extra notifications provided by vchiq_arm
VCHI_CALLBACK_SERVICE_OPENED,
VCHI_CALLBACK_BULK_RECEIVE_ABORTED,
VCHI_CALLBACK_BULK_TRANSMIT_ABORTED,
#endif
VCHI_CALLBACK_REASON_MAX
} VCHI_CALLBACK_REASON_T;
// service control options
typedef enum
{
VCHI_SERVICE_OPTION_MIN,
VCHI_SERVICE_OPTION_TRACE,
VCHI_SERVICE_OPTION_SYNCHRONOUS,
VCHI_SERVICE_OPTION_MAX
} VCHI_SERVICE_OPTION_T;
//Callback used by all services / bulk transfers
typedef void (*VCHI_CALLBACK_T)( void *callback_param, //my service local param
VCHI_CALLBACK_REASON_T reason,
void *handle ); //for transmitting msg's only
/*
* Define vector struct for scatter-gather (vector) operations
* Vectors can be nested - if a vector element has negative length, then
* the data pointer is treated as pointing to another vector array, with
* '-vec_len' elements. Thus to append a header onto an existing vector,
* you can do this:
*
* void foo(const VCHI_MSG_VECTOR_T *v, int n)
* {
* VCHI_MSG_VECTOR_T nv[2];
* nv[0].vec_base = my_header;
* nv[0].vec_len = sizeof my_header;
* nv[1].vec_base = v;
* nv[1].vec_len = -n;
* ...
*
*/
typedef struct vchi_msg_vector {
const void *vec_base;
int32_t vec_len;
} VCHI_MSG_VECTOR_T;
// Opaque type for a connection API
typedef struct opaque_vchi_connection_api_t VCHI_CONNECTION_API_T;
// Opaque type for a message driver
typedef struct opaque_vchi_message_driver_t VCHI_MESSAGE_DRIVER_T;
// Iterator structure for reading ahead through the received message queue. Allocated by the
// client and initialised by vchi_msg_look_ahead; the fields are for internal VCHI use only.
// The iterator covers the messages queued at the instant of the call to vchi_msg_look_ahead -
// it will not proceed to messages received since. Behaviour is undefined if an iterator
// is used again after messages for that service are removed/dequeued by any
// means other than vchi_msg_iter_... calls on the iterator itself.
typedef struct {
struct opaque_vchi_service_t *service;
void *last;
void *next;
void *remove;
} VCHI_MSG_ITER_T;
#endif // VCHI_COMMON_H_
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHI_MH_H_
#define VCHI_MH_H_
#include <linux/types.h>
typedef int32_t VCHI_MEM_HANDLE_T;
#define VCHI_MEM_HANDLE_INVALID 0
#endif
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_VCHIQ_H
#define VCHIQ_VCHIQ_H
#include "vchiq_if.h"
#include "vchiq_util.h"
#endif
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_2835_H
#define VCHIQ_2835_H
#include "vchiq_pagelist.h"
#define VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX 0
#define VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX 1
#endif /* VCHIQ_2835_H */
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/pagemap.h>
#include <linux/dma-mapping.h>
#include <linux/version.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/uaccess.h>
#include <linux/of.h>
#include <asm/pgtable.h>
#include <soc/bcm2835/raspberrypi-firmware.h>
#define dmac_map_area __glue(_CACHE,_dma_map_area)
#define dmac_unmap_area __glue(_CACHE,_dma_unmap_area)
extern void dmac_map_area(const void *, size_t, int);
extern void dmac_unmap_area(const void *, size_t, int);
#define TOTAL_SLOTS (VCHIQ_SLOT_ZERO_SLOTS + 2 * 32)
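/* Translate an ARM lowmem virtual address into a VideoCore bus address by
adding the fixed offset derived from virt_to_dma() at init time. */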
#define VCHIQ_ARM_ADDRESS(x) ((void *)((char *)x + g_virt_to_bus_offset))
#include "vchiq_arm.h"
#include "vchiq_2835.h"
#include "vchiq_connected.h"
#include "vchiq_killable.h"
#define MAX_FRAGMENTS (VCHIQ_NUM_CURRENT_BULKS * 2)
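/* Doorbell register offsets within the mapped register block: BELL0 is read
(and thereby cleared) when VideoCore interrupts the ARM; BELL2 is written
to interrupt VideoCore. */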
#define BELL0 0x00
#define BELL2 0x08
typedef struct vchiq_2835_state_struct {
int inited;
VCHIQ_ARM_STATE_T arm_state;
} VCHIQ_2835_ARM_STATE_T;
static void __iomem *g_regs;
static unsigned int g_cache_line_size = CACHE_LINE_SIZE; /* may be overridden by the cache-line-size DT property */
static unsigned int g_fragments_size;
static char *g_fragments_base;
static char *g_free_fragments;
static struct semaphore g_free_fragments_sema;
static unsigned long g_virt_to_bus_offset;
extern int vchiq_arm_log_level;
static DEFINE_SEMAPHORE(g_free_fragments_mutex);
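/* The free fragments are kept on a singly-linked list threaded through the
fragment pool itself (the first word of each free entry points to the next).
g_free_fragments_mutex protects the list and g_free_fragments_sema counts the
free entries, so allocators can block until one is returned. */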
static irqreturn_t
vchiq_doorbell_irq(int irq, void *dev_id);
static int
create_pagelist(char __user *buf, size_t count, unsigned short type,
struct task_struct *task, PAGELIST_T ** ppagelist);
static void
free_pagelist(PAGELIST_T *pagelist, int actual);
int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state)
{
struct device *dev = &pdev->dev;
struct rpi_firmware *fw = platform_get_drvdata(pdev);
VCHIQ_SLOT_ZERO_T *vchiq_slot_zero;
struct resource *res;
void *slot_mem;
dma_addr_t slot_phys;
u32 channelbase;
int slot_mem_size, frag_mem_size;
int err, irq, i;
g_virt_to_bus_offset = virt_to_dma(dev, (void *)0);
(void)of_property_read_u32(dev->of_node, "cache-line-size",
&g_cache_line_size);
g_fragments_size = 2 * g_cache_line_size;
/* Allocate space for the channels in coherent memory */
slot_mem_size = PAGE_ALIGN(TOTAL_SLOTS * VCHIQ_SLOT_SIZE);
frag_mem_size = PAGE_ALIGN(g_fragments_size * MAX_FRAGMENTS);
slot_mem = dmam_alloc_coherent(dev, slot_mem_size + frag_mem_size,
&slot_phys, GFP_KERNEL);
if (!slot_mem) {
dev_err(dev, "could not allocate DMA memory\n");
return -ENOMEM;
}
WARN_ON(((int)slot_mem & (PAGE_SIZE - 1)) != 0);
vchiq_slot_zero = vchiq_init_slots(slot_mem, slot_mem_size);
if (!vchiq_slot_zero)
return -EINVAL;
vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX] =
(int)slot_phys + slot_mem_size;
vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX] =
MAX_FRAGMENTS;
g_fragments_base = (char *)slot_mem + slot_mem_size;
slot_mem_size += frag_mem_size;
g_free_fragments = g_fragments_base;
for (i = 0; i < (MAX_FRAGMENTS - 1); i++) {
*(char **)&g_fragments_base[i*g_fragments_size] =
&g_fragments_base[(i + 1)*g_fragments_size];
}
*(char **)&g_fragments_base[i * g_fragments_size] = NULL;
sema_init(&g_free_fragments_sema, MAX_FRAGMENTS);
if (vchiq_init_state(state, vchiq_slot_zero, 0) != VCHIQ_SUCCESS)
return -EINVAL;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
g_regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(g_regs))
return PTR_ERR(g_regs);
irq = platform_get_irq(pdev, 0);
if (irq <= 0) {
dev_err(dev, "failed to get IRQ\n");
return irq;
}
err = devm_request_irq(dev, irq, vchiq_doorbell_irq, IRQF_IRQPOLL,
"VCHIQ doorbell", state);
if (err) {
dev_err(dev, "failed to register irq=%d\n", irq);
return err;
}
/* Send the base address of the slots to VideoCore */
channelbase = slot_phys;
err = rpi_firmware_property(fw, RPI_FIRMWARE_VCHIQ_INIT,
&channelbase, sizeof(channelbase));
if (err || channelbase) {
dev_err(dev, "failed to set channelbase\n");
return err ? : -ENXIO;
}
vchiq_log_info(vchiq_arm_log_level,
"vchiq_init - done (slots %x, phys %pad)",
(unsigned int)vchiq_slot_zero, &slot_phys);
vchiq_call_connected_callbacks();
return 0;
}
VCHIQ_STATUS_T
vchiq_platform_init_state(VCHIQ_STATE_T *state)
{
VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
state->platform_state = kzalloc(sizeof(VCHIQ_2835_ARM_STATE_T), GFP_KERNEL);
((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->inited = 1;
status = vchiq_arm_init_state(state, &((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->arm_state);
if(status != VCHIQ_SUCCESS)
{
((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->inited = 0;
}
return status;
}
VCHIQ_ARM_STATE_T*
vchiq_platform_get_arm_state(VCHIQ_STATE_T *state)
{
if(!((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->inited)
{
BUG();
}
return &((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->arm_state;
}
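/* Signal an event to the remote (VideoCore) side: mark it as fired and, if
the peer has armed the event, ring doorbell 2 to raise an interrupt on
VideoCore. The barriers ensure the fired flag is visible before the
doorbell write. */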
void
remote_event_signal(REMOTE_EVENT_T *event)
{
wmb();
event->fired = 1;
dsb(); /* data barrier operation */
if (event->armed)
writel(0, g_regs + BELL2); /* trigger vc interrupt */
}
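/* Copy data that may originate in either user or kernel space; addresses
below TASK_SIZE are treated as user pointers and copied with
copy_from_user(), anything else is assumed to be a kernel buffer and is
copied directly. */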
int
vchiq_copy_from_user(void *dst, const void *src, int size)
{
if ((uint32_t)src < TASK_SIZE) {
return copy_from_user(dst, src, size);
} else {
memcpy(dst, src, size);
return 0;
}
}
VCHIQ_STATUS_T
vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, VCHI_MEM_HANDLE_T memhandle,
void *offset, int size, int dir)
{
PAGELIST_T *pagelist;
int ret;
WARN_ON(memhandle != VCHI_MEM_HANDLE_INVALID);
ret = create_pagelist((char __user *)offset, size,
(dir == VCHIQ_BULK_RECEIVE)
? PAGELIST_READ
: PAGELIST_WRITE,
current,
&pagelist);
if (ret != 0)
return VCHIQ_ERROR;
bulk->handle = memhandle;
bulk->data = VCHIQ_ARM_ADDRESS(pagelist);
/* Store the pagelist address in remote_data, which isn't used by the
slave. */
bulk->remote_data = pagelist;
return VCHIQ_SUCCESS;
}
void
vchiq_complete_bulk(VCHIQ_BULK_T *bulk)
{
if (bulk && bulk->remote_data && bulk->actual)
free_pagelist((PAGELIST_T *)bulk->remote_data, bulk->actual);
}
void
vchiq_transfer_bulk(VCHIQ_BULK_T *bulk)
{
/*
* This should only be called on the master (VideoCore) side, but
* provide an implementation to avoid the need for ifdefery.
*/
BUG();
}
void
vchiq_dump_platform_state(void *dump_context)
{
char buf[80];
int len;
len = snprintf(buf, sizeof(buf),
" Platform: 2835 (VC master)");
vchiq_dump(dump_context, buf, len + 1);
}
VCHIQ_STATUS_T
vchiq_platform_suspend(VCHIQ_STATE_T *state)
{
return VCHIQ_ERROR;
}
VCHIQ_STATUS_T
vchiq_platform_resume(VCHIQ_STATE_T *state)
{
return VCHIQ_SUCCESS;
}
void
vchiq_platform_paused(VCHIQ_STATE_T *state)
{
}
void
vchiq_platform_resumed(VCHIQ_STATE_T *state)
{
}
int
vchiq_platform_videocore_wanted(VCHIQ_STATE_T* state)
{
return 1; // autosuspend not supported - videocore always wanted
}
int
vchiq_platform_use_suspend_timer(void)
{
return 0;
}
void
vchiq_dump_platform_use_state(VCHIQ_STATE_T *state)
{
vchiq_log_info(vchiq_arm_log_level, "Suspend timer not in use");
}
void
vchiq_platform_handle_timeout(VCHIQ_STATE_T *state)
{
(void)state;
}
/*
* Local functions
*/
static irqreturn_t
vchiq_doorbell_irq(int irq, void *dev_id)
{
VCHIQ_STATE_T *state = dev_id;
irqreturn_t ret = IRQ_NONE;
unsigned int status;
/* Read (and clear) the doorbell */
status = readl(g_regs + BELL0);
if (status & 0x4) { /* Was the doorbell rung? */
remote_event_pollall(state);
ret = IRQ_HANDLED;
}
return ret;
}
/* There is a potential problem with partial cache lines (pages?)
** at the ends of the block when reading. If the CPU accessed anything in
** the same line (page?) then it may have pulled old data into the cache,
** obscuring the new data underneath. We can solve this by transferring the
** partial cache lines separately, and allowing the ARM to copy into the
** cached area.
** N.B. This implementation plays slightly fast and loose with the Linux
** driver programming rules, e.g. its use of dmac_map_area instead of
** dma_map_single, but it isn't a multi-platform driver and it benefits
** from increased speed as a result.
*/
static int
create_pagelist(char __user *buf, size_t count, unsigned short type,
struct task_struct *task, PAGELIST_T ** ppagelist)
{
PAGELIST_T *pagelist;
struct page **pages;
unsigned long *addrs;
unsigned int num_pages, offset, i;
char *addr, *base_addr, *next_addr;
int run, addridx, actual_pages;
unsigned long *need_release;
offset = (unsigned int)buf & (PAGE_SIZE - 1);
num_pages = (count + offset + PAGE_SIZE - 1) / PAGE_SIZE;
*ppagelist = NULL;
/* Allocate enough storage to hold the page pointers and the page
** list
*/
pagelist = kmalloc(sizeof(PAGELIST_T) +
(num_pages * sizeof(unsigned long)) +
sizeof(unsigned long) +
(num_pages * sizeof(pages[0])),
GFP_KERNEL);
vchiq_log_trace(vchiq_arm_log_level,
"create_pagelist - %x", (unsigned int)pagelist);
if (!pagelist)
return -ENOMEM;
addrs = pagelist->addrs;
need_release = (unsigned long *)(addrs + num_pages);
pages = (struct page **)(addrs + num_pages + 1);
if (is_vmalloc_addr(buf)) {
int dir = (type == PAGELIST_WRITE) ?
DMA_TO_DEVICE : DMA_FROM_DEVICE;
unsigned long length = count;
unsigned int off = offset;
for (actual_pages = 0; actual_pages < num_pages;
actual_pages++) {
struct page *pg = vmalloc_to_page(buf + (actual_pages *
PAGE_SIZE));
size_t bytes = PAGE_SIZE - off;
if (bytes > length)
bytes = length;
pages[actual_pages] = pg;
dmac_map_area(page_address(pg) + off, bytes, dir);
length -= bytes;
off = 0;
}
*need_release = 0; /* do not try and release vmalloc pages */
} else {
down_read(&task->mm->mmap_sem);
actual_pages = get_user_pages(task, task->mm,
(unsigned long)buf & ~(PAGE_SIZE - 1),
num_pages,
(type == PAGELIST_READ) /*Write */ ,
0 /*Force */ ,
pages,
NULL /*vmas */);
up_read(&task->mm->mmap_sem);
if (actual_pages != num_pages) {
vchiq_log_info(vchiq_arm_log_level,
"create_pagelist - only %d/%d pages locked",
actual_pages,
num_pages);
/* This is probably due to the process being killed */
while (actual_pages > 0)
{
actual_pages--;
page_cache_release(pages[actual_pages]);
}
kfree(pagelist);
if (actual_pages == 0)
actual_pages = -ENOMEM;
return actual_pages;
}
*need_release = 1; /* release user pages */
}
pagelist->length = count;
pagelist->type = type;
pagelist->offset = offset;
/* Group the pages into runs of contiguous pages */
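/* Each addrs[] entry is a page-aligned bus address with the number of
additional contiguous pages in the run packed into its low-order bits,
so each run can be described by a single word. */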
base_addr = VCHIQ_ARM_ADDRESS(page_address(pages[0]));
next_addr = base_addr + PAGE_SIZE;
addridx = 0;
run = 0;
for (i = 1; i < num_pages; i++) {
addr = VCHIQ_ARM_ADDRESS(page_address(pages[i]));
if ((addr == next_addr) && (run < (PAGE_SIZE - 1))) {
next_addr += PAGE_SIZE;
run++;
} else {
addrs[addridx] = (unsigned long)base_addr + run;
addridx++;
base_addr = addr;
next_addr = addr + PAGE_SIZE;
run = 0;
}
}
addrs[addridx] = (unsigned long)base_addr + run;
addridx++;
/* Partial cache lines (fragments) require special measures */
if ((type == PAGELIST_READ) &&
((pagelist->offset & (g_cache_line_size - 1)) ||
((pagelist->offset + pagelist->length) &
(g_cache_line_size - 1)))) {
char *fragments;
if (down_interruptible(&g_free_fragments_sema) != 0) {
kfree(pagelist);
return -EINTR;
}
WARN_ON(g_free_fragments == NULL);
down(&g_free_fragments_mutex);
fragments = g_free_fragments;
WARN_ON(fragments == NULL);
g_free_fragments = *(char **) g_free_fragments;
up(&g_free_fragments_mutex);
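/* Record which fragment pair was claimed by encoding its index into the
pagelist type, so free_pagelist() can locate the same entry later. */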
pagelist->type = PAGELIST_READ_WITH_FRAGMENTS +
(fragments - g_fragments_base) / g_fragments_size;
}
dmac_flush_range(pagelist, addrs + num_pages);
*ppagelist = pagelist;
return 0;
}
static void
free_pagelist(PAGELIST_T *pagelist, int actual)
{
unsigned long *need_release;
struct page **pages;
unsigned int num_pages, i;
vchiq_log_trace(vchiq_arm_log_level,
"free_pagelist - %x, %d", (unsigned int)pagelist, actual);
num_pages =
(pagelist->length + pagelist->offset + PAGE_SIZE - 1) /
PAGE_SIZE;
need_release = (unsigned long *)(pagelist->addrs + num_pages);
pages = (struct page **)(pagelist->addrs + num_pages + 1);
/* Deal with any partial cache lines (fragments) */
if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS) {
char *fragments = g_fragments_base +
(pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) *
g_fragments_size;
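/* Each fragment entry holds two cache lines: the first receives the
unaligned head of the transfer and the second the unaligned tail.
Copy whichever parts apply back into the user's pages. */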
int head_bytes, tail_bytes;
head_bytes = (g_cache_line_size - pagelist->offset) &
(g_cache_line_size - 1);
tail_bytes = (pagelist->offset + actual) &
(g_cache_line_size - 1);
if ((actual >= 0) && (head_bytes != 0)) {
if (head_bytes > actual)
head_bytes = actual;
memcpy((char *)page_address(pages[0]) +
pagelist->offset,
fragments,
head_bytes);
}
if ((actual >= 0) && (head_bytes < actual) &&
(tail_bytes != 0)) {
memcpy((char *)page_address(pages[num_pages - 1]) +
((pagelist->offset + actual) &
(PAGE_SIZE - 1) & ~(g_cache_line_size - 1)),
fragments + g_cache_line_size,
tail_bytes);
}
down(&g_free_fragments_mutex);
*(char **)fragments = g_free_fragments;
g_free_fragments = fragments;
up(&g_free_fragments_mutex);
up(&g_free_fragments_sema);
}
if (*need_release) {
unsigned int length = pagelist->length;
unsigned int offset = pagelist->offset;
for (i = 0; i < num_pages; i++) {
struct page *pg = pages[i];
if (pagelist->type != PAGELIST_WRITE) {
unsigned int bytes = PAGE_SIZE - offset;
if (bytes > length)
bytes = length;
dmac_unmap_area(page_address(pg) + offset,
bytes, DMA_FROM_DEVICE);
length -= bytes;
offset = 0;
set_page_dirty(pg);
}
page_cache_release(pg);
}
}
kfree(pagelist);
}
/**
* Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/device.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/bug.h>
#include <linux/semaphore.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <soc/bcm2835/raspberrypi-firmware.h>
#include "vchiq_core.h"
#include "vchiq_ioctl.h"
#include "vchiq_arm.h"
#include "vchiq_debugfs.h"
#include "vchiq_killable.h"
#define DEVICE_NAME "vchiq"
/* Override the default prefix, which would be vchiq_arm (from the filename) */
#undef MODULE_PARAM_PREFIX
#define MODULE_PARAM_PREFIX DEVICE_NAME "."
#define VCHIQ_MINOR 0
/* Some per-instance constants */
#define MAX_COMPLETIONS 16
#define MAX_SERVICES 64
#define MAX_ELEMENTS 8
#define MSG_QUEUE_SIZE 64
#define KEEPALIVE_VER 1
#define KEEPALIVE_VER_MIN KEEPALIVE_VER
/* Run time control of log level, based on KERN_XXX level. */
int vchiq_arm_log_level = VCHIQ_LOG_DEFAULT;
int vchiq_susp_log_level = VCHIQ_LOG_ERROR;
#define SUSPEND_TIMER_TIMEOUT_MS 100
#define SUSPEND_RETRY_TIMER_TIMEOUT_MS 1000
#define VC_SUSPEND_NUM_OFFSET 3 /* number of values before idle which are -ve */
static const char *const suspend_state_names[] = {
"VC_SUSPEND_FORCE_CANCELED",
"VC_SUSPEND_REJECTED",
"VC_SUSPEND_FAILED",
"VC_SUSPEND_IDLE",
"VC_SUSPEND_REQUESTED",
"VC_SUSPEND_IN_PROGRESS",
"VC_SUSPEND_SUSPENDED"
};
#define VC_RESUME_NUM_OFFSET 1 /* number of values before idle which are -ve */
static const char *const resume_state_names[] = {
"VC_RESUME_FAILED",
"VC_RESUME_IDLE",
"VC_RESUME_REQUESTED",
"VC_RESUME_IN_PROGRESS",
"VC_RESUME_RESUMED"
};
/* The number of times we allow force suspend to timeout before actually
** _forcing_ suspend. This is to cater for SW which fails to release vchiq
** correctly - we don't want to prevent ARM suspend indefinitely in this case.
*/
#define FORCE_SUSPEND_FAIL_MAX 8
/* The time in ms allowed for videocore to go idle when force suspend has been
* requested */
#define FORCE_SUSPEND_TIMEOUT_MS 200
static void suspend_timer_callback(unsigned long context);
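/* Per-service state for services created through this character device.
msg_insert/msg_remove are free-running indices into the msg_queue ring
(full when they differ by MSG_QUEUE_SIZE, indexed modulo the ring size);
close_pending marks a CLOSED notification that has not yet been
acknowledged via the CLOSE_DELIVERED ioctl. */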
typedef struct user_service_struct {
VCHIQ_SERVICE_T *service;
void *userdata;
VCHIQ_INSTANCE_T instance;
char is_vchi;
char dequeue_pending;
char close_pending;
int message_available_pos;
int msg_insert;
int msg_remove;
struct semaphore insert_event;
struct semaphore remove_event;
struct semaphore close_event;
VCHIQ_HEADER_T * msg_queue[MSG_QUEUE_SIZE];
} USER_SERVICE_T;
struct bulk_waiter_node {
struct bulk_waiter bulk_waiter;
int pid;
struct list_head list;
};
struct vchiq_instance_struct {
VCHIQ_STATE_T *state;
VCHIQ_COMPLETION_DATA_T completions[MAX_COMPLETIONS];
int completion_insert;
int completion_remove;
struct semaphore insert_event;
struct semaphore remove_event;
struct mutex completion_mutex;
int connected;
int closing;
int pid;
int mark;
int use_close_delivered;
int trace;
struct list_head bulk_waiter_list;
struct mutex bulk_waiter_list_mutex;
VCHIQ_DEBUGFS_NODE_T debugfs_node;
};
typedef struct dump_context_struct {
char __user *buf;
size_t actual;
size_t space;
loff_t offset;
} DUMP_CONTEXT_T;
static struct cdev vchiq_cdev;
static dev_t vchiq_devid;
static VCHIQ_STATE_T g_state;
static struct class *vchiq_class;
static struct device *vchiq_dev;
static DEFINE_SPINLOCK(msg_queue_spinlock);
static const char *const ioctl_names[] = {
"CONNECT",
"SHUTDOWN",
"CREATE_SERVICE",
"REMOVE_SERVICE",
"QUEUE_MESSAGE",
"QUEUE_BULK_TRANSMIT",
"QUEUE_BULK_RECEIVE",
"AWAIT_COMPLETION",
"DEQUEUE_MESSAGE",
"GET_CLIENT_ID",
"GET_CONFIG",
"CLOSE_SERVICE",
"USE_SERVICE",
"RELEASE_SERVICE",
"SET_SERVICE_OPTION",
"DUMP_PHYS_MEM",
"LIB_VERSION",
"CLOSE_DELIVERED"
};
vchiq_static_assert((sizeof(ioctl_names)/sizeof(ioctl_names[0])) ==
(VCHIQ_IOC_MAX + 1));
static void
dump_phys_mem(void *virt_addr, uint32_t num_bytes);
/****************************************************************************
*
* add_completion
*
***************************************************************************/
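/* Completion records live in a fixed ring of MAX_COMPLETIONS entries.
completion_insert and completion_remove are free-running counters, so the
ring is full when they differ by MAX_COMPLETIONS and entries are indexed
modulo MAX_COMPLETIONS (which must therefore be a power of two). */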
static VCHIQ_STATUS_T
add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason,
VCHIQ_HEADER_T *header, USER_SERVICE_T *user_service,
void *bulk_userdata)
{
VCHIQ_COMPLETION_DATA_T *completion;
DEBUG_INITIALISE(g_state.local)
while (instance->completion_insert ==
(instance->completion_remove + MAX_COMPLETIONS)) {
/* Out of space - wait for the client */
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
vchiq_log_trace(vchiq_arm_log_level,
"add_completion - completion queue full");
DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT);
if (down_interruptible(&instance->remove_event) != 0) {
vchiq_log_info(vchiq_arm_log_level,
"service_callback interrupted");
return VCHIQ_RETRY;
} else if (instance->closing) {
vchiq_log_info(vchiq_arm_log_level,
"service_callback closing");
return VCHIQ_ERROR;
}
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
}
completion =
&instance->completions[instance->completion_insert &
(MAX_COMPLETIONS - 1)];
completion->header = header;
completion->reason = reason;
/* N.B. service_userdata is updated while processing AWAIT_COMPLETION */
completion->service_userdata = user_service->service;
completion->bulk_userdata = bulk_userdata;
if (reason == VCHIQ_SERVICE_CLOSED) {
/* Take an extra reference, to be held until
this CLOSED notification is delivered. */
lock_service(user_service->service);
if (instance->use_close_delivered)
user_service->close_pending = 1;
}
/* A write barrier is needed here to ensure that the entire completion
record is written out before the insert point. */
wmb();
if (reason == VCHIQ_MESSAGE_AVAILABLE)
user_service->message_available_pos =
instance->completion_insert;
instance->completion_insert++;
up(&instance->insert_event);
return VCHIQ_SUCCESS;
}
/****************************************************************************
*
* service_callback
*
***************************************************************************/
static VCHIQ_STATUS_T
service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header,
VCHIQ_SERVICE_HANDLE_T handle, void *bulk_userdata)
{
/* How do we ensure the callback goes to the right client?
** The service userdata points to a USER_SERVICE_T record containing
** the original callback and the user state structure, which contains a
** circular buffer for completion records.
*/
USER_SERVICE_T *user_service;
VCHIQ_SERVICE_T *service;
VCHIQ_INSTANCE_T instance;
DEBUG_INITIALISE(g_state.local)
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
service = handle_to_service(handle);
BUG_ON(!service);
user_service = (USER_SERVICE_T *)service->base.userdata;
instance = user_service->instance;
if (!instance || instance->closing)
return VCHIQ_SUCCESS;
vchiq_log_trace(vchiq_arm_log_level,
"service_callback - service %lx(%d,%p), reason %d, header %lx, "
"instance %lx, bulk_userdata %lx",
(unsigned long)user_service,
service->localport, user_service->userdata,
reason, (unsigned long)header,
(unsigned long)instance, (unsigned long)bulk_userdata);
if (header && user_service->is_vchi) {
spin_lock(&msg_queue_spinlock);
while (user_service->msg_insert ==
(user_service->msg_remove + MSG_QUEUE_SIZE)) {
spin_unlock(&msg_queue_spinlock);
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
DEBUG_COUNT(MSG_QUEUE_FULL_COUNT);
vchiq_log_trace(vchiq_arm_log_level,
"service_callback - msg queue full");
/* If there is no MESSAGE_AVAILABLE in the completion
** queue, add one
*/
if ((user_service->message_available_pos -
instance->completion_remove) < 0) {
VCHIQ_STATUS_T status;
vchiq_log_info(vchiq_arm_log_level,
"Inserting extra MESSAGE_AVAILABLE");
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
status = add_completion(instance, reason,
NULL, user_service, bulk_userdata);
if (status != VCHIQ_SUCCESS) {
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
return status;
}
}
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
if (down_interruptible(&user_service->remove_event)
!= 0) {
vchiq_log_info(vchiq_arm_log_level,
"service_callback interrupted");
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
return VCHIQ_RETRY;
} else if (instance->closing) {
vchiq_log_info(vchiq_arm_log_level,
"service_callback closing");
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
return VCHIQ_ERROR;
}
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
spin_lock(&msg_queue_spinlock);
}
user_service->msg_queue[user_service->msg_insert &
(MSG_QUEUE_SIZE - 1)] = header;
user_service->msg_insert++;
spin_unlock(&msg_queue_spinlock);
up(&user_service->insert_event);
/* If there is a thread waiting in DEQUEUE_MESSAGE, or if
** there is a MESSAGE_AVAILABLE in the completion queue then
** bypass the completion queue.
*/
if (((user_service->message_available_pos -
instance->completion_remove) >= 0) ||
user_service->dequeue_pending) {
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
user_service->dequeue_pending = 0;
return VCHIQ_SUCCESS;
}
header = NULL;
}
DEBUG_TRACE(SERVICE_CALLBACK_LINE);
return add_completion(instance, reason, header, user_service,
bulk_userdata);
}
/****************************************************************************
*
* user_service_free
*
***************************************************************************/
static void
user_service_free(void *userdata)
{
kfree(userdata);
}
/****************************************************************************
*
* close_delivered
*
***************************************************************************/
static void close_delivered(USER_SERVICE_T *user_service)
{
vchiq_log_info(vchiq_arm_log_level,
"close_delivered(handle=%x)",
user_service->service->handle);
if (user_service->close_pending) {
/* Allow the underlying service to be culled */
unlock_service(user_service->service);
/* Wake the user-thread blocked in close_ or remove_service */
up(&user_service->close_event);
user_service->close_pending = 0;
}
}
/****************************************************************************
*
* vchiq_ioctl
*
***************************************************************************/
static long
vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
VCHIQ_INSTANCE_T instance = file->private_data;
VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
VCHIQ_SERVICE_T *service = NULL;
long ret = 0;
int i, rc;
DEBUG_INITIALISE(g_state.local)
vchiq_log_trace(vchiq_arm_log_level,
"vchiq_ioctl - instance %x, cmd %s, arg %lx",
(unsigned int)instance,
((_IOC_TYPE(cmd) == VCHIQ_IOC_MAGIC) &&
(_IOC_NR(cmd) <= VCHIQ_IOC_MAX)) ?
ioctl_names[_IOC_NR(cmd)] : "<invalid>", arg);
switch (cmd) {
case VCHIQ_IOC_SHUTDOWN:
if (!instance->connected)
break;
/* Remove all services */
i = 0;
while ((service = next_service_by_instance(instance->state,
instance, &i)) != NULL) {
status = vchiq_remove_service(service->handle);
unlock_service(service);
if (status != VCHIQ_SUCCESS)
break;
}
service = NULL;
if (status == VCHIQ_SUCCESS) {
/* Wake the completion thread and ask it to exit */
instance->closing = 1;
up(&instance->insert_event);
}
break;
case VCHIQ_IOC_CONNECT:
if (instance->connected) {
ret = -EINVAL;
break;
}
rc = mutex_lock_interruptible(&instance->state->mutex);
if (rc != 0) {
vchiq_log_error(vchiq_arm_log_level,
"vchiq: connect: could not lock mutex for "
"state %d: %d",
instance->state->id, rc);
ret = -EINTR;
break;
}
status = vchiq_connect_internal(instance->state, instance);
mutex_unlock(&instance->state->mutex);
if (status == VCHIQ_SUCCESS)
instance->connected = 1;
else
vchiq_log_error(vchiq_arm_log_level,
"vchiq: could not connect: %d", status);
break;
case VCHIQ_IOC_CREATE_SERVICE: {
VCHIQ_CREATE_SERVICE_T args;
USER_SERVICE_T *user_service = NULL;
void *userdata;
int srvstate;
if (copy_from_user
(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
user_service = kmalloc(sizeof(USER_SERVICE_T), GFP_KERNEL);
if (!user_service) {
ret = -ENOMEM;
break;
}
if (args.is_open) {
if (!instance->connected) {
ret = -ENOTCONN;
kfree(user_service);
break;
}
srvstate = VCHIQ_SRVSTATE_OPENING;
} else {
srvstate =
instance->connected ?
VCHIQ_SRVSTATE_LISTENING :
VCHIQ_SRVSTATE_HIDDEN;
}
userdata = args.params.userdata;
args.params.callback = service_callback;
args.params.userdata = user_service;
service = vchiq_add_service_internal(
instance->state,
&args.params, srvstate,
instance, user_service_free);
if (service != NULL) {
user_service->service = service;
user_service->userdata = userdata;
user_service->instance = instance;
user_service->is_vchi = (args.is_vchi != 0);
user_service->dequeue_pending = 0;
user_service->close_pending = 0;
user_service->message_available_pos =
instance->completion_remove - 1;
user_service->msg_insert = 0;
user_service->msg_remove = 0;
sema_init(&user_service->insert_event, 0);
sema_init(&user_service->remove_event, 0);
sema_init(&user_service->close_event, 0);
if (args.is_open) {
status = vchiq_open_service_internal
(service, instance->pid);
if (status != VCHIQ_SUCCESS) {
vchiq_remove_service(service->handle);
service = NULL;
ret = (status == VCHIQ_RETRY) ?
-EINTR : -EIO;
break;
}
}
if (copy_to_user((void __user *)
&(((VCHIQ_CREATE_SERVICE_T __user *)
arg)->handle),
(const void *)&service->handle,
sizeof(service->handle)) != 0) {
ret = -EFAULT;
vchiq_remove_service(service->handle);
}
service = NULL;
} else {
ret = -EEXIST;
kfree(user_service);
}
} break;
case VCHIQ_IOC_CLOSE_SERVICE: {
VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
service = find_service_for_instance(instance, handle);
if (service != NULL) {
USER_SERVICE_T *user_service =
(USER_SERVICE_T *)service->base.userdata;
/* close_pending is false on first entry, and when the
wait in vchiq_close_service has been interrupted. */
if (!user_service->close_pending) {
status = vchiq_close_service(service->handle);
if (status != VCHIQ_SUCCESS)
break;
}
/* close_pending is true once the underlying service
has been closed until the client library calls the
CLOSE_DELIVERED ioctl, signalling close_event. */
if (user_service->close_pending &&
down_interruptible(&user_service->close_event))
status = VCHIQ_RETRY;
}
else
ret = -EINVAL;
} break;
case VCHIQ_IOC_REMOVE_SERVICE: {
VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
service = find_service_for_instance(instance, handle);
if (service != NULL) {
USER_SERVICE_T *user_service =
(USER_SERVICE_T *)service->base.userdata;
/* close_pending is false on first entry, and when the
wait in vchiq_close_service has been interrupted. */
if (!user_service->close_pending) {
status = vchiq_remove_service(service->handle);
if (status != VCHIQ_SUCCESS)
break;
}
/* close_pending is true once the underlying service
has been closed until the client library calls the
CLOSE_DELIVERED ioctl, signalling close_event. */
if (user_service->close_pending &&
down_interruptible(&user_service->close_event))
status = VCHIQ_RETRY;
}
else
ret = -EINVAL;
} break;
case VCHIQ_IOC_USE_SERVICE:
case VCHIQ_IOC_RELEASE_SERVICE: {
VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
service = find_service_for_instance(instance, handle);
if (service != NULL) {
status = (cmd == VCHIQ_IOC_USE_SERVICE) ?
vchiq_use_service_internal(service) :
vchiq_release_service_internal(service);
if (status != VCHIQ_SUCCESS) {
vchiq_log_error(vchiq_susp_log_level,
"%s: cmd %s returned error %d for "
"service %c%c%c%c:%03d",
__func__,
(cmd == VCHIQ_IOC_USE_SERVICE) ?
"VCHIQ_IOC_USE_SERVICE" :
"VCHIQ_IOC_RELEASE_SERVICE",
status,
VCHIQ_FOURCC_AS_4CHARS(
service->base.fourcc),
service->client_id);
ret = -EINVAL;
}
} else
ret = -EINVAL;
} break;
case VCHIQ_IOC_QUEUE_MESSAGE: {
VCHIQ_QUEUE_MESSAGE_T args;
if (copy_from_user
(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
service = find_service_for_instance(instance, args.handle);
if ((service != NULL) && (args.count <= MAX_ELEMENTS)) {
/* Copy elements into kernel space */
VCHIQ_ELEMENT_T elements[MAX_ELEMENTS];
if (copy_from_user(elements, args.elements,
args.count * sizeof(VCHIQ_ELEMENT_T)) == 0)
status = vchiq_queue_message
(args.handle,
elements, args.count);
else
ret = -EFAULT;
} else {
ret = -EINVAL;
}
} break;
case VCHIQ_IOC_QUEUE_BULK_TRANSMIT:
case VCHIQ_IOC_QUEUE_BULK_RECEIVE: {
VCHIQ_QUEUE_BULK_TRANSFER_T args;
struct bulk_waiter_node *waiter = NULL;
VCHIQ_BULK_DIR_T dir =
(cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT) ?
VCHIQ_BULK_TRANSMIT : VCHIQ_BULK_RECEIVE;
if (copy_from_user
(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
service = find_service_for_instance(instance, args.handle);
if (!service) {
ret = -EINVAL;
break;
}
if (args.mode == VCHIQ_BULK_MODE_BLOCKING) {
waiter = kzalloc(sizeof(struct bulk_waiter_node),
GFP_KERNEL);
if (!waiter) {
ret = -ENOMEM;
break;
}
args.userdata = &waiter->bulk_waiter;
} else if (args.mode == VCHIQ_BULK_MODE_WAITING) {
struct list_head *pos;
mutex_lock(&instance->bulk_waiter_list_mutex);
list_for_each(pos, &instance->bulk_waiter_list) {
if (list_entry(pos, struct bulk_waiter_node,
list)->pid == current->pid) {
waiter = list_entry(pos,
struct bulk_waiter_node,
list);
list_del(pos);
break;
}
}
mutex_unlock(&instance->bulk_waiter_list_mutex);
if (!waiter) {
vchiq_log_error(vchiq_arm_log_level,
"no bulk_waiter found for pid %d",
current->pid);
ret = -ESRCH;
break;
}
vchiq_log_info(vchiq_arm_log_level,
"found bulk_waiter %x for pid %d",
(unsigned int)waiter, current->pid);
args.userdata = &waiter->bulk_waiter;
}
status = vchiq_bulk_transfer
(args.handle,
VCHI_MEM_HANDLE_INVALID,
args.data, args.size,
args.userdata, args.mode,
dir);
if (!waiter)
break;
if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
!waiter->bulk_waiter.bulk) {
if (waiter->bulk_waiter.bulk) {
/* Cancel the signal when the transfer
** completes. */
spin_lock(&bulk_waiter_spinlock);
waiter->bulk_waiter.bulk->userdata = NULL;
spin_unlock(&bulk_waiter_spinlock);
}
kfree(waiter);
} else {
const VCHIQ_BULK_MODE_T mode_waiting =
VCHIQ_BULK_MODE_WAITING;
waiter->pid = current->pid;
mutex_lock(&instance->bulk_waiter_list_mutex);
list_add(&waiter->list, &instance->bulk_waiter_list);
mutex_unlock(&instance->bulk_waiter_list_mutex);
vchiq_log_info(vchiq_arm_log_level,
"saved bulk_waiter %x for pid %d",
(unsigned int)waiter, current->pid);
if (copy_to_user((void __user *)
&(((VCHIQ_QUEUE_BULK_TRANSFER_T __user *)
arg)->mode),
(const void *)&mode_waiting,
sizeof(mode_waiting)) != 0)
ret = -EFAULT;
}
} break;
case VCHIQ_IOC_AWAIT_COMPLETION: {
VCHIQ_AWAIT_COMPLETION_T args;
DEBUG_TRACE(AWAIT_COMPLETION_LINE);
if (!instance->connected) {
ret = -ENOTCONN;
break;
}
if (copy_from_user(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
mutex_lock(&instance->completion_mutex);
DEBUG_TRACE(AWAIT_COMPLETION_LINE);
while ((instance->completion_remove ==
instance->completion_insert)
&& !instance->closing) {
int rc;
DEBUG_TRACE(AWAIT_COMPLETION_LINE);
mutex_unlock(&instance->completion_mutex);
rc = down_interruptible(&instance->insert_event);
mutex_lock(&instance->completion_mutex);
if (rc != 0) {
DEBUG_TRACE(AWAIT_COMPLETION_LINE);
vchiq_log_info(vchiq_arm_log_level,
"AWAIT_COMPLETION interrupted");
ret = -EINTR;
break;
}
}
DEBUG_TRACE(AWAIT_COMPLETION_LINE);
/* A read memory barrier is needed to stop prefetch of a stale
** completion record
*/
rmb();
if (ret == 0) {
int msgbufcount = args.msgbufcount;
for (ret = 0; ret < args.count; ret++) {
VCHIQ_COMPLETION_DATA_T *completion;
VCHIQ_SERVICE_T *service;
USER_SERVICE_T *user_service;
VCHIQ_HEADER_T *header;
if (instance->completion_remove ==
instance->completion_insert)
break;
completion = &instance->completions[
instance->completion_remove &
(MAX_COMPLETIONS - 1)];
service = completion->service_userdata;
user_service = service->base.userdata;
completion->service_userdata =
user_service->userdata;
header = completion->header;
if (header) {
void __user *msgbuf;
int msglen;
msglen = header->size +
sizeof(VCHIQ_HEADER_T);
/* This must be a VCHIQ-style service */
if (args.msgbufsize < msglen) {
vchiq_log_error(
vchiq_arm_log_level,
"header %x: msgbufsize"
" %x < msglen %x",
(unsigned int)header,
args.msgbufsize,
msglen);
WARN(1, "invalid message "
"size\n");
if (ret == 0)
ret = -EMSGSIZE;
break;
}
if (msgbufcount <= 0)
/* Stall here for lack of a
** buffer for the message. */
break;
/* Get the pointer from user space */
msgbufcount--;
if (copy_from_user(&msgbuf,
(const void __user *)
&args.msgbufs[msgbufcount],
sizeof(msgbuf)) != 0) {
if (ret == 0)
ret = -EFAULT;
break;
}
/* Copy the message to user space */
if (copy_to_user(msgbuf, header,
msglen) != 0) {
if (ret == 0)
ret = -EFAULT;
break;
}
/* Now it has been copied, the message
** can be released. */
vchiq_release_message(service->handle,
header);
/* The completion must point to the
** msgbuf. */
completion->header = msgbuf;
}
if ((completion->reason ==
VCHIQ_SERVICE_CLOSED) &&
!instance->use_close_delivered)
unlock_service(service);
if (copy_to_user((void __user *)(
(size_t)args.buf +
ret * sizeof(VCHIQ_COMPLETION_DATA_T)),
completion,
sizeof(VCHIQ_COMPLETION_DATA_T)) != 0) {
if (ret == 0)
ret = -EFAULT;
break;
}
instance->completion_remove++;
}
if (msgbufcount != args.msgbufcount) {
if (copy_to_user((void __user *)
&((VCHIQ_AWAIT_COMPLETION_T *)arg)->
msgbufcount,
&msgbufcount,
sizeof(msgbufcount)) != 0) {
ret = -EFAULT;
}
}
}
if (ret != 0)
up(&instance->remove_event);
mutex_unlock(&instance->completion_mutex);
DEBUG_TRACE(AWAIT_COMPLETION_LINE);
} break;
case VCHIQ_IOC_DEQUEUE_MESSAGE: {
VCHIQ_DEQUEUE_MESSAGE_T args;
USER_SERVICE_T *user_service;
VCHIQ_HEADER_T *header;
DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
if (copy_from_user
(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
service = find_service_for_instance(instance, args.handle);
if (!service) {
ret = -EINVAL;
break;
}
user_service = (USER_SERVICE_T *)service->base.userdata;
if (user_service->is_vchi == 0) {
ret = -EINVAL;
break;
}
spin_lock(&msg_queue_spinlock);
if (user_service->msg_remove == user_service->msg_insert) {
if (!args.blocking) {
spin_unlock(&msg_queue_spinlock);
DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
ret = -EWOULDBLOCK;
break;
}
user_service->dequeue_pending = 1;
do {
spin_unlock(&msg_queue_spinlock);
DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
if (down_interruptible(
&user_service->insert_event) != 0) {
vchiq_log_info(vchiq_arm_log_level,
"DEQUEUE_MESSAGE interrupted");
ret = -EINTR;
break;
}
spin_lock(&msg_queue_spinlock);
} while (user_service->msg_remove ==
user_service->msg_insert);
if (ret)
break;
}
BUG_ON((int)(user_service->msg_insert -
user_service->msg_remove) < 0);
header = user_service->msg_queue[user_service->msg_remove &
(MSG_QUEUE_SIZE - 1)];
user_service->msg_remove++;
spin_unlock(&msg_queue_spinlock);
up(&user_service->remove_event);
if (header == NULL)
ret = -ENOTCONN;
else if (header->size <= args.bufsize) {
/* Copy to user space if msgbuf is not NULL */
if ((args.buf == NULL) ||
(copy_to_user((void __user *)args.buf,
header->data,
header->size) == 0)) {
ret = header->size;
vchiq_release_message(
service->handle,
header);
} else
ret = -EFAULT;
} else {
vchiq_log_error(vchiq_arm_log_level,
"header %x: bufsize %x < size %x",
(unsigned int)header, args.bufsize,
header->size);
WARN(1, "invalid size\n");
ret = -EMSGSIZE;
}
DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
} break;
case VCHIQ_IOC_GET_CLIENT_ID: {
VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
ret = vchiq_get_client_id(handle);
} break;
case VCHIQ_IOC_GET_CONFIG: {
VCHIQ_GET_CONFIG_T args;
VCHIQ_CONFIG_T config;
if (copy_from_user(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
if (args.config_size > sizeof(config)) {
ret = -EINVAL;
break;
}
status = vchiq_get_config(instance, args.config_size, &config);
if (status == VCHIQ_SUCCESS) {
if (copy_to_user((void __user *)args.pconfig,
&config, args.config_size) != 0) {
ret = -EFAULT;
break;
}
}
} break;
case VCHIQ_IOC_SET_SERVICE_OPTION: {
VCHIQ_SET_SERVICE_OPTION_T args;
if (copy_from_user(
&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
service = find_service_for_instance(instance, args.handle);
if (!service) {
ret = -EINVAL;
break;
}
status = vchiq_set_service_option(
args.handle, args.option, args.value);
} break;
case VCHIQ_IOC_DUMP_PHYS_MEM: {
VCHIQ_DUMP_MEM_T args;
if (copy_from_user
(&args, (const void __user *)arg,
sizeof(args)) != 0) {
ret = -EFAULT;
break;
}
dump_phys_mem(args.virt_addr, args.num_bytes);
} break;
case VCHIQ_IOC_LIB_VERSION: {
unsigned int lib_version = (unsigned int)arg;
if (lib_version < VCHIQ_VERSION_MIN)
ret = -EINVAL;
else if (lib_version >= VCHIQ_VERSION_CLOSE_DELIVERED)
instance->use_close_delivered = 1;
} break;
case VCHIQ_IOC_CLOSE_DELIVERED: {
VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
service = find_closed_service_for_instance(instance, handle);
if (service != NULL) {
USER_SERVICE_T *user_service =
(USER_SERVICE_T *)service->base.userdata;
close_delivered(user_service);
}
else
ret = -EINVAL;
} break;
default:
ret = -ENOTTY;
break;
}
if (service)
unlock_service(service);
if (ret == 0) {
if (status == VCHIQ_ERROR)
ret = -EIO;
else if (status == VCHIQ_RETRY)
ret = -EINTR;
}
if ((status == VCHIQ_SUCCESS) && (ret < 0) && (ret != -EINTR) &&
(ret != -EWOULDBLOCK))
vchiq_log_info(vchiq_arm_log_level,
" ioctl instance %lx, cmd %s -> status %d, %ld",
(unsigned long)instance,
(_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
ioctl_names[_IOC_NR(cmd)] :
"<invalid>",
status, ret);
else
vchiq_log_trace(vchiq_arm_log_level,
" ioctl instance %lx, cmd %s -> status %d, %ld",
(unsigned long)instance,
(_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
ioctl_names[_IOC_NR(cmd)] :
"<invalid>",
status, ret);
return ret;
}
/****************************************************************************
*
* vchiq_open
*
***************************************************************************/
static int
vchiq_open(struct inode *inode, struct file *file)
{
int dev = iminor(inode) & 0x0f;
vchiq_log_info(vchiq_arm_log_level, "vchiq_open");
switch (dev) {
case VCHIQ_MINOR: {
int ret;
VCHIQ_STATE_T *state = vchiq_get_state();
VCHIQ_INSTANCE_T instance;
if (!state) {
vchiq_log_error(vchiq_arm_log_level,
"vchiq has no connection to VideoCore");
return -ENOTCONN;
}
instance = kzalloc(sizeof(*instance), GFP_KERNEL);
if (!instance)
return -ENOMEM;
instance->state = state;
instance->pid = current->tgid;
ret = vchiq_debugfs_add_instance(instance);
if (ret != 0) {
kfree(instance);
return ret;
}
sema_init(&instance->insert_event, 0);
sema_init(&instance->remove_event, 0);
mutex_init(&instance->completion_mutex);
mutex_init(&instance->bulk_waiter_list_mutex);
INIT_LIST_HEAD(&instance->bulk_waiter_list);
file->private_data = instance;
} break;
default:
vchiq_log_error(vchiq_arm_log_level,
"Unknown minor device: %d", dev);
return -ENXIO;
}
return 0;
}
/****************************************************************************
*
* vchiq_release
*
***************************************************************************/
static int
vchiq_release(struct inode *inode, struct file *file)
{
int dev = iminor(inode) & 0x0f;
int ret = 0;
switch (dev) {
case VCHIQ_MINOR: {
VCHIQ_INSTANCE_T instance = file->private_data;
VCHIQ_STATE_T *state = vchiq_get_state();
VCHIQ_SERVICE_T *service;
int i;
vchiq_log_info(vchiq_arm_log_level,
"vchiq_release: instance=%lx",
(unsigned long)instance);
if (!state) {
ret = -EPERM;
goto out;
}
/* Ensure videocore is awake to allow termination. */
vchiq_use_internal(instance->state, NULL,
USE_TYPE_VCHIQ);
mutex_lock(&instance->completion_mutex);
/* Wake the completion thread and ask it to exit */
instance->closing = 1;
up(&instance->insert_event);
mutex_unlock(&instance->completion_mutex);
/* Wake the slot handler if the completion queue is full. */
up(&instance->remove_event);
/* Mark all services for termination... */
i = 0;
while ((service = next_service_by_instance(state, instance,
&i)) != NULL) {
USER_SERVICE_T *user_service = service->base.userdata;
/* Wake the slot handler if the msg queue is full. */
up(&user_service->remove_event);
vchiq_terminate_service_internal(service);
unlock_service(service);
}
/* ...and wait for them to die */
i = 0;
while ((service = next_service_by_instance(state, instance, &i))
!= NULL) {
USER_SERVICE_T *user_service = service->base.userdata;
down(&service->remove_event);
BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE);
spin_lock(&msg_queue_spinlock);
while (user_service->msg_remove !=
user_service->msg_insert) {
VCHIQ_HEADER_T *header = user_service->
msg_queue[user_service->msg_remove &
(MSG_QUEUE_SIZE - 1)];
user_service->msg_remove++;
spin_unlock(&msg_queue_spinlock);
if (header)
vchiq_release_message(
service->handle,
header);
spin_lock(&msg_queue_spinlock);
}
spin_unlock(&msg_queue_spinlock);
unlock_service(service);
}
/* Release any closed services */
while (instance->completion_remove !=
instance->completion_insert) {
VCHIQ_COMPLETION_DATA_T *completion;
VCHIQ_SERVICE_T *service;
completion = &instance->completions[
instance->completion_remove &
(MAX_COMPLETIONS - 1)];
service = completion->service_userdata;
if (completion->reason == VCHIQ_SERVICE_CLOSED)
{
USER_SERVICE_T *user_service =
service->base.userdata;
/* Wake any blocked user-thread */
if (instance->use_close_delivered)
up(&user_service->close_event);
unlock_service(service);
}
instance->completion_remove++;
}
/* Release the PEER service count. */
vchiq_release_internal(instance->state, NULL);
{
struct list_head *pos, *next;
list_for_each_safe(pos, next,
&instance->bulk_waiter_list) {
struct bulk_waiter_node *waiter;
waiter = list_entry(pos,
struct bulk_waiter_node,
list);
list_del(pos);
vchiq_log_info(vchiq_arm_log_level,
"bulk_waiter - cleaned up %x "
"for pid %d",
(unsigned int)waiter, waiter->pid);
kfree(waiter);
}
}
vchiq_debugfs_remove_instance(instance);
kfree(instance);
file->private_data = NULL;
} break;
default:
vchiq_log_error(vchiq_arm_log_level,
"Unknown minor device: %d", dev);
ret = -ENXIO;
}
out:
return ret;
}
/****************************************************************************
*
* vchiq_dump
*
***************************************************************************/
void
vchiq_dump(void *dump_context, const char *str, int len)
{
DUMP_CONTEXT_T *context = (DUMP_CONTEXT_T *)dump_context;
if (context->actual < context->space) {
int copy_bytes;
if (context->offset > 0) {
int skip_bytes = min(len, (int)context->offset);
str += skip_bytes;
len -= skip_bytes;
context->offset -= skip_bytes;
if (context->offset > 0)
return;
}
copy_bytes = min(len, (int)(context->space - context->actual));
if (copy_bytes == 0)
return;
if (copy_to_user(context->buf + context->actual, str,
copy_bytes)) {
context->actual = -EFAULT;
return;
}
context->actual += copy_bytes;
len -= copy_bytes;
/* If the terminating NUL is included in the length, then it
** marks the end of a line and should be replaced with a
** newline character. */
if ((len == 0) && (str[copy_bytes - 1] == '\0')) {
char cr = '\n';
if (copy_to_user(context->buf + context->actual - 1,
&cr, 1))
context->actual = -EFAULT;
}
}
}
/****************************************************************************
*
* vchiq_dump_platform_instances
*
***************************************************************************/
void
vchiq_dump_platform_instances(void *dump_context)
{
VCHIQ_STATE_T *state = vchiq_get_state();
char buf[80];
int len;
int i;
/* There is no list of instances, so instead scan all services,
marking those that have been dumped. */
for (i = 0; i < state->unused_service; i++) {
VCHIQ_SERVICE_T *service = state->services[i];
VCHIQ_INSTANCE_T instance;
if (service && (service->base.callback == service_callback)) {
instance = service->instance;
if (instance)
instance->mark = 0;
}
}
for (i = 0; i < state->unused_service; i++) {
VCHIQ_SERVICE_T *service = state->services[i];
VCHIQ_INSTANCE_T instance;
if (service && (service->base.callback == service_callback)) {
instance = service->instance;
if (instance && !instance->mark) {
len = snprintf(buf, sizeof(buf),
"Instance %x: pid %d,%s completions "
"%d/%d",
(unsigned int)instance, instance->pid,
instance->connected ? " connected, " :
"",
instance->completion_insert -
instance->completion_remove,
MAX_COMPLETIONS);
vchiq_dump(dump_context, buf, len + 1);
instance->mark = 1;
}
}
}
}
/****************************************************************************
*
* vchiq_dump_platform_service_state
*
***************************************************************************/
void
vchiq_dump_platform_service_state(void *dump_context, VCHIQ_SERVICE_T *service)
{
USER_SERVICE_T *user_service = (USER_SERVICE_T *)service->base.userdata;
char buf[80];
int len;
len = snprintf(buf, sizeof(buf), " instance %x",
(unsigned int)service->instance);
if ((service->base.callback == service_callback) &&
user_service->is_vchi) {
len += snprintf(buf + len, sizeof(buf) - len,
", %d/%d messages",
user_service->msg_insert - user_service->msg_remove,
MSG_QUEUE_SIZE);
if (user_service->dequeue_pending)
len += snprintf(buf + len, sizeof(buf) - len,
" (dequeue pending)");
}
vchiq_dump(dump_context, buf, len + 1);
}
/****************************************************************************
*
* dump_phys_mem
*
***************************************************************************/
static void
dump_phys_mem(void *virt_addr, uint32_t num_bytes)
{
int rc;
uint8_t *end_virt_addr = virt_addr + num_bytes;
int num_pages;
int offset;
int end_offset;
int page_idx;
int prev_idx;
struct page *page;
struct page **pages;
uint8_t *kmapped_virt_ptr;
/* Align virt_addr and end_virt_addr to 16-byte boundaries. */
virt_addr = (void *)((unsigned long)virt_addr & ~0x0fuL);
end_virt_addr = (void *)(((unsigned long)end_virt_addr + 15uL) &
~0x0fuL);
offset = (int)(long)virt_addr & (PAGE_SIZE - 1);
end_offset = (int)(long)end_virt_addr & (PAGE_SIZE - 1);
num_pages = (offset + num_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
pages = kmalloc(sizeof(struct page *) * num_pages, GFP_KERNEL);
if (pages == NULL) {
vchiq_log_error(vchiq_arm_log_level,
"Unable to allocation memory for %d pages\n",
num_pages);
return;
}
down_read(&current->mm->mmap_sem);
rc = get_user_pages(current, /* task */
current->mm, /* mm */
(unsigned long)virt_addr, /* start */
num_pages, /* len */
0, /* write */
0, /* force */
pages, /* pages (array of page pointers) */
NULL); /* vmas */
up_read(&current->mm->mmap_sem);
prev_idx = -1;
page = NULL;
while (offset < end_offset) {
int page_offset = offset % PAGE_SIZE;
page_idx = offset / PAGE_SIZE;
if (page_idx != prev_idx) {
if (page != NULL)
kunmap(page);
page = pages[page_idx];
kmapped_virt_ptr = kmap(page);
prev_idx = page_idx;
}
if (vchiq_arm_log_level >= VCHIQ_LOG_TRACE)
vchiq_log_dump_mem("ph",
(uint32_t)(unsigned long)&kmapped_virt_ptr[
page_offset],
&kmapped_virt_ptr[page_offset], 16);
offset += 16;
}
if (page != NULL)
kunmap(page);
for (page_idx = 0; page_idx < num_pages; page_idx++)
page_cache_release(pages[page_idx]);
kfree(pages);
}
/****************************************************************************
*
* vchiq_read
*
***************************************************************************/
static ssize_t
vchiq_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
DUMP_CONTEXT_T context;
context.buf = buf;
context.actual = 0;
context.space = count;
context.offset = *ppos;
vchiq_dump_state(&context, &g_state);
*ppos += context.actual;
return context.actual;
}
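/*
 * Illustrative sketch (not part of the driver): vchiq_read exposes the state
 * dump built up via vchiq_dump() to userspace. Assuming udev has created
 * /dev/vchiq for the character device registered below, a minimal reader
 * might look like this:
 *
 *   #include <fcntl.h>
 *   #include <stdio.h>
 *   #include <unistd.h>
 *
 *   int main(void)
 *   {
 *       char buf[4096];
 *       ssize_t n;
 *       int fd = open("/dev/vchiq", O_RDONLY);
 *       if (fd < 0)
 *           return 1;
 *       while ((n = read(fd, buf, sizeof(buf))) > 0)
 *           fwrite(buf, 1, n, stdout);
 *       close(fd);
 *       return 0;
 *   }
 */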
VCHIQ_STATE_T *
vchiq_get_state(void)
{
if (g_state.remote == NULL)
printk(KERN_ERR "%s: g_state.remote == NULL\n", __func__);
else if (g_state.remote->initialised != 1)
printk(KERN_NOTICE "%s: g_state.remote->initialised != 1 (%d)\n",
__func__, g_state.remote->initialised);
return ((g_state.remote != NULL) &&
(g_state.remote->initialised == 1)) ? &g_state : NULL;
}
static const struct file_operations
vchiq_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = vchiq_ioctl,
.open = vchiq_open,
.release = vchiq_release,
.read = vchiq_read
};
/*
* Autosuspend related functionality
*/
int
vchiq_videocore_wanted(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
if (!arm_state)
/* autosuspend not supported - always return wanted */
return 1;
else if (arm_state->blocked_count)
return 1;
else if (!arm_state->videocore_use_count) {
/* usage count zero - check for override unless we're forcing */
if (arm_state->resume_blocked)
return 0;
else
return vchiq_platform_videocore_wanted(state);
} else {
/* non-zero usage count - videocore still required */
return 1;
}
}
static VCHIQ_STATUS_T
vchiq_keepalive_vchiq_callback(VCHIQ_REASON_T reason,
VCHIQ_HEADER_T *header,
VCHIQ_SERVICE_HANDLE_T service_user,
void *bulk_user)
{
vchiq_log_error(vchiq_susp_log_level,
"%s callback reason %d", __func__, reason);
return VCHIQ_SUCCESS;
}
static int
vchiq_keepalive_thread_func(void *v)
{
VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T status;
VCHIQ_INSTANCE_T instance;
VCHIQ_SERVICE_HANDLE_T ka_handle;
VCHIQ_SERVICE_PARAMS_T params = {
.fourcc = VCHIQ_MAKE_FOURCC('K', 'E', 'E', 'P'),
.callback = vchiq_keepalive_vchiq_callback,
.version = KEEPALIVE_VER,
.version_min = KEEPALIVE_VER_MIN
};
status = vchiq_initialise(&instance);
if (status != VCHIQ_SUCCESS) {
vchiq_log_error(vchiq_susp_log_level,
"%s vchiq_initialise failed %d", __func__, status);
goto exit;
}
status = vchiq_connect(instance);
if (status != VCHIQ_SUCCESS) {
vchiq_log_error(vchiq_susp_log_level,
"%s vchiq_connect failed %d", __func__, status);
goto shutdown;
}
status = vchiq_add_service(instance, &params, &ka_handle);
if (status != VCHIQ_SUCCESS) {
vchiq_log_error(vchiq_susp_log_level,
"%s vchiq_open_service failed %d", __func__, status);
goto shutdown;
}
while (1) {
long rc = 0, uc = 0;
if (wait_for_completion_interruptible(&arm_state->ka_evt)
!= 0) {
vchiq_log_error(vchiq_susp_log_level,
"%s interrupted", __func__);
flush_signals(current);
continue;
}
/* read and clear counters. Do release_count then use_count to
* prevent getting more releases than uses */
rc = atomic_xchg(&arm_state->ka_release_count, 0);
uc = atomic_xchg(&arm_state->ka_use_count, 0);
/* Call use/release service the requisite number of times.
* Process use before release so use counts don't go negative */
while (uc--) {
atomic_inc(&arm_state->ka_use_ack_count);
status = vchiq_use_service(ka_handle);
if (status != VCHIQ_SUCCESS) {
vchiq_log_error(vchiq_susp_log_level,
"%s vchiq_use_service error %d",
__func__, status);
}
}
while (rc--) {
status = vchiq_release_service(ka_handle);
if (status != VCHIQ_SUCCESS) {
vchiq_log_error(vchiq_susp_log_level,
"%s vchiq_release_service error %d",
__func__, status);
}
}
}
shutdown:
vchiq_shutdown(instance);
exit:
return 0;
}
VCHIQ_STATUS_T
vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state)
{
VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
if (arm_state) {
rwlock_init(&arm_state->susp_res_lock);
init_completion(&arm_state->ka_evt);
atomic_set(&arm_state->ka_use_count, 0);
atomic_set(&arm_state->ka_use_ack_count, 0);
atomic_set(&arm_state->ka_release_count, 0);
init_completion(&arm_state->vc_suspend_complete);
init_completion(&arm_state->vc_resume_complete);
/* Initialise to 'done' state. We only want to block on resume
* completion while videocore is suspended. */
set_resume_state(arm_state, VC_RESUME_RESUMED);
init_completion(&arm_state->resume_blocker);
/* Initialise to 'done' state. We only want to block on this
* completion while resume is blocked */
complete_all(&arm_state->resume_blocker);
init_completion(&arm_state->blocked_blocker);
/* Initialise to 'done' state. We only want to block on this
* completion while things are waiting on the resume blocker */
complete_all(&arm_state->blocked_blocker);
arm_state->suspend_timer_timeout = SUSPEND_TIMER_TIMEOUT_MS;
arm_state->suspend_timer_running = 0;
init_timer(&arm_state->suspend_timer);
arm_state->suspend_timer.data = (unsigned long)(state);
arm_state->suspend_timer.function = suspend_timer_callback;
arm_state->first_connect = 0;
}
return status;
}
/*
** Functions to modify the state variables;
** set_suspend_state
** set_resume_state
**
** There are more state variables than we might like, so ensure they remain in
** step. Suspend and resume state are maintained separately, since most of
** these state machines can operate independently. However, there are a few
** states where state transitions in one state machine cause a reset to the
** other state machine. In addition, there are some completion events which
** need to occur on state machine reset and end-state(s), so these are also
** dealt with in these functions.
**
** In all states we set the state variable according to the input, but in some
** cases we perform additional steps outlined below;
**
** VC_SUSPEND_IDLE - Initialise the suspend completion at the same time.
** The suspend completion is completed after any suspend
** attempt. When we reset the state machine we also reset
** the completion. This reset occurs when videocore is
** resumed, and also if we initiate suspend after a suspend
** failure.
**
** VC_SUSPEND_IN_PROGRESS - This state is considered the point of no return for
** suspend - ie from this point on we must try to suspend
** before resuming can occur. We therefore also reset the
** resume state machine to VC_RESUME_IDLE in this state.
**
** VC_SUSPEND_SUSPENDED - Suspend has completed successfully. Also call
** complete_all on the suspend completion to notify
** anything waiting for suspend to happen.
**
** VC_SUSPEND_REJECTED - Videocore rejected suspend. Videocore will also
** initiate resume, so no need to alter resume state.
** We call complete_all on the suspend completion to notify
** of suspend rejection.
**
** VC_SUSPEND_FAILED - We failed to initiate videocore suspend. We notify the
** suspend completion and reset the resume state machine.
**
** VC_RESUME_IDLE - Initialise the resume completion at the same time. The
** resume completion is in its 'done' state whenever
** videocore is running. Therefore, the VC_RESUME_IDLE state
** implies that videocore is suspended.
** Hence, any thread which needs to wait until videocore is
** running can wait on this completion - it will only block
** if videocore is suspended.
**
** VC_RESUME_RESUMED - Resume has completed successfully. Videocore is running.
** Call complete_all on the resume completion to unblock
** any threads waiting for resume. Also reset the suspend
** state machine to its idle state.
**
** VC_RESUME_FAILED - Currently unused - no mechanism to fail resume exists.
*/
void
set_suspend_state(VCHIQ_ARM_STATE_T *arm_state,
enum vc_suspend_status new_state)
{
/* set the state in all cases */
arm_state->vc_suspend_state = new_state;
/* state specific additional actions */
switch (new_state) {
case VC_SUSPEND_FORCE_CANCELED:
complete_all(&arm_state->vc_suspend_complete);
break;
case VC_SUSPEND_REJECTED:
complete_all(&arm_state->vc_suspend_complete);
break;
case VC_SUSPEND_FAILED:
complete_all(&arm_state->vc_suspend_complete);
arm_state->vc_resume_state = VC_RESUME_RESUMED;
complete_all(&arm_state->vc_resume_complete);
break;
case VC_SUSPEND_IDLE:
reinit_completion(&arm_state->vc_suspend_complete);
break;
case VC_SUSPEND_REQUESTED:
break;
case VC_SUSPEND_IN_PROGRESS:
set_resume_state(arm_state, VC_RESUME_IDLE);
break;
case VC_SUSPEND_SUSPENDED:
complete_all(&arm_state->vc_suspend_complete);
break;
default:
BUG();
break;
}
}
void
set_resume_state(VCHIQ_ARM_STATE_T *arm_state,
enum vc_resume_status new_state)
{
/* set the state in all cases */
arm_state->vc_resume_state = new_state;
/* state specific additional actions */
switch (new_state) {
case VC_RESUME_FAILED:
break;
case VC_RESUME_IDLE:
reinit_completion(&arm_state->vc_resume_complete);
break;
case VC_RESUME_REQUESTED:
break;
case VC_RESUME_IN_PROGRESS:
break;
case VC_RESUME_RESUMED:
complete_all(&arm_state->vc_resume_complete);
set_suspend_state(arm_state, VC_SUSPEND_IDLE);
break;
default:
BUG();
break;
}
}
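/*
** Illustrative sequence (commentary only, not additional code): assuming no
** failures, a normal autosuspend cycle walks the two state machines roughly
** as follows:
**
**   set_suspend_state(arm_state, VC_SUSPEND_REQUESTED);   use count hit zero
**   set_suspend_state(arm_state, VC_SUSPEND_IN_PROGRESS); slot handler acts;
**                                                         resume -> VC_RESUME_IDLE
**   set_suspend_state(arm_state, VC_SUSPEND_SUSPENDED);   suspend completion fires
**   ...
**   set_resume_state(arm_state, VC_RESUME_REQUESTED);     a service calls 'use'
**   set_resume_state(arm_state, VC_RESUME_IN_PROGRESS);   platform resume starts
**   set_resume_state(arm_state, VC_RESUME_RESUMED);       resume completion fires;
**                                                         suspend -> VC_SUSPEND_IDLE
*/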
/* should be called with the write lock held */
inline void
start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state)
{
del_timer(&arm_state->suspend_timer);
arm_state->suspend_timer.expires = jiffies +
msecs_to_jiffies(arm_state->
suspend_timer_timeout);
add_timer(&arm_state->suspend_timer);
arm_state->suspend_timer_running = 1;
}
/* should be called with the write lock held */
static inline void
stop_suspend_timer(VCHIQ_ARM_STATE_T *arm_state)
{
if (arm_state->suspend_timer_running) {
del_timer(&arm_state->suspend_timer);
arm_state->suspend_timer_running = 0;
}
}
static inline int
need_resume(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
return (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) &&
(arm_state->vc_resume_state < VC_RESUME_REQUESTED) &&
vchiq_videocore_wanted(state);
}
static int
block_resume(VCHIQ_ARM_STATE_T *arm_state)
{
int status = VCHIQ_SUCCESS;
const unsigned long timeout_val =
msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS);
int resume_count = 0;
/* Allow any threads which were blocked by the last force suspend to
* complete if they haven't already. Only give this one shot; if
* blocked_count is incremented after blocked_blocker is completed
* (which only happens when blocked_count hits 0) then those threads
* will have to wait until next time around */
if (arm_state->blocked_count) {
reinit_completion(&arm_state->blocked_blocker);
write_unlock_bh(&arm_state->susp_res_lock);
vchiq_log_info(vchiq_susp_log_level, "%s wait for previously "
"blocked clients", __func__);
if (wait_for_completion_interruptible_timeout(
&arm_state->blocked_blocker, timeout_val)
<= 0) {
vchiq_log_error(vchiq_susp_log_level, "%s wait for "
"previously blocked clients failed" , __func__);
status = VCHIQ_ERROR;
write_lock_bh(&arm_state->susp_res_lock);
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s previously blocked "
"clients resumed", __func__);
write_lock_bh(&arm_state->susp_res_lock);
}
/* We need to wait for resume to complete if it's in progress */
while (arm_state->vc_resume_state != VC_RESUME_RESUMED &&
arm_state->vc_resume_state > VC_RESUME_IDLE) {
if (resume_count > 1) {
status = VCHIQ_ERROR;
vchiq_log_error(vchiq_susp_log_level, "%s waited too "
"many times for resume" , __func__);
goto out;
}
write_unlock_bh(&arm_state->susp_res_lock);
vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
__func__);
if (wait_for_completion_interruptible_timeout(
&arm_state->vc_resume_complete, timeout_val)
<= 0) {
vchiq_log_error(vchiq_susp_log_level, "%s wait for "
"resume failed (%s)", __func__,
resume_state_names[arm_state->vc_resume_state +
VC_RESUME_NUM_OFFSET]);
status = VCHIQ_ERROR;
write_lock_bh(&arm_state->susp_res_lock);
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s resumed", __func__);
write_lock_bh(&arm_state->susp_res_lock);
resume_count++;
}
reinit_completion(&arm_state->resume_blocker);
arm_state->resume_blocked = 1;
out:
return status;
}
static inline void
unblock_resume(VCHIQ_ARM_STATE_T *arm_state)
{
complete_all(&arm_state->resume_blocker);
arm_state->resume_blocked = 0;
}
/* Initiate suspend via slot handler. Should be called with the write lock
* held */
VCHIQ_STATUS_T
vchiq_arm_vcsuspend(VCHIQ_STATE_T *state)
{
VCHIQ_STATUS_T status = VCHIQ_ERROR;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
status = VCHIQ_SUCCESS;
switch (arm_state->vc_suspend_state) {
case VC_SUSPEND_REQUESTED:
vchiq_log_info(vchiq_susp_log_level, "%s: suspend already "
"requested", __func__);
break;
case VC_SUSPEND_IN_PROGRESS:
vchiq_log_info(vchiq_susp_log_level, "%s: suspend already in "
"progress", __func__);
break;
default:
/* We don't expect to be in other states, so log but continue
* anyway */
vchiq_log_error(vchiq_susp_log_level,
"%s unexpected suspend state %s", __func__,
suspend_state_names[arm_state->vc_suspend_state +
VC_SUSPEND_NUM_OFFSET]);
/* fall through */
case VC_SUSPEND_REJECTED:
case VC_SUSPEND_FAILED:
/* Ensure any idle state actions have been run */
set_suspend_state(arm_state, VC_SUSPEND_IDLE);
/* fall through */
case VC_SUSPEND_IDLE:
vchiq_log_info(vchiq_susp_log_level,
"%s: suspending", __func__);
set_suspend_state(arm_state, VC_SUSPEND_REQUESTED);
/* kick the slot handler thread to initiate suspend */
request_poll(state, NULL, 0);
break;
}
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
return status;
}
void
vchiq_platform_check_suspend(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int susp = 0;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
write_lock_bh(&arm_state->susp_res_lock);
if (arm_state->vc_suspend_state == VC_SUSPEND_REQUESTED &&
arm_state->vc_resume_state == VC_RESUME_RESUMED) {
set_suspend_state(arm_state, VC_SUSPEND_IN_PROGRESS);
susp = 1;
}
write_unlock_bh(&arm_state->susp_res_lock);
if (susp)
vchiq_platform_suspend(state);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
return;
}
static void
output_timeout_error(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
char service_err[50] = "";
int vc_use_count = arm_state->videocore_use_count;
int active_services = state->unused_service;
int i;
if (!arm_state->videocore_use_count) {
snprintf(service_err, 50, " Videocore usecount is 0");
goto output_msg;
}
for (i = 0; i < active_services; i++) {
VCHIQ_SERVICE_T *service_ptr = state->services[i];
if (service_ptr && service_ptr->service_use_count &&
(service_ptr->srvstate != VCHIQ_SRVSTATE_FREE)) {
snprintf(service_err, 50, " %c%c%c%c(%d) service has "
"use count %d%s", VCHIQ_FOURCC_AS_4CHARS(
service_ptr->base.fourcc),
service_ptr->client_id,
service_ptr->service_use_count,
service_ptr->service_use_count ==
vc_use_count ? "" : " (+ more)");
break;
}
}
output_msg:
vchiq_log_error(vchiq_susp_log_level,
"timed out waiting for vc suspend (%d).%s",
arm_state->autosuspend_override, service_err);
}
/* Try to get videocore into suspended state, regardless of autosuspend state.
** We don't actually force suspend, since videocore may get into a bad state
** if we force suspend at a bad time. Instead, we wait for autosuspend to
** determine a good point to suspend. If this doesn't happen within 100ms we
** report failure.
**
** Returns VCHIQ_SUCCESS if videocore suspended successfully, VCHIQ_RETRY if
** videocore failed to suspend in time or VCHIQ_ERROR if interrupted.
*/
VCHIQ_STATUS_T
vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T status = VCHIQ_ERROR;
long rc = 0;
int repeat = -1;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
write_lock_bh(&arm_state->susp_res_lock);
status = block_resume(arm_state);
if (status != VCHIQ_SUCCESS)
goto unlock;
if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
/* Already suspended - just block resume and exit */
vchiq_log_info(vchiq_susp_log_level, "%s already suspended",
__func__);
status = VCHIQ_SUCCESS;
goto unlock;
} else if (arm_state->vc_suspend_state <= VC_SUSPEND_IDLE) {
/* initiate suspend immediately in the case that we're waiting
* for the timeout */
stop_suspend_timer(arm_state);
if (!vchiq_videocore_wanted(state)) {
vchiq_log_info(vchiq_susp_log_level, "%s videocore "
"idle, initiating suspend", __func__);
status = vchiq_arm_vcsuspend(state);
} else if (arm_state->autosuspend_override <
FORCE_SUSPEND_FAIL_MAX) {
vchiq_log_info(vchiq_susp_log_level, "%s letting "
"videocore go idle", __func__);
status = VCHIQ_SUCCESS;
} else {
vchiq_log_warning(vchiq_susp_log_level, "%s failed too "
"many times - attempting suspend", __func__);
status = vchiq_arm_vcsuspend(state);
}
} else {
vchiq_log_info(vchiq_susp_log_level, "%s videocore suspend "
"in progress - wait for completion", __func__);
status = VCHIQ_SUCCESS;
}
/* Wait for suspend to happen due to system idle (not forced..) */
if (status != VCHIQ_SUCCESS)
goto unblock_resume;
do {
write_unlock_bh(&arm_state->susp_res_lock);
rc = wait_for_completion_interruptible_timeout(
&arm_state->vc_suspend_complete,
msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
write_lock_bh(&arm_state->susp_res_lock);
if (rc < 0) {
vchiq_log_warning(vchiq_susp_log_level, "%s "
"interrupted waiting for suspend", __func__);
status = VCHIQ_ERROR;
goto unblock_resume;
} else if (rc == 0) {
if (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) {
/* Repeat timeout once if in progress */
if (repeat < 0) {
repeat = 1;
continue;
}
}
arm_state->autosuspend_override++;
output_timeout_error(state);
status = VCHIQ_RETRY;
goto unblock_resume;
}
} while (0 < (repeat--));
/* Check and report state in case we need to abort ARM suspend */
if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED) {
status = VCHIQ_RETRY;
vchiq_log_error(vchiq_susp_log_level,
"%s videocore suspend failed (state %s)", __func__,
suspend_state_names[arm_state->vc_suspend_state +
VC_SUSPEND_NUM_OFFSET]);
/* Reset the state only if it's still in an error state.
* Something could have already initiated another suspend. */
if (arm_state->vc_suspend_state < VC_SUSPEND_IDLE)
set_suspend_state(arm_state, VC_SUSPEND_IDLE);
goto unblock_resume;
}
/* successfully suspended - unlock and exit */
goto unlock;
unblock_resume:
/* all error states need to unblock resume before exit */
unblock_resume(arm_state);
unlock:
write_unlock_bh(&arm_state->susp_res_lock);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
return status;
}
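/*
 * Illustrative sketch (hypothetical caller, not part of this file): a
 * platform suspend hook could use vchiq_arm_force_suspend() and abort the
 * ARM suspend when videocore fails to suspend in time (VCHIQ_RETRY), then
 * unblock resume again on the way back up via vchiq_arm_allow_resume():
 *
 *   static int example_pm_suspend(VCHIQ_STATE_T *state)
 *   {
 *       VCHIQ_STATUS_T status = vchiq_arm_force_suspend(state);
 *
 *       if (status != VCHIQ_SUCCESS)
 *           return -EBUSY;    // videocore still busy - abort ARM suspend
 *       return 0;
 *   }
 *
 *   static void example_pm_resume(VCHIQ_STATE_T *state)
 *   {
 *       vchiq_arm_allow_resume(state);    // unblock and wait for videocore
 *   }
 */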
void
vchiq_check_suspend(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
write_lock_bh(&arm_state->susp_res_lock);
if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED &&
arm_state->first_connect &&
!vchiq_videocore_wanted(state)) {
vchiq_arm_vcsuspend(state);
}
write_unlock_bh(&arm_state->susp_res_lock);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
return;
}
int
vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int resume = 0;
int ret = -1;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
write_lock_bh(&arm_state->susp_res_lock);
unblock_resume(arm_state);
resume = vchiq_check_resume(state);
write_unlock_bh(&arm_state->susp_res_lock);
if (resume) {
if (wait_for_completion_interruptible(
&arm_state->vc_resume_complete) < 0) {
vchiq_log_error(vchiq_susp_log_level,
"%s interrupted", __func__);
/* failed, cannot accurately derive suspend
* state, so exit early. */
goto out;
}
}
read_lock_bh(&arm_state->susp_res_lock);
if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
vchiq_log_info(vchiq_susp_log_level,
"%s: Videocore remains suspended", __func__);
} else {
vchiq_log_info(vchiq_susp_log_level,
"%s: Videocore resumed", __func__);
ret = 0;
}
read_unlock_bh(&arm_state->susp_res_lock);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
return ret;
}
/* This function should be called with the write lock held */
int
vchiq_check_resume(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int resume = 0;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
if (need_resume(state)) {
set_resume_state(arm_state, VC_RESUME_REQUESTED);
request_poll(state, NULL, 0);
resume = 1;
}
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
return resume;
}
void
vchiq_platform_check_resume(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int res = 0;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
write_lock_bh(&arm_state->susp_res_lock);
if (arm_state->wake_address == 0) {
vchiq_log_info(vchiq_susp_log_level,
"%s: already awake", __func__);
goto unlock;
}
if (arm_state->vc_resume_state == VC_RESUME_IN_PROGRESS) {
vchiq_log_info(vchiq_susp_log_level,
"%s: already resuming", __func__);
goto unlock;
}
if (arm_state->vc_resume_state == VC_RESUME_REQUESTED) {
set_resume_state(arm_state, VC_RESUME_IN_PROGRESS);
res = 1;
} else
vchiq_log_trace(vchiq_susp_log_level,
"%s: not resuming (resume state %s)", __func__,
resume_state_names[arm_state->vc_resume_state +
VC_RESUME_NUM_OFFSET]);
unlock:
write_unlock_bh(&arm_state->susp_res_lock);
if (res)
vchiq_platform_resume(state);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
return;
}
VCHIQ_STATUS_T
vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
enum USE_TYPE_E use_type)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
char entity[16];
int *entity_uc;
int local_uc, local_entity_uc;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
if (use_type == USE_TYPE_VCHIQ) {
sprintf(entity, "VCHIQ: ");
entity_uc = &arm_state->peer_use_count;
} else if (service) {
sprintf(entity, "%c%c%c%c:%03d",
VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
service->client_id);
entity_uc = &service->service_use_count;
} else {
vchiq_log_error(vchiq_susp_log_level, "%s null service "
"ptr", __func__);
ret = VCHIQ_ERROR;
goto out;
}
write_lock_bh(&arm_state->susp_res_lock);
while (arm_state->resume_blocked) {
/* If we call 'use' while force suspend is waiting for suspend,
* then we're about to block the thread which the force is
* waiting to complete, so we're bound to just time out. In this
* case, set the suspend state such that the wait will be
* canceled, so we can complete as quickly as possible. */
if (arm_state->resume_blocked && arm_state->vc_suspend_state ==
VC_SUSPEND_IDLE) {
set_suspend_state(arm_state, VC_SUSPEND_FORCE_CANCELED);
break;
}
/* If suspend is already in progress then we need to block */
if (!try_wait_for_completion(&arm_state->resume_blocker)) {
/* Indicate that there are threads waiting on the resume
* blocker. These need to be allowed to complete before
* a _second_ call to force suspend can complete,
* otherwise low priority threads might never actually
* continue */
arm_state->blocked_count++;
write_unlock_bh(&arm_state->susp_res_lock);
vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
"blocked - waiting...", __func__, entity);
if (wait_for_completion_killable(
&arm_state->resume_blocker) != 0) {
vchiq_log_error(vchiq_susp_log_level, "%s %s "
"wait for resume blocker interrupted",
__func__, entity);
ret = VCHIQ_ERROR;
write_lock_bh(&arm_state->susp_res_lock);
arm_state->blocked_count--;
write_unlock_bh(&arm_state->susp_res_lock);
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
"unblocked", __func__, entity);
write_lock_bh(&arm_state->susp_res_lock);
if (--arm_state->blocked_count == 0)
complete_all(&arm_state->blocked_blocker);
}
}
stop_suspend_timer(arm_state);
local_uc = ++arm_state->videocore_use_count;
local_entity_uc = ++(*entity_uc);
/* If there's a pending request which hasn't yet been serviced then
* just clear it. If we're past VC_SUSPEND_REQUESTED state then
* vc_resume_complete will block until we either resume or fail to
* suspend */
if (arm_state->vc_suspend_state <= VC_SUSPEND_REQUESTED)
set_suspend_state(arm_state, VC_SUSPEND_IDLE);
if ((use_type != USE_TYPE_SERVICE_NO_RESUME) && need_resume(state)) {
set_resume_state(arm_state, VC_RESUME_REQUESTED);
vchiq_log_info(vchiq_susp_log_level,
"%s %s count %d, state count %d",
__func__, entity, local_entity_uc, local_uc);
request_poll(state, NULL, 0);
} else
vchiq_log_trace(vchiq_susp_log_level,
"%s %s count %d, state count %d",
__func__, entity, *entity_uc, local_uc);
write_unlock_bh(&arm_state->susp_res_lock);
/* Completion is in a done state when we're not suspended, so this won't
* block for the non-suspended case. */
if (!try_wait_for_completion(&arm_state->vc_resume_complete)) {
vchiq_log_info(vchiq_susp_log_level, "%s %s wait for resume",
__func__, entity);
if (wait_for_completion_killable(
&arm_state->vc_resume_complete) != 0) {
vchiq_log_error(vchiq_susp_log_level, "%s %s wait for "
"resume interrupted", __func__, entity);
ret = VCHIQ_ERROR;
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s %s resumed", __func__,
entity);
}
if (ret == VCHIQ_SUCCESS) {
VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
long ack_cnt = atomic_xchg(&arm_state->ka_use_ack_count, 0);
while (ack_cnt && (status == VCHIQ_SUCCESS)) {
/* Send the use notify to videocore */
status = vchiq_send_remote_use_active(state);
if (status == VCHIQ_SUCCESS)
ack_cnt--;
else
atomic_add(ack_cnt,
&arm_state->ka_use_ack_count);
}
}
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
return ret;
}
VCHIQ_STATUS_T
vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
char entity[16];
int *entity_uc;
int local_uc, local_entity_uc;
if (!arm_state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
if (service) {
sprintf(entity, "%c%c%c%c:%03d",
VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
service->client_id);
entity_uc = &service->service_use_count;
} else {
sprintf(entity, "PEER: ");
entity_uc = &arm_state->peer_use_count;
}
write_lock_bh(&arm_state->susp_res_lock);
if (!arm_state->videocore_use_count || !(*entity_uc)) {
/* Don't use BUG_ON - don't allow user thread to crash kernel */
WARN_ON(!arm_state->videocore_use_count);
WARN_ON(!(*entity_uc));
ret = VCHIQ_ERROR;
goto unlock;
}
local_uc = --arm_state->videocore_use_count;
local_entity_uc = --(*entity_uc);
if (!vchiq_videocore_wanted(state)) {
if (vchiq_platform_use_suspend_timer() &&
!arm_state->resume_blocked) {
/* Only use the timer if we're not trying to force
* suspend (=> resume_blocked) */
start_suspend_timer(arm_state);
} else {
vchiq_log_info(vchiq_susp_log_level,
"%s %s count %d, state count %d - suspending",
__func__, entity, *entity_uc,
arm_state->videocore_use_count);
vchiq_arm_vcsuspend(state);
}
} else
vchiq_log_trace(vchiq_susp_log_level,
"%s %s count %d, state count %d",
__func__, entity, *entity_uc,
arm_state->videocore_use_count);
unlock:
write_unlock_bh(&arm_state->susp_res_lock);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
return ret;
}
void
vchiq_on_remote_use(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
atomic_inc(&arm_state->ka_use_count);
complete(&arm_state->ka_evt);
}
void
vchiq_on_remote_release(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
atomic_inc(&arm_state->ka_release_count);
complete(&arm_state->ka_evt);
}
VCHIQ_STATUS_T
vchiq_use_service_internal(VCHIQ_SERVICE_T *service)
{
return vchiq_use_internal(service->state, service, USE_TYPE_SERVICE);
}
VCHIQ_STATUS_T
vchiq_release_service_internal(VCHIQ_SERVICE_T *service)
{
return vchiq_release_internal(service->state, service);
}
VCHIQ_DEBUGFS_NODE_T *
vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance)
{
return &instance->debugfs_node;
}
int
vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance)
{
VCHIQ_SERVICE_T *service;
int use_count = 0, i;
i = 0;
while ((service = next_service_by_instance(instance->state,
instance, &i)) != NULL) {
use_count += service->service_use_count;
unlock_service(service);
}
return use_count;
}
int
vchiq_instance_get_pid(VCHIQ_INSTANCE_T instance)
{
return instance->pid;
}
int
vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance)
{
return instance->trace;
}
void
vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace)
{
VCHIQ_SERVICE_T *service;
int i;
i = 0;
while ((service = next_service_by_instance(instance->state,
instance, &i)) != NULL) {
service->trace = trace;
unlock_service(service);
}
instance->trace = (trace != 0);
}
static void suspend_timer_callback(unsigned long context)
{
VCHIQ_STATE_T *state = (VCHIQ_STATE_T *)context;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
if (!arm_state)
goto out;
vchiq_log_info(vchiq_susp_log_level,
"%s - suspend timer expired - check suspend", __func__);
vchiq_check_suspend(state);
out:
return;
}
VCHIQ_STATUS_T
vchiq_use_service_no_resume(VCHIQ_SERVICE_HANDLE_T handle)
{
VCHIQ_STATUS_T ret = VCHIQ_ERROR;
VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
if (service) {
ret = vchiq_use_internal(service->state, service,
USE_TYPE_SERVICE_NO_RESUME);
unlock_service(service);
}
return ret;
}
VCHIQ_STATUS_T
vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle)
{
VCHIQ_STATUS_T ret = VCHIQ_ERROR;
VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
if (service) {
ret = vchiq_use_internal(service->state, service,
USE_TYPE_SERVICE);
unlock_service(service);
}
return ret;
}
VCHIQ_STATUS_T
vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle)
{
VCHIQ_STATUS_T ret = VCHIQ_ERROR;
VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
if (service) {
ret = vchiq_release_internal(service->state, service);
unlock_service(service);
}
return ret;
}
void
vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int i, j = 0;
/* Only dump 64 services */
static const int local_max_services = 64;
/* If there's more than 64 services, only dump ones with
* non-zero counts */
int only_nonzero = 0;
static const char *nz = "<-- preventing suspend";
enum vc_suspend_status vc_suspend_state;
enum vc_resume_status vc_resume_state;
int peer_count;
int vc_use_count;
int active_services;
struct service_data_struct {
int fourcc;
int clientid;
int use_count;
} service_data[local_max_services];
if (!arm_state)
return;
read_lock_bh(&arm_state->susp_res_lock);
vc_suspend_state = arm_state->vc_suspend_state;
vc_resume_state = arm_state->vc_resume_state;
peer_count = arm_state->peer_use_count;
vc_use_count = arm_state->videocore_use_count;
active_services = state->unused_service;
if (active_services > local_max_services)
only_nonzero = 1;
for (i = 0; (i < active_services) && (j < local_max_services); i++) {
VCHIQ_SERVICE_T *service_ptr = state->services[i];
if (!service_ptr)
continue;
if (only_nonzero && !service_ptr->service_use_count)
continue;
if (service_ptr->srvstate != VCHIQ_SRVSTATE_FREE) {
service_data[j].fourcc = service_ptr->base.fourcc;
service_data[j].clientid = service_ptr->client_id;
service_data[j++].use_count = service_ptr->
service_use_count;
}
}
read_unlock_bh(&arm_state->susp_res_lock);
vchiq_log_warning(vchiq_susp_log_level,
"-- Videcore suspend state: %s --",
suspend_state_names[vc_suspend_state + VC_SUSPEND_NUM_OFFSET]);
vchiq_log_warning(vchiq_susp_log_level,
"-- Videcore resume state: %s --",
resume_state_names[vc_resume_state + VC_RESUME_NUM_OFFSET]);
if (only_nonzero)
vchiq_log_warning(vchiq_susp_log_level, "Too many active "
"services (%d). Only dumping up to first %d services "
"with non-zero use-count", active_services,
local_max_services);
for (i = 0; i < j; i++) {
vchiq_log_warning(vchiq_susp_log_level,
"----- %c%c%c%c:%d service count %d %s",
VCHIQ_FOURCC_AS_4CHARS(service_data[i].fourcc),
service_data[i].clientid,
service_data[i].use_count,
service_data[i].use_count ? nz : "");
}
vchiq_log_warning(vchiq_susp_log_level,
"----- VCHIQ use count count %d", peer_count);
vchiq_log_warning(vchiq_susp_log_level,
"--- Overall vchiq instance use count %d", vc_use_count);
vchiq_dump_platform_use_state(state);
}
VCHIQ_STATUS_T
vchiq_check_service(VCHIQ_SERVICE_T *service)
{
VCHIQ_ARM_STATE_T *arm_state;
VCHIQ_STATUS_T ret = VCHIQ_ERROR;
if (!service || !service->state)
goto out;
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
arm_state = vchiq_platform_get_arm_state(service->state);
read_lock_bh(&arm_state->susp_res_lock);
if (service->service_use_count)
ret = VCHIQ_SUCCESS;
read_unlock_bh(&arm_state->susp_res_lock);
if (ret == VCHIQ_ERROR) {
vchiq_log_error(vchiq_susp_log_level,
"%s ERROR - %c%c%c%c:%d service count %d, "
"state count %d, videocore suspend state %s", __func__,
VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
service->client_id, service->service_use_count,
arm_state->videocore_use_count,
suspend_state_names[arm_state->vc_suspend_state +
VC_SUSPEND_NUM_OFFSET]);
vchiq_dump_service_use_state(service->state);
}
out:
return ret;
}
/* stub functions */
void vchiq_on_remote_use_active(VCHIQ_STATE_T *state)
{
(void)state;
}
void vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate)
{
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
vchiq_log_info(vchiq_susp_log_level, "%d: %s->%s", state->id,
get_conn_state_name(oldstate), get_conn_state_name(newstate));
if (state->conn_state == VCHIQ_CONNSTATE_CONNECTED) {
write_lock_bh(&arm_state->susp_res_lock);
if (!arm_state->first_connect) {
char threadname[10];
arm_state->first_connect = 1;
write_unlock_bh(&arm_state->susp_res_lock);
snprintf(threadname, sizeof(threadname), "VCHIQka-%d",
state->id);
arm_state->ka_thread = kthread_create(
&vchiq_keepalive_thread_func,
(void *)state,
threadname);
if (arm_state->ka_thread == NULL) {
vchiq_log_error(vchiq_susp_log_level,
"vchiq: FATAL: couldn't create thread %s",
threadname);
} else {
wake_up_process(arm_state->ka_thread);
}
} else
write_unlock_bh(&arm_state->susp_res_lock);
}
}
static int vchiq_probe(struct platform_device *pdev)
{
struct device_node *fw_node;
struct rpi_firmware *fw;
int err;
void *ptr_err;
fw_node = of_parse_phandle(pdev->dev.of_node, "firmware", 0);
/* Remove comment when booting without Device Tree is no longer supported
if (!fw_node) {
dev_err(&pdev->dev, "Missing firmware node\n");
return -ENOENT;
}
*/
fw = rpi_firmware_get(fw_node);
if (!fw)
return -EPROBE_DEFER;
platform_set_drvdata(pdev, fw);
/* create debugfs entries */
err = vchiq_debugfs_init();
if (err != 0)
goto failed_debugfs_init;
err = alloc_chrdev_region(&vchiq_devid, VCHIQ_MINOR, 1, DEVICE_NAME);
if (err != 0) {
vchiq_log_error(vchiq_arm_log_level,
"Unable to allocate device number");
goto failed_alloc_chrdev;
}
cdev_init(&vchiq_cdev, &vchiq_fops);
vchiq_cdev.owner = THIS_MODULE;
err = cdev_add(&vchiq_cdev, vchiq_devid, 1);
if (err != 0) {
vchiq_log_error(vchiq_arm_log_level,
"Unable to register device");
goto failed_cdev_add;
}
/* create sysfs entries */
vchiq_class = class_create(THIS_MODULE, DEVICE_NAME);
ptr_err = vchiq_class;
if (IS_ERR(ptr_err))
goto failed_class_create;
vchiq_dev = device_create(vchiq_class, NULL,
vchiq_devid, NULL, "vchiq");
ptr_err = vchiq_dev;
if (IS_ERR(ptr_err))
goto failed_device_create;
err = vchiq_platform_init(pdev, &g_state);
if (err != 0)
goto failed_platform_init;
vchiq_log_info(vchiq_arm_log_level,
"vchiq: initialised - version %d (min %d), device %d.%d",
VCHIQ_VERSION, VCHIQ_VERSION_MIN,
MAJOR(vchiq_devid), MINOR(vchiq_devid));
return 0;
failed_platform_init:
device_destroy(vchiq_class, vchiq_devid);
failed_device_create:
class_destroy(vchiq_class);
failed_class_create:
cdev_del(&vchiq_cdev);
err = PTR_ERR(ptr_err);
failed_cdev_add:
unregister_chrdev_region(vchiq_devid, 1);
failed_alloc_chrdev:
vchiq_debugfs_deinit();
failed_debugfs_init:
vchiq_log_warning(vchiq_arm_log_level, "could not load vchiq");
return err;
}
static int vchiq_remove(struct platform_device *pdev)
{
device_destroy(vchiq_class, vchiq_devid);
class_destroy(vchiq_class);
cdev_del(&vchiq_cdev);
unregister_chrdev_region(vchiq_devid, 1);
return 0;
}
static const struct of_device_id vchiq_of_match[] = {
{ .compatible = "brcm,bcm2835-vchiq", },
{},
};
MODULE_DEVICE_TABLE(of, vchiq_of_match);
static struct platform_driver vchiq_driver = {
.driver = {
.name = "bcm2835_vchiq",
.owner = THIS_MODULE,
.of_match_table = vchiq_of_match,
},
.probe = vchiq_probe,
.remove = vchiq_remove,
};
module_platform_driver(vchiq_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Broadcom Corporation");
/**
* Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_ARM_H
#define VCHIQ_ARM_H
#include <linux/mutex.h>
#include <linux/platform_device.h>
#include <linux/semaphore.h>
#include <linux/atomic.h>
#include "vchiq_core.h"
#include "vchiq_debugfs.h"
enum vc_suspend_status {
VC_SUSPEND_FORCE_CANCELED = -3, /* Force suspend canceled, too busy */
VC_SUSPEND_REJECTED = -2, /* Videocore rejected suspend request */
VC_SUSPEND_FAILED = -1, /* Videocore suspend failed */
VC_SUSPEND_IDLE = 0, /* VC active, no suspend actions */
VC_SUSPEND_REQUESTED, /* User has requested suspend */
VC_SUSPEND_IN_PROGRESS, /* Slot handler has recvd suspend request */
VC_SUSPEND_SUSPENDED /* Videocore suspend succeeded */
};
enum vc_resume_status {
VC_RESUME_FAILED = -1, /* Videocore resume failed */
VC_RESUME_IDLE = 0, /* VC suspended, no resume actions */
VC_RESUME_REQUESTED, /* User has requested resume */
VC_RESUME_IN_PROGRESS, /* Slot handler has received resume request */
VC_RESUME_RESUMED /* Videocore resumed successfully (active) */
};
enum USE_TYPE_E {
USE_TYPE_SERVICE,
USE_TYPE_SERVICE_NO_RESUME,
USE_TYPE_VCHIQ
};
typedef struct vchiq_arm_state_struct {
/* Keepalive-related data */
struct task_struct *ka_thread;
struct completion ka_evt;
atomic_t ka_use_count;
atomic_t ka_use_ack_count;
atomic_t ka_release_count;
struct completion vc_suspend_complete;
struct completion vc_resume_complete;
rwlock_t susp_res_lock;
enum vc_suspend_status vc_suspend_state;
enum vc_resume_status vc_resume_state;
unsigned int wake_address;
struct timer_list suspend_timer;
int suspend_timer_timeout;
int suspend_timer_running;
/* Global use count for videocore.
** This is equal to the sum of the use counts for all services. When
** this hits zero the videocore suspend procedure will be initiated.
*/
int videocore_use_count;
/* Use count to track requests from videocore peer.
** This use count is not associated with a service, so needs to be
** tracked separately with the state.
*/
int peer_use_count;
/* Flag to indicate whether resume is blocked. This happens when the
** ARM is suspending
*/
struct completion resume_blocker;
int resume_blocked;
struct completion blocked_blocker;
int blocked_count;
int autosuspend_override;
/* Flag to indicate that the first vchiq connect has made it through.
** This means that both sides should be fully ready, and we should
** be able to suspend after this point.
*/
int first_connect;
unsigned long long suspend_start_time;
unsigned long long sleep_start_time;
unsigned long long resume_start_time;
unsigned long long last_wake_time;
} VCHIQ_ARM_STATE_T;
extern int vchiq_arm_log_level;
extern int vchiq_susp_log_level;
int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state);
extern VCHIQ_STATE_T *
vchiq_get_state(void);
extern VCHIQ_STATUS_T
vchiq_arm_vcsuspend(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_arm_force_suspend(VCHIQ_STATE_T *state);
extern int
vchiq_arm_allow_resume(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_arm_vcresume(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state);
extern int
vchiq_check_resume(VCHIQ_STATE_T *state);
extern void
vchiq_check_suspend(VCHIQ_STATE_T *state);
VCHIQ_STATUS_T
vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle);
extern VCHIQ_STATUS_T
vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle);
extern VCHIQ_STATUS_T
vchiq_check_service(VCHIQ_SERVICE_T *service);
extern VCHIQ_STATUS_T
vchiq_platform_suspend(VCHIQ_STATE_T *state);
extern int
vchiq_platform_videocore_wanted(VCHIQ_STATE_T *state);
extern int
vchiq_platform_use_suspend_timer(void);
extern void
vchiq_dump_platform_use_state(VCHIQ_STATE_T *state);
extern void
vchiq_dump_service_use_state(VCHIQ_STATE_T *state);
extern VCHIQ_ARM_STATE_T*
vchiq_platform_get_arm_state(VCHIQ_STATE_T *state);
extern int
vchiq_videocore_wanted(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
enum USE_TYPE_E use_type);
extern VCHIQ_STATUS_T
vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service);
extern VCHIQ_DEBUGFS_NODE_T *
vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance);
extern int
vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance);
extern int
vchiq_instance_get_pid(VCHIQ_INSTANCE_T instance);
extern int
vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance);
extern void
vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace);
extern void
set_suspend_state(VCHIQ_ARM_STATE_T *arm_state,
enum vc_suspend_status new_state);
extern void
set_resume_state(VCHIQ_ARM_STATE_T *arm_state,
enum vc_resume_status new_state);
extern void
start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state);
#endif /* VCHIQ_ARM_H */
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
const char *vchiq_get_build_hostname(void);
const char *vchiq_get_build_version(void);
const char *vchiq_get_build_time(void);
const char *vchiq_get_build_date(void);
/**
* Copyright (c) 2010-2014 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_CFG_H
#define VCHIQ_CFG_H
#define VCHIQ_MAGIC VCHIQ_MAKE_FOURCC('V', 'C', 'H', 'I')
/* The version of VCHIQ - change with any non-trivial change */
#define VCHIQ_VERSION 8
/* The minimum compatible version - update to match VCHIQ_VERSION with any
** incompatible change */
#define VCHIQ_VERSION_MIN 3
/* The version that introduced the VCHIQ_IOC_LIB_VERSION ioctl */
#define VCHIQ_VERSION_LIB_VERSION 7
/* The version that introduced the VCHIQ_IOC_CLOSE_DELIVERED ioctl */
#define VCHIQ_VERSION_CLOSE_DELIVERED 7
/* The version that made it safe to use SYNCHRONOUS mode */
#define VCHIQ_VERSION_SYNCHRONOUS_MODE 8
#define VCHIQ_MAX_STATES 1
#define VCHIQ_MAX_SERVICES 4096
#define VCHIQ_MAX_SLOTS 128
#define VCHIQ_MAX_SLOTS_PER_SIDE 64
#define VCHIQ_NUM_CURRENT_BULKS 32
#define VCHIQ_NUM_SERVICE_BULKS 4
#ifndef VCHIQ_ENABLE_DEBUG
#define VCHIQ_ENABLE_DEBUG 1
#endif
#ifndef VCHIQ_ENABLE_STATS
#define VCHIQ_ENABLE_STATS 1
#endif
#endif /* VCHIQ_CFG_H */
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "vchiq_connected.h"
#include "vchiq_core.h"
#include "vchiq_killable.h"
#include <linux/module.h>
#include <linux/mutex.h>
#define MAX_CALLBACKS 10
static int g_connected;
static int g_num_deferred_callbacks;
static VCHIQ_CONNECTED_CALLBACK_T g_deferred_callback[MAX_CALLBACKS];
static int g_once_init;
static struct mutex g_connected_mutex;
/****************************************************************************
*
* Function to initialize our lock.
*
***************************************************************************/
static void connected_init(void)
{
if (!g_once_init) {
mutex_init(&g_connected_mutex);
g_once_init = 1;
}
}
/****************************************************************************
*
* This function is used to defer initialization until the vchiq stack is
* initialized. If the stack is already initialized, then the callback will
* be made immediately, otherwise it will be deferred until
* vchiq_call_connected_callbacks is called.
*
***************************************************************************/
void vchiq_add_connected_callback(VCHIQ_CONNECTED_CALLBACK_T callback)
{
connected_init();
if (mutex_lock_interruptible(&g_connected_mutex) != 0)
return;
if (g_connected)
/* We're already connected. Call the callback immediately. */
callback();
else {
if (g_num_deferred_callbacks >= MAX_CALLBACKS)
vchiq_log_error(vchiq_core_log_level,
"There already %d callback registered - "
"please increase MAX_CALLBACKS",
g_num_deferred_callbacks);
else {
g_deferred_callback[g_num_deferred_callbacks] =
callback;
g_num_deferred_callbacks++;
}
}
mutex_unlock(&g_connected_mutex);
}
/****************************************************************************
*
* This function is called by the vchiq stack once it has been connected to
* the videocore and clients can start to use the stack.
*
***************************************************************************/
void vchiq_call_connected_callbacks(void)
{
int i;
connected_init();
if (mutex_lock_interruptible(&g_connected_mutex) != 0)
return;
for (i = 0; i < g_num_deferred_callbacks; i++)
g_deferred_callback[i]();
g_num_deferred_callbacks = 0;
g_connected = 1;
mutex_unlock(&g_connected_mutex);
}
EXPORT_SYMBOL(vchiq_add_connected_callback);
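/*
 * Illustrative usage (hypothetical client, not part of this file): a module
 * that depends on the vchiq stack can defer its own setup until the
 * connection to the videocore has been established:
 *
 *   static void example_vchiq_connected(void)
 *   {
 *       // Safe to open/add services here; the stack is connected.
 *   }
 *
 *   static int __init example_init(void)
 *   {
 *       vchiq_add_connected_callback(example_vchiq_connected);
 *       return 0;
 *   }
 */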
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_CONNECTED_H
#define VCHIQ_CONNECTED_H
/* ---- Include Files ----------------------------------------------------- */
/* ---- Constants and Types ---------------------------------------------- */
typedef void (*VCHIQ_CONNECTED_CALLBACK_T)(void);
/* ---- Variable Externs ------------------------------------------------- */
/* ---- Function Prototypes ---------------------------------------------- */
void vchiq_add_connected_callback(VCHIQ_CONNECTED_CALLBACK_T callback);
void vchiq_call_connected_callbacks(void);
#endif /* VCHIQ_CONNECTED_H */
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_CORE_H
#define VCHIQ_CORE_H
#include <linux/mutex.h>
#include <linux/semaphore.h>
#include <linux/kthread.h>
#include "vchiq_cfg.h"
#include "vchiq.h"
/* Run time control of log level, based on KERN_XXX level. */
#define VCHIQ_LOG_DEFAULT 4
#define VCHIQ_LOG_ERROR 3
#define VCHIQ_LOG_WARNING 4
#define VCHIQ_LOG_INFO 6
#define VCHIQ_LOG_TRACE 7
#define VCHIQ_LOG_PREFIX KERN_INFO "vchiq: "
#ifndef vchiq_log_error
#define vchiq_log_error(cat, fmt, ...) \
do { if (cat >= VCHIQ_LOG_ERROR) \
printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
#endif
#ifndef vchiq_log_warning
#define vchiq_log_warning(cat, fmt, ...) \
do { if (cat >= VCHIQ_LOG_WARNING) \
printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
#endif
#ifndef vchiq_log_info
#define vchiq_log_info(cat, fmt, ...) \
do { if (cat >= VCHIQ_LOG_INFO) \
printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
#endif
#ifndef vchiq_log_trace
#define vchiq_log_trace(cat, fmt, ...) \
do { if (cat >= VCHIQ_LOG_TRACE) \
printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
#endif
#define vchiq_loud_error(...) \
vchiq_log_error(vchiq_core_log_level, "===== " __VA_ARGS__)
#ifndef vchiq_static_assert
#define vchiq_static_assert(cond) __attribute__((unused)) \
extern int vchiq_static_assert[(cond) ? 1 : -1]
#endif
#define IS_POW2(x) (x && ((x & (x - 1)) == 0))
/* Ensure that the slot size and maximum number of slots are powers of 2 */
vchiq_static_assert(IS_POW2(VCHIQ_SLOT_SIZE));
vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS));
vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS_PER_SIDE));
#define VCHIQ_SLOT_MASK (VCHIQ_SLOT_SIZE - 1)
#define VCHIQ_SLOT_QUEUE_MASK (VCHIQ_MAX_SLOTS_PER_SIDE - 1)
#define VCHIQ_SLOT_ZERO_SLOTS ((sizeof(VCHIQ_SLOT_ZERO_T) + \
VCHIQ_SLOT_SIZE - 1) / VCHIQ_SLOT_SIZE)
#define VCHIQ_MSG_PADDING 0 /* - */
#define VCHIQ_MSG_CONNECT 1 /* - */
#define VCHIQ_MSG_OPEN 2 /* + (srcport, -), fourcc, client_id */
#define VCHIQ_MSG_OPENACK 3 /* + (srcport, dstport) */
#define VCHIQ_MSG_CLOSE 4 /* + (srcport, dstport) */
#define VCHIQ_MSG_DATA 5 /* + (srcport, dstport) */
#define VCHIQ_MSG_BULK_RX 6 /* + (srcport, dstport), data, size */
#define VCHIQ_MSG_BULK_TX 7 /* + (srcport, dstport), data, size */
#define VCHIQ_MSG_BULK_RX_DONE 8 /* + (srcport, dstport), actual */
#define VCHIQ_MSG_BULK_TX_DONE 9 /* + (srcport, dstport), actual */
#define VCHIQ_MSG_PAUSE 10 /* - */
#define VCHIQ_MSG_RESUME 11 /* - */
#define VCHIQ_MSG_REMOTE_USE 12 /* - */
#define VCHIQ_MSG_REMOTE_RELEASE 13 /* - */
#define VCHIQ_MSG_REMOTE_USE_ACTIVE 14 /* - */
#define VCHIQ_PORT_MAX (VCHIQ_MAX_SERVICES - 1)
#define VCHIQ_PORT_FREE 0x1000
#define VCHIQ_PORT_IS_VALID(port) (port < VCHIQ_PORT_FREE)
#define VCHIQ_MAKE_MSG(type, srcport, dstport) \
((type<<24) | (srcport<<12) | (dstport<<0))
#define VCHIQ_MSG_TYPE(msgid) ((unsigned int)msgid >> 24)
#define VCHIQ_MSG_SRCPORT(msgid) \
(unsigned short)(((unsigned int)msgid >> 12) & 0xfff)
#define VCHIQ_MSG_DSTPORT(msgid) \
((unsigned short)msgid & 0xfff)
#define VCHIQ_FOURCC_AS_4CHARS(fourcc) \
((fourcc) >> 24) & 0xff, \
((fourcc) >> 16) & 0xff, \
((fourcc) >> 8) & 0xff, \
(fourcc) & 0xff
/* Ensure the fields are wide enough */
vchiq_static_assert(VCHIQ_MSG_SRCPORT(VCHIQ_MAKE_MSG(0, 0, VCHIQ_PORT_MAX))
== 0);
vchiq_static_assert(VCHIQ_MSG_TYPE(VCHIQ_MAKE_MSG(0, VCHIQ_PORT_MAX, 0)) == 0);
vchiq_static_assert((unsigned int)VCHIQ_PORT_MAX <
(unsigned int)VCHIQ_PORT_FREE);
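/*
 * Worked illustration of the message-id layout implied by the macros above
 * (ports chosen arbitrarily):
 *   bits 31..24  message type
 *   bits 23..12  source port
 *   bits 11..0   destination port
 * e.g. VCHIQ_MAKE_MSG(VCHIQ_MSG_DATA, 3, 7)
 *        == (5 << 24) | (3 << 12) | 7 == 0x05003007,
 * and VCHIQ_MSG_TYPE/SRCPORT/DSTPORT recover 5, 3 and 7 respectively.
 */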
#define VCHIQ_MSGID_PADDING VCHIQ_MAKE_MSG(VCHIQ_MSG_PADDING, 0, 0)
#define VCHIQ_MSGID_CLAIMED 0x40000000
#define VCHIQ_FOURCC_INVALID 0x00000000
#define VCHIQ_FOURCC_IS_LEGAL(fourcc) (fourcc != VCHIQ_FOURCC_INVALID)
#define VCHIQ_BULK_ACTUAL_ABORTED -1
typedef uint32_t BITSET_T;
vchiq_static_assert((sizeof(BITSET_T) * 8) == 32);
#define BITSET_SIZE(b) ((b + 31) >> 5)
#define BITSET_WORD(b) (b >> 5)
#define BITSET_BIT(b) (1 << (b & 31))
#define BITSET_ZERO(bs) memset(bs, 0, sizeof(bs))
#define BITSET_IS_SET(bs, b) (bs[BITSET_WORD(b)] & BITSET_BIT(b))
#define BITSET_SET(bs, b) (bs[BITSET_WORD(b)] |= BITSET_BIT(b))
#define BITSET_CLR(bs, b) (bs[BITSET_WORD(b)] &= ~BITSET_BIT(b))
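/*
 * Usage sketch for the bitset helpers ('flags' is a hypothetical array):
 *   BITSET_T flags[BITSET_SIZE(40)];   two 32-bit words cover 40 bits
 *   BITSET_ZERO(flags);                valid because flags is an array
 *   BITSET_SET(flags, 33);             sets bit 1 of flags[1]
 *   BITSET_IS_SET(flags, 33)           now non-zero
 */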
#if VCHIQ_ENABLE_STATS
#define VCHIQ_STATS_INC(state, stat) (state->stats. stat++)
#define VCHIQ_SERVICE_STATS_INC(service, stat) (service->stats. stat++)
#define VCHIQ_SERVICE_STATS_ADD(service, stat, addend) \
(service->stats. stat += addend)
#else
#define VCHIQ_STATS_INC(state, stat) ((void)0)
#define VCHIQ_SERVICE_STATS_INC(service, stat) ((void)0)
#define VCHIQ_SERVICE_STATS_ADD(service, stat, addend) ((void)0)
#endif
enum {
DEBUG_ENTRIES,
#if VCHIQ_ENABLE_DEBUG
DEBUG_SLOT_HANDLER_COUNT,
DEBUG_SLOT_HANDLER_LINE,
DEBUG_PARSE_LINE,
DEBUG_PARSE_HEADER,
DEBUG_PARSE_MSGID,
DEBUG_AWAIT_COMPLETION_LINE,
DEBUG_DEQUEUE_MESSAGE_LINE,
DEBUG_SERVICE_CALLBACK_LINE,
DEBUG_MSG_QUEUE_FULL_COUNT,
DEBUG_COMPLETION_QUEUE_FULL_COUNT,
#endif
DEBUG_MAX
};
#if VCHIQ_ENABLE_DEBUG
#define DEBUG_INITIALISE(local) int *debug_ptr = (local)->debug;
#define DEBUG_TRACE(d) \
do { debug_ptr[DEBUG_ ## d] = __LINE__; dsb(); } while (0)
#define DEBUG_VALUE(d, v) \
do { debug_ptr[DEBUG_ ## d] = (v); dsb(); } while (0)
#define DEBUG_COUNT(d) \
do { debug_ptr[DEBUG_ ## d]++; dsb(); } while (0)
#else /* VCHIQ_ENABLE_DEBUG */
#define DEBUG_INITIALISE(local)
#define DEBUG_TRACE(d)
#define DEBUG_VALUE(d, v)
#define DEBUG_COUNT(d)
#endif /* VCHIQ_ENABLE_DEBUG */
typedef enum {
VCHIQ_CONNSTATE_DISCONNECTED,
VCHIQ_CONNSTATE_CONNECTING,
VCHIQ_CONNSTATE_CONNECTED,
VCHIQ_CONNSTATE_PAUSING,
VCHIQ_CONNSTATE_PAUSE_SENT,
VCHIQ_CONNSTATE_PAUSED,
VCHIQ_CONNSTATE_RESUMING,
VCHIQ_CONNSTATE_PAUSE_TIMEOUT,
VCHIQ_CONNSTATE_RESUME_TIMEOUT
} VCHIQ_CONNSTATE_T;
enum {
VCHIQ_SRVSTATE_FREE,
VCHIQ_SRVSTATE_HIDDEN,
VCHIQ_SRVSTATE_LISTENING,
VCHIQ_SRVSTATE_OPENING,
VCHIQ_SRVSTATE_OPEN,
VCHIQ_SRVSTATE_OPENSYNC,
VCHIQ_SRVSTATE_CLOSESENT,
VCHIQ_SRVSTATE_CLOSERECVD,
VCHIQ_SRVSTATE_CLOSEWAIT,
VCHIQ_SRVSTATE_CLOSED
};
enum {
VCHIQ_POLL_TERMINATE,
VCHIQ_POLL_REMOVE,
VCHIQ_POLL_TXNOTIFY,
VCHIQ_POLL_RXNOTIFY,
VCHIQ_POLL_COUNT
};
typedef enum {
VCHIQ_BULK_TRANSMIT,
VCHIQ_BULK_RECEIVE
} VCHIQ_BULK_DIR_T;
typedef void (*VCHIQ_USERDATA_TERM_T)(void *userdata);
typedef struct vchiq_bulk_struct {
short mode;
short dir;
void *userdata;
VCHI_MEM_HANDLE_T handle;
void *data;
int size;
void *remote_data;
int remote_size;
int actual;
} VCHIQ_BULK_T;
typedef struct vchiq_bulk_queue_struct {
int local_insert; /* Where to insert the next local bulk */
int remote_insert; /* Where to insert the next remote bulk (master) */
int process; /* Bulk to transfer next */
int remote_notify; /* Bulk to notify the remote client of next (mstr) */
int remove; /* Bulk to notify the local client of, and remove,
** next */
VCHIQ_BULK_T bulks[VCHIQ_NUM_SERVICE_BULKS];
} VCHIQ_BULK_QUEUE_T;
typedef struct remote_event_struct {
int armed;
int fired;
struct semaphore *event;
} REMOTE_EVENT_T;
typedef struct opaque_platform_state_t *VCHIQ_PLATFORM_STATE_T;
typedef struct vchiq_state_struct VCHIQ_STATE_T;
typedef struct vchiq_slot_struct {
char data[VCHIQ_SLOT_SIZE];
} VCHIQ_SLOT_T;
typedef struct vchiq_slot_info_struct {
/* Use two counters rather than one to avoid the need for a mutex. */
short use_count;
short release_count;
} VCHIQ_SLOT_INFO_T;
typedef struct vchiq_service_struct {
VCHIQ_SERVICE_BASE_T base;
VCHIQ_SERVICE_HANDLE_T handle;
unsigned int ref_count;
int srvstate;
VCHIQ_USERDATA_TERM_T userdata_term;
unsigned int localport;
unsigned int remoteport;
int public_fourcc;
int client_id;
char auto_close;
char sync;
char closing;
char trace;
atomic_t poll_flags;
short version;
short version_min;
short peer_version;
VCHIQ_STATE_T *state;
VCHIQ_INSTANCE_T instance;
int service_use_count;
VCHIQ_BULK_QUEUE_T bulk_tx;
VCHIQ_BULK_QUEUE_T bulk_rx;
struct semaphore remove_event;
struct semaphore bulk_remove_event;
struct mutex bulk_mutex;
struct service_stats_struct {
int quota_stalls;
int slot_stalls;
int bulk_stalls;
int error_count;
int ctrl_tx_count;
int ctrl_rx_count;
int bulk_tx_count;
int bulk_rx_count;
int bulk_aborted_count;
uint64_t ctrl_tx_bytes;
uint64_t ctrl_rx_bytes;
uint64_t bulk_tx_bytes;
uint64_t bulk_rx_bytes;
} stats;
} VCHIQ_SERVICE_T;
/* The quota information is outside VCHIQ_SERVICE_T so that it can be
statically allocated, since for accounting reasons a service's slot
usage is carried over between users of the same port number.
*/
typedef struct vchiq_service_quota_struct {
unsigned short slot_quota;
unsigned short slot_use_count;
unsigned short message_quota;
unsigned short message_use_count;
struct semaphore quota_event;
int previous_tx_index;
} VCHIQ_SERVICE_QUOTA_T;
typedef struct vchiq_shared_state_struct {
/* A non-zero value here indicates that the content is valid. */
int initialised;
/* The first and last (inclusive) slots allocated to the owner. */
int slot_first;
int slot_last;
/* The slot allocated to synchronous messages from the owner. */
int slot_sync;
/* Signalling this event indicates that the owner's slot handler thread
** should run. */
REMOTE_EVENT_T trigger;
/* Indicates the byte position within the stream where the next message
** will be written. The least significant bits are an index into the
** slot. The next bits are the index of the slot in slot_queue. */
int tx_pos;
/* This event should be signalled when a slot is recycled. */
REMOTE_EVENT_T recycle;
/* The slot_queue index where the next recycled slot will be written. */
int slot_queue_recycle;
/* This event should be signalled when a synchronous message is sent. */
REMOTE_EVENT_T sync_trigger;
/* This event should be signalled when a synchronous message has been
** released. */
REMOTE_EVENT_T sync_release;
/* A circular buffer of slot indexes. */
int slot_queue[VCHIQ_MAX_SLOTS_PER_SIDE];
/* Debugging state */
int debug[DEBUG_MAX];
} VCHIQ_SHARED_STATE_T;
typedef struct vchiq_slot_zero_struct {
int magic;
short version;
short version_min;
int slot_zero_size;
int slot_size;
int max_slots;
int max_slots_per_side;
int platform_data[2];
VCHIQ_SHARED_STATE_T master;
VCHIQ_SHARED_STATE_T slave;
VCHIQ_SLOT_INFO_T slots[VCHIQ_MAX_SLOTS];
} VCHIQ_SLOT_ZERO_T;
struct vchiq_state_struct {
int id;
int initialised;
VCHIQ_CONNSTATE_T conn_state;
int is_master;
short version_common;
VCHIQ_SHARED_STATE_T *local;
VCHIQ_SHARED_STATE_T *remote;
VCHIQ_SLOT_T *slot_data;
unsigned short default_slot_quota;
unsigned short default_message_quota;
/* Event indicating connect message received */
struct semaphore connect;
/* Mutex protecting services */
struct mutex mutex;
VCHIQ_INSTANCE_T *instance;
/* Processes incoming messages */
struct task_struct *slot_handler_thread;
/* Processes recycled slots */
struct task_struct *recycle_thread;
/* Processes synchronous messages */
struct task_struct *sync_thread;
/* Local implementation of the trigger remote event */
struct semaphore trigger_event;
/* Local implementation of the recycle remote event */
struct semaphore recycle_event;
/* Local implementation of the sync trigger remote event */
struct semaphore sync_trigger_event;
/* Local implementation of the sync release remote event */
struct semaphore sync_release_event;
char *tx_data;
char *rx_data;
VCHIQ_SLOT_INFO_T *rx_info;
struct mutex slot_mutex;
struct mutex recycle_mutex;
struct mutex sync_mutex;
struct mutex bulk_transfer_mutex;
/* Indicates the byte position within the stream from where the next
** message will be read. The least significant bits are an index into
** the slot. The next bits are the index of the slot in
** remote->slot_queue. */
int rx_pos;
/* A cached copy of local->tx_pos. Only write to local->tx_pos, and read
from remote->tx_pos. */
int local_tx_pos;
/* The slot_queue index of the slot to become available next. */
int slot_queue_available;
/* A flag to indicate if any poll has been requested */
int poll_needed;
/* The index of the previous slot used for data messages. */
int previous_data_index;
/* The number of slots occupied by data messages. */
unsigned short data_use_count;
/* The maximum number of slots to be occupied by data messages. */
unsigned short data_quota;
/* An array of bit sets indicating which services must be polled. */
atomic_t poll_services[BITSET_SIZE(VCHIQ_MAX_SERVICES)];
/* The number of the first unused service */
int unused_service;
/* Signalled when a free slot becomes available. */
struct semaphore slot_available_event;
struct semaphore slot_remove_event;
/* Signalled when a free data slot becomes available. */
struct semaphore data_quota_event;
/* Incremented when there are bulk transfers which cannot be processed
* whilst paused and must be processed on resume */
int deferred_bulks;
struct state_stats_struct {
int slot_stalls;
int data_stalls;
int ctrl_tx_count;
int ctrl_rx_count;
int error_count;
} stats;
VCHIQ_SERVICE_T * services[VCHIQ_MAX_SERVICES];
VCHIQ_SERVICE_QUOTA_T service_quotas[VCHIQ_MAX_SERVICES];
VCHIQ_SLOT_INFO_T slot_info[VCHIQ_MAX_SLOTS];
VCHIQ_PLATFORM_STATE_T platform_state;
};
struct bulk_waiter {
VCHIQ_BULK_T *bulk;
struct semaphore event;
int actual;
};
extern spinlock_t bulk_waiter_spinlock;
extern int vchiq_core_log_level;
extern int vchiq_core_msg_log_level;
extern int vchiq_sync_log_level;
extern VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES];
extern const char *
get_conn_state_name(VCHIQ_CONNSTATE_T conn_state);
extern VCHIQ_SLOT_ZERO_T *
vchiq_init_slots(void *mem_base, int mem_size);
extern VCHIQ_STATUS_T
vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero,
int is_master);
extern VCHIQ_STATUS_T
vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance);
extern VCHIQ_SERVICE_T *
vchiq_add_service_internal(VCHIQ_STATE_T *state,
const VCHIQ_SERVICE_PARAMS_T *params, int srvstate,
VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term);
extern VCHIQ_STATUS_T
vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id);
extern VCHIQ_STATUS_T
vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd);
extern void
vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service);
extern void
vchiq_free_service_internal(VCHIQ_SERVICE_T *service);
extern VCHIQ_STATUS_T
vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance);
extern VCHIQ_STATUS_T
vchiq_pause_internal(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_resume_internal(VCHIQ_STATE_T *state);
extern void
remote_event_pollall(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata,
VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir);
extern void
vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state);
extern void
vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service);
extern void
vchiq_loud_error_header(void);
extern void
vchiq_loud_error_footer(void);
extern void
request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type);
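/* A service handle encodes both the owning state and the local port: the
 * high bits (handle / VCHIQ_MAX_SERVICES, masked to VCHIQ_MAX_STATES) select
 * the state, and the low bits (handle & (VCHIQ_MAX_SERVICES - 1)) select the
 * service slot within it. */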
static inline VCHIQ_SERVICE_T *
handle_to_service(VCHIQ_SERVICE_HANDLE_T handle)
{
VCHIQ_STATE_T *state = vchiq_states[(handle / VCHIQ_MAX_SERVICES) &
(VCHIQ_MAX_STATES - 1)];
if (!state)
return NULL;
return state->services[handle & (VCHIQ_MAX_SERVICES - 1)];
}
extern VCHIQ_SERVICE_T *
find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle);
extern VCHIQ_SERVICE_T *
find_service_by_port(VCHIQ_STATE_T *state, int localport);
extern VCHIQ_SERVICE_T *
find_service_for_instance(VCHIQ_INSTANCE_T instance,
VCHIQ_SERVICE_HANDLE_T handle);
extern VCHIQ_SERVICE_T *
find_closed_service_for_instance(VCHIQ_INSTANCE_T instance,
VCHIQ_SERVICE_HANDLE_T handle);
extern VCHIQ_SERVICE_T *
next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance,
int *pidx);
extern void
lock_service(VCHIQ_SERVICE_T *service);
extern void
unlock_service(VCHIQ_SERVICE_T *service);
/* The following functions are called from vchiq_core, and external
** implementations must be provided. */
extern VCHIQ_STATUS_T
vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk,
VCHI_MEM_HANDLE_T memhandle, void *offset, int size, int dir);
extern void
vchiq_transfer_bulk(VCHIQ_BULK_T *bulk);
extern void
vchiq_complete_bulk(VCHIQ_BULK_T *bulk);
extern VCHIQ_STATUS_T
vchiq_copy_from_user(void *dst, const void *src, int size);
extern void
remote_event_signal(REMOTE_EVENT_T *event);
void
vchiq_platform_check_suspend(VCHIQ_STATE_T *state);
extern void
vchiq_platform_paused(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_platform_resume(VCHIQ_STATE_T *state);
extern void
vchiq_platform_resumed(VCHIQ_STATE_T *state);
extern void
vchiq_dump(void *dump_context, const char *str, int len);
extern void
vchiq_dump_platform_state(void *dump_context);
extern void
vchiq_dump_platform_instances(void *dump_context);
extern void
vchiq_dump_platform_service_state(void *dump_context,
VCHIQ_SERVICE_T *service);
extern VCHIQ_STATUS_T
vchiq_use_service_internal(VCHIQ_SERVICE_T *service);
extern VCHIQ_STATUS_T
vchiq_release_service_internal(VCHIQ_SERVICE_T *service);
extern void
vchiq_on_remote_use(VCHIQ_STATE_T *state);
extern void
vchiq_on_remote_release(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_platform_init_state(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_check_service(VCHIQ_SERVICE_T *service);
extern void
vchiq_on_remote_use_active(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_send_remote_use(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_send_remote_release(VCHIQ_STATE_T *state);
extern VCHIQ_STATUS_T
vchiq_send_remote_use_active(VCHIQ_STATE_T *state);
extern void
vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate);
extern void
vchiq_platform_handle_timeout(VCHIQ_STATE_T *state);
extern void
vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate);
extern void
vchiq_log_dump_mem(const char *label, uint32_t addr, const void *voidMem,
size_t numBytes);
#endif
/**
* Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/debugfs.h>
#include "vchiq_core.h"
#include "vchiq_arm.h"
#include "vchiq_debugfs.h"
#ifdef CONFIG_DEBUG_FS
/****************************************************************************
*
* log category entries
*
***************************************************************************/
#define DEBUGFS_WRITE_BUF_SIZE 256
#define VCHIQ_LOG_ERROR_STR "error"
#define VCHIQ_LOG_WARNING_STR "warning"
#define VCHIQ_LOG_INFO_STR "info"
#define VCHIQ_LOG_TRACE_STR "trace"
/* Top-level debug info */
struct vchiq_debugfs_info {
/* Global 'vchiq' debugfs entry used by all instances */
struct dentry *vchiq_cfg_dir;
/* one entry per client process */
struct dentry *clients;
/* log categories */
struct dentry *log_categories;
};
static struct vchiq_debugfs_info debugfs_info;
/* Log category debugfs entries */
struct vchiq_debugfs_log_entry {
const char *name;
int *plevel;
struct dentry *dir;
};
static struct vchiq_debugfs_log_entry vchiq_debugfs_log_entries[] = {
{ "core", &vchiq_core_log_level },
{ "msg", &vchiq_core_msg_log_level },
{ "sync", &vchiq_sync_log_level },
{ "susp", &vchiq_susp_log_level },
{ "arm", &vchiq_arm_log_level },
};
static int n_log_entries = ARRAY_SIZE(vchiq_debugfs_log_entries);
static struct dentry *vchiq_clients_top(void);
static struct dentry *vchiq_debugfs_top(void);
static int debugfs_log_show(struct seq_file *f, void *offset)
{
int *levp = f->private;
char *log_value = NULL;
switch (*levp) {
case VCHIQ_LOG_ERROR:
log_value = VCHIQ_LOG_ERROR_STR;
break;
case VCHIQ_LOG_WARNING:
log_value = VCHIQ_LOG_WARNING_STR;
break;
case VCHIQ_LOG_INFO:
log_value = VCHIQ_LOG_INFO_STR;
break;
case VCHIQ_LOG_TRACE:
log_value = VCHIQ_LOG_TRACE_STR;
break;
default:
break;
}
seq_printf(f, "%s\n", log_value ? log_value : "(null)");
return 0;
}
static int debugfs_log_open(struct inode *inode, struct file *file)
{
return single_open(file, debugfs_log_show, inode->i_private);
}
static ssize_t debugfs_log_write(struct file *file,
const char __user *buffer,
size_t count, loff_t *ppos)
{
struct seq_file *f = (struct seq_file *)file->private_data;
int *levp = f->private;
char kbuf[DEBUGFS_WRITE_BUF_SIZE + 1];
memset(kbuf, 0, DEBUGFS_WRITE_BUF_SIZE + 1);
if (count >= DEBUGFS_WRITE_BUF_SIZE)
count = DEBUGFS_WRITE_BUF_SIZE;
if (copy_from_user(kbuf, buffer, count) != 0)
return -EFAULT;
kbuf[count - 1] = 0;
if (strncmp("error", kbuf, strlen("error")) == 0)
*levp = VCHIQ_LOG_ERROR;
else if (strncmp("warning", kbuf, strlen("warning")) == 0)
*levp = VCHIQ_LOG_WARNING;
else if (strncmp("info", kbuf, strlen("info")) == 0)
*levp = VCHIQ_LOG_INFO;
else if (strncmp("trace", kbuf, strlen("trace")) == 0)
*levp = VCHIQ_LOG_TRACE;
else
*levp = VCHIQ_LOG_DEFAULT;
*ppos += count;
return count;
}
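/*
 * Given the directories created in vchiq_debugfs_init() below, each log
 * category is exposed as <debugfs>/vchiq/log/<name> (typically
 * /sys/kernel/debug/vchiq/log/core, .../msg, .../sync, .../susp and .../arm).
 * Writing "error", "warning", "info" or "trace" selects that level; any
 * other string restores the default.
 */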
static const struct file_operations debugfs_log_fops = {
.owner = THIS_MODULE,
.open = debugfs_log_open,
.write = debugfs_log_write,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
/* create an entry under <debugfs>/vchiq/log for each log category */
static int vchiq_debugfs_create_log_entries(struct dentry *top)
{
struct dentry *dir;
size_t i;
int ret = 0;
dir = debugfs_create_dir("log", vchiq_debugfs_top());
if (!dir)
return -ENOMEM;
debugfs_info.log_categories = dir;
for (i = 0; i < n_log_entries; i++) {
void *levp = (void *)vchiq_debugfs_log_entries[i].plevel;
dir = debugfs_create_file(vchiq_debugfs_log_entries[i].name,
0644,
debugfs_info.log_categories,
levp,
&debugfs_log_fops);
if (!dir) {
ret = -ENOMEM;
break;
}
vchiq_debugfs_log_entries[i].dir = dir;
}
return ret;
}
static int debugfs_usecount_show(struct seq_file *f, void *offset)
{
VCHIQ_INSTANCE_T instance = f->private;
int use_count;
use_count = vchiq_instance_get_use_count(instance);
seq_printf(f, "%d\n", use_count);
return 0;
}
static int debugfs_usecount_open(struct inode *inode, struct file *file)
{
return single_open(file, debugfs_usecount_show, inode->i_private);
}
static const struct file_operations debugfs_usecount_fops = {
.owner = THIS_MODULE,
.open = debugfs_usecount_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int debugfs_trace_show(struct seq_file *f, void *offset)
{
VCHIQ_INSTANCE_T instance = f->private;
int trace;
trace = vchiq_instance_get_trace(instance);
seq_printf(f, "%s\n", trace ? "Y" : "N");
return 0;
}
static int debugfs_trace_open(struct inode *inode, struct file *file)
{
return single_open(file, debugfs_trace_show, inode->i_private);
}
static ssize_t debugfs_trace_write(struct file *file,
const char __user *buffer,
size_t count, loff_t *ppos)
{
struct seq_file *f = (struct seq_file *)file->private_data;
VCHIQ_INSTANCE_T instance = f->private;
char firstchar;
if (copy_from_user(&firstchar, buffer, 1) != 0)
return -EFAULT;
switch (firstchar) {
case 'Y':
case 'y':
case '1':
vchiq_instance_set_trace(instance, 1);
break;
case 'N':
case 'n':
case '0':
vchiq_instance_set_trace(instance, 0);
break;
default:
break;
}
*ppos += count;
return count;
}
static const struct file_operations debugfs_trace_fops = {
.owner = THIS_MODULE,
.open = debugfs_trace_open,
.write = debugfs_trace_write,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
/* add an instance (process) to the debugfs entries */
int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance)
{
char pidstr[16];
struct dentry *top, *use_count, *trace;
struct dentry *clients = vchiq_clients_top();
snprintf(pidstr, sizeof(pidstr), "%d",
vchiq_instance_get_pid(instance));
top = debugfs_create_dir(pidstr, clients);
if (!top)
goto fail_top;
use_count = debugfs_create_file("use_count",
0444, top,
instance,
&debugfs_usecount_fops);
if (!use_count)
goto fail_use_count;
trace = debugfs_create_file("trace",
0644, top,
instance,
&debugfs_trace_fops);
if (!trace)
goto fail_trace;
vchiq_instance_get_debugfs_node(instance)->dentry = top;
return 0;
fail_trace:
debugfs_remove(use_count);
fail_use_count:
debugfs_remove(top);
fail_top:
return -ENOMEM;
}
void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance)
{
VCHIQ_DEBUGFS_NODE_T *node = vchiq_instance_get_debugfs_node(instance);
debugfs_remove_recursive(node->dentry);
}
int vchiq_debugfs_init(void)
{
BUG_ON(debugfs_info.vchiq_cfg_dir != NULL);
debugfs_info.vchiq_cfg_dir = debugfs_create_dir("vchiq", NULL);
if (debugfs_info.vchiq_cfg_dir == NULL)
goto fail;
debugfs_info.clients = debugfs_create_dir("clients",
vchiq_debugfs_top());
if (!debugfs_info.clients)
goto fail;
if (vchiq_debugfs_create_log_entries(vchiq_debugfs_top()) != 0)
goto fail;
return 0;
fail:
vchiq_debugfs_deinit();
vchiq_log_error(vchiq_arm_log_level,
"%s: failed to create debugfs directory",
__func__);
return -ENOMEM;
}
/* remove all the debugfs entries */
void vchiq_debugfs_deinit(void)
{
debugfs_remove_recursive(vchiq_debugfs_top());
}
static struct dentry *vchiq_clients_top(void)
{
return debugfs_info.clients;
}
static struct dentry *vchiq_debugfs_top(void)
{
BUG_ON(debugfs_info.vchiq_cfg_dir == NULL);
return debugfs_info.vchiq_cfg_dir;
}
#else /* CONFIG_DEBUG_FS */
int vchiq_debugfs_init(void)
{
return 0;
}
void vchiq_debugfs_deinit(void)
{
}
int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance)
{
return 0;
}
void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance)
{
}
#endif /* CONFIG_DEBUG_FS */
/**
* Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_DEBUGFS_H
#define VCHIQ_DEBUGFS_H
#include "vchiq_core.h"
typedef struct vchiq_debugfs_node_struct
{
struct dentry *dentry;
} VCHIQ_DEBUGFS_NODE_T;
int vchiq_debugfs_init(void);
void vchiq_debugfs_deinit(void);
int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance);
void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance);
#endif /* VCHIQ_DEBUGFS_H */
#!/usr/bin/perl -w
use strict;
#
# Generate a version from available information
#
my $prefix = shift @ARGV;
my $root = shift @ARGV;
if ( not defined $root ) {
die "usage: $0 prefix root-dir\n";
}
if ( ! -d $root ) {
die "root directory $root not found\n";
}
my $version = "unknown";
my $tainted = "";
if ( -d "$root/.git" ) {
# attempt to work out git version. only do so
# on a linux build host, as cygwin builds are
# already slow enough
if ( -f "/usr/bin/git" || -f "/usr/local/bin/git" ) {
if (not open(F, "git --git-dir $root/.git rev-parse --verify HEAD|")) {
$version = "no git version";
}
else {
$version = <F>;
$version =~ s/[ \r\n]*$//; # chomp may not be enough (cygwin).
$version =~ s/^[ \r\n]*//; # chomp may not be enough (cygwin).
}
if (open(G, "git --git-dir $root/.git status --porcelain|")) {
$tainted = <G>;
$tainted =~ s/[ \r\n]*$//; # chomp may not be enough (cygwin).
$tainted =~ s/^[ \r\n]*//; # chomp may not be enough (cygwin).
if (length $tainted) {
$version = join ' ', $version, "(tainted)";
}
else {
$version = join ' ', $version, "(clean)";
}
}
}
}
my $hostname = `hostname`;
$hostname =~ s/[ \r\n]*$//; # chomp may not be enough (cygwin).
$hostname =~ s/^[ \r\n]*//; # chomp may not be enough (cygwin).
print STDERR "Version $version\n";
print <<EOF;
#include "${prefix}_build_info.h"
#include <linux/broadcom/vc_debug_sym.h>
VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_hostname, "$hostname" );
VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_version, "$version" );
VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_time, __TIME__ );
VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_date, __DATE__ );
const char *vchiq_get_build_hostname( void )
{
return vchiq_build_hostname;
}
const char *vchiq_get_build_version( void )
{
return vchiq_build_version;
}
const char *vchiq_get_build_date( void )
{
return vchiq_build_date;
}
const char *vchiq_get_build_time( void )
{
return vchiq_build_time;
}
EOF
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_IF_H
#define VCHIQ_IF_H
#include "interface/vchi/vchi_mh.h"
#define VCHIQ_SERVICE_HANDLE_INVALID 0
#define VCHIQ_SLOT_SIZE 4096
#define VCHIQ_MAX_MSG_SIZE (VCHIQ_SLOT_SIZE - sizeof(VCHIQ_HEADER_T))
#define VCHIQ_CHANNEL_SIZE VCHIQ_MAX_MSG_SIZE /* For backwards compatibility */
#define VCHIQ_MAKE_FOURCC(x0, x1, x2, x3) \
(((x0) << 24) | ((x1) << 16) | ((x2) << 8) | (x3))
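/*
 * Example with an arbitrary identifier: VCHIQ_MAKE_FOURCC('t', 'e', 's', 't')
 * packs the characters into 0x74657374 ('t' in the most significant byte);
 * VCHIQ_FOURCC_AS_4CHARS in vchiq_core.h expands such a value back into four
 * "%c" arguments for logging.
 */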
#define VCHIQ_GET_SERVICE_USERDATA(service) vchiq_get_service_userdata(service)
#define VCHIQ_GET_SERVICE_FOURCC(service) vchiq_get_service_fourcc(service)
typedef enum {
VCHIQ_SERVICE_OPENED, /* service, -, - */
VCHIQ_SERVICE_CLOSED, /* service, -, - */
VCHIQ_MESSAGE_AVAILABLE, /* service, header, - */
VCHIQ_BULK_TRANSMIT_DONE, /* service, -, bulk_userdata */
VCHIQ_BULK_RECEIVE_DONE, /* service, -, bulk_userdata */
VCHIQ_BULK_TRANSMIT_ABORTED, /* service, -, bulk_userdata */
VCHIQ_BULK_RECEIVE_ABORTED /* service, -, bulk_userdata */
} VCHIQ_REASON_T;
typedef enum {
VCHIQ_ERROR = -1,
VCHIQ_SUCCESS = 0,
VCHIQ_RETRY = 1
} VCHIQ_STATUS_T;
typedef enum {
VCHIQ_BULK_MODE_CALLBACK,
VCHIQ_BULK_MODE_BLOCKING,
VCHIQ_BULK_MODE_NOCALLBACK,
VCHIQ_BULK_MODE_WAITING /* Reserved for internal use */
} VCHIQ_BULK_MODE_T;
typedef enum {
VCHIQ_SERVICE_OPTION_AUTOCLOSE,
VCHIQ_SERVICE_OPTION_SLOT_QUOTA,
VCHIQ_SERVICE_OPTION_MESSAGE_QUOTA,
VCHIQ_SERVICE_OPTION_SYNCHRONOUS,
VCHIQ_SERVICE_OPTION_TRACE
} VCHIQ_SERVICE_OPTION_T;
typedef struct vchiq_header_struct {
/* The message identifier - opaque to applications. */
int msgid;
/* Size of message data. */
unsigned int size;
char data[0]; /* message */
} VCHIQ_HEADER_T;
typedef struct {
const void *data;
unsigned int size;
} VCHIQ_ELEMENT_T;
typedef unsigned int VCHIQ_SERVICE_HANDLE_T;
typedef VCHIQ_STATUS_T (*VCHIQ_CALLBACK_T)(VCHIQ_REASON_T, VCHIQ_HEADER_T *,
VCHIQ_SERVICE_HANDLE_T, void *);
typedef struct vchiq_service_base_struct {
int fourcc;
VCHIQ_CALLBACK_T callback;
void *userdata;
} VCHIQ_SERVICE_BASE_T;
typedef struct vchiq_service_params_struct {
int fourcc;
VCHIQ_CALLBACK_T callback;
void *userdata;
short version; /* Increment for non-trivial changes */
short version_min; /* Update for incompatible changes */
} VCHIQ_SERVICE_PARAMS_T;
typedef struct vchiq_config_struct {
unsigned int max_msg_size;
unsigned int bulk_threshold; /* The message size above which it
is better to use a bulk transfer
(<= max_msg_size) */
unsigned int max_outstanding_bulks;
unsigned int max_services;
short version; /* The version of VCHIQ */
short version_min; /* The minimum compatible version of VCHIQ */
} VCHIQ_CONFIG_T;
typedef struct vchiq_instance_struct *VCHIQ_INSTANCE_T;
typedef void (*VCHIQ_REMOTE_USE_CALLBACK_T)(void *cb_arg);
extern VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *pinstance);
extern VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance);
extern VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance);
extern VCHIQ_STATUS_T vchiq_add_service(VCHIQ_INSTANCE_T instance,
const VCHIQ_SERVICE_PARAMS_T *params,
VCHIQ_SERVICE_HANDLE_T *pservice);
extern VCHIQ_STATUS_T vchiq_open_service(VCHIQ_INSTANCE_T instance,
const VCHIQ_SERVICE_PARAMS_T *params,
VCHIQ_SERVICE_HANDLE_T *pservice);
extern VCHIQ_STATUS_T vchiq_close_service(VCHIQ_SERVICE_HANDLE_T service);
extern VCHIQ_STATUS_T vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T service);
extern VCHIQ_STATUS_T vchiq_use_service(VCHIQ_SERVICE_HANDLE_T service);
extern VCHIQ_STATUS_T vchiq_use_service_no_resume(
VCHIQ_SERVICE_HANDLE_T service);
extern VCHIQ_STATUS_T vchiq_release_service(VCHIQ_SERVICE_HANDLE_T service);
extern VCHIQ_STATUS_T vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T service,
const VCHIQ_ELEMENT_T *elements, unsigned int count);
extern void vchiq_release_message(VCHIQ_SERVICE_HANDLE_T service,
VCHIQ_HEADER_T *header);
extern VCHIQ_STATUS_T vchiq_queue_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service,
const void *data, unsigned int size, void *userdata);
extern VCHIQ_STATUS_T vchiq_queue_bulk_receive(VCHIQ_SERVICE_HANDLE_T service,
void *data, unsigned int size, void *userdata);
extern VCHIQ_STATUS_T vchiq_queue_bulk_transmit_handle(
VCHIQ_SERVICE_HANDLE_T service, VCHI_MEM_HANDLE_T handle,
const void *offset, unsigned int size, void *userdata);
extern VCHIQ_STATUS_T vchiq_queue_bulk_receive_handle(
VCHIQ_SERVICE_HANDLE_T service, VCHI_MEM_HANDLE_T handle,
void *offset, unsigned int size, void *userdata);
extern VCHIQ_STATUS_T vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service,
const void *data, unsigned int size, void *userdata,
VCHIQ_BULK_MODE_T mode);
extern VCHIQ_STATUS_T vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T service,
void *data, unsigned int size, void *userdata,
VCHIQ_BULK_MODE_T mode);
extern VCHIQ_STATUS_T vchiq_bulk_transmit_handle(VCHIQ_SERVICE_HANDLE_T service,
VCHI_MEM_HANDLE_T handle, const void *offset, unsigned int size,
void *userdata, VCHIQ_BULK_MODE_T mode);
extern VCHIQ_STATUS_T vchiq_bulk_receive_handle(VCHIQ_SERVICE_HANDLE_T service,
VCHI_MEM_HANDLE_T handle, void *offset, unsigned int size,
void *userdata, VCHIQ_BULK_MODE_T mode);
extern int vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T service);
extern void *vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T service);
extern int vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T service);
extern VCHIQ_STATUS_T vchiq_get_config(VCHIQ_INSTANCE_T instance,
int config_size, VCHIQ_CONFIG_T *pconfig);
extern VCHIQ_STATUS_T vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T service,
VCHIQ_SERVICE_OPTION_T option, int value);
extern VCHIQ_STATUS_T vchiq_remote_use(VCHIQ_INSTANCE_T instance,
VCHIQ_REMOTE_USE_CALLBACK_T callback, void *cb_arg);
extern VCHIQ_STATUS_T vchiq_remote_release(VCHIQ_INSTANCE_T instance);
extern VCHIQ_STATUS_T vchiq_dump_phys_mem(VCHIQ_SERVICE_HANDLE_T service,
void *ptr, size_t num_bytes);
extern VCHIQ_STATUS_T vchiq_get_peer_version(VCHIQ_SERVICE_HANDLE_T handle,
short *peer_version);
#endif /* VCHIQ_IF_H */
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_IOCTLS_H
#define VCHIQ_IOCTLS_H
#include <linux/ioctl.h>
#include "vchiq_if.h"
#define VCHIQ_IOC_MAGIC 0xc4
#define VCHIQ_INVALID_HANDLE (~0)
typedef struct {
VCHIQ_SERVICE_PARAMS_T params;
int is_open;
int is_vchi;
unsigned int handle; /* OUT */
} VCHIQ_CREATE_SERVICE_T;
typedef struct {
unsigned int handle;
unsigned int count;
const VCHIQ_ELEMENT_T *elements;
} VCHIQ_QUEUE_MESSAGE_T;
typedef struct {
unsigned int handle;
void *data;
unsigned int size;
void *userdata;
VCHIQ_BULK_MODE_T mode;
} VCHIQ_QUEUE_BULK_TRANSFER_T;
typedef struct {
VCHIQ_REASON_T reason;
VCHIQ_HEADER_T *header;
void *service_userdata;
void *bulk_userdata;
} VCHIQ_COMPLETION_DATA_T;
typedef struct {
unsigned int count;
VCHIQ_COMPLETION_DATA_T *buf;
unsigned int msgbufsize;
unsigned int msgbufcount; /* IN/OUT */
void **msgbufs;
} VCHIQ_AWAIT_COMPLETION_T;
typedef struct {
unsigned int handle;
int blocking;
unsigned int bufsize;
void *buf;
} VCHIQ_DEQUEUE_MESSAGE_T;
typedef struct {
unsigned int config_size;
VCHIQ_CONFIG_T *pconfig;
} VCHIQ_GET_CONFIG_T;
typedef struct {
unsigned int handle;
VCHIQ_SERVICE_OPTION_T option;
int value;
} VCHIQ_SET_SERVICE_OPTION_T;
typedef struct {
void *virt_addr;
size_t num_bytes;
} VCHIQ_DUMP_MEM_T;
#define VCHIQ_IOC_CONNECT _IO(VCHIQ_IOC_MAGIC, 0)
#define VCHIQ_IOC_SHUTDOWN _IO(VCHIQ_IOC_MAGIC, 1)
#define VCHIQ_IOC_CREATE_SERVICE \
_IOWR(VCHIQ_IOC_MAGIC, 2, VCHIQ_CREATE_SERVICE_T)
#define VCHIQ_IOC_REMOVE_SERVICE _IO(VCHIQ_IOC_MAGIC, 3)
#define VCHIQ_IOC_QUEUE_MESSAGE \
_IOW(VCHIQ_IOC_MAGIC, 4, VCHIQ_QUEUE_MESSAGE_T)
#define VCHIQ_IOC_QUEUE_BULK_TRANSMIT \
_IOWR(VCHIQ_IOC_MAGIC, 5, VCHIQ_QUEUE_BULK_TRANSFER_T)
#define VCHIQ_IOC_QUEUE_BULK_RECEIVE \
_IOWR(VCHIQ_IOC_MAGIC, 6, VCHIQ_QUEUE_BULK_TRANSFER_T)
#define VCHIQ_IOC_AWAIT_COMPLETION \
_IOWR(VCHIQ_IOC_MAGIC, 7, VCHIQ_AWAIT_COMPLETION_T)
#define VCHIQ_IOC_DEQUEUE_MESSAGE \
_IOWR(VCHIQ_IOC_MAGIC, 8, VCHIQ_DEQUEUE_MESSAGE_T)
#define VCHIQ_IOC_GET_CLIENT_ID _IO(VCHIQ_IOC_MAGIC, 9)
#define VCHIQ_IOC_GET_CONFIG \
_IOWR(VCHIQ_IOC_MAGIC, 10, VCHIQ_GET_CONFIG_T)
#define VCHIQ_IOC_CLOSE_SERVICE _IO(VCHIQ_IOC_MAGIC, 11)
#define VCHIQ_IOC_USE_SERVICE _IO(VCHIQ_IOC_MAGIC, 12)
#define VCHIQ_IOC_RELEASE_SERVICE _IO(VCHIQ_IOC_MAGIC, 13)
#define VCHIQ_IOC_SET_SERVICE_OPTION \
_IOW(VCHIQ_IOC_MAGIC, 14, VCHIQ_SET_SERVICE_OPTION_T)
#define VCHIQ_IOC_DUMP_PHYS_MEM \
_IOW(VCHIQ_IOC_MAGIC, 15, VCHIQ_DUMP_MEM_T)
#define VCHIQ_IOC_LIB_VERSION _IO(VCHIQ_IOC_MAGIC, 16)
#define VCHIQ_IOC_CLOSE_DELIVERED _IO(VCHIQ_IOC_MAGIC, 17)
#define VCHIQ_IOC_MAX 17
#endif
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* ---- Include Files ---------------------------------------------------- */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include "vchiq_core.h"
#include "vchiq_arm.h"
#include "vchiq_killable.h"
/* ---- Public Variables ------------------------------------------------- */
/* ---- Private Constants and Types -------------------------------------- */
struct bulk_waiter_node {
struct bulk_waiter bulk_waiter;
int pid;
struct list_head list;
};
struct vchiq_instance_struct {
VCHIQ_STATE_T *state;
int connected;
struct list_head bulk_waiter_list;
struct mutex bulk_waiter_list_mutex;
};
static VCHIQ_STATUS_T
vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data,
unsigned int size, VCHIQ_BULK_DIR_T dir);
/****************************************************************************
*
* vchiq_initialise
*
***************************************************************************/
#define VCHIQ_INIT_RETRIES 10
VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *instanceOut)
{
VCHIQ_STATUS_T status = VCHIQ_ERROR;
VCHIQ_STATE_T *state;
VCHIQ_INSTANCE_T instance = NULL;
int i;
vchiq_log_trace(vchiq_core_log_level, "%s called", __func__);
/* VideoCore may not be ready due to boot up timing.
It may never be ready if kernel and firmware are mismatched, so don't block forever. */
for (i = 0; i < VCHIQ_INIT_RETRIES; i++) {
state = vchiq_get_state();
if (state)
break;
udelay(500);
}
if (i == VCHIQ_INIT_RETRIES) {
vchiq_log_error(vchiq_core_log_level,
"%s: videocore not initialized\n", __func__);
goto failed;
} else if (i > 0) {
vchiq_log_warning(vchiq_core_log_level,
"%s: videocore initialized after %d retries\n", __func__, i);
}
instance = kzalloc(sizeof(*instance), GFP_KERNEL);
if (!instance) {
vchiq_log_error(vchiq_core_log_level,
"%s: error allocating vchiq instance\n", __func__);
goto failed;
}
instance->connected = 0;
instance->state = state;
mutex_init(&instance->bulk_waiter_list_mutex);
INIT_LIST_HEAD(&instance->bulk_waiter_list);
*instanceOut = instance;
status = VCHIQ_SUCCESS;
failed:
vchiq_log_trace(vchiq_core_log_level,
"%s(%p): returning %d", __func__, instance, status);
return status;
}
EXPORT_SYMBOL(vchiq_initialise);
/****************************************************************************
*
* vchiq_shutdown
*
***************************************************************************/
VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance)
{
VCHIQ_STATUS_T status;
VCHIQ_STATE_T *state = instance->state;
vchiq_log_trace(vchiq_core_log_level,
"%s(%p) called", __func__, instance);
if (mutex_lock_interruptible(&state->mutex) != 0)
return VCHIQ_RETRY;
/* Remove all services */
status = vchiq_shutdown_internal(state, instance);
mutex_unlock(&state->mutex);
vchiq_log_trace(vchiq_core_log_level,
"%s(%p): returning %d", __func__, instance, status);
if (status == VCHIQ_SUCCESS) {
struct list_head *pos, *next;
list_for_each_safe(pos, next,
&instance->bulk_waiter_list) {
struct bulk_waiter_node *waiter;
waiter = list_entry(pos,
struct bulk_waiter_node,
list);
list_del(pos);
vchiq_log_info(vchiq_arm_log_level,
"bulk_waiter - cleaned up %x "
"for pid %d",
(unsigned int)waiter, waiter->pid);
kfree(waiter);
}
kfree(instance);
}
return status;
}
EXPORT_SYMBOL(vchiq_shutdown);
/****************************************************************************
*
* vchiq_is_connected
*
***************************************************************************/
int vchiq_is_connected(VCHIQ_INSTANCE_T instance)
{
return instance->connected;
}
/****************************************************************************
*
* vchiq_connect
*
***************************************************************************/
VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance)
{
VCHIQ_STATUS_T status;
VCHIQ_STATE_T *state = instance->state;
vchiq_log_trace(vchiq_core_log_level,
"%s(%p) called", __func__, instance);
if (mutex_lock_interruptible(&state->mutex) != 0) {
vchiq_log_trace(vchiq_core_log_level,
"%s: call to mutex_lock failed", __func__);
status = VCHIQ_RETRY;
goto failed;
}
status = vchiq_connect_internal(state, instance);
if (status == VCHIQ_SUCCESS)
instance->connected = 1;
mutex_unlock(&state->mutex);
failed:
vchiq_log_trace(vchiq_core_log_level,
"%s(%p): returning %d", __func__, instance, status);
return status;
}
EXPORT_SYMBOL(vchiq_connect);
/****************************************************************************
*
* vchiq_add_service
*
***************************************************************************/
VCHIQ_STATUS_T vchiq_add_service(
VCHIQ_INSTANCE_T instance,
const VCHIQ_SERVICE_PARAMS_T *params,
VCHIQ_SERVICE_HANDLE_T *phandle)
{
VCHIQ_STATUS_T status;
VCHIQ_STATE_T *state = instance->state;
VCHIQ_SERVICE_T *service = NULL;
int srvstate;
vchiq_log_trace(vchiq_core_log_level,
"%s(%p) called", __func__, instance);
*phandle = VCHIQ_SERVICE_HANDLE_INVALID;
srvstate = vchiq_is_connected(instance)
? VCHIQ_SRVSTATE_LISTENING
: VCHIQ_SRVSTATE_HIDDEN;
service = vchiq_add_service_internal(
state,
params,
srvstate,
instance,
NULL);
if (service) {
*phandle = service->handle;
status = VCHIQ_SUCCESS;
} else
status = VCHIQ_ERROR;
vchiq_log_trace(vchiq_core_log_level,
"%s(%p): returning %d", __func__, instance, status);
return status;
}
EXPORT_SYMBOL(vchiq_add_service);
/****************************************************************************
*
* vchiq_open_service
*
***************************************************************************/
VCHIQ_STATUS_T vchiq_open_service(
VCHIQ_INSTANCE_T instance,
const VCHIQ_SERVICE_PARAMS_T *params,
VCHIQ_SERVICE_HANDLE_T *phandle)
{
VCHIQ_STATUS_T status = VCHIQ_ERROR;
VCHIQ_STATE_T *state = instance->state;
VCHIQ_SERVICE_T *service = NULL;
vchiq_log_trace(vchiq_core_log_level,
"%s(%p) called", __func__, instance);
*phandle = VCHIQ_SERVICE_HANDLE_INVALID;
if (!vchiq_is_connected(instance))
goto failed;
service = vchiq_add_service_internal(state,
params,
VCHIQ_SRVSTATE_OPENING,
instance,
NULL);
if (service) {
*phandle = service->handle;
status = vchiq_open_service_internal(service, current->pid);
if (status != VCHIQ_SUCCESS) {
vchiq_remove_service(service->handle);
*phandle = VCHIQ_SERVICE_HANDLE_INVALID;
}
}
failed:
vchiq_log_trace(vchiq_core_log_level,
"%s(%p): returning %d", __func__, instance, status);
return status;
}
EXPORT_SYMBOL(vchiq_open_service);
VCHIQ_STATUS_T
vchiq_queue_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle,
const void *data, unsigned int size, void *userdata)
{
return vchiq_bulk_transfer(handle,
VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata,
VCHIQ_BULK_MODE_CALLBACK, VCHIQ_BULK_TRANSMIT);
}
EXPORT_SYMBOL(vchiq_queue_bulk_transmit);
VCHIQ_STATUS_T
vchiq_queue_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data,
unsigned int size, void *userdata)
{
return vchiq_bulk_transfer(handle,
VCHI_MEM_HANDLE_INVALID, data, size, userdata,
VCHIQ_BULK_MODE_CALLBACK, VCHIQ_BULK_RECEIVE);
}
EXPORT_SYMBOL(vchiq_queue_bulk_receive);
VCHIQ_STATUS_T
vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle, const void *data,
unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode)
{
VCHIQ_STATUS_T status;
switch (mode) {
case VCHIQ_BULK_MODE_NOCALLBACK:
case VCHIQ_BULK_MODE_CALLBACK:
status = vchiq_bulk_transfer(handle,
VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata,
mode, VCHIQ_BULK_TRANSMIT);
break;
case VCHIQ_BULK_MODE_BLOCKING:
status = vchiq_blocking_bulk_transfer(handle,
(void *)data, size, VCHIQ_BULK_TRANSMIT);
break;
default:
return VCHIQ_ERROR;
}
return status;
}
EXPORT_SYMBOL(vchiq_bulk_transmit);
VCHIQ_STATUS_T
vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data,
unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode)
{
VCHIQ_STATUS_T status;
switch (mode) {
case VCHIQ_BULK_MODE_NOCALLBACK:
case VCHIQ_BULK_MODE_CALLBACK:
status = vchiq_bulk_transfer(handle,
VCHI_MEM_HANDLE_INVALID, data, size, userdata,
mode, VCHIQ_BULK_RECEIVE);
break;
case VCHIQ_BULK_MODE_BLOCKING:
status = vchiq_blocking_bulk_transfer(handle,
(void *)data, size, VCHIQ_BULK_RECEIVE);
break;
default:
return VCHIQ_ERROR;
}
return status;
}
EXPORT_SYMBOL(vchiq_bulk_receive);
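/*
 * Blocking transfers park their bulk_waiter on a per-instance list keyed by
 * pid when the transfer returns VCHIQ_RETRY (e.g. interrupted by a signal),
 * so that a retry from the same thread can reclaim the outstanding waiter
 * instead of leaking it; completed or failed transfers free it immediately.
 */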
static VCHIQ_STATUS_T
vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data,
unsigned int size, VCHIQ_BULK_DIR_T dir)
{
VCHIQ_INSTANCE_T instance;
VCHIQ_SERVICE_T *service;
VCHIQ_STATUS_T status;
struct bulk_waiter_node *waiter = NULL;
struct list_head *pos;
service = find_service_by_handle(handle);
if (!service)
return VCHIQ_ERROR;
instance = service->instance;
unlock_service(service);
mutex_lock(&instance->bulk_waiter_list_mutex);
list_for_each(pos, &instance->bulk_waiter_list) {
if (list_entry(pos, struct bulk_waiter_node,
list)->pid == current->pid) {
waiter = list_entry(pos,
struct bulk_waiter_node,
list);
list_del(pos);
break;
}
}
mutex_unlock(&instance->bulk_waiter_list_mutex);
if (waiter) {
VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk;
if (bulk) {
/* This thread has an outstanding bulk transfer. */
if ((bulk->data != data) ||
(bulk->size != size)) {
/* This is not a retry of the previous one.
** Cancel the signal when the transfer
** completes. */
spin_lock(&bulk_waiter_spinlock);
bulk->userdata = NULL;
spin_unlock(&bulk_waiter_spinlock);
}
}
}
if (!waiter) {
waiter = kzalloc(sizeof(struct bulk_waiter_node), GFP_KERNEL);
if (!waiter) {
vchiq_log_error(vchiq_core_log_level,
"%s - out of memory", __func__);
return VCHIQ_ERROR;
}
}
status = vchiq_bulk_transfer(handle, VCHI_MEM_HANDLE_INVALID,
data, size, &waiter->bulk_waiter, VCHIQ_BULK_MODE_BLOCKING,
dir);
if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
!waiter->bulk_waiter.bulk) {
VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk;
if (bulk) {
/* Cancel the signal when the transfer
** completes. */
spin_lock(&bulk_waiter_spinlock);
bulk->userdata = NULL;
spin_unlock(&bulk_waiter_spinlock);
}
kfree(waiter);
} else {
waiter->pid = current->pid;
mutex_lock(&instance->bulk_waiter_list_mutex);
list_add(&waiter->list, &instance->bulk_waiter_list);
mutex_unlock(&instance->bulk_waiter_list_mutex);
vchiq_log_info(vchiq_arm_log_level,
"saved bulk_waiter %pK for pid %d",
waiter, current->pid);
}
return status;
}
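/*
 * Illustrative usage sketch (not built): in VCHIQ_BULK_MODE_BLOCKING the
 * wait can be interrupted by a signal and return VCHIQ_RETRY; because the
 * bulk_waiter is saved against current->pid above, the same thread can
 * simply call again with the same buffer and re-attach to the outstanding
 * transfer. example_send_blocking() is a hypothetical name.
 */
#if 0
static VCHIQ_STATUS_T example_send_blocking(VCHIQ_SERVICE_HANDLE_T service,
	const void *buf, unsigned int len)
{
	VCHIQ_STATUS_T status;

	do {
		status = vchiq_bulk_transmit(service, buf, len, NULL,
			VCHIQ_BULK_MODE_BLOCKING);
	} while (status == VCHIQ_RETRY && !fatal_signal_pending(current));

	return status;
}
#endif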
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_KILLABLE_H
#define VCHIQ_KILLABLE_H
#include <linux/mutex.h>
#include <linux/semaphore.h>
#define SHUTDOWN_SIGS (sigmask(SIGKILL) | sigmask(SIGINT) | sigmask(SIGQUIT) | sigmask(SIGTRAP) | sigmask(SIGSTOP) | sigmask(SIGCONT))
static inline int __must_check down_interruptible_killable(struct semaphore *sem)
{
/* Allow interception of killable signals only. We don't want to be interrupted by harmless signals like SIGALRM */
int ret;
sigset_t blocked, oldset;
siginitsetinv(&blocked, SHUTDOWN_SIGS);
sigprocmask(SIG_SETMASK, &blocked, &oldset);
ret = down_interruptible(sem);
sigprocmask(SIG_SETMASK, &oldset, NULL);
return ret;
}
#define down_interruptible down_interruptible_killable
static inline int __must_check mutex_lock_interruptible_killable(struct mutex *lock)
{
/* Allow interception of killable signals only. We don't want to be interrupted by harmless signals like SIGALRM */
int ret;
sigset_t blocked, oldset;
siginitsetinv(&blocked, SHUTDOWN_SIGS);
sigprocmask(SIG_SETMASK, &blocked, &oldset);
ret = mutex_lock_interruptible(lock);
sigprocmask(SIG_SETMASK, &oldset, NULL);
return ret;
}
#define mutex_lock_interruptible mutex_lock_interruptible_killable
#endif
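/*
 * Illustrative usage sketch (not built): a file that includes this header
 * (after linux/semaphore.h) keeps calling the standard names, but the macros
 * above redirect them to the _killable variants, so only the SHUTDOWN_SIGS
 * set - which deliberately includes SIGSTOP/SIGCONT for the benefit of gdb -
 * can interrupt the wait.
 */
#if 0
static int example_wait(struct semaphore *sem)
{
	return down_interruptible(sem);	/* resolves to down_interruptible_killable() */
}
#endif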
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_MEMDRV_H
#define VCHIQ_MEMDRV_H
/* ---- Include Files ----------------------------------------------------- */
#include <linux/kernel.h>
#include "vchiq_if.h"
/* ---- Constants and Types ---------------------------------------------- */
typedef struct {
void *armSharedMemVirt;
dma_addr_t armSharedMemPhys;
size_t armSharedMemSize;
void *vcSharedMemVirt;
dma_addr_t vcSharedMemPhys;
size_t vcSharedMemSize;
} VCHIQ_SHARED_MEM_INFO_T;
/* ---- Variable Externs ------------------------------------------------- */
/* ---- Function Prototypes ---------------------------------------------- */
void vchiq_get_shared_mem_info(VCHIQ_SHARED_MEM_INFO_T *info);
VCHIQ_STATUS_T vchiq_memdrv_initialise(void);
VCHIQ_STATUS_T vchiq_userdrv_create_instance(
const VCHIQ_PLATFORM_DATA_T *platform_data);
VCHIQ_STATUS_T vchiq_userdrv_suspend(
const VCHIQ_PLATFORM_DATA_T *platform_data);
VCHIQ_STATUS_T vchiq_userdrv_resume(
const VCHIQ_PLATFORM_DATA_T *platform_data);
#endif
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_PAGELIST_H
#define VCHIQ_PAGELIST_H
#ifndef PAGE_SIZE
#define PAGE_SIZE 4096
#endif
#define CACHE_LINE_SIZE 32
#define PAGELIST_WRITE 0
#define PAGELIST_READ 1
#define PAGELIST_READ_WITH_FRAGMENTS 2
typedef struct pagelist_struct {
unsigned long length;
unsigned short type;
unsigned short offset;
unsigned long addrs[1]; /* N.B. 12 LSBs hold the number of following
pages at consecutive addresses. */
} PAGELIST_T;
typedef struct fragments_struct {
char headbuf[CACHE_LINE_SIZE];
char tailbuf[CACHE_LINE_SIZE];
} FRAGMENTS_T;
#endif /* VCHIQ_PAGELIST_H */
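/*
 * Illustrative decoding sketch (not built) for the addrs[] packing described
 * above: each entry is a page-aligned bus address, and the 12 address bits
 * that a 4K-aligned address never uses hold the number of further pages that
 * follow at consecutive addresses. example_walk_pagelist() is hypothetical.
 */
#if 0
static void example_walk_pagelist(const PAGELIST_T *pagelist,
	unsigned int num_entries)
{
	unsigned int i;

	for (i = 0; i < num_entries; i++) {
		unsigned long entry = pagelist->addrs[i];
		unsigned long base = entry & ~0xfffUL;		/* first page address */
		unsigned long pages = (entry & 0xfff) + 1;	/* pages in this run */

		/* 'pages' consecutive pages start at bus address 'base' */
		(void)base;
		(void)pages;
	}
}
#endif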
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/module.h>
#include <linux/types.h>
#include "interface/vchi/vchi.h"
#include "vchiq.h"
#include "vchiq_core.h"
#include "vchiq_util.h"
#include <linux/stddef.h>
#define vchiq_status_to_vchi(status) ((int32_t)status)
typedef struct {
VCHIQ_SERVICE_HANDLE_T handle;
VCHIU_QUEUE_T queue;
VCHI_CALLBACK_T callback;
void *callback_param;
} SHIM_SERVICE_T;
/* ----------------------------------------------------------------------
* return pointer to the mphi message driver function table
* -------------------------------------------------------------------- */
const VCHI_MESSAGE_DRIVER_T *
vchi_mphi_message_driver_func_table(void)
{
return NULL;
}
/* ----------------------------------------------------------------------
* return a pointer to the 'single' connection driver fops
* -------------------------------------------------------------------- */
const VCHI_CONNECTION_API_T *
single_get_func_table(void)
{
return NULL;
}
VCHI_CONNECTION_T *vchi_create_connection(
const VCHI_CONNECTION_API_T *function_table,
const VCHI_MESSAGE_DRIVER_T *low_level)
{
(void)function_table;
(void)low_level;
return NULL;
}
/***********************************************************
* Name: vchi_msg_peek
*
* Arguments: const VCHI_SERVICE_HANDLE_T handle,
* void **data,
* uint32_t *msg_size,
* VCHI_FLAGS_T flags
*
* Description: Routine to return a pointer to the current message (to allow in
* place processing). The message can be removed using
* vchi_msg_remove when you're finished
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_msg_peek(VCHI_SERVICE_HANDLE_T handle,
void **data,
uint32_t *msg_size,
VCHI_FLAGS_T flags)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_HEADER_T *header;
WARN_ON((flags != VCHI_FLAGS_NONE) &&
(flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
if (flags == VCHI_FLAGS_NONE)
if (vchiu_queue_is_empty(&service->queue))
return -1;
header = vchiu_queue_peek(&service->queue);
*data = header->data;
*msg_size = header->size;
return 0;
}
EXPORT_SYMBOL(vchi_msg_peek);
/***********************************************************
* Name: vchi_msg_remove
*
* Arguments: const VCHI_SERVICE_HANDLE_T handle,
*
* Description: Routine to remove a message (after it has been read with
* vchi_msg_peek)
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_msg_remove(VCHI_SERVICE_HANDLE_T handle)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_HEADER_T *header;
header = vchiu_queue_pop(&service->queue);
vchiq_release_message(service->handle, header);
return 0;
}
EXPORT_SYMBOL(vchi_msg_remove);
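/*
 * Illustrative usage sketch (not built): the in-place receive pattern - peek
 * at the head of the service queue, parse the payload where it sits, then
 * remove (and thereby release) it. example_peek_and_remove() is a
 * hypothetical name.
 */
#if 0
static int32_t example_peek_and_remove(VCHI_SERVICE_HANDLE_T handle)
{
	void *data;
	uint32_t size;

	if (vchi_msg_peek(handle, &data, &size, VCHI_FLAGS_NONE) != 0)
		return -1;	/* nothing queued */

	/* ... parse 'size' bytes at 'data' in place ... */

	return vchi_msg_remove(handle);
}
#endif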
/***********************************************************
* Name: vchi_msg_queue
*
* Arguments: VCHI_SERVICE_HANDLE_T handle,
* const void *data,
* uint32_t data_size,
* VCHI_FLAGS_T flags,
* void *msg_handle,
*
* Description: Thin wrapper to queue a message onto a connection
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_msg_queue(VCHI_SERVICE_HANDLE_T handle,
const void *data,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *msg_handle)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_ELEMENT_T element = {data, data_size};
VCHIQ_STATUS_T status;
(void)msg_handle;
WARN_ON(flags != VCHI_FLAGS_BLOCK_UNTIL_QUEUED);
status = vchiq_queue_message(service->handle, &element, 1);
/* vchiq_queue_message() may return VCHIQ_RETRY, so we need to
** implement a retry mechanism since this function is supposed
** to block until queued
*/
while (status == VCHIQ_RETRY) {
msleep(1);
status = vchiq_queue_message(service->handle, &element, 1);
}
return vchiq_status_to_vchi(status);
}
EXPORT_SYMBOL(vchi_msg_queue);
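/*
 * Illustrative usage sketch (not built): queueing a small control message.
 * vchi_msg_queue() retries internally on VCHIQ_RETRY, so it does not return
 * until the message has been queued. The payload and example_send() name are
 * hypothetical.
 */
#if 0
static int32_t example_send(VCHI_SERVICE_HANDLE_T handle)
{
	static const uint32_t msg[2] = { 1, 42 };

	return vchi_msg_queue(handle, msg, sizeof(msg),
		VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
}
#endif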
/***********************************************************
* Name: vchi_bulk_queue_receive
*
* Arguments: VCHI_BULK_HANDLE_T handle,
* void *data_dst,
* const uint32_t data_size,
* VCHI_FLAGS_T flags
* void *bulk_handle
*
* Description: Routine to setup a rcv buffer
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_bulk_queue_receive(VCHI_SERVICE_HANDLE_T handle,
void *data_dst,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *bulk_handle)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_BULK_MODE_T mode;
VCHIQ_STATUS_T status;
switch ((int)flags) {
case VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
| VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
WARN_ON(!service->callback);
mode = VCHIQ_BULK_MODE_CALLBACK;
break;
case VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE:
mode = VCHIQ_BULK_MODE_BLOCKING;
break;
case VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
case VCHI_FLAGS_NONE:
mode = VCHIQ_BULK_MODE_NOCALLBACK;
break;
default:
WARN(1, "unsupported message\n");
return vchiq_status_to_vchi(VCHIQ_ERROR);
}
status = vchiq_bulk_receive(service->handle, data_dst, data_size,
bulk_handle, mode);
/* vchiq_bulk_receive() may return VCHIQ_RETRY, so we need to
** implement a retry mechanism since this function is supposed
** to block until queued
*/
while (status == VCHIQ_RETRY) {
msleep(1);
status = vchiq_bulk_receive(service->handle, data_dst,
data_size, bulk_handle, mode);
}
return vchiq_status_to_vchi(status);
}
EXPORT_SYMBOL(vchi_bulk_queue_receive);
/***********************************************************
* Name: vchi_bulk_queue_transmit
*
* Arguments: VCHI_BULK_HANDLE_T handle,
* const void *data_src,
* uint32_t data_size,
* VCHI_FLAGS_T flags,
* void *bulk_handle
*
* Description: Routine to transmit some data
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_bulk_queue_transmit(VCHI_SERVICE_HANDLE_T handle,
const void *data_src,
uint32_t data_size,
VCHI_FLAGS_T flags,
void *bulk_handle)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_BULK_MODE_T mode;
VCHIQ_STATUS_T status;
switch ((int)flags) {
case VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
| VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
WARN_ON(!service->callback);
mode = VCHIQ_BULK_MODE_CALLBACK;
break;
case VCHI_FLAGS_BLOCK_UNTIL_DATA_READ:
case VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE:
mode = VCHIQ_BULK_MODE_BLOCKING;
break;
case VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
case VCHI_FLAGS_NONE:
mode = VCHIQ_BULK_MODE_NOCALLBACK;
break;
default:
WARN(1, "unsupported message\n");
return vchiq_status_to_vchi(VCHIQ_ERROR);
}
status = vchiq_bulk_transmit(service->handle, data_src, data_size,
bulk_handle, mode);
/* vchiq_bulk_transmit() may return VCHIQ_RETRY, so we need to
** implement a retry mechanism since this function is supposed
** to block until queued
*/
while (status == VCHIQ_RETRY) {
msleep(1);
status = vchiq_bulk_transmit(service->handle, data_src,
data_size, bulk_handle, mode);
}
return vchiq_status_to_vchi(status);
}
EXPORT_SYMBOL(vchi_bulk_queue_transmit);
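/*
 * Illustrative usage sketch (not built): the VCHI flags pick the underlying
 * VCHIQ bulk mode. The combination below maps to VCHIQ_BULK_MODE_CALLBACK,
 * so completion arrives at the service callback as VCHI_CALLBACK_BULK_SENT
 * with 'cookie' attached. example_bulk_send() is a hypothetical name.
 */
#if 0
static int32_t example_bulk_send(VCHI_SERVICE_HANDLE_T handle,
	const void *buf, uint32_t len, void *cookie)
{
	return vchi_bulk_queue_transmit(handle, buf, len,
		VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE |
		VCHI_FLAGS_BLOCK_UNTIL_QUEUED,
		cookie);
}
#endif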
/***********************************************************
* Name: vchi_msg_dequeue
*
* Arguments: VCHI_SERVICE_HANDLE_T handle,
* void *data,
* uint32_t max_data_size_to_read,
* uint32_t *actual_msg_size
* VCHI_FLAGS_T flags
*
* Description: Routine to dequeue a message into the supplied buffer
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_msg_dequeue(VCHI_SERVICE_HANDLE_T handle,
void *data,
uint32_t max_data_size_to_read,
uint32_t *actual_msg_size,
VCHI_FLAGS_T flags)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_HEADER_T *header;
WARN_ON((flags != VCHI_FLAGS_NONE) &&
(flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
if (flags == VCHI_FLAGS_NONE)
if (vchiu_queue_is_empty(&service->queue))
return -1;
header = vchiu_queue_pop(&service->queue);
memcpy(data, header->data, header->size < max_data_size_to_read ?
header->size : max_data_size_to_read);
*actual_msg_size = header->size;
vchiq_release_message(service->handle, header);
return 0;
}
EXPORT_SYMBOL(vchi_msg_dequeue);
/***********************************************************
* Name: vchi_msg_queuev
*
* Arguments: VCHI_SERVICE_HANDLE_T handle,
* VCHI_MSG_VECTOR_T *vector,
* uint32_t count,
* VCHI_FLAGS_T flags,
* void *msg_handle
*
* Description: Thin wrapper to queue a message onto a connection
*
* Returns: int32_t - success == 0
*
***********************************************************/
vchiq_static_assert(sizeof(VCHI_MSG_VECTOR_T) == sizeof(VCHIQ_ELEMENT_T));
vchiq_static_assert(offsetof(VCHI_MSG_VECTOR_T, vec_base) ==
offsetof(VCHIQ_ELEMENT_T, data));
vchiq_static_assert(offsetof(VCHI_MSG_VECTOR_T, vec_len) ==
offsetof(VCHIQ_ELEMENT_T, size));
int32_t vchi_msg_queuev(VCHI_SERVICE_HANDLE_T handle,
VCHI_MSG_VECTOR_T *vector,
uint32_t count,
VCHI_FLAGS_T flags,
void *msg_handle)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
(void)msg_handle;
WARN_ON(flags != VCHI_FLAGS_BLOCK_UNTIL_QUEUED);
return vchiq_status_to_vchi(vchiq_queue_message(service->handle,
(const VCHIQ_ELEMENT_T *)vector, count));
}
EXPORT_SYMBOL(vchi_msg_queuev);
/***********************************************************
* Name: vchi_held_msg_release
*
* Arguments: VCHI_HELD_MSG_T *message
*
* Description: Routine to release a held message (after it has been read with
* vchi_msg_hold)
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_held_msg_release(VCHI_HELD_MSG_T *message)
{
vchiq_release_message((VCHIQ_SERVICE_HANDLE_T)message->service,
(VCHIQ_HEADER_T *)message->message);
return 0;
}
EXPORT_SYMBOL(vchi_held_msg_release);
/***********************************************************
* Name: vchi_msg_hold
*
* Arguments: VCHI_SERVICE_HANDLE_T handle,
* void **data,
* uint32_t *msg_size,
* VCHI_FLAGS_T flags,
* VCHI_HELD_MSG_T *message_handle
*
* Description: Routine to return a pointer to the current message (to allow
* in place processing). The message is dequeued - don't forget
* to release the message using vchi_held_msg_release when you're
* finished.
*
* Returns: int32_t - success == 0
*
***********************************************************/
int32_t vchi_msg_hold(VCHI_SERVICE_HANDLE_T handle,
void **data,
uint32_t *msg_size,
VCHI_FLAGS_T flags,
VCHI_HELD_MSG_T *message_handle)
{
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_HEADER_T *header;
WARN_ON((flags != VCHI_FLAGS_NONE) &&
(flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
if (flags == VCHI_FLAGS_NONE)
if (vchiu_queue_is_empty(&service->queue))
return -1;
header = vchiu_queue_pop(&service->queue);
*data = header->data;
*msg_size = header->size;
message_handle->service =
(struct opaque_vchi_service_t *)service->handle;
message_handle->message = header;
return 0;
}
EXPORT_SYMBOL(vchi_msg_hold);
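/*
 * Illustrative usage sketch (not built): unlike vchi_msg_peek(),
 * vchi_msg_hold() dequeues the message, so it can be kept across other queue
 * operations, but it must eventually be handed back with
 * vchi_held_msg_release(). example_hold() is a hypothetical name.
 */
#if 0
static int32_t example_hold(VCHI_SERVICE_HANDLE_T handle)
{
	VCHI_HELD_MSG_T held;
	void *data;
	uint32_t size;

	if (vchi_msg_hold(handle, &data, &size, VCHI_FLAGS_NONE, &held) != 0)
		return -1;	/* nothing queued */

	/* ... use 'data'/'size' for as long as required ... */

	return vchi_held_msg_release(&held);
}
#endif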
/***********************************************************
* Name: vchi_initialise
*
* Arguments: VCHI_INSTANCE_T *instance_handle
*
* Description: Initialises the hardware but does not transmit anything.
* When run as a host application this will be called twice, hence the
* need to malloc the state information
*
* Returns: 0 if successful, failure otherwise
*
***********************************************************/
int32_t vchi_initialise(VCHI_INSTANCE_T *instance_handle)
{
VCHIQ_INSTANCE_T instance;
VCHIQ_STATUS_T status;
status = vchiq_initialise(&instance);
*instance_handle = (VCHI_INSTANCE_T)instance;
return vchiq_status_to_vchi(status);
}
EXPORT_SYMBOL(vchi_initialise);
/***********************************************************
* Name: vchi_connect
*
* Arguments: VCHI_CONNECTION_T **connections
* const uint32_t num_connections
* VCHI_INSTANCE_T instance_handle)
*
* Description: Starts the command service on each connection,
* causing INIT messages to be pinged back and forth
*
* Returns: 0 if successful, failure otherwise
*
***********************************************************/
int32_t vchi_connect(VCHI_CONNECTION_T **connections,
const uint32_t num_connections,
VCHI_INSTANCE_T instance_handle)
{
VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
(void)connections;
(void)num_connections;
return vchiq_connect(instance);
}
EXPORT_SYMBOL(vchi_connect);
/***********************************************************
* Name: vchi_disconnect
*
* Arguments: VCHI_INSTANCE_T instance_handle
*
* Description: Stops the command service on each connection,
* causing DE-INIT messages to be pinged back and forth
*
* Returns: 0 if successful, failure otherwise
*
***********************************************************/
int32_t vchi_disconnect(VCHI_INSTANCE_T instance_handle)
{
VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
return vchiq_status_to_vchi(vchiq_shutdown(instance));
}
EXPORT_SYMBOL(vchi_disconnect);
/***********************************************************
* Name: vchi_service_open
* Name: vchi_service_create
*
* Arguments: VCHI_INSTANCE_T *instance_handle
* SERVICE_CREATION_T *setup,
* VCHI_SERVICE_HANDLE_T *handle
*
* Description: Routine to open a service
*
* Returns: int32_t - success == 0
*
***********************************************************/
static VCHIQ_STATUS_T shim_callback(VCHIQ_REASON_T reason,
VCHIQ_HEADER_T *header, VCHIQ_SERVICE_HANDLE_T handle, void *bulk_user)
{
SHIM_SERVICE_T *service =
(SHIM_SERVICE_T *)VCHIQ_GET_SERVICE_USERDATA(handle);
if (!service->callback)
goto release;
switch (reason) {
case VCHIQ_MESSAGE_AVAILABLE:
vchiu_queue_push(&service->queue, header);
service->callback(service->callback_param,
VCHI_CALLBACK_MSG_AVAILABLE, NULL);
goto done;
case VCHIQ_BULK_TRANSMIT_DONE:
service->callback(service->callback_param,
VCHI_CALLBACK_BULK_SENT, bulk_user);
break;
case VCHIQ_BULK_RECEIVE_DONE:
service->callback(service->callback_param,
VCHI_CALLBACK_BULK_RECEIVED, bulk_user);
break;
case VCHIQ_SERVICE_CLOSED:
service->callback(service->callback_param,
VCHI_CALLBACK_SERVICE_CLOSED, NULL);
break;
case VCHIQ_SERVICE_OPENED:
/* No equivalent VCHI reason */
break;
case VCHIQ_BULK_TRANSMIT_ABORTED:
service->callback(service->callback_param,
VCHI_CALLBACK_BULK_TRANSMIT_ABORTED,
bulk_user);
break;
case VCHIQ_BULK_RECEIVE_ABORTED:
service->callback(service->callback_param,
VCHI_CALLBACK_BULK_RECEIVE_ABORTED,
bulk_user);
break;
default:
WARN(1, "not supported\n");
break;
}
release:
vchiq_release_message(service->handle, header);
done:
return VCHIQ_SUCCESS;
}
static SHIM_SERVICE_T *service_alloc(VCHIQ_INSTANCE_T instance,
SERVICE_CREATION_T *setup)
{
SHIM_SERVICE_T *service = kzalloc(sizeof(SHIM_SERVICE_T), GFP_KERNEL);
(void)instance;
if (service) {
if (vchiu_queue_init(&service->queue, 64)) {
service->callback = setup->callback;
service->callback_param = setup->callback_param;
} else {
kfree(service);
service = NULL;
}
}
return service;
}
static void service_free(SHIM_SERVICE_T *service)
{
if (service) {
vchiu_queue_delete(&service->queue);
kfree(service);
}
}
int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle,
SERVICE_CREATION_T *setup,
VCHI_SERVICE_HANDLE_T *handle)
{
VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
SHIM_SERVICE_T *service = service_alloc(instance, setup);
*handle = (VCHI_SERVICE_HANDLE_T)service;
if (service) {
VCHIQ_SERVICE_PARAMS_T params;
VCHIQ_STATUS_T status;
memset(&params, 0, sizeof(params));
params.fourcc = setup->service_id;
params.callback = shim_callback;
params.userdata = service;
params.version = setup->version.version;
params.version_min = setup->version.version_min;
status = vchiq_open_service(instance, &params,
&service->handle);
if (status != VCHIQ_SUCCESS) {
service_free(service);
service = NULL;
*handle = NULL;
}
}
return (service != NULL) ? 0 : -1;
}
EXPORT_SYMBOL(vchi_service_open);
int32_t vchi_service_create(VCHI_INSTANCE_T instance_handle,
SERVICE_CREATION_T *setup,
VCHI_SERVICE_HANDLE_T *handle)
{
VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
SHIM_SERVICE_T *service = service_alloc(instance, setup);
*handle = (VCHI_SERVICE_HANDLE_T)service;
if (service) {
VCHIQ_SERVICE_PARAMS_T params;
VCHIQ_STATUS_T status;
memset(&params, 0, sizeof(params));
params.fourcc = setup->service_id;
params.callback = shim_callback;
params.userdata = service;
params.version = setup->version.version;
params.version_min = setup->version.version_min;
status = vchiq_add_service(instance, &params, &service->handle);
if (status != VCHIQ_SUCCESS) {
service_free(service);
service = NULL;
*handle = NULL;
}
}
return (service != NULL) ? 0 : -1;
}
EXPORT_SYMBOL(vchi_service_create);
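/*
 * Illustrative usage sketch (not built): an end-to-end open. Only the
 * SERVICE_CREATION_T fields actually read above (service_id, callback,
 * callback_param and version.version/version_min) are filled in; the
 * callback prototype is assumed from the way shim_callback() invokes it, and
 * the 'EXMP' service id and example_* names are hypothetical.
 */
#if 0
static void example_vchi_callback(void *param, VCHI_CALLBACK_REASON_T reason,
	void *msg_handle)
{
	/* e.g. wake a waiter when reason == VCHI_CALLBACK_MSG_AVAILABLE */
}

static int32_t example_open_service(VCHI_SERVICE_HANDLE_T *out)
{
	VCHI_INSTANCE_T instance;
	SERVICE_CREATION_T setup;

	if (vchi_initialise(&instance) != 0)
		return -1;
	if (vchi_connect(NULL, 0, instance) != 0)
		return -1;

	memset(&setup, 0, sizeof(setup));
	setup.service_id = ('E' << 24) | ('X' << 16) | ('M' << 8) | 'P';
	setup.callback = example_vchi_callback;
	setup.callback_param = NULL;
	setup.version.version = 1;
	setup.version.version_min = 1;

	return vchi_service_open(instance, &setup, out);
}
#endif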
int32_t vchi_service_close(const VCHI_SERVICE_HANDLE_T handle)
{
int32_t ret = -1;
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
if (service) {
VCHIQ_STATUS_T status = vchiq_close_service(service->handle);
if (status == VCHIQ_SUCCESS) {
service_free(service);
service = NULL;
}
ret = vchiq_status_to_vchi(status);
}
return ret;
}
EXPORT_SYMBOL(vchi_service_close);
int32_t vchi_service_destroy(const VCHI_SERVICE_HANDLE_T handle)
{
int32_t ret = -1;
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
if (service) {
VCHIQ_STATUS_T status = vchiq_remove_service(service->handle);
if (status == VCHIQ_SUCCESS) {
service_free(service);
service = NULL;
}
ret = vchiq_status_to_vchi(status);
}
return ret;
}
EXPORT_SYMBOL(vchi_service_destroy);
int32_t vchi_service_set_option(const VCHI_SERVICE_HANDLE_T handle,
VCHI_SERVICE_OPTION_T option,
int value)
{
int32_t ret = -1;
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
VCHIQ_SERVICE_OPTION_T vchiq_option;
switch (option) {
case VCHI_SERVICE_OPTION_TRACE:
vchiq_option = VCHIQ_SERVICE_OPTION_TRACE;
break;
case VCHI_SERVICE_OPTION_SYNCHRONOUS:
vchiq_option = VCHIQ_SERVICE_OPTION_SYNCHRONOUS;
break;
default:
service = NULL;
break;
}
if (service) {
VCHIQ_STATUS_T status =
vchiq_set_service_option(service->handle,
vchiq_option,
value);
ret = vchiq_status_to_vchi(status);
}
return ret;
}
EXPORT_SYMBOL(vchi_service_set_option);
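/*
 * Illustrative usage sketch (not built): switching on per-service tracing
 * for an open service; VCHI_SERVICE_OPTION_TRACE is translated to
 * VCHIQ_SERVICE_OPTION_TRACE above. example_enable_trace() is hypothetical.
 */
#if 0
static int32_t example_enable_trace(VCHI_SERVICE_HANDLE_T handle)
{
	return vchi_service_set_option(handle, VCHI_SERVICE_OPTION_TRACE, 1);
}
#endif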
int32_t vchi_get_peer_version(const VCHI_SERVICE_HANDLE_T handle, short *peer_version)
{
int32_t ret = -1;
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
if (service) {
VCHIQ_STATUS_T status = vchiq_get_peer_version(service->handle,
peer_version);
ret = vchiq_status_to_vchi(status);
}
return ret;
}
EXPORT_SYMBOL(vchi_get_peer_version);
/* ----------------------------------------------------------------------
* read a uint32_t from buffer.
* network format is defined to be little endian
* -------------------------------------------------------------------- */
uint32_t
vchi_readbuf_uint32(const void *_ptr)
{
const unsigned char *ptr = _ptr;
return ptr[0] | (ptr[1] << 8) | (ptr[2] << 16) | (ptr[3] << 24);
}
/* ----------------------------------------------------------------------
* write a uint32_t to buffer.
* network format is defined to be little endian
* -------------------------------------------------------------------- */
void
vchi_writebuf_uint32(void *_ptr, uint32_t value)
{
unsigned char *ptr = _ptr;
ptr[0] = (unsigned char)((value >> 0) & 0xFF);
ptr[1] = (unsigned char)((value >> 8) & 0xFF);
ptr[2] = (unsigned char)((value >> 16) & 0xFF);
ptr[3] = (unsigned char)((value >> 24) & 0xFF);
}
/* ----------------------------------------------------------------------
* read a uint16_t from buffer.
* network format is defined to be little endian
* -------------------------------------------------------------------- */
uint16_t
vchi_readbuf_uint16(const void *_ptr)
{
const unsigned char *ptr = _ptr;
return ptr[0] | (ptr[1] << 8);
}
/* ----------------------------------------------------------------------
* write a uint16_t into the buffer.
* network format is defined to be little endian
* -------------------------------------------------------------------- */
void
vchi_writebuf_uint16(void *_ptr, uint16_t value)
{
unsigned char *ptr = _ptr;
ptr[0] = (value >> 0) & 0xFF;
ptr[1] = (value >> 8) & 0xFF;
}
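/*
 * Illustrative round-trip sketch (not built): the wire format is little
 * endian regardless of host byte order, so a value written with
 * vchi_writebuf_uint32() reads back unchanged through vchi_readbuf_uint32().
 */
#if 0
static void example_roundtrip(void)
{
	unsigned char buf[4];

	vchi_writebuf_uint32(buf, 0x12345678);
	/* buf[] now holds 0x78, 0x56, 0x34, 0x12 */
	WARN_ON(vchi_readbuf_uint32(buf) != 0x12345678);
}
#endif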
/***********************************************************
* Name: vchi_service_use
*
* Arguments: const VCHI_SERVICE_HANDLE_T handle
*
* Description: Routine to increment refcount on a service
*
* Returns: void
*
***********************************************************/
int32_t vchi_service_use(const VCHI_SERVICE_HANDLE_T handle)
{
int32_t ret = -1;
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
if (service)
ret = vchiq_status_to_vchi(vchiq_use_service(service->handle));
return ret;
}
EXPORT_SYMBOL(vchi_service_use);
/***********************************************************
* Name: vchi_service_release
*
* Arguments: const VCHI_SERVICE_HANDLE_T handle
*
* Description: Routine to decrement refcount on a service
*
* Returns: void
*
***********************************************************/
int32_t vchi_service_release(const VCHI_SERVICE_HANDLE_T handle)
{
int32_t ret = -1;
SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
if (service)
ret = vchiq_status_to_vchi(
vchiq_release_service(service->handle));
return ret;
}
EXPORT_SYMBOL(vchi_service_release);
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "vchiq_util.h"
#include "vchiq_killable.h"
static inline int is_pow2(int i)
{
return i && !(i & (i - 1));
}
int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size)
{
WARN_ON(!is_pow2(size));
queue->size = size;
queue->read = 0;
queue->write = 0;
queue->initialized = 1;
sema_init(&queue->pop, 0);
sema_init(&queue->push, 0);
queue->storage = kcalloc(size, sizeof(VCHIQ_HEADER_T *), GFP_KERNEL);
if (queue->storage == NULL) {
vchiu_queue_delete(queue);
return 0;
}
return 1;
}
void vchiu_queue_delete(VCHIU_QUEUE_T *queue)
{
kfree(queue->storage);
}
int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue)
{
return queue->read == queue->write;
}
int vchiu_queue_is_full(VCHIU_QUEUE_T *queue)
{
return queue->write == queue->read + queue->size;
}
void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header)
{
if (!queue->initialized)
return;
while (queue->write == queue->read + queue->size) {
if (down_interruptible(&queue->pop) != 0) {
flush_signals(current);
}
}
/*
* Write to queue->storage must be visible after read from
* queue->read
*/
smp_mb();
queue->storage[queue->write & (queue->size - 1)] = header;
/*
* Write to queue->storage must be visible before write to
* queue->write
*/
smp_wmb();
queue->write++;
up(&queue->push);
}
VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue)
{
while (queue->write == queue->read) {
if (down_interruptible(&queue->push) != 0) {
flush_signals(current);
}
}
up(&queue->push); // We haven't removed anything from the queue.
/*
* Read from queue->storage must be visible after read from
* queue->write
*/
smp_rmb();
return queue->storage[queue->read & (queue->size - 1)];
}
VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue)
{
VCHIQ_HEADER_T *header;
while (queue->write == queue->read) {
if (down_interruptible(&queue->push) != 0) {
flush_signals(current);
}
}
/*
* Read from queue->storage must be visible after read from
* queue->write
*/
smp_rmb();
header = queue->storage[queue->read & (queue->size - 1)];
/*
* Read from queue->storage must be visible before write to
* queue->read
*/
smp_mb();
queue->read++;
up(&queue->pop);
return header;
}
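/*
 * Illustrative usage sketch (not built): single-producer/single-consumer use
 * of the utility queue. The size must be a power of two; push blocks while
 * the ring is full and pop blocks while it is empty. example_queue_usage()
 * is a hypothetical name.
 */
#if 0
static void example_queue_usage(VCHIQ_HEADER_T *header)
{
	VCHIU_QUEUE_T queue;

	if (!vchiu_queue_init(&queue, 64))
		return;	/* storage allocation failed */

	vchiu_queue_push(&queue, header);	/* producer side */
	header = vchiu_queue_pop(&queue);	/* consumer side */

	vchiu_queue_delete(&queue);
}
#endif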
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef VCHIQ_UTIL_H
#define VCHIQ_UTIL_H
#include <linux/types.h>
#include <linux/semaphore.h>
#include <linux/mutex.h>
#include <linux/bitops.h>
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/vmalloc.h>
#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/string.h>
#include <linux/interrupt.h>
#include <linux/random.h>
#include <linux/sched.h>
#include <linux/ctype.h>
#include <linux/uaccess.h>
#include <linux/time.h> /* for time_t */
#include <linux/slab.h>
#include "vchiq_if.h"
typedef struct {
int size;
int read;
int write;
int initialized;
struct semaphore pop;
struct semaphore push;
VCHIQ_HEADER_T **storage;
} VCHIU_QUEUE_T;
extern int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size);
extern void vchiu_queue_delete(VCHIU_QUEUE_T *queue);
extern int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue);
extern int vchiu_queue_is_full(VCHIU_QUEUE_T *queue);
extern void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header);
extern VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue);
extern VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue);
#endif
/**
* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The names of the above-listed copyright holders may not be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2, as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
* IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "vchiq_build_info.h"
#include <linux/broadcom/vc_debug_sym.h>
VC_DEBUG_DECLARE_STRING_VAR(vchiq_build_hostname, "dc4-arm-01");
VC_DEBUG_DECLARE_STRING_VAR(vchiq_build_version, "9245b4c35b99b3870e1f7dc598c5692b3c66a6f0 (tainted)");
VC_DEBUG_DECLARE_STRING_VAR(vchiq_build_time, __TIME__);
VC_DEBUG_DECLARE_STRING_VAR(vchiq_build_date, __DATE__);
const char *vchiq_get_build_hostname(void)
{
return vchiq_build_hostname;
}
const char *vchiq_get_build_version(void)
{
return vchiq_build_version;
}
const char *vchiq_get_build_date(void)
{
return vchiq_build_date;
}
const char *vchiq_get_build_time(void)
{
return vchiq_build_time;
}