Commit 559fb51b authored by Scott Bardone, committed by Jeff Garzik

Update Chelsio gige net driver.

- Use the extern prefix for function declarations where required.
- Removed a lot of wrappers, including t1_read/write_reg_4 (see the sketch
  after this list).
- Removed various macros, using native kernel calls now.
- Converted various #defines to enums.
- Removed a lot of shared code which is not currently used in "NIC only" mode.
- Removed dead code.
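
Illustration of the wrapper removal (a minimal sketch lifted from the espi.c
hunks below; surrounding code elided):

    /* before: OS-independent wrappers from osdep.h */
    t1_write_reg_4(adapter, A_ESPI_RX_RESET, 0x2);
    stat = t1_read_reg_4(adapter, A_ESPI_RX_RESET);

    /* after: native kernel MMIO accessors on the mapped BAR */
    writel(0x2, adapter->regs + A_ESPI_RX_RESET);
    stat = readl(adapter->regs + A_ESPI_RX_RESET);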

Documentation/networking/cxgb.txt:
- Updated release notes for version 2.1.1

drivers/net/chelsio/ch_ethtool.h
- removed file, no longer using ETHTOOL namespace.

drivers/net/chelsio/common.h
- moved code from osdep.h to common.h
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/cphy.h
- removed dead code.
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/cxgb2.c
- use DMA_{32,64}BIT_MASK from include/linux/dma-mapping.h (see the sketch
  after this list).
- removed unused code.
- use printk message for link info resembling drivers/net/mii.c.
- no longer using the MODULE_xxx namespace.
- no longer using "pci_" namespace.
- no longer using ETHTOOL namespace.
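
Condensed sketch of the new DMA-mask probing in init_one() (drawn from the
hunk further down; error messages and cleanup details elided):

    #include <linux/dma-mapping.h>

    /* Prefer a 64-bit DMA mask; fall back to a 32-bit mask if unavailable. */
    if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
        pci_using_dac = 1;
        if (pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))
            goto out_disable_pdev;  /* no 64-bit coherent allocations */
    } else if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
        goto out_disable_pdev;      /* no usable DMA configuration */
    }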

drivers/net/chelsio/cxgb2.h
- removed file, merged into common.h

drivers/net/chelsio/elmer0.h
- removed dead code.
- added various enums.
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/espi.c
- removed various macros, using native kernel calls now.
- removed a lot of wrappers, including t1_read/write_reg_4.

drivers/net/chelsio/espi.h
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/gmac.h
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/mv88x201x.c
- changes to sync with Chelsio TOT.

drivers/net/chelsio/osdep.h
- removed file (consolidation). osdep.h held the wrapper functions that let
  the same code base support multiple OSes; those wrappers are now gone.

drivers/net/chelsio/pm3393.c
- removed various macros, using native kernel calls now.
- removed a lot of wrappers, including t1_read/write_reg_4.
- removed unused code.

drivers/net/chelsio/regs.h
- added a few register entries for future and current feature support.
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/sge.c
- rewrote large portion of scatter-gather engine to stabilize performance.
- using u8/u16/u32 kernel types instead of __u8/__u16/__u32 compiler types.

drivers/net/chelsio/sge.h
- rewrote large portion of scatter-gather engine to stabilize performance.
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/subr.c
- merged tp.c into subr.c.
- removed various macros, using native kernel calls now.
- removed a lot of wrappers, including t1_read/write_reg_4.
- removed unused code.

drivers/net/chelsio/suni1x10gexp_regs.h
- modified copyright and authorship of file.
- added comment to #endif indicating which symbol it closes.

drivers/net/chelsio/tp.c
- removed file, merged into subr.c.

drivers/net/chelsio/tp.h
- removed file.

include/linux/pci_ids.h
- patched to include PCI_VENDOR_ID_CHELSIO 0x1425, removed the define from
  our code (see the sketch below).
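
With the vendor ID in <linux/pci_ids.h>, device-table entries build on the
CH_DEVICE() helper added to common.h. A hypothetical entry for illustration
(the device id and board index below are placeholders, not from this patch):

    /* from <linux/pci_ids.h> after this patch */
    #define PCI_VENDOR_ID_CHELSIO 0x1425

    /* common.h helper for PCI device-table entries */
    #define CH_DEVICE(devid, ssid, idx) \
        { PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, ssid, 0, 0, idx }

    static struct pci_device_id t1_pci_tbl[] = {
        CH_DEVICE(0x7, 0x3, 0),  /* placeholder ids for illustration */
        { 0 }
    };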
parent a5324343
@@ -2,9 +2,9 @@
Driver Release Notes for Linux
-Version 2.1.0
-March 8, 2005
+Version 2.1.1
+June 20, 2005
CONTENTS
========
@@ -21,8 +21,7 @@ INTRODUCTION
This document describes the Linux driver for Chelsio 10Gb Ethernet Network
Controller. This driver supports the Chelsio N210 NIC and is backward
-compatible with the Chelsio N110 model 10Gb NICs. This driver supports AMD64
-and EM64T, and x86 systems.
+compatible with the Chelsio N110 model 10Gb NICs.
FEATURES
@@ -121,23 +120,17 @@ PERFORMANCE
Disabling SACK:
    sysctl -w net.ipv4.tcp_sack=0
-Setting TCP read buffers (min/default/max):
-    sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"
-Setting TCP write buffers (min/pressure/max):
-    sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"
-Setting TCP buffer space (min/pressure/max):
-    sysctl -w net.ipv4.tcp_mem="10000000 10000000 10000000"
-Setting large number of incoming connection requests (2.6.x only):
+Setting large number of incoming connection requests:
    sysctl -w net.ipv4.tcp_max_syn_backlog=3000
Setting maximum receive socket buffer size:
-    sysctl -w net.core.rmem_max=524287
+    sysctl -w net.core.rmem_max=1024000
Setting maximum send socket buffer size:
-    sysctl -w net.core.wmem_max=524287
+    sysctl -w net.core.wmem_max=1024000
-Set smp_affinity (on a multiprocessor system) to a single CPU:
-    echo 1 > /proc/irq/<interrupt_number>/smp_affinity
Setting default receive socket buffer size:
    sysctl -w net.core.rmem_default=524287
@@ -151,8 +144,14 @@ PERFORMANCE
Setting maximum backlog (# of unprocessed packets before kernel drops):
    sysctl -w net.core.netdev_max_backlog=300000
-Set smp_affinity (on a multiprocessor system) to a single CPU:
-    echo 00000001 > /proc/irq/<interrupt_number>/smp_affinity
+Setting TCP read buffers (min/default/max):
+    sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"
+Setting TCP write buffers (min/pressure/max):
+    sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"
+Setting TCP buffer space (min/pressure/max):
+    sysctl -w net.ipv4.tcp_mem="10000000 10000000 10000000"
TCP window size for single connections:
The receive buffer (RX_WINDOW) size must be at least as large as the
@@ -186,7 +185,7 @@ DRIVER MESSAGES
may be found in /var/log/messages.
Driver up:
-    Chelsio Network Driver - version 2.1.0
+    Chelsio Network Driver - version 2.1.1
NIC detected:
    eth#: Chelsio N210 1x10GBaseX NIC (rev #), PCIX 133MHz/64-bit
@@ -282,13 +281,44 @@ KNOWN ISSUES
    the number of outstanding transactions, via BIOS configuration
    programming of the PCI-X card, to the following:
-       Data Length (bytes): 2k
-       Total allowed outstanding transactions: 1
+       Data Length (bytes): 1k
+       Total allowed outstanding transactions: 2
    Please refer to AMD 8131-HT/PCI-X Errata 26310 Rev 3.08 August 2004,
    section 56, "133-MHz Mode Split Completion Data Corruption" for more
    details with this bug and workarounds suggested by AMD.
+    It may be possible to work outside AMD's recommended PCI-X settings, try
+    increasing the Data Length to 2k bytes for increased performance. If you
+    have issues with these settings, please revert to the "safe" settings
+    and duplicate the problem before submitting a bug or asking for support.
+    NOTE: The default setting on most systems is 8 outstanding transactions
+    and 2k bytes data length.
+4. On multiprocessor systems, it has been noted that an application which
+   is handling 10Gb networking can switch between CPUs causing degraded
+   and/or unstable performance.
+   If running on an SMP system and taking performance measurements, it
+   is suggested you either run the latest netperf-2.4.0+ or use a binding
+   tool such as Tim Hockin's procstate utilities (runon)
+   <http://www.hockin.org/~thockin/procstate/>.
+   Binding netserver and netperf (or other applications) to particular
+   CPUs will have a significant difference in performance measurements.
+   You may need to experiment which CPU to bind the application to in
+   order to achieve the best performance for your system.
+   If you are developing an application designed for 10Gb networking,
+   please keep in mind you may want to look at kernel functions
+   sched_setaffinity & sched_getaffinity to bind your application.
+   If you are just running user-space applications such as ftp, telnet,
+   etc., you may want to try the runon tool provided by Tim Hockin's
+   procstate utility. You could also try binding the interface to a
+   particular CPU: runon 0 ifup eth0
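
(Aside, not part of the patch: a minimal user-space sketch of the
sched_setaffinity() binding the new note above refers to; error handling
elided.)

    #define _GNU_SOURCE
    #include <sched.h>

    /* Pin the calling process to CPU 0, like `runon 0 <cmd>` would. */
    static int bind_to_cpu0(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        return sched_setaffinity(0, sizeof(mask), &mask); /* pid 0 = self */
    }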
SUPPORT
=======
@@ -7,6 +7,5 @@ obj-$(CONFIG_CHELSIO_T1) += cxgb.o
EXTRA_CFLAGS += -I$(TOPDIR)/drivers/net/chelsio $(DEBUG_FLAGS)
-cxgb-objs := cxgb2.o espi.o tp.o pm3393.o sge.o subr.o mv88x201x.o
+cxgb-objs := cxgb2.o espi.o pm3393.o sge.o subr.o mv88x201x.o
/*****************************************************************************
* *
* File: ch_ethtool.h *
* $Revision: 1.5 $ *
* $Date: 2005/03/23 07:15:58 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License, version 2, as *
* published by the Free Software Foundation. *
* *
* You should have received a copy of the GNU General Public License along *
* with this program; if not, write to the Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
* *
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *
* WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *
* *
* http://www.chelsio.com *
* *
* Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *
* All rights reserved. *
* *
* Maintainers: maintainers@chelsio.com *
* *
* Authors: Dimitrios Michailidis <dm@chelsio.com> *
* Tina Yang <tainay@chelsio.com> *
* Felix Marti <felix@chelsio.com> *
* Scott Bardone <sbardone@chelsio.com> *
* Kurt Ottaway <kottaway@chelsio.com> *
* Frank DiMambro <frank@chelsio.com> *
* *
* History: *
* *
****************************************************************************/
#ifndef __CHETHTOOL_LINUX_H__
#define __CHETHTOOL_LINUX_H__
/* TCB size in 32-bit words */
#define TCB_WORDS (TCB_SIZE / 4)
enum {
ETHTOOL_SETREG,
ETHTOOL_GETREG,
ETHTOOL_SETTPI,
ETHTOOL_GETTPI,
ETHTOOL_DEVUP,
ETHTOOL_GETMTUTAB,
ETHTOOL_SETMTUTAB,
ETHTOOL_GETMTU,
ETHTOOL_SET_PM,
ETHTOOL_GET_PM,
ETHTOOL_GET_TCAM,
ETHTOOL_SET_TCAM,
ETHTOOL_GET_TCB,
ETHTOOL_READ_TCAM_WORD,
};
struct ethtool_reg {
uint32_t cmd;
uint32_t addr;
uint32_t val;
};
struct ethtool_mtus {
uint32_t cmd;
uint16_t mtus[NMTUS];
};
struct ethtool_pm {
uint32_t cmd;
uint32_t tx_pg_sz;
uint32_t tx_num_pg;
uint32_t rx_pg_sz;
uint32_t rx_num_pg;
uint32_t pm_total;
};
struct ethtool_tcam {
uint32_t cmd;
uint32_t tcam_size;
uint32_t nservers;
uint32_t nroutes;
};
struct ethtool_tcb {
uint32_t cmd;
uint32_t tcb_index;
uint32_t tcb_data[TCB_WORDS];
};
struct ethtool_tcam_word {
uint32_t cmd;
uint32_t addr;
uint32_t buf[3];
};
#define SIOCCHETHTOOL SIOCDEVPRIVATE
#endif
/*****************************************************************************
* *
* File: common.h *
-* $Revision: 1.5 $ *
-* $Date: 2005/03/23 07:41:27 $ *
+* $Revision: 1.21 $ *
+* $Date: 2005/06/22 00:43:25 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
@@ -36,74 +36,101 @@
* *
****************************************************************************/
-#ifndef CHELSIO_COMMON_H
-#define CHELSIO_COMMON_H
+#ifndef _CXGB_COMMON_H_
+#define _CXGB_COMMON_H_
#include <linux/config.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/types.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/crc32.h>
#include <linux/init.h>
#include <asm/io.h>
#include <linux/pci_ids.h>
#define DRV_DESCRIPTION "Chelsio 10Gb Ethernet Driver"
#define DRV_NAME "cxgb"
#define DRV_VERSION "2.1.1"
#define PFX DRV_NAME ": "
#define CH_ERR(fmt, ...) printk(KERN_ERR PFX fmt, ## __VA_ARGS__)
#define CH_WARN(fmt, ...) printk(KERN_WARNING PFX fmt, ## __VA_ARGS__)
#define CH_ALERT(fmt, ...) printk(KERN_ALERT PFX fmt, ## __VA_ARGS__)
#define CH_DEVICE(devid, ssid, idx) \
{ PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, ssid, 0, 0, idx }
#define SUPPORTED_PAUSE (1 << 13)
#define SUPPORTED_LOOPBACK (1 << 15)
#define ADVERTISED_PAUSE (1 << 13)
#define ADVERTISED_ASYM_PAUSE (1 << 14)
typedef struct adapter adapter_t;
void t1_elmer0_ext_intr(adapter_t *adapter);
void t1_link_changed(adapter_t *adapter, int port_id, int link_status,
int speed, int duplex, int fc);
struct t1_rx_mode {
struct net_device *dev;
u32 idx;
struct dev_mc_list *list;
};
#define t1_rx_mode_promisc(rm) (rm->dev->flags & IFF_PROMISC)
#define t1_rx_mode_allmulti(rm) (rm->dev->flags & IFF_ALLMULTI)
#define t1_rx_mode_mc_cnt(rm) (rm->dev->mc_count)
-#define DIMOF(x) (sizeof(x)/sizeof(x[0]))
+static inline u8 *t1_get_next_mcaddr(struct t1_rx_mode *rm)
{
u8 *addr = 0;
if (rm->idx++ < rm->dev->mc_count) {
addr = rm->list->dmi_addr;
rm->list = rm->list->next;
}
return addr;
}
#define NMTUS 8
#define MAX_NPORTS 4
#define TCB_SIZE 128
#define SPEED_INVALID 0xffff
#define DUPLEX_INVALID 0xff
enum {
CHBT_BOARD_7500,
CHBT_BOARD_8000,
CHBT_BOARD_CHT101,
CHBT_BOARD_CHT110,
CHBT_BOARD_CHT210,
CHBT_BOARD_CHT204,
CHBT_BOARD_N110,
-CHBT_BOARD_N210,
+CHBT_BOARD_N210
CHBT_BOARD_COUGAR,
CHBT_BOARD_6800,
CHBT_BOARD_SIMUL
};
enum {
CHBT_TERM_FPGA,
CHBT_TERM_T1,
-CHBT_TERM_T2,
+CHBT_TERM_T2
CHBT_TERM_T3
};
enum {
CHBT_MAC_CHELSIO_A,
CHBT_MAC_IXF1010,
CHBT_MAC_PM3393,
CHBT_MAC_VSC7321,
CHBT_MAC_DUMMY
};
enum {
CHBT_PHY_88E1041,
CHBT_PHY_88E1111,
CHBT_PHY_88X2010,
CHBT_PHY_XPAK,
CHBT_PHY_MY3126,
CHBT_PHY_DUMMY
};
enum {
-PAUSE_RX = 1,
-PAUSE_TX = 2,
-PAUSE_AUTONEG = 4
+PAUSE_RX = 1 << 0,
+PAUSE_TX = 1 << 1,
+PAUSE_AUTONEG = 1 << 2
};
/* Revisions of T1 chip */
-#define TERM_T1A 0
-#define TERM_T1B 1
-#define TERM_T2 3
+enum {
+TERM_T1A = 0,
+TERM_T1B = 1,
+TERM_T2 = 3
+};
-struct tp_params {
-unsigned int pm_size;
-unsigned int cm_size;
-unsigned int pm_rx_base;
-unsigned int pm_tx_base;
-unsigned int pm_rx_pg_size;
-unsigned int pm_tx_pg_size;
-unsigned int pm_rx_num_pgs;
-unsigned int pm_tx_num_pgs;
-unsigned int use_5tuple_mode;
-};
struct sge_params {
@@ -118,17 +145,7 @@ struct sge_params {
unsigned int polling;
};
-struct mc5_params {
-unsigned int mode; /* selects MC5 width */
-unsigned int nservers; /* size of server region */
-unsigned int nroutes; /* size of routing region */
-};
-/* Default MC5 region sizes */
-#define DEFAULT_SERVER_REGION_LEN 256
-#define DEFAULT_RT_REGION_LEN 1024
-struct pci_params {
+struct chelsio_pci_params {
unsigned short speed;
unsigned char width;
unsigned char is_pcix;
@@ -136,31 +153,14 @@ struct pci_params {
struct adapter_params {
struct sge_params sge;
-struct mc5_params mc5;
-struct tp_params tp;
-struct pci_params pci;
+struct chelsio_pci_params pci;
const struct board_info *brd_info;
-unsigned short mtus[NMTUS];
unsigned int nports; /* # of ethernet ports */
unsigned int stats_update_period;
unsigned short chip_revision;
unsigned char chip_version;
-unsigned char is_asic;
-};
-struct pci_err_cnt {
-unsigned int master_parity_err;
-unsigned int sig_target_abort;
-unsigned int rcv_target_abort;
-unsigned int rcv_master_abort;
-unsigned int sig_sys_err;
-unsigned int det_parity_err;
-unsigned int pio_parity_err;
-unsigned int wf_parity_err;
-unsigned int rf_parity_err;
-unsigned int cf_parity_err;
};
struct link_config {
@@ -175,8 +175,60 @@ struct link_config {
unsigned char autoneg; /* autonegotiating? */
};
-#define SPEED_INVALID 0xffff
-#define DUPLEX_INVALID 0xff
+struct cmac;
+struct cphy;
struct port_info {
struct net_device *dev;
struct cmac *mac;
struct cphy *phy;
struct link_config link_config;
struct net_device_stats netstats;
};
struct sge;
struct peespi;
struct adapter {
u8 *regs;
struct pci_dev *pdev;
unsigned long registered_device_map;
unsigned long open_device_map;
unsigned long flags;
const char *name;
int msg_enable;
u32 mmio_len;
struct work_struct ext_intr_handler_task;
struct adapter_params params;
struct vlan_group *vlan_grp;
/* Terminator modules. */
struct sge *sge;
struct peespi *espi;
struct port_info port[MAX_NPORTS];
struct work_struct stats_update_task;
struct timer_list stats_update_timer;
struct semaphore mib_mutex;
spinlock_t tpi_lock;
spinlock_t work_lock;
/* guards async operations */
spinlock_t async_lock ____cacheline_aligned;
u32 slow_intr_mask;
};
enum { /* adapter flags */
FULL_INIT_DONE = 1 << 0,
TSO_CAPABLE = 1 << 2,
TCP_CSUM_CAPABLE = 1 << 3,
UDP_CSUM_CAPABLE = 1 << 4,
VLAN_ACCEL_CAPABLE = 1 << 5,
RX_CSUM_ENABLED = 1 << 6,
};
struct mdio_ops;
struct gmac;
@@ -205,19 +257,8 @@ struct board_info {
const char *desc;
};
-#include "osdep.h"
-#ifndef PCI_VENDOR_ID_CHELSIO
-#define PCI_VENDOR_ID_CHELSIO 0x1425
-#endif
extern struct pci_device_id t1_pci_tbl[];
-static inline int t1_is_asic(const adapter_t *adapter)
-{
-return adapter->params.is_asic;
-}
static inline int adapter_matches_type(const adapter_t *adapter,
int version, int revision)
{
@@ -245,25 +286,29 @@ static inline unsigned int core_ticks_per_usec(const adapter_t *adap)
return board_info(adap)->clock_core / 1000000;
}
-int t1_tpi_write(adapter_t *adapter, u32 addr, u32 value);
+extern int t1_tpi_write(adapter_t *adapter, u32 addr, u32 value);
-int t1_tpi_read(adapter_t *adapter, u32 addr, u32 *value);
+extern int t1_tpi_read(adapter_t *adapter, u32 addr, u32 *value);
-void t1_interrupts_enable(adapter_t *adapter);
+extern void t1_interrupts_enable(adapter_t *adapter);
-void t1_interrupts_disable(adapter_t *adapter);
+extern void t1_interrupts_disable(adapter_t *adapter);
-void t1_interrupts_clear(adapter_t *adapter);
+extern void t1_interrupts_clear(adapter_t *adapter);
-int elmer0_ext_intr_handler(adapter_t *adapter);
+extern int elmer0_ext_intr_handler(adapter_t *adapter);
-int t1_slow_intr_handler(adapter_t *adapter);
+extern int t1_slow_intr_handler(adapter_t *adapter);
-int t1_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc);
+extern int t1_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc);
-const struct board_info *t1_get_board_info(unsigned int board_id);
+extern const struct board_info *t1_get_board_info(unsigned int board_id);
-const struct board_info *t1_get_board_info_from_ids(unsigned int devid,
+extern const struct board_info *t1_get_board_info_from_ids(unsigned int devid,
unsigned short ssid);
-int t1_seeprom_read(adapter_t *adapter, u32 addr, u32 *data);
+extern int t1_seeprom_read(adapter_t *adapter, u32 addr, u32 *data);
-int t1_get_board_rev(adapter_t *adapter, const struct board_info *bi,
+extern int t1_get_board_rev(adapter_t *adapter, const struct board_info *bi,
struct adapter_params *p);
-int t1_init_hw_modules(adapter_t *adapter);
+extern int t1_init_hw_modules(adapter_t *adapter);
-int t1_init_sw_modules(adapter_t *adapter, const struct board_info *bi);
+extern int t1_init_sw_modules(adapter_t *adapter, const struct board_info *bi);
-void t1_free_sw_modules(adapter_t *adapter);
+extern void t1_free_sw_modules(adapter_t *adapter);
-void t1_fatal_err(adapter_t *adapter);
+extern void t1_fatal_err(adapter_t *adapter);
-#endif
+extern void t1_tp_set_udp_checksum_offload(adapter_t *adapter, int enable);
+extern void t1_tp_set_tcp_checksum_offload(adapter_t *adapter, int enable);
+extern void t1_tp_set_ip_checksum_offload(adapter_t *adapter, int enable);
+#endif /* _CXGB_COMMON_H_ */
/*****************************************************************************
* *
* File: cphy.h *
-* $Revision: 1.4 $ *
-* $Date: 2005/03/23 07:41:27 $ *
+* $Revision: 1.7 $ *
+* $Date: 2005/06/21 18:29:47 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
@@ -36,8 +36,8 @@
* *
****************************************************************************/
-#ifndef CHELSIO_CPHY_H
-#define CHELSIO_CPHY_H
+#ifndef _CXGB_CPHY_H_
+#define _CXGB_CPHY_H_
#include "common.h" #include "common.h"
...@@ -142,9 +142,7 @@ struct gphy { ...@@ -142,9 +142,7 @@ struct gphy {
int (*reset)(adapter_t *adapter); int (*reset)(adapter_t *adapter);
}; };
extern struct gphy t1_my3126_ops;
extern struct gphy t1_mv88e1xxx_ops;
extern struct gphy t1_xpak_ops;
extern struct gphy t1_mv88x201x_ops; extern struct gphy t1_mv88x201x_ops;
extern struct gphy t1_dummy_phy_ops; extern struct gphy t1_dummy_phy_ops;
#endif
#endif /* _CXGB_CPHY_H_ */
/*****************************************************************************
* *
* File: cpl5_cmd.h *
-* $Revision: 1.4 $ *
-* $Date: 2005/03/23 07:15:58 $ *
+* $Revision: 1.6 $ *
+* $Date: 2005/06/21 18:29:47 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
@@ -36,8 +36,8 @@
* *
****************************************************************************/
-#ifndef _CPL5_CMD_H
-#define _CPL5_CMD_H
+#ifndef _CXGB_CPL5_CMD_H_
+#define _CXGB_CPL5_CMD_H_
#include <asm/byteorder.h>
@@ -59,12 +59,12 @@ enum { /* TX_PKT_LSO ethernet types */
};
struct cpl_rx_data {
-__u32 rsvd0;
-__u32 len;
-__u32 seq;
-__u16 urg;
-__u8 rsvd1;
-__u8 status;
+u32 rsvd0;
+u32 len;
+u32 seq;
+u16 urg;
+u8 rsvd1;
+u8 status;
};
/*
@@ -73,73 +73,73 @@ struct cpl_rx_data {
* used so we break it into 2 16-bit parts to easily meet our alignment needs.
*/
struct cpl_tx_pkt {
-__u8 opcode;
+u8 opcode;
#if defined(__LITTLE_ENDIAN_BITFIELD)
-__u8 iff:4;
-__u8 ip_csum_dis:1;
-__u8 l4_csum_dis:1;
-__u8 vlan_valid:1;
-__u8 rsvd:1;
+u8 iff:4;
+u8 ip_csum_dis:1;
+u8 l4_csum_dis:1;
+u8 vlan_valid:1;
+u8 rsvd:1;
#else
-__u8 rsvd:1;
-__u8 vlan_valid:1;
-__u8 l4_csum_dis:1;
-__u8 ip_csum_dis:1;
-__u8 iff:4;
+u8 rsvd:1;
+u8 vlan_valid:1;
+u8 l4_csum_dis:1;
+u8 ip_csum_dis:1;
+u8 iff:4;
#endif
-__u16 vlan;
-__u16 len_hi;
-__u16 len_lo;
+u16 vlan;
+u16 len_hi;
+u16 len_lo;
};
struct cpl_tx_pkt_lso {
-__u8 opcode;
+u8 opcode;
#if defined(__LITTLE_ENDIAN_BITFIELD)
-__u8 iff:4;
-__u8 ip_csum_dis:1;
-__u8 l4_csum_dis:1;
-__u8 vlan_valid:1;
-__u8 rsvd:1;
+u8 iff:4;
+u8 ip_csum_dis:1;
+u8 l4_csum_dis:1;
+u8 vlan_valid:1;
+u8 rsvd:1;
#else
-__u8 rsvd:1;
-__u8 vlan_valid:1;
-__u8 l4_csum_dis:1;
-__u8 ip_csum_dis:1;
-__u8 iff:4;
+u8 rsvd:1;
+u8 vlan_valid:1;
+u8 l4_csum_dis:1;
+u8 ip_csum_dis:1;
+u8 iff:4;
#endif
-__u16 vlan;
-__u32 len;
-__u32 rsvd2;
-__u8 rsvd3;
+u16 vlan;
+u32 len;
+u32 rsvd2;
+u8 rsvd3;
#if defined(__LITTLE_ENDIAN_BITFIELD)
-__u8 tcp_hdr_words:4;
-__u8 ip_hdr_words:4;
+u8 tcp_hdr_words:4;
+u8 ip_hdr_words:4;
#else
-__u8 ip_hdr_words:4;
-__u8 tcp_hdr_words:4;
+u8 ip_hdr_words:4;
+u8 tcp_hdr_words:4;
#endif
-__u16 eth_type_mss;
+u16 eth_type_mss;
};
struct cpl_rx_pkt {
-__u8 opcode;
+u8 opcode;
#if defined(__LITTLE_ENDIAN_BITFIELD)
-__u8 iff:4;
-__u8 csum_valid:1;
-__u8 bad_pkt:1;
-__u8 vlan_valid:1;
-__u8 rsvd:1;
+u8 iff:4;
+u8 csum_valid:1;
+u8 bad_pkt:1;
+u8 vlan_valid:1;
+u8 rsvd:1;
#else
-__u8 rsvd:1;
-__u8 vlan_valid:1;
-__u8 bad_pkt:1;
-__u8 csum_valid:1;
-__u8 iff:4;
+u8 rsvd:1;
+u8 vlan_valid:1;
+u8 bad_pkt:1;
+u8 csum_valid:1;
+u8 iff:4;
#endif
-__u16 csum;
-__u16 vlan;
-__u16 len;
+u16 csum;
+u16 vlan;
+u16 len;
};
-#endif
+#endif /* _CXGB_CPL5_CMD_H_ */
/*****************************************************************************
* *
* File: cxgb2.c *
-* $Revision: 1.11 $ *
-* $Date: 2005/03/23 07:41:27 $ *
+* $Revision: 1.25 $ *
+* $Date: 2005/06/22 00:43:25 $ *
* Description: *
* Chelsio 10Gb Ethernet Driver. *
* *
@@ -37,7 +37,6 @@
****************************************************************************/
#include "common.h"
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/init.h> #include <linux/init.h>
...@@ -48,44 +47,56 @@ ...@@ -48,44 +47,56 @@
#include <linux/mii.h> #include <linux/mii.h>
#include <linux/sockios.h> #include <linux/sockios.h>
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/version.h> #include <linux/dma-mapping.h>
#include <linux/workqueue.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include "ch_ethtool.h"
#include "cpl5_cmd.h" #include "cpl5_cmd.h"
#include "regs.h" #include "regs.h"
#include "gmac.h" #include "gmac.h"
#include "cphy.h" #include "cphy.h"
#include "sge.h" #include "sge.h"
#include "tp.h"
#include "espi.h" #include "espi.h"
#ifdef work_struct
#include <linux/tqueue.h>
#define INIT_WORK INIT_TQUEUE
#define schedule_work schedule_task
#define flush_scheduled_work flush_scheduled_tasks
static inline void schedule_mac_stats_update(struct adapter *ap, int secs)
{
-schedule_delayed_work(&ap->stats_update_task, secs * HZ);
+mod_timer(&ap->stats_update_timer, jiffies + secs * HZ);
}
static inline void cancel_mac_stats_update(struct adapter *ap)
{
-cancel_delayed_work(&ap->stats_update_task);
+del_timer_sync(&ap->stats_update_timer);
+flush_scheduled_tasks();
}
-#if BITS_PER_LONG == 64 && !defined(CONFIG_X86_64)
-# define FMT64 "l"
-#else
-# define FMT64 "ll"
-#endif
-# define DRV_TYPE ""
-# define MODULE_DESC "Chelsio Network Driver"
-static char driver_name[] = DRV_NAME;
-static char driver_string[] = "Chelsio " DRV_TYPE "Network Driver";
-static char driver_version[] = "2.1.0";
-#define PCI_DMA_64BIT ~0ULL
-#define PCI_DMA_32BIT 0xffffffffULL
+/*
+* Stats update timer for 2.4. It schedules a task to do the actual update as
+* we need to access MAC statistics in process context.
+*/
+static void mac_stats_timer(unsigned long data)
+{
+struct adapter *ap = (struct adapter *)data;
+schedule_task(&ap->stats_update_task);
+}
+#else
+#include <linux/workqueue.h>
+static inline void schedule_mac_stats_update(struct adapter *ap, int secs)
+{
+schedule_delayed_work(&ap->stats_update_task, secs * HZ);
+}
+static inline void cancel_mac_stats_update(struct adapter *ap)
+{
+cancel_delayed_work(&ap->stats_update_task);
+}
+#endif
#define MAX_CMDQ_ENTRIES 16384
#define MAX_CMDQ1_ENTRIES 1024
@@ -107,10 +118,9 @@ static char driver_version[] = "2.1.0";
*/
#define EEPROM_SIZE 32
-MODULE_DESCRIPTION(MODULE_DESC);
+MODULE_DESCRIPTION(DRV_DESCRIPTION);
MODULE_AUTHOR("Chelsio Communications");
MODULE_LICENSE("GPL");
-MODULE_DEVICE_TABLE(pci, t1_pci_tbl);
static int dflt_msg_enable = DFLT_MSG_ENABLE;
@@ -140,17 +150,17 @@ static void t1_set_rxmode(struct net_device *dev)
static void link_report(struct port_info *p)
{
if (!netif_carrier_ok(p->dev))
-printk(KERN_INFO "%s: link is down\n", p->dev->name);
+printk(KERN_INFO "%s: link down\n", p->dev->name);
else {
-const char *s = "10 Mbps";
+const char *s = "10Mbps";
switch (p->link_config.speed) {
-case SPEED_10000: s = "10 Gbps"; break;
-case SPEED_1000: s = "1000 Mbps"; break;
-case SPEED_100: s = "100 Mbps"; break;
+case SPEED_10000: s = "10Gbps"; break;
+case SPEED_1000: s = "1000Mbps"; break;
+case SPEED_100: s = "100Mbps"; break;
}
-printk(KERN_INFO "%s: link is up at %s, %s duplex\n",
+printk(KERN_INFO "%s: link up, %s, %s-duplex\n",
p->dev->name, s,
p->link_config.duplex == DUPLEX_FULL ? "full" : "half");
}
@@ -186,10 +196,8 @@ static void link_start(struct port_info *p)
static void enable_hw_csum(struct adapter *adapter)
{
if (adapter->flags & TSO_CAPABLE)
-t1_tp_set_ip_checksum_offload(adapter->tp, 1); /* for TSO only */
-if (adapter->flags & UDP_CSUM_CAPABLE)
-t1_tp_set_udp_checksum_offload(adapter->tp, 1);
-t1_tp_set_tcp_checksum_offload(adapter->tp, 1);
+t1_tp_set_ip_checksum_offload(adapter, 1); /* for TSO only */
+t1_tp_set_tcp_checksum_offload(adapter, 1);
}
/*
@@ -210,15 +218,13 @@ static int cxgb_up(struct adapter *adapter)
}
t1_interrupts_clear(adapter);
-if ((err = request_irq(adapter->pdev->irq, &t1_interrupt, SA_SHIRQ,
-adapter->name, adapter)))
+if ((err = request_irq(adapter->pdev->irq,
+t1_select_intr_handler(adapter), SA_SHIRQ,
+adapter->name, adapter))) {
goto out_err;
+}
t1_sge_start(adapter->sge);
t1_interrupts_enable(adapter);
-err = 0;
out_err:
return err;
}
@@ -371,15 +377,48 @@ static char stats_strings[][ETH_GSTRING_LEN] = {
"RxInternalMACRcvError",
"RxInRangeLengthErrors",
"RxOutOfRangeLengthField",
-"RxFrameTooLongErrors"
+"RxFrameTooLongErrors",
"TSO",
"VLANextractions",
"VLANinsertions",
"RxCsumGood",
"TxCsumOffload",
"RxDrops"
"respQ_empty",
"respQ_overflow",
"freelistQ_empty",
"pkt_too_big",
"pkt_mismatch",
"cmdQ_full0",
"cmdQ_full1",
"tx_ipfrags",
"tx_reg_pkts",
"tx_lso_pkts",
"tx_do_cksum",
"espi_DIP2ParityErr",
"espi_DIP4Err",
"espi_RxDrops",
"espi_TxDrops",
"espi_RxOvfl",
"espi_ParityErr"
};
#define T2_REGMAP_SIZE (3 * 1024)
static int get_regs_len(struct net_device *dev)
{
return T2_REGMAP_SIZE;
}
static void get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
{
struct adapter *adapter = dev->priv;
-strcpy(info->driver, driver_name);
-strcpy(info->version, driver_version);
+strcpy(info->driver, DRV_NAME);
+strcpy(info->version, DRV_VERSION);
strcpy(info->fw_version, "N/A");
strcpy(info->bus_info, pci_name(adapter->pdev));
}
@@ -401,8 +440,12 @@ static void get_stats(struct net_device *dev, struct ethtool_stats *stats,
struct adapter *adapter = dev->priv;
struct cmac *mac = adapter->port[dev->if_port].mac;
const struct cmac_statistics *s;
+const struct sge_port_stats *ss;
+const struct sge_intr_counts *t;
s = mac->ops->statistics_update(mac, MAC_STATS_UPDATE_FULL);
+ss = t1_sge_get_port_stats(adapter->sge, dev->if_port);
+t = t1_sge_get_intr_counts(adapter->sge);
*data++ = s->TxOctetsOK;
*data++ = s->TxOctetsBad;
@@ -437,6 +480,48 @@ static void get_stats(struct net_device *dev, struct ethtool_stats *stats,
*data++ = s->RxInRangeLengthErrors;
*data++ = s->RxOutOfRangeLengthField;
*data++ = s->RxFrameTooLongErrors;
*data++ = ss->tso;
*data++ = ss->vlan_xtract;
*data++ = ss->vlan_insert;
*data++ = ss->rx_cso_good;
*data++ = ss->tx_cso;
*data++ = ss->rx_drops;
*data++ = (u64)t->respQ_empty;
*data++ = (u64)t->respQ_overflow;
*data++ = (u64)t->freelistQ_empty;
*data++ = (u64)t->pkt_too_big;
*data++ = (u64)t->pkt_mismatch;
*data++ = (u64)t->cmdQ_full[0];
*data++ = (u64)t->cmdQ_full[1];
*data++ = (u64)t->tx_ipfrags;
*data++ = (u64)t->tx_reg_pkts;
*data++ = (u64)t->tx_lso_pkts;
*data++ = (u64)t->tx_do_cksum;
}
static inline void reg_block_dump(struct adapter *ap, void *buf,
unsigned int start, unsigned int end)
{
u32 *p = buf + start;
for ( ; start <= end; start += sizeof(u32))
*p++ = readl(ap->regs + start);
}
static void get_regs(struct net_device *dev, struct ethtool_regs *regs,
void *buf)
{
struct adapter *ap = dev->priv;
/*
* Version scheme: bits 0..9: chip version, bits 10..15: chip revision
*/
regs->version = 2;
memset(buf, 0, T2_REGMAP_SIZE);
reg_block_dump(ap, buf, 0, A_SG_RESPACCUTIMER);
} }
static int get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
@@ -645,22 +730,20 @@ static int set_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
{
struct adapter *adapter = dev->priv;
-unsigned int sge_coalesce_usecs = 0;
-sge_coalesce_usecs = adapter->params.sge.last_rx_coalesce_raw;
-sge_coalesce_usecs /= board_info(adapter)->clock_core / 1000000;
-if ( (adapter->params.sge.coalesce_enable && !c->use_adaptive_rx_coalesce) &&
-(c->rx_coalesce_usecs == sge_coalesce_usecs) ) {
-adapter->params.sge.rx_coalesce_usecs =
-adapter->params.sge.default_rx_coalesce_usecs;
+/*
+* If RX coalescing is requested we use NAPI, otherwise interrupts.
+* This choice can be made only when all ports and the TOE are off.
+*/
+if (adapter->open_device_map == 0)
+adapter->params.sge.polling = c->use_adaptive_rx_coalesce;
+if (adapter->params.sge.polling) {
+adapter->params.sge.rx_coalesce_usecs = 0;
} else {
adapter->params.sge.rx_coalesce_usecs = c->rx_coalesce_usecs;
}
-adapter->params.sge.last_rx_coalesce_raw = adapter->params.sge.rx_coalesce_usecs;
-adapter->params.sge.last_rx_coalesce_raw *= (board_info(adapter)->clock_core / 1000000);
-adapter->params.sge.sample_interval_usecs = c->rate_sample_interval;
adapter->params.sge.coalesce_enable = c->use_adaptive_rx_coalesce;
+adapter->params.sge.sample_interval_usecs = c->rate_sample_interval;
t1_sge_set_coalesce_params(adapter->sge, &adapter->params.sge);
return 0;
}
@@ -669,12 +752,7 @@ static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
{
struct adapter *adapter = dev->priv;
-if (adapter->params.sge.coalesce_enable) { /* Adaptive algorithm on */
-c->rx_coalesce_usecs = adapter->params.sge.last_rx_coalesce_raw;
-c->rx_coalesce_usecs /= board_info(adapter)->clock_core / 1000000;
-} else {
c->rx_coalesce_usecs = adapter->params.sge.rx_coalesce_usecs;
-}
c->rate_sample_interval = adapter->params.sge.sample_interval_usecs;
c->use_adaptive_rx_coalesce = adapter->params.sge.coalesce_enable;
return 0;
@@ -682,9 +760,7 @@ static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
static int get_eeprom_len(struct net_device *dev)
{
-struct adapter *adapter = dev->priv;
-return t1_is_asic(adapter) ? EEPROM_SIZE : 0;
+return EEPROM_SIZE;
}
#define EEPROM_MAGIC(ap) \
@@ -728,86 +804,22 @@ static struct ethtool_ops t1_ethtool_ops = {
.get_strings = get_strings,
.get_stats_count = get_stats_count,
.get_ethtool_stats = get_stats,
+.get_regs_len = get_regs_len,
+.get_regs = get_regs,
.get_tso = ethtool_op_get_tso,
.set_tso = set_tso,
};
-static int ethtool_ioctl(struct net_device *dev, void *useraddr)
+static void cxgb_proc_cleanup(struct adapter *adapter,
+struct proc_dir_entry *dir)
{
-u32 cmd;
-struct adapter *adapter = dev->priv;
+const char *name;
+name = adapter->name;
+remove_proc_entry(name, dir);
if (copy_from_user(&cmd, useraddr, sizeof(cmd)))
return -EFAULT;
switch (cmd) {
case ETHTOOL_SETREG: {
struct ethtool_reg edata;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (copy_from_user(&edata, useraddr, sizeof(edata)))
return -EFAULT;
if ((edata.addr & 3) != 0 || edata.addr >= adapter->mmio_len)
return -EINVAL;
if (edata.addr == A_ESPI_MISC_CONTROL)
t1_espi_set_misc_ctrl(adapter, edata.val);
else {
if (edata.addr == 0x950)
t1_sge_set_ptimeout(adapter, edata.val);
else
writel(edata.val, adapter->regs + edata.addr);
}
break;
}
case ETHTOOL_GETREG: {
struct ethtool_reg edata;
if (copy_from_user(&edata, useraddr, sizeof(edata)))
return -EFAULT;
if ((edata.addr & 3) != 0 || edata.addr >= adapter->mmio_len)
return -EINVAL;
if (edata.addr >= 0x900 && edata.addr <= 0x93c)
edata.val = t1_espi_get_mon(adapter, edata.addr, 1);
else {
if (edata.addr == 0x950)
edata.val = t1_sge_get_ptimeout(adapter);
else
edata.val = readl(adapter->regs + edata.addr);
}
if (copy_to_user(useraddr, &edata, sizeof(edata)))
return -EFAULT;
break;
}
case ETHTOOL_SETTPI: {
struct ethtool_reg edata;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (copy_from_user(&edata, useraddr, sizeof(edata)))
return -EFAULT;
if ((edata.addr & 3) != 0)
return -EINVAL;
t1_tpi_write(adapter, edata.addr, edata.val);
break;
}
case ETHTOOL_GETTPI: {
struct ethtool_reg edata;
if (copy_from_user(&edata, useraddr, sizeof(edata)))
return -EFAULT;
if ((edata.addr & 3) != 0)
return -EINVAL;
t1_tpi_read(adapter, edata.addr, &edata.val);
if (copy_to_user(useraddr, &edata, sizeof(edata)))
return -EFAULT;
break;
}
default:
return -EOPNOTSUPP;
}
return 0;
} }
//#define chtoe_setup_toedev(adapter) NULL
#define update_mtu_tab(adapter)
#define write_smt_entry(adapter, idx)
static int t1_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
{
@@ -822,7 +834,8 @@ static int t1_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
struct cphy *phy = adapter->port[dev->if_port].phy;
u32 val;
-if (!phy->mdio_read) return -EOPNOTSUPP;
+if (!phy->mdio_read)
+return -EOPNOTSUPP;
phy->mdio_read(adapter, data->phy_id, 0, data->reg_num & 0x1f,
&val);
data->val_out = val;
@@ -831,15 +844,15 @@ static int t1_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
case SIOCSMIIREG: {
struct cphy *phy = adapter->port[dev->if_port].phy;
-if (!capable(CAP_NET_ADMIN)) return -EPERM;
-if (!phy->mdio_write) return -EOPNOTSUPP;
+if (!capable(CAP_NET_ADMIN))
+return -EPERM;
+if (!phy->mdio_write)
+return -EOPNOTSUPP;
phy->mdio_write(adapter, data->phy_id, 0, data->reg_num & 0x1f,
data->val_in);
break;
}
-case SIOCCHETHTOOL:
-return ethtool_ioctl(dev, (void *)req->ifr_data);
default:
return -EOPNOTSUPP;
}
@@ -902,9 +915,12 @@ static void vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)
#ifdef CONFIG_NET_POLL_CONTROLLER
static void t1_netpoll(struct net_device *dev)
{
+unsigned long flags;
struct adapter *adapter = dev->priv;
-t1_interrupt(adapter->pdev->irq, adapter, NULL);
+local_irq_save(flags);
+t1_select_intr_handler(adapter)(adapter->pdev->irq, adapter, NULL);
+local_irq_restore(flags);
}
#endif
@@ -938,16 +954,17 @@ static void mac_stats_task(void *data)
*/
static void ext_intr_task(void *data)
{
-u32 enable;
struct adapter *adapter = data;
elmer0_ext_intr_handler(adapter);
/* Now reenable external interrupts */
-t1_write_reg_4(adapter, A_PL_CAUSE, F_PL_INTR_EXT);
-enable = t1_read_reg_4(adapter, A_PL_ENABLE);
-t1_write_reg_4(adapter, A_PL_ENABLE, enable | F_PL_INTR_EXT);
+spin_lock_irq(&adapter->async_lock);
adapter->slow_intr_mask |= F_PL_INTR_EXT;
+writel(F_PL_INTR_EXT, adapter->regs + A_PL_CAUSE);
+writel(adapter->slow_intr_mask | F_PL_INTR_SGE_DATA,
+adapter->regs + A_PL_ENABLE);
+spin_unlock_irq(&adapter->async_lock);
}
/*
@@ -955,15 +972,14 @@ static void ext_intr_task(void *data)
*/
void t1_elmer0_ext_intr(struct adapter *adapter)
{
-u32 enable = t1_read_reg_4(adapter, A_PL_ENABLE);
/*
* Schedule a task to handle external interrupts as we require
* a process context. We disable EXT interrupts in the interim
* and let the task reenable them when it's done.
*/
adapter->slow_intr_mask &= ~F_PL_INTR_EXT;
-t1_write_reg_4(adapter, A_PL_ENABLE, enable & ~F_PL_INTR_EXT);
+writel(adapter->slow_intr_mask | F_PL_INTR_SGE_DATA,
+adapter->regs + A_PL_ENABLE);
schedule_work(&adapter->ext_intr_handler_task);
}
@@ -977,7 +993,6 @@ void t1_fatal_err(struct adapter *adapter)
adapter->name);
}
static int __devinit init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
@@ -990,8 +1005,8 @@ static int __devinit init_one(struct pci_dev *pdev,
struct port_info *pi;
if (!version_printed) {
-printk(KERN_INFO "%s - version %s\n", driver_string,
-driver_version);
+printk(KERN_INFO "%s - version %s\n", DRV_DESCRIPTION,
+DRV_VERSION);
++version_printed;
}
@@ -1006,20 +1021,22 @@ static int __devinit init_one(struct pci_dev *pdev,
goto out_disable_pdev;
}
-if (!pci_set_dma_mask(pdev, PCI_DMA_64BIT)) {
+if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
pci_using_dac = 1;
-if (pci_set_consistent_dma_mask(pdev, PCI_DMA_64BIT)) {
+if (pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK)) {
CH_ERR("%s: unable to obtain 64-bit DMA for"
"consistent allocations\n", pci_name(pdev));
err = -ENODEV;
goto out_disable_pdev;
}
-} else if ((err = pci_set_dma_mask(pdev, PCI_DMA_32BIT)) != 0) {
+} else if ((err = pci_set_dma_mask(pdev, DMA_32BIT_MASK)) != 0) {
CH_ERR("%s: no usable DMA configuration\n", pci_name(pdev));
goto out_disable_pdev;
}
-err = pci_request_regions(pdev, driver_name);
+err = pci_request_regions(pdev, DRV_NAME);
if (err) {
CH_ERR("%s: cannot obtain PCI resources\n", pci_name(pdev));
goto out_disable_pdev;
@@ -1074,9 +1091,14 @@ static int __devinit init_one(struct pci_dev *pdev,
ext_intr_task, adapter);
INIT_WORK(&adapter->stats_update_task, mac_stats_task,
adapter);
+#ifdef work_struct
+init_timer(&adapter->stats_update_timer);
+adapter->stats_update_timer.function = mac_stats_timer;
+adapter->stats_update_timer.data =
+(unsigned long)adapter;
+#endif
pci_set_drvdata(pdev, netdev);
}
pi = &adapter->port[i];
@@ -1088,11 +1110,12 @@ static int __devinit init_one(struct pci_dev *pdev,
netdev->mem_end = mmio_start + mmio_len - 1;
netdev->priv = adapter;
netdev->features |= NETIF_F_SG | NETIF_F_IP_CSUM;
+netdev->features |= NETIF_F_LLTX;
adapter->flags |= RX_CSUM_ENABLED | TCP_CSUM_CAPABLE;
if (pci_using_dac)
netdev->features |= NETIF_F_HIGHDMA;
if (vlan_tso_capable(adapter)) {
-adapter->flags |= UDP_CSUM_CAPABLE;
#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
adapter->flags |= VLAN_ACCEL_CAPABLE;
netdev->features |=
@@ -1166,11 +1189,12 @@ static int __devinit init_one(struct pci_dev *pdev,
t1_free_sw_modules(adapter);
out_free_dev:
if (adapter) {
-if (adapter->regs)
-iounmap(adapter->regs);
+if (adapter->regs) iounmap(adapter->regs);
for (i = bi->port_number - 1; i >= 0; --i)
-if (adapter->port[i].dev)
-free_netdev(adapter->port[i].dev);
+if (adapter->port[i].dev) {
+cxgb_proc_cleanup(adapter, proc_root_driver);
+kfree(adapter->port[i].dev);
+}
}
pci_release_regions(pdev);
out_disable_pdev:
@@ -1200,8 +1224,10 @@ static void __devexit remove_one(struct pci_dev *pdev)
t1_free_sw_modules(adapter);
iounmap(adapter->regs);
while (--i >= 0)
-if (adapter->port[i].dev)
-free_netdev(adapter->port[i].dev);
+if (adapter->port[i].dev) {
+cxgb_proc_cleanup(adapter, proc_root_driver);
+kfree(adapter->port[i].dev);
+}
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
@@ -1210,7 +1236,7 @@ static void __devexit remove_one(struct pci_dev *pdev)
}
static struct pci_driver driver = {
-.name = driver_name,
+.name = DRV_NAME,
.id_table = t1_pci_tbl,
.probe = init_one,
.remove = __devexit_p(remove_one),
@@ -1228,4 +1254,3 @@ static void __exit t1_cleanup_module(void)
module_init(t1_init_module);
module_exit(t1_cleanup_module);
/*****************************************************************************
* *
* File: cxgb2.h *
* $Revision: 1.8 $ *
* $Date: 2005/03/23 07:41:27 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License, version 2, as *
* published by the Free Software Foundation. *
* *
* You should have received a copy of the GNU General Public License along *
* with this program; if not, write to the Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
* *
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *
* WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *
* *
* http://www.chelsio.com *
* *
* Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *
* All rights reserved. *
* *
* Maintainers: maintainers@chelsio.com *
* *
* Authors: Dimitrios Michailidis <dm@chelsio.com> *
* Tina Yang <tainay@chelsio.com> *
* Felix Marti <felix@chelsio.com> *
* Scott Bardone <sbardone@chelsio.com> *
* Kurt Ottaway <kottaway@chelsio.com> *
* Frank DiMambro <frank@chelsio.com> *
* *
* History: *
* *
****************************************************************************/
#ifndef __CXGB_LINUX_H__
#define __CXGB_LINUX_H__
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/version.h>
#include <asm/semaphore.h>
#include <asm/bitops.h>
/* This belongs in if_ether.h */
#define ETH_P_CPL5 0xf
struct cmac;
struct cphy;
struct port_info {
struct net_device *dev;
struct cmac *mac;
struct cphy *phy;
struct link_config link_config;
struct net_device_stats netstats;
};
struct cxgbdev;
struct t1_sge;
struct pemc3;
struct pemc4;
struct pemc5;
struct peulp;
struct petp;
struct pecspi;
struct peespi;
struct work_struct;
struct vlan_group;
enum { /* adapter flags */
FULL_INIT_DONE = 0x1,
USING_MSI = 0x2,
TSO_CAPABLE = 0x4,
TCP_CSUM_CAPABLE = 0x8,
UDP_CSUM_CAPABLE = 0x10,
VLAN_ACCEL_CAPABLE = 0x20,
RX_CSUM_ENABLED = 0x40,
};
struct adapter {
u8 *regs;
struct pci_dev *pdev;
unsigned long registered_device_map;
unsigned long open_device_map;
unsigned int flags;
const char *name;
int msg_enable;
u32 mmio_len;
struct work_struct ext_intr_handler_task;
struct adapter_params params;
struct vlan_group *vlan_grp;
/* Terminator modules. */
struct sge *sge;
struct pemc3 *mc3;
struct pemc4 *mc4;
struct pemc5 *mc5;
struct petp *tp;
struct pecspi *cspi;
struct peespi *espi;
struct peulp *ulp;
struct port_info port[MAX_NPORTS];
struct work_struct stats_update_task;
struct timer_list stats_update_timer;
struct semaphore mib_mutex;
spinlock_t tpi_lock;
spinlock_t work_lock;
spinlock_t async_lock ____cacheline_aligned; /* guards async operations */
u32 slow_intr_mask;
};
#endif
/*****************************************************************************
* *
* File: elmer0.h *
-* $Revision: 1.3 $ *
-* $Date: 2005/03/23 07:15:58 $ *
+* $Revision: 1.6 $ *
+* $Date: 2005/06/21 22:49:43 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
@@ -36,14 +36,8 @@
* *
****************************************************************************/
-#ifndef CHELSIO_ELMER0_H
-#define CHELSIO_ELMER0_H
+#ifndef _CXGB_ELMER0_H_
+#define _CXGB_ELMER0_H_
-/* ELMER0 flavors */
-enum {
-ELMER0_XC2S300E_6FT256_C,
-ELMER0_XC2S100E_6TQ144_C
-};
/* ELMER0 registers */
#define A_ELMER0_VERSION 0x100000
@@ -154,4 +148,4 @@
#define MI1_OP_INDIRECT_READ_INC 2
#define MI1_OP_INDIRECT_READ 3
-#endif
+#endif /* _CXGB_ELMER0_H_ */
/*****************************************************************************
* *
* File: espi.c *
-* $Revision: 1.9 $ *
-* $Date: 2005/03/23 07:41:27 $ *
+* $Revision: 1.14 $ *
+* $Date: 2005/05/14 00:59:32 $ *
* Description: *
* Ethernet SPI functionality. *
* part of the Chelsio 10Gb Ethernet Driver. *
@@ -63,15 +63,16 @@ static int tricn_write(adapter_t *adapter, int bundle_addr, int module_addr,
{
int busy, attempts = TRICN_CMD_ATTEMPTS;
-t1_write_reg_4(adapter, A_ESPI_CMD_ADDR, V_WRITE_DATA(wr_data) |
+writel(V_WRITE_DATA(wr_data) |
V_REGISTER_OFFSET(reg_offset) |
V_CHANNEL_ADDR(ch_addr) | V_MODULE_ADDR(module_addr) |
V_BUNDLE_ADDR(bundle_addr) |
-V_SPI4_COMMAND(TRICN_CMD_WRITE));
-t1_write_reg_4(adapter, A_ESPI_GOSTAT, 0);
+V_SPI4_COMMAND(TRICN_CMD_WRITE),
+adapter->regs + A_ESPI_CMD_ADDR);
+writel(0, adapter->regs + A_ESPI_GOSTAT);
do {
-busy = t1_read_reg_4(adapter, A_ESPI_GOSTAT) & F_ESPI_CMD_BUSY;
+busy = readl(adapter->regs + A_ESPI_GOSTAT) & F_ESPI_CMD_BUSY;
} while (busy && --attempts);
if (busy)
...@@ -99,12 +100,12 @@ static int tricn_init(adapter_t *adapter) ...@@ -99,12 +100,12 @@ static int tricn_init(adapter_t *adapter)
/* 1 */ /* 1 */
timeout=1000; timeout=1000;
do { do {
stat = t1_read_reg_4(adapter, A_ESPI_RX_RESET); stat = readl(adapter->regs + A_ESPI_RX_RESET);
is_ready = (stat & 0x4); is_ready = (stat & 0x4);
timeout--; timeout--;
udelay(5); udelay(5);
} while (!is_ready || (timeout==0)); } while (!is_ready || (timeout==0));
t1_write_reg_4(adapter, A_ESPI_RX_RESET, 0x2); writel(0x2, adapter->regs + A_ESPI_RX_RESET);
if (timeout==0) if (timeout==0)
{ {
CH_ERR("ESPI : ERROR : Timeout tricn_init() \n"); CH_ERR("ESPI : ERROR : Timeout tricn_init() \n");
...@@ -127,14 +128,14 @@ static int tricn_init(adapter_t *adapter) ...@@ -127,14 +128,14 @@ static int tricn_init(adapter_t *adapter)
for (i=8; i<= 8; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xf1); for (i=8; i<= 8; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xf1);
/* 3 */ /* 3 */
t1_write_reg_4(adapter, A_ESPI_RX_RESET, 0x3); writel(0x3, adapter->regs + A_ESPI_RX_RESET);
return 0; return 0;
} }
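The reset handshake above polls A_ESPI_RX_RESET directly with readl()/udelay(). Note the loop condition in the source, "!is_ready || (timeout==0)", keeps spinning after timeout reaches zero, so the error branch below it appears unreachable; a minimal sketch of the presumably intended poll (poll_reg_bit is an illustrative name, not a driver function):

static int poll_reg_bit(void __iomem *regs, unsigned int reg, u32 mask,
                        int attempts, unsigned int delay_us)
{
        while (attempts--) {
                if (readl(regs + reg) & mask)
                        return 0;               /* bit came up in time */
                udelay(delay_us);
        }
        return -1;                              /* timed out */
}

/* Usage matching step 1 above:
 *      if (poll_reg_bit(adapter->regs, A_ESPI_RX_RESET, 0x4, 1000, 5) < 0)
 *              CH_ERR("ESPI : ERROR : Timeout tricn_init()\n");
 */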
void t1_espi_intr_enable(struct peespi *espi) void t1_espi_intr_enable(struct peespi *espi)
{ {
u32 enable, pl_intr = t1_read_reg_4(espi->adapter, A_PL_ENABLE); u32 enable, pl_intr = readl(espi->adapter->regs + A_PL_ENABLE);
/* /*
* Cannot enable ESPI interrupts on T1B because HW asserts the * Cannot enable ESPI interrupts on T1B because HW asserts the
...@@ -144,28 +145,28 @@ void t1_espi_intr_enable(struct peespi *espi) ...@@ -144,28 +145,28 @@ void t1_espi_intr_enable(struct peespi *espi)
* cannot be cleared (HW bug). * cannot be cleared (HW bug).
*/ */
enable = t1_is_T1B(espi->adapter) ? 0 : ESPI_INTR_MASK; enable = t1_is_T1B(espi->adapter) ? 0 : ESPI_INTR_MASK;
t1_write_reg_4(espi->adapter, A_ESPI_INTR_ENABLE, enable); writel(enable, espi->adapter->regs + A_ESPI_INTR_ENABLE);
t1_write_reg_4(espi->adapter, A_PL_ENABLE, pl_intr | F_PL_INTR_ESPI); writel(pl_intr | F_PL_INTR_ESPI, espi->adapter->regs + A_PL_ENABLE);
} }
void t1_espi_intr_clear(struct peespi *espi) void t1_espi_intr_clear(struct peespi *espi)
{ {
t1_write_reg_4(espi->adapter, A_ESPI_INTR_STATUS, 0xffffffff); writel(0xffffffff, espi->adapter->regs + A_ESPI_INTR_STATUS);
t1_write_reg_4(espi->adapter, A_PL_CAUSE, F_PL_INTR_ESPI); writel(F_PL_INTR_ESPI, espi->adapter->regs + A_PL_CAUSE);
} }
void t1_espi_intr_disable(struct peespi *espi) void t1_espi_intr_disable(struct peespi *espi)
{ {
u32 pl_intr = t1_read_reg_4(espi->adapter, A_PL_ENABLE); u32 pl_intr = readl(espi->adapter->regs + A_PL_ENABLE);
t1_write_reg_4(espi->adapter, A_ESPI_INTR_ENABLE, 0); writel(0, espi->adapter->regs + A_ESPI_INTR_ENABLE);
t1_write_reg_4(espi->adapter, A_PL_ENABLE, pl_intr & ~F_PL_INTR_ESPI); writel(pl_intr & ~F_PL_INTR_ESPI, espi->adapter->regs + A_PL_ENABLE);
} }
int t1_espi_intr_handler(struct peespi *espi) int t1_espi_intr_handler(struct peespi *espi)
{ {
u32 cnt; u32 cnt;
u32 status = t1_read_reg_4(espi->adapter, A_ESPI_INTR_STATUS); u32 status = readl(espi->adapter->regs + A_ESPI_INTR_STATUS);
if (status & F_DIP4ERR) if (status & F_DIP4ERR)
espi->intr_cnt.DIP4_err++; espi->intr_cnt.DIP4_err++;
...@@ -184,7 +185,7 @@ int t1_espi_intr_handler(struct peespi *espi) ...@@ -184,7 +185,7 @@ int t1_espi_intr_handler(struct peespi *espi)
* Must read the error count to clear the interrupt * Must read the error count to clear the interrupt
* that it causes. * that it causes.
*/ */
cnt = t1_read_reg_4(espi->adapter, A_ESPI_DIP2_ERR_COUNT); cnt = readl(espi->adapter->regs + A_ESPI_DIP2_ERR_COUNT);
} }
/* /*
...@@ -193,68 +194,28 @@ int t1_espi_intr_handler(struct peespi *espi) ...@@ -193,68 +194,28 @@ int t1_espi_intr_handler(struct peespi *espi)
*/ */
if (status && t1_is_T1B(espi->adapter)) if (status && t1_is_T1B(espi->adapter))
status = 1; status = 1;
t1_write_reg_4(espi->adapter, A_ESPI_INTR_STATUS, status); writel(status, espi->adapter->regs + A_ESPI_INTR_STATUS);
return 0; return 0;
} }
static void espi_setup_for_pm3393(adapter_t *adapter) const struct espi_intr_counts *t1_espi_get_intr_counts(struct peespi *espi)
{ {
u32 wmark = t1_is_T1B(adapter) ? 0x4000 : 0x3200; return &espi->intr_cnt;
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN0, 0x1f4);
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN1, 0x1f4);
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN2, 0x1f4);
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN3, 0x1f4);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK, 0x100);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK, wmark);
t1_write_reg_4(adapter, A_ESPI_CALENDAR_LENGTH, 3);
t1_write_reg_4(adapter, A_ESPI_TRAIN, 0x08000008);
t1_write_reg_4(adapter, A_PORT_CONFIG,
V_RX_NPORTS(1) | V_TX_NPORTS(1));
} }
static void espi_setup_for_vsc7321(adapter_t *adapter) static void espi_setup_for_pm3393(adapter_t *adapter)
{ {
u32 wmark = t1_is_T1B(adapter) ? 0x4000 : 0x3200; u32 wmark = t1_is_T1B(adapter) ? 0x4000 : 0x3200;
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN0, 0x1f4); writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN0);
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN1, 0x1f4); writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN1);
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN2, 0x1f4); writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN2);
t1_write_reg_4(adapter, A_ESPI_SCH_TOKEN3, 0x1f4); writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN3);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK, 0x100); writel(0x100, adapter->regs + A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK, wmark); writel(wmark, adapter->regs + A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK);
t1_write_reg_4(adapter, A_ESPI_CALENDAR_LENGTH, 3); writel(3, adapter->regs + A_ESPI_CALENDAR_LENGTH);
t1_write_reg_4(adapter, A_ESPI_TRAIN, 0x08000008); writel(0x08000008, adapter->regs + A_ESPI_TRAIN);
t1_write_reg_4(adapter, A_PORT_CONFIG, writel(V_RX_NPORTS(1) | V_TX_NPORTS(1), adapter->regs + A_PORT_CONFIG);
V_RX_NPORTS(1) | V_TX_NPORTS(1));
}
/*
* Note that T1B requires at least 2 ports for IXF1010 due to a HW bug.
*/
static void espi_setup_for_ixf1010(adapter_t *adapter, int nports)
{
t1_write_reg_4(adapter, A_ESPI_CALENDAR_LENGTH, 1);
if (nports == 4) {
if (is_T2(adapter)) {
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK,
0xf00);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK,
0x3c0);
} else {
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK,
0x7ff);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK,
0x1ff);
}
} else {
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK,
0x1fff);
t1_write_reg_4(adapter, A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK,
0x7ff);
}
t1_write_reg_4(adapter, A_PORT_CONFIG,
V_RX_NPORTS(nports) | V_TX_NPORTS(nports));
} }
/* T2 Init part -- */ /* T2 Init part -- */
...@@ -263,43 +224,42 @@ static void espi_setup_for_ixf1010(adapter_t *adapter, int nports) ...@@ -263,43 +224,42 @@ static void espi_setup_for_ixf1010(adapter_t *adapter, int nports)
/* 3. Init TriCN Hard Macro */ /* 3. Init TriCN Hard Macro */
int t1_espi_init(struct peespi *espi, int mac_type, int nports) int t1_espi_init(struct peespi *espi, int mac_type, int nports)
{ {
u32 cnt;
u32 status_enable_extra = 0; u32 status_enable_extra = 0;
adapter_t *adapter = espi->adapter; adapter_t *adapter = espi->adapter;
u32 cnt;
u32 status, burstval = 0x800100; u32 status, burstval = 0x800100;
/* Disable ESPI training. MACs that can handle it enable it below. */ /* Disable ESPI training. MACs that can handle it enable it below. */
t1_write_reg_4(adapter, A_ESPI_TRAIN, 0); writel(0, adapter->regs + A_ESPI_TRAIN);
if (is_T2(adapter)) { if (is_T2(adapter)) {
t1_write_reg_4(adapter, A_ESPI_MISC_CONTROL, writel(V_OUT_OF_SYNC_COUNT(4) |
V_OUT_OF_SYNC_COUNT(4) | V_DIP2_PARITY_ERR_THRES(3) |
V_DIP2_PARITY_ERR_THRES(3) | V_DIP4_THRES(1)); V_DIP4_THRES(1), adapter->regs + A_ESPI_MISC_CONTROL);
if (nports == 4) { if (nports == 4) {
/* T204: maxburst1 = 0x40, maxburst2 = 0x20 */ /* T204: maxburst1 = 0x40, maxburst2 = 0x20 */
burstval = 0x200040; burstval = 0x200040;
} }
} }
t1_write_reg_4(adapter, A_ESPI_MAXBURST1_MAXBURST2, burstval); writel(burstval, adapter->regs + A_ESPI_MAXBURST1_MAXBURST2);
if (mac_type == CHBT_MAC_PM3393) switch (mac_type) {
case CHBT_MAC_PM3393:
espi_setup_for_pm3393(adapter); espi_setup_for_pm3393(adapter);
else if (mac_type == CHBT_MAC_VSC7321) break;
espi_setup_for_vsc7321(adapter); default:
else if (mac_type == CHBT_MAC_IXF1010) {
status_enable_extra = F_INTEL1010MODE;
espi_setup_for_ixf1010(adapter, nports);
} else
return -1; return -1;
}
/* /*
* Make sure any pending interrupts from the SPI are * Make sure any pending interrupts from the SPI are
* cleared before enabling the interrupt. * cleared before enabling the interrupt.
*/ */
t1_write_reg_4(espi->adapter, A_ESPI_INTR_ENABLE, ESPI_INTR_MASK); writel(ESPI_INTR_MASK, espi->adapter->regs + A_ESPI_INTR_ENABLE);
status = t1_read_reg_4(espi->adapter, A_ESPI_INTR_STATUS); status = readl(espi->adapter->regs + A_ESPI_INTR_STATUS);
if (status & F_DIP2PARITYERR) { if (status & F_DIP2PARITYERR) {
cnt = t1_read_reg_4(espi->adapter, A_ESPI_DIP2_ERR_COUNT); cnt = readl(espi->adapter->regs + A_ESPI_DIP2_ERR_COUNT);
} }
/* /*
...@@ -308,10 +268,10 @@ int t1_espi_init(struct peespi *espi, int mac_type, int nports) ...@@ -308,10 +268,10 @@ int t1_espi_init(struct peespi *espi, int mac_type, int nports)
*/ */
if (status && t1_is_T1B(espi->adapter)) if (status && t1_is_T1B(espi->adapter))
status = 1; status = 1;
t1_write_reg_4(espi->adapter, A_ESPI_INTR_STATUS, status); writel(status, espi->adapter->regs + A_ESPI_INTR_STATUS);
t1_write_reg_4(adapter, A_ESPI_FIFO_STATUS_ENABLE, writel(status_enable_extra | F_RXSTATUSENABLE,
status_enable_extra | F_RXSTATUSENABLE); adapter->regs + A_ESPI_FIFO_STATUS_ENABLE);
if (is_T2(adapter)) { if (is_T2(adapter)) {
tricn_init(adapter); tricn_init(adapter);
...@@ -319,10 +279,10 @@ int t1_espi_init(struct peespi *espi, int mac_type, int nports) ...@@ -319,10 +279,10 @@ int t1_espi_init(struct peespi *espi, int mac_type, int nports)
* Always position the control at the 1st port egress IN * Always position the control at the 1st port egress IN
* (sop,eop) counter to reduce PIOs for T/N210 workaround. * (sop,eop) counter to reduce PIOs for T/N210 workaround.
*/ */
espi->misc_ctrl = (t1_read_reg_4(adapter, A_ESPI_MISC_CONTROL) espi->misc_ctrl = (readl(adapter->regs + A_ESPI_MISC_CONTROL)
& ~MON_MASK) | (F_MONITORED_DIRECTION & ~MON_MASK) | (F_MONITORED_DIRECTION
| F_MONITORED_INTERFACE); | F_MONITORED_INTERFACE);
t1_write_reg_4(adapter, A_ESPI_MISC_CONTROL, espi->misc_ctrl); writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);
spin_lock_init(&espi->lock); spin_lock_init(&espi->lock);
} }
...@@ -354,15 +314,16 @@ void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val) ...@@ -354,15 +314,16 @@ void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val)
spin_lock(&espi->lock); spin_lock(&espi->lock);
espi->misc_ctrl = (val & ~MON_MASK) | espi->misc_ctrl = (val & ~MON_MASK) |
(espi->misc_ctrl & MON_MASK); (espi->misc_ctrl & MON_MASK);
t1_write_reg_4(adapter, A_ESPI_MISC_CONTROL, espi->misc_ctrl); writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);
spin_unlock(&espi->lock); spin_unlock(&espi->lock);
} }
u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait) u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait)
{ {
struct peespi *espi = adapter->espi;
u32 sel; u32 sel;
struct peespi *espi = adapter->espi;
if (!is_T2(adapter)) if (!is_T2(adapter))
return 0; return 0;
sel = V_MONITORED_PORT_NUM((addr & 0x3c) >> 2); sel = V_MONITORED_PORT_NUM((addr & 0x3c) >> 2);
...@@ -373,14 +334,13 @@ u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait) ...@@ -373,14 +334,13 @@ u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait)
else else
spin_lock(&espi->lock); spin_lock(&espi->lock);
if ((sel != (espi->misc_ctrl & MON_MASK))) { if ((sel != (espi->misc_ctrl & MON_MASK))) {
t1_write_reg_4(adapter, A_ESPI_MISC_CONTROL, writel(((espi->misc_ctrl & ~MON_MASK) | sel),
((espi->misc_ctrl & ~MON_MASK) | sel)); adapter->regs + A_ESPI_MISC_CONTROL);
sel = t1_read_reg_4(adapter, A_ESPI_SCH_TOKEN3); sel = readl(adapter->regs + A_ESPI_SCH_TOKEN3);
t1_write_reg_4(adapter, A_ESPI_MISC_CONTROL, writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);
espi->misc_ctrl);
} }
else else
sel = t1_read_reg_4(adapter, A_ESPI_SCH_TOKEN3); sel = readl(adapter->regs + A_ESPI_SCH_TOKEN3);
spin_unlock(&espi->lock); spin_unlock(&espi->lock);
return sel; return sel;
} }
/***************************************************************************** /*****************************************************************************
* * * *
* File: espi.h * * File: espi.h *
* $Revision: 1.4 $ * * $Revision: 1.7 $ *
* $Date: 2005/03/23 07:15:58 $ * * $Date: 2005/06/21 18:29:47 $ *
* Description: * * Description: *
* part of the Chelsio 10Gb Ethernet Driver. * * part of the Chelsio 10Gb Ethernet Driver. *
* * * *
...@@ -36,8 +36,8 @@ ...@@ -36,8 +36,8 @@
* * * *
****************************************************************************/ ****************************************************************************/
#ifndef CHELSIO_ESPI_H #ifndef _CXGB_ESPI_H_
#define CHELSIO_ESPI_H #define _CXGB_ESPI_H_
#include "common.h" #include "common.h"
...@@ -60,8 +60,9 @@ void t1_espi_intr_enable(struct peespi *); ...@@ -60,8 +60,9 @@ void t1_espi_intr_enable(struct peespi *);
void t1_espi_intr_clear(struct peespi *); void t1_espi_intr_clear(struct peespi *);
void t1_espi_intr_disable(struct peespi *); void t1_espi_intr_disable(struct peespi *);
int t1_espi_intr_handler(struct peespi *); int t1_espi_intr_handler(struct peespi *);
const struct espi_intr_counts *t1_espi_get_intr_counts(struct peespi *espi);
void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val); void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val);
u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait); u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait);
#endif #endif /* _CXGB_ESPI_H_ */
/***************************************************************************** /*****************************************************************************
* * * *
* File: gmac.h * * File: gmac.h *
* $Revision: 1.3 $ * * $Revision: 1.6 $ *
* $Date: 2005/03/23 07:15:58 $ * * $Date: 2005/06/21 18:29:47 $ *
* Description: * * Description: *
* Generic MAC functionality. * * Generic MAC functionality. *
* part of the Chelsio 10Gb Ethernet Driver. * * part of the Chelsio 10Gb Ethernet Driver. *
...@@ -37,8 +37,8 @@ ...@@ -37,8 +37,8 @@
* * * *
****************************************************************************/ ****************************************************************************/
#ifndef CHELSIO_GMAC_H #ifndef _CXGB_GMAC_H_
#define CHELSIO_GMAC_H #define _CXGB_GMAC_H_
#include "common.h" #include "common.h"
...@@ -130,4 +130,5 @@ extern struct gmac t1_chelsio_mac_ops; ...@@ -130,4 +130,5 @@ extern struct gmac t1_chelsio_mac_ops;
extern struct gmac t1_vsc7321_ops; extern struct gmac t1_vsc7321_ops;
extern struct gmac t1_ixf1010_ops; extern struct gmac t1_ixf1010_ops;
extern struct gmac t1_dummy_mac_ops; extern struct gmac t1_dummy_mac_ops;
#endif
#endif /* _CXGB_GMAC_H_ */
/***************************************************************************** /*****************************************************************************
* * * *
* File: mv88x201x.c * * File: mv88x201x.c *
* $Revision: 1.7 $ * * $Revision: 1.12 $ *
* $Date: 2005/03/23 07:15:59 $ * * $Date: 2005/04/15 19:27:14 $ *
* Description: * * Description: *
* Marvell PHY (mv88x201x) functionality. * * Marvell PHY (mv88x201x) functionality. *
* part of the Chelsio 10Gb Ethernet Driver. * * part of the Chelsio 10Gb Ethernet Driver. *
...@@ -85,33 +85,29 @@ static int mv88x201x_reset(struct cphy *cphy, int wait) ...@@ -85,33 +85,29 @@ static int mv88x201x_reset(struct cphy *cphy, int wait)
static int mv88x201x_interrupt_enable(struct cphy *cphy) static int mv88x201x_interrupt_enable(struct cphy *cphy)
{ {
u32 elmer;
/* Enable PHY LASI interrupts. */ /* Enable PHY LASI interrupts. */
mdio_write(cphy, 0x1, 0x9002, 0x1); mdio_write(cphy, 0x1, 0x9002, 0x1);
/* Enable Marvell interrupts through Elmer0. */ /* Enable Marvell interrupts through Elmer0. */
if (t1_is_asic(cphy->adapter)) {
u32 elmer;
t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer); t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
elmer |= ELMER0_GP_BIT6; elmer |= ELMER0_GP_BIT6;
t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer); t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
}
return 0; return 0;
} }
static int mv88x201x_interrupt_disable(struct cphy *cphy) static int mv88x201x_interrupt_disable(struct cphy *cphy)
{ {
u32 elmer;
/* Disable PHY LASI interrupts. */ /* Disable PHY LASI interrupts. */
mdio_write(cphy, 0x1, 0x9002, 0x0); mdio_write(cphy, 0x1, 0x9002, 0x0);
/* Disable Marvell interrupts through Elmer0. */ /* Disable Marvell interrupts through Elmer0. */
if (t1_is_asic(cphy->adapter)) {
u32 elmer;
t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer); t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
elmer &= ~ELMER0_GP_BIT6; elmer &= ~ELMER0_GP_BIT6;
t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer); t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
}
return 0; return 0;
} }
...@@ -144,11 +140,9 @@ static int mv88x201x_interrupt_clear(struct cphy *cphy) ...@@ -144,11 +140,9 @@ static int mv88x201x_interrupt_clear(struct cphy *cphy)
#endif #endif
/* Clear Marvell interrupts through Elmer0. */ /* Clear Marvell interrupts through Elmer0. */
if (t1_is_asic(cphy->adapter)) {
t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer); t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer);
elmer |= ELMER0_GP_BIT6; elmer |= ELMER0_GP_BIT6;
t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer); t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer);
}
return 0; return 0;
} }
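The enable/disable/clear paths above share one read-modify-write pattern over the TPI; condensed into a sketch (elmer0_rmw_bit is an illustrative name only):

static void elmer0_rmw_bit(struct cphy *cphy, u32 reg, u32 bit, int set)
{
        u32 val;

        t1_tpi_read(cphy->adapter, reg, &val);
        if (set)
                val |= bit;
        else
                val &= ~bit;
        t1_tpi_write(cphy->adapter, reg, val);
}

/* e.g. elmer0_rmw_bit(cphy, A_ELMER0_INT_ENABLE, ELMER0_GP_BIT6, 1); */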
...
/*****************************************************************************
* *
* File: osdep.h *
* $Revision: 1.9 $ *
* $Date: 2005/03/23 07:41:27 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License, version 2, as *
* published by the Free Software Foundation. *
* *
* You should have received a copy of the GNU General Public License along *
* with this program; if not, write to the Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
* *
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *
* WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *
* *
* http://www.chelsio.com *
* *
* Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *
* All rights reserved. *
* *
* Maintainers: maintainers@chelsio.com *
* *
* Authors: Dimitrios Michailidis <dm@chelsio.com> *
* Tina Yang <tainay@chelsio.com> *
* Felix Marti <felix@chelsio.com> *
* Scott Bardone <sbardone@chelsio.com> *
* Kurt Ottaway <kottaway@chelsio.com> *
* Frank DiMambro <frank@chelsio.com> *
* *
* History: *
* *
****************************************************************************/
#ifndef __CHELSIO_OSDEP_H
#define __CHELSIO_OSDEP_H
#include <linux/version.h>
#include <linux/module.h>
#include <linux/config.h>
#include <linux/types.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/crc32.h>
#include <linux/init.h>
#include <asm/io.h>
#include "cxgb2.h"
#define DRV_NAME "cxgb"
#define PFX DRV_NAME ": "
#define CH_ERR(fmt, ...) printk(KERN_ERR PFX fmt, ## __VA_ARGS__)
#define CH_WARN(fmt, ...) printk(KERN_WARNING PFX fmt, ## __VA_ARGS__)
#define CH_ALERT(fmt, ...) printk(KERN_ALERT PFX fmt, ## __VA_ARGS__)
/*
* More powerful macro that selectively prints messages based on msg_enable.
* For info and debugging messages.
*/
#define CH_MSG(adapter, level, category, fmt, ...) do { \
if ((adapter)->msg_enable & NETIF_MSG_##category) \
printk(KERN_##level PFX "%s: " fmt, (adapter)->name, \
## __VA_ARGS__); \
} while (0)
#ifdef DEBUG
# define CH_DBG(adapter, category, fmt, ...) \
CH_MSG(adapter, DEBUG, category, fmt, ## __VA_ARGS__)
#else
# define CH_DBG(fmt, ...)
#endif
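/* Usage, mirroring the CH_DBG() call removed from pm3393.c later in
 * this patch:
 *      CH_DBG(cmac->adapter, INTR, "PM3393 intr cause 0x%x\n", cause);
 * The message is compiled in only when DEBUG is defined and emitted
 * only when NETIF_MSG_INTR is set in adapter->msg_enable.
 */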
/* Additional NETIF_MSG_* categories */
#define NETIF_MSG_MMIO 0x8000000
#define CH_DEVICE(devid, ssid, idx) \
{ PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, ssid, 0, 0, idx }
#define SUPPORTED_PAUSE (1 << 13)
#define SUPPORTED_LOOPBACK (1 << 15)
#define ADVERTISED_PAUSE (1 << 13)
#define ADVERTISED_ASYM_PAUSE (1 << 14)
/*
* Now that we have included the driver's main data structure,
* we typedef it to something the rest of the system understands.
*/
typedef struct adapter adapter_t;
#define TPI_LOCK(adapter) spin_lock(&(adapter)->tpi_lock)
#define TPI_UNLOCK(adapter) spin_unlock(&(adapter)->tpi_lock)
void t1_elmer0_ext_intr(adapter_t *adapter);
void t1_link_changed(adapter_t *adapter, int port_id, int link_status,
int speed, int duplex, int fc);
static inline u16 t1_read_reg_2(adapter_t *adapter, u32 reg_addr)
{
u16 val = readw(adapter->regs + reg_addr);
CH_DBG(adapter, MMIO, "read register 0x%x value 0x%x\n", reg_addr,
val);
return val;
}
static inline void t1_write_reg_2(adapter_t *adapter, u32 reg_addr, u16 val)
{
CH_DBG(adapter, MMIO, "setting register 0x%x to 0x%x\n", reg_addr,
val);
writew(val, adapter->regs + reg_addr);
}
static inline u32 t1_read_reg_4(adapter_t *adapter, u32 reg_addr)
{
u32 val = readl(adapter->regs + reg_addr);
CH_DBG(adapter, MMIO, "read register 0x%x value 0x%x\n", reg_addr,
val);
return val;
}
static inline void t1_write_reg_4(adapter_t *adapter, u32 reg_addr, u32 val)
{
CH_DBG(adapter, MMIO, "setting register 0x%x to 0x%x\n", reg_addr,
val);
writel(val, adapter->regs + reg_addr);
}
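These wrappers reduce to the native accessors on the ioremap'ed BAR, which is exactly the substitution made throughout this patch:

/* t1_read_reg_4(adapter, reg)        ==  readl(adapter->regs + reg)       */
/* t1_write_reg_4(adapter, reg, val)  ==  writel(val, adapter->regs + reg) */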
static inline const char *port_name(adapter_t *adapter, int port_idx)
{
return adapter->port[port_idx].dev->name;
}
static inline void t1_set_hw_addr(adapter_t *adapter, int port_idx,
u8 hw_addr[])
{
memcpy(adapter->port[port_idx].dev->dev_addr, hw_addr, ETH_ALEN);
}
struct t1_rx_mode {
struct net_device *dev;
u32 idx;
struct dev_mc_list *list;
};
#define t1_rx_mode_promisc(rm) (rm->dev->flags & IFF_PROMISC)
#define t1_rx_mode_allmulti(rm) (rm->dev->flags & IFF_ALLMULTI)
#define t1_rx_mode_mc_cnt(rm) (rm->dev->mc_count)
static inline u8 *t1_get_next_mcaddr(struct t1_rx_mode *rm)
{
u8 *addr = 0;
if (rm->idx++ < rm->dev->mc_count) {
addr = rm->list->dmi_addr;
rm->list = rm->list->next;
}
return addr;
}
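A sketch of how a caller drains this iterator (the function is hypothetical; dev->mc_list is the 2.6-era multicast list head):

static void example_walk_mc(struct net_device *dev)
{
        struct t1_rx_mode rm = { .dev = dev, .idx = 0, .list = dev->mc_list };
        u8 *addr;

        while ((addr = t1_get_next_mcaddr(&rm)) != NULL) {
                /* program addr (ETH_ALEN bytes) into the next match filter */
        }
}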
#endif
/***************************************************************************** /*****************************************************************************
* * * *
* File: pm3393.c * * File: pm3393.c *
* $Revision: 1.9 $ * * $Revision: 1.16 $ *
* $Date: 2005/03/23 07:41:27 $ * * $Date: 2005/05/14 00:59:32 $ *
* Description: * * Description: *
* PMC/SIERRA (pm3393) MAC-PHY functionality. * * PMC/SIERRA (pm3393) MAC-PHY functionality. *
* part of the Chelsio 10Gb Ethernet Driver. * * part of the Chelsio 10Gb Ethernet Driver. *
...@@ -45,15 +45,19 @@ ...@@ -45,15 +45,19 @@
/* 802.3ae 10Gb/s MDIO Manageable Device(MMD) /* 802.3ae 10Gb/s MDIO Manageable Device(MMD)
*/ */
#define MMD_RESERVED 0 enum {
#define MMD_PMAPMD 1 MMD_RESERVED,
#define MMD_WIS 2 MMD_PMAPMD,
#define MMD_PCS 3 MMD_WIS,
#define MMD_PHY_XGXS 4 /* XGMII Extender Sublayer */ MMD_PCS,
#define MMD_DTE_XGXS 5 MMD_PHY_XGXS, /* XGMII Extender Sublayer */
MMD_DTE_XGXS,
};
#define PHY_XGXS_CTRL_1 0 enum {
#define PHY_XGXS_STATUS_1 1 PHY_XGXS_CTRL_1,
PHY_XGXS_STATUS_1
};
#define OFFSET(REG_ADDR) (REG_ADDR << 2) #define OFFSET(REG_ADDR) (REG_ADDR << 2)
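/* OFFSET() turns a 16-bit PM3393 register number into the byte offset
 * used by the TPI accessors. A sketch of the read path (assumption:
 * pmread()/pmwrite() in this file wrap t1_tpi_read()/t1_tpi_write()
 * this way):
 *
 *      static int pm3393_reg_read(adapter_t *adapter, u32 reg, u32 *val)
 *      {
 *              return t1_tpi_read(adapter, OFFSET(reg), val);
 *      }
 *
 * Note REG_ADDR is unparenthesized in the expansion, so pass only
 * simple register numbers, not compound expressions.
 */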
...@@ -160,9 +164,9 @@ static int pm3393_interrupt_enable(struct cmac *cmac) ...@@ -160,9 +164,9 @@ static int pm3393_interrupt_enable(struct cmac *cmac)
0 /*SUNI1x10GEXP_BITMSK_TOP_INTE */ ); 0 /*SUNI1x10GEXP_BITMSK_TOP_INTE */ );
/* TERMINATOR - PL_INTERUPTS_EXT */ /* TERMINATOR - PL_INTERUPTS_EXT */
pl_intr = t1_read_reg_4(cmac->adapter, A_PL_ENABLE); pl_intr = readl(cmac->adapter->regs + A_PL_ENABLE);
pl_intr |= F_PL_INTR_EXT; pl_intr |= F_PL_INTR_EXT;
t1_write_reg_4(cmac->adapter, A_PL_ENABLE, pl_intr); writel(pl_intr, cmac->adapter->regs + A_PL_ENABLE);
return 0; return 0;
} }
...@@ -242,9 +246,9 @@ static int pm3393_interrupt_clear(struct cmac *cmac) ...@@ -242,9 +246,9 @@ static int pm3393_interrupt_clear(struct cmac *cmac)
/* TERMINATOR - PL_INTERUPTS_EXT /* TERMINATOR - PL_INTERUPTS_EXT
*/ */
pl_intr = t1_read_reg_4(cmac->adapter, A_PL_CAUSE); pl_intr = readl(cmac->adapter->regs + A_PL_CAUSE);
pl_intr |= F_PL_INTR_EXT; pl_intr |= F_PL_INTR_EXT;
t1_write_reg_4(cmac->adapter, A_PL_CAUSE, pl_intr); writel(pl_intr, cmac->adapter->regs + A_PL_CAUSE);
return 0; return 0;
} }
...@@ -261,8 +265,6 @@ static int pm3393_interrupt_handler(struct cmac *cmac) ...@@ -261,8 +265,6 @@ static int pm3393_interrupt_handler(struct cmac *cmac)
/* Read the master interrupt status register. */ /* Read the master interrupt status register. */
pmread(cmac, SUNI1x10GEXP_REG_MASTER_INTERRUPT_STATUS, pmread(cmac, SUNI1x10GEXP_REG_MASTER_INTERRUPT_STATUS,
&master_intr_status); &master_intr_status);
CH_DBG(cmac->adapter, INTR, "PM3393 intr cause 0x%x\n",
master_intr_status);
/* TBD XXX Let's just clear everything for now */ /* TBD XXX Let's just clear everything for now */
pm3393_interrupt_clear(cmac); pm3393_interrupt_clear(cmac);
...@@ -703,10 +705,9 @@ static struct cmac *pm3393_mac_create(adapter_t *adapter, int index) ...@@ -703,10 +705,9 @@ static struct cmac *pm3393_mac_create(adapter_t *adapter, int index)
t1_tpi_write(adapter, OFFSET(0x3040), 0x0c32); /* # TXXG Config */ t1_tpi_write(adapter, OFFSET(0x3040), 0x0c32); /* # TXXG Config */
/* For T1 use timer based Mac flow control. */ /* For T1 use timer based Mac flow control. */
if (t1_is_T1B(adapter))
t1_tpi_write(adapter, OFFSET(0x304d), 0x8000); t1_tpi_write(adapter, OFFSET(0x304d), 0x8000);
t1_tpi_write(adapter, OFFSET(0x2040), 0x059c); /* # RXXG Config */ t1_tpi_write(adapter, OFFSET(0x2040), 0x059c); /* # RXXG Config */
t1_tpi_write(adapter, OFFSET(0x2049), 0x0000); /* # RXXG Cut Through */ t1_tpi_write(adapter, OFFSET(0x2049), 0x0001); /* # RXXG Cut Through */
t1_tpi_write(adapter, OFFSET(0x2070), 0x0000); /* # Disable promiscuous mode */ t1_tpi_write(adapter, OFFSET(0x2070), 0x0000); /* # Disable promiscuous mode */
/* Setup Exact Match Filter 0 to allow broadcast packets. /* Setup Exact Match Filter 0 to allow broadcast packets.
...@@ -814,12 +815,6 @@ static int pm3393_mac_reset(adapter_t * adapter) ...@@ -814,12 +815,6 @@ static int pm3393_mac_reset(adapter_t * adapter)
successful_reset = (is_pl4_reset_finished && !is_pl4_outof_lock successful_reset = (is_pl4_reset_finished && !is_pl4_outof_lock
&& is_xaui_mabc_pll_locked); && is_xaui_mabc_pll_locked);
CH_DBG(adapter, HW,
"PM3393 HW reset %d: pl4_reset 0x%x, val 0x%x, "
"is_pl4_outof_lock 0x%x, xaui_locked 0x%x\n",
i, is_pl4_reset_finished, val, is_pl4_outof_lock,
is_xaui_mabc_pll_locked);
} }
return successful_reset ? 0 : 1; return successful_reset ? 0 : 1;
} }
...
/***************************************************************************** /*****************************************************************************
* * * *
* File: regs.h * * File: regs.h *
* $Revision: 1.4 $ * * $Revision: 1.8 $ *
* $Date: 2005/03/23 07:15:59 $ * * $Date: 2005/06/21 18:29:48 $ *
* Description: * * Description: *
* part of the Chelsio 10Gb Ethernet Driver. * * part of the Chelsio 10Gb Ethernet Driver. *
* * * *
...@@ -36,7 +36,8 @@ ...@@ -36,7 +36,8 @@
* * * *
****************************************************************************/ ****************************************************************************/
/* Do not edit this file */ #ifndef _CXGB_REGS_H_
#define _CXGB_REGS_H_
/* SGE registers */ /* SGE registers */
#define A_SG_CONTROL 0x0 #define A_SG_CONTROL 0x0
...@@ -74,6 +75,14 @@ ...@@ -74,6 +75,14 @@
#define V_DISABLE_CMDQ1_GTS(x) ((x) << S_DISABLE_CMDQ1_GTS) #define V_DISABLE_CMDQ1_GTS(x) ((x) << S_DISABLE_CMDQ1_GTS)
#define F_DISABLE_CMDQ1_GTS V_DISABLE_CMDQ1_GTS(1U) #define F_DISABLE_CMDQ1_GTS V_DISABLE_CMDQ1_GTS(1U)
#define S_DISABLE_FL0_GTS 10
#define V_DISABLE_FL0_GTS(x) ((x) << S_DISABLE_FL0_GTS)
#define F_DISABLE_FL0_GTS V_DISABLE_FL0_GTS(1U)
#define S_DISABLE_FL1_GTS 11
#define V_DISABLE_FL1_GTS(x) ((x) << S_DISABLE_FL1_GTS)
#define F_DISABLE_FL1_GTS V_DISABLE_FL1_GTS(1U)
#define S_ENABLE_BIG_ENDIAN 12 #define S_ENABLE_BIG_ENDIAN 12
#define V_ENABLE_BIG_ENDIAN(x) ((x) << S_ENABLE_BIG_ENDIAN) #define V_ENABLE_BIG_ENDIAN(x) ((x) << S_ENABLE_BIG_ENDIAN)
#define F_ENABLE_BIG_ENDIAN V_ENABLE_BIG_ENDIAN(1U) #define F_ENABLE_BIG_ENDIAN V_ENABLE_BIG_ENDIAN(1U)
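These triples follow the header's field convention: S_x is the bit offset, V_x(v) shifts a field value into place, and F_x = V_x(1U) names a one-bit flag, so a control word is composed with plain ORs. A sketch using values configure_sge() in sge.c actually sets:

        u32 ctrl = F_CMDQ0_ENABLE | F_FL0_ENABLE | F_RESPONSE_QUEUE_ENABLE |
                   F_DISABLE_FL0_GTS | F_DISABLE_FL1_GTS | V_CMDQ_PRIORITY(2);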
...@@ -132,6 +141,7 @@ ...@@ -132,6 +141,7 @@
#define F_PACKET_MISMATCH V_PACKET_MISMATCH(1U) #define F_PACKET_MISMATCH V_PACKET_MISMATCH(1U)
#define A_SG_INT_CAUSE 0xbc #define A_SG_INT_CAUSE 0xbc
#define A_SG_RESPACCUTIMER 0xc0
/* MC3 registers */ /* MC3 registers */
...@@ -247,6 +257,10 @@ ...@@ -247,6 +257,10 @@
#define V_SYN_COOKIE_PARAMETER(x) ((x) << S_SYN_COOKIE_PARAMETER) #define V_SYN_COOKIE_PARAMETER(x) ((x) << S_SYN_COOKIE_PARAMETER)
#define A_TP_PC_CONFIG 0x348 #define A_TP_PC_CONFIG 0x348
#define S_DIS_TX_FILL_WIN_PUSH 12
#define V_DIS_TX_FILL_WIN_PUSH(x) ((x) << S_DIS_TX_FILL_WIN_PUSH)
#define F_DIS_TX_FILL_WIN_PUSH V_DIS_TX_FILL_WIN_PUSH(1U)
#define S_TP_PC_REV 30 #define S_TP_PC_REV 30
#define M_TP_PC_REV 0x3 #define M_TP_PC_REV 0x3
#define G_TP_PC_REV(x) (((x) >> S_TP_PC_REV) & M_TP_PC_REV) #define G_TP_PC_REV(x) (((x) >> S_TP_PC_REV) & M_TP_PC_REV)
...@@ -451,3 +465,4 @@ ...@@ -451,3 +465,4 @@
#define M_PCI_MODE_CLK 0x3 #define M_PCI_MODE_CLK 0x3
#define G_PCI_MODE_CLK(x) (((x) >> S_PCI_MODE_CLK) & M_PCI_MODE_CLK) #define G_PCI_MODE_CLK(x) (((x) >> S_PCI_MODE_CLK) & M_PCI_MODE_CLK)
#endif /* _CXGB_REGS_H_ */
/***************************************************************************** /*****************************************************************************
* * * *
* File: sge.c * * File: sge.c *
* $Revision: 1.13 $ * * $Revision: 1.26 $ *
* $Date: 2005/03/23 07:41:27 $ * * $Date: 2005/06/21 18:29:48 $ *
* Description: * * Description: *
* DMA engine. * * DMA engine. *
* part of the Chelsio 10Gb Ethernet Driver. * * part of the Chelsio 10Gb Ethernet Driver. *
...@@ -58,59 +58,62 @@ ...@@ -58,59 +58,62 @@
#include "regs.h" #include "regs.h"
#include "espi.h" #include "espi.h"
#ifdef NETIF_F_TSO
#include <linux/tcp.h> #include <linux/tcp.h>
#endif
#define SGE_CMDQ_N 2 #define SGE_CMDQ_N 2
#define SGE_FREELQ_N 2 #define SGE_FREELQ_N 2
#define SGE_CMDQ0_E_N 512 #define SGE_CMDQ0_E_N 1024
#define SGE_CMDQ1_E_N 128 #define SGE_CMDQ1_E_N 128
#define SGE_FREEL_SIZE 4096 #define SGE_FREEL_SIZE 4096
#define SGE_JUMBO_FREEL_SIZE 512 #define SGE_JUMBO_FREEL_SIZE 512
#define SGE_FREEL_REFILL_THRESH 16 #define SGE_FREEL_REFILL_THRESH 16
#define SGE_RESPQ_E_N 1024 #define SGE_RESPQ_E_N 1024
#define SGE_INTR_BUCKETSIZE 100 #define SGE_INTRTIMER_NRES 1000
#define SGE_INTR_LATBUCKETS 5 #define SGE_RX_COPY_THRES 256
#define SGE_INTR_MAXBUCKETS 11
#define SGE_INTRTIMER0 1
#define SGE_INTRTIMER1 50
#define SGE_INTRTIMER_NRES 10000
#define SGE_RX_COPY_THRESHOLD 256
#define SGE_RX_SM_BUF_SIZE 1536 #define SGE_RX_SM_BUF_SIZE 1536
#define SGE_RESPQ_REPLENISH_THRES ((3 * SGE_RESPQ_E_N) / 4) # define SGE_RX_DROP_THRES 2
#define SGE_RESPQ_REPLENISH_THRES (SGE_RESPQ_E_N / 4)
/*
* Period of the TX buffer reclaim timer. This timer does not need to run
* frequently as TX buffers are usually reclaimed by new TX packets.
*/
#define TX_RECLAIM_PERIOD (HZ / 4)
#define SGE_RX_OFFSET 2
#ifndef NET_IP_ALIGN #ifndef NET_IP_ALIGN
# define NET_IP_ALIGN SGE_RX_OFFSET # define NET_IP_ALIGN 2
#endif #endif
#define M_CMD_LEN 0x7fffffff
#define V_CMD_LEN(v) (v)
#define G_CMD_LEN(v) ((v) & M_CMD_LEN)
#define V_CMD_GEN1(v) ((v) << 31)
#define V_CMD_GEN2(v) (v)
#define F_CMD_DATAVALID (1 << 1)
#define F_CMD_SOP (1 << 2)
#define V_CMD_EOP(v) ((v) << 3)
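/* Sketch of a transmit descriptor packed with these macros (assumption:
 * this mirrors how sge.c below fills cmdQ entries):
 *
 *      e->len_gen = V_CMD_LEN(len) | V_CMD_GEN1(gen);
 *      e->flags   = F_CMD_DATAVALID | F_CMD_SOP | V_CMD_EOP(1) |
 *                   V_CMD_GEN2(gen);
 *
 * The generation bit appears twice (GEN1 in len_gen, GEN2 in the word
 * written last) so hardware can distinguish a fully written descriptor
 * from a stale one.
 */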
/* /*
* Memory Mapped HW Command, Freelist and Response Queue Descriptors * Command queue, receive buffer list, and response queue descriptors.
*/ */
#if defined(__BIG_ENDIAN_BITFIELD) #if defined(__BIG_ENDIAN_BITFIELD)
struct cmdQ_e { struct cmdQ_e {
u32 AddrLow; u32 addr_lo;
u32 GenerationBit : 1; u32 len_gen;
u32 BufferLength : 31; u32 flags;
u32 RespQueueSelector : 4; u32 addr_hi;
u32 ResponseTokens : 12;
u32 CmdId : 8;
u32 Reserved : 3;
u32 TokenValid : 1;
u32 Eop : 1;
u32 Sop : 1;
u32 DataValid : 1;
u32 GenerationBit2 : 1;
u32 AddrHigh;
}; };
struct freelQ_e { struct freelQ_e {
u32 AddrLow; u32 addr_lo;
u32 GenerationBit : 1; u32 len_gen;
u32 BufferLength : 31; u32 gen2;
u32 Reserved : 31; u32 addr_hi;
u32 GenerationBit2 : 1;
u32 AddrHigh;
}; };
struct respQ_e { struct respQ_e {
...@@ -128,31 +131,19 @@ struct respQ_e { ...@@ -128,31 +131,19 @@ struct respQ_e {
u32 GenerationBit : 1; u32 GenerationBit : 1;
u32 BufferLength; u32 BufferLength;
}; };
#elif defined(__LITTLE_ENDIAN_BITFIELD) #elif defined(__LITTLE_ENDIAN_BITFIELD)
struct cmdQ_e { struct cmdQ_e {
u32 BufferLength : 31; u32 len_gen;
u32 GenerationBit : 1; u32 addr_lo;
u32 AddrLow; u32 addr_hi;
u32 AddrHigh; u32 flags;
u32 GenerationBit2 : 1;
u32 DataValid : 1;
u32 Sop : 1;
u32 Eop : 1;
u32 TokenValid : 1;
u32 Reserved : 3;
u32 CmdId : 8;
u32 ResponseTokens : 12;
u32 RespQueueSelector : 4;
}; };
struct freelQ_e { struct freelQ_e {
u32 BufferLength : 31; u32 len_gen;
u32 GenerationBit : 1; u32 addr_lo;
u32 AddrLow; u32 addr_hi;
u32 AddrHigh; u32 gen2;
u32 GenerationBit2 : 1;
u32 Reserved : 31;
}; };
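With the bitfields gone, a freelist entry is packed through the V_CMD_* macros; a sketch matching the refill_free_list() body further down (fill_freelQ_e is an illustrative name):

static inline void fill_freelQ_e(struct freelQ_e *e, dma_addr_t mapping,
                                 unsigned int len, u8 genbit)
{
        e->addr_lo = (u32)mapping;
        e->addr_hi = (u64)mapping >> 32;
        e->len_gen = V_CMD_LEN(len) | V_CMD_GEN1(genbit);
        wmb();          /* gen2 is written last: it validates the entry */
        e->gen2 = V_CMD_GEN2(genbit);
}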
struct respQ_e { struct respQ_e {
...@@ -179,7 +170,6 @@ struct cmdQ_ce { ...@@ -179,7 +170,6 @@ struct cmdQ_ce {
struct sk_buff *skb; struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(dma_addr); DECLARE_PCI_UNMAP_ADDR(dma_addr);
DECLARE_PCI_UNMAP_LEN(dma_len); DECLARE_PCI_UNMAP_LEN(dma_len);
unsigned int single;
}; };
struct freelQ_ce { struct freelQ_ce {
...@@ -189,46 +179,54 @@ struct freelQ_ce { ...@@ -189,46 +179,54 @@ struct freelQ_ce {
}; };
/* /*
* SW Command, Freelist and Response Queue * SW command, freelist and response rings
*/ */
struct cmdQ { struct cmdQ {
atomic_t asleep; /* HW DMA Fetch status */ unsigned long status; /* HW DMA fetch status */
atomic_t credits; /* # available descriptors for TX */ unsigned int in_use; /* # of in-use command descriptors */
atomic_t pio_pidx; /* Variable updated on Doorbell */ unsigned int size; /* # of descriptors */
u16 entries_n; /* # descriptors for TX */ unsigned int processed; /* total # of descs HW has processed */
unsigned int cleaned; /* total # of descs SW has reclaimed */
unsigned int stop_thres; /* SW TX queue suspend threshold */
u16 pidx; /* producer index (SW) */ u16 pidx; /* producer index (SW) */
u16 cidx; /* consumer index (HW) */ u16 cidx; /* consumer index (HW) */
u8 genbit; /* current generation (=valid) bit */ u8 genbit; /* current generation (=valid) bit */
u8 sop; /* is next entry start of packet? */
struct cmdQ_e *entries; /* HW command descriptor Q */ struct cmdQ_e *entries; /* HW command descriptor Q */
struct cmdQ_ce *centries; /* SW command context descriptor Q */ struct cmdQ_ce *centries; /* SW command context descriptor Q */
spinlock_t Qlock; /* Lock to protect cmdQ enqueuing */ spinlock_t lock; /* Lock to protect cmdQ enqueuing */
dma_addr_t dma_addr; /* DMA addr HW command descriptor Q */ dma_addr_t dma_addr; /* DMA addr HW command descriptor Q */
}; };
struct freelQ { struct freelQ {
unsigned int credits; /* # of available RX buffers */ unsigned int credits; /* # of available RX buffers */
unsigned int entries_n; /* free list capacity */ unsigned int size; /* free list capacity */
u16 pidx; /* producer index (SW) */ u16 pidx; /* producer index (SW) */
u16 cidx; /* consumer index (HW) */ u16 cidx; /* consumer index (HW) */
u16 rx_buffer_size; /* Buffer size on this free list */ u16 rx_buffer_size; /* Buffer size on this free list */
u16 dma_offset; /* DMA offset to align IP headers */ u16 dma_offset; /* DMA offset to align IP headers */
u16 recycleq_idx; /* skb recycle q to use */
u8 genbit; /* current generation (=valid) bit */ u8 genbit; /* current generation (=valid) bit */
struct freelQ_e *entries; /* HW freelist descriptor Q */ struct freelQ_e *entries; /* HW freelist descriptor Q */
struct freelQ_ce *centries; /* SW freelist conext descriptor Q */ struct freelQ_ce *centries; /* SW freelist context descriptor Q */
dma_addr_t dma_addr; /* DMA addr HW freelist descriptor Q */ dma_addr_t dma_addr; /* DMA addr HW freelist descriptor Q */
}; };
struct respQ { struct respQ {
u16 credits; /* # of available respQ descriptors */ unsigned int credits; /* credits to be returned to SGE */
u16 credits_pend; /* # of not yet returned descriptors */ unsigned int size; /* # of response Q descriptors */
u16 entries_n; /* # of response Q descriptors */
u16 pidx; /* producer index (HW) */
u16 cidx; /* consumer index (SW) */ u16 cidx; /* consumer index (SW) */
u8 genbit; /* current generation(=valid) bit */ u8 genbit; /* current generation(=valid) bit */
struct respQ_e *entries; /* HW response descriptor Q */ struct respQ_e *entries; /* HW response descriptor Q */
dma_addr_t dma_addr; /* DMA addr HW response descriptor Q */ dma_addr_t dma_addr; /* DMA addr HW response descriptor Q */
}; };
/* Bit flags for cmdQ.status */
enum {
CMDQ_STAT_RUNNING = 1, /* fetch engine is running */
CMDQ_STAT_LAST_PKT_DB = 2 /* last packet rung the doorbell */
};
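status is an unsigned long so these flags can be driven with the atomic bitops; a sketch of the doorbell coalescing they enable (assumption: the enum values serve as bit numbers, and doorbell_pio() is the helper defined just below):

static void example_kick_cmdq0(struct adapter *adapter, struct cmdQ *q)
{
        /* Ring the doorbell only if the last queued packet has not. */
        if (!test_and_set_bit(CMDQ_STAT_LAST_PKT_DB, &q->status)) {
                set_bit(CMDQ_STAT_RUNNING, &q->status);
                doorbell_pio(adapter, F_CMDQ0_ENABLE);
        }
}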
/* /*
* Main SGE data structure * Main SGE data structure
* *
...@@ -239,134 +237,50 @@ struct respQ { ...@@ -239,134 +237,50 @@ struct respQ {
*/ */
struct sge { struct sge {
struct adapter *adapter; /* adapter backpointer */ struct adapter *adapter; /* adapter backpointer */
struct freelQ freelQ[SGE_FREELQ_N]; /* freelist Q(s) */ struct net_device *netdev; /* netdevice backpointer */
struct respQ respQ; /* response Q instantiation */ struct freelQ freelQ[SGE_FREELQ_N]; /* buffer free lists */
struct respQ respQ; /* response Q */
unsigned long stopped_tx_queues; /* bitmap of suspended Tx queues */
unsigned int rx_pkt_pad; /* RX padding for L2 packets */ unsigned int rx_pkt_pad; /* RX padding for L2 packets */
unsigned int jumbo_fl; /* jumbo freelist Q index */ unsigned int jumbo_fl; /* jumbo freelist Q index */
u32 intrtimer[SGE_INTR_MAXBUCKETS]; /* ! */ unsigned int intrtimer_nres; /* no-resource interrupt timer */
u32 currIndex; /* current index into intrtimer[] */ unsigned int fixed_intrtimer;/* non-adaptive interrupt timer */
u32 intrtimer_nres; /* no resource interrupt timer value */ struct timer_list tx_reclaim_timer; /* reclaims TX buffers */
u32 sge_control; /* shadow content of sge control reg */ struct timer_list espibug_timer;
struct sge_intr_counts intr_cnt; unsigned int espibug_timeout;
struct timer_list ptimer; struct sk_buff *espibug_skb;
struct sk_buff *pskb; u32 sge_control; /* shadow value of sge control reg */
u32 ptimeout; struct sge_intr_counts stats;
struct cmdQ cmdQ[SGE_CMDQ_N] ____cacheline_aligned; /* command Q(s)*/ struct sge_port_stats port_stats[MAX_NPORTS];
struct cmdQ cmdQ[SGE_CMDQ_N] ____cacheline_aligned_in_smp;
}; };
static unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
unsigned int qid);
/* /*
* PIO to indicate that memory mapped Q contains valid descriptor(s). * PIO to indicate that memory mapped Q contains valid descriptor(s).
*/ */
static inline void doorbell_pio(struct sge *sge, u32 val) static inline void doorbell_pio(struct adapter *adapter, u32 val)
{ {
wmb(); wmb();
t1_write_reg_4(sge->adapter, A_SG_DOORBELL, val); writel(val, adapter->regs + A_SG_DOORBELL);
}
/*
* Disables the DMA engine.
*/
void t1_sge_stop(struct sge *sge)
{
t1_write_reg_4(sge->adapter, A_SG_CONTROL, 0);
t1_read_reg_4(sge->adapter, A_SG_CONTROL); /* flush write */
if (is_T2(sge->adapter))
del_timer_sync(&sge->ptimer);
}
static u8 ch_mac_addr[ETH_ALEN] = {0x0, 0x7, 0x43, 0x0, 0x0, 0x0};
static void t1_espi_workaround(void *data)
{
struct adapter *adapter = (struct adapter *)data;
struct sge *sge = adapter->sge;
if (netif_running(adapter->port[0].dev) &&
atomic_read(&sge->cmdQ[0].asleep)) {
u32 seop = t1_espi_get_mon(adapter, 0x930, 0);
if ((seop & 0xfff0fff) == 0xfff && sge->pskb) {
struct sk_buff *skb = sge->pskb;
if (!skb->cb[0]) {
memcpy(skb->data+sizeof(struct cpl_tx_pkt), ch_mac_addr, ETH_ALEN);
memcpy(skb->data+skb->len-10, ch_mac_addr, ETH_ALEN);
skb->cb[0] = 0xff;
}
t1_sge_tx(skb, adapter,0);
}
}
mod_timer(&adapter->sge->ptimer, jiffies + sge->ptimeout);
}
/*
* Enables the DMA engine.
*/
void t1_sge_start(struct sge *sge)
{
t1_write_reg_4(sge->adapter, A_SG_CONTROL, sge->sge_control);
t1_read_reg_4(sge->adapter, A_SG_CONTROL); /* flush write */
if (is_T2(sge->adapter)) {
init_timer(&sge->ptimer);
sge->ptimer.function = (void *)&t1_espi_workaround;
sge->ptimer.data = (unsigned long)sge->adapter;
sge->ptimer.expires = jiffies + sge->ptimeout;
add_timer(&sge->ptimer);
}
}
/*
* Creates a t1_sge structure and returns suggested resource parameters.
*/
struct sge * __devinit t1_sge_create(struct adapter *adapter,
struct sge_params *p)
{
struct sge *sge = kmalloc(sizeof(*sge), GFP_KERNEL);
if (!sge)
return NULL;
memset(sge, 0, sizeof(*sge));
if (is_T2(adapter))
sge->ptimeout = 1; /* finest allowed */
sge->adapter = adapter;
sge->rx_pkt_pad = t1_is_T1B(adapter) ? 0 : SGE_RX_OFFSET;
sge->jumbo_fl = t1_is_T1B(adapter) ? 1 : 0;
p->cmdQ_size[0] = SGE_CMDQ0_E_N;
p->cmdQ_size[1] = SGE_CMDQ1_E_N;
p->freelQ_size[!sge->jumbo_fl] = SGE_FREEL_SIZE;
p->freelQ_size[sge->jumbo_fl] = SGE_JUMBO_FREEL_SIZE;
p->rx_coalesce_usecs = SGE_INTRTIMER1;
p->last_rx_coalesce_raw = SGE_INTRTIMER1 *
(board_info(sge->adapter)->clock_core / 1000000);
p->default_rx_coalesce_usecs = SGE_INTRTIMER1;
p->coalesce_enable = 0; /* Turn off adaptive algorithm by default */
p->sample_interval_usecs = 0;
return sge;
} }
/* /*
* Frees all RX buffers on the freelist Q. The caller must make sure that * Frees all RX buffers on the freelist Q. The caller must make sure that
* the SGE is turned off before calling this function. * the SGE is turned off before calling this function.
*/ */
static void free_freelQ_buffers(struct pci_dev *pdev, struct freelQ *Q) static void free_freelQ_buffers(struct pci_dev *pdev, struct freelQ *q)
{ {
unsigned int cidx = Q->cidx, credits = Q->credits; unsigned int cidx = q->cidx;
while (credits--) { while (q->credits--) {
struct freelQ_ce *ce = &Q->centries[cidx]; struct freelQ_ce *ce = &q->centries[cidx];
pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr), pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len), pci_unmap_len(ce, dma_len),
PCI_DMA_FROMDEVICE); PCI_DMA_FROMDEVICE);
dev_kfree_skb(ce->skb); dev_kfree_skb(ce->skb);
ce->skb = NULL; ce->skb = NULL;
if (++cidx == Q->entries_n) if (++cidx == q->size)
cidx = 0; cidx = 0;
} }
} }
...@@ -380,29 +294,29 @@ static void free_rx_resources(struct sge *sge) ...@@ -380,29 +294,29 @@ static void free_rx_resources(struct sge *sge)
unsigned int size, i; unsigned int size, i;
if (sge->respQ.entries) { if (sge->respQ.entries) {
size = sizeof(struct respQ_e) * sge->respQ.entries_n; size = sizeof(struct respQ_e) * sge->respQ.size;
pci_free_consistent(pdev, size, sge->respQ.entries, pci_free_consistent(pdev, size, sge->respQ.entries,
sge->respQ.dma_addr); sge->respQ.dma_addr);
} }
for (i = 0; i < SGE_FREELQ_N; i++) { for (i = 0; i < SGE_FREELQ_N; i++) {
struct freelQ *Q = &sge->freelQ[i]; struct freelQ *q = &sge->freelQ[i];
if (Q->centries) { if (q->centries) {
free_freelQ_buffers(pdev, Q); free_freelQ_buffers(pdev, q);
kfree(Q->centries); kfree(q->centries);
} }
if (Q->entries) { if (q->entries) {
size = sizeof(struct freelQ_e) * Q->entries_n; size = sizeof(struct freelQ_e) * q->size;
pci_free_consistent(pdev, size, Q->entries, pci_free_consistent(pdev, size, q->entries,
Q->dma_addr); q->dma_addr);
} }
} }
} }
/* /*
* Allocates basic RX resources, consisting of memory mapped freelist Qs and a * Allocates basic RX resources, consisting of memory mapped freelist Qs and a
* response Q. * response queue.
*/ */
static int alloc_rx_resources(struct sge *sge, struct sge_params *p) static int alloc_rx_resources(struct sge *sge, struct sge_params *p)
{ {
...@@ -410,21 +324,22 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p) ...@@ -410,21 +324,22 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p)
unsigned int size, i; unsigned int size, i;
for (i = 0; i < SGE_FREELQ_N; i++) { for (i = 0; i < SGE_FREELQ_N; i++) {
struct freelQ *Q = &sge->freelQ[i]; struct freelQ *q = &sge->freelQ[i];
Q->genbit = 1; q->genbit = 1;
Q->entries_n = p->freelQ_size[i]; q->size = p->freelQ_size[i];
Q->dma_offset = SGE_RX_OFFSET - sge->rx_pkt_pad; q->dma_offset = sge->rx_pkt_pad ? 0 : NET_IP_ALIGN;
size = sizeof(struct freelQ_e) * Q->entries_n; size = sizeof(struct freelQ_e) * q->size;
Q->entries = (struct freelQ_e *) q->entries = (struct freelQ_e *)
pci_alloc_consistent(pdev, size, &Q->dma_addr); pci_alloc_consistent(pdev, size, &q->dma_addr);
if (!Q->entries) if (!q->entries)
goto err_no_mem; goto err_no_mem;
memset(Q->entries, 0, size); memset(q->entries, 0, size);
Q->centries = kcalloc(Q->entries_n, sizeof(struct freelQ_ce), size = sizeof(struct freelQ_ce) * q->size;
GFP_KERNEL); q->centries = kmalloc(size, GFP_KERNEL);
if (!Q->centries) if (!q->centries)
goto err_no_mem; goto err_no_mem;
memset(q->centries, 0, size);
} }
/* /*
...@@ -440,10 +355,17 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p) ...@@ -440,10 +355,17 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p)
sge->freelQ[sge->jumbo_fl].rx_buffer_size = (16 * 1024) - sge->freelQ[sge->jumbo_fl].rx_buffer_size = (16 * 1024) -
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
/*
* Setup which skb recycle Q should be used when recycling buffers from
* each free list.
*/
sge->freelQ[!sge->jumbo_fl].recycleq_idx = 0;
sge->freelQ[sge->jumbo_fl].recycleq_idx = 1;
sge->respQ.genbit = 1; sge->respQ.genbit = 1;
sge->respQ.entries_n = SGE_RESPQ_E_N; sge->respQ.size = SGE_RESPQ_E_N;
sge->respQ.credits = SGE_RESPQ_E_N; sge->respQ.credits = 0;
size = sizeof(struct respQ_e) * sge->respQ.entries_n; size = sizeof(struct respQ_e) * sge->respQ.size;
sge->respQ.entries = (struct respQ_e *) sge->respQ.entries = (struct respQ_e *)
pci_alloc_consistent(pdev, size, &sge->respQ.dma_addr); pci_alloc_consistent(pdev, size, &sge->respQ.dma_addr);
if (!sge->respQ.entries) if (!sge->respQ.entries)
...@@ -457,25 +379,18 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p) ...@@ -457,25 +379,18 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p)
} }
/* /*
* Frees 'credits_pend' TX buffers and returns the credits to Q->credits. * Reclaims n TX descriptors and frees the buffers associated with them.
*
* The adaptive algorithm receives the total size of the buffers freed
* accumulated in @*totpayload. No initialization of this argument here.
*
*/ */
static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *Q, static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *q, unsigned int n)
unsigned int credits_pend, unsigned int *totpayload)
{ {
struct cmdQ_ce *ce;
struct pci_dev *pdev = sge->adapter->pdev; struct pci_dev *pdev = sge->adapter->pdev;
struct sk_buff *skb; unsigned int cidx = q->cidx;
struct cmdQ_ce *ce, *cq = Q->centries;
unsigned int entries_n = Q->entries_n, cidx = Q->cidx,
i = credits_pend;
q->in_use -= n;
ce = &cq[cidx]; ce = &q->centries[cidx];
while (i--) { while (n--) {
if (ce->single) if (q->sop)
pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr), pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len), pci_unmap_len(ce, dma_len),
PCI_DMA_TODEVICE); PCI_DMA_TODEVICE);
...@@ -483,22 +398,18 @@ static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *Q, ...@@ -483,22 +398,18 @@ static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *Q,
pci_unmap_page(pdev, pci_unmap_addr(ce, dma_addr), pci_unmap_page(pdev, pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len), pci_unmap_len(ce, dma_len),
PCI_DMA_TODEVICE); PCI_DMA_TODEVICE);
if (totpayload) q->sop = 0;
*totpayload += pci_unmap_len(ce, dma_len); if (ce->skb) {
dev_kfree_skb(ce->skb);
skb = ce->skb; q->sop = 1;
if (skb) }
dev_kfree_skb_irq(skb);
ce++; ce++;
if (++cidx == entries_n) { if (++cidx == q->size) {
cidx = 0; cidx = 0;
ce = cq; ce = q->centries;
} }
} }
q->cidx = cidx;
Q->cidx = cidx;
atomic_add(credits_pend, &Q->credits);
} }
/* /*
...@@ -512,20 +423,17 @@ static void free_tx_resources(struct sge *sge) ...@@ -512,20 +423,17 @@ static void free_tx_resources(struct sge *sge)
unsigned int size, i; unsigned int size, i;
for (i = 0; i < SGE_CMDQ_N; i++) { for (i = 0; i < SGE_CMDQ_N; i++) {
struct cmdQ *Q = &sge->cmdQ[i]; struct cmdQ *q = &sge->cmdQ[i];
if (Q->centries) {
unsigned int pending = Q->entries_n -
atomic_read(&Q->credits);
if (pending) if (q->centries) {
free_cmdQ_buffers(sge, Q, pending, NULL); if (q->in_use)
kfree(Q->centries); free_cmdQ_buffers(sge, q, q->in_use);
kfree(q->centries);
} }
if (Q->entries) { if (q->entries) {
size = sizeof(struct cmdQ_e) * Q->entries_n; size = sizeof(struct cmdQ_e) * q->size;
pci_free_consistent(pdev, size, Q->entries, pci_free_consistent(pdev, size, q->entries,
Q->dma_addr); q->dma_addr);
} }
} }
} }
...@@ -539,25 +447,38 @@ static int alloc_tx_resources(struct sge *sge, struct sge_params *p) ...@@ -539,25 +447,38 @@ static int alloc_tx_resources(struct sge *sge, struct sge_params *p)
unsigned int size, i; unsigned int size, i;
for (i = 0; i < SGE_CMDQ_N; i++) { for (i = 0; i < SGE_CMDQ_N; i++) {
struct cmdQ *Q = &sge->cmdQ[i]; struct cmdQ *q = &sge->cmdQ[i];
Q->genbit = 1; q->genbit = 1;
Q->entries_n = p->cmdQ_size[i]; q->sop = 1;
atomic_set(&Q->credits, Q->entries_n); q->size = p->cmdQ_size[i];
atomic_set(&Q->asleep, 1); q->in_use = 0;
spin_lock_init(&Q->Qlock); q->status = 0;
size = sizeof(struct cmdQ_e) * Q->entries_n; q->processed = q->cleaned = 0;
Q->entries = (struct cmdQ_e *) q->stop_thres = 0;
pci_alloc_consistent(pdev, size, &Q->dma_addr); spin_lock_init(&q->lock);
if (!Q->entries) size = sizeof(struct cmdQ_e) * q->size;
q->entries = (struct cmdQ_e *)
pci_alloc_consistent(pdev, size, &q->dma_addr);
if (!q->entries)
goto err_no_mem; goto err_no_mem;
memset(Q->entries, 0, size); memset(q->entries, 0, size);
Q->centries = kcalloc(Q->entries_n, sizeof(struct cmdQ_ce), size = sizeof(struct cmdQ_ce) * q->size;
GFP_KERNEL); q->centries = kmalloc(size, GFP_KERNEL);
if (!Q->centries) if (!q->centries)
goto err_no_mem; goto err_no_mem;
memset(q->centries, 0, size);
} }
/*
* CommandQ 0 handles Ethernet and TOE packets, while queue 1 is TOE
* only. For queue 0 set the stop threshold so we can handle one more
* packet from each port, plus reserve an additional 24 entries for
* Ethernet packets only. Queue 1 never suspends nor do we reserve
* space for Ethernet packets.
*/
sge->cmdQ[0].stop_thres = sge->adapter->params.nports *
(MAX_SKB_FRAGS + 1);
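/* For scale: with 4 KB pages MAX_SKB_FRAGS is typically 18, so a
 * two-port board suspends cmdQ0 once fewer than 2 * (18 + 1) = 38
 * descriptors remain free. */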
return 0; return 0;
err_no_mem: err_no_mem:
...@@ -569,9 +490,9 @@ static inline void setup_ring_params(struct adapter *adapter, u64 addr, ...@@ -569,9 +490,9 @@ static inline void setup_ring_params(struct adapter *adapter, u64 addr,
u32 size, int base_reg_lo, u32 size, int base_reg_lo,
int base_reg_hi, int size_reg) int base_reg_hi, int size_reg)
{ {
t1_write_reg_4(adapter, base_reg_lo, (u32)addr); writel((u32)addr, adapter->regs + base_reg_lo);
t1_write_reg_4(adapter, base_reg_hi, addr >> 32); writel(addr >> 32, adapter->regs + base_reg_hi);
t1_write_reg_4(adapter, size_reg, size); writel(size, adapter->regs + size_reg);
} }
/* /*
...@@ -585,27 +506,9 @@ void t1_set_vlan_accel(struct adapter *adapter, int on_off) ...@@ -585,27 +506,9 @@ void t1_set_vlan_accel(struct adapter *adapter, int on_off)
if (on_off) if (on_off)
sge->sge_control |= F_VLAN_XTRACT; sge->sge_control |= F_VLAN_XTRACT;
if (adapter->open_device_map) { if (adapter->open_device_map) {
t1_write_reg_4(adapter, A_SG_CONTROL, sge->sge_control); writel(sge->sge_control, adapter->regs + A_SG_CONTROL);
t1_read_reg_4(adapter, A_SG_CONTROL); /* flush */ readl(adapter->regs + A_SG_CONTROL); /* flush */
}
}
/*
* Sets the interrupt latency timer when the adaptive Rx coalescing
* is turned off. Do nothing when it is turned on again.
*
* This routine relies on the fact that the caller has already set
* the adaptive policy in adapter->sge_params before calling it.
*/
int t1_sge_set_coalesce_params(struct sge *sge, struct sge_params *p)
{
if (!p->coalesce_enable) {
u32 newTimer = p->rx_coalesce_usecs *
(board_info(sge->adapter)->clock_core / 1000000);
t1_write_reg_4(sge->adapter, A_SG_INTRTIMER, newTimer);
} }
return 0;
} }
/* /*
...@@ -615,66 +518,39 @@ int t1_sge_set_coalesce_params(struct sge *sge, struct sge_params *p) ...@@ -615,66 +518,39 @@ int t1_sge_set_coalesce_params(struct sge *sge, struct sge_params *p)
static void configure_sge(struct sge *sge, struct sge_params *p) static void configure_sge(struct sge *sge, struct sge_params *p)
{ {
struct adapter *ap = sge->adapter; struct adapter *ap = sge->adapter;
int i;
t1_write_reg_4(ap, A_SG_CONTROL, 0); writel(0, ap->regs + A_SG_CONTROL);
setup_ring_params(ap, sge->cmdQ[0].dma_addr, sge->cmdQ[0].entries_n, setup_ring_params(ap, sge->cmdQ[0].dma_addr, sge->cmdQ[0].size,
A_SG_CMD0BASELWR, A_SG_CMD0BASEUPR, A_SG_CMD0SIZE); A_SG_CMD0BASELWR, A_SG_CMD0BASEUPR, A_SG_CMD0SIZE);
setup_ring_params(ap, sge->cmdQ[1].dma_addr, sge->cmdQ[1].entries_n, setup_ring_params(ap, sge->cmdQ[1].dma_addr, sge->cmdQ[1].size,
A_SG_CMD1BASELWR, A_SG_CMD1BASEUPR, A_SG_CMD1SIZE); A_SG_CMD1BASELWR, A_SG_CMD1BASEUPR, A_SG_CMD1SIZE);
setup_ring_params(ap, sge->freelQ[0].dma_addr, setup_ring_params(ap, sge->freelQ[0].dma_addr,
sge->freelQ[0].entries_n, A_SG_FL0BASELWR, sge->freelQ[0].size, A_SG_FL0BASELWR,
A_SG_FL0BASEUPR, A_SG_FL0SIZE); A_SG_FL0BASEUPR, A_SG_FL0SIZE);
setup_ring_params(ap, sge->freelQ[1].dma_addr, setup_ring_params(ap, sge->freelQ[1].dma_addr,
sge->freelQ[1].entries_n, A_SG_FL1BASELWR, sge->freelQ[1].size, A_SG_FL1BASELWR,
A_SG_FL1BASEUPR, A_SG_FL1SIZE); A_SG_FL1BASEUPR, A_SG_FL1SIZE);
/* The threshold comparison uses <. */ /* The threshold comparison uses <. */
t1_write_reg_4(ap, A_SG_FLTHRESHOLD, SGE_RX_SM_BUF_SIZE + 1); writel(SGE_RX_SM_BUF_SIZE + 1, ap->regs + A_SG_FLTHRESHOLD);
setup_ring_params(ap, sge->respQ.dma_addr, sge->respQ.entries_n, setup_ring_params(ap, sge->respQ.dma_addr, sge->respQ.size,
A_SG_RSPBASELWR, A_SG_RSPBASEUPR, A_SG_RSPSIZE); A_SG_RSPBASELWR, A_SG_RSPBASEUPR, A_SG_RSPSIZE);
t1_write_reg_4(ap, A_SG_RSPQUEUECREDIT, (u32)sge->respQ.entries_n); writel((u32)sge->respQ.size - 1, ap->regs + A_SG_RSPQUEUECREDIT);
sge->sge_control = F_CMDQ0_ENABLE | F_CMDQ1_ENABLE | F_FL0_ENABLE | sge->sge_control = F_CMDQ0_ENABLE | F_CMDQ1_ENABLE | F_FL0_ENABLE |
F_FL1_ENABLE | F_CPL_ENABLE | F_RESPONSE_QUEUE_ENABLE | F_FL1_ENABLE | F_CPL_ENABLE | F_RESPONSE_QUEUE_ENABLE |
V_CMDQ_PRIORITY(2) | F_DISABLE_CMDQ1_GTS | F_ISCSI_COALESCE | V_CMDQ_PRIORITY(2) | F_DISABLE_CMDQ1_GTS | F_ISCSI_COALESCE |
F_DISABLE_FL0_GTS | F_DISABLE_FL1_GTS |
V_RX_PKT_OFFSET(sge->rx_pkt_pad); V_RX_PKT_OFFSET(sge->rx_pkt_pad);
#if defined(__BIG_ENDIAN_BITFIELD) #if defined(__BIG_ENDIAN_BITFIELD)
sge->sge_control |= F_ENABLE_BIG_ENDIAN; sge->sge_control |= F_ENABLE_BIG_ENDIAN;
#endif #endif
/* /* Initialize no-resource timer */
* Initialize the SGE Interrupt Timer arrray: sge->intrtimer_nres = SGE_INTRTIMER_NRES * core_ticks_per_usec(ap);
* intrtimer[0] = (SGE_INTRTIMER0) usec
* intrtimer[0<i<5] = (SGE_INTRTIMER0 + i*2) usec
* intrtimer[4<i<10] = ((i - 3) * 6) usec
* intrtimer[10] = (SGE_INTRTIMER1) usec
*
*/
sge->intrtimer[0] = board_info(sge->adapter)->clock_core / 1000000;
for (i = 1; i < SGE_INTR_LATBUCKETS; ++i) {
sge->intrtimer[i] = SGE_INTRTIMER0 + (2 * i);
sge->intrtimer[i] *= sge->intrtimer[0];
}
for (i = SGE_INTR_LATBUCKETS; i < SGE_INTR_MAXBUCKETS - 1; ++i) {
sge->intrtimer[i] = (i - 3) * 6;
sge->intrtimer[i] *= sge->intrtimer[0];
}
sge->intrtimer[SGE_INTR_MAXBUCKETS - 1] =
sge->intrtimer[0] * SGE_INTRTIMER1;
/* Initialize resource timer */
sge->intrtimer_nres = sge->intrtimer[0] * SGE_INTRTIMER_NRES;
/* Finally finish initialization of intrtimer[0] */
sge->intrtimer[0] *= SGE_INTRTIMER0;
/* Initialize for a throughput oriented workload */
sge->currIndex = SGE_INTR_MAXBUCKETS - 1;
if (p->coalesce_enable)
t1_write_reg_4(ap, A_SG_INTRTIMER,
sge->intrtimer[sge->currIndex]);
else
t1_sge_set_coalesce_params(sge, p); t1_sge_set_coalesce_params(sge, p);
} }
...@@ -684,31 +560,8 @@ static void configure_sge(struct sge *sge, struct sge_params *p) ...@@ -684,31 +560,8 @@ static void configure_sge(struct sge *sge, struct sge_params *p)
 static inline unsigned int jumbo_payload_capacity(const struct sge *sge)
 {
 	return sge->freelQ[sge->jumbo_fl].rx_buffer_size -
-		sizeof(struct cpl_rx_data) - SGE_RX_OFFSET + sge->rx_pkt_pad;
+		sge->freelQ[sge->jumbo_fl].dma_offset -
+		sizeof(struct cpl_rx_data);
 }

-/*
- * Allocates both RX and TX resources and configures the SGE. However,
- * the hardware is not enabled yet.
- */
-int t1_sge_configure(struct sge *sge, struct sge_params *p)
-{
-	if (alloc_rx_resources(sge, p))
-		return -ENOMEM;
-	if (alloc_tx_resources(sge, p)) {
-		free_rx_resources(sge);
-		return -ENOMEM;
-	}
-	configure_sge(sge, p);
-
-	/*
-	 * Now that we have sized the free lists calculate the payload
-	 * capacity of the large buffers. Other parts of the driver use
-	 * this to set the max offload coalescing size so that RX packets
-	 * do not overflow our large buffers.
-	 */
-	p->large_buf_capacity = jumbo_payload_capacity(sge);
-	return 0;
-}
 /*
@@ -716,8 +569,9 @@ int t1_sge_configure(struct sge *sge, struct sge_params *p)
  */
 void t1_sge_destroy(struct sge *sge)
 {
-	if (sge->pskb)
-		dev_kfree_skb(sge->pskb);
+	if (sge->espibug_skb)
+		kfree_skb(sge->espibug_skb);
+
 	free_tx_resources(sge);
 	free_rx_resources(sge);
 	kfree(sge);
@@ -735,75 +589,75 @@ void t1_sge_destroy(struct sge *sge)
  * we specify a RX_OFFSET in order to make sure that the IP header is 4B
  * aligned.
  */
-static void refill_free_list(struct sge *sge, struct freelQ *Q)
+static void refill_free_list(struct sge *sge, struct freelQ *q)
 {
 	struct pci_dev *pdev = sge->adapter->pdev;
-	struct freelQ_ce *ce = &Q->centries[Q->pidx];
-	struct freelQ_e *e = &Q->entries[Q->pidx];
-	unsigned int dma_len = Q->rx_buffer_size - Q->dma_offset;
+	struct freelQ_ce *ce = &q->centries[q->pidx];
+	struct freelQ_e *e = &q->entries[q->pidx];
+	unsigned int dma_len = q->rx_buffer_size - q->dma_offset;

-	while (Q->credits < Q->entries_n) {
-		if (e->GenerationBit != Q->genbit) {
-			struct sk_buff *skb;
-			dma_addr_t mapping;
+	while (q->credits < q->size) {
+		struct sk_buff *skb;
+		dma_addr_t mapping;

-			skb = alloc_skb(Q->rx_buffer_size, GFP_ATOMIC);
-			if (!skb)
-				break;
-			if (Q->dma_offset)
-				skb_reserve(skb, Q->dma_offset);
-			mapping = pci_map_single(pdev, skb->data, dma_len,
-						 PCI_DMA_FROMDEVICE);
-			ce->skb = skb;
-			pci_unmap_addr_set(ce, dma_addr, mapping);
-			pci_unmap_len_set(ce, dma_len, dma_len);
-			e->AddrLow = (u32)mapping;
-			e->AddrHigh = (u64)mapping >> 32;
-			e->BufferLength = dma_len;
-			e->GenerationBit = e->GenerationBit2 = Q->genbit;
-		}
+		skb = alloc_skb(q->rx_buffer_size, GFP_ATOMIC);
+		if (!skb)
+			break;
+
+		skb_reserve(skb, q->dma_offset);
+		mapping = pci_map_single(pdev, skb->data, dma_len,
+					 PCI_DMA_FROMDEVICE);
+		ce->skb = skb;
+		pci_unmap_addr_set(ce, dma_addr, mapping);
+		pci_unmap_len_set(ce, dma_len, dma_len);
+		e->addr_lo = (u32)mapping;
+		e->addr_hi = (u64)mapping >> 32;
+		e->len_gen = V_CMD_LEN(dma_len) | V_CMD_GEN1(q->genbit);
+		wmb();
+		e->gen2 = V_CMD_GEN2(q->genbit);

 		e++;
 		ce++;
-		if (++Q->pidx == Q->entries_n) {
-			Q->pidx = 0;
-			Q->genbit ^= 1;
-			ce = Q->centries;
-			e = Q->entries;
+		if (++q->pidx == q->size) {
+			q->pidx = 0;
+			q->genbit ^= 1;
+			ce = q->centries;
+			e = q->entries;
 		}
-		Q->credits++;
+		q->credits++;
 	}
 }
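The new refill path publishes each descriptor in two phases: the length word carries one copy of the generation bit, a write barrier follows, and gen2 is written last. The hardware only consumes a descriptor whose two generation bits both match the ring's current lap, so a half-written entry is never fetched. A standalone model of that handshake (field names and bit layout are illustrative, not the hardware's):

    #include <assert.h>
    #include <stdint.h>

    struct desc {
        uint32_t len_gen;  /* length plus generation bit 1 */
        uint32_t gen2;     /* generation bit 2, written last */
    };

    #define GEN1(g) ((uint32_t)(g) << 31)
    #define LEN(l)  ((uint32_t)(l) & 0x7fffffff)

    /* Two-phase publish: gen2 is only written after a barrier. */
    static void publish(struct desc *d, unsigned int len, unsigned int gen)
    {
        d->len_gen = LEN(len) | GEN1(gen);
        __sync_synchronize();           /* stands in for wmb() */
        d->gen2 = gen;
    }

    /* Consumer accepts the descriptor only when both bits match. */
    static int valid(const struct desc *d, unsigned int gen)
    {
        return ((d->len_gen >> 31) == gen) && (d->gen2 == gen);
    }

    int main(void)
    {
        struct desc d = {0, 1};         /* stale gen2 from the previous lap */
        d.len_gen = LEN(1500) | GEN1(0);
        assert(!valid(&d, 0));          /* half-written: not consumed */
        publish(&d, 1500, 0);
        assert(valid(&d, 0));
        return 0;
    }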
 /*
- * Calls refill_free_list for both freelist Qs. If we cannot
- * fill at least 1/4 of both Qs, we go into 'few interrupt mode' in order
- * to give the system time to free up resources.
+ * Calls refill_free_list for both free lists. If we cannot fill at least 1/4
+ * of both rings, we go into 'few interrupt mode' in order to give the system
+ * time to free up resources.
  */
 static void freelQs_empty(struct sge *sge)
 {
-	u32 irq_reg = t1_read_reg_4(sge->adapter, A_SG_INT_ENABLE);
+	struct adapter *adapter = sge->adapter;
+	u32 irq_reg = readl(adapter->regs + A_SG_INT_ENABLE);
 	u32 irqholdoff_reg;

 	refill_free_list(sge, &sge->freelQ[0]);
 	refill_free_list(sge, &sge->freelQ[1]);

-	if (sge->freelQ[0].credits > (sge->freelQ[0].entries_n >> 2) &&
-	    sge->freelQ[1].credits > (sge->freelQ[1].entries_n >> 2)) {
+	if (sge->freelQ[0].credits > (sge->freelQ[0].size >> 2) &&
+	    sge->freelQ[1].credits > (sge->freelQ[1].size >> 2)) {
 		irq_reg |= F_FL_EXHAUSTED;
-		irqholdoff_reg = sge->intrtimer[sge->currIndex];
+		irqholdoff_reg = sge->fixed_intrtimer;
 	} else {
 		/* Clear the F_FL_EXHAUSTED interrupts for now */
 		irq_reg &= ~F_FL_EXHAUSTED;
 		irqholdoff_reg = sge->intrtimer_nres;
 	}
-	t1_write_reg_4(sge->adapter, A_SG_INTRTIMER, irqholdoff_reg);
-	t1_write_reg_4(sge->adapter, A_SG_INT_ENABLE, irq_reg);
+	writel(irqholdoff_reg, adapter->regs + A_SG_INTRTIMER);
+	writel(irq_reg, adapter->regs + A_SG_INT_ENABLE);

 	/* We reenable the Qs to force a freelist GTS interrupt later */
-	doorbell_pio(sge, F_FL0_ENABLE | F_FL1_ENABLE);
+	doorbell_pio(adapter, F_FL0_ENABLE | F_FL1_ENABLE);
 }
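freelQs_empty() applies a simple hysteresis: the interrupt holdoff stays at the long no-resource value until both free lists have been refilled past a quarter of their capacity. A compilable restatement of just the selection (names are illustrative):

    #include <assert.h>

    /* Toy model of the holdoff choice in freelQs_empty(): back off to the
     * long no-resource timer until both rings are at least 1/4 full. */
    static unsigned int pick_holdoff(unsigned int c0, unsigned int s0,
                                     unsigned int c1, unsigned int s1,
                                     unsigned int fixed, unsigned int nres)
    {
        return (c0 > (s0 >> 2) && c1 > (s1 >> 2)) ? fixed : nres;
    }

    int main(void)
    {
        assert(pick_holdoff(512, 1024, 300, 1024, 50, 1000) == 50);
        assert(pick_holdoff(100, 1024, 300, 1024, 50, 1000) == 1000);
        return 0;
    }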
 #define SGE_PL_INTR_MASK (F_PL_INTR_SGE_ERR | F_PL_INTR_SGE_DATA)

@@ -816,10 +670,10 @@ static void freelQs_empty(struct sge *sge)
  */
 void t1_sge_intr_disable(struct sge *sge)
 {
-	u32 val = t1_read_reg_4(sge->adapter, A_PL_ENABLE);
+	u32 val = readl(sge->adapter->regs + A_PL_ENABLE);

-	t1_write_reg_4(sge->adapter, A_PL_ENABLE, val & ~SGE_PL_INTR_MASK);
-	t1_write_reg_4(sge->adapter, A_SG_INT_ENABLE, 0);
+	writel(val & ~SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_ENABLE);
+	writel(0, sge->adapter->regs + A_SG_INT_ENABLE);
 }

 /*
@@ -828,12 +682,12 @@ void t1_sge_intr_disable(struct sge *sge)
 void t1_sge_intr_enable(struct sge *sge)
 {
 	u32 en = SGE_INT_ENABLE;
-	u32 val = t1_read_reg_4(sge->adapter, A_PL_ENABLE);
+	u32 val = readl(sge->adapter->regs + A_PL_ENABLE);

 	if (sge->adapter->flags & TSO_CAPABLE)
 		en &= ~F_PACKET_TOO_BIG;
-	t1_write_reg_4(sge->adapter, A_SG_INT_ENABLE, en);
-	t1_write_reg_4(sge->adapter, A_PL_ENABLE, val | SGE_PL_INTR_MASK);
+	writel(en, sge->adapter->regs + A_SG_INT_ENABLE);
+	writel(val | SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_ENABLE);
 }

 /*
@@ -841,8 +695,8 @@ void t1_sge_intr_enable(struct sge *sge)
  */
 void t1_sge_intr_clear(struct sge *sge)
 {
-	t1_write_reg_4(sge->adapter, A_PL_CAUSE, SGE_PL_INTR_MASK);
-	t1_write_reg_4(sge->adapter, A_SG_INT_CAUSE, 0xffffffff);
+	writel(SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_CAUSE);
+	writel(0xffffffff, sge->adapter->regs + A_SG_INT_CAUSE);
 }

 /*
@@ -851,464 +705,673 @@ void t1_sge_intr_clear(struct sge *sge)
 int t1_sge_intr_error_handler(struct sge *sge)
 {
 	struct adapter *adapter = sge->adapter;
-	u32 cause = t1_read_reg_4(adapter, A_SG_INT_CAUSE);
+	u32 cause = readl(adapter->regs + A_SG_INT_CAUSE);

 	if (adapter->flags & TSO_CAPABLE)
 		cause &= ~F_PACKET_TOO_BIG;
 	if (cause & F_RESPQ_EXHAUSTED)
-		sge->intr_cnt.respQ_empty++;
+		sge->stats.respQ_empty++;
 	if (cause & F_RESPQ_OVERFLOW) {
-		sge->intr_cnt.respQ_overflow++;
+		sge->stats.respQ_overflow++;
 		CH_ALERT("%s: SGE response queue overflow\n",
 			 adapter->name);
 	}
 	if (cause & F_FL_EXHAUSTED) {
-		sge->intr_cnt.freelistQ_empty++;
+		sge->stats.freelistQ_empty++;
 		freelQs_empty(sge);
 	}
 	if (cause & F_PACKET_TOO_BIG) {
-		sge->intr_cnt.pkt_too_big++;
+		sge->stats.pkt_too_big++;
 		CH_ALERT("%s: SGE max packet size exceeded\n",
 			 adapter->name);
 	}
 	if (cause & F_PACKET_MISMATCH) {
-		sge->intr_cnt.pkt_mismatch++;
+		sge->stats.pkt_mismatch++;
 		CH_ALERT("%s: SGE packet mismatch\n", adapter->name);
 	}
 	if (cause & SGE_INT_FATAL)
 		t1_fatal_err(adapter);

-	t1_write_reg_4(adapter, A_SG_INT_CAUSE, cause);
+	writel(cause, adapter->regs + A_SG_INT_CAUSE);
 	return 0;
 }
-/*
- * The following code is copied from 2.6, where the skb_pull is doing the
- * right thing and only pulls ETH_HLEN.
- *
- * Determine the packet's protocol ID. The rule here is that we
- * assume 802.3 if the type field is short enough to be a length.
- * This is normal practice and works for any 'now in use' protocol.
- */
-static unsigned short sge_eth_type_trans(struct sk_buff *skb,
-					 struct net_device *dev)
-{
-	struct ethhdr *eth;
-	unsigned char *rawp;
-
-	skb->mac.raw = skb->data;
-	skb_pull(skb, ETH_HLEN);
-	eth = (struct ethhdr *)skb->mac.raw;
-
-	if (*eth->h_dest&1) {
-		if(memcmp(eth->h_dest, dev->broadcast, ETH_ALEN) == 0)
-			skb->pkt_type = PACKET_BROADCAST;
-		else
-			skb->pkt_type = PACKET_MULTICAST;
-	}
-
-	/*
-	 * This ALLMULTI check should be redundant by 1.4
-	 * so don't forget to remove it.
-	 *
-	 * Seems, you forgot to remove it. All silly devices
-	 * seems to set IFF_PROMISC.
-	 */
-	else if (1 /*dev->flags&IFF_PROMISC*/)
-	{
-		if(memcmp(eth->h_dest,dev->dev_addr, ETH_ALEN))
-			skb->pkt_type=PACKET_OTHERHOST;
-	}
-
-	if (ntohs(eth->h_proto) >= 1536)
-		return eth->h_proto;
-
-	rawp = skb->data;
-
-	/*
-	 * This is a magic hack to spot IPX packets. Older Novell breaks
-	 * the protocol design and runs IPX over 802.3 without an 802.2 LLC
-	 * layer. We look for FFFF which isn't a used 802.2 SSAP/DSAP. This
-	 * won't work for fault tolerant netware but does for the rest.
-	 */
-	if (*(unsigned short *)rawp == 0xFFFF)
-		return htons(ETH_P_802_3);
-
-	/*
-	 * Real 802.2 LLC
-	 */
-	return htons(ETH_P_802_2);
-}
+const struct sge_intr_counts *t1_sge_get_intr_counts(struct sge *sge)
+{
+	return &sge->stats;
+}
+
+const struct sge_port_stats *t1_sge_get_port_stats(struct sge *sge, int port)
+{
+	return &sge->port_stats[port];
+}
+
+/**
+ *	recycle_fl_buf - recycle a free list buffer
+ *	@fl: the free list
+ *	@idx: index of buffer to recycle
+ *
+ *	Recycles the specified buffer on the given free list by adding it at
+ *	the next available slot on the list.
+ */
+static void recycle_fl_buf(struct freelQ *fl, int idx)
+{
+	struct freelQ_e *from = &fl->entries[idx];
+	struct freelQ_e *to = &fl->entries[fl->pidx];
+
+	fl->centries[fl->pidx] = fl->centries[idx];
+	to->addr_lo = from->addr_lo;
+	to->addr_hi = from->addr_hi;
+	to->len_gen = G_CMD_LEN(from->len_gen) | V_CMD_GEN1(fl->genbit);
+	wmb();
+	to->gen2 = V_CMD_GEN2(fl->genbit);
+	fl->credits++;
+
+	if (++fl->pidx == fl->size) {
+		fl->pidx = 0;
+		fl->genbit ^= 1;
+	}
+}
+
+/**
+ *	get_packet - return the next ingress packet buffer
+ *	@pdev: the PCI device that received the packet
+ *	@fl: the SGE free list holding the packet
+ *	@len: the actual packet length, excluding any SGE padding
+ *	@dma_pad: padding at beginning of buffer left by SGE DMA
+ *	@skb_pad: padding to be used if the packet is copied
+ *	@copy_thres: length threshold under which a packet should be copied
+ *	@drop_thres: # of remaining buffers before we start dropping packets
+ *
+ *	Get the next packet from a free list and complete setup of the
+ *	sk_buff. If the packet is small we make a copy and recycle the
+ *	original buffer, otherwise we use the original buffer itself. If a
+ *	positive drop threshold is supplied packets are dropped and their
+ *	buffers recycled if (a) the number of remaining buffers is under the
+ *	threshold and the packet is too big to copy, or (b) the packet should
+ *	be copied but there is no memory for the copy.
+ */
+static inline struct sk_buff *get_packet(struct pci_dev *pdev,
+					 struct freelQ *fl, unsigned int len,
+					 int dma_pad, int skb_pad,
+					 unsigned int copy_thres,
+					 unsigned int drop_thres)
+{
+	struct sk_buff *skb;
+	struct freelQ_ce *ce = &fl->centries[fl->cidx];
+
+	if (len < copy_thres) {
+		skb = alloc_skb(len + skb_pad, GFP_ATOMIC);
+		if (likely(skb != NULL)) {
+			skb_reserve(skb, skb_pad);
+			skb_put(skb, len);
+			pci_dma_sync_single_for_cpu(pdev,
+					    pci_unmap_addr(ce, dma_addr),
+					    pci_unmap_len(ce, dma_len),
+					    PCI_DMA_FROMDEVICE);
+			memcpy(skb->data, ce->skb->data + dma_pad, len);
+			pci_dma_sync_single_for_device(pdev,
+					    pci_unmap_addr(ce, dma_addr),
+					    pci_unmap_len(ce, dma_len),
+					    PCI_DMA_FROMDEVICE);
+		} else if (!drop_thres)
+			goto use_orig_buf;
+
+		recycle_fl_buf(fl, fl->cidx);
+		return skb;
+	}
+
+	if (fl->credits < drop_thres) {
+		recycle_fl_buf(fl, fl->cidx);
+		return NULL;
+	}
+
+use_orig_buf:
+	pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),
+			 pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);
+	skb = ce->skb;
+	skb_reserve(skb, dma_pad);
+	skb_put(skb, len);
+	return skb;
+}
+
+/**
+ *	unexpected_offload - handle an unexpected offload packet
+ *	@adapter: the adapter
+ *	@fl: the free list that received the packet
+ *
+ *	Called when we receive an unexpected offload packet (e.g., the TOE
+ *	function is disabled or the card is a NIC). Prints a message and
+ *	recycles the buffer.
+ */
+static void unexpected_offload(struct adapter *adapter, struct freelQ *fl)
+{
+	struct freelQ_ce *ce = &fl->centries[fl->cidx];
+	struct sk_buff *skb = ce->skb;
+
+	pci_dma_sync_single_for_cpu(adapter->pdev, pci_unmap_addr(ce, dma_addr),
+				    pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);
+	CH_ERR("%s: unexpected offload packet, cmd %u\n",
+	       adapter->name, *skb->data);
+	recycle_fl_buf(fl, fl->cidx);
+}
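get_packet() above is a classic copy-break: small packets are copied into a fresh skb so the DMA buffer can be recycled immediately, large ones keep the original buffer, and packets are shed once free-list credits fall under drop_thres. A compilable sketch of just that decision (a simplified model, not the driver's code):

    #include <assert.h>

    enum action { COPY, USE_ORIG, DROP };

    /* Hypothetical model of get_packet's copy/recycle/drop decision. */
    static enum action classify(unsigned int len, unsigned int credits,
                                unsigned int copy_thres, unsigned int drop_thres,
                                int alloc_ok)
    {
        if (len < copy_thres) {
            if (alloc_ok)
                return COPY;      /* copy-break: recycle the DMA buffer */
            if (!drop_thres)
                return USE_ORIG;  /* no memory, but dropping not allowed */
            return DROP;
        }
        if (credits < drop_thres)
            return DROP;          /* free list nearly empty: shed load */
        return USE_ORIG;
    }

    int main(void)
    {
        assert(classify(60, 100, 256, 4, 1) == COPY);
        assert(classify(1500, 2, 256, 4, 1) == DROP);
        assert(classify(1500, 100, 256, 4, 1) == USE_ORIG);
        return 0;
    }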
-/*
- * Prepare the received buffer and pass it up the stack. If it is small enough
- * and allocation doesn't fail, we use a new sk_buff and copy the content.
- */
-static unsigned int t1_sge_rx(struct sge *sge, struct freelQ *Q,
-			      unsigned int len, unsigned int offload)
-{
-	struct sk_buff *skb;
-	struct adapter *adapter = sge->adapter;
-	struct freelQ_ce *ce = &Q->centries[Q->cidx];
-
-	if (len <= SGE_RX_COPY_THRESHOLD &&
-	    (skb = alloc_skb(len + NET_IP_ALIGN, GFP_ATOMIC))) {
-		struct freelQ_e *e;
-		char *src = ce->skb->data;
-
-		pci_dma_sync_single_for_cpu(adapter->pdev,
-					    pci_unmap_addr(ce, dma_addr),
-					    pci_unmap_len(ce, dma_len),
-					    PCI_DMA_FROMDEVICE);
-		if (!offload) {
-			skb_reserve(skb, NET_IP_ALIGN);
-			src += sge->rx_pkt_pad;
-		}
-		memcpy(skb->data, src, len);
-
-		/* Reuse the entry. */
-		e = &Q->entries[Q->cidx];
-		e->GenerationBit  ^= 1;
-		e->GenerationBit2 ^= 1;
-	} else {
-		pci_unmap_single(adapter->pdev, pci_unmap_addr(ce, dma_addr),
-				 pci_unmap_len(ce, dma_len),
-				 PCI_DMA_FROMDEVICE);
-		skb = ce->skb;
-		if (!offload && sge->rx_pkt_pad)
-			__skb_pull(skb, sge->rx_pkt_pad);
-	}
-	skb_put(skb, len);
-
-	if (unlikely(offload)) {
-		{
-			printk(KERN_ERR
-			       "%s: unexpected offloaded packet, cmd %u\n",
-			       adapter->name, *skb->data);
-			dev_kfree_skb_any(skb);
-		}
-	} else {
-		struct cpl_rx_pkt *p = (struct cpl_rx_pkt *)skb->data;
-
-		skb_pull(skb, sizeof(*p));
-		skb->dev = adapter->port[p->iff].dev;
-		skb->dev->last_rx = jiffies;
-		skb->protocol = sge_eth_type_trans(skb, skb->dev);
-		if ((adapter->flags & RX_CSUM_ENABLED) && p->csum == 0xffff &&
-		    skb->protocol == htons(ETH_P_IP) &&
-		    (skb->data[9] == IPPROTO_TCP ||
-		     skb->data[9] == IPPROTO_UDP))
-			skb->ip_summed = CHECKSUM_UNNECESSARY;
-		else
-			skb->ip_summed = CHECKSUM_NONE;
-		if (adapter->vlan_grp && p->vlan_valid)
-			vlan_hwaccel_rx(skb, adapter->vlan_grp,
-					ntohs(p->vlan));
-		else
-			netif_rx(skb);
-	}
-
-	if (++Q->cidx == Q->entries_n)
-		Q->cidx = 0;
-
-	if (unlikely(--Q->credits < Q->entries_n - SGE_FREEL_REFILL_THRESH))
-		refill_free_list(sge, Q);
-	return 1;
-}
-
-/*
- * Adaptive interrupt timer logic to keep the CPU utilization to
- * manageable levels. Basically, as the Average Packet Size (APS)
- * gets higher, the interrupt latency setting gets longer. Every
- * SGE_INTR_BUCKETSIZE (of 100B) causes a bump of 2usec to the
- * base value of SGE_INTRTIMER0. At large values of payload the
- * latency hits the ceiling value of SGE_INTRTIMER1 stored at
- * index SGE_INTR_MAXBUCKETS-1 in sge->intrtimer[].
- *
- * sge->currIndex caches the last index to save unneeded PIOs.
- */
-static inline void update_intr_timer(struct sge *sge, unsigned int avg_payload)
-{
-	unsigned int newIndex;
-
-	newIndex = avg_payload / SGE_INTR_BUCKETSIZE;
-	if (newIndex > SGE_INTR_MAXBUCKETS - 1) {
-		newIndex = SGE_INTR_MAXBUCKETS - 1;
-	}
-	/* Save a PIO with this check....maybe */
-	if (newIndex != sge->currIndex) {
-		t1_write_reg_4(sge->adapter, A_SG_INTRTIMER,
-			       sge->intrtimer[newIndex]);
-		sge->currIndex = newIndex;
-		sge->adapter->params.sge.last_rx_coalesce_raw =
-			sge->intrtimer[newIndex];
-	}
-}
-
-/*
- * Returns true if command queue q_num has enough available descriptors that
- * we can resume Tx operation after temporarily disabling its packet queue.
- */
-static inline int enough_free_Tx_descs(struct sge *sge, int q_num)
-{
-	return atomic_read(&sge->cmdQ[q_num].credits) >
-		(sge->cmdQ[q_num].entries_n >> 2);
-}
-
-/*
- * Main interrupt handler, optimized assuming that we took a 'DATA'
- * interrupt.
- *
- * 1. Clear the interrupt
- * 2. Loop while we find valid descriptors and process them; accumulate
- *    information that can be processed after the loop
- * 3. Tell the SGE at which index we stopped processing descriptors
- * 4. Bookkeeping; free TX buffers, ring doorbell if there are any
- *    outstanding TX buffers waiting, replenish RX buffers, potentially
- *    reenable upper layers if they were turned off due to lack of TX
- *    resources which are available again.
- * 5. If we took an interrupt, but no valid respQ descriptors was found we
- *    let the slow_intr_handler run and do error handling.
- */
+/*
+ * Write the command descriptors to transmit the given skb starting at
+ * descriptor pidx with the given generation.
+ */
+static inline void write_tx_descs(struct adapter *adapter, struct sk_buff *skb,
+				  unsigned int pidx, unsigned int gen,
+				  struct cmdQ *q)
+{
+	dma_addr_t mapping;
+	struct cmdQ_e *e, *e1;
+	struct cmdQ_ce *ce;
+	unsigned int i, flags, nfrags = skb_shinfo(skb)->nr_frags;
+
+	mapping = pci_map_single(adapter->pdev, skb->data,
+				 skb->len - skb->data_len, PCI_DMA_TODEVICE);
+	ce = &q->centries[pidx];
+	ce->skb = NULL;
+	pci_unmap_addr_set(ce, dma_addr, mapping);
+	pci_unmap_len_set(ce, dma_len, skb->len - skb->data_len);
+
+	flags = F_CMD_DATAVALID | F_CMD_SOP | V_CMD_EOP(nfrags == 0) |
+		V_CMD_GEN2(gen);
+	e = &q->entries[pidx];
+	e->addr_lo = (u32)mapping;
+	e->addr_hi = (u64)mapping >> 32;
+	e->len_gen = V_CMD_LEN(skb->len - skb->data_len) | V_CMD_GEN1(gen);
+	for (e1 = e, i = 0; nfrags--; i++) {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+		ce++;
+		e1++;
+		if (++pidx == q->size) {
+			pidx = 0;
+			gen ^= 1;
+			ce = q->centries;
+			e1 = q->entries;
+		}
+
+		mapping = pci_map_page(adapter->pdev, frag->page,
+				       frag->page_offset, frag->size,
+				       PCI_DMA_TODEVICE);
+		ce->skb = NULL;
+		pci_unmap_addr_set(ce, dma_addr, mapping);
+		pci_unmap_len_set(ce, dma_len, frag->size);
+
+		e1->addr_lo = (u32)mapping;
+		e1->addr_hi = (u64)mapping >> 32;
+		e1->len_gen = V_CMD_LEN(frag->size) | V_CMD_GEN1(gen);
+		e1->flags = F_CMD_DATAVALID | V_CMD_EOP(nfrags == 0) |
+			    V_CMD_GEN2(gen);
+	}
+
+	ce->skb = skb;
+	wmb();
+	e->flags = flags;
+}
+
+/*
+ * Clean up completed Tx buffers.
+ */
+static inline void reclaim_completed_tx(struct sge *sge, struct cmdQ *q)
+{
+	unsigned int reclaim = q->processed - q->cleaned;
+
+	if (reclaim) {
+		free_cmdQ_buffers(sge, q, reclaim);
+		q->cleaned += reclaim;
+	}
+}
+
+#ifndef SET_ETHTOOL_OPS
+# define __netif_rx_complete(dev) netif_rx_complete(dev)
+#endif
+
+/*
+ * We cannot use the standard netif_rx_schedule_prep() because we have multiple
+ * ports plus the TOE all multiplexing onto a single response queue, therefore
+ * accepting new responses cannot depend on the state of any particular port.
+ * So define our own equivalent that omits the netif_running() test.
+ */
+static inline int napi_schedule_prep(struct net_device *dev)
+{
+	return !test_and_set_bit(__LINK_STATE_RX_SCHED, &dev->state);
+}
+
+/**
+ *	sge_rx - process an ingress ethernet packet
+ *	@sge: the sge structure
+ *	@fl: the free list that contains the packet buffer
+ *	@len: the packet length
+ *
+ *	Process an ingress ethernet packet and deliver it to the stack.
+ */
+static int sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len)
+{
+	struct sk_buff *skb;
+	struct cpl_rx_pkt *p;
+	struct adapter *adapter = sge->adapter;
+
+	sge->stats.ethernet_pkts++;
+	skb = get_packet(adapter->pdev, fl, len - sge->rx_pkt_pad,
+			 sge->rx_pkt_pad, 2, SGE_RX_COPY_THRES,
+			 SGE_RX_DROP_THRES);
+	if (!skb) {
+		sge->port_stats[0].rx_drops++; /* charge only port 0 for now */
+		return 0;
+	}
+
+	p = (struct cpl_rx_pkt *)skb->data;
+	skb_pull(skb, sizeof(*p));
+	skb->dev = adapter->port[p->iff].dev;
+	skb->dev->last_rx = jiffies;
+	skb->protocol = eth_type_trans(skb, skb->dev);
+	if ((adapter->flags & RX_CSUM_ENABLED) && p->csum == 0xffff &&
+	    skb->protocol == htons(ETH_P_IP) &&
+	    (skb->data[9] == IPPROTO_TCP || skb->data[9] == IPPROTO_UDP)) {
+		sge->port_stats[p->iff].rx_cso_good++;
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+	} else
+		skb->ip_summed = CHECKSUM_NONE;
+
+	if (unlikely(adapter->vlan_grp && p->vlan_valid)) {
+		sge->port_stats[p->iff].vlan_xtract++;
+		if (adapter->params.sge.polling)
+			vlan_hwaccel_receive_skb(skb, adapter->vlan_grp,
+						 ntohs(p->vlan));
+		else
+			vlan_hwaccel_rx(skb, adapter->vlan_grp,
+					ntohs(p->vlan));
+	} else if (adapter->params.sge.polling)
+		netif_receive_skb(skb);
+	else
+		netif_rx(skb);
+	return 0;
+}
+
+/*
+ * Returns true if a command queue has enough available descriptors that
+ * we can resume Tx operation after temporarily disabling its packet queue.
+ */
+static inline int enough_free_Tx_descs(const struct cmdQ *q)
+{
+	unsigned int r = q->processed - q->cleaned;
+
+	return q->in_use - r < (q->size >> 1);
+}
+
+/*
+ * Called when sufficient space has become available in the SGE command queues
+ * after the Tx packet schedulers have been suspended to restart the Tx path.
+ */
+static void restart_tx_queues(struct sge *sge)
+{
+	struct adapter *adap = sge->adapter;
+
+	if (enough_free_Tx_descs(&sge->cmdQ[0])) {
+		int i;
+
+		for_each_port(adap, i) {
+			struct net_device *nd = adap->port[i].dev;
+
+			if (test_and_clear_bit(nd->if_port,
+					       &sge->stopped_tx_queues) &&
+			    netif_running(nd)) {
+				sge->stats.cmdQ_restarted[3]++;
+				netif_wake_queue(nd);
+			}
+		}
+	}
+}
+
+/*
+ * update_tx_info is called from the interrupt handler/NAPI to return cmdQ0
+ * information.
+ */
+static unsigned int update_tx_info(struct adapter *adapter,
+				   unsigned int flags,
+				   unsigned int pr0)
+{
+	struct sge *sge = adapter->sge;
+	struct cmdQ *cmdq = &sge->cmdQ[0];
+
+	cmdq->processed += pr0;
+
+	if (flags & F_CMDQ0_ENABLE) {
+		clear_bit(CMDQ_STAT_RUNNING, &cmdq->status);
+
+		if (cmdq->cleaned + cmdq->in_use != cmdq->processed &&
+		    !test_and_set_bit(CMDQ_STAT_LAST_PKT_DB, &cmdq->status)) {
+			set_bit(CMDQ_STAT_RUNNING, &cmdq->status);
+			writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL);
+		}
+		flags &= ~F_CMDQ0_ENABLE;
+	}
+
+	if (unlikely(sge->stopped_tx_queues != 0))
+		restart_tx_queues(sge);
+
+	return flags;
+}
+
+/*
+ * Process SGE responses, up to the supplied budget. Returns the number of
+ * responses processed. A negative budget is effectively unlimited.
+ */
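update_tx_info() is deliberately fed accumulated credit counts instead of being invoked per response; the response loop that follows flushes the accumulator once it crosses 64 entries, which is what keeps the Tx state cacheline from ping-ponging between CPUs. A toy model of the batching effect (names are illustrative):

    #include <assert.h>

    static unsigned int processed;     /* stands in for cmdq->processed */

    static void flush(unsigned int *acc)
    {
        processed += *acc;             /* one shared-cacheline write per batch */
        *acc = 0;
    }

    int main(void)
    {
        unsigned int acc = 0, i;

        for (i = 0; i < 200; i++) {    /* 200 single-credit responses */
            acc += 1;
            if (acc > 64)              /* flush threshold from the loop below */
                flush(&acc);
        }
        flush(&acc);                   /* final flush after the loop */
        assert(processed == 200);
        return 0;
    }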
-irqreturn_t t1_interrupt(int irq, void *cookie, struct pt_regs *regs)
-{
-	struct net_device *netdev;
-	struct adapter *adapter = cookie;
-	struct sge *sge = adapter->sge;
-	struct respQ *Q = &sge->respQ;
-	unsigned int credits = Q->credits, flags = 0, ret = 0;
-	unsigned int tot_rxpayload = 0, tot_txpayload = 0, n_rx = 0, n_tx = 0;
-	unsigned int credits_pend[SGE_CMDQ_N] = { 0, 0 };
-	struct respQ_e *e = &Q->entries[Q->cidx];
-
-	prefetch(e);
-
-	t1_write_reg_4(adapter, A_PL_CAUSE, F_PL_INTR_SGE_DATA);
-
-	while (e->GenerationBit == Q->genbit) {
-		if (--credits < SGE_RESPQ_REPLENISH_THRES) {
-			u32 n = Q->entries_n - credits - 1;
-
-			t1_write_reg_4(adapter, A_SG_RSPQUEUECREDIT, n);
-			credits += n;
-		}
-		if (likely(e->DataValid)) {
-			if (!e->Sop || !e->Eop)
-				BUG();
-			t1_sge_rx(sge, &sge->freelQ[e->FreelistQid],
-				  e->BufferLength, e->Offload);
-			tot_rxpayload += e->BufferLength;
-			++n_rx;
-		}
-		flags |= e->Qsleeping;
-		credits_pend[0] += e->Cmdq0CreditReturn;
-		credits_pend[1] += e->Cmdq1CreditReturn;
-
-#ifdef CONFIG_SMP
-		/*
-		 * If enough cmdQ0 buffers have finished DMAing free them so
-		 * anyone that may be waiting for their release can continue.
-		 * We do this only on MP systems to allow other CPUs to proceed
-		 * promptly. UP systems can wait for the free_cmdQ_buffers()
-		 * calls after this loop as the sole CPU is currently busy in
-		 * this loop.
-		 */
-		if (unlikely(credits_pend[0] > SGE_FREEL_REFILL_THRESH)) {
-			free_cmdQ_buffers(sge, &sge->cmdQ[0], credits_pend[0],
-					  &tot_txpayload);
-			n_tx += credits_pend[0];
-			credits_pend[0] = 0;
-		}
-#endif
-		ret++;
-		e++;
-		if (unlikely(++Q->cidx == Q->entries_n)) {
-			Q->cidx = 0;
-			Q->genbit ^= 1;
-			e = Q->entries;
-		}
-	}
-
-	Q->credits = credits;
-	t1_write_reg_4(adapter, A_SG_SLEEPING, Q->cidx);
-
-	if (credits_pend[0])
-		free_cmdQ_buffers(sge, &sge->cmdQ[0], credits_pend[0], &tot_txpayload);
-	if (credits_pend[1])
-		free_cmdQ_buffers(sge, &sge->cmdQ[1], credits_pend[1], &tot_txpayload);
-
-	/* Do any coalescing and interrupt latency timer adjustments */
-	if (adapter->params.sge.coalesce_enable) {
-		unsigned int avg_txpayload = 0, avg_rxpayload = 0;
-
-		n_tx += credits_pend[0] + credits_pend[1];
-
-		/*
-		 * Choose larger avg. payload size to increase
-		 * throughput and reduce [CPU util., intr/s.]
-		 *
-		 * Throughput behavior favored in mixed-mode.
-		 */
-		if (n_tx)
-			avg_txpayload = tot_txpayload/n_tx;
-		if (n_rx)
-			avg_rxpayload = tot_rxpayload/n_rx;
-
-		if (n_tx && avg_txpayload > avg_rxpayload){
-			update_intr_timer(sge, avg_txpayload);
-		} else if (n_rx) {
-			update_intr_timer(sge, avg_rxpayload);
-		}
-	}
-
-	if (flags & F_CMDQ0_ENABLE) {
-		struct cmdQ *cmdQ = &sge->cmdQ[0];
-
-		atomic_set(&cmdQ->asleep, 1);
-		if (atomic_read(&cmdQ->pio_pidx) != cmdQ->pidx) {
-			doorbell_pio(sge, F_CMDQ0_ENABLE);
-			atomic_set(&cmdQ->pio_pidx, cmdQ->pidx);
-		}
-	}
-	if (unlikely(flags & (F_FL0_ENABLE | F_FL1_ENABLE)))
-		freelQs_empty(sge);
-
-	netdev = adapter->port[0].dev;
-	if (unlikely(netif_queue_stopped(netdev) && netif_carrier_ok(netdev) &&
-		     enough_free_Tx_descs(sge, 0) &&
-		     enough_free_Tx_descs(sge, 1))) {
-		netif_wake_queue(netdev);
-	}
-
-	if (unlikely(!ret))
-		ret = t1_slow_intr_handler(adapter);
-
-	return IRQ_RETVAL(ret != 0);
-}
+static int process_responses(struct adapter *adapter, int budget)
+{
+	struct sge *sge = adapter->sge;
+	struct respQ *q = &sge->respQ;
+	struct respQ_e *e = &q->entries[q->cidx];
+	int budget_left = budget;
+	unsigned int flags = 0;
+	unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};
+
+	while (likely(budget_left && e->GenerationBit == q->genbit)) {
+		flags |= e->Qsleeping;
+
+		cmdq_processed[0] += e->Cmdq0CreditReturn;
+		cmdq_processed[1] += e->Cmdq1CreditReturn;
+
+		/* We batch updates to the TX side to avoid cacheline
+		 * ping-pong of TX state information on MP where the sender
+		 * might run on a different CPU than this function...
+		 */
+		if (unlikely(flags & F_CMDQ0_ENABLE || cmdq_processed[0] > 64)) {
+			flags = update_tx_info(adapter, flags, cmdq_processed[0]);
+			cmdq_processed[0] = 0;
+		}
+		if (unlikely(cmdq_processed[1] > 16)) {
+			sge->cmdQ[1].processed += cmdq_processed[1];
+			cmdq_processed[1] = 0;
+		}
+		if (likely(e->DataValid)) {
+			struct freelQ *fl = &sge->freelQ[e->FreelistQid];
+
+			if (unlikely(!e->Sop || !e->Eop))
+				BUG();
+			if (unlikely(e->Offload))
+				unexpected_offload(adapter, fl);
+			else
+				sge_rx(sge, fl, e->BufferLength);
+
+			/*
+			 * Note: this depends on each packet consuming a
+			 * single free-list buffer; cf. the BUG above.
+			 */
+			if (++fl->cidx == fl->size)
+				fl->cidx = 0;
+			if (unlikely(--fl->credits <
+				     fl->size - SGE_FREEL_REFILL_THRESH))
+				refill_free_list(sge, fl);
+		} else
+			sge->stats.pure_rsps++;
+
+		e++;
+		if (unlikely(++q->cidx == q->size)) {
+			q->cidx = 0;
+			q->genbit ^= 1;
+			e = q->entries;
+		}
+		prefetch(e);
+
+		if (++q->credits > SGE_RESPQ_REPLENISH_THRES) {
+			writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);
+			q->credits = 0;
+		}
+		--budget_left;
+	}
+
+	flags = update_tx_info(adapter, flags, cmdq_processed[0]);
+	sge->cmdQ[1].processed += cmdq_processed[1];
+
+	budget -= budget_left;
+	return budget;
+}
+
+/*
+ * A simpler version of process_responses() that handles only pure (i.e.,
+ * non data-carrying) responses. Such responses are too light-weight to
+ * justify calling a softirq when using NAPI, so we handle them specially in
+ * hard interrupt context. The function is called with a pointer to a
+ * response, which the caller must ensure is a valid pure response. Returns
+ * 1 if it encounters a valid data-carrying response, 0 otherwise.
+ */
+static int process_pure_responses(struct adapter *adapter, struct respQ_e *e)
+{
+	struct sge *sge = adapter->sge;
+	struct respQ *q = &sge->respQ;
+	unsigned int flags = 0;
+	unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};
+
+	do {
+		flags |= e->Qsleeping;
+
+		cmdq_processed[0] += e->Cmdq0CreditReturn;
+		cmdq_processed[1] += e->Cmdq1CreditReturn;
+
+		e++;
+		if (unlikely(++q->cidx == q->size)) {
+			q->cidx = 0;
+			q->genbit ^= 1;
+			e = q->entries;
+		}
+		prefetch(e);
+
+		if (++q->credits > SGE_RESPQ_REPLENISH_THRES) {
+			writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);
+			q->credits = 0;
+		}
+		sge->stats.pure_rsps++;
+	} while (e->GenerationBit == q->genbit && !e->DataValid);
+
+	flags = update_tx_info(adapter, flags, cmdq_processed[0]);
+	sge->cmdQ[1].processed += cmdq_processed[1];
+
+	return e->GenerationBit == q->genbit;
+}
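process_responses() reports work done as the difference between the granted budget and what remains of it, which is exactly what the NAPI poll path needs. A small standalone model of that bookkeeping:

    #include <assert.h>

    /* Toy model of process_responses' budget accounting: consume up to
     * 'budget' pending responses and report how many were processed. */
    static int process(int *pending, int budget)
    {
        int budget_left = budget;

        while (budget_left && *pending) {
            (*pending)--;
            budget_left--;
        }
        return budget - budget_left;   /* number of responses handled */
    }

    int main(void)
    {
        int pending = 10;
        assert(process(&pending, 4) == 4 && pending == 6);
        assert(process(&pending, 100) == 6 && pending == 0);
        return 0;
    }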
-/*
- * Enqueues the sk_buff onto the cmdQ[qid] and has hardware fetch it.
- *
- * The code figures out how many entries the sk_buff will require in the
- * cmdQ and updates the cmdQ data structure with the state once the enqueue
- * has complete. Then, it doesn't access the global structure anymore, but
- * uses the corresponding fields on the stack. In conjuction with a spinlock
- * around that code, we can make the function reentrant without holding the
- * lock when we actually enqueue (which might be expensive, especially on
- * architectures with IO MMUs).
- */
-static unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
-			      unsigned int qid)
-{
-	struct sge *sge = adapter->sge;
-	struct cmdQ *Q = &sge->cmdQ[qid];
-	struct cmdQ_e *e;
-	struct cmdQ_ce *ce;
-	dma_addr_t mapping;
-	unsigned int credits, pidx, genbit;
-
-	unsigned int count = 1 + skb_shinfo(skb)->nr_frags;
-
-	/*
-	 * Coming from the timer
-	 */
-	if ((skb == sge->pskb)) {
-		/*
-		 * Quit if any cmdQ activities
-		 */
-		if (!spin_trylock(&Q->Qlock))
-			return 0;
-		if (atomic_read(&Q->credits) != Q->entries_n) {
-			spin_unlock(&Q->Qlock);
-			return 0;
-		}
-	}
-	else
-		spin_lock(&Q->Qlock);
-
-	genbit = Q->genbit;
-	pidx = Q->pidx;
-	credits = atomic_read(&Q->credits);
-
-	credits -= count;
-	atomic_sub(count, &Q->credits);
-	Q->pidx += count;
-	if (Q->pidx >= Q->entries_n) {
-		Q->pidx -= Q->entries_n;
-		Q->genbit ^= 1;
-	}
-
-	if (unlikely(credits < (MAX_SKB_FRAGS + 1))) {
-		sge->intr_cnt.cmdQ_full[qid]++;
-		netif_stop_queue(adapter->port[0].dev);
-	}
-	spin_unlock(&Q->Qlock);
-
-	mapping = pci_map_single(adapter->pdev, skb->data,
-				 skb->len - skb->data_len, PCI_DMA_TODEVICE);
-	ce = &Q->centries[pidx];
-	ce->skb = NULL;
-	pci_unmap_addr_set(ce, dma_addr, mapping);
-	pci_unmap_len_set(ce, dma_len, skb->len - skb->data_len);
-	ce->single = 1;
-
-	e = &Q->entries[pidx];
-	e->Sop = 1;
-	e->DataValid = 1;
-	e->BufferLength = skb->len - skb->data_len;
-	e->AddrHigh = (u64)mapping >> 32;
-	e->AddrLow = (u32)mapping;
-
-	if (--count > 0) {
-		unsigned int i;
-
-		e->Eop = 0;
-		wmb();
-		e->GenerationBit = e->GenerationBit2 = genbit;
-
-		for (i = 0; i < count; i++) {
-			skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-
-			ce++; e++;
-			if (++pidx == Q->entries_n) {
-				pidx = 0;
-				genbit ^= 1;
-				ce = Q->centries;
-				e = Q->entries;
-			}
-
-			mapping = pci_map_page(adapter->pdev, frag->page,
-					       frag->page_offset,
-					       frag->size,
-					       PCI_DMA_TODEVICE);
-			ce->skb = NULL;
-			pci_unmap_addr_set(ce, dma_addr, mapping);
-			pci_unmap_len_set(ce, dma_len, frag->size);
-			ce->single = 0;
-
-			e->Sop = 0;
-			e->DataValid = 1;
-			e->BufferLength = frag->size;
-			e->AddrHigh = (u64)mapping >> 32;
-			e->AddrLow = (u32)mapping;
-
-			if (i < count - 1) {
-				e->Eop = 0;
-				wmb();
-				e->GenerationBit = e->GenerationBit2 = genbit;
-			}
-		}
-	}
-
-	if (skb != sge->pskb)
-		ce->skb = skb;
-	e->Eop = 1;
-	wmb();
-	e->GenerationBit = e->GenerationBit2 = genbit;
-
+/*
+ * Handler for new data events when using NAPI. This does not need any locking
+ * or protection from interrupts as data interrupts are off at this point and
+ * other adapter interrupts do not interfere.
+ */
+static int t1_poll(struct net_device *dev, int *budget)
+{
+	struct adapter *adapter = dev->priv;
+	int effective_budget = min(*budget, dev->quota);
+
+	int work_done = process_responses(adapter, effective_budget);
+	*budget -= work_done;
+	dev->quota -= work_done;
+
+	if (work_done >= effective_budget)
+		return 1;
+
+	__netif_rx_complete(dev);
+
+	/*
+	 * Because we don't atomically flush the following write it is
+	 * possible that in very rare cases it can reach the device in a way
+	 * that races with a new response being written plus an error interrupt
+	 * causing the NAPI interrupt handler below to return unhandled status
+	 * to the OS. To protect against this would require flushing the write
+	 * and doing both the write and the flush with interrupts off. Way too
+	 * expensive and unjustifiable given the rarity of the race.
+	 */
+	writel(adapter->sge->respQ.cidx, adapter->regs + A_SG_SLEEPING);
+	return 0;
+}
+
+/*
+ * Returns true if the device is already scheduled for polling.
+ */
+static inline int napi_is_scheduled(struct net_device *dev)
+{
+	return test_bit(__LINK_STATE_RX_SCHED, &dev->state);
+}
+
+/*
+ * NAPI version of the main interrupt handler.
+ */
+static irqreturn_t t1_interrupt_napi(int irq, void *data, struct pt_regs *regs)
+{
+	int handled;
+	struct adapter *adapter = data;
+	struct sge *sge = adapter->sge;
+	struct respQ *q = &adapter->sge->respQ;
+
+	/*
+	 * Clear the SGE_DATA interrupt first thing. Normally the NAPI
+	 * handler has control of the response queue and the interrupt handler
+	 * can look at the queue reliably only once it knows NAPI is off.
+	 * We can't wait that long to clear the SGE_DATA interrupt because we
+	 * could race with t1_poll rearming the SGE interrupt, so we need to
+	 * clear the interrupt speculatively and really early on.
+	 */
+	writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);
+
+	spin_lock(&adapter->async_lock);
+	if (!napi_is_scheduled(sge->netdev)) {
+		struct respQ_e *e = &q->entries[q->cidx];
+
+		if (e->GenerationBit == q->genbit) {
+			if (e->DataValid ||
+			    process_pure_responses(adapter, e)) {
+				if (likely(napi_schedule_prep(sge->netdev)))
+					__netif_rx_schedule(sge->netdev);
+				else
+					printk(KERN_CRIT
+					       "NAPI schedule failure!\n");
+			} else
+				writel(q->cidx, adapter->regs + A_SG_SLEEPING);
+			handled = 1;
+			goto unlock;
+		} else
+			writel(q->cidx, adapter->regs + A_SG_SLEEPING);
+	} else
+		if (readl(adapter->regs + A_PL_CAUSE) & F_PL_INTR_SGE_DATA)
+			printk(KERN_ERR "data interrupt while NAPI running\n");
+
+	handled = t1_slow_intr_handler(adapter);
+	if (!handled)
+		sge->stats.unhandled_irqs++;
+unlock:
+	spin_unlock(&adapter->async_lock);
+	return IRQ_RETVAL(handled != 0);
+}
+
+/*
+ * Main interrupt handler, optimized assuming that we took a 'DATA'
+ * interrupt.
+ *
+ * 1. Clear the interrupt
+ * 2. Loop while we find valid descriptors and process them; accumulate
+ *    information that can be processed after the loop
+ * 3. Tell the SGE at which index we stopped processing descriptors
+ * 4. Bookkeeping; free TX buffers, ring doorbell if there are any
+ *    outstanding TX buffers waiting, replenish RX buffers, potentially
+ *    reenable upper layers if they were turned off due to lack of TX
+ *    resources which are available again.
+ * 5. If we took an interrupt, but no valid respQ descriptors were found we
+ *    let the slow_intr_handler run and do error handling.
+ */
+static irqreturn_t t1_interrupt(int irq, void *cookie, struct pt_regs *regs)
+{
+	int work_done;
+	struct respQ_e *e;
+	struct adapter *adapter = cookie;
+	struct respQ *Q = &adapter->sge->respQ;
+
+	spin_lock(&adapter->async_lock);
+	e = &Q->entries[Q->cidx];
+	prefetch(e);
+
+	writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);
+
+	if (likely(e->GenerationBit == Q->genbit))
+		work_done = process_responses(adapter, -1);
+	else
+		work_done = t1_slow_intr_handler(adapter);
+
+	/*
+	 * The unconditional clearing of the PL_CAUSE above may have raced
+	 * with DMA completion and the corresponding generation of a response
+	 * to cause us to miss the resulting data interrupt. The next write
+	 * is also unconditional to recover the missed interrupt and render
+	 * this race harmless.
+	 */
+	writel(Q->cidx, adapter->regs + A_SG_SLEEPING);
+
+	if (!work_done)
+		adapter->sge->stats.unhandled_irqs++;
+	spin_unlock(&adapter->async_lock);
+	return IRQ_RETVAL(work_done != 0);
+}
+
+intr_handler_t t1_select_intr_handler(adapter_t *adapter)
+{
+	return adapter->params.sge.polling ? t1_interrupt_napi : t1_interrupt;
+}
+
+/*
+ * Enqueues the sk_buff onto the cmdQ[qid] and has hardware fetch it.
+ *
+ * The code figures out how many entries the sk_buff will require in the
+ * cmdQ and updates the cmdQ data structure with the state once the enqueue
+ * has complete. Then, it doesn't access the global structure anymore, but
+ * uses the corresponding fields on the stack. In conjunction with a spinlock
+ * around that code, we can make the function reentrant without holding the
+ * lock when we actually enqueue (which might be expensive, especially on
+ * architectures with IO MMUs).
+ *
+ * This runs with softirqs disabled.
+ */
+unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
+		       unsigned int qid, struct net_device *dev)
+{
+	struct sge *sge = adapter->sge;
+	struct cmdQ *q = &sge->cmdQ[qid];
+	unsigned int credits, pidx, genbit, count;
+
+	spin_lock(&q->lock);
+	reclaim_completed_tx(sge, q);
+
+	pidx = q->pidx;
+	credits = q->size - q->in_use;
+	count = 1 + skb_shinfo(skb)->nr_frags;
+
+	{	/* Ethernet packet */
+		if (unlikely(credits < count)) {
+			netif_stop_queue(dev);
+			set_bit(dev->if_port, &sge->stopped_tx_queues);
+			sge->stats.cmdQ_full[3]++;
+			spin_unlock(&q->lock);
+			CH_ERR("%s: Tx ring full while queue awake!\n",
+			       adapter->name);
+			return 1;
+		}
+		if (unlikely(credits - count < q->stop_thres)) {
+			sge->stats.cmdQ_full[3]++;
+			netif_stop_queue(dev);
+			set_bit(dev->if_port, &sge->stopped_tx_queues);
+		}
+	}
+	q->in_use += count;
+	genbit = q->genbit;
+	q->pidx += count;
+	if (q->pidx >= q->size) {
+		q->pidx -= q->size;
+		q->genbit ^= 1;
+	}
+	spin_unlock(&q->lock);
+
+	write_tx_descs(adapter, skb, pidx, genbit, q);
+
 	/*
 	 * We always ring the doorbell for cmdQ1. For cmdQ0, we only ring
@@ -1317,50 +1380,50 @@ static unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
 	 * then the interrupt handler will detect the outstanding TX packet
 	 * and ring the doorbell for us.
 	 */
-	if (qid) {
-		doorbell_pio(sge, F_CMDQ1_ENABLE);
-	} else if (atomic_read(&Q->asleep)) {
-		atomic_set(&Q->asleep, 0);
-		doorbell_pio(sge, F_CMDQ0_ENABLE);
-		atomic_set(&Q->pio_pidx, Q->pidx);
-	}
+	if (qid)
+		doorbell_pio(adapter, F_CMDQ1_ENABLE);
+	else {
+		clear_bit(CMDQ_STAT_LAST_PKT_DB, &q->status);
+		if (test_and_set_bit(CMDQ_STAT_RUNNING, &q->status) == 0) {
+			set_bit(CMDQ_STAT_LAST_PKT_DB, &q->status);
+			writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL);
+		}
+	}
 	return 0;
 }
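The occupancy checks in the new t1_sge_tx() stop the queue either when the packet does not fit at all or when the credits remaining after this packet drop under q->stop_thres. A compilable sketch of that logic (a simplified model with illustrative names):

    #include <assert.h>

    struct ring { unsigned int size, in_use, stop_thres; };

    /* Returns -1 if the packet cannot be enqueued, 1 if it was enqueued but
     * the queue should be stopped, 0 on a plain enqueue. */
    static int reserve(struct ring *q, unsigned int count)
    {
        unsigned int credits = q->size - q->in_use;

        if (credits < count)
            return -1;                 /* ring full: caller stops the queue */
        q->in_use += count;
        return (credits - count < q->stop_thres) ? 1 : 0;
    }

    int main(void)
    {
        struct ring q = { .size = 8, .in_use = 0, .stop_thres = 2 };
        assert(reserve(&q, 4) == 0);   /* 4 credits left, above threshold */
        assert(reserve(&q, 3) == 1);   /* 1 credit left: stop the queue */
        assert(reserve(&q, 2) == -1);  /* does not fit */
        return 0;
    }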
 #define MK_ETH_TYPE_MSS(type, mss) (((mss) & 0x3FFF) | ((type) << 14))

+/*
+ *	eth_hdr_len - return the length of an Ethernet header
+ *	@data: pointer to the start of the Ethernet header
+ *
+ *	Returns the length of an Ethernet header, including optional VLAN tag.
+ */
+static inline int eth_hdr_len(const void *data)
+{
+	const struct ethhdr *e = data;
+
+	return e->h_proto == htons(ETH_P_8021Q) ? VLAN_ETH_HLEN : ETH_HLEN;
+}
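eth_hdr_len() and MK_ETH_TYPE_MSS() are small enough to verify in isolation; the sketch below restates them in userspace (struct and constants are redeclared locally for the example and are not the kernel's definitions):

    #include <arpa/inet.h>
    #include <assert.h>
    #include <stdint.h>

    #define ETH_HLEN      14
    #define VLAN_ETH_HLEN 18
    #define ETH_P_8021Q   0x8100
    #define MK_ETH_TYPE_MSS(type, mss) (((mss) & 0x3FFF) | ((type) << 14))

    struct ethhdr_model { uint8_t dst[6], src[6]; uint16_t h_proto; };

    /* Userspace restatement of eth_hdr_len() above. */
    static int eth_hdr_len(const void *data)
    {
        const struct ethhdr_model *e = data;
        return e->h_proto == htons(ETH_P_8021Q) ? VLAN_ETH_HLEN : ETH_HLEN;
    }

    int main(void)
    {
        struct ethhdr_model e = { {0}, {0}, htons(0x0800) }; /* plain IPv4 */
        assert(eth_hdr_len(&e) == ETH_HLEN);
        e.h_proto = htons(ETH_P_8021Q);                      /* VLAN tagged */
        assert(eth_hdr_len(&e) == VLAN_ETH_HLEN);
        /* type goes in the top 2 bits, the MSS in the low 14 */
        assert(MK_ETH_TYPE_MSS(1, 1460) == 0x45B4);
        return 0;
    }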
 /*
  * Adds the CPL header to the sk_buff and passes it to t1_sge_tx.
  */
 int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct adapter *adapter = dev->priv;
+	struct sge_port_stats *st = &adapter->sge->port_stats[dev->if_port];
+	struct sge *sge = adapter->sge;
 	struct cpl_tx_pkt *cpl;
-	struct ethhdr *eth;
-	size_t max_len;
-
-	/*
-	 * We are using a non-standard hard_header_len and some kernel
-	 * components, such as pktgen, do not handle it right. Complain
-	 * when this happens but try to fix things up.
-	 */
-	if (unlikely(skb_headroom(skb) < dev->hard_header_len - ETH_HLEN)) {
-		struct sk_buff *orig_skb = skb;
-
-		if (net_ratelimit())
-			printk(KERN_ERR
-			       "%s: Tx packet has inadequate headroom\n",
-			       dev->name);
-		skb = skb_realloc_headroom(skb, sizeof(struct cpl_tx_pkt_lso));
-		dev_kfree_skb_any(orig_skb);
-		if (!skb)
-			return -ENOMEM;
-	}
 
-#ifdef NETIF_F_TSO
 	if (skb_shinfo(skb)->tso_size) {
 		int eth_type;
 		struct cpl_tx_pkt_lso *hdr;
 
+		st->tso++;
+
 		eth_type = skb->nh.raw - skb->data == ETH_HLEN ?
 			CPL_ETH_II : CPL_ETH_II_VLAN;
 
@@ -1373,40 +1436,72 @@ int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			       skb_shinfo(skb)->tso_size));
 		hdr->len = htonl(skb->len - sizeof(*hdr));
 		cpl = (struct cpl_tx_pkt *)hdr;
+		sge->stats.tx_lso_pkts++;
 	} else
-#endif
 	{
 		/*
-		 * An Ethernet packet must have at least space for
-		 * the DIX Ethernet header and be no greater than
-		 * the device set MTU. Otherwise trash the packet.
+		 * Packets shorter than ETH_HLEN can break the MAC, drop them
+		 * early. Also, we may get oversized packets because some
+		 * parts of the kernel don't handle our unusual hard_header_len
+		 * right, drop those too.
 		 */
-		if (skb->len < ETH_HLEN)
-			goto t1_start_xmit_fail2;
-		eth = (struct ethhdr *)skb->data;
-		if (eth->h_proto == htons(ETH_P_8021Q))
-			max_len = dev->mtu + VLAN_ETH_HLEN;
-		else
-			max_len = dev->mtu + ETH_HLEN;
-		if (skb->len > max_len)
-			goto t1_start_xmit_fail2;
+		if (unlikely(skb->len < ETH_HLEN ||
+			     skb->len > dev->mtu + eth_hdr_len(skb->data))) {
+			dev_kfree_skb_any(skb);
+			return NET_XMIT_SUCCESS;
+		}
+
+		/*
+		 * We are using a non-standard hard_header_len and some kernel
+		 * components, such as pktgen, do not handle it right.
+		 * Complain when this happens but try to fix things up.
+		 */
+		if (unlikely(skb_headroom(skb) <
+			     dev->hard_header_len - ETH_HLEN)) {
+			struct sk_buff *orig_skb = skb;
+
+			if (net_ratelimit())
+				printk(KERN_ERR "%s: inadequate headroom in "
+				       "Tx packet\n", dev->name);
+			skb = skb_realloc_headroom(skb, sizeof(*cpl));
+			dev_kfree_skb_any(orig_skb);
+			if (!skb)
+				return -ENOMEM;
+		}
 
 		if (!(adapter->flags & UDP_CSUM_CAPABLE) &&
 		    skb->ip_summed == CHECKSUM_HW &&
-		    skb->nh.iph->protocol == IPPROTO_UDP &&
-		    skb_checksum_help(skb, 0))
-			goto t1_start_xmit_fail3;
+		    skb->nh.iph->protocol == IPPROTO_UDP)
+			if (unlikely(skb_checksum_help(skb, 0))) {
+				dev_kfree_skb_any(skb);
+				return -ENOMEM;
+			}
 
-		if (!adapter->sge->pskb) {
+		/* Hmmm, assuming to catch the gratuitous arp... and we'll use
+		 * it to flush out stuck espi packets...
+		 */
+		if (unlikely(!adapter->sge->espibug_skb)) {
 			if (skb->protocol == htons(ETH_P_ARP) &&
-			    skb->nh.arph->ar_op == htons(ARPOP_REQUEST))
-				adapter->sge->pskb = skb;
+			    skb->nh.arph->ar_op == htons(ARPOP_REQUEST)) {
+				adapter->sge->espibug_skb = skb;
+				/* We want to re-use this skb later. We
+				 * simply bump the reference count and it
+				 * will not be freed...
+				 */
+				skb = skb_get(skb);
+			}
 		}
-		cpl = (struct cpl_tx_pkt *)skb_push(skb, sizeof(*cpl));
 
+		cpl = (struct cpl_tx_pkt *)__skb_push(skb, sizeof(*cpl));
 		cpl->opcode = CPL_TX_PKT;
 		cpl->ip_csum_dis = 1; /* SW calculates IP csum */
 		cpl->l4_csum_dis = skb->ip_summed == CHECKSUM_HW ? 0 : 1;
 		/* the length field isn't used so don't bother setting it */
+
+		st->tx_cso += (skb->ip_summed == CHECKSUM_HW);
+		sge->stats.tx_do_cksum += (skb->ip_summed == CHECKSUM_HW);
+		sge->stats.tx_reg_pkts++;
 	}
 	cpl->iff = dev->if_port;
 
@@ -1414,38 +1509,176 @@ int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (adapter->vlan_grp && vlan_tx_tag_present(skb)) {
 		cpl->vlan_valid = 1;
 		cpl->vlan = htons(vlan_tx_tag_get(skb));
+		st->vlan_insert++;
 	} else
 #endif
 		cpl->vlan_valid = 0;
 
 	dev->trans_start = jiffies;
-	return t1_sge_tx(skb, adapter, 0);
+	return t1_sge_tx(skb, adapter, 0, dev);
 }
 
-t1_start_xmit_fail3:
-	printk(KERN_INFO "%s: Unable to complete checksum\n", dev->name);
-	goto t1_start_xmit_fail1;
-
-t1_start_xmit_fail2:
-	printk(KERN_INFO "%s: Invalid packet length %d, dropping\n",
-	       dev->name, skb->len);
-
-t1_start_xmit_fail1:
-	dev_kfree_skb_any(skb);
+/*
+ * Callback for the Tx buffer reclaim timer. Runs with softirqs disabled.
+ */
+static void sge_tx_reclaim_cb(unsigned long data)
+{
+	int i;
+	struct sge *sge = (struct sge *)data;
+
+	for (i = 0; i < SGE_CMDQ_N; ++i) {
+		struct cmdQ *q = &sge->cmdQ[i];
+
+		if (!spin_trylock(&q->lock))
+			continue;
+
+		reclaim_completed_tx(sge, q);
+		if (i == 0 && q->in_use)   /* flush pending credits */
+			writel(F_CMDQ0_ENABLE,
+			       sge->adapter->regs + A_SG_DOORBELL);
+
+		spin_unlock(&q->lock);
+	}
+	mod_timer(&sge->tx_reclaim_timer, jiffies + TX_RECLAIM_PERIOD);
+}
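sge_tx_reclaim_cb() takes each queue lock with a trylock so the periodic janitor can never stall the hot transmit path; if the lock is busy it simply waits for the next period. A minimal pthread model of that discipline (illustrative, not driver code):

    #include <assert.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int reclaimed;

    /* Periodic janitor: never block against the owner of the lock. */
    static void reclaim_cb(void)
    {
        if (pthread_mutex_trylock(&lock) != 0)
            return;            /* xmit is busy; try again next period */
        reclaimed++;           /* stands in for reclaim_completed_tx() */
        pthread_mutex_unlock(&lock);
        /* a real timer would re-arm itself here, cf. mod_timer() above */
    }

    int main(void)
    {
        reclaim_cb();
        assert(reclaimed == 1);
        pthread_mutex_lock(&lock);  /* simulate a busy transmit path */
        reclaim_cb();               /* skipped, no deadlock */
        assert(reclaimed == 1);
        pthread_mutex_unlock(&lock);
        return 0;
    }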
-	return 0;
-}
+/*
+ * Propagate changes of the SGE coalescing parameters to the HW.
+ */
+int t1_sge_set_coalesce_params(struct sge *sge, struct sge_params *p)
+{
+	sge->netdev->poll = t1_poll;
+	sge->fixed_intrtimer = p->rx_coalesce_usecs *
+		core_ticks_per_usec(sge->adapter);
+	writel(sge->fixed_intrtimer, sge->adapter->regs + A_SG_INTRTIMER);
+	return 0;
+}
 
-void t1_sge_set_ptimeout(adapter_t *adapter, u32 val)
-{
-	struct sge *sge = adapter->sge;
-
-	if (is_T2(adapter))
-		sge->ptimeout = max((u32)((HZ * val) / 1000), (u32)1);
-}
+/*
+ * Allocates both RX and TX resources and configures the SGE. However,
+ * the hardware is not enabled yet.
+ */
+int t1_sge_configure(struct sge *sge, struct sge_params *p)
+{
+	if (alloc_rx_resources(sge, p))
+		return -ENOMEM;
+	if (alloc_tx_resources(sge, p)) {
+		free_rx_resources(sge);
+		return -ENOMEM;
+	}
+	configure_sge(sge, p);
+
+	/*
+	 * Now that we have sized the free lists calculate the payload
+	 * capacity of the large buffers. Other parts of the driver use
+	 * this to set the max offload coalescing size so that RX packets
+	 * do not overflow our large buffers.
+	 */
+	p->large_buf_capacity = jumbo_payload_capacity(sge);
+	return 0;
+}
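The coalescing parameter is converted from microseconds to core-clock ticks before it is written to A_SG_INTRTIMER. A small example of the arithmetic, assuming a hypothetical 125 MHz core clock:

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical restatement of the holdoff conversion: the register is
     * programmed in core-clock ticks, the user parameter in microseconds. */
    static uint32_t usecs_to_ticks(uint32_t usecs, uint32_t core_hz)
    {
        return usecs * (core_hz / 1000000);   /* core_ticks_per_usec */
    }

    int main(void)
    {
        /* e.g. a 125 MHz core clock and the 50 usec default set in
         * t1_sge_create() would give 6250 ticks. */
        assert(usecs_to_ticks(50, 125000000) == 6250);
        return 0;
    }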
-u32 t1_sge_get_ptimeout(adapter_t *adapter)
-{
-	struct sge *sge = adapter->sge;
-
-	return (is_T2(adapter) ? ((sge->ptimeout * 1000) / HZ) : 0);
-}
+/*
+ * Disables the DMA engine.
+ */
+void t1_sge_stop(struct sge *sge)
+{
+	writel(0, sge->adapter->regs + A_SG_CONTROL);
+	(void) readl(sge->adapter->regs + A_SG_CONTROL); /* flush */
+
+	if (is_T2(sge->adapter))
+		del_timer_sync(&sge->espibug_timer);
+	del_timer_sync(&sge->tx_reclaim_timer);
+}
+
+/*
+ * Enables the DMA engine.
+ */
+void t1_sge_start(struct sge *sge)
+{
+	refill_free_list(sge, &sge->freelQ[0]);
+	refill_free_list(sge, &sge->freelQ[1]);
+
+	writel(sge->sge_control, sge->adapter->regs + A_SG_CONTROL);
+	doorbell_pio(sge->adapter, F_FL0_ENABLE | F_FL1_ENABLE);
+	(void) readl(sge->adapter->regs + A_SG_CONTROL); /* flush */
+
+	mod_timer(&sge->tx_reclaim_timer, jiffies + TX_RECLAIM_PERIOD);
+
+	if (is_T2(sge->adapter))
+		mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);
+}
+
+/*
+ * Callback for the T2 ESPI 'stuck packet feature' workaround
+ */
+static void espibug_workaround(void *data)
+{
+	struct adapter *adapter = (struct adapter *)data;
+	struct sge *sge = adapter->sge;
+
+	if (netif_running(adapter->port[0].dev)) {
+		struct sk_buff *skb = sge->espibug_skb;
+		u32 seop = t1_espi_get_mon(adapter, 0x930, 0);
+
+		if ((seop & 0xfff0fff) == 0xfff && skb) {
+			if (!skb->cb[0]) {
+				u8 ch_mac_addr[ETH_ALEN] =
+					{0x0, 0x7, 0x43, 0x0, 0x0, 0x0};
+				memcpy(skb->data + sizeof(struct cpl_tx_pkt),
+				       ch_mac_addr, ETH_ALEN);
+				memcpy(skb->data + skb->len - 10, ch_mac_addr,
+				       ETH_ALEN);
+				skb->cb[0] = 0xff;
+			}
+
+			/* bump the reference count to avoid freeing of the
+			 * skb once the DMA has completed.
+			 */
+			skb = skb_get(skb);
+			t1_sge_tx(skb, adapter, 0, adapter->port[0].dev);
+		}
+	}
+	mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);
+}
/*
* Creates a t1_sge structure and returns suggested resource parameters.
*/
struct sge * __devinit t1_sge_create(struct adapter *adapter,
struct sge_params *p)
{
struct sge *sge = kmalloc(sizeof(*sge), GFP_KERNEL);
if (!sge)
return NULL;
memset(sge, 0, sizeof(*sge));
sge->adapter = adapter;
sge->netdev = adapter->port[0].dev;
sge->rx_pkt_pad = t1_is_T1B(adapter) ? 0 : 2;
sge->jumbo_fl = t1_is_T1B(adapter) ? 1 : 0;
init_timer(&sge->tx_reclaim_timer);
sge->tx_reclaim_timer.data = (unsigned long)sge;
sge->tx_reclaim_timer.function = sge_tx_reclaim_cb;
if (is_T2(sge->adapter)) {
init_timer(&sge->espibug_timer);
sge->espibug_timer.function = (void *)&espibug_workaround;
sge->espibug_timer.data = (unsigned long)sge->adapter;
sge->espibug_timeout = 1;
}
p->cmdQ_size[0] = SGE_CMDQ0_E_N;
p->cmdQ_size[1] = SGE_CMDQ1_E_N;
p->freelQ_size[!sge->jumbo_fl] = SGE_FREEL_SIZE;
p->freelQ_size[sge->jumbo_fl] = SGE_JUMBO_FREEL_SIZE;
p->rx_coalesce_usecs = 50;
p->coalesce_enable = 0;
p->sample_interval_usecs = 0;
p->polling = 0;
return sge;
}
 /*****************************************************************************
 *                                                                           *
 * File: sge.h                                                               *
- * $Revision: 1.7 $                                                          *
- * $Date: 2005/03/23 07:15:59 $                                              *
+ * $Revision: 1.11 $                                                         *
+ * $Date: 2005/06/21 22:10:55 $                                              *
 * Description:                                                              *
 *  part of the Chelsio 10Gb Ethernet Driver.                                *
 *                                                                           *
@@ -36,25 +36,50 @@
 *                                                                           *
 ****************************************************************************/

-#ifndef _CHELSIO_LINUX_SGE_H_
-#define _CHELSIO_LINUX_SGE_H_
+#ifndef _CXGB_SGE_H_
+#define _CXGB_SGE_H_

 #include <linux/types.h>
 #include <linux/interrupt.h>
 #include <asm/byteorder.h>

+#ifndef IRQ_RETVAL
+#define IRQ_RETVAL(x)
+typedef void irqreturn_t;
+#endif
+
+typedef irqreturn_t (*intr_handler_t)(int, void *, struct pt_regs *);
 struct sge_intr_counts {
 	unsigned int respQ_empty;      /* # times respQ empty */
 	unsigned int respQ_overflow;   /* # respQ overflow (fatal) */
 	unsigned int freelistQ_empty;  /* # times freelist empty */
 	unsigned int pkt_too_big;      /* packet too large (fatal) */
 	unsigned int pkt_mismatch;
-	unsigned int cmdQ_full[2];     /* not HW interrupt, host cmdQ[] full */
+	unsigned int cmdQ_full[3];     /* not HW IRQ, host cmdQ[] full */
+	unsigned int cmdQ_restarted[3];/* # of times cmdQ X was restarted */
+	unsigned int ethernet_pkts;    /* # of Ethernet packets received */
+	unsigned int offload_pkts;     /* # of offload packets received */
+	unsigned int offload_bundles;  /* # of offload pkt bundles delivered */
+	unsigned int pure_rsps;        /* # of non-payload responses */
+	unsigned int unhandled_irqs;   /* # of unhandled interrupts */
+	unsigned int tx_ipfrags;
+	unsigned int tx_reg_pkts;
+	unsigned int tx_lso_pkts;
+	unsigned int tx_do_cksum;
+};
+
+struct sge_port_stats {
+	unsigned long rx_cso_good;     /* # of successful RX csum offloads */
+	unsigned long tx_cso;          /* # of TX checksum offloads */
+	unsigned long vlan_xtract;     /* # of VLAN tag extractions */
+	unsigned long vlan_insert;     /* # of VLAN tag insertions */
+	unsigned long tso;             /* # of TSO requests */
+	unsigned long rx_drops;        /* # of packets dropped due to no mem */
 };
 struct sk_buff;
 struct net_device;
-struct cxgbdev;
 struct adapter;
 struct sge_params;
 struct sge;
@@ -63,7 +88,9 @@ struct sge *t1_sge_create(struct adapter *, struct sge_params *);
 int t1_sge_configure(struct sge *, struct sge_params *);
 int t1_sge_set_coalesce_params(struct sge *, struct sge_params *);
 void t1_sge_destroy(struct sge *);
-irqreturn_t t1_interrupt(int, void *, struct pt_regs *);
+intr_handler_t t1_select_intr_handler(adapter_t *adapter);
+unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
+		       unsigned int qid, struct net_device *netdev);
 int t1_start_xmit(struct sk_buff *skb, struct net_device *dev);
 void t1_set_vlan_accel(struct adapter *adapter, int on_off);
 void t1_sge_start(struct sge *);
@@ -72,8 +99,7 @@ int t1_sge_intr_error_handler(struct sge *);
 void t1_sge_intr_enable(struct sge *);
 void t1_sge_intr_disable(struct sge *);
 void t1_sge_intr_clear(struct sge *);
+const struct sge_intr_counts *t1_sge_get_intr_counts(struct sge *sge);
+const struct sge_port_stats *t1_sge_get_port_stats(struct sge *sge, int port);

-void t1_sge_set_ptimeout(adapter_t *adapter, u32 val);
-u32 t1_sge_get_ptimeout(adapter_t *adapter);
-
-#endif /* _CHELSIO_LINUX_SGE_H_ */
+#endif /* _CXGB_SGE_H_ */
 /*****************************************************************************
 *                                                                           *
 * File: subr.c                                                              *
- * $Revision: 1.12 $                                                         *
- * $Date: 2005/03/23 07:41:27 $                                              *
+ * $Revision: 1.27 $                                                         *
+ * $Date: 2005/06/22 01:08:36 $                                              *
 * Description:                                                              *
 *  Various subroutines (intr,pio,etc.) used by Chelsio 10G Ethernet driver. *
 *  part of the Chelsio 10Gb Ethernet Driver.                                *
@@ -40,11 +40,9 @@
 #include "common.h"
 #include "elmer0.h"
 #include "regs.h"
 #include "gmac.h"
 #include "cphy.h"
 #include "sge.h"
-#include "tp.h"
 #include "espi.h"

 /**
@@ -64,7 +62,7 @@ static int t1_wait_op_done(adapter_t *adapter, int reg, u32 mask, int polarity,
 			   int attempts, int delay)
 {
 	while (1) {
-		u32 val = t1_read_reg_4(adapter, reg) & mask;
+		u32 val = readl(adapter->regs + reg) & mask;

 		if (!!val == polarity)
 			return 0;
@@ -84,9 +82,9 @@ static int __t1_tpi_write(adapter_t *adapter, u32 addr, u32 value)
 {
 	int tpi_busy;

-	t1_write_reg_4(adapter, A_TPI_ADDR, addr);
-	t1_write_reg_4(adapter, A_TPI_WR_DATA, value);
-	t1_write_reg_4(adapter, A_TPI_CSR, F_TPIWR);
+	writel(addr, adapter->regs + A_TPI_ADDR);
+	writel(value, adapter->regs + A_TPI_WR_DATA);
+	writel(F_TPIWR, adapter->regs + A_TPI_CSR);

 	tpi_busy = t1_wait_op_done(adapter, A_TPI_CSR, F_TPIRDY, 1,
 				   TPI_ATTEMPTS, 3);
@@ -100,9 +98,9 @@ int t1_tpi_write(adapter_t *adapter, u32 addr, u32 value)
 {
 	int ret;

-	TPI_LOCK(adapter);
+	spin_lock(&(adapter)->tpi_lock);
 	ret = __t1_tpi_write(adapter, addr, value);
-	TPI_UNLOCK(adapter);
+	spin_unlock(&(adapter)->tpi_lock);
 	return ret;
 }

@@ -113,8 +111,8 @@ static int __t1_tpi_read(adapter_t *adapter, u32 addr, u32 *valp)
 {
 	int tpi_busy;

-	t1_write_reg_4(adapter, A_TPI_ADDR, addr);
-	t1_write_reg_4(adapter, A_TPI_CSR, 0);
+	writel(addr, adapter->regs + A_TPI_ADDR);
+	writel(0, adapter->regs + A_TPI_CSR);

 	tpi_busy = t1_wait_op_done(adapter, A_TPI_CSR, F_TPIRDY, 1,
 				   TPI_ATTEMPTS, 3);
@@ -122,7 +120,7 @@ static int __t1_tpi_read(adapter_t *adapter, u32 addr, u32 *valp)
 		CH_ALERT("%s: TPI read from 0x%x failed\n",
 			 adapter->name, addr);
 	else
-		*valp = t1_read_reg_4(adapter, A_TPI_RD_DATA);
+		*valp = readl(adapter->regs + A_TPI_RD_DATA);

 	return tpi_busy;
 }

@@ -130,20 +128,12 @@ int t1_tpi_read(adapter_t *adapter, u32 addr, u32 *valp)
 {
 	int ret;

-	TPI_LOCK(adapter);
+	spin_lock(&(adapter)->tpi_lock);
 	ret = __t1_tpi_read(adapter, addr, valp);
-	TPI_UNLOCK(adapter);
+	spin_unlock(&(adapter)->tpi_lock);
 	return ret;
 }
-/*
- * Set a TPI parameter.
- */
-static void t1_tpi_par(adapter_t *adapter, u32 value)
-{
-	t1_write_reg_4(adapter, A_TPI_PAR, V_TPIPAR(value));
-}
-
 /*
  * Called when a port's link settings change to propagate the new values to the
  * associated PHY and MAC. After performing the common tasks it invokes an
@@ -227,7 +217,7 @@ static int mi1_mdio_ext_read(adapter_t *adapter, int phy_addr, int mmd_addr,
 {
 	u32 addr = V_MI1_REG_ADDR(mmd_addr) | V_MI1_PHY_ADDR(phy_addr);

-	TPI_LOCK(adapter);
+	spin_lock(&(adapter)->tpi_lock);

 	/* Write the address we want. */
 	__t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_ADDR, addr);
@@ -242,7 +232,7 @@ static int mi1_mdio_ext_read(adapter_t *adapter, int phy_addr, int mmd_addr,

 	/* Read the data. */
 	__t1_tpi_read(adapter, A_ELMER0_PORT0_MI1_DATA, valp);

-	TPI_UNLOCK(adapter);
+	spin_unlock(&(adapter)->tpi_lock);
 	return 0;
 }
@@ -251,7 +241,7 @@ static int mi1_mdio_ext_write(adapter_t *adapter, int phy_addr, int mmd_addr,
 {
 	u32 addr = V_MI1_REG_ADDR(mmd_addr) | V_MI1_PHY_ADDR(phy_addr);

-	TPI_LOCK(adapter);
+	spin_lock(&(adapter)->tpi_lock);

 	/* Write the address we want. */
 	__t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_ADDR, addr);
@@ -264,7 +254,7 @@ static int mi1_mdio_ext_write(adapter_t *adapter, int phy_addr, int mmd_addr,
 	__t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_DATA, val);
 	__t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_OP, MI1_OP_INDIRECT_WRITE);
 	mi1_wait_until_ready(adapter, A_ELMER0_PORT0_MI1_OP);

-	TPI_UNLOCK(adapter);
+	spin_unlock(&(adapter)->tpi_lock);
 	return 0;
 }
@@ -277,7 +267,6 @@ static struct mdio_ops mi1_mdio_ext_ops = {

 enum {
 	CH_BRD_N110_1F,
 	CH_BRD_N210_1F,
-	CH_BRD_T210_1F,
 };

 static struct board_info t1_board[] = {
@@ -308,13 +297,15 @@ struct pci_device_id t1_pci_tbl[] = {
 	{ 0, }
 };

+MODULE_DEVICE_TABLE(pci, t1_pci_tbl);
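The new MODULE_DEVICE_TABLE(pci, t1_pci_tbl) line exports the PCI ID table into the module image so that hotplug tooling can map a discovered Chelsio device to this driver and autoload it; without it, the module only binds when loaded explicitly.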
 /*
  * Return the board_info structure with a given index. Out-of-range indices
  * return NULL.
  */
 const struct board_info *t1_get_board_info(unsigned int board_id)
 {
-	return board_id < DIMOF(t1_board) ? &t1_board[board_id] : NULL;
+	return board_id < ARRAY_SIZE(t1_board) ? &t1_board[board_id] : NULL;
 }
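DIMOF was a driver-private element-count macro; ARRAY_SIZE is the standard kernel equivalent from include/linux/kernel.h, defined in kernels of this era as:

	#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))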
 struct chelsio_vpd_t {
@@ -436,7 +427,6 @@ int elmer0_ext_intr_handler(adapter_t *adapter)

 	t1_tpi_read(adapter, A_ELMER0_INT_CAUSE, &cause);
 	switch (board_info(adapter)->board) {
-	case CHBT_BOARD_CHT210:
 	case CHBT_BOARD_N210:
 	case CHBT_BOARD_N110:
 		if (cause & ELMER0_GP_BIT6) { /* Marvell 88x2010 interrupt */
@@ -446,23 +436,6 @@ int elmer0_ext_intr_handler(adapter_t *adapter)
 			link_changed(adapter, 0);
 		}
 		break;
-	case CHBT_BOARD_8000:
-	case CHBT_BOARD_CHT110:
-		CH_DBG(adapter, INTR, "External interrupt cause 0x%x\n",
-		       cause);
-		if (cause & ELMER0_GP_BIT1) { /* PMC3393 INTB */
-			struct cmac *mac = adapter->port[0].mac;
-
-			mac->ops->interrupt_handler(mac);
-		}
-		if (cause & ELMER0_GP_BIT5) { /* XPAK MOD_DETECT */
-			u32 mod_detect;
-
-			t1_tpi_read(adapter, A_ELMER0_GPI_STAT, &mod_detect);
-			CH_MSG(adapter, INFO, LINK, "XPAK %s\n",
-			       mod_detect ? "removed" : "inserted");
-		}
-		break;
 	}
 	t1_tpi_write(adapter, A_ELMER0_INT_CAUSE, cause);
 	return 0;
@@ -472,11 +445,11 @@ int elmer0_ext_intr_handler(adapter_t *adapter)
 void t1_interrupts_enable(adapter_t *adapter)
 {
 	unsigned int i;
+	u32 pl_intr;

-	adapter->slow_intr_mask = F_PL_INTR_SGE_ERR | F_PL_INTR_TP;
+	adapter->slow_intr_mask = F_PL_INTR_SGE_ERR;

 	t1_sge_intr_enable(adapter->sge);
-	t1_tp_intr_enable(adapter->tp);
 	if (adapter->espi) {
 		adapter->slow_intr_mask |= F_PL_INTR_ESPI;
 		t1_espi_intr_enable(adapter->espi);
@@ -489,8 +462,7 @@ void t1_interrupts_enable(adapter_t *adapter)
 	}

 	/* Enable PCIX & external chip interrupts on ASIC boards. */
-	if (t1_is_asic(adapter)) {
-		u32 pl_intr = t1_read_reg_4(adapter, A_PL_ENABLE);
+	pl_intr = readl(adapter->regs + A_PL_ENABLE);

 	/* PCI-X interrupts */
 	pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_ENABLE,
@@ -498,8 +470,7 @@ void t1_interrupts_enable(adapter_t *adapter)
 	adapter->slow_intr_mask |= F_PL_INTR_EXT | F_PL_INTR_PCIX;
 	pl_intr |= F_PL_INTR_EXT | F_PL_INTR_PCIX;
-		t1_write_reg_4(adapter, A_PL_ENABLE, pl_intr);
-	}
+	writel(pl_intr, adapter->regs + A_PL_ENABLE);
 }
 /* Disables all interrupts. */
@@ -508,7 +479,6 @@ void t1_interrupts_disable(adapter_t* adapter)
 	unsigned int i;

 	t1_sge_intr_disable(adapter->sge);
-	t1_tp_intr_disable(adapter->tp);
 	if (adapter->espi)
 		t1_espi_intr_disable(adapter->espi);
@@ -519,8 +489,7 @@ void t1_interrupts_disable(adapter_t* adapter)
 	}

 	/* Disable PCIX & external chip interrupts. */
-	if (t1_is_asic(adapter))
-		t1_write_reg_4(adapter, A_PL_ENABLE, 0);
+	writel(0, adapter->regs + A_PL_ENABLE);

 	/* PCI-X interrupts */
 	pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_ENABLE, 0);
@@ -532,9 +501,10 @@ void t1_interrupts_clear(adapter_t* adapter)
 {
 	unsigned int i;
+	u32 pl_intr;

 	t1_sge_intr_clear(adapter->sge);
-	t1_tp_intr_clear(adapter->tp);
 	if (adapter->espi)
 		t1_espi_intr_clear(adapter->espi);
@@ -545,12 +515,10 @@ void t1_interrupts_clear(adapter_t* adapter)
 	}

 	/* Enable interrupts for external devices. */
-	if (t1_is_asic(adapter)) {
-		u32 pl_intr = t1_read_reg_4(adapter, A_PL_CAUSE);
-		t1_write_reg_4(adapter, A_PL_CAUSE,
-			       pl_intr | F_PL_INTR_EXT | F_PL_INTR_PCIX);
-	}
+	pl_intr = readl(adapter->regs + A_PL_CAUSE);
+	writel(pl_intr | F_PL_INTR_EXT | F_PL_INTR_PCIX,
+	       adapter->regs + A_PL_CAUSE);

 	/* PCI-X interrupts */
 	pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_CAUSE, 0xffffffff);
@@ -559,17 +527,15 @@ void t1_interrupts_clear(adapter_t* adapter)

 /*
  * Slow path interrupt handler for ASICs.
  */
-static int asic_slow_intr(adapter_t *adapter)
+int t1_slow_intr_handler(adapter_t *adapter)
 {
-	u32 cause = t1_read_reg_4(adapter, A_PL_CAUSE);
+	u32 cause = readl(adapter->regs + A_PL_CAUSE);

 	cause &= adapter->slow_intr_mask;
 	if (!cause)
 		return 0;
 	if (cause & F_PL_INTR_SGE_ERR)
 		t1_sge_intr_error_handler(adapter->sge);
-	if (cause & F_PL_INTR_TP)
-		t1_tp_intr_handler(adapter->tp);
 	if (cause & F_PL_INTR_ESPI)
 		t1_espi_intr_handler(adapter->espi);
 	if (cause & F_PL_INTR_PCIX)
@@ -578,41 +544,82 @@ static int asic_slow_intr(adapter_t *adapter)
 		t1_elmer0_ext_intr(adapter);

 	/* Clear the interrupts just processed. */
-	t1_write_reg_4(adapter, A_PL_CAUSE, cause);
-	(void)t1_read_reg_4(adapter, A_PL_CAUSE); /* flush writes */
+	writel(cause, adapter->regs + A_PL_CAUSE);
+	(void)readl(adapter->regs + A_PL_CAUSE); /* flush writes */

 	return 1;
 }
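A note on the final pair of calls: PCI writes are posted, so the dummy readl after the writel forces the clear of A_PL_CAUSE to reach the device before the handler returns; without the read-back, a still-pending cause bit could immediately retrigger the interrupt.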
-int t1_slow_intr_handler(adapter_t *adapter)
-{
-	return asic_slow_intr(adapter);
-}
-
-/* Power sequencing is a work-around for Intel's XPAKs. */
-static void power_sequence_xpak(adapter_t* adapter)
-{
-	u32 mod_detect;
-	u32 gpo;
-
-	/* Check for XPAK */
-	t1_tpi_read(adapter, A_ELMER0_GPI_STAT, &mod_detect);
-	if (!(ELMER0_GP_BIT5 & mod_detect)) {
-		/* XPAK is present */
-		t1_tpi_read(adapter, A_ELMER0_GPO, &gpo);
-		gpo |= ELMER0_GP_BIT18;
-		t1_tpi_write(adapter, A_ELMER0_GPO, gpo);
-	}
-}
+/* Pause deadlock avoidance parameters */
+#define DROP_MSEC 16
+#define DROP_PKTS_CNT 1
+
+static void set_csum_offload(adapter_t *adapter, u32 csum_bit, int enable)
+{
+	u32 val = readl(adapter->regs + A_TP_GLOBAL_CONFIG);
+
+	if (enable)
+		val |= csum_bit;
+	else
+		val &= ~csum_bit;
+	writel(val, adapter->regs + A_TP_GLOBAL_CONFIG);
+}
+
+void t1_tp_set_ip_checksum_offload(adapter_t *adapter, int enable)
+{
+	set_csum_offload(adapter, F_IP_CSUM, enable);
+}
+
+void t1_tp_set_udp_checksum_offload(adapter_t *adapter, int enable)
+{
+	set_csum_offload(adapter, F_UDP_CSUM, enable);
+}
+
+void t1_tp_set_tcp_checksum_offload(adapter_t *adapter, int enable)
+{
+	set_csum_offload(adapter, F_TCP_CSUM, enable);
+}
+
+static void t1_tp_reset(adapter_t *adapter, unsigned int tp_clk)
+{
+	u32 val;
+
+	val = F_TP_IN_CSPI_CPL | F_TP_IN_CSPI_CHECK_IP_CSUM |
+	      F_TP_IN_CSPI_CHECK_TCP_CSUM | F_TP_IN_ESPI_ETHERNET;
+	val |= F_TP_IN_ESPI_CHECK_IP_CSUM |
+	       F_TP_IN_ESPI_CHECK_TCP_CSUM;
+	writel(val, adapter->regs + A_TP_IN_CONFIG);
+	writel(F_TP_OUT_CSPI_CPL |
+	       F_TP_OUT_ESPI_ETHERNET |
+	       F_TP_OUT_ESPI_GENERATE_IP_CSUM |
+	       F_TP_OUT_ESPI_GENERATE_TCP_CSUM,
+	       adapter->regs + A_TP_OUT_CONFIG);
+
+	val = readl(adapter->regs + A_TP_GLOBAL_CONFIG);
+	val &= ~(F_IP_CSUM | F_UDP_CSUM | F_TCP_CSUM);
+	writel(val, adapter->regs + A_TP_GLOBAL_CONFIG);
+
+	/*
+	 * Enable pause frame deadlock prevention.
+	 */
+	if (is_T2(adapter)) {
+		u32 drop_ticks = DROP_MSEC * (tp_clk / 1000);
+
+		writel(F_ENABLE_TX_DROP | F_ENABLE_TX_ERROR |
+		       V_DROP_TICKS_CNT(drop_ticks) |
+		       V_NUM_PKTS_DROPPED(DROP_PKTS_CNT),
+		       adapter->regs + A_TP_TX_DROP_CONFIG);
+	}
+
+	writel(F_TP_RESET, adapter->regs + A_TP_RESET);
+}
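To make the pause-deadlock numbers concrete: tp_clk is the TP core clock in Hz, so tp_clk / 1000 is ticks per millisecond. Assuming, purely for illustration, a 125 MHz core clock:

	drop_ticks = DROP_MSEC * (tp_clk / 1000)
	           = 16 * (125000000 / 1000)
	           = 2000000 ticks, i.e. a 16 ms window

with V_NUM_PKTS_DROPPED(DROP_PKTS_CNT) apparently bounding the drop at one packet per window, so a T2 stalled by incoming pause frames sheds just enough traffic to break the deadlock.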
 int __devinit t1_get_board_rev(adapter_t *adapter, const struct board_info *bi,
 			       struct adapter_params *p)
 {
 	p->chip_version = bi->chip_term;
-	p->is_asic = (p->chip_version != CHBT_TERM_FPGA);
 	if (p->chip_version == CHBT_TERM_T1 ||
-	    p->chip_version == CHBT_TERM_T2 ||
-	    p->chip_version == CHBT_TERM_FPGA) {
-		u32 val = t1_read_reg_4(adapter, A_TP_PC_CONFIG);
+	    p->chip_version == CHBT_TERM_T2) {
+		u32 val = readl(adapter->regs + A_TP_PC_CONFIG);

 		val = G_TP_PC_REV(val);
 		if (val == 2)
@@ -633,23 +640,11 @@ int __devinit t1_get_board_rev(adapter_t *adapter, const struct board_info *bi,
 static int board_init(adapter_t *adapter, const struct board_info *bi)
 {
 	switch (bi->board) {
-	case CHBT_BOARD_8000:
 	case CHBT_BOARD_N110:
 	case CHBT_BOARD_N210:
-	case CHBT_BOARD_CHT210:
-	case CHBT_BOARD_COUGAR:
-		t1_tpi_par(adapter, 0xf);
+		writel(V_TPIPAR(0xf), adapter->regs + A_TPI_PAR);
 		t1_tpi_write(adapter, A_ELMER0_GPO, 0x800);
 		break;
-	case CHBT_BOARD_CHT110:
-		t1_tpi_par(adapter, 0xf);
-		t1_tpi_write(adapter, A_ELMER0_GPO, 0x1800);
-
-		/* TBD XXX Might not need. This fixes a problem
-		 * described in the Intel SR XPAK errata.
-		 */
-		power_sequence_xpak(adapter);
-		break;
 	}
 	return 0;
 }
@@ -663,20 +658,19 @@ int t1_init_hw_modules(adapter_t *adapter)
 	int err = -EIO;
 	const struct board_info *bi = board_info(adapter);

-	if (!adapter->mc4) {
-		u32 val = t1_read_reg_4(adapter, A_MC4_CFG);
+	if (!bi->clock_mc4) {
+		u32 val = readl(adapter->regs + A_MC4_CFG);

-		t1_write_reg_4(adapter, A_MC4_CFG, val | F_READY | F_MC4_SLOW);
-		t1_write_reg_4(adapter, A_MC5_CONFIG,
-			       F_M_BUS_ENABLE | F_TCAM_RESET);
+		writel(val | F_READY | F_MC4_SLOW, adapter->regs + A_MC4_CFG);
+		writel(F_M_BUS_ENABLE | F_TCAM_RESET,
+		       adapter->regs + A_MC5_CONFIG);
 	}

 	if (adapter->espi && t1_espi_init(adapter->espi, bi->chip_mac,
 					  bi->espi_nports))
 		goto out_err;

-	if (t1_tp_reset(adapter->tp, &adapter->params.tp, bi->clock_core))
-		goto out_err;
+	t1_tp_reset(adapter, bi->clock_core);

 	err = t1_sge_configure(adapter->sge, &adapter->params.sge);
 	if (err)
@@ -690,7 +684,7 @@ int t1_init_hw_modules(adapter_t *adapter)

 /*
  * Determine a card's PCI mode.
  */
-static void __devinit get_pci_mode(adapter_t *adapter, struct pci_params *p)
+static void __devinit get_pci_mode(adapter_t *adapter, struct chelsio_pci_params *p)
 {
 	static unsigned short speed_map[] = { 33, 66, 100, 133 };
 	u32 pci_mode;
@@ -720,8 +714,6 @@ void t1_free_sw_modules(adapter_t *adapter)

 	if (adapter->sge)
 		t1_sge_destroy(adapter->sge);
-	if (adapter->tp)
-		t1_tp_destroy(adapter->tp);
 	if (adapter->espi)
 		t1_espi_destroy(adapter->espi);
 }
@@ -764,21 +756,12 @@ int __devinit t1_init_sw_modules(adapter_t *adapter,
 		goto error;
 	}

 	if (bi->espi_nports && !(adapter->espi = t1_espi_create(adapter))) {
 		CH_ERR("%s: ESPI initialization failed\n",
 		       adapter->name);
 		goto error;
 	}

-	adapter->tp = t1_tp_create(adapter, &adapter->params.tp);
-	if (!adapter->tp) {
-		CH_ERR("%s: TP initialization failed\n",
-		       adapter->name);
-		goto error;
-	}
-
 	board_init(adapter, bi);
 	bi->mdio_ops->init(adapter, bi);
 	if (bi->gphy->reset)
@@ -810,14 +793,12 @@ int __devinit t1_init_sw_modules(adapter_t *adapter,
 	 * Get the port's MAC addresses either from the EEPROM if one
 	 * exists or the one hardcoded in the MAC.
 	 */
-	if (!t1_is_asic(adapter) || bi->chip_mac == CHBT_MAC_DUMMY)
-		mac->ops->macaddress_get(mac, hw_addr);
-	else if (vpd_macaddress_get(adapter, i, hw_addr)) {
+	if (vpd_macaddress_get(adapter, i, hw_addr)) {
 		CH_ERR("%s: could not read MAC address from VPD ROM\n",
-		       port_name(adapter, i));
+		       adapter->port[i].dev->name);
 		goto error;
 	}
-	t1_set_hw_addr(adapter, i, hw_addr);
+	memcpy(adapter->port[i].dev->dev_addr, hw_addr, ETH_ALEN);
 	init_link_config(&adapter->port[i].link_config, bi);
 }
...
/*****************************************************************************
 * *
 * File: suni1x10gexp_regs.h *
-* $Revision: 1.4 $ *
-* $Date: 2005/03/23 07:15:59 $ *
+* $Revision: 1.9 $ *
+* $Date: 2005/06/22 00:17:04 $ *
 * Description: *
 * PMC/SIERRA (pm3393) MAC-PHY functionality. *
 * part of the Chelsio 10Gb Ethernet Driver. *
@@ -21,24 +21,16 @@
 * *
 * http://www.chelsio.com *
 * *
-* Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *
-* All rights reserved. *
-* *
 * Maintainers: maintainers@chelsio.com *
 * *
-* Authors: Dimitrios Michailidis <dm@chelsio.com> *
-*          Tina Yang <tainay@chelsio.com> *
-*          Felix Marti <felix@chelsio.com> *
-*          Scott Bardone <sbardone@chelsio.com> *
-*          Kurt Ottaway <kottaway@chelsio.com> *
-*          Frank DiMambro <frank@chelsio.com> *
+* Authors: PMC/SIERRA *
 * *
 * History: *
 * *
 ****************************************************************************/

-#ifndef _SUNI1x10GEXP_REGS_H
-#define _SUNI1x10GEXP_REGS_H
+#ifndef _CXGB_SUNI1x10GEXP_REGS_H_
+#define _CXGB_SUNI1x10GEXP_REGS_H_

 /******************************************************************************/
 /** S/UNI-1x10GE-XP REGISTER ADDRESS MAP **/
@@ -217,5 +209,5 @@
 #define SUNI1x10GEXP_BITMSK_TXXG_FCRX 0x0004
 #define SUNI1x10GEXP_BITMSK_TXXG_PADEN 0x0002

-#endif /* _SUNI1x10GEXP_REGS_H */
+#endif /* _CXGB_SUNI1x10GEXP_REGS_H_ */
/*****************************************************************************
* *
* File: tp.c *
* $Revision: 1.6 $ *
* $Date: 2005/03/23 07:15:59 $ *
* Description: *
* Core ASIC Management. *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License, version 2, as *
* published by the Free Software Foundation. *
* *
* You should have received a copy of the GNU General Public License along *
* with this program; if not, write to the Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
* *
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *
* WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *
* *
* http://www.chelsio.com *
* *
* Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *
* All rights reserved. *
* *
* Maintainers: maintainers@chelsio.com *
* *
* Authors: Dimitrios Michailidis <dm@chelsio.com> *
* Tina Yang <tainay@chelsio.com> *
* Felix Marti <felix@chelsio.com> *
* Scott Bardone <sbardone@chelsio.com> *
* Kurt Ottaway <kottaway@chelsio.com> *
* Frank DiMambro <frank@chelsio.com> *
* *
* History: *
* *
****************************************************************************/
#include "common.h"
#include "regs.h"
#include "tp.h"
struct petp {
adapter_t *adapter;
};
/* Pause deadlock avoidance parameters */
#define DROP_MSEC 16
#define DROP_PKTS_CNT 1
static void tp_init(adapter_t *ap, const struct tp_params *p,
unsigned int tp_clk)
{
if (t1_is_asic(ap)) {
u32 val;
val = F_TP_IN_CSPI_CPL | F_TP_IN_CSPI_CHECK_IP_CSUM |
F_TP_IN_CSPI_CHECK_TCP_CSUM | F_TP_IN_ESPI_ETHERNET;
if (!p->pm_size)
val |= F_OFFLOAD_DISABLE;
else
val |= F_TP_IN_ESPI_CHECK_IP_CSUM |
F_TP_IN_ESPI_CHECK_TCP_CSUM;
t1_write_reg_4(ap, A_TP_IN_CONFIG, val);
t1_write_reg_4(ap, A_TP_OUT_CONFIG, F_TP_OUT_CSPI_CPL |
F_TP_OUT_ESPI_ETHERNET |
F_TP_OUT_ESPI_GENERATE_IP_CSUM |
F_TP_OUT_ESPI_GENERATE_TCP_CSUM);
t1_write_reg_4(ap, A_TP_GLOBAL_CONFIG, V_IP_TTL(64) |
F_PATH_MTU /* IP DF bit */ |
V_5TUPLE_LOOKUP(p->use_5tuple_mode) |
V_SYN_COOKIE_PARAMETER(29));
/*
* Enable pause frame deadlock prevention.
*/
if (is_T2(ap)) {
u32 drop_ticks = DROP_MSEC * (tp_clk / 1000);
t1_write_reg_4(ap, A_TP_TX_DROP_CONFIG,
F_ENABLE_TX_DROP | F_ENABLE_TX_ERROR |
V_DROP_TICKS_CNT(drop_ticks) |
V_NUM_PKTS_DROPPED(DROP_PKTS_CNT));
}
}
}
void t1_tp_destroy(struct petp *tp)
{
kfree(tp);
}
struct petp * __devinit t1_tp_create(adapter_t *adapter, struct tp_params *p)
{
struct petp *tp = kmalloc(sizeof(*tp), GFP_KERNEL);
if (!tp)
return NULL;
memset(tp, 0, sizeof(*tp));
tp->adapter = adapter;
return tp;
}
void t1_tp_intr_enable(struct petp *tp)
{
u32 tp_intr = t1_read_reg_4(tp->adapter, A_PL_ENABLE);
{
/* We don't use any TP interrupts */
t1_write_reg_4(tp->adapter, A_TP_INT_ENABLE, 0);
t1_write_reg_4(tp->adapter, A_PL_ENABLE,
tp_intr | F_PL_INTR_TP);
}
}
void t1_tp_intr_disable(struct petp *tp)
{
u32 tp_intr = t1_read_reg_4(tp->adapter, A_PL_ENABLE);
{
t1_write_reg_4(tp->adapter, A_TP_INT_ENABLE, 0);
t1_write_reg_4(tp->adapter, A_PL_ENABLE,
tp_intr & ~F_PL_INTR_TP);
}
}
void t1_tp_intr_clear(struct petp *tp)
{
t1_write_reg_4(tp->adapter, A_TP_INT_CAUSE, 0xffffffff);
t1_write_reg_4(tp->adapter, A_PL_CAUSE, F_PL_INTR_TP);
}
int t1_tp_intr_handler(struct petp *tp)
{
u32 cause;
cause = t1_read_reg_4(tp->adapter, A_TP_INT_CAUSE);
t1_write_reg_4(tp->adapter, A_TP_INT_CAUSE, cause);
return 0;
}
static void set_csum_offload(struct petp *tp, u32 csum_bit, int enable)
{
u32 val = t1_read_reg_4(tp->adapter, A_TP_GLOBAL_CONFIG);
if (enable)
val |= csum_bit;
else
val &= ~csum_bit;
t1_write_reg_4(tp->adapter, A_TP_GLOBAL_CONFIG, val);
}
void t1_tp_set_ip_checksum_offload(struct petp *tp, int enable)
{
set_csum_offload(tp, F_IP_CSUM, enable);
}
void t1_tp_set_udp_checksum_offload(struct petp *tp, int enable)
{
set_csum_offload(tp, F_UDP_CSUM, enable);
}
void t1_tp_set_tcp_checksum_offload(struct petp *tp, int enable)
{
set_csum_offload(tp, F_TCP_CSUM, enable);
}
/*
* Initialize TP state. tp_params contains initial settings for some TP
* parameters, particularly the one-time PM and CM settings.
*/
int t1_tp_reset(struct petp *tp, struct tp_params *p, unsigned int tp_clk)
{
int busy = 0;
adapter_t *adapter = tp->adapter;
tp_init(adapter, p, tp_clk);
if (!busy)
t1_write_reg_4(adapter, A_TP_RESET, F_TP_RESET);
else
CH_ERR("%s: TP initialization timed out\n",
adapter->name);
return busy;
}
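(Reviewer's note on the removed t1_tp_reset above: busy is initialized to 0 and never assigned afterwards, so the "TP initialization timed out" branch was unreachable dead code; the replacement t1_tp_reset in subr.c returns void and drops the check entirely.)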
/*****************************************************************************
* *
* File: tp.h *
* $Revision: 1.3 $ *
* $Date: 2005/03/23 07:15:59 $ *
* Description: *
* part of the Chelsio 10Gb Ethernet Driver. *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License, version 2, as *
* published by the Free Software Foundation. *
* *
* You should have received a copy of the GNU General Public License along *
* with this program; if not, write to the Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
* *
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *
* WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *
* *
* http://www.chelsio.com *
* *
* Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *
* All rights reserved. *
* *
* Maintainers: maintainers@chelsio.com *
* *
* Authors: Dimitrios Michailidis <dm@chelsio.com> *
* Tina Yang <tainay@chelsio.com> *
* Felix Marti <felix@chelsio.com> *
* Scott Bardone <sbardone@chelsio.com> *
* Kurt Ottaway <kottaway@chelsio.com> *
* Frank DiMambro <frank@chelsio.com> *
* *
* History: *
* *
****************************************************************************/
#ifndef CHELSIO_TP_H
#define CHELSIO_TP_H
#include "common.h"
#define TP_MAX_RX_COALESCING_SIZE 16224U
struct tp_mib_statistics {
/* IP */
u32 ipInReceive_hi;
u32 ipInReceive_lo;
u32 ipInHdrErrors_hi;
u32 ipInHdrErrors_lo;
u32 ipInAddrErrors_hi;
u32 ipInAddrErrors_lo;
u32 ipInUnknownProtos_hi;
u32 ipInUnknownProtos_lo;
u32 ipInDiscards_hi;
u32 ipInDiscards_lo;
u32 ipInDelivers_hi;
u32 ipInDelivers_lo;
u32 ipOutRequests_hi;
u32 ipOutRequests_lo;
u32 ipOutDiscards_hi;
u32 ipOutDiscards_lo;
u32 ipOutNoRoutes_hi;
u32 ipOutNoRoutes_lo;
u32 ipReasmTimeout;
u32 ipReasmReqds;
u32 ipReasmOKs;
u32 ipReasmFails;
u32 reserved[8];
/* TCP */
u32 tcpActiveOpens;
u32 tcpPassiveOpens;
u32 tcpAttemptFails;
u32 tcpEstabResets;
u32 tcpOutRsts;
u32 tcpCurrEstab;
u32 tcpInSegs_hi;
u32 tcpInSegs_lo;
u32 tcpOutSegs_hi;
u32 tcpOutSegs_lo;
u32 tcpRetransSeg_hi;
u32 tcpRetransSeg_lo;
u32 tcpInErrs_hi;
u32 tcpInErrs_lo;
u32 tcpRtoMin;
u32 tcpRtoMax;
};
struct petp;
struct tp_params;
struct petp *t1_tp_create(adapter_t *adapter, struct tp_params *p);
void t1_tp_destroy(struct petp *tp);
void t1_tp_intr_disable(struct petp *tp);
void t1_tp_intr_enable(struct petp *tp);
void t1_tp_intr_clear(struct petp *tp);
int t1_tp_intr_handler(struct petp *tp);
void t1_tp_get_mib_statistics(adapter_t *adap, struct tp_mib_statistics *tps);
void t1_tp_set_udp_checksum_offload(struct petp *tp, int enable);
void t1_tp_set_tcp_checksum_offload(struct petp *tp, int enable);
void t1_tp_set_ip_checksum_offload(struct petp *tp, int enable);
int t1_tp_set_coalescing_size(struct petp *tp, unsigned int size);
int t1_tp_reset(struct petp *tp, struct tp_params *p, unsigned int tp_clk);
#endif
@@ -2120,6 +2120,7 @@
 #define PCI_DEVICE_ID_ENE_1225		0x1225
 #define PCI_DEVICE_ID_ENE_1410		0x1410
 #define PCI_DEVICE_ID_ENE_1420		0x1420
+#define PCI_VENDOR_ID_CHELSIO		0x1425
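With the vendor ID in pci_ids.h, ID tables can name it symbolically instead of hard-coding 0x1425. A sketch of a match entry; the device ID 0x0007 is purely illustrative, not taken from this patch:

	static struct pci_device_id example_tbl[] = {
		{ PCI_DEVICE(PCI_VENDOR_ID_CHELSIO, 0x0007) },	/* hypothetical device */
		{ 0, }
	};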
 #define PCI_VENDOR_ID_SYBA		0x1592
 #define PCI_DEVICE_ID_SYBA_2P_EPP	0x0782
...